\section{Introduction}
\label{s1}
Let $V(n)$ be the set of all positive divisors of a positive integer $n$ as defined in~(\ref{e01}).
For instance, $V(20) = \{1, 2, 4, 5, 10, 20\}$.
The partial order called the {\em divides} relation ($a$ divides $b$, denoted $a|b$) is applied to $V(n)$ and
yields two types of directed acyclic graphs (henceforth referred to simply as graphs) as shown in Figure~\ref{f01}.
\begin{figure}[htb]
\begin{center}
\begin{tabular}{cc}
\resizebox{!}{1.2in}{\includegraphics{f01b.eps}}
&
\resizebox{!}{1.2in}{\includegraphics{f01a.eps}}\\
(a) Transitive Closure $G^T(20)$ & (b) Hasse diagram $G^H(20)$
\end{tabular}
\end{center}
\caption{\label{f01} Two basic graphs derived from the divides relation.}
\end{figure}
The first graph is called the {\em transitive closure}, $G^T(n) = (V(n), E^T(n))$ where
\begin{equation}
V(n) = \{x \suchthat{ x \in Z^{+} \wedge x|n}\}
\label{e01}
\end{equation}
\begin{equation}
E^T(n) = \{(a,b) \suchthat{a,b \in V(n) \wedge a < b \wedge a|b}\}
\label{e02}
\end{equation}
Next, when all arcs in $G^T(n)$ with alternative transitive paths are excluded, the graph becomes a {\em Hasse diagram} denoted as $G^H(n) = (V(n), E^H(n))$ where $E^H(n)$ is defined in~(\ref{e03}).
\begin{equation}
E^H(n) = E^T(n) - \{(a,b) \in E^T(n) \suchthat{\exists c \in V(n) (a < c < b \wedge a|c \wedge c|b)}\}
\label{e03}
\end{equation}
Figures~\ref{f01} (a) and (b) show the {\em transitive closure} $G^T(20)$ and the {\em Hasse diagram} $G^H(20)$, respectively.
Note that $G^H(n) = G^T(n)$ if and only if $n$ is $1$ or a prime.
Numerous integer sequences arising from the divides relation have been discovered from the number theory point of view (see~\cite{oeis}).
In Section~\ref{s2}, this paper not only compiles
various existing integer sequences in~\cite{oeis}, but also discovers numerous integer sequences from the graph theory point of view, mainly from $G^H(n)$ and $G^T(n)$.
By the {\em Fundamental Theorem of Arithmetic}, every positive integer $n > 1$ can be represented by $\omega$ distinct prime numbers $p_1, p_2, \cdots, p_\omega$ and positive integers $m_1, m_2, \cdots, m_\omega$ as corresponding exponents such that $n = p_1^{m_1} p_2^{m_2} \cdots p_\omega^{m_\omega}$ where $p_1 < p_2 < \cdots < p_\omega$. Let $M(n)= (m_1, m_2, \cdots, m_\omega)$ be the sequence of the exponents.
In~\cite{HW1979}, Hardy and Wright used $\Omega(n)$ and $\omega(n)$ to denote the number of prime divisors of $n$ counted with multiplicity and the number of distinct prime factors of $n$, respectively.
For example, $20 = 2 \times 2 \times 5 = 2^2 \times 5^1$ has $\Omega(20) = 3$ and $\omega(20) = 2$.
Let $M'(n) = [m_1, m_2, \cdots, m_\omega]$ be the {\em multiset} known as the {\em prime signature} of $n$ where the order does not matter and repetitions are allowed.
For example, $M'(4500 = 2^2\times3^2\times5^3) = [2, 2, 3]$ has the same prime signature as $M'(33075 = 3^3\times5^2\times7^2) = [3,2,2]$.
The prime signature $M'(n)$ uniquely determines the structures of $G^H(n)$ and $G^T(n)$ and plays a central role in this work, as it partitions $G^H(n)$ and $G^T(n)$ into isomorphism classes and
is used to label the nodes of $G^H(n)$ and $G^T(n)$.
Any ordering of the prime signatures corresponds to an ordering of the isomorphism classes of $G^H(n)$ and $G^T(n)$ and consequently of their associated graph invariants, such as their order, size, and path counts.
Two kinds of orderings of prime signatures such as the {\em graded colexicographic} and
{\em canonical orderings} appear in the literature and the On-line Encyclopedia of Integer Sequences~\cite{oeis}.
Several integer sequences by prime signatures have been studied from the number theory point of view~\cite{AS1972,HW1979},
the earliest one of which dates from 1919~\cite{MacMahon1919}.
However, some sequences have interpretations different from the graph theory interpretations provided here.
Most importantly, over twenty new integer sequences of great interest are presented in Section~\ref{s3}.
\section{Graph Theoretic Properties and Invariants of the Divides Relation}
\label{s2}
In this section, fourteen graph invariants, such as order, size, and degree, are formally defined and investigated for the {\em Hasse diagram} and/or {\em transitive closure} graphs. Furthermore, various graph theoretic properties are also determined.
The first graph invariant of interest is the common {\em order} of $G^H(n)$ and $G^T(n)$, i.e., the number of nodes, $|V(n)|$. By definition, this is simply the number of divisors of $n$.
\begin{theorem}[Order of $G^H(n)$ and $G^T(n)$]
\label{Tmorder}
\begin{equation}
|V(n)| = |V(M(n))|= \prod_ {m_i \in M(n)}(m_i + 1)
\end{equation}
\end{theorem}
\begin{proof}
Each $p_i^{m_i}$ term contains $m_i + 1$ factors which can contribute to a divisor of $n$. Thus, the number of divisors of $n$ is $(m_1+1)\times(m_2+1)\times\cdots\times(m_\omega+1)$ by the {\em product rule of counting}.
\end{proof}
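To make Theorem~\ref{Tmorder} concrete, the following minimal Python sketch (an illustration of ours; the helper names are not from the original text) factors $n$ by trial division and multiplies the incremented exponents:
\begin{verbatim}
def factorization(n):
    # Return the list of (prime, exponent) pairs of n by trial division.
    f, p = [], 2
    while p * p <= n:
        if n % p == 0:
            m = 0
            while n % p == 0:
                n //= p
                m += 1
            f.append((p, m))
        p += 1
    if n > 1:
        f.append((n, 1))
    return f

def order(M):
    # |V(n)| is the product of (m_i + 1) over the exponent sequence M(n).
    result = 1
    for m in M:
        result *= m + 1
    return result

# Example: 20 = 2^2 * 5, so M(20) = (2, 1) and |V(20)| = 3 * 2 = 6.
assert order([m for _, m in factorization(20)]) == 6
\end{verbatim}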
This classic and important integer sequence of $|V(n)|$ in natural order is given in Table~\ref{t01} and listed as A000005 in~\cite{oeis}.
Table~\ref{t01} lists the integer sequences of all fourteen forthcoming graph invariants, with the OEIS number where the sequence is listed and a dash in the OEIS column where it is not.
\begin{table}[hbtp]\vspace*{-3ex}
\caption[]{divides relation graph invariants in natural order}
\label{t01}
\centering
{\footnotesize
\begin{tabular}{cp{3.6in}c} \hline
\multicolumn{1}{c}{Invariant} &
\multicolumn{1}{c}{Integer sequence for $n = 1,\cdots,50$} &
\multicolumn{1}{c}{OEIS} \\ \hline \hline
$|V(n)|$ & 1, 2, 2, 3, 2, 4, 2, 4, 3, 4, 2, 6, 2, 4, 4, 5, 2, 6, 2, 6, 4, 4, 2, 8, 3, 4, 4, 6, 2, 8, 2, 6, 4, 4, 4, 9, 2, 4, 4, 8, 2, 8, 2, 6, 6, 4, 2, 10, 3, $\cdots$ & A000005\\ \hline
$|E^H(n)|$ &0, 1, 1, 2, 1, 4, 1, 3, 2, 4, 1, 7, 1, 4, 4, 4, 1, 7, 1, 7, 4, 4, 1, 10, 2, 4, 3, 7, 1, 12, 1, 5, 4, 4, 4, 12, 1, 4, 4, 10, 1, 12, 1, 7, 7, 4, 1, 13, $\cdots$ & A062799\\ \hline
$\Omega(n)$ &0, 1, 1, 2, 1, 2, 1, 3, 2, 2, 1, 3, 1, 2, 2, 4, 1, 3, 1, 3, 2, 2, 1, 4, 2, 2, 3, 3, 1, 3, 1, 5, 2, 2, 2, 4, 1, 2, 2, 4, 1, 3, 1, 3, 3, 2, 1, 5, 2, 3, 2, $\cdots$ & A001222\\ \hline
$\omega(n)$ &0, 1, 1, 1, 1, 2, 1, 1, 1, 2, 1, 2, 1, 2, 2, 1, 1, 2, 1, 2, 2, 2, 1, 2, 1, 2, 1, 2, 1, 3, 1, 1, 2, 2, 2, 2, 1, 2, 2, 2, 1, 3, 1, 2, 2, 2, 1, 2, 1, 2, 2, 2, $\cdots$ & A001221 \\ \hline
$W_v(n)$ & 1, 1, 1, 1, 1, 2, 1, 1, 1, 2, 1, 2, 1, 2, 2, 1, 1, 2, 1, 2, 2, 2, 1, 2, 1, 2, 1, 2, 1, 3, 1, 1, 2, 2, 2, 3, 1, 2, 2, 2, 1, 3, 1, 2, 2, 2, 1, 2, 1, 2, $\cdots$ & A096825\\ \hline
$W_e(n)$ & 1, 1, 1, 1, 1, 2, 1, 1, 1, 2, 1, 3, 1, 2, 2, 1, 1, 3, 1, 3, 2, 2, 1, 3, 1, 2, 1, 3, 1, 6, 1, 1, 2, 2, 2, 4, 1, 2, 2, 3, 1, 6, 1, 3, 3, 2, 1, 3, 1, 3, $\cdots$ & -\\ \hline
$\Delta(n)$ &0, 1, 1, 2, 1, 2, 1, 2, 2, 2, 1, 3, 1, 2, 2, 2, 1, 3, 1, 3, 2, 2, 1, 3, 2, 2, 2, 3, 1, 3, 1, 2, 2, 2, 2, 4, 1, 2, 2, 3, 1, 3, 1, 3, 3, 2, 1, 3, 2, 3, $\cdots$ & -\\ \hline
$|P^H(n)|$ & 1, 1, 1, 1, 1, 2, 1, 1, 1, 2, 1, 3, 1, 2, 2, 1, 1, 3, 1, 3, 2, 2, 1, 4, 1, 2, 1, 3, 1, 6, 1, 1, 2, 2, 2, 6, 1, 2, 2, 4, 1, 6, 1, 3, 3, 2, 1, 5, 1, 3, 2, 3, $\cdots$ & A008480\\ \hline
$|V_E(n)|$ & 1, 1, 1, 2, 1, 2, 1, 2, 2, 2, 1, 3, 1, 2, 2, 3, 1, 3, 1, 3, 2, 2, 1, 4, 2, 2, 2, 3, 1, 4, 1, 3, 2, 2, 2, 5, 1, 2, 2, 4, 1, 4, 1, 3, 3, 2, 1, 5, 2, 3, $\cdots$ &A038548\\ \hline
$|V_O(n)|$ &0, 1, 1, 1, 1, 2, 1, 2, 1, 2, 1, 3, 1, 2, 2, 2, 1, 3, 1, 3, 2, 2, 1, 4, 1, 2, 2, 3, 1, 4, 1, 3, 2, 2, 2, 4, 1, 2, 2, 4, 1, 4, 1, 3, 3, 2, 1, 5, 1, 3, $\cdots$ & A056924\\ \hline
$|E_E(n)|$ & 0, 1, 1, 1, 1, 2, 1, 2, 1, 2, 1, 4, 1, 2, 2, 2, 1, 4, 1, 4, 2, 2, 1, 5, 1, 2, 2, 4, 1, 6, 1, 3, 2, 2, 2, 6, 1, 2, 2, 5, 1, 6, 1, 4, 4, 2, 1, 7, 1, 4, $\cdots$ & - \\ \hline
$|E_O(n)|$ & 0, 0, 0, 1, 0, 2, 0, 1, 1, 2, 0, 3, 0, 2, 2, 2, 0, 3, 0, 3, 2, 2, 0, 5, 1, 2, 1, 3, 0, 6, 0, 2, 2, 2, 2, 6, 0, 2, 2, 5, 0, 6, 0, 3, 3, 2, 0, 6, 1, 3, $\cdots$ & -\\ \hline
$|E^T(n)|$ & 0, 1, 1, 3, 1, 5, 1, 6, 3, 5, 1, 12, 1, 5, 5, 10, 1, 12, 1, 12, 5, 5, 1, 22, 3, 5, 6, 12, 1, 19, 1, 15, 5, 5, 5, 27, 1, 5, 5, 22, 1, 19, 1, 12, 12, 5, $\cdots$ & - \\ \hline
$|P^T(n)|$ & 1, 1, 1, 2, 1, 3, 1, 4, 2, 3, 1, 8, 1, 3, 3, 8, 1, 8, 1, 8, 3, 3, 1, 20, 2, 3, 4, 8, 1, 13, 1, 16, 3, 3, 3, 26, 1, 3, 3, 20, 1, 13, 1, 8, 8, 3, 1, 48, 2, $\cdots$ & A002033 \\ \hline
\end{tabular} }
\end{table}
The next eleven graph invariants of interest are for $G^H(n)$ exclusively.
The second graph invariant of interest is the {\em size} of $G^H(n)$ which is the cardinality of the arc set $|E^H(n)| = |E^H(M(n))|$.
A recursive algorithm to compute $|E^H(n)|$ is given in Algorithm~\ref{Ahsize}, which utilizes a fact about the size of the {\em Cartesian product} of two graphs.
\begin{algorithm}[Size of $G^H(n)$]
Let $M$ be a multiset, with $M = M'(n)$ initially, and let $m_i \in M$.
\label{Ahsize}
\begin{equation}
|E^H(M)|=
\left\{
\begin{array}{l l}
|E^H(M - \{m_i\})|\times(m_i+1) + m_i\times|V(M- \{m_i\})| & \textrm{if } |M| > 1 \\
m_1 & \textrm{if } |M| = 1
\end{array} \right.
\end{equation}
\end{algorithm}
\begin{theorem}[Algorithm~\ref{Ahsize} correctly computes $|E^H(n)|$]
\end{theorem}
\begin{proof}
In~\cite{Harary1972}, a theorem about the size of the {\em Cartesian product} of two graphs is given: the size of the Cartesian product of two graphs is the size of the first multiplied by the order of the second, plus the size of the second multiplied by the order of the first.
Using this theorem and the fact that $G^H(n)$ is isomorphic to the Cartesian product of paths,
it is clear inductively that the recursive Algorithm~\ref{Ahsize} correctly computes the size of $G^H(n)$.
\end{proof}
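For illustration, Algorithm~\ref{Ahsize} translates directly into Python; the sketch below (ours, reusing \texttt{order} and \texttt{factorization} from the previous snippet) removes one element of the multiset per recursive call:
\begin{verbatim}
def hasse_size(M):
    # |E^H(M)| via the recursion above: peel one exponent m_i off M.
    M = list(M)
    if len(M) == 1:
        return M[0]
    m = M.pop()                      # choose m_i and remove it from M
    return hasse_size(M) * (m + 1) + m * order(M)

# Examples: |E^H(20)| = |E^H([2, 1])| = 7
# and |E^H(540)| = |E^H([2, 3, 1])| = 46.
assert hasse_size([2, 1]) == 7
assert hasse_size([2, 3, 1]) == 46
\end{verbatim}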
The integer sequence of $|E^H(n)|$ is listed as A062799 with an alternative formula and described as the {\em inverse M\"{o}bius transform} of the number of distinct prime factors of $n$ in~\cite{oeis}.
For the purpose of illustrating the various concepts that are defined in what follows, $G^H(540)$ is shown in Figure~\ref{f02}.
Note that $540 = 2^2 \cdot 3^3 \cdot 5$ and that the nodes of $G^H(540)$ are labeled with the sequence of exponents with respect to the order of $M(n)$. Each node $v \in V(n)$ is expressed as a sequence $M_n(v) = (v_1,\cdots, v_{\omega(n)})$ where $0 \le v_i \le m_i$.
\begin{definition}[Node as a sequence] If $v \in V(n)$ and $n = p_1^{m_1}p_2^{m_2} \cdots p_\omega^{m_\omega}$, then
\begin{equation}
v = p_1^{v_1}p_2^{v_2} \cdots p_\omega^{v_\omega} \textrm{ and } M_n(v) = (v_1, v_2, \cdots, v_\omega)
\end{equation}
\end{definition}
To minimize clutter in Figure~\ref{f02} the sequences $(2,3,1), (2,3,0), \cdots, (0,0,0)$ are written \newline
2 3 1, 2 3 0, $\cdots$, 0 0 0.
\begin{figure}[htb]
\centering
\includegraphics[scale=.5]{f02.eps}
\caption{\label{f02} $G^H(540) = G^H(M(540)) = G^H((2,3,1))$.}
\end{figure}
Let $V_l(n)$ denote the set of nodes lying at level $l$ of the level decomposition of $G^H(n)$. For example in Figure~\ref{f02}, $V_5(540) = \{108, 180, 270\}$.
\begin{lemma}[The sum of the prime signature of a node equals its level]
\label{Lmlevel}
\begin{equation}
V_l(n) = \left\{v \in V(n) \suchthat{\sum_{v_i \in M_n(v)}v_i = l}\right\}
\end{equation}
\end{lemma}
\begin{proof}
If $v \in V_l(n)$, then $v = n/x$, where $x$ is the product of $\Omega(n) - l$ primes (multiplicities counted) contained in
$\{p_1, p_2, \cdots, p_\omega\}$. Thus, the nodes in $V_l(n)$ are precisely the nodes with signature sum $\sum_{v_i \in M_n(v)}v_i = l$.
\end{proof}
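The level decomposition is straightforward to enumerate; the following sketch (ours, reusing \texttt{factorization} from the earlier snippet) groups the divisors of $n$ by level:
\begin{verbatim}
from itertools import product

def levels(n):
    # Group the divisors of n by level l = sum of the exponents.
    f = factorization(n)
    out = {}
    for exps in product(*[range(m + 1) for _, m in f]):
        v = 1
        for (p, _), e in zip(f, exps):
            v *= p ** e
        out.setdefault(sum(exps), []).append(v)
    return out

# Example: V_5(540) = {108, 180, 270}.
assert sorted(levels(540)[5]) == [108, 180, 270]
\end{verbatim}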
\begin{observation} Nodes are partitioned by their level.
\begin{equation}
V_{l_1}(n) \cap V_{l_2}(n) = \emptyset \textrm{ if } l_1 \not= l_2 \wedge l_1, l_2 \in \{0,..,\Omega(n)\}
\end{equation}
\begin{equation}
V(n) = \bigcup_{l \in \{0,..,\Omega(n)\}} V_l(n)
\end{equation}
\begin{equation}
|V(n)| = \sum_{l \in \{0,..,\Omega(n)\}}|V_l(n)|
\end{equation}
\end{observation}
Let $P(x,y)$ be the set of paths from node $x$ to node $y$ in a directed acyclic graph where each path is a sequence of arcs from $x$ to $y$. For example in $G^H(20)$ as shown in Figure~\ref{f01} (b),
$P(1,20) = \{\langle(1,2),(2,4),(4,20)\rangle, \langle(1,2),(2,10),(10,20)\rangle, \langle(1,5),(5,10),(10,20)\rangle\}$.
Let $sp(x,y)$ and $lp(x,y)$ be the lengths of the shortest path and longest path from $x$ to $y$.
Let $G(n)$ be a directed acyclic graph with a single source node, $1$ and a single sink node, $n$.
Let $sp(G(n))$ and $lp(G(n))$ be the lengths of the shortest path and longest path from $1$ to $n$, respectively.
For simplicity's sake, we shall write $P^H(n)$ and $P^T(n)$ for $P(1,n)$ in $G^H(n)$ and $G^T(n)$, respectively.
The height of $G^H(n)$ is the maximum level in the level decomposition of $G^H(n)$, namely the number of prime factors of $n$ counted with multiplicity.
\begin{theorem}[Height of $G^H(n)$]
\label{Tmdepth}
\begin{equation}
height(G^H(n)) = sp(G^H(n)) = \sum_ {m_i \in M(n)}m_i = \Omega(n)
\end{equation}
\end{theorem}
\begin{proof}
Follows directly from Lemma~\ref{Lmlevel}.
\end{proof}
\begin{corollary}[Length of Paths in $G^H(n)$ and $G^T(n)$]
\label{Cldepth}
\begin{equation}
sp(G^H(n)) = lp(G^H(n)) = lp(G^T(n)) = \Omega(n)
\end{equation}
\end{corollary}
\begin{proof}
Follows directly from Lemma~\ref{Lmlevel}.
\end{proof}
Note that $sp(G^T(n)) = 1$ for $n > 1$, since the single-arc path $\langle(1,n)\rangle \in P^T(n)$.
\begin{theorem}[Symmetry of $V_l(n)$]
\label{Tmsymmv}
\begin{equation}
|V_l(n)| = |V_{\Omega(n)-l}(n)|
\end{equation}
\end{theorem}
\begin{proof}
A $1-1$ correspondence $f$ is defined between $V_l(n)$ and $V_{\Omega(n)-l}(n)$.
Let $v$ be a node in $V_l(n)$ and
$f$ the function from $V_l(n)$ to $V_{\Omega(n)-l}(n)$ defined by
\begin{equation}
f(v) = p_1^{m_1 - v_1} p_2^{m_2 - v_2}\cdots p_\omega^{m_\omega - v_\omega}
\label{ef}
\end{equation}
By Lemma~\ref{Lmlevel}, $f(v)$ is on level $\Omega(n) - l$ and $f$ is clearly $1-1$ into.
Similarly, the function $g$ from $V_{\Omega(n)-l}(n)$ to $V_l(n)$ defined by
\begin{equation}
g(u) = p_1^{m_1 - u_1} p_2^{m_2 - u_2}\cdots p_\omega^{m_\omega - u_\omega} \textrm{ where } u \in V_{\Omega(n)-l}(n)
\label{eg}
\end{equation}
is clearly $1-1$ into with $g(u)$ in $V_l(n)$. Thus, $g$ is $f^{-1}$ and $|V_l(n)| = |V_{\Omega(n)-l}(n)|$.
\end{proof}
Let $E^H_l(n)$ be the set of arcs from nodes in level $l$ to nodes in level $l+1$, as formally defined in Definition~\ref{dlev}.
\begin{definition}
\label{dlev}
\begin{equation}
E^H_l(n) = \{(a,b) \in E^H(n) | a \in V_l(n)\}
\end{equation}
\end{definition}
For example in Figure~\ref{f02}, $E^H_0(540) = \{(1,2), (1,3), (1,5)\}$ and \newline
$E^H_5(540) = \{(108,540), (180,540), (270,540)\}$.
The following is a symmetry property of $E^H(n)$.
\begin{theorem}[Symmetry of $E^H_l(n)$]
\label{Tmsymme}
\begin{equation}
|E^H_l(n)| = |E^H_{\Omega(n)-l-1}(n)|
\end{equation}
\end{theorem}
\begin{proof}
Let $a \in V_l(n)$ and $b \in V_{l+1}(n)$, and $(a,b)$ be an arc from $V_l(n)$ to $V_{l+1}(n)$.
Then, using $f$ in~(\ref{ef}), the function $F$ defined by $F(a,b) = (f(b),f(a))$ provides a $1-1$ into function from
$E^H_l(n)$ to $E^H_{\Omega(n)-l-1}(n)$.
This is seen by noting that
\begin{eqnarray}
f(b) & = & p_1^{m_1-b_1} p_2^{m_2-b_2} \cdots p_\omega^{m_\omega-b_\omega} \textrm{ is in } V_{\Omega(n) - l - 1}\\
f(a) & = & p_1^{m_1-a_1} p_2^{m_2-a_2} \cdots p_\omega^{m_\omega-a_\omega} \textrm{ is in } V_{\Omega(n) - l}\\
\frac{f(a)}{f(b)} & = & \frac{p_1^{m_1-a_1} p_2^{m_2-a_2} \cdots p_\omega^{m_\omega-a_\omega}}{p_1^{m_1-b_1} p_2^{m_2-b_2}\cdots p_\omega^{m_\omega-b_\omega}} \nonumber \\
& = & \frac{p_1^{m_1} p_2^{m_2} \cdots p_\omega^{m_\omega} p_1^{b_1} p_2^{b_2} \cdots p_\omega^{b_\omega}}
{p_1^{m_1} p_2^{m_2} \cdots p_\omega^{m_\omega}p_1^{a_1} p_2^{a_2} \cdots p_\omega^{a_\omega}} = \frac{b}{a} = p \label{epf1}
\end{eqnarray}
Thus, from~(\ref{epf1}), since $(a,b)$ is an arc, $(f(b),f(a))$ is an arc from $V_{\Omega(n) - l - 1}$ to $V_{\Omega(n) - l}$.
Therefore, $F$ provides a $1-1$ into function from $E^H_l(n)$ to $E^H_{\Omega(n)-l-1}(n)$.
Similarly, the function $G$ defined by $G(c,d) = (g(d),g(c))$ is a $1-1$ into function
from $E^H_{\Omega(n)-l-1}(n)$ to $E^H_l(n)$.
Therefore, $|E^H_l(n)| = |E^H_{\Omega(n)-l-1}(n)|$.
\end{proof}
All $G^H(n)$ have a single source node, $1$, and a single sink node, $n$. Thus $|V_0(n)| = |V_{\Omega(n)}(n)| = 1$.
There are two other special levels with $\omega(n)$ as their cardinality.
\begin{theorem}[Two special levels with $\omega(n)$ nodes]
\label{Tmuniqp}
\begin{equation}
|V_{\Omega(n)-1}(n)|= |V_1(n)| = \omega(n)
\end{equation}
\end{theorem}
\begin{proof}
$V_1(n)$ consists of the $\omega(n)$ distinct prime factors of $n$. By Theorem~\ref{Tmsymmv} $|V_1(n)| = |V_{\Omega(n)-1}(n)| = \omega(n) $.
\end{proof}
\begin{definition} Width of $G^H(n)$ in terms of nodes
\begin{equation}
W_v(n) = \max_{l \in \{0,..,\Omega(n)\}}|V_l(n)|
\end{equation}
\end{definition}
For example in Figure~\ref{f02}, $W_v(540) = 6$ at level $3$.
The $W_v(n)$ sequence is listed as A096825, the maximal size of an {\em antichain} in a divisor lattice in~\cite{oeis}.
A different width can be defined in terms of arc cardinality in each level as depicted in Figure~\ref{f03}.
\begin{definition} Width of $G^H(n)$ in terms of arcs
\begin{equation}
W_e(n) = \max_{l \in \{0,..,\Omega(n)-1\}}|E^H_l(n)|
\end{equation}
\end{definition}
\begin{figure}[htb]
\begin{center}
\resizebox{!}{2.4in}{\includegraphics{f03.eps}}
\end{center}
\caption{\label{f03} Anatomy of $G^H(n)$.}
\end{figure}
For example in Figure~\ref{f02}, $W_e(540) = 12$ at levels $2$ and $3$.
The $W_e(n)$ sequence does not appear in~\cite{oeis}.
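Both widths can be computed in a single pass over the exponent vectors; the sketch below (ours, again reusing \texttt{factorization} and \texttt{product}) counts the nodes and the outgoing arcs at each level:
\begin{verbatim}
def widths(n):
    # Node width W_v(n) and arc width W_e(n) of G^H(n).
    f = factorization(n)
    node_count, arc_count = {}, {}
    for exps in product(*[range(m + 1) for _, m in f]):
        l = sum(exps)
        node_count[l] = node_count.get(l, 0) + 1
        # Out-degree of this node: exponents that can still be incremented.
        out = sum(1 for e, (_, m) in zip(exps, f) if e < m)
        arc_count[l] = arc_count.get(l, 0) + out
    return max(node_count.values()), max(arc_count.values())

# Example: W_v(540) = 6 and W_e(540) = 12.
assert widths(540) == (6, 12)
\end{verbatim}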
Since $G^H(n)$ is a digraph, each node $v$ has an {\em in-degree} $\Delta^-(v)$, the number of incoming arcs, and an {\em out-degree} $\Delta^+(v)$, the number of outgoing arcs, and the degree of $v$ is defined as $\Delta(v) = \Delta^+(v) + \Delta^-(v)$.
\begin{lemma}[Upper bound for indegrees and outdegrees]
\label{Tmbinout}
For a node $v \in V(n)$, \newline
\[ \Delta^-(v) \le \omega(n), \Delta^+(v) \le \omega(n), \textrm{ and } \Delta(v) \le 2\omega(n)\]
\end{lemma}
\begin{proof}
For the outdegree, each node can add at most one more of each distinct prime to the product.
For the indegree, the product represented by the node was obtained by adding at most one prime to the product at the level just below.
\end{proof}
\begin{definition} The degree of the graph $G^H(n)$, denoted $\Delta(G^H(n))$, is defined by
\begin{equation}
\Delta(G^H(n)) = \max_{v \in V(n)} \Delta(v)
\end{equation}
\end{definition}
For example from Figure~\ref{f02}, $\Delta(G^H(540)) = 5$ because the maximum node degree of $G^H(540)$ occurs at $90, 30, 18,$ and $6$.
The $\Delta(G^H(n))$, or simply $\Delta(n)$, sequence is not listed in~\cite{oeis}.
$\Delta(G^H(n))$ can be computed very efficiently, as stated in Theorem~\ref{Tdelta}, using only $M'(n)$.
Let $G(n)$ be the sub-multiset of $M'(n)$ defined by
\begin{eqnarray}
G(n) & =& [m_i \in M'(n) \suchthat m_i > 1] \\
|G(n)| & = & \sum_{m_i \in M(n)}gto(m_i) \textrm{ where } gto(m_i) =
\left\{
\begin{array}{l l}
1 & \textrm{if } m_i > 1 \\
0 & \textrm{otherwise }
\end{array} \right.
\end{eqnarray}
For example, for $M'(540) = [2, 3, 1]$, $G(540) = [2, 3]$ and $|G(540)| = 2$.
\begin{theorem}[Degree of $G^H(n)$]
\label{Tdelta}
\begin{equation}
\Delta(G^H(n)) = \omega(n) + |G(n)|
\end{equation}
\end{theorem}
\begin{proof}
Consider $v \in V(n)$ with $M_n(v) = (v_1,\cdots, v_{\omega})$ where $0 \le v_i \le m_i$.
For a $v_i$ whose $m_i > 1$, $v$ has an incoming arc from a node $u$ whose $M_n(u) = (v_1,\cdots, (u_i = v_i - 1), \cdots, v_{\omega})$ provided $v_i > 0$ and $v$ has an outgoing arc to a node $w$ whose $M_n(w)= (v_1,\cdots, (w_i = v_i + 1), \cdots, v_{\omega})$ as long as $v_i < m_i$.
Every element in $G(n)$ contributes $2$ to $\Delta(v)$.
For a $v_i$ in the multiset $M'(n) - G(n)$, whose $m_i = 1$, $v$ has either only the incoming arc from a node $u$ with $M_n(u) = (v_1,\cdots, (u_i = 0), \cdots, v_{\omega(n)})$ if $v_i = 1$, or only the outgoing arc to a node $w$ with $M_n(w) = (v_1,\cdots, (w_i = 1), \cdots, v_{\omega(n)})$ if $v_i = 0$. There are $\omega(n) - |G(n)|$ such elements, each contributing at most $1$ to $\Delta(v)$.
Therefore, for every node $v \in V(n)$, $\Delta(v) \le 2 \times |G(n)|+ \omega(n) - |G(n)| = \omega(n) + |G(n)|$. There exists a node $v$ whose $\Delta(v) = \omega(n) + |G(n)|$. One such node is $v$ such that $M_n(v) = (m_1-1, m_2-1,\cdots,m_{\omega(n)}-1)$.
\end{proof}
For example in Figure~\ref{f02}, in $G^H((2,3,1))$, the node $18$ whose $M_n(18) = (1,2,0)$ has the maximum degree, $5$.
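Theorem~\ref{Tdelta} reduces the computation of $\Delta(G^H(n))$ to a scan of the prime signature, as the following one-line sketch (ours) shows:
\begin{verbatim}
def degree(M):
    # Delta(G^H) = omega + |G|, where G keeps the exponents m_i > 1.
    return len(M) + sum(1 for m in M if m > 1)

# Example: M'(540) = [2, 3, 1] gives omega = 3 and |G| = 2, so Delta = 5.
assert degree([2, 3, 1]) == 5
\end{verbatim}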
The next graph invariant of interest is the number of paths, $|P(G^H(n))|$.
The first 200 entries of its integer sequence match those of A008480~\cite{oeis}, the number of ordered prime factorizations of $n$, whose multinomial coefficient formula is given in Theorem~\ref{Tmnumop}~\cite{AS1972,KKW1993}.
\begin{theorem}[The number of ordered prime factorizations of $n$~\cite{AS1972,KKW1993}]
\label{Tmnumop}
\begin{equation}
opf(n)= \frac{(\sum_{x \in M(n)} x)!}{ \prod_{x \in M(n)} x!}
\end{equation}
\end{theorem}
While a nice closed formula has been given in~\cite{AS1972,KKW1993}, a recursive definition is given here to which the {\em dynamic programming} technique can be applied to quickly generate the integer sequence.
\begin{theorem}[Cardinality of $P(G^H(n))$]
\label{Tmpnumh}
\begin{equation}
|P(G^H(n))|=
\left\{
\begin{array}{l l}
\sum\limits_{v \in V_{\Omega(n)-1}(n)}|P(G^H(v))| & \textrm{if } \Omega(n) > 1 \\
1 & \textrm{if } \Omega(n) \le 1
\end{array} \right.
\end{equation}
\end{theorem}
\begin{proof}
All paths in $P(G^H(n))$ must contain exactly one node at level $\Omega(n) - 1$.
\end{proof}
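Because $|P(G^H(n))|$ depends only on the prime signature, the recursion of Theorem~\ref{Tmpnumh} memoizes naturally on the sorted exponent tuple. The sketch below (ours) applies this dynamic programming idea and checks the result against the multinomial formula of Theorem~\ref{Tmnumop}:
\begin{verbatim}
from functools import lru_cache
from math import factorial, prod

@lru_cache(maxsize=None)
def hasse_paths(M):
    # |P(G^H(M))|: sum over the nodes one level below the sink n.
    if sum(M) <= 1:
        return 1
    total = 0
    for i, m in enumerate(M):
        if m > 0:  # decrement one exponent to reach level Omega(n) - 1
            child = tuple(sorted(M[:i] + (m - 1,) + M[i + 1:]))
            total += hasse_paths(child)
    return total

def opf(M):
    # Multinomial formula: (sum of m_i)! / product of (m_i!).
    return factorial(sum(M)) // prod(factorial(m) for m in M)

# Example: M(540) = (2, 3, 1) yields 60 ordered prime factorizations.
assert hasse_paths((2, 3, 1)) == opf((2, 3, 1)) == 60
\end{verbatim}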
The next four graph invariants involve the fact that $G^H(n)$ is {\em bipartite} as depicted in Figure~\ref{f04}.
\begin{theorem}[$G^H(n)$ is bipartite]
\label{Tmbipart}
\end{theorem}
\begin{proof}
Arcs join only even level nodes to odd level nodes and vice versa.
Thus, the nodes at even and odd levels form a bipartition of $V(n)$.
\end{proof}
\begin{figure}[htb]
\begin{center}
\begin{tabular}{cc}
\resizebox{!}{1.8in}{\includegraphics{f04a.eps}}
&
\resizebox{!}{1.8in}{\includegraphics{f04b.eps}}\\
(a) Hasse diagram $G^H(60)$ & (b) $G^H(60)$ shown as a bipartite graph
\end{tabular}
\end{center}
\caption{\label{f04} $G^H(60)$}
\end{figure}
\begin{definition}
\label{Defoddv}
\begin{eqnarray}
V_E(n) & = & \{v \in V(n) \mid \sum_{m_i \in M_n(v)}m_i = even\} \\
V_O(n) & = & \{v \in V(n) \mid \sum_{m_i \in M_n(v)}m_i = odd\}
\end{eqnarray}
\end{definition}
The integer sequence of the cardinality of $V_E$ matches A038548, the number of divisors of $n$ that are at most $\sqrt{n}$~\cite{oeis,Andrews2004}.
The integer sequence of $|V_O|$ likewise matches A056924, the number of divisors of $n$ that are smaller than $\sqrt{n}$~\cite{oeis,Andrews2004}.
\begin{theorem}[Cardinality of $V_O(n)$]
\label{TmnumOv}
\begin{equation}
|V_O(n)| = \left \lfloor \frac{|V(n)|}{2}\right \rfloor
\end{equation}
\end{theorem}
\begin{proof}
The proof is by induction. For the base case $\omega = 1$, each divisor has a single exponent, i.e., $v \in \{p_1^0,p_1^1,\cdots,p_1^{m_1}\}$. Clearly, $|V_O| = \left \lfloor \frac{|V(n)|}{2}\right \rfloor$.
For the inductive step $\omega + 1$, let $M_{\omega + 1}$ be $M_{\omega}$ with $m_{\omega + 1}$ appended.
$V_O(M_{\omega+1})$ is the union of the Cartesian product of $V_O(M_{\omega})$ and $V_E(m_{\omega + 1})$
together with the Cartesian product of $V_E(M_{\omega})$ and $V_O(m_{\omega + 1})$, thus
\begin{equation}
|V_O(M_{\omega + 1})| = |V_O(M_{\omega})|\times|V_E(m_{\omega + 1})| + |V_E(M_{\omega})|\times|V_O(m_{\omega + 1})|
\end{equation}
There are four cases depending on the parities of $|V(M_{\omega})|$ and $m_{\omega + 1}$.
The following uses Theorem~\ref{Tmorder} and Definition~\ref{Defoddv}.
\newline
If $|V(M_{\omega})|$ is odd and $m_{\omega + 1}$ is odd,
\begin{eqnarray}
|V_O(M_{\omega + 1})| & = & \frac{|V(M_{\omega})|-1}{2} \times \frac{m_{\omega + 1}+1}{2} + \frac{|V(M_{\omega})|+1}{2} \times \frac{m_{\omega + 1}+1}{2} \nonumber \\
& = & \frac{|V(M_{\omega})|(m_{\omega + 1}+1)-(m_{\omega + 1}+1)}{4} + \frac{|V(M_{\omega})|(m_{\omega + 1}+1)+(m_{\omega + 1}+1)}{4} \nonumber \\
& = & \frac{|V(M_{\omega + 1})|-(m_{\omega + 1}+1) + |V(M_{\omega + 1})|+(m_{\omega + 1}+1)}{4}
= \left \lfloor \frac{|V(M_{\omega + 1})|}{2}\right \rfloor \nonumber
\end{eqnarray}
If $|V(M_{\omega})|$ is odd and $m_{\omega + 1}$ is even,
\begin{eqnarray}
|V_O(M_{\omega + 1})| & = & \frac{|V(M_{\omega})|-1}{2} \times \frac{m_{\omega + 1}+2}{2} + \frac{|V(M_{\omega})|+1}{2} \times \frac{m_{\omega + 1}}{2} \nonumber \\
& = & \frac{|V(M_{\omega})|(m_{\omega + 1}+2)-(m_{\omega + 1} + 2)}{4} + \frac{|V(M_{\omega})|m_{\omega + 1}+m_{\omega + 1}}{4} \nonumber \\
& = & \frac{|V(M_{\omega})|(2m_{\omega + 1}+2)-(m_{\omega + 1} + 2)+m_{\omega + 1}}{4}
= \frac{|V(M_{\omega + 1})|-1}{2} = \left \lfloor \frac{|V(M_{\omega + 1})|}{2}\right \rfloor \nonumber
\end{eqnarray}
If $|V(M_{\omega})|$ is even and $m_{\omega + 1}$ is odd,
\begin{eqnarray}
|V_O(M_{\omega + 1})| & = & \frac{|V(M_{\omega})|}{2} \times \frac{m_{\omega + 1}+1}{2} + \frac{|V(M_{\omega})|}{2} \times \frac{m_{\omega + 1}+1}{2} \nonumber \\
& = & \frac{|V(M_{\omega})|(m_{\omega + 1}+1)}{4} + \frac{|V(M_{\omega})|(m_{\omega + 1}+1)}{4} = \frac{|V(M_{\omega + 1})|}{2} = \left \lfloor \frac{|V(M_{\omega + 1})|}{2}\right \rfloor \nonumber
\end{eqnarray}
If $|V(M_{\omega})|$ is even and $m_{\omega + 1}$ is even,
\begin{eqnarray}
|V_O(M_{\omega + 1})| & = & \frac{|V(M_{\omega})|}{2} \times \frac{m_{\omega + 1}+2}{2} + \frac{|V(M_{\omega})|}{2} \times \frac{m_{\omega + 1}}{2} \nonumber \\
& = & \frac{|V(M_{\omega})|(2m_{\omega + 1}+2)}{4} = \frac{|V(M_{\omega + 1})|}{2} = \left \lfloor \frac{|V(M_{\omega + 1})|}{2}\right \rfloor \nonumber
\end{eqnarray}
Therefore, $|V_O(M_{\omega + 1})| = \left \lfloor \frac{|V(M_{\omega + 1})|}{2}\right \rfloor$ in all four cases.
\end{proof}
\begin{corollary}[Cardinality of $V_E(n)$]
\label{TmnumEv}
\begin{equation}
|V_E(n)| =|V(n)| - |V_O(n)| =|V(n)| - \left \lfloor |V(n)|/2\right \rfloor
\end{equation}
\end{corollary}
\begin{proof}
Since $V_E(n)$ and $V_O(n)$ partition $V(n)$, $|V_E(n)| =|V(n)| - |V_O(n)|$.
\end{proof}
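A quick numerical check of Theorem~\ref{TmnumOv} and Corollary~\ref{TmnumEv}, reusing \texttt{levels} from the earlier sketch:
\begin{verbatim}
def parity_counts(n):
    # Return (|V_E(n)|, |V_O(n)|) from the level decomposition.
    by_level = levels(n)
    even = sum(len(v) for l, v in by_level.items() if l % 2 == 0)
    odd = sum(len(v) for l, v in by_level.items() if l % 2 == 1)
    return even, odd

# Example: |V(20)| = 6 splits as |V_E(20)| = 3 and |V_O(20)| = 3.
assert parity_counts(20) == (3, 3)
\end{verbatim}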
As with $V(n)$, the arc set $E^H(n)$ is partitioned by parity as follows.
\begin{definition}
\begin{eqnarray}
E_E(n) & = & \{ (a,b) \in E^H(n) \mid \sum_{m_i \in M(a)}m_i = even\} \\
E_O(n) & = & \{ (a,b) \in E^H(n) \mid \sum_{m_i \in M(a)}m_i = odd\}
\end{eqnarray}
\end{definition}
Surprisingly, the integer sequences of $|E_E(n)|$ and $|E_O(n)|$ are not listed in~\cite{oeis}.
\begin{theorem}[Cardinality of $E_O(n)$]
\label{TmnumSe}
\begin{equation}
|E_O(n)| = \left \lfloor \frac{|E^H(n)|}{2}\right \rfloor
\end{equation}
\end{theorem}
\begin{proof}
An inductive proof, similar to the proof for the node parity decomposition in Theorem~\ref{TmnumOv}, can be applied using the Cartesian product of two graphs~(\ref{eec}) and the arc parity decomposition~(\ref{eeo}).
\begin{equation}
|E^H(M_{\omega + 1})| = |E^H(M_{\omega})| \times |V(m_{\omega + 1})| + |V(M_{\omega})| \times |E^H(m_{\omega + 1})|
\label{eec}
\end{equation}
\begin{eqnarray}
|E_O(M_{\omega + 1})| & = & |E_O(M_{\omega})| \times |V_E(m_{\omega + 1})| + |E_E(M_{\omega})| \times |V_O(m_{\omega + 1})|
\label{eeo} \\
&& + |V_O(M_{\omega})| \times |E_E(m_{\omega + 1})| + |V_E(M_{\omega})| \times |E_O(m_{\omega + 1})| \nonumber
\end{eqnarray}
$|E_O(M_{\omega + 1})| = \left \lfloor |E^H(M_{\omega + 1})|/2 \right \rfloor$ holds
in all eight cases based on the parities of $|E^H(M_{\omega})|$, $|V(M_{\omega})|$, and $m_{\omega + 1}$.
\end{proof}
\begin{corollary}[Cardinality of $E_E(n)$]
\label{TmnumEe}
\begin{equation}
|E_E(n)| =|E^H(n)| - |E_O(n)| =|E^H(n)| - \left \lfloor \frac{|E^H(n)|}{2}\right \rfloor
\end{equation}
\end{corollary}
\begin{proof}
Since $E_E(n)$ and $E_O(n)$ partition $E^H(n)$, $|E_E(n)| =|E^H(n)| - |E_O(n)|$.
\end{proof}
The last two graph invariants of Table~\ref{t01} are exclusive to the transitive closure, $G^T(n)$, namely the size and the number of paths in $G^T(n)$.
Also surprisingly, the sequence for the size of $G^T(n)$ is not listed in~\cite{oeis}.
\begin{theorem}[Size of $G^T(n)$]
\label{TmsizeT}
\begin{equation}
|E^T(n)| = \sum_ {v \in V(n)}(|V(v)| - 1)
\end{equation}
\end{theorem}
\begin{proof}
The number of incoming arcs to node $v$ is the number of divisors of $v$ that are less than $v$ itself. Thus the indegree of $v$ is $|V(v)| - 1$ and the sum of the indegrees of all nodes in $G^T(n)$ is the size of $G^T(n)$.
\end{proof}
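In code, Theorem~\ref{TmsizeT} amounts to summing divisor counts over the divisors of $n$; a short sketch of ours:
\begin{verbatim}
def divisors(n):
    # All positive divisors of n.
    small = [d for d in range(1, int(n ** 0.5) + 1) if n % d == 0]
    return sorted(set(small + [n // d for d in small]))

def transitive_size(n):
    # |E^T(n)| = sum over all divisors v of (|V(v)| - 1).
    return sum(len(divisors(v)) - 1 for v in divisors(n))

# Example: |E^T(20)| = 12, matching the natural-order table.
assert transitive_size(20) == 12
\end{verbatim}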
\begin{theorem}[Cardinality of $P(G^T(n))$]
\label{Tmpnumt}
\begin{equation}
|P(G^T(n))|=
\left\{
\begin{array}{l l}
\sum\limits_{v \in V(n)-\{n\}}|P(G^T(v))| & \textrm{if } \Omega(n) > 1 \\
1 & \textrm{if } \Omega(n) \le 1
\end{array} \right.
\end{equation}
\end{theorem}
\begin{proof}
Let $P(G^T(v))$ be the set of all paths from $1$ to $v$ where $v \ne n$. The addition of the arc $(v,n)$ to each path in $P(G^T(v))$ yields a path from $1$ to $n$. Thus, summing $|P(G^T(v))|$ over all $v \in V(n) - \{n\}$ yields $|P(G^T(n))|$.
\end{proof}
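The recursion of Theorem~\ref{Tmpnumt} translates directly into a memoized sketch (ours, reusing \texttt{divisors} from the previous snippet); memoization matters here since the recursion revisits the same divisors many times:
\begin{verbatim}
from functools import lru_cache

@lru_cache(maxsize=None)
def transitive_paths(n):
    # |P(G^T(n))|: sum the path counts of all proper divisors of n.
    divs = divisors(n)
    if len(divs) <= 2:  # Omega(n) <= 1, i.e., n = 1 or n prime
        return 1
    return sum(transitive_paths(v) for v in divs if v != n)

# Example: |P(G^T(12))| = 8, matching A002033.
assert transitive_paths(12) == 8
\end{verbatim}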
The integer sequence of $|P(G^T(n))|$ matches
A002033~\cite{oeis}, described as the number of {\em perfect partitions} of $n$~\cite{oeis,Comtet1974}.
Thus, its interpretation as the number of paths from $1$ to $n$ is one of the original contributions of this work.
\section{Graph Invariant Integer Sequences ordered by Prime Signature}
\label{s3}
The set of positive integers $> 1$ is partitioned by their prime signatures as exemplified in Table~\ref{t02}.
\begin{table}[b]\vspace*{-3ex}
\caption[]{Partitions of integers ($> 1$) by prime signature congruency.}
\label{t02}
\centering
{\footnotesize
\begin{tabular}{p{0.32in}p{3.4in}p{0.9in}} \hline
\multicolumn{1}{c}{$M$ / $S$} &
\multicolumn{1}{c}{Integer sequence for $n = 1,\cdots,20$} &
\multicolumn{1}{c}{OEIS} \\ \hline \hline
(1) & 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, $\cdots$ & A000040 \newline (Primes)\\ \hline
(2) & 4, 9, 25, 49, 121, 169, 289, 361, 529, 841, 961, 1369, 1681, 1849, 2209, 2809, 3481, 3721, 4489, 5041, $\cdots$ & A001248 \newline(Squared prime)\\ \hline
(1,1) & 6, 10, 14, 15, 21, 22, 26, 33, 34, 35, 38, 39, 46, 51, 55, 57, 58, 62, 65, 69, $\cdots$ & A006881 \\ \hline
(3) & 8, 27, 125, 343, 1331, 2197, 4913, 6859, 12167, 24389, 29791, 50653, 68921, 79507, 103823, 148877, 205379, 226981, 300763, 357911, $\cdots$ & A030078\newline (Cubed prime)\\ \hline
(2,1) & 12, 18, 20, 28, 44, 45, 50, 52, 63, 68, 75, 76, 92, 98, 99, 116, 117, 124, 147, 148, $\cdots$ & A054753\\ \hline
(1,1,1) &30, 42, 66, 70, 78, 102, 105, 110, 114, 130, 138, 154, 165, 170, 174, 182, 186, 190, 195, 222, $\cdots$ & A007304 \\ \hline
\multicolumn{1}{c}{$\vdots$} &
\multicolumn{1}{c}{$\vdots$} &
\multicolumn{1}{c}{$\vdots$} \\
\end{tabular} }
\end{table}
\begin{definition}
$n_x$ and $n_y$ are {\em prime signature congruent} iff $M'(n_x) = M'(n_y)$.
\end{definition}
Let $S(n)$ be a representative sequence of the {\em prime signature} $M'(n)$ written in descending order.
More formally, $S(n) = (s_1, s_2, \cdots, s_\omega)$ is the permutation of the multiset $M'(n) = [m_1, m_2, \cdots, m_\omega]$ such that $s_1 \ge s_2 \ge \cdots \ge s_\omega$.
For example, $S(4500) = S(33075) = (3,2,2)$ because $M'(4500 = 2^2\times3^2\times5^3) = [2, 2, 3]$ is the same prime signature as $M'(33075 = 3^3\times5^2\times7^2) = [3,2,2]$.
Although there are numerous ways of ordering the set of all $S(n)$, two particular orderings, the {\em graded colexicographic} and {\em canonical} orders, appear in the literature~\cite{HW1979,AS1972}. In the {\em graded colexicographic order}, the $S$ are first grouped by $\Omega(S)$ and then by $\omega(S)$ in ascending order; finally, the reverse lexicographic order is applied within each sub-group. It is closely related to the {\em graded reflected colexicographic order} used and denoted as $\pi$ in~\cite{AS1972}.
Let $LI(S)$ denote the least integer of a prime signature in the graded (reflected or not) colexicographic order. This sequence is listed as A036035 in~\cite{oeis}.
\begin{figure}[htb]
\begin{center}
{\small
\begin{tabular}{ llllll }
1 (0) & 2 (1) & 3 (2) & 5 (3) & 8 (4) & 13 (5) \\
& & 4 (1,1) & 6 (2,1) & 9 (3,1) & 14 (4,1) \\
& & & 7 (1,1,1) & 10 (2,2) & 15 (3,2) \\
& & & & 11 (2,1,1) & 16 (3,1,1) \\
& & & & 12 (1,1,1,1) & 17 (2,2,1) \\
& & & & & 18 (2,1,1,1) \\
& & & & & 19 (1,1,1,1,1) \\
\end{tabular}}\\
{\small
\begin{tabular}{ l l l l l l ll }
Index & Graded Colexicographic & Canonical\\
20 & (6) & (6) \\
21 & (5,1) & (5,1) \\
22 & (4,2) & (4,2) \\
23 & (3,3) & (4,1,1) \\
24 & (4,1,1) & (3,3) \\
25 & (3,2,1) & (3,2,1) \\
26 & (2,2,2) & (3,1,1,1) \\
27 & (3,1,1,1) & (2,2,2) \\
28 & (2,2,1,1) & (2,2,1,1) \\
29 & (2,1,1,1,1) & (2,1,1,1,1) \\
30 & (1,1,1,1,1,1) & (1,1,1,1,1,1)
\end{tabular}}
\end{center}
\caption{\label{f05} First 30 prime signatures in colexicographic and canonical orders.}
\end{figure}
Next, the {\em canonical order}, also known as the {\em graded reverse lexicographic order}, is often used to order the partitions~\cite{HW1979}. It first groups prime signatures by $\Omega(S)$ and then uses the reverse lexicographic order. Although this order is identical to the {\em graded colexicographic} order for the first 22 prime signatures, they clearly differ at 23, 24, 26, 27, etc., as seen in Figure~\ref{f05}.
The integer sequence of the least integer, $LI(S)$ in canonical order is listed as the Canonical partition sequence encoded by prime factorization (A063008) in~\cite{oeis}.
\begin{figure}[htb]
\begin{center}
\resizebox{5.1in}{!}{\includegraphics{f05.eps}}\\
\end{center}
\caption{\label{f06} First seven Hasse diagrams ordered by prime signatures.}
\end{figure}
The prime signature $S(n)$ determines the structures of $G^H(S(n))$ and $G^T(S(n))$, as shown in Figure~\ref{f06} with the first few simple {\em Hasse diagrams}.
All integer sequences of graph invariants in natural order in Table~\ref{t01} can be ordered in the graded colexicographic order (Table~\ref{t04}) and the canonical order (Table~\ref{t05}).
However, very little has been investigated concerning these sequences since most of them are in fact new.
In~\cite{AS1972}, Abramowitz and Stegun labeled $\Omega(S)$, $\omega(S)$, and $|P^H(S)|$ in the graded colexicographic order as $n$, $m$, and $M_1$, respectively. Only these three graph invariants and the number of divisors, $|V(S)|$ are found in~\cite{oeis} for the graded colexicographic order. Only $|P^H(S)|$ is found in~\cite{oeis} for the canonical order.
\section{Conclusion}
\label{s4}
In this article, fourteen graph invariants were investigated for two classic graphs, the {\em Hasse diagram}, $G^H(n)$ and its {\em transitive closure}, $G^T(n)$.
Integer sequences with their first two hundred entries in natural order by $n$ are computed and compared to existing sequences in the On-Line Encyclopedia of Integer Sequences.
Five new integer sequences in natural order, shown in Table~\ref{t01}, were discovered, i.e., not found in~\cite{oeis}.
New interpretations based on graph theory are provided for sequences found in~\cite{oeis}.
Ten (Table~\ref{t04}) and thirteen (Table~\ref{t05}) new integer sequences were discovered for the graded colexicographic and canonical orders, respectively.
Here are some intriguing conjectures stated as open problems.
\begin{conjecture}[Cardinality of disjoint paths] Let $P'(G^H(n))$ be the set of {\em disjoint paths}. $|P'(G^H(n))|=\omega(n)$?
\label{Tmdpnum}
\end{conjecture}
\begin{conjecture}[Node width at middle level] $W_v(n) = |V_{\lceil \Omega(n)/2\rceil}(n)|$?
\label{Conj5}
\end{conjecture}
\begin{conjecture}[Relationship between widths by nodes and arcs] There always exists a level $l$ such that
if $|V_l(n)| = W_v(n)$, then $|E^H_l(n)| = W_e(n)$.
\begin{equation}
\argmax_{l \in \{0,..,\Omega(n)-1\}}|E^H_l(n)| = \argmax_{l \in \{0,..,\Omega(n)-1\}}|V_l(n)|?
\end{equation}
\label{Conj6}
\end{conjecture}
Other future work includes finding a closed-form and/or a simpler recursive formula for the cardinality of $P(G^T(n))$ in Theorem~\ref{Tmpnumt}.
Note that the entries for $|P^T(S)|$ in Tables~\ref{t04} and~\ref{t05} number fewer than 50, as computing $|P^T(S)|$ by Theorem~\ref{Tmpnumt} took too long.
\oneappendix
\section{Integer Sequences by Prime Signatures}
\begin{table}[btp]\vspace*{-3ex}
\caption[]{divides relation graph invariants in graded colexicographic order}
\label{t04}
\centering
{\footnotesize
\begin{tabular}{cp{3.7in}p{0.4in}} \hline
\multicolumn{1}{c}{Invariant} &
\multicolumn{1}{c}{Integer sequence for $S = [0],\cdots,[4,4]$} &
\multicolumn{1}{c}{OEIS} \\ \hline \hline
$LI(S)$ &
1, 2, 4, 6, 8, 12, 30, 16, 24, 36, 60, 210, 32, 48, 72, 120, 180, 420, 2310, 64, 96, 144, 216, 240, 360, 900, 840, 1260, 4620, 30030, 128, 192, 288, 432, 480, 720, 1080, 1800, 1680, 2520, 6300, 9240, 13860, 60060, 510510, 256, 384, 576, 864, 1296, $\cdots$ & A036035 \\ \hline
$|V(S)|$ &1, 2, 3, 4, 4, 6, 8, 5, 8, 9, 12, 16, 6, 10, 12, 16, 18, 24, 32, 7, 12, 15, 16, 20, 24, 27, 32, 36, 48, 64, 8, 14, 18, 20, 24, 30, 32, 36, 40, 48, 54, 64, 72, 96, 128, 9, 16, 21, 24, 25, $\cdots$ & A074139\\ \hline
$|E^H(S)|$ & 0, 1, 2, 4, 3, 7, 12, 4, 10, 12, 20, 32, 5, 13, 17, 28, 33, 52, 80, 6, 16, 22, 24, 36, 46, 54, 72, 84, 128, 192, 7, 19, 27, 31, 44, 59, 64, 75, 92, 116, 135, 176, 204, 304, 448, 8, 22, 32, 38, 40, $\cdots$ & -\\ \hline
$\Omega(S)$ &0, 1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, $\cdots$ &A036042\\ \hline
$\omega(S)$ &0, 1, 1, 2, 1, 2, 3, 1, 2, 2, 3, 4, 1, 2, 2, 3, 3, 4, 5, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 6, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 5, 5, 6, 7, 1, 2, 2, 2, 2, $\cdots$ & A036043 \\ \hline
$W_v(S)$ & 1, 1, 1, 2, 1, 2, 3, 1, 2, 3, 4, 6, 1, 2, 3, 4, 5, 7, 10, 1, 2, 3, 4, 4, 6, 7, 8, 10, 14, 20, 1, 2, 3, 4, 4, 6, 7, 8, 8, 11, 13, 15, 18, 25, 35, 1, 2, 3, 4, 5, $\cdots$ & -\\ \hline
$W_e(S)$ & 0, 1, 1, 2, 1, 3, 6, 1, 3, 4, 7, 12, 1, 3, 5, 8, 11, 18, 30, 1, 3, 5, 6, 8, 12, 15, 19, 24, 38, 60, 1, 3, 5, 7, 8, 13, 16, 19, 20, 30, 37, 46, 58, 90, 140, 1, 3, 5, 7, 8, $\cdots$ & -\\ \hline
$\Delta(S)$ & 0, 1, 2, 2, 2, 3, 3, 2, 3, 4, 4, 4, 2, 3, 4, 4, 5, 5, 5, 2, 3, 4, 4, 4, 5, 6, 5, 6, 6, 6, 2, 3, 4, 4, 4, 5, 5, 6, 5, 6, 7, 6, 7, 7, 7, 2, 3, 4, 4, 4, $\cdots$ & -\\ \hline
$|P^H(S)|$ & 1, 1, 1, 2, 1, 3, 6, 1, 4, 6, 12, 24, 1, 5, 10, 20, 30, 60, 120, 1, 6, 15, 20, 30, 60, 90, 120, 180, 360, 720, 1, 7, 21, 35, 42, 105, 140, 210, 210, 420, 630, 840, 1260, 2520, 5040, 1, 8, 28, 56, 70, $\cdots$ & A036038\\ \hline
$|V_E(S)|$ &1, 1, 2, 2, 2, 3, 4, 3, 4, 5, 6, 8, 3, 5, 6, 8, 9, 12, 16, 4, 6, 8, 8, 10, 12, 14, 16, 18, 24, 32, 4, 7, 9, 10, 12, 15, 16, 18, 20, 24, 27, 32, 36, 48, 64, 5, 8, 11, 12, 13, $\cdots$ & - \\ \hline
$|V_O(S)|$ & 0, 1, 1, 2, 2, 3, 4, 2, 4, 4, 6, 8, 3, 5, 6, 8, 9, 12, 16, 3, 6, 7, 8, 10, 12, 13, 16, 18, 24, 32, 4, 7, 9, 10, 12, 15, 16, 18, 20, 24, 27, 32, 36, 48, 64, 4, 8, 10, 12, 12, $\cdots$ & -\\ \hline
$|E_E(S)|$ & 0, 1, 1, 2, 2, 4, 6, 2, 5, 6, 10, 16, 3, 7, 9, 14, 17, 26, 40, 3, 8, 11, 12, 18, 23, 27, 36, 42, 64, 96, 4, 10, 14, 16, 22, 30, 32, 38, 46, 58, 68, 88, 102, 152, 224, 4, 11, 16, 19, 20, $\cdots$ & -\\ \hline
$|E_O(S)|$ &0, 0, 1, 2, 1, 3, 6, 2, 5, 6, 10, 16, 2, 6, 8, 14, 16, 26, 40, 3, 8, 11, 12, 18, 23, 27, 36, 42, 64, 96, 3, 9, 13, 15, 22, 29, 32, 37, 46, 58, 67, 88, 102, 152, 224, 4, 11, 16, 19, 20, $\cdots$ & -\\ \hline
$|E^T(S)|$ & 0, 1, 3, 5, 6, 12, 19, 10, 22, 27, 42, 65, 15, 35, 48, 74, 90, 138, 211, 21, 51, 75, 84, 115, 156, 189, 238, 288, 438, 665, 28, 70, 108, 130, 165, 240, 268, 324, 365, 492, 594, 746, 900, 1362, 2059, 36, 92, 147, 186, 200, $\cdots$ & - \\ \hline
$|P^T(S)|$ & 1, 1, 2, 3, 4, 8, 13, 8, 20, 26, 44, 75, 16, 48, 76, 132, 176, 308, 541, 32, 112, 208, 252, 368, 604, 818, 1076, 1460, 2612, 4683, 64, 256, 544, 768, 976, 1888, 2316, 3172, 3408, 5740, 7880, 10404, 14300, 25988, $\cdots$ & - \\ \hline
\end{tabular} }
\end{table}
\begin{table}[btp]\vspace*{-3ex}
\caption[]{divides relation graph invariants in canonical order}
\label{t05}
\centering
{\footnotesize
\begin{tabular}{cp{3.7in}p{0.4in}} \hline
\multicolumn{1}{c}{Invariant} &
\multicolumn{1}{c}{Integer sequence for $S = [0],\cdots, [5,3]$} &
\multicolumn{1}{c}{OEIS} \\ \hline \hline
$LI(S)$ &1, 2, 4, 6, 8, 12, 30, 16, 24, 36, 60, 210, 32, 48, 72, 120, 180, 420, 2310, 64, 96, 144, 240, 216, 360, 840, 900, 1260, 4620, 30030, 128, 192, 288, 480, 432, 720, 1680, 1080, 1800, 2520, 9240, 6300, 13860, 60060, 510510, 256, 384, 576, 960, 864, $\cdots$ & A063008 \\ \hline
$|V(S)|$ & 1, 2, 3, 4, 4, 6, 8, 5, 8, 9, 12, 16, 6, 10, 12, 16, 18, 24, 32, 7, 12, 15, 20, 16, 24, 32, 27, 36, 48, 64, 8, 14, 18, 24, 20, 30, 40, 32, 36, 48, 64, 54, 72, 96, 128, 9, 16, 21, 28, 24, $\cdots$ & - \\ \hline
$|E^H(S)|$ & 0, 1, 2, 4, 3, 7, 12, 4, 10, 12, 20, 32, 5, 13, 17, 28, 33, 52, 80, 6, 16, 22, 36, 24, 46, 72, 54, 84, 128, 192, 7, 19, 27, 44, 31, 59, 92, 64, 75, 116, 176, 135, 204, 304, 448, 8, 22, 32, 52, 38, $\cdots$ & -\\ \hline
$\Omega(S)$ & 0, 1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, $\cdots$ & - \\ \hline
$\omega(S)$ & 0, 1, 1, 2, 1, 2, 3, 1, 2, 2, 3, 4, 1, 2, 2, 3, 3, 4, 5, 1, 2, 2, 3, 2, 3, 4, 3, 4, 5, 6, 1, 2, 2, 3, 2, 3, 4, 3, 3, 4, 5, 4, 5, 6, 7, 1, 2, 2, 3, 2, $\cdots$ & - \\ \hline
$W_v(S)$ & 1, 1, 1, 2, 1, 2, 3, 1, 2, 3, 4, 6, 1, 2, 3, 4, 5, 7, 10, 1, 2, 3, 4, 4, 6, 8, 7, 10, 14, 20, 1, 2, 3, 4, 4, 6, 8, 7, 8, 11, 15, 13, 18, 25, 35, 1, 2, 3, 4, 4, $\cdots$ & -\\ \hline
$W_e(S)$ & 0, 1, 1, 2, 1, 3, 6, 1, 3, 4, 7, 12, 1, 3, 5, 8, 11, 18, 30, 1, 3, 5, 8, 6, 12, 19, 15, 24, 38, 60, 1, 3, 5, 8, 7, 13, 20, 16, 19, 30, 46, 37, 58, 90, 140, 1, 3, 5, 8, 7, $\cdots$ & -\\ \hline
$\Delta(S)$ & 0, 1, 2, 2, 2, 3, 3, 2, 3, 4, 4, 4, 2, 3, 4, 4, 5, 5, 5, 2, 3, 4, 4, 4, 5, 5, 6, 6, 6, 6, 2, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 7, 7, 2, 3, 4, 4, 4, $\cdots$ & -\\ \hline
$|P^H(S)|$ & 1, 1, 1, 2, 1, 3, 6, 1, 4, 6, 12, 24, 1, 5, 10, 20, 30, 60, 120, 1, 6, 15, 30, 20, 60, 120, 90, 180, 360, 720, 1, 7, 21, 42, 35, 105, 210, 140, 210, 420, 840, 630, 1260, 2520, 5040, 1, 8, 28, 56, 56, $\cdots$ & A078760\\ \hline
$|V_E(S)|$ & 1, 1, 2, 2, 2, 3, 4, 3, 4, 5, 6, 8, 3, 5, 6, 8, 9, 12, 16, 4, 6, 8, 10, 8, 12, 16, 14, 18, 24, 32, 4, 7, 9, 12, 10, 15, 20, 16, 18, 24, 32, 27, 36, 48, 64, 5, 8, 11, 14, 12, $\cdots$ & - \\ \hline
$|V_O(S)|$ & 0, 1, 1, 2, 2, 3, 4, 2, 4, 4, 6, 8, 3, 5, 6, 8, 9, 12, 16, 3, 6, 7, 10, 8, 12, 16, 13, 18, 24, 32, 4, 7, 9, 12, 10, 15, 20, 16, 18, 24, 32, 27, 36, 48, 64, 4, 8, 10, 14, 12, $\cdots$ & -\\ \hline
$|E_E(S)|$ & 0, 1, 1, 2, 2, 4, 6, 2, 5, 6, 10, 16, 3, 7, 9, 14, 17, 26, 40, 3, 8, 11, 18, 12, 23, 36, 27, 42, 64, 96, 4, 10, 14, 22, 16, 30, 46, 32, 38, 58, 88, 68, 102, 152, 224, 4, 11, 16, 26, 19, $\cdots$ & -\\ \hline
$|E_O(S)|$ & 0, 0, 1, 2, 1, 3, 6, 2, 5, 6, 10, 16, 2, 6, 8, 14, 16, 26, 40, 3, 8, 11, 18, 12, 23, 36, 27, 42, 64, 96, 3, 9, 13, 22, 15, 29, 46, 32, 37, 58, 88, 67, 102, 152, 224, 4, 11, 16, 26, 19, $\cdots$ & -\\ \hline
$|E^T(S)|$ & 0, 1, 3, 5, 6, 12, 19, 10, 22, 27, 42, 65, 15, 35, 48, 74, 90, 138, 211, 21, 51, 75, 115, 84, 156, 238, 189, 288, 438, 665, 28, 70, 108, 165, 130, 240, 365, 268, 324, 492, 746, 594, 900, 1362, 2059, 36, 92, 147, 224, 186, $\cdots$ & - \\ \hline
$|P^T(S)|$ & 1, 1, 2, 3, 4, 8, 13, 8, 20, 26, 44, 75, 16, 48, 76, 132, 176, 308, 541, 32, 112, 208, 368, 252, 604, 1076, 818, 1460, 2612, 4683, 64, 256, 544, 976, 768, 1888, 3408, 2316, 3172, 5740, 10404, 7880, $\cdots$ & - \\ \hline
\end{tabular} }
\end{table}
\section{Introduction}
\label{intro}
Einstein's equivalence principle (EEP) is at the core of our understanding of gravitation and is among the most important postulates of modern physics. It is under constant scrutiny since a violation of any of its pillars would lead to new physics beyond general relativity (GR) and would mark an important milestone in the search for a theory of everything (TOE). The EEP comprises three separate postulates: the Universality of Free Fall (UFF), Local Lorentz Invariance (LLI) and Local Position Invariance (LPI). Free fall experiments, such as the one described in this letter, test the UFF by comparing the accelerations of two bodies of different internal structure and mass in a gravitational field. This inertial and gravitational mass equality is also known as the weak equivalence principle (WEP). To quantify a possible violation of the UFF, it is common to normalise the acceleration difference between two test masses to the average local gravitational acceleration. This parametrization leads to the E\"otv\"os ratio defined by
$$\eta_{A,B}=2 \frac{g_{A}-g_{B}}{g_{A}+g_{B}},$$
with $g_{A,B}$ being the gravitational acceleration of test masses $A$ and $B$, respectively. The most straightforward way to do such a test is to directly measure the acceleration of two bodies in the same gravitational field. This class of tests is called Galilean, and the most accurate to date was performed by comparing uranium and copper at a level of $10^{-10}$~\cite{Niebauer1987PRL}. The most accurate tests of the UFF were performed by the lunar laser ranging project (LLR), measuring the free fall of the moon and the earth in the gravitational field of the solar system. Since the UFF is a statement about the acting forces, not only Galilean type free fall experiments are performed to test it, but also force balance experiments with torsion balances. Torsion balances and LLR constrain possible violations of UFF to less than $10^{-13}$ in E\"otv\"os ratio~\cite{Williams2004PRL,Schlamminger2008PRL}. No violation has been found so far. Future experiments with classical bodies are striving towards spaceborne platforms, to reduce the influence of external error sources and allow measurements far beyond the current state of the art~\cite{Nobili2012CQG,Touboul2012CQG}.\\
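As a purely numerical illustration of the E\"otv\"os parametrization (a sketch of ours; the acceleration values below are invented for the example, not measured data):
\begin{verbatim}
def eotvos(g_a, g_b):
    # Eotvos ratio: normalized differential acceleration of masses A and B.
    return 2.0 * (g_a - g_b) / (g_a + g_b)

# Illustrative values: a fractional acceleration difference of 1e-9,
# roughly the level of the first atom-interferometric comparison.
print(eotvos(9.81, 9.81 * (1.0 - 1e-9)))  # ~1e-9
\end{verbatim}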
The use of atom interferometry broadens the field of test masses and allows operation in the quantum regime. As such it is a complementary method to experiments with macroscopic bodies and will test aspects formerly inaccessible, such as violations linked to the coherence length of the test mass~\cite{Goklu2008CQG}, the possibility to employ cold atoms as accelerometers and clocks, and the possibility of spin-polarisation~\cite{Tarallo2014PRL}. A first measurement was performed by a device measuring gravity with a fountain of cold caesium atoms and comparing their fall rates to a commercial falling corner cube gravimeter at a level of $7\cdot 10^{-9}$~\cite{Peters1999Nat}. More recent experiments demonstrate tests of the UFF by using atom interferometry with two different quantum objects within the same device but do not yet reach the same precision. They are in part relying on two isotopes of the same species~\cite{Fray04PRL,Bonnin2013PRA,Tarallo2014PRL} but also on isotopes of two different elements~\cite{Schlippert2014PRL}. Tests with two isotopes in particular aim to benefit from the isotopes' similarities, which yield large noise suppression factors intrinsically arising
from the measurement arrangement. New experiments of both types are proposed to exceed the limits of current sensitivities, either on ground~\cite{Dickerson2013PRL,Dimopoulos2007PRL} or in micro-gravity environments~\cite{Rudolph2011MST,Geiger2011natcomm}, including the STE-QUEST space mission~\cite{Aguilera2014CQG}.\\
To employ this variety of test candidates in a precision experiment, a crucial point is the ability to trap both species not only simultaneously but in the same trap, so as to have a well defined overlap of their initial positions and velocities. In this respect we propose quantum degenerate mixtures of rubidium and ytterbium for testing the UFF in a large scale device on ground.\\
In this paper we discuss the unique features of these mixtures that make them an ideal choice as test masses by calculating their violation parameters and comparing them to the ones used in other experiments and recent proposals. Focusing on the miscibility of different isotopes of these two elements, we give a description of the source setup we aim for. Besides this description, we present possible scenarios for performing a UFF test with Bragg-type beam splitters. Along the way, we analyze noise contributions to the measured signal and estimate the performance of a test of the UFF to be $7 \cdot 10^{-13}$ in the E\"otv\"os ratio.
\subsection{Species miscibility and dynamical evolution}
\label{mixtures}
The ability to cool non-magnetic ytterbium isotopes to quantum degeneracy inside the 2\,$\mu$m dipole trap via evaporation without additional effort is a key motivation for our choice. Fermionic isotopes are not considered in this study since degenerate Fermi gases are large and expand at higher rates than BECs, an important consideration for long baseline interferometry. They might nevertheless be interesting for future tests and the device is designed to keep this option open. As table~\ref{yb_isotopes} shows, we are left with five bosonic isotopes, two of which, $^{172}$Yb and $^{176}$Yb, have negative intra-species scattering lengths. They would require a more complex experimental design including the manipulation of an optical Feshbach resonance to reach degeneracy. $^{174}$Yb is the most abundant isotope and has already been condensed~\cite{Yamazaki2010PRL}. Nevertheless, due to the strongly repulsive interaction with $^{87}$Rb (inter-species scattering length of $(880\pm 120)\,\textit{a}_0$), a binary mixture will not be stable against three-body losses.
For all the reasons stated above, we focus our investigations on $^{168}$Yb, $^{170}$Yb and possible mixtures with $^{87}$Rb. Unfortunately, $^{168}$Yb and $^{170}$Yb are the least abundant isotopes, making loading rates significantly lower and constraining the cycle time to the order of tens of seconds unless enriched samples are used. The $^{168}$Yb-$^{87}$Rb mixture features a positive inter-species scattering length of $39.2\pm 1.6\,\textit{a}_0$, meaning that this Yb isotope can be sympathetically cooled by $^{87}$Rb atoms.
As shown in our systematics study in section \ref{requirements_accuracy}, the separation between the two components of a binary mixture has a dramatic effect on the performance of the UFF test. Therefore, quantum miscibility cannot be neglected in this density regime. Indeed, if the interspecies repulsion exceeds the miscibility threshold~\cite{Papp2008PRL}, the two atomic clouds spatially separate to minimize the interaction energy. This immiscible state is a hindrance for optimising the overlap of the centres of mass of the two wave packets fed into the interferometer for comparison. This makes it necessary to carefully check whether the proposed isotopes can be prepared in overlapping pairs of spherical symmetry. We therefore solve a system of 3D-coupled Gross-Pitaevskii equations describing the ground state of the mixture \cite{Ho1996PRL}. The results of these simulations are shown in figure \ref{miscibilityplot}.
\begin{figure}[t]
\centering
\begin{subfigure}[$^{168}$Yb-$^{87}$Rb]{
\includegraphics[width=0.4\textwidth]{miscibilitya.pdf}
\label{miscibilitya}}
\end{subfigure}
\hspace{1cm}
\begin{subfigure}[$^{170}$Yb-$^{87}$Rb]{
\includegraphics[width=0.4\textwidth]{miscibilityb.pdf}
\label{miscibilityb}}
\end{subfigure}
\caption{Density plots of the ground states of the $^{170}$Yb/$^{168}$Yb and $^{87}$Rb mixtures. For each pair mixture, the wave functions are computed solving the Gross-Pitaevskii equation in 3D including the intra-species interactions of the two isotopes and the inter-species one with $^{87}$Rb.
The magnitudes of these interactions are the same as those shown in table~\ref{yb_isotopes}. We assume that each mixture is confined by the same external trap, with frequencies differing solely due to the mass difference. The trapping frequencies are $2\pi\cdot 88$\,Hz for Rb and $2\pi\cdot 67$\,Hz for Yb. In both cases, a symmetric mixture ground state is found, illustrating the miscibility of the two pairs without further tuning of external optical or magnetic parameters (Feshbach resonances, for example).}
\label{miscibilityplot}
\end{figure}
The calculations confirm the miscibility of $^{87}$Rb with the two Yb isotopes considered, making these mixtures suitable candidates for a UFF test. In contrast, the combination of $^{168}$Yb with $^{170}$Yb builds up a symmetric shell structure. The binary states found numerically are susceptible to, and deformable by, external fields (magnetic forces, gravitational sag, etc.) present in the science chamber. Therefore, this mixture is not considered for dynamics and systematics. \\
In order to reduce systematic errors of the atom interferometric comparison and allow for an extended interrogation time, it is crucial to reduce the size of the atomic samples. In the proposed facility, a few seconds of free fall or launch time are used to reach the target accuracy of the UFF test. It is clear that thermal ensembles would reach very large sizes on these time scales. This motivates the use of degenerate matter waves characterized by a slow expansion. The state of the art in slowing down the expansion of BECs improved dramatically with the use of delta-kick cooling (DKC) techniques \cite{Muntinga2013PRL,Dickerson2013PRL}. In recent experiments with a comparable baseline \cite{Kovachy2014Arx}, it was experimentally demonstrated that the expansion energy of a degenerate $^{87}$Rb ensemble could be restricted to only a few tens of pK in 2D. We anticipated such records when proposing space missions with more than 10\,s of free evolution time~\cite{Aguilera2014CQG} of a mixture of $^{87}$Rb / $^{85}$Rb condensates. \\
The DKC manipulation~\cite{Chu1986pro} consists of collimating matter waves by suddenly reducing the frequency of the initial trap holding the atoms and switching it off when all atoms reach the turning points of the trap walls (at t$_p$/4, where t$_p$ is the trap period). The same result is expected by re-pulsing the initial trap after switching it off for some free expansion time. A substantial part of the atoms' kinetic energy is absorbed by this process, leading to a slowed expansion. The analogy with the collimation of light beams has often led to this manipulation being labeled an atomic lens. We anticipate the use of a double lens to match the expansion rates of $^{87}$Rb and $^{170}$Yb. This match is mandatory to mitigate errors related to residual wave front curvatures and relaxes the requirements on the initial collimation and retro-reflection mirror planarity.
\section{Requirements and error budget}
\label{requirements_accuracy}
\begin{table}[tb]
\caption{Contributions of the different error sources to the uncertainty in $\eta$ in different configurations. 1) Requires back correction via knowledge of g, $T_{zz}$, $T_{zzz}$, and $\Omega_{y}$}
\begin{tabular}{p{0.3\textwidth}p{0.18\textwidth}p{0.18\textwidth}p{0.18\textwidth}}\hline
Error source & Initial & Intermediate & Advanced \\
u$_{\eta}$ & in $10^{-12}$ & $10^{-13}$ & $10^{-14}$ \\ \hline
Gravity gradient + position overlap & 0.3 & 0.3 & 0.3 \\
Gravity gradient + velocity overlap & 0.15 & 0.15 & 0.4 \\
Gravity gradient + g, v$_{0}$ & 0.15 & 0.15 & 0.15 \\
Coriolis x & 0.23 & 0.23 & 0.23 \\
Coriolis y & 0.2 & 0.2 & 0.2 \\
Other terms $^{1)}$ & 1 & 1 & 1 \\\hline
Magnetic fields & 0.3 & 1 & 1 \\ \hline
Wave fronts & 5.1 & 5.2 & 5.7 \\ \hline
Mean field & 1.3 & 3.6 & 3.9 \\ \hline
Sum & 5.7 & 6.7 & 7.4 \\ \hline
\end{tabular}
\label{tab:error_budget_table}
\end{table}
This section summarizes the requirements on experimental and environmental parameters to restrict statistical and systematic errors. These requirements are partly relaxed compared to single species gravimetry measurements~\cite{Louchet2011NJP,LeGouet2008APB}, because the simultaneous operation of the dual atom interferometer and certain parameter choices make it possible to engineer suppression ratios for inertial phase shifts and inhomogeneities in the beam splitting wave fronts. A detailed derivation and discussion of error terms for a UFF test with $^{87}$Rb / $^{85}$Rb in the 10\,m tower in Stanford was reported in~\cite{Hogan08arXiv}, and the error budget for a satellite based test can be found in~\cite{Aguilera2014CQG,Schubert13arXiv}. This paper utilizes the same approaches for error assessment and thus focuses on the results. \\
We consider three different scenarios. In the near future, atoms will be dropped from the top chamber, and the scaling factors $k_{Rb}T_{Rb}^{2}=k_{Yb}T_{Yb}^{2}$ will be matched. In this case of matched scaling factors, correlation between the two atom interferometers will then allow the extraction of the differential phase, corresponding to the differential acceleration, via ellipse fitting~\cite{Varoquaux2009NJP,Foster2002OL}. The next intermediate step is to use the same free evolution time $T_{Rb}=T_{Yb}$, which mitigates bias terms $\sim kT^{3}$, $\sim kT^{4}$ but requires a more complex read out scheme. Since the scale factors now differ, the correlated signal will not form an ellipse. Restricting the phase excursion to below 2$\pi$ still allows the extraction of the differential phase via fitting the Lissajous figure~\cite{Chen2014PRA}. However, the expected vibration noise level is above 2$\pi$. As mentioned earlier, this ambiguity may be lifted via correlation with a classical sensor mounted in close proximity to the retro reflection mirror, as demonstrated for an atom interferometer on a plane~\cite{Geiger2011natcomm}, or by adapting the phase extraction algorithms. Finally, the advanced scenario considers atoms launched from the bottom chamber and increased momentum transfers by the beam splitters. A lattice launching technique inside a 10\,m fountain~\cite{Dickerson2013PRL} and high momentum transfer beam splitters~\cite{Chiow2009PRL,Chiow2011PRL} which meet the requirements of this paper were already successfully implemented by other experiments. Requirements for systematics are summed up in table~\ref{tab:error_budget_reqs} and the resulting uncertainties in table~\ref{tab:error_budget_table}. Statistical fluctuations in these parameters are allowed up to the levels reported in table~\ref{tab:statistical_errors_reqs}, which implies the errors in table~\ref{tab:statistical_errors_table}. \\
\begin{table}[tb]
\begin{center}
\caption{Requirements on noise sources for the dual-species atom interferometers in different configurations. All contributions are expected to be uncorrelated. The requirements were set to reach the shot noise limit. Where appropriate, values are given as requirements for a single measurement cycle. (1) Assuming correlation with an additional classical seismometer or advanced data fitting to eliminate the $2\pi$ ambiguity.}
\begin{tabular}{p{0.25\textwidth}p{0.325\textwidth}p{0.325\textwidth}}\hline
Noise source & Near / intermediate & Advanced \\ \hline
Shot noise & \multicolumn{2}{c}{\textit{See tab.~\ref{tab:error_budget_reqs} for N, k, and T.}} \\
Beam splitter & \multicolumn{2}{c}{1\,kHz Lorentzian linewidth} \\
Linear vibrations & $10^{-6}\,\mathrm{m\,s}^{-2}\,\mathrm{Hz}^{-1/2}$ & $10^{-6}\,\mathrm{m\,s}^{-2}\,\mathrm{Hz}^{-1/2}$ $^{(1)}$ \\
Starting velocity & $\sigma_{v}<0.3$\,mm/s & $\sigma_{v}<3.8\,\mu$m/s \\
Overlap & $\sigma_{\Delta r}<10\,\mu$m, & $\sigma_{\Delta r}<0.3\,\mu$m, \\
& $\sigma_{\Delta v}<10\,\mu$m/s & $\sigma_{\Delta v}<0.3\,\mu$m/s \\
Magnetic fields & $\sigma_{\delta B}<0.5$\,mG/m & $\sigma_{\delta B}<45\,\mu$G/m \\
Wave fronts & \multicolumn{2}{c}{$\sigma_{df}=\sigma_{\Delta z}=100\,\mu$m, jitter telescope \& mirror position 1\,mm} \\
& \multicolumn{2}{c}{in z-direction (g)} \\
Mean field & 5\,\% jitter in beam splitting ratio, 20\,\% in atom numbers & 1\,\% jitter in beam splitting ratio, 20\,\% in atom numbers \\
Cycle times & 11\,s & 12.6\,s \\ \hline
\end{tabular}
\label{tab:statistical_errors_reqs}
\end{center}
\end{table}
\begin{table}[tb]
\begin{center}
\caption{Resulting noise contributions following tab.~\ref{tab:statistical_errors_reqs}. All contributions are expected to be uncorrelated. The requirements were set to reach the shot noise limit. All values are given as the noise of a single measurement.}
\begin{tabular}{p{0.25\textwidth}p{0.325\textwidth}p{0.325\textwidth}}\hline
Noise source & Near / intermediate & Advanced \\
& in $10^{-10}$\,m/s$^{2}$ & in $10^{-11}$\,m/s$^{2}$ \\ \hline
Shot noise & 4.8 & 1.8 \\
Beam splitter & 2.8 & 1 \\
Linear vibrations & 2.8 & 1.8 \\
Overlap & 1 & 0.3 \\
Starting velocity & 0.1 & 0.03 \\
Magnetic fields & 0.3 & 0.3 \\
Wave fronts & 0.12 / $<0.01$ & $<0.01$ \\
Mean field & 0.6 & 0.4 \\ \hline
Sum & 6.3 & 2.8 \\
- after 24 h & - 7.1$\cdot$10$^{-2}$ & - 3.4$\cdot$10$^{-2}$ \\ \hline
\end{tabular}
\label{tab:statistical_errors_table}
\end{center}
\end{table} To engineer a high common-mode rejection ratio, the center-of-mass positions, center-of-mass velocities, sizes, and expansion ratios of the two atomic species have to be matched. Coupled to gravity gradients and rotations, differences in the center-of-mass positions and velocities cause spurious phase shifts in the differential signal. Using trapping frequencies of $2\pi\cdot 500\,$Hz implies a gravitational sag of 1\,$\mu$m, which will need to be characterized to $1\,\%$ in the advanced scenario. Due to the lattice launch, we expect a differential velocity of 31\,$\mu$m/s. The corresponding biases will be subtracted from the signal, which imposes the requirement of knowing the gravity gradient to 0.1\,\%. This will be measured with the apparatus itself in a gradiometer operation mode. Existing gradiometer experiments have reached noise floors down to $3\cdot10^{-8}\,$s$^{-2}$\,Hz$^{-1/2}$~\cite{McGuirk2002,Rosi2014Nat}. Furthermore, a counter rotation of the retro reflection mirror will reduce the bias due to the earth's rotation~\cite{Dickerson2013PRL}. Additional errors occur if the atoms sample different parts of the beam splitter wave fronts, in which imperfect collimation or the finite quality of the retro reflection mirror causes inhomogeneities. Commercially available mirrors are rated up to $\lambda/20$ (peak to valley)~\cite{Fichou}, which puts requirements on the maximum allowable expansion rates. Demonstrated performances of lensing $^{87}$Rb atoms to 1\,nK in 3D~\cite{Muntinga2013PRL}, and to 50\,pK in 2D~\cite{Kovachy2014Arx}, are sufficient for the experiment.\\
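As a quick numerical cross-check of the first two figures in this paragraph, the following minimal sketch (not part of the published error budget; the gravity gradient value of $3.1\cdot10^{-6}\,$s$^{-2}$ is an assumed Earth-typical number) reproduces both the 1\,$\mu$m sag and the size of the gravity-gradient entries in table~\ref{tab:error_budget_table}:
\begin{verbatim}
import numpy as np

g = 9.81                     # m/s^2
omega = 2*np.pi*500          # trap frequency from the text, rad/s
sag = g/omega**2             # gravitational sag of a harmonic trap
print(sag)                   # ~1.0e-6 m, the 1 micrometer quoted above

Tzz = 3.1e-6                 # s^-2, assumed vertical gravity gradient
dz = 0.01*sag                # 1% knowledge of the sag (advanced case)
print(Tzz*dz/g)              # ~3e-15 in eta, i.e. 0.3e-14 as budgeted
\end{verbatim}
The resulting bias of a few $10^{-15}$ is consistent with the gravity-gradient entries of order $0.3\cdot10^{-14}$ in the advanced column of table~\ref{tab:error_budget_table}.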
Additional error sources are magnetic fields, which induce a second-order Zeeman shift in the $^{87}$Rb interferometer, and the scattering properties of the individual ensembles and of the mixture. Suppression of magnetic stray fields to residual rms deviations of $\sim$0.8\,mG inside a three-layer 8.8\,m $\mu$-metal shield was demonstrated~\cite{Dickerson2012RSI}. Therefore, additional calibration might be necessary to characterize the magnetic fields to the required level.
\section{Conclusion and outlook}
\label{outlook}
We presented a novel experimental scheme to test the EEP with two different atomic species, namely ytterbium and rubidium, which is in the process of being set up in Hanover in the new infrastructure of the Hanover Institute of Technology. Using this particular test pair for precision inertial sensing with atom interferometry poses some challenges, which are discussed in this letter together with appropriate specific solutions. Based on the experience with this kind of measurement, we provide an assessment of the expected performance of the experiment and of the major systematic effects. These should allow a test of the E\"otv\"os parameter at a level of $7 \cdot 10^{-13}$ in the next few years. The work described in this letter is the first step in a complete investigation of inertial sensing with an alkaline-earth-like element such as ytterbium. In the framework of the collaborative research center geo-Q we will investigate possible applications of this technology for geodesy and further ways to improve ground-based EEP tests beyond the level of tests with devices employing classical test masses. We expect this work to have a major influence on the field of fundamental science by setting new limits on possible violation scenarios. Moreover, the possibility to investigate interferometric techniques on long time scales with a high repetition rate will benefit atom interferometry experiments in microgravity environments or on space platforms.
\section{Choice of test pairs}
\label{parameters}
As already mentioned, the common way of quantifying an experiment testing the UFF is the E\"otv\"os ratio, which scales a measured differential acceleration to the strength of the local gravitational field, comparing any anomalous composition-based forces to the composition-independent force. While this is a reasonable way to quantify the result of the performed measurement, it does not take into account the specific kind of composition dependence in question. By just using the E\"otv\"os parameter as a tool for comparing two tests, an experiment with two spin-polarized samples of the same isotope would not be treated differently from a comparison between hydrogen and anti-hydrogen as proposed in~\cite{Hamilton2014PRL}, although the two are fundamentally different. Taking the specific composition difference into account is part of the interpretation of the data and is strongly dependent on the model used to assess a possible violation theory. The use of extended wave functions for testing the UFF opens the path to formerly unexplored theoretical models which probe the quantum nature of matter and its interaction with space-time~\cite{Goklu2008CQG}. While this is a vast field of study, we will focus on models which allow us a comparison to classical experiments. Specifically, we assess the dilaton scenario~\cite{Damour2012CQG} and a scenario-independent scaling approach based on the standard model extension (SME)~\cite{Hohensee13PRL}. Atom interferometry can provide several new aspects compared to classical test masses, as the atomic test masses are of high isotopic purity and the choice of test masses can be extended beyond the non-magnetic, conducting solids which are typically used in torsion balances. \\
According to the dilaton model~\cite{Damour2012CQG}, a violation may be caused by forces acting differently on neutron and proton number. With the effective charges \Q{A,B} and \QQ{A,B} calculated from the composition of a test particle, a measurement of the E\"otv\"os ratio sets bounds on the parameters $D_1$ and $D_2$ according to the formula
\begin{equation}
\eta_{\text{A,B}}~\widetilde =~ D_1(\Delta Q^{'1}_{\text{A,B}})+D_2(\Delta Q^{'2}_{\text{A,B}})\text{.}
\end{equation}\\
A similar kind of parametrization can be given for the standard model extension~\cite{Hohensee13PRL}
\begin{equation}
\eta_{\text{A,B}}~\widetilde = ~\Delta f_{-n}+\Delta f_{+n}+\bar{\Delta f_{-n}}+\bar{\Delta f_{+n}}
\end{equation}
with the defined violation parameters for matter and anti-matter linked to neutron excess and total baryon number
\begin{equation}
\begin{aligned}
\Delta f_{-n} = f_{\beta^{e+p-n}_{\text{A}}}\beta^{e+p-n} - f_{\beta^{e+p-n}_{\text{B}}}\beta^{e+p-n}\\
\Delta f_{+n} = f_{\beta^{e+p+n}_{\text{A}}}\beta^{e+p+n} - f_{\beta^{e+p+n}_{\text{B}}}\beta^{e+p+n}\\
\bar{\Delta f_{-n}} = f_{\beta^{\bar{e}+\bar{p}-\bar{n}}_{\text{A}}}\beta^{\bar{e}+\bar{p}-\bar{n}} - f_{\beta^{\bar{e}+\bar{p}-\bar{n}}_{\text{B}}}\beta^{\bar{e}+\bar{p}-\bar{n}}\\
\bar{\Delta f_{+n}} = f_{\beta^{\bar{e}+\bar{p}+\bar{n}}_{\text{A}}}\beta^{\bar{e}+\bar{p}+\bar{n}} -f_{\beta^{\bar{e}+\bar{p}+\bar{n}}_{\text{B}}}\beta^{\bar{e}+\bar{p}+\bar{n}}\text{.}
\end{aligned}
\end{equation}
In both models, larger absolute differences in the sensitivity factors of the employed test mass pair give rise to a larger signal in case of a violation of the UFF. Vice versa, an experimental determination of the E\"otv\"os ratio for such a test mass choice constrains the existence of violations more strongly than tests performed with lower sensitivity factors at the same accuracy. Moreover, different test mass pairs probe different linear combinations of suspected violations linked to the neutron excess and the total baryon number of the test masses. In order to unambiguously determine the origin of a violation, a minimum of two test mass pairs needs to be employed. Interestingly, as shown in Ref.~\cite{Mueller2013proc}, even a test performed at a lower accuracy than state-of-the-art tests can further constrain possible violations, when the test masses used are significantly different from previously utilized ones. The sensitivity factors for different choices of test pairs are presented in table~\ref{violation}. For example, in comparison to Be-Ti the combination of ytterbium and rubidium isotopes is a factor of 2 more sensitive to baryon-number-related violations and even three orders of magnitude more sensitive in the parameter $\bar{\Delta f_{-n}}$.
\begin{table*}[h!]
\caption{Comparison of choices for test masses A and B employed in existing and planned tests of the UFF parametrized for violation scenarios with respect to their effective charges \Q{A,B}, \QQ{A,B}~and \fbplus{A,B}, \fbminus{A,B}, \fbbarminus{A,B}, \fbbarplus{A,B} calculated according to \cite{Damour2012CQG} and \cite{Hohensee13PRL}. Nuclide data is used from~\cite{Audi03} and for Ti a natural occurrence of isotopes is assumed~\cite{Laeter09}.}
\begin{tabular}{ c c c | c c c c c c } \hline
\multirow{2}{*}{A}& \multirow{2}{*}{B}&\multirow{2}{*}{Ref.} &\multicolumn{1}{c}{$\Delta$\Q{A,B}}&\multicolumn{1}{c}{$\Delta$\QQ{A,B}}& \multicolumn{1}{c}{$\Delta f_{-n}$} & \multicolumn{1}{c}{$\Delta f_{+n}$} & \multicolumn{1}{c}{$\bar{\Delta f_{-n}}$} & \multicolumn{1}{c}{$\bar{\Delta f_{+n}}$} \\
&&&\multicolumn{1}{c}{$\cdot 10^4$}&\multicolumn{1}{c}{$\cdot 10^4$}&\multicolumn{1}{c}{$\cdot 10^2$}&\multicolumn{1}{c}{$\cdot 10^4$}&\multicolumn{1}{c}{$\cdot 10^5$}&\multicolumn{1}{c}{$\cdot 10^4$}\\
\hline
\textsuperscript{9}Be& Ti&\cite{Schlamminger2008PRL} &-15.46 &-71.20& 1.48 &-4.16 & -0.24 &-16.24\\
Cu &\textsuperscript{238}U&\cite{Niebauer1987PRL} &-19.09 & -28.62 &-7.08& -8.31 &-89.89& -2.38\\
\textsuperscript{6}Li&\textsuperscript{7}Li &\cite{Hohensee2011JMO} &0.79& -10.07 &-7.26& 7.79 &-72.05& 5.82\\
\textsuperscript{85}Rb&\textsuperscript{87}Rb&\cite{Fray04PRL,Fray2009SSR,Bonnin13PRA} &0.84& -0.79 &-1.01& 1.81 &1.04& 1.67\\
\textsuperscript{87}Sr&\textsuperscript{88}Sr &\cite{Tarallo2014PRL} &0.42& -0.39 &-0.49& 2.04 &10.81& 1.85\\
\textsuperscript{39}K&\textsuperscript{87}Rb&\cite{Schlippert2014PRL}& -6.69& -23.69& -6.31& 1.90& -62.30& 0.64\\
\textsuperscript{87}Rb&\textsuperscript{170}Yb&[This work]& -12.87 &-13.92 &-1.36& -8.64 &86.00 &-5.46\\ \hline
\end{tabular}
\label{violation}
\end{table*}
\section{Atom interferometry in a 10~m atomic fountain}
\label{vlbai}
Inertially sensitive interferometry with cold rubidium clouds is well covered by state-of-the-art experiments measuring gravity~\cite{Hauth2013APB,Gillot2014Met}, gravity gradients~\cite{Rosi2014Nat}, and rotations~\cite{Tackmann2012NJP}, as well as fundamental constants~\cite{Bouchendira2011PRL}. Similarly, laser-cooled ytterbium is by now very successfully utilized in optical clocks, especially optical lattice clocks~\cite{Hinkley2013Sci}. A key prerequisite for performing interferometry over long baselines is the preparation of a very narrow velocity distribution, even beyond the ones of typical Bose-Einstein condensates, which was already demonstrated for both species~\cite{Anderson1995Sci,Cornish2000PRL,Takasu2003PRL,Yamazaki2010PRL}. This can be reached by delta-kick cooling~\cite{Muntinga2013PRL,Kovachy2014Arx}. The facility we want to employ for a test of the UFF is the {\it VLBAI-Teststand} located at the newly founded Hanover Institute for Technology (HITec)~\cite{HITEC_WEBPAGE}. This device will provide two experimental chambers for the preparation of atomic ensembles with two independent source chambers for maximum flexibility in the choice of atomic species. A 10\,m ultra-high-vacuum tube with a magnetically shielded region of approximately 9\,m forms the baseline for an extended free fall. Since operation of the equivalence principle test only occurs in the magnetically shielded region, we anticipate a free fall time of 1\,s, and up to 2.6\,s if the atoms are launched. Assuming a measurement with $1\cdot 10^{5}$ ytterbium atoms and $2\cdot 10^{5}$ rubidium atoms produced in 10\,s, this leads to a shot-noise-limited performance of $1.6\cdot 10^{-10}\,\mathrm{Hz}^{-1/2}$ and $6.5\cdot 10^{-12}\,\mathrm{Hz}^{-1/2}$ in the E\"otv\"os ratio, respectively. The second value relies on higher-order beam splitters, as explained in chapter~\ref{requirements_accuracy}.\\
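The quoted shot-noise figure for the drop configuration can be cross-checked with a back-of-the-envelope estimate. The following sketch is illustrative only; the two-photon Bragg order, the pulse separation $T=0.65$\,s, and the 11\,s cycle time are assumptions consistent with the scenarios described above:
\begin{verbatim}
import numpy as np

g = 9.81
lam_rb, lam_yb = 780e-9, 399e-9
k_rb = 2*2*np.pi/lam_rb       # two-photon Bragg wave vectors (assumed)
k_yb = 2*2*np.pi/lam_yb
N_rb, N_yb = 2e5, 1e5
T, Tc = 0.65, 11.0            # pulse separation / cycle time (assumed)

da_rb = 1/np.sqrt(N_rb)/(k_rb*T**2)   # per-shot acceleration noise
da_yb = 1/np.sqrt(N_yb)/(k_yb*T**2)
eta = np.hypot(da_rb, da_yb)/g*np.sqrt(Tc)
print(eta)                    # ~1.4e-10 Hz^-1/2, near the quoted value
\end{verbatim}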
\begin{figure}[t]
\centering
\begin{subfigure}[Mach Zehnder geometry]{
\includegraphics[width=7cm]{machzehnder.png}
\label{machzehnderscheme}}
\end{subfigure}
\hspace{1 cm}
\begin{subfigure}[Setup]{
\includegraphics[width=2cm]{scheme.png}
\label{experimentalscheme}}
\end{subfigure}
\caption{Mode of operation in Mach-Zehnder configuration and sketch of the experimental setup. Shown in \ref{experimentalscheme} is an operation in drop configuration.}
\end{figure}
\subsection{Interferometer sequence}
\label{sequence}
As described earlier, performing a UFF test is equivalent to a simultaneous measurement of the gravitational accelerations $g_{A,B}$ acting on the two test masses. To perform this measurement with atoms, a sequence of light pulses is applied to interrogate them with respect to a common reference mirror, which acts as a phase front reference. The most prominent configuration for inertially sensitive atom interferometry is the Mach-Zehnder-type $\pi/2-\pi-\pi/2$ sequence with a time $T$ of free evolution between each of the pulses. Two different modes of operation can be distinguished: (i) dropping atoms from a source on the top of the device and (ii) launching atoms onto a parabolic trajectory from a source at the bottom of the device. While the first mode is characterized by good control over the initial conditions at free evolution times of $2T=1-1.3$\,s over a baseline of roughly 9\,m, the second one offers the perspective to increase the overall length of the interferometer up to $2T=2.6$\,s. Launching over approximately 10~m was already demonstrated for rubidium in an accelerated optical lattice by coherently transferring a large number of photons at a decent efficiency~\cite{Dickerson2013PRL} and appears realizable for ytterbium with similar parameters as well. Nevertheless, this fountain mode requires a well-controlled launching velocity for both test masses.
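For reference, the leading-order phase of such a Mach-Zehnder interferometer is the textbook result
\begin{equation}
\Delta\phi = k\,g\,T^{2}\,,
\end{equation}
with $k$ the effective wave vector of the beam splitter, so each species measures its own acceleration through the scaling factor $kT^{2}$ that is central to the discussion in the following subsection.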
\subsection{Beam splitting and matching of scaling factors}
A major limitation for inertial measurements with atom interferometers is seismic noise, which, like the acceleration signal, scales with $T^2$ and thus limits the maximum interferometry time for which the signal-to-noise ratio still improves. When using a common mirror for a differential measurement, as planned for this experiment, the seismic noise is common to both interferometers and thus suppressed in the difference signal~\cite{Varoquaux2009NJP,Chen2014PRA}.
To fully benefit from the nonmagnetic properties of the ytterbium $^1S_0$ state and to allow for higher-order beam splitting, we plan to use Bragg-type beam splitters, coupling momentum states of the respective ground states. The off-resonant transitions used are the $^1S_0$-$^1P_1$ transition for ytterbium at 399\,nm and the $5^2S_{1/2}$-$5^2P_{3/2}$ transition for rubidium at 780\,nm.
The suppression factor depends on the match of the scaling factor $kT^2$, with the effective wave vectors $k$, and of the sensitivity function which is itself dependent on the timing of the interferometer pulse sequence. The basic approach is to match the scaling factors by tuning the interferometry time $T$ for each species individually~\cite{Varoquaux2009NJP}. This will lead to a small difference in the frequency response of the two interferometers and will not properly suppress contributions scaling differently with $T$ but allows for a simple data analysis scheme.\\
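For matched scaling factors with identical Bragg orders, the required timing ratio follows directly from the two beam splitting wavelengths given above:
\begin{equation}
\frac{T_{Yb}}{T_{Rb}} = \left(\frac{k_{Rb}}{k_{Yb}}\right)^{1/2} = \left(\frac{\lambda_{Yb}}{\lambda_{Rb}}\right)^{1/2} = \left(\frac{399\,\mathrm{nm}}{780\,\mathrm{nm}}\right)^{1/2} \approx 0.72\,.
\end{equation}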
In the case of mismatched effective wave vectors and identical pulse timing, the phase frequency response is similar between the two species but rescaled according to the respective wave vector. As long as the resulting phase noise is smaller than 1\,rad, the phase information can still be fully recovered by weighting the results with the wave vector ratio. An analysis of this case can be found in~\cite{Chen2014PRA}. Even in the case of noise above $\pi$, most of the information can be recovered at the cost of signal-to-noise ratio. In the case of higher common noise contributions, the resulting 2$\pi$ ambiguity can be fully resolved by operating an additional classical sensor~\cite{Geiger2011natcomm}. Another option is to adapt the model used for data interpretation and recover at least some level of suppression by fitting an appropriate probability distribution.
\section{Concept for a dual species source of rubidium and ytterbium}
\label{source}
Mixtures of rubidium and ytterbium have been studied before in various experiments~\cite{Munchow2011PCC,Baumer2011PRA} but have not yet been used for precision interferometry. The construction of a dual-species source capable of supporting an EEP test experiment faces a variety of challenges, which are studied in the first phase of the experiment described in this work. A source has to fulfill the following requirements:
\begin{itemize}
\item It must be possible to cool the clouds down to quantum degeneracy to fully exploit the long time of free fall achievable in the used infrastructure. Although this requirement is relaxed by employing so-called delta-kick cooling, the efficiency of this process strongly depends on the initial temperature.
\item The initial co-location has to be very well known and controlled. To a certain degree this excludes isotope combinations which are immiscible, as discussed in chapter~\ref{mixtures}.
\item The initial velocity distribution of the two species has to be matched to a high degree to allow for differential suppression of systematic effects, like wave front curvature or residual rotations.
\item To achieve the target performance, $1\cdot 10^{5}$ ytterbium atoms and $2\cdot 10^{5}$ rubidium atoms have to be brought to degeneracy in less than 10\,s. If this performance is not reached, the time needed for integration increases, but this is not prohibitive for the overall experiment.
\end{itemize}
\subsection{MOT Operation} Rubidium has two stable isotopes with mass numbers 87 and 85; both are bosonic and can be brought to degeneracy with common methods~\cite{Anderson1995Sci,Cornish2000PRL}. Since both are naturally abundant and can be cooled similarly well by standard laser cooling techniques, the specific decision for a rubidium isotope will be taken based on the miscibility with the ytterbium isotopes. The widespread method for the preparation of rubidium ensembles is laser cooling on the $5^2S_{1/2}$-$5^2P_{3/2}$ transition with a subsequent optical molasses step to achieve sub-Doppler temperatures down to approximately $2\,\mu$K. With a combination of a multi-layer atom chip, allowing for an efficient transfer of laser-cooled atoms to a magnetic trap, and a 2D$^+$-MOT, quantum-degenerate ensembles with $4\cdot10^{5}$ rubidium atoms were produced in 1.6\,s~\cite{Rudolph2015arXiv}.\\
With in total five bosonic and two fermionic stable isotopes, all of which have been brought to quantum degeneracy before~\cite{Takasu2003PRL,Yamazaki2010PRL}, ytterbium offers a variety of choices for test masses, as seen in table~\ref{yb_isotopes}. The bosonic isotopes have no hyperfine splitting and therefore a very low magnetic sensitivity compared to rubidium, for example~\cite{Taichenachev2006PRL}. While this is beneficial for counteracting systematic effects, the missing possibility to drive Raman transitions between hyperfine states limits the implementation scenarios. Ytterbium, an alkaline-earth-like element, offers the possibility to perform narrow-line cooling on the intercombination transition $^1S_0$-$^3P_1$ with a Doppler temperature of $T_D=4.4\,\mu$K. Due to its low vapor pressure, one has to face the challenge of pre-cooling the hot source for efficient MOT operation. The common method is the use of a Zeeman slower with a transversal cooling stage on the singlet transition $^1S_0$-$^1P_1$~\cite{Miranda2012PRA}. Another, comparably new option is the use of a 2D-MOT on the same transition~\cite{Dorscher2013RSI}. Experimentally, loading rates of $6\cdot 10^7$ $^{174}$Yb atoms per second have been achieved with both methods. The 2D-MOT seems preferable over the Zeeman-slower setup in terms of vacuum quality in the main chamber, due to the use of differential pumping stages, and offers higher scalability with available laser power at 398.9\,nm.
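The Doppler temperature quoted above follows directly from the intercombination-line width. A minimal sketch (the linewidth $\Gamma = 2\pi\cdot182$\,kHz is a literature value not stated in this paper):
\begin{verbatim}
import numpy as np

hbar, kB = 1.0546e-34, 1.3807e-23
Gamma = 2*np.pi*182e3        # 1S0-3P1 linewidth (literature value)
TD = hbar*Gamma/(2*kB)       # Doppler temperature hbar*Gamma/(2 kB)
print(TD)                    # ~4.4e-6 K, as quoted in the text
\end{verbatim}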
\begin{table*}[h!]
\caption{Stable isotopes of ytterbium and their relative natural abundance~\cite{Lide2008CRC} in $\%$, character of spin-statistic, intra-species scattering length~\cite{Kitagawa2008}, inter-species scattering length with $^{87}$Rb in $a_0$~\cite{Borkowski2013arXiv}, as well as isotope-shift relative to $^{174}$Yb of the relevant cooling transitions in MHz.}
\begin{tabular}{ c | c c c c c c c } \hline
Isotope & Abund. & Spin st. &$a_{Yb/Yb}$ & $a_{Yb/Rb}$ & $J$ & $^1S_0$-$^3P_1$ & $^1S_0$-$^1P_1$\\
\hline
$^{168}$Yb & 0.13 & boson & $252 \pm 3$ & $39.2 \pm 1.6$ & & 3655 & 1887.4\\
$^{170}$Yb & 3.05 & boson & $64 \pm 2$ & $-11.5 \pm 2.5$ & & 2287 & 1192.4\\
$^{171}$Yb & 14.3 & fermion & $-2.8 \pm 3.6$& $58.9 \pm 4.4$ &(1/2-1/2)& -2132& 1153.7\\
& & & & &(1/2-3/2)& 3805 & 832.4\\
$^{172}$Yb & 21.9 & boson & $-599 \pm 64$ & $-161 \pm 11$ && 1000 & 1887.4\\
$^{173}$Yb & 16.1 & fermion & $199 \pm 2$ & $626 \pm 88$ &(5/2-5/2)& 2312 & -253.4 \\
& & & & &(5/2-7/2)& -2386 & 588\\
& & & & &(5/2-3/2)& 3806 & 516\\
$^{174}$Yb & 31.8 & boson & $105 \pm 2$ & $880 \pm 120$ & & 0 & 0\\
$^{176}$Yb & 12.7 & boson & $-24 \pm 4$ & $216.8 \pm 4.7$ & & -955 & -509.3\\ \hline
\end{tabular}
\label{yb_isotopes}
\end{table*}
\subsection{Trapping and evaporation} Since we aim for a combined trap of both species, magnetic traps are not an option for the magnetically untrappable ytterbium. As a result, a far-detuned optical dipole trap in the mid-infrared will be used as a common trap. Figure~\ref{polarsim} shows the wavelength-dependent scalar polarisability of the states relevant for the intercombination-line MOT of ytterbium. The differential polarisability shows mainly two remarkable results: ytterbium is not trapped at $1\,\mu$m, and there is a zero-crossing close to $1.5\,\mu$m that would potentially allow for an AC-Stark-shift-compensated dipole trap. A more conservative and less demanding solution is the use of a dipole trap beyond the zero-crossing, for example at 1960\,nm. To compensate the AC-Stark shift dispersion over the cloud, which would be large due to the narrow linewidth of the transition, a low-intensity blue-detuned compensation beam can be used~\cite{Kaplan2002PRA} with a detuning of $\Delta_{\text{comp.}} = 2\pi\cdot1$\,GHz and a power of $I_{\text{comp.}} = 8.84$\,mW. Bose-Einstein condensation in a single-beam dipole trap at this wavelength for $^{87}$Rb was already shown in a weak hybrid trap configuration in~\cite{Zaiser11PRA}. Therefore, a 1960\,nm trap appears to be an ideal solution, and lasers with output powers up to 100\,W are available.
\begin{figure}
\centering
\begin{subfigure}[Scalar polarizability $^1S_0$]{
\includegraphics[width=0.45\textwidth]{polarizability_Yb.pdf}
\label{scPol-S}}
\end{subfigure}
\begin{subfigure}[Scalar polarizability $^3P_1$]{
\includegraphics[width=0.45\textwidth]{polarizability_YbP.pdf}
\label{scPol-P}}
\end{subfigure}
\begin{subfigure}[Differential scalar polarizability $^1S_0$-$^3P_1$]{
\includegraphics[width=0.45\textwidth]{diffpolarizability_Yb.pdf}
\label{diffPo-SP}}
\end{subfigure}
\begin{subfigure}[Differential AC-Stark shift]{
\includegraphics[width=0.45\textwidth]{ODT.pdf}
\label{AC-Stark}}
\end{subfigure}
\caption{Scalar polarisability and effective AC-Stark shift. The upper panels \ref{scPol-S} and \ref{scPol-P} show the laser-wavelength-dependent scalar polarisability of the states in the transition used for intercombination-line cooling. The lower panels show in \ref{diffPo-SP} the differential polarisability and in \ref{AC-Stark} the resulting differential AC-Stark shift imposed on the intercombination line by a 1960\,nm ODT with 100\,W and a 50\,$\mu$m waist, using an additional 8.84\,mW dressing beam detuned 1\,GHz to the blue of the transition.}
\label{polarsim}
\end{figure}
\subsection{Dual species loading sequence} The cycle time of the experiment will be limited by the smaller loading rate of ytterbium, even with the use of a 2D$^{+}$-MOT and the expected increase in flux due to the use of higher laser power. In addition, the $^1S_0$-$^1P_1$ transition cannot be driven together with the rubidium cooling transition $5^2S_{1/2}$-$5^2P_{3/2}$, since the ionization energy of the upper state of rubidium is 2.59\,eV, which corresponds to 478.7\,nm. Therefore, the dual-species sequence will first complete the loading steps for cooling and trapping ytterbium in the dipole trap before we start the fast loading of the rubidium MOT. To avoid losses due to collisions at this stage of the experiment, it is possible to shift the center of the rubidium MOT against the dipole trap by adjusting the magnetic field gradient before both species are co-located inside the dipole trap.
\section*{Acknowledgements}
This work is supported by the DFG in the scope of the SFB geo-Q and will facilitate the major research instrumentation {\it VLBAI-Teststand} applied for at the DFG. The authors would also like to acknowledge the support of the German Space Agency (DLR) with funds provided by the Federal Ministry for Economic Affairs and Energy (BMWi) due to an enactment of the German Bundestag under Grant No. DLR 50WM1131-1137 (project QUANTUS-III). We would like to thank M. Kasevich, J. Hogan and A. Wanner for their help during the planning of the {\it VLBAI-Teststand}. We thank H. Mueller, M. Hohensee, W. Schleich and A. Roura for support concerning the calculation and interpretation of the violation parameters. We thank C. Klempt for fruitful discussions and L. Richardson, P. Berg and E. Wodey for proofreading this document. \\
\section{Introduction}
Black holes are believed to play a key role in a number of highly energetic astrophysical phenomena, from active galactic nuclei to gamma-ray bursts to ultraluminous X-ray binaries.
The extraordinary amounts of energy released during such events may have two different origins: the gravitational potential energy of matter falling toward an existing or forming black hole during accretion or a gravitational collapse, or the energy of the black hole itself. Indeed, a remarkable prediction of general relativity is that a spinning black hole has free energy available to be tapped. How this occurs has fundamental implications for our understanding of high energy astrophysical phenomena powered by black holes.
It was shown by Christodoulou \cite{christodoulou70} that for a spinning (Kerr) black hole having mass $M$ and dimensionless spin parameter $a$, a portion of the black hole mass is ``irreducible'',
\begin{equation}
M_{\rm irr} = M \sqrt{\frac{1}{2} \left( {1+\sqrt{1-a^2}} \right)} \, .
\end{equation}
The irreducible mass has a one-to-one connection with the surface area of the event horizon, $A_H =4\pi(r_H^2+a^2) = 16 \pi M_{\rm irr}^2$, which is proportional to the black hole entropy $S_{\rm BH} = ({k_B c^3}/{4 G \hbar}) A_H$ \cite{bekenstein72,bekenstein73,hawking74,hawking75}, where $k_B$, $G$, $\hbar$, and $c$ denote, respectively, the Boltzmann constant, the gravitational constant, the reduced Planck constant, and the speed of light in vacuum. Thus, the maximum amount of energy that can be extracted from a black hole without violating the second law of thermodynamics is the rotational energy
\begin{equation}
E_{\rm rot} = \left[ {1-\sqrt{\frac{1}{2} \left( {1+\sqrt{1-a^2}} \right)}} \right] M c^2 \, .
\end{equation}
For a maximally rotating black hole ($a =1$), this gives $E_{\rm rot} = (1-1/\sqrt{2}) M c^2 \simeq 0.29 M c^2$. Therefore, a substantial fraction of black hole energy can, in principle, be extracted \cite{note1}.
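A minimal numerical sketch of the extractable fraction $E_{\rm rot}/Mc^2$ as a function of spin (illustrative only; the function name is ours):
\begin{verbatim}
import numpy as np

def erot_fraction(a):
    """Extractable fraction E_rot/(M c^2) from the expression above."""
    return 1 - np.sqrt(0.5*(1 + np.sqrt(1 - a**2)))

print(erot_fraction(1.0))    # ~0.293, the maximal value quoted above
print(erot_fraction(0.9))    # ~0.153, already sizable at a = 0.9
\end{verbatim}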
The possibility of extracting black hole rotational energy was first realized by Penrose \cite{penrose69}, who envisioned a thought experiment in which particle fission ($0 \rightarrow 1 + 2$) occurs in the ergosphere surrounding a rotating black hole. If the angular momentum of particle $1$ is opposite to that of the black hole and is sufficiently high, then the energy of particle $1$, as viewed from infinity, may be negative. Hence, since the total energy at infinity is conserved, the energy of particle $2$ as measured from infinity will be larger than that of the initial particle $0$. When the particle with negative energy at infinity ($1$) falls into the black hole's event horizon, the total energy of the black hole decreases. Therefore, the energy of the escaping particle $2$, which is higher than that of the original particle $0$, is increased at the expense of the rotational energy of the black hole.
Although the Penrose process indicates that it is possible to extract energy from a black hole, it is believed to be impractical in astrophysical scenarios. Indeed, energy extraction by means of the Penrose process requires that the two newborn particles separate with a relative velocity greater than half the speed of light \cite{Bardeen_1972,wald74apj}, and the expected rate of such events is too low to extract a sizable amount of the black hole's rotational energy. On the other hand, Penrose's suggestion sparked interest in finding alternative mechanisms for extracting black hole rotational energy, such as superradiant scattering \cite{TP74}, the collisional Penrose process \cite{Piran75}, the Blandford-Znajek process \cite{BZ77} and the magnetohydrodynamic (MHD) Penrose process \cite{Takahashi90}.
Among them, the Blandford-Znajek process, in which energy is extracted electromagnetically through the magnetic field supported by an accretion disk around the black hole, is thought to be the leading mechanism for powering the relativistic jets of active galactic nuclei (AGNs) \citep[e.g.][]{McKGamm04,Hawley06,komissarov07,Tchekho11} and gamma-ray bursts (GRBs) \citep[e.g.][]{HKLee2000,Tchekho08,komissarov09}.
While different mechanisms of energy extraction have been carefully analyzed over the years, the possibility of extracting black hole rotational energy as a result of rapid reconnection of magnetic field lines has been generally overlooked. An exploratory study conducted by Koide and Arai \cite{KA} analyzed the feasibility conditions for energy extraction by means of the outflow jets produced in a laminar reconnection configuration with a purely toroidal magnetic field. In this simplified scenario, they suggested that relativistic reconnection was required for energy extraction, but the extracted power and the efficiency of the reconnection process were not evaluated. This is necessary for determining whether magnetic reconnection can play a significant role in the extraction of black hole energy.
The recent advent of general-relativistic kinetic simulations of black hole magnetospheres \cite{parfrey} do indeed suggest that particles accelerated during magnetic reconnection may spread onto negative energy-at-infinity trajectories, and that the energy extraction via negative-energy particles could be comparable to the energy extracted through the Blandford-Znajek process.
In this paper we provide an analytical analysis of black hole energy extraction via fast magnetic reconnection as a function of the key parameters that regulate the process: black hole spin, reconnection location, orientation of the reconnecting magnetic field, and plasma magnetization.
Our main objective is to evaluate the viability, feasibility conditions, and efficiency of magnetic reconnection as a black hole energy extraction mechanism.
In Section \ref{section2} we delineate how we envision the extraction of black hole rotational energy by means of fast magnetic reconnection, and we derive the conditions under which such energy extraction occurs. In Section \ref{section3} we show that magnetic reconnection is a viable mechanism of energy extraction for a substantial region of the parameter space.
In Section \ref{section4} we quantify the rate of energy extraction and the reconnection efficiency in order to evaluate whether magnetic reconnection is an effective energy extraction mechanism for astrophysical purposes. We further compare the power extracted by fast magnetic reconnection with the power that can be extracted through the Blandford-Znajek mechanism. Finally, we summarize our results in Section \ref{section5}.
\section{Energy Extraction by Magnetic Reconnection} \label{section2}
The possibility of extracting black hole rotational energy via negative-energy particles requires magnetic reconnection to take place in the ergosphere of the spinning black hole since the static limit is the boundary of the region containing negative-energy orbits. Magnetic reconnection inside the ergosphere is expected to occur routinely for fast rotating black holes. Indeed, a configuration with antiparallel magnetic field lines that is prone to magnetic reconnection is caused naturally by the frame-dragging effect of a rapidly spinning black hole.
In this paper, we envision the situation illustrated in Fig. \ref{fig1}, where the fast rotation of the black hole leads to antiparallel magnetic field lines adjacent to the equatorial plane.
This scenario is also consistent with numerical simulations of rapidly spinning black holes \citep[e.g.][]{parfrey,komissarov05,East18,ripperda20}.
The change in magnetic field direction at the equatorial plane produces an equatorial current sheet.
This current sheet forms dynamically and is destroyed by the plasmoid instability (permitted by non-ideal magnetohydrodynamic effects such as thermal-inertial effects, pressure agyrotropy, or electric resistivity) when the current sheet exceeds a critical aspect ratio \cite{Comisso_2016,UzdLou_2016,Comisso_2017}. The formation of plasmoids/flux ropes (see circular sections in the zoomed-in region of Fig. \ref{fig1}) drives fast magnetic reconnection \citep[e.g.][]{daughton09,bhatta09}, which rapidly converts the available magnetic energy into plasma particle energy.
Eventually, the plasma is expelled out of the reconnection layer and the magnetic tension that drives the plasma outflow relaxes. The field lines are then stretched again by the frame-dragging effect and a current layer prone to fast plasmoid-mediated reconnection forms again. This leads to reconnecting current sheets that form rapidly and intermittently.
\begin{figure}[]
\begin{center}
\includegraphics[width=8.5cm]{Luca_BlackHole_100820.pdf}
\end{center}
\caption{Schematic illustration of the mechanism of energy extraction from a rotating black hole by magnetic reconnection in the black hole ergosphere.
A configuration with antiparallel magnetic field lines adjacent to the equatorial plane is favored by the frame-dragging effect of the rapidly spinning black hole (panels (a) and (b) portray meridional and equatorial views, respectively), and the resulting equatorial current sheet is prone to fast plasmoid-mediated magnetic reconnection (see circular structures in the zoomed-in region \cite{noteplasmoids3D}).
Magnetic reconnection in the plasma that rotates in the equatorial plane extracts black hole energy if the decelerated plasma that is swallowed by the black hole has negative energy as viewed from infinity, while the accelerated plasma with a component in the same direction of the black hole rotation escapes to infinity.
The outer boundary (static limit) of the ergosphere is indicated by the short-dashed lines in both panels. In panel (b), long-dashed and solid lines indicate magnetic field lines below and above the equatorial plane, respectively. Finally, the dashed lines in the zoomed region indicate the two magnetic reconnection separatrices intersecting at the dominant magnetic reconnection $X$-point.}
\label{fig1}
\end{figure}
Magnetic reconnection in the plasma that rotates around the black hole has the effect of accelerating part of the plasma and decelerating another part. If the decelerated plasma has negative energy at infinity and the accelerated one has energy at infinity larger than its rest mass and thermal energies (see the example regions in orange in Fig. \ref{fig1}(b)), then the plasma that escapes to infinity acquires energy at the expense of the black hole rotational energy when the negative-energy particles are swallowed by the black hole, as in the standard Penrose process \cite{penrose69}. Therefore, we want to examine when magnetic reconnection in the ergosphere of the black hole redistributes the angular momentum of the plasma in such a way as to satisfy these conditions. Furthermore, we want to evaluate whether the extraction of black hole rotational energy via fast plasmoid-mediated reconnection can constitute a major energy release channel.
We describe the spacetime around the rotating black hole by using the Kerr metric in Boyer-Lindquist coordinates $x^\mu=(t, r, \theta, \phi)$, where $r$ is the radial distance, $\theta$ is the polar angle, and $\phi$ is the azimuthal angle. The Kerr metric can be expressed in terms of the square of the line element $d{s^2} = g_{\mu \nu} d{x^\mu}d{x^\nu}$ as \citep[e.g.][]{MTW}
\begin{equation} \label{BL_coord}
d{s^2} = g_{tt} d{t^2} + 2 g_{t\phi} dt d\phi + g_{\phi\phi} d{\phi^2} + g_{rr} d{r^2} + g_{\theta\theta} d{\theta^2} \, ,
\end{equation}
where the non-zero components of the metric are given by
\begin{equation}
g_{tt} = \frac{2 Mr}{\Sigma} -1 \, , \; \; \; g_{t\phi} = - \frac{2 M^2 a r \sin^2 \theta}{\Sigma} \, ,
\end{equation}
\begin{equation}
g_{\phi\phi} = \frac{A}{\Sigma} \sin^2 \theta \, , \quad g_{rr} = \frac{\Sigma}{\Delta} \, , \quad g_{\theta\theta} = \Sigma \, ,
\end{equation}
with
\begin{equation}
\Sigma = {r^2} + {\left( {aM} \right)^2}{\cos ^2}\theta \, ,
\end{equation}
\begin{equation}
\Delta = {r^2} - 2Mr + {\left( {aM} \right)^2} \, ,
\end{equation}
\begin{equation}
A = \big[ {{r^2} + {{\left( {a M} \right)}^2}} \big]^2 - {\left( {aM} \right)^2} \Delta \, {\sin ^2}\theta \, .
\end{equation}
The only two parameters that appear in the metric are the black hole mass, $M$, and the black hole dimensionless spin, $0 \leq a \leq 1$. Here, and in all subsequent expressions, we use geometrized units with $G=c=1$.
The inner boundary of the ergosphere of the Kerr black hole, which coincides with the outer event horizon, is given by the radial distance
\begin{equation}\label{outerevent}
r_{H}=M+ M ({1 - a^2})^{1/2} \, ,
\end{equation}
while the outer boundary (static limit) is given by
\begin{equation}\label{outerergo}
r_{E} = M+ M ({1- a^2 \cos^2 \theta})^{1/2} \, ,
\end{equation}
which yields $r_{E} =2M $ at the equatorial plane $\theta=\pi/2$.
In this paper we make the simplifying assumption that magnetic reconnection happens in the bulk plasma that rotates circularly around the black hole at the equatorial plane.
This corresponds to a Keplerian angular velocity
\begin{equation}\label{keplerOmega}
\Omega_K= \pm \frac{M^{1/2}}{r^{3/2} \pm a M^{3/2}} \, ,
\end{equation}
as seen by an observer at infinity. The upper sign refers to corotating orbits, while the lower sign applies to counter-rotating orbits. Circular orbits can exist from $r \rightarrow \infty$ down to the limiting circular photon orbit, whose radius is given by
\begin{equation}\label{circularorbitphotonrad}
r_{\rm ph}=2M \left[ 1+\cos\left(\frac{2}{3} \arccos(\mp a) \right)\right] \, .
\end{equation}
For a maximally rotating black hole ($a =1$), one has $r_{\rm ph}=M$ (corotating orbit) or $r_{\rm ph}=4M$ (counter-rotating orbit).
However, for $r > r_{\rm ph}$ not all circular orbits are stable. Non-spinning test particles can stably orbit the black hole if they are at distances larger than or equal to the innermost stable circular orbit \cite{Bardeen_1972}
\begin{equation}\label{rmargbsc}
r_{\rm isco}=M\left[3+Z_2 \mp {\Big( {(3-Z_1)(3+Z_1+2Z_2)} \Big)^{1/2}} \right] \, ,
\end{equation}
where
\begin{equation}\label{}
Z_1=1+(1-a^2)^{1/3}[(1+a)^{1/3}+(1-a)^{1/3}] \, ,
\end{equation}
\begin{equation}\label{}
Z_2=(3a^2+Z_1^2)^{1/2} \, .
\end{equation}
For a maximally rotating black hole $r_{\rm isco}=M$ (corotating orbit) or $r_{\rm isco}=9M$ (counter-rotating orbit). Here we focus on corotating orbits since we are interested in magnetic reconnection occurring inside the ergosphere.
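For reference, the characteristic radii of Eqs. \eqref{outerevent}-\eqref{rmargbsc} are straightforward to evaluate numerically. The following minimal sketch (geometrized units with $M=1$; function names are ours) reproduces the limiting values quoted above:
\begin{verbatim}
import numpy as np

M = 1.0  # geometrized units

def r_horizon(a):
    return M*(1 + np.sqrt(1 - a**2))

def r_ergosphere(a, theta=np.pi/2):
    return M*(1 + np.sqrt(1 - a**2*np.cos(theta)**2))

def r_photon(a, corotating=True):
    s = -a if corotating else a      # upper/lower sign choice
    return 2*M*(1 + np.cos((2/3)*np.arccos(s)))

def r_isco(a, corotating=True):
    Z1 = 1 + (1 - a**2)**(1/3)*((1 + a)**(1/3) + (1 - a)**(1/3))
    Z2 = np.sqrt(3*a**2 + Z1**2)
    s = -1 if corotating else 1
    return M*(3 + Z2 + s*np.sqrt((3 - Z1)*(3 + Z1 + 2*Z2)))

# a -> 1: r_H, r_ph, r_isco -> M (corotating); r_isco -> 9M (counter)
\end{verbatim}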
We also assume that the plasma acceleration through magnetic reconnection is localized in a small region (close to the dominant reconnection $X$-point) compared to the size of the black hole ergosphere.
In what follows, it is convenient to analyze the plasma energy density in a locally nonrotating frame, the so called ``zero-angular-momentum-observer'' (ZAMO) frame \cite{Bardeen_1972}. In the ZAMO frame, the square of the line element is given by $d{s^2} = - d{{\hat t}^2} + \sum\nolimits_{i=1}^3 {{{(d{{\hat x}^i})}^2}} = {\eta _{\mu \nu }}d{{\hat x}^\mu }d{{\hat x}^\nu }$, where
\begin{equation}
d\hat t = \alpha \, dt \, , \quad \; d{{\hat x}^i} = \sqrt{g_{ii}} \, d{x^i} - \alpha {\beta^i}dt \,
\end{equation}
(no implicit summation is assumed over $i$), with $\alpha$ indicating the lapse function
\begin{equation}
\alpha= \left( { -g_{tt} + \frac{g_{\phi t}^2}{g_{\phi\phi}} } \right)^{1/2} = \left(\frac{\Delta \Sigma}{A} \right)^{1/2} \,
\end{equation}
and $\beta^i$ indicating the shift vector $(0, 0, \beta^\phi)$, with
\begin{equation}
\beta^\phi = \frac{\sqrt{g_{\phi\phi}} \, \omega^\phi}{\alpha} = \frac{\omega^\phi}{\alpha} \left(\frac{A}{\Sigma} \right)^{1/2} \sin\theta \,
\end{equation}
and $\omega^\phi = - g_{\phi t}/g_{\phi\phi} = 2 M^2 a r/A$ being the angular velocity of the frame dragging. An advantage of this reference frame is that equations become intuitive since the spacetime is locally Minkowskian for observers in this frame. Hereinafter, quantities observed in the ZAMO frame are denoted with hats.
Vectors in the ZAMO frame are related to the vectors in the Boyer-Lindquist coordinates as $\hat b^{0}=\alpha b^{0}$ and $\hat b^{i}= \sqrt{g_{ii}} \, b^{i} - \alpha\beta^i b^{0}$ for the contravariant components, while $\hat b_{0}=b_{0}/\alpha + \sum\nolimits_{i=1}^3 {(\beta^i/\sqrt{g_{ii}}) \, b_i} $ and $\hat b_i= b_i/\sqrt{g_{ii}}$ for the covariant components.
We evaluate the capability of magnetic reconnection to extract black hole energy by examining the conditions for the formation of negative energy at infinity and escaping to infinity of the plasma accelerated/decelerated by the reconnection process in the ergosphere (in this work we do not address the origin of the plasma properties but rather assume a plasma with a given particle density and pressure). From the energy-momentum tensor in the one-fluid approximation,
\begin{equation}
T^{\mu \nu} = p g^{\mu \nu} + w U^{\mu} U^{\nu} + {F^\mu}_{\delta} F^{\nu \delta} - \frac{1}{4} g^{\mu \nu} F^{\rho \delta} F_{\rho \delta} \, ,
\end{equation}
where, $p$, $w$, $U^{\mu}$, and $F^{\mu \nu}$ are the proper plasma pressure, enthalpy density, four-velocity, and electromagnetic field tensor, respectively, one has the ``energy-at-infinity'' density $e^\infty = - \alpha g_{\mu 0} T^{\mu 0}$. Therefore, the energy-at-infinity density is given by
\begin{equation}
e^\infty = \alpha {\hat e} + {\alpha \beta^\phi {\hat P}^\phi} \, ,
\label{einfty}
\end{equation}
where
\begin{equation}
{\hat e} = w \hat\gamma^2 -p + \frac{{\hat B}^2 + {\hat E}^2}{2} \,
\end{equation}
is the total energy density and
\begin{equation}
{\hat P}^\phi = w \hat\gamma^2 {\hat v}^\phi + {\big({\bm{\hat{B}}} \times {\bm{\hat{E}}}\big)^\phi} \,
\end{equation}
is the azimuthal component of the momentum density, with $\hat\gamma = \hat U^0 = \big[ 1 - \sum\nolimits_{i=1}^3 {{{({{\hat v}^i})}^2}} \big]^{-1/2}$, $\hat B^i = \epsilon^{ijk} \hat F_{jk}/2$, and $\hat E^i = \eta^{ij} \hat F_{j0} = \hat F_{i0}$.
The energy-at-infinity density can be conveniently separated into hydrodynamic and electromagnetic components as $e^\infty = e^\infty_{\rm hyd} + e^\infty_{\rm em}$, where
\begin{equation}\label{enerhyd}
e^\infty_{\rm hyd} = \alpha {\hat e}_{\rm hyd} + {\alpha \beta^\phi w \hat\gamma^2 {\hat v}^\phi } \,
\end{equation}
is the hydrodynamic energy-at-infinity density and
\begin{equation}\label{enerem}
e^\infty_{\rm em} = \alpha {\hat e}_{\rm em} + {\alpha \beta^\phi {\big({\bm{\hat{B}}} \times {\bm{\hat{E}}}\big)_\phi} } \,
\end{equation}
is the electromagnetic energy-at-infinity density, with ${\hat e}_{\rm hyd} = w \hat\gamma^2 - p$ and ${\hat e}_{\rm em} = ({\hat B}^2 + {\hat E}^2)/{2} $ indicating the hydrodynamic and electromagnetic energy densities observed in the ZAMO frame.
In this paper we assume an efficient magnetic reconnection process that converts most of the magnetic energy into kinetic energy, so that the electromagnetic energy at infinity is negligible with respect to the hydrodynamic energy at infinity. Then, from Eq. \eqref{enerhyd}, we can evaluate the energy-at-infinity density of the expelled plasma using the approximation that the plasma element is incompressible and adiabatic, which leads to \cite{KA}
\begin{equation}\label{enerhydincompress}
e^\infty_{\rm hyd} = \alpha \Big[ (\hat\gamma + \beta^\phi \hat\gamma {\hat v}^\phi)w - \frac{p}{\hat\gamma} \Big] \, .
\end{equation}
To analyze the localized reconnection process, we introduce the local rest frame $x^{\mu \prime}=(x^{0 \prime}, x^{1 \prime}, x^{2 \prime}, x^{3 \prime})$ of the bulk plasma that rotates with Keplerian angular velocity $\Omega_K$ in the equatorial plane. We choose the frame $x^{\mu \prime}$ in such a way that the direction of $x^{1 \prime}$ is parallel to the radial direction $x^{1}=r$ and the direction of $x^{3 \prime}$ is parallel to the azimuthal direction $x^{3}=\phi$. The orientation of the reconnecting magnetic field lines is kept arbitrary, as it ultimately depends on the large-scale magnetic field configuration and the black hole spin, and is also time dependent. Indeed, the complex nonlinear dynamics around the spinning black hole induces magnetic field line stretching, with magnetic reconnection causing a topological change of the macroscopic magnetic field configuration on short time scales.
Therefore, here we introduce the orientation angle
\begin{equation}
\xi=\arctan \big({{v}_{\rm out}^{1 \prime}}/{{v}_{\rm out}^{3 \prime}} \big) \, ,
\label{anglexi}
\end{equation}
where ${{v}_{\rm out}^{1 \prime}}$ and ${{v}_{\rm out}^{3 \prime}}$ are the radial and azimuthal components of the outward-directed plasma in the frame $x^{\mu \prime}$. Accordingly, the plasma escaping from the reconnection layer has velocities ${\bm{v}}_{\pm}^{\prime}=v_{\rm out} (\pm \cos\xi\, {\bm{e}}_3^{\prime} \mp \sin\xi\, {\bm{e}}_1^{\prime})$, with $v_{\rm out}$ indicating the magnitude of the outflow velocity observed in the frame $x^{\mu \prime}$ and the subscripts $+$ and $-$ indicating the corotating and counterrotating outflow direction, respectively. In the plasmoid-mediated reconnection regime, a large fraction of the plasma is evacuated through plasmoid-like structures \cite{noteplasmoids}, which can also contain a significant component of nonthermal particles. Such particles gain most of their energy from the motional electric field \citep[e.g.][]{GuoPoP20} and are carried out by the plasmoids (where most of them are trapped) in the outflow direction \citep[e.g.][]{sironi16}.
The outflow Lorentz factor $\hat\gamma$ and the outflow velocity component ${\hat v}^\phi$ observed by the ZAMO can be conveniently expressed in terms of the Keplerian velocity in the ZAMO frame and the outflow velocities in the local frame $x^{\mu \prime}$. From Eq. \eqref{keplerOmega}, we can express the corotating Keplerian velocity observed in the ZAMO frame as
\begin{equation}\label{keplerv}
\hat v_K = \frac{A}{\Delta^{1/2}} {\left[ { \frac{ (M/r)^{1/2} -a (M/r)^2 }{r^3-a^2 M^3} } \right]} -\beta^\phi \, .
\end{equation}
Then, using ${\hat v}_{\pm}^\phi = ({\hat v_K} \pm v_{\rm out} \cos \xi)/(1 \pm {\hat v_K} v_{\rm out} \cos \xi)$ for the azimuthal components of the two outflow velocities and introducing the Lorentz factors $\hat\gamma_K=(1-\hat v_K^2)^{-1/2}$ and $\gamma_{\rm out} =(1-v_{\rm out}^2)^{-1/2}$, we can write the energy-at-infinity density of the reconnection outflows as
\begin{eqnarray}\label{energuis}
e^\infty_{{\rm hyd},\pm}& \!=\! &\alpha \hat\gamma_K \Bigg[ \left(1 \!+\! \hat v_K \beta^\phi \right) \gamma_{\rm out} w \nonumber \\
&& \pm \cos\xi \left(\hat v_K \!+\! \beta^\phi \right) \gamma_{\rm out} v_{\rm out} w \nonumber \\
&& -\frac{p}{\left(1 \!\pm\! \cos\xi \, \hat v_K v_{\rm out} \right) \gamma_{\rm out} \hat\gamma_K^2} \Bigg] \, ,
\end{eqnarray}
where the subscripts $+$ and $-$ indicate the energy-at-infinity density associated with corotating (${\bm{v}}_{+}^{\prime}$) and counterrotating (${\bm{v}}_{-}^{\prime}$) outflow directions as observed in the local frame $x^{\mu \prime}$.
The outflow velocity $v_{\rm out}$ can be evaluated by assuming that the local current sheet at the dominant $X$-point has a small inverse aspect ratio $\delta_X /L_X \ll 1$, where $\delta_X$ and $L_X$ are the half-thickness and half-length of this local current sheet.
If we consider that the rest frame rotating with Keplerian velocity is in a gravity-free state and neglect general relativistic corrections \cite{AsenjComisPRL,comiAsenjblackhole,AsenjComiPRD19}, then the conservation of momentum along the reconnection neutral line gives
\begin{equation}
w \gamma_{\rm out}^2 v_{\rm out}^2/L_X + {{B}_{\rm up}^2} \delta_X^2/L_X^3 \simeq ({{B}_{\rm up}}/\delta_X) ({{B}_{\rm up}} \delta_X/L_X) \, ,
\label{mom_eq}
\end{equation}
where $B_{\rm up}$ is the local magnetic field strength immediately upstream of the local current sheet. Here we have used Maxwell's equations to estimate the current density at the neutral line in addition to the outflow magnetic field strength \cite{Lyubarsky,comiAsenjoPRLspecial}. We also assumed that the thermal pressure gradient force in the outflow direction is small compared to the magnetic tension force, as verified by numerical simulations of relativistic reconnection with antiparallel magnetic fields \cite{Liu17}. Then, from Eq. \eqref{mom_eq} one gets
\begin{eqnarray}\label{velocityBup}
v_{\rm out} \simeq \left[ {\frac{ \left( 1-\delta_X^2/L_X^2 \right) \sigma_{\rm up}}{1 + \left( 1-\delta_X^2/L_X^2 \right) \sigma_{\rm up}}} \right]^{1/2} \, ,
\end{eqnarray}
where $\sigma_{\rm up} = B_{\rm up}^2/w_0$ is the plasma magnetization immediately upstream of the local current sheet at the dominant $X$-point. Consequently, for $\delta_X /L_X \ll 1$, the outflow velocity reduces to $v_{\rm out} \simeq \left[ {{\sigma_{\rm up}}/{(1 + \sigma_{\rm up})}} \right]^{1/2}$.
The local magnetic field $B_{\rm up}$ can be connected to the asymptotic macro-scale magnetic field $B_0$ by considering force balance along the inflow direction.
In the magnetically dominated regime, thermal pressure is negligible, and the inward-directed magnetic pressure gradient force must be balanced by the outward-directed magnetic tension (the inertia of the inflowing plasma is negligible if $\delta_X /L_X \ll 1$). Then, from geometrical considerations one gets \cite{Liu17}
\begin{equation}
B_{\rm up} = \frac{1- (\tan \varphi)^2}{1+ (\tan \varphi)^2} B_0 \, ,
\label{drop_B_eq}
\end{equation}
where $\varphi$ is the opening angle of the magnetic reconnection separatrix. Estimating $\tan \varphi \simeq \delta_X/L_X$, we have simply
\begin{eqnarray}\label{velocityB0}
v_{\rm out} \simeq {\left( {\frac{\sigma_0}{1 + \sigma_0}} \right)^{1/2}} \, ,\quad \gamma_{\rm out} \simeq {\left( {1+\sigma_0} \right)^{1/2}} \, ,
\end{eqnarray}
where we have defined $\sigma_0 = B_0^2/w_0$ as the plasma magnetization upstream of the reconnection layer. Accordingly, in the magnetically dominated regime $\sigma_0 \gg 1$, the reconnection outflow velocity approaches the speed of light. We finally note that in the presence of significant embedding of the local current sheet, the scaling of the outflow velocity could be weakened with respect to $B_0$, while Eq. \eqref{velocityBup} remains accurate \cite{Liu17,sironi16}.
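As a concrete illustration of Eq. \eqref{velocityB0} (the magnetization value is chosen for illustration only): for $\sigma_0 = 10$,
\begin{equation}
v_{\rm out} \simeq \left(\frac{10}{11}\right)^{1/2} \approx 0.95 \, , \qquad \gamma_{\rm out} \simeq \sqrt{11} \approx 3.3 \, ,
\end{equation}
so the reconnection outflow is already relativistic at moderate magnetizations.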
We must point out that in the plasmoid-mediated reconnection regime considered here, the continuous formation of plasmoids/flux ropes prevents the formation of extremely elongated ``laminar'' reconnection layers, thereby permitting a high reconnection rate \citep[e.g.][]{daughton09,bhatta09}. Depending on the plasma collisionality regime, plasmoid-mediated reconnection yields an inflow velocity (as observed in the frame $x^{\mu \prime}$)
\begin{equation} \label{recvelocity}
v_{\rm in} =
\begin{cases}
\mathcal{O}(10^{-2}) & {\rm for} \quad \delta_X > \ell_k \; [44\!-\!47] \\
\mathcal{O}(10^{-1}) & {\rm for} \quad \delta_X \lesssim \ell_k \; [42, 43] \, ,
\end{cases}
\end{equation}
where $\ell_k$ is the relevant kinetic scale that determines the transition between the collisional and collisionless regimes. The collisional regime is characterized by $\delta_X > \ell_k$, while the collisionless regime occurs if $\delta_X \lesssim \ell_k$. For a pair (${e^-} {e^+}$) dominated plasma, we have \cite{comiAsenjoPRLspecial} $\ell_k = \sqrt{\gamma_{{\rm th},e}} \, \lambda_e$, where $\lambda_e$ is the nonrelativistic plasma skin depth and ${\gamma_{{\rm th},e}}$ is the electron/positron thermal Lorentz factor.
If there is also a significant ion component, then \cite{daughton09} $\ell_k = \sqrt{\gamma_{{\rm th},i}} \, \lambda_i$, where $\lambda_i$ is the nonrelativistic ion inertial length and ${\gamma_{{\rm th},i}}$ is the ion thermal Lorentz factor.
We emphasize that the reconnection rate is independent of the microscopic plasma parameters when magnetic reconnection proceeds in the plasmoid-mediated regime. In particular, plasmoid-mediated reconnection in the collisionless regime has an inflow velocity $v_{\rm in}$ that is a significant fraction of the speed of light, which potentially allows for a high energy extraction rate from the black hole (see Sec. \ref{section4}).
The expression for the energy at infinity associated with the accelerated/decelerated plasma as a function of the critical parameters ($a$, $r/M$, $\sigma_0$, $\xi$) can be finally obtained by substituting the magnetization dependence of the outflow velocity into Eq. \eqref{energuis}. Then, the hydrodynamic energy at infinity per enthalpy $\epsilon^\infty_\pm = e^\infty_{{\rm hyd},\pm}/w$ becomes
\begin{eqnarray}\label{energuisMagnet}
\epsilon^\infty_\pm& \!=\! &\alpha \hat\gamma_K \Bigg[ \left(1 \!+\! \beta^\phi \hat v_K\right) {\left( {1 \!+\! \sigma_0} \right)^{1/2}} \pm \cos{\xi} \left(\beta^\phi \!+\! \hat v_K \right) \sigma_0^{1/2} \nonumber\\
&&\qquad\qquad - \frac{1}{4} \frac{{\left( {1 \!+\! \sigma_0} \right)^{1/2}} \mp \cos{\xi} \, \hat v_K \sigma_0^{1/2}}{\hat\gamma_K^2 (1+\sigma_0 \!-\! \cos^2{\xi} \, \hat v_K^2 \sigma_0)}\, \Bigg]\, ,
\end{eqnarray}
where we have assumed a relativistically hot plasma with polytropic index $\Gamma=4/3$.
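As a consistency check of Eq. \eqref{energuisMagnet}, note that in the limit $\sigma_0 \rightarrow 0$ the two branches coincide,
\begin{equation*}
\epsilon^\infty_\pm \rightarrow \alpha \hat\gamma_K \left(1 + \beta^\phi \hat v_K\right) - \frac{\alpha}{4 \hat\gamma_K} \, ,
\end{equation*}
as expected, since the reconnection outflow velocity vanishes for an unmagnetized plasma.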
Similarly to the original Penrose process \cite{penrose69}, energy extraction from the black hole through magnetic reconnection occurs when
\begin{equation}\label{conditionsenergy}
\epsilon^\infty_-<0\, \quad {\rm and} \quad \Delta \epsilon^\infty_+ >0 \, ,
\end{equation}
where
\begin{equation}\label{conditionsenergy2}
\Delta \epsilon^\infty_+ = \epsilon^\infty_+ - \left( {1-\frac{\Gamma}{\Gamma-1} \frac{p}{w} } \right) = \epsilon^\infty_+
\end{equation}
for a relativistically hot plasma, for which $w \simeq \Gamma p/(\Gamma-1) = 4p$ and the term in parentheses vanishes.
Therefore, black hole rotational energy is extracted if the decelerated plasma acquires negative energy as measured at infinity, while the plasma that is accelerated acquires energy at infinity larger than its rest mass and thermal energies.
\begin{figure}[]
\begin{center}
\vspace{0.20cm}
\includegraphics[width=8.4cm]{Fig2.pdf}
\vspace{-0.30cm}
\end{center}
\caption{Energy at infinity per enthalpy $\epsilon^\infty_+$ (gray line) and $\epsilon^\infty_-$ (orange line) for optimal energy extraction conditions ($a, r/M \rightarrow 1$ and $\xi \rightarrow 0$). Energy extraction requires $\sigma_0 > 1/3$. For $\sigma_0 \gg 1$, $\epsilon^\infty_+ \simeq \sqrt{3 \sigma_0}$ (dash-dotted black line) and $\epsilon^\infty_- \simeq - \sqrt{\sigma_0/3}$ (dashed black line).}
\label{fig2}
\end{figure}
The energy at infinity per enthalpy $\epsilon^\infty_\pm$ given by Eq. \eqref{energuisMagnet} depends on the black hole spin $a$ and the $X$-point distance $r/M$, as well as the plasma magnetization $\sigma_0$ and the orientation angle $\xi$, which encodes the information of the magnetic field configuration surrounding the black hole. Eqs. \eqref{energuisMagnet}-\eqref{conditionsenergy2} indicate that energy extraction is favored by lower values of the orientation angle $\xi$ and higher values of the magnetization $\sigma_0$. It is instructive to consider the limit $a \rightarrow 1$, $\xi \rightarrow 0$, and $r \rightarrow M$ (the metric \eqref{BL_coord} has a coordinate singularity at the event horizon that can be removed by a coordinate transformation). In this case, from Eq. \eqref{energuisMagnet} we obtain $\epsilon^\infty_+>0$ and $\epsilon^\infty_-<0$ when
\begin{equation}\label{}
\sigma_0 > {1}/{3} \, .
\end{equation}
Therefore, in principle, it is possible to extract rotational energy via magnetic reconnection for values of $\sigma_0$ below unity. However, higher $\sigma_0$ values are required to extract sizable amounts of energy. If, in addition to $a, r/M \rightarrow 1$ and $\xi \rightarrow 0$, we also consider $\sigma_0 \gg 1$, from Eq. \eqref{energuisMagnet} we obtain
\begin{equation}\label{energ_mas_simple}
\epsilon^\infty_+ \simeq \sqrt{3 g_{\phi\phi}} \, {\omega^\phi} \gamma_{\rm out} v_{\rm out} \simeq \sqrt{3 \sigma_0} \, ,
\end{equation}
\begin{equation}\label{energ_minus_simple}
\epsilon^\infty_- \simeq - {\sqrt{\frac{g_{\phi\phi}}{3}} \, \omega^\phi} \gamma_{\rm out} v_{\rm out} \simeq - \sqrt{\frac{\sigma_0}{3}} \, .
\end{equation}
These relations give us the energy at infinity per enthalpy of the accelerated ($+$) and decelerated ($-$) plasma in the maximal energy extraction regime (as can be seen from Fig. \ref{fig2}, they provide a fairly accurate estimate already at values of $\sigma_0$ moderately larger than unity).
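For instance, evaluating the asymptotic expressions \eqref{energ_mas_simple} and \eqref{energ_minus_simple} at the moderate magnetization $\sigma_0 = 3$ gives
\begin{equation*}
\epsilon^\infty_+ \simeq \sqrt{3 \times 3} = 3 \, , \qquad \epsilon^\infty_- \simeq - \sqrt{3/3} = -1 \, ,
\end{equation*}
already close to the exact values displayed in Fig. \ref{fig2}.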
In the next sections, we will show that magnetic reconnection is a viable mechanism for extracting energy from rotating black holes for a significant region of the parameter space, we will evaluate the rate of black hole energy extraction, and we will determine the efficiency of the reconnection process.
\section{Energy Extraction Assessment in Phase Space} \label{section3}
We analyze the viability of energy extraction via magnetic reconnection by considering solutions of Eq. \eqref{energuisMagnet}. In particular, in Figs. \ref{fig3} and \ref{fig4} we display the regions of the phase-space $\{a,r/M\}$ where $\epsilon^\infty_- <0$ and $ \Delta \epsilon^\infty_+ >0$, which correspond to the conditions for energy extraction. This is done for a reconnecting magnetic field with orientation angle $\xi = \pi/12$ and different values of the magnetization parameter $\sigma_0 \in \left\{ {1,3,10,30,100} \right\}$ (Fig. \ref{fig3}), and for a plasma magnetization $\sigma_0 = 100$ and different values of the orientation angle $\xi \in \left\{ {\pi/20,\pi/12,\pi/6,\pi/4} \right\}$ (Fig. \ref{fig4}).
\begin{figure}[]
\begin{center}
\includegraphics[width=8.4cm]{Fig3.pdf}
\vspace{-0.30cm}
\end{center}
\caption{Regions of the phase-space $\{a,r/M\}$ where the energies at infinity per enthalpy from Eq. \eqref{energuisMagnet} are such that $\Delta \epsilon^\infty_+ >0$ (gray area) and $\epsilon^\infty_- <0$ (orange to red areas), for a reconnecting magnetic field having orientation angle $\xi = \pi/12$ and different values of the magnetization parameter $\sigma_0 \in \left\{ {1,3,10,30,100} \right\}$. The area with $\epsilon^\infty_- <0$ increases monotonically as $\sigma_0$ increases.
The solid black line indicates the limit of the outer event horizon, Eq. \eqref{outerevent}, the dashed black line represents the limiting corotating circular photon orbit, Eq. \eqref{circularorbitphotonrad}, while the dash-dotted black line corresponds to the innermost stable circular orbit, Eq. \eqref{rmargbsc}. The limit $r/M = 2$ corresponds to the outer boundary of the ergosphere at $\theta = \pi/2$.}
\label{fig3}
\end{figure}
\begin{figure}[]
\begin{center}
\includegraphics[width=8.4cm]{Fig4.pdf}
\vspace{-0.30cm}
\end{center}
\caption{Regions of the phase-space $\{a,r/M\}$ where the energies at infinity per enthalpy from Eq. \eqref{energuisMagnet} are such that $\Delta \epsilon^\infty_+ >0$ (gray area) and $\epsilon^\infty_- <0$ (green areas), for plasma magnetization $\sigma_0 = 100$ and different values of the orientation angle $\xi \in \left\{ {\pi/20,\pi/12,\pi/6,\pi/4} \right\}$. Other lines are the same as in Figure \ref{fig3}. The area with $\epsilon^\infty_- <0$ increases monotonically as $\xi$ decreases.}
\label{fig4}
\end{figure}
As the magnetization of the plasma increases, the region of the phase-space $\{a,r/M\}$ where magnetic reconnection extracts black hole rotational energy extends to larger $r/M$ values and lower values of the dimensionless spin $a$ (Fig. \ref{fig3}). From Eq. \eqref{energuisMagnet} we can see that $\epsilon^\infty_-$ is a monotonically decreasing function of $\sigma_0$, while $\epsilon^\infty_+$ monotonically increases with $\sigma_0$. $\epsilon^\infty_+ > 0$ is easily satisfied for $r_{\rm ph} < r < r_E$, $a>0$, and $\xi < \pi/2$. On the other hand, $\epsilon^\infty_- < 0$ requires $\sigma_0 \gg 1$ in order for reconnection to extract black hole energy in a significant region of the phase-space $\{a,r/M\}$. High values of the plasma magnetization can extend the energy extraction region up to the outer boundary of the ergosphere, while energy extraction for moderate values of the spin parameter $a$ is subject to the occurrence of particle orbits inside the ergosphere.
Energy extraction via magnetic reconnection is also favored by reconnection outflows whose orientation is close to the azimuthal direction. The region of the phase-space $\{a,r/M\}$ where energy extraction occurs increases to larger $r/M$ values and lower $a$ values as the orientation angle $\xi$ decreases. Notwithstanding, even an angle as large as $\xi = \pi/4$ admits a modest region of the phase-space where magnetic reconnection extracts rotational energy. The increase of the energy extraction region for decreasing angle $\xi$ is due to the fact that only the azimuthal component of the outflow velocity contributes to the extraction of rotational energy. For an angle $\xi = \pi/20$, the extraction of black hole energy happens for $X$-points up to $r/M \approx 1.96$ (for $\sigma_0 =100$), while $\xi \rightarrow 0$ can extend this margin up to the outer boundary of the ergosphere.
The ergosphere of spinning black holes ($r_{H} <r < r_{E}$) can reach very high plasma magnetizations (e.g., $\sigma_0 \gg 100$ close to the event horizon of the black hole M87* \cite{EHT_5_2019}). Furthermore, for rapidly spinning ($a$ close to unity) black holes, we expect a reconnecting magnetic field with small orientation angle, $\xi \lesssim \pi/6$, as the strong frame-dragging effect inside the ergosphere stretches the magnetic field lines along the azimuthal direction \citep[e.g.][]{Koide02,Semenov04}. Therefore, the plots shown in Figs. \ref{fig3} and \ref{fig4} indicate that magnetic reconnection is a viable mechanism for extracting energy from rotating black holes with dimensionless spin $a$ close to unity. On the other hand, energy extraction via magnetic reconnection becomes negligible for spin values $a \lesssim 0.8$. The availability of reconnection regions inside the ergosphere decreases as the spin parameter decreases, with no circular orbits inside the ergosphere for spin $a \leq 1/\sqrt{2}$. Magnetic reconnection could still be capable of extracting energy in such cases if a circular orbit is sustained thanks to the help of the magnetic field or if one considers non-circular orbits.
\section{Energy Extraction Rate and Reconnection Efficiency} \label{section4}
We now evaluate the rate of black hole energy extraction. This depends on the amount of plasma with negative energy at infinity that is swallowed by the black hole in the unit time. Therefore, a high reconnection rate can potentially induce a high energy extraction rate. The power $P_{\rm extr}$ extracted from the black hole by the escaping plasma can be estimated as
\begin{equation} \label{Pextr}
P_{\rm extr} = - \epsilon_-^\infty w_0 A_{\rm in} U_{\rm in} \, ,
\end{equation}
where $U_{\rm in} = \mathcal{O}(10^{-1})$ for the collisionless regime, while $U_{\rm in} = \mathcal{O}(10^{-2})$ for the collisional one. $A_{\rm in}$ is the cross-sectional area of the inflowing plasma, which can be estimated as ${A}_{\rm in} \sim (r_E^2 - r_{{\rm ph}}^2)$ for rapidly spinning black holes. In particular, for $a \rightarrow 1$ one has $(r_E^2 - r_{{\rm ph}}^2) = (r_{E}^2 - r_{H}^2) = 3M^2$.
We show in Fig. \ref{fig5} the ratio $P_{\rm extr}/w_0$ as a function of the dominant $X$-point location $r/M$ for a rapidly spinning black hole with $a=0.99$ and magnetic reconnection in the collisionless regime.
This is done for a typical reconnecting magnetic field with orientation angle $\xi = \pi/12$ and different values of the magnetization parameter $\sigma_0 \in \left\{ {10,10^2,10^3,10^4,10^5} \right\}$ (top panel), and for a typical magnetization $\sigma_0 = 10^4$ and different values of the orientation angle $\xi \in \left\{ {0,\pi/20,\pi/12,\pi/8,\pi/6} \right\}$ (bottom panel).
The power extracted from the black hole increases monotonically for increasing values of the plasma magnetization and for lower values of the orientation angle. It peaks for $X$-point locations close to the limiting circular orbit and then drops off. The peak of the extracted power can rise up to a maximum value that is achieved for $r/M \rightarrow 1$ if $a \rightarrow 1$. The theoretical limit of the maximum power is given by
\begin{equation} \label{PextrMAX}
P_{\rm extr}^{\rm max} \simeq \sqrt{\sigma_0/3} \, w_0 A_{\rm in} U_{\rm in} \sim 0.1 M^2 \sqrt{\sigma_0} \, w_0 \, ,
\end{equation}
which follows directly from Eqs. \eqref{energ_minus_simple} and \eqref{Pextr}. We can see from Fig. \ref{fig5} that the peak of the extracted power is already close to the maximum theoretical limit when $\xi \lesssim \pi/12$.
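As a rough check of the numerical factor in Eq. \eqref{PextrMAX}, for $a \rightarrow 1$ one has $A_{\rm in} \sim 3M^2$, so that in the collisionless regime ($U_{\rm in} \sim 0.1$)
\begin{equation*}
P_{\rm extr}^{\rm max} \sim \sqrt{\sigma_0/3} \, w_0 \times 3 M^2 \times 0.1 \simeq 0.17 \, M^2 \sqrt{\sigma_0} \, w_0 \, ,
\end{equation*}
consistent with the quoted order-of-magnitude scaling $\sim 0.1 M^2 \sqrt{\sigma_0} \, w_0$.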
\begin{figure}[]
\begin{center}
\includegraphics[width=8.4cm]{Fig5a.pdf}
\bigskip $\,$
\hspace*{-0.05cm}\includegraphics[width=8.4cm]{Fig5b.pdf}
\vspace{-0.30cm}
\end{center}
\caption{${P_{\rm extr}}/w_0 = - \epsilon_-^\infty A_{\rm in} U_{\rm in}$ as a function of the dominant $X$-point location $r/M$ for a rapidly spinning black hole with $a = 0.99$ and reconnection inflow four-velocity $U_{\rm in} = 0.1$ (i.e., collisionless reconnection regime). $\epsilon_-^\infty$ is evaluated using Eq. \eqref{energuisMagnet}, while $A_{\rm in} = (r_{{\rm ph}}^2 - r_{H}^2)$. We have also set $M=1$. Different colors (from indigo to red) refer to different plasma magnetizations (from $\sigma_0 = 10$ to $\sigma_0 = 10^5$) and $\xi = \pi/12$ (top panel) or different orientation angles (from $\xi = \pi/6$ to $\xi = 0$) and $\sigma_0 = 10^4$ (bottom panel). The vertical dashed line indicates the limiting circular orbit $r_{\rm ph}(a=0.99)$.}
\label{fig5}
\end{figure}
\begin{figure}[]
\begin{center}
\includegraphics[width=8.4cm]{Fig6.pdf}
\vspace{-0.30cm}
\end{center}
\caption{Efficiency $\eta$ of the reconnection process as a function of the dominant $X$-point location $r/M$ for a reconnection layer with upstream plasma magnetization $\sigma_0 = 100$ and reconnecting magnetic field having orientation angle $\xi = \pi/20$. Different colors (from indigo to red) refer to different black hole spin values (from $a = 0.9$ to $a = 1$). }
\label{fig6}
\end{figure}
The proposed mechanism of energy extraction via magnetic reconnection generates energetic plasma outflows that steal energy from the black hole, but it also requires magnetic field energy to operate. Magnetic energy is indeed needed in order to redistribute the angular momentum of the particles in such a way as to generate particles with negative energy at infinity and particles escaping to infinity. Therefore, it is convenient to define the efficiency of the plasma energization process via magnetic reconnection as
\begin{equation} \label{eff}
\eta = \frac{\epsilon^\infty_+}{\epsilon^\infty_+ + \epsilon^\infty_-} \, .
\end{equation}
Extraction of energy from the black hole takes place when $\eta > 1$. Figure \ref{fig6} shows the efficiency $\eta$ as a function of the dominant $X$-point location $r/M$ for a reconnection layer with magnetization parameter $\sigma_0=100$, orientation angle $\xi = \pi/20$, and different black hole spin values $a \in \left\{ {0.90,0.96,0.99,0.999,1} \right\}$. The efficiency $\eta$ significantly increases for reconnection $X$-points that are closer to the black hole event horizon and falls off below unity when the inner radius reaches $r_{\rm ph}$. The maximum efficiency can be evaluated by considering the optimal energy extraction conditions ($a, r/M \rightarrow 1$, $\xi \rightarrow 0$) and $\sigma_0 \gg 1$. In this case, Eq. \eqref{eff} gives
\begin{equation} \label{effmax}
\eta_{\rm max} \simeq \frac{\sqrt{3 \sigma_0}}{ \sqrt{3 \sigma_0} - \sqrt{\sigma_0/3}} = {3}/{2} \, .
\end{equation}
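Note that the magnetization dependence cancels in this ratio: factoring out $\sqrt{\sigma_0}$ gives
\begin{equation*}
\eta_{\rm max} = \frac{\sqrt{3}}{\sqrt{3} - 1/\sqrt{3}} = \frac{3}{3-1} = \frac{3}{2} \, ,
\end{equation*}
independently of the value of $\sigma_0 \gg 1$.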
Therefore, the additional energy extracted from the black hole, while non-negligible, does not extensively modify the energetics of the escaping plasma.
We can also compare the power extracted from the black hole by fast magnetic reconnection with the one that can be extracted via the Blandford-Znajek mechanism, in which the rotational energy is extracted electromagnetically through a magnetic field that threads the black hole event horizon.
For maximum efficiency conditions \cite{MT82,Thorne86,komissarov01}, the rate of black hole energy extraction via the Blandford-Znajek mechanism is given by \cite{BZ77,Tchekhovskoy10}
\begin{equation} \label{P_BZ}
P_{\rm BZ} \simeq \kappa \Phi_{\rm BH}^2 \left( {\Omega_H^2 + \chi \Omega_H^4 + \zeta \Omega_H^6 } \right) \, ,
\end{equation}
where $\Phi_{\rm BH} = \frac{1}{2} \int_{\theta} \int_{\phi} |B^r| dA_{\theta \phi}$ is the magnetic flux threading one hemisphere of the black hole horizon (with $dA_{\theta \phi} = \sqrt{-g} \, d\theta d\phi$ indicating the area element in the $\theta$-$\phi$ plane), $\Omega_H = a /2 r_{H}$ is the angular frequency of the black hole horizon, while $\kappa$, $\chi$, and $\zeta$ are numerical constants. The numerical prefactor $\kappa$ depends on the magnetic field geometry near the black hole ($\kappa \approx 0.053$ for a split monopole geometry and $\kappa \approx 0.044$ for a parabolic geometry), while $\chi \approx 1.38$ and $\zeta \approx -9.2$ \cite{Tchekhovskoy10}.
Eq. \eqref{P_BZ} is a generalization of the original Blandford-Znajek scaling \cite{BZ77} $P_{\rm BZ} \simeq \kappa \Phi_{\rm BH}^2 (a/4M)^2$, which is recovered in the small spin limit $a \ll 1$.
\begin{figure}[]
\begin{center}
\includegraphics[width=8.4cm]{Fig7.pdf}
\vspace{-0.30cm}
\end{center}
\caption{Power ratio ${P_{\rm extr}}/{P_{\rm BZ}}$ as a function of the plasma magnetization $\sigma_0$ for a black hole with dimensionless spin $a = 0.99$ and a reconnecting magnetic field having orientation angle $\xi = \pi/12$. Different colors (from indigo to red) refer to different dominant $X$-point locations $r/M \in \left\{ {1.3,1.4,1.5,1.6,1.7} \right\}$. We considered $U_{\rm in} = 0.1$ (i.e., collisionless reconnection regime), $A_{\rm in} = (r_{{\rm ph}}^2 - r_{H}^2)$, and $\kappa = 0.05$.}
\label{Fig7}
\end{figure}
In order to provide a rough order of magnitude estimate of the power extracted during the occurrence of fast magnetic reconnection with respect to the approximately steady-state Blandford-Znajek process,
we assume $\Phi_{\rm BH} \sim |B^r| r_{H}^2 \sim B_0 {\sin \xi} \, r_{H}^2$ (we point out that a precise evaluation of $\Phi_{\rm BH}$ requires direct numerical simulations that reproduce the detailed magnetic field configuration at all latitudes, while the angle $\xi$ is a good estimate for the magnetic field configuration only at low latitudes \citep[e.g.][]{Koide02,Semenov04}). Then, we can evaluate the ratio ${P_{\rm extr}}/{P_{\rm BZ}}$ as
\begin{equation} \label{powerratiowithBZ1}
\frac{P_{\rm extr}}{P_{\rm BZ}} \sim\frac{ - \epsilon_-^\infty A_{\rm in} U_{\rm in}} {\kappa \, \Omega_H^2 r_{H}^4 \sigma_0 \sin^2 \xi \, (1+ \chi \Omega_H^2 + \zeta \Omega_H^4)}\, .
\end{equation}
Fig. \ref{Fig7} shows the ratio ${P_{\rm extr}}/{P_{\rm BZ}}$ given by the right-hand side of Eq. \eqref{powerratiowithBZ1} as a function of the plasma magnetization $\sigma_0$ for the fast collisionless reconnection regime. ${P_{\rm extr}}/{P_{\rm BZ}} \gg 1$ for an extended range of plasma magnetizations. For $\sigma_0 \sim 1$, the force-free approximation (the inertia of the plasma is ignored, i.e. $w_0 \rightarrow 0$) that is used to derive the extracted power in the Blandford-Znajek process becomes invalid. In this case, magnetic reconnection is an effective mechanism of energy extraction provided that the plasma magnetization is sufficient to satisfy the condition $ \epsilon_-^\infty < 0$ (as well as $\Delta \epsilon^\infty_+ >0$). On the other hand, for $\sigma_0 \rightarrow \infty$, energy extraction via fast magnetic reconnection is always subdominant to the Blandford-Znajek process since ${P_{\rm extr}}/{P_{\rm BZ}} \rightarrow 0$ in this limit.
If we neglect higher order corrections with respect to $\Omega_H^2$ (which leads to an overprediction of $P_{\rm BZ}$ by about 25\% as $a \rightarrow 1$ \cite{Tchekhovskoy10}), and recalling that $\Omega_H = 1/2M$ for $a \rightarrow 1$, we can estimate the ratio ${P_{\rm extr}}/{P_{\rm BZ}}$ for a rapidly spinning black hole as
\begin{equation} \label{powerratiowithBZ2}
\frac{P_{\rm extr}}{P_{\rm BZ}}\sim\frac{- \epsilon_-^\infty}{\kappa \, \sigma_0 \sin^2 \xi}\, ,
\end{equation}
where we considered plasmoid-mediated reconnection in the collisionless regime. Therefore, the power extracted via fast collisionless magnetic reconnection can exceed the one extracted through the Blandford-Znajek process for an extended range of plasma magnetizations if there is a significant toroidal component of the magnetic field in the black hole ergosphere. Note, however, that energy extraction by fast magnetic reconnection is localized in time, since it requires a certain time to build up the magnetic field configuration storing the magnetic energy that is eventually dissipated via fast magnetic reconnection.
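As a rough orientation, inserting $-\epsilon_-^\infty \simeq \sqrt{\sigma_0/3}$ into Eq. \eqref{powerratiowithBZ2} with $\kappa \simeq 0.05$ and $\xi = \pi/12$ (so that $\sin^2 \xi \simeq 0.067$) gives
\begin{equation*}
\frac{P_{\rm extr}}{P_{\rm BZ}} \sim \frac{\sqrt{\sigma_0/3}}{\kappa \, \sigma_0 \sin^2 \xi} \simeq \frac{2 \times 10^2}{\sqrt{\sigma_0}} \, ,
\end{equation*}
suggesting that for these parameters the crossover $P_{\rm extr} \sim P_{\rm BZ}$ occurs around $\sigma_0 \sim {\rm few} \times 10^4$.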
\section{Conclusions}
\label{section5}
In this paper, we envisioned the possibility of extracting black hole rotational energy via fast magnetic reconnection in the black hole ergosphere. We considered a configuration with antiparallel magnetic field lines near the equatorial plane, which is induced by the frame dragging of the spinning black hole. The change in magnetic field direction at the equatorial plane produces an equatorial current sheet that is disrupted by the plasmoid instability when its aspect ratio reaches a critical value (for a collisionless relativistic pair plasma, the critical aspect ratio condition is derived in Ref. \cite{Comisso2019}). The formation of plasmoids/flux ropes drives fast magnetic reconnection, which rapidly converts the available magnetic energy into plasma particle energy. When the plasma is expelled out of the reconnection layer, the magnetic tension that drives the plasma outflow relaxes. The field lines are then stretched again as a consequence of the frame dragging and a current layer prone to fast plasmoid-mediated reconnection forms again. This process leads to reconnecting current sheets that form rapidly and intermittently.
Magnetic reconnection accelerates part of the plasma in the direction of the black hole rotation, while another part of the plasma is accelerated in the opposite direction and falls into the black hole. Black hole energy extraction occurs if the plasma that is swallowed by the black hole has negative energy as viewed from infinity, while the accelerated plasma that gains energy from the black hole escapes to infinity. Therefore, differently from the Blandford-Znajek process, in which the extraction of rotational energy is obtained through a purely electromagnetic mechanism, the energy extraction mechanism described here requires non-zero particle inertia. This mechanism is also different from the original Penrose process, since dissipation of magnetic energy is required to produce the negative-energy particles. Clearly, all mechanisms extract black hole rotational energy by feeding the black hole with negative energy and angular momentum.
We showed analytically that energy extraction via magnetic reconnection is possible when the black hole spin is high (dimensionless spin $a \sim 1$) and the plasma is strongly magnetized (plasma magnetization $\sigma_0 > 1/3$).
Magnetic reconnection is assumed to occur in a circularly rotating plasma with a reconnecting field having both azimuthal and radial components. The region of the phase-space $\{a,r/M\}$ where magnetic reconnection is capable of extracting black hole energy depends on the plasma magnetization $\sigma_0$ and the orientation $\xi$ of the reconnecting magnetic field. We showed that high values of the plasma magnetization and mostly azimuthal reconnecting fields can expand the energy extraction region up to the outer boundary of the ergosphere. For a dimensionless spin parameter that approaches unity, the extraction of black hole energy is maximal when the dominant reconnection $X$-point (where the two magnetic reconnection separatrices intersect) is close to the event horizon. For $\sigma_0 \gg 1$, we showed that the asymptotic negative energy at infinity per enthalpy of the plasma that is swallowed by the black hole is $\epsilon^\infty_- \simeq - \gamma_{\rm out} v_{\rm out}/ {\sqrt{3}} \simeq - \sqrt{\sigma_0/3}$. On the other hand, the plasma that escapes to infinity and takes away black hole energy approaches the energy at infinity per enthalpy $\epsilon^\infty_+ \simeq \sqrt{3} \, \gamma_{\rm out} v_{\rm out} \simeq \sqrt{3 \sigma_0}$.
We calculated the power extracted from the black hole by the escaping plasma and evaluated its maximum when the dominant reconnection $X$-point is close to the event horizon. This corresponds to $P_{\rm extr}^{\rm max} \sim 0.1 M^2 \sqrt{\sigma_0} \, w_0$ for the collisionless plasma regime and one order of magnitude lower for the collisional regime. The overall efficiency of the plasma energization process via magnetic reconnection can reach a maximum of $\eta_{\rm max} \simeq 3/2$. Therefore, the additional energy extracted from the black hole, while important, does not extensively modify the energetics of the escaping plasma. On the other hand, the power extracted via fast magnetic reconnection can induce a significant reduction of the rotational energy of the black hole, ${d E_{\rm rot}}/{dt} = \epsilon_-^\infty w_0 A_{\rm in} U_{\rm in}$. This is effective when $a$ is close to unity. Therefore, if we consider a black hole with dimensionless spin parameter close to unity and define $\varpi = 1-a \ll 1$, we have ${d E_{\rm rot}}/{dt} \simeq - (M/4 \sqrt{\varpi}) d\varpi/dt$ and the spindown time can be obtained as
\begin{equation} \label{}
{t_{\rm sd}} = \frac{\mathcal{O}(10)}{2 \sqrt{\sigma_0} \, w_0 M} (\sqrt{\varpi_{\rm f}}-\sqrt{\varpi_{\rm i}}) \, ,
\end{equation}
where the subscripts ${\rm f}$ and ${\rm i}$ are used to label final and initial values, respectively. This indicates that magnetic reconnection can cause a significant spindown of the black hole when $a \sim 1$. For example, fast magnetic reconnection in the ergosphere can reduce the black hole dimensionless spin from $a=0.999$ to $a=0.99$ in ${t_{\rm sd}} \sim 1/(\sqrt{\sigma_0} \, w_0 M)$. On the other hand, at lower spin values, especially for $a <0.9$, magnetic reconnection loses its efficacy as the plasma available in the ergosphere diminishes.
Various systems hosting a black hole are expected to have magnetization $\sigma_0 \gtrsim 1$ in the ergosphere.
For the typical conditions around supermassive black holes in active galactic nuclei (AGNs), the energy density of the electromagnetic field far exceeds the enthalpy density of the plasma and $\sigma_0 \sim 10^{4}$ or larger \cite{DoddsEden2010,Ponti17,EHT_5_2019} is foreseeable. Likewise, long and short gamma-ray bursts (GRBs) may have $\sigma_0 \sim 1$ or larger \cite{MacFadyen99,vanPutten99,Kiuchi15,Ruiz19} in the ergosphere (a central black hole is assumed).
Under these magnetization conditions (in addition to $a \sim 1$), magnetic reconnection is capable of extracting energy from the black hole. For $\sigma_0 \sim 1 - 10^4$, we have shown that the bursty energy extraction rate occurring during fast magnetic reconnection can exceed the more steady energy extraction rate expected from the Blandford-Znajek mechanism. On the other hand, as the plasma magnetization increases, energy extraction via fast magnetic reconnection eventually becomes subdominant since it requires non-vanishing plasma inertia.
In the scenario proposed here, fast magnetic reconnection occurs rapidly and intermittently, so that the associated emission within a few gravitational radii from the black hole is expected to be bursty in nature. This bursty behavior of fast magnetic reconnection might be responsible for triggering flares in the vicinity of rotating black holes. Indeed, frequent X-ray and near-infrared flares are detected on a regular basis from the Galactic Center black hole Sgr A* \citep[e.g.][]{Baganoff01,Genzel03,Meyer08,Neilsen13}, and magnetic reconnection close to the black hole is often conjectured to induce these flares \citep[e.g.][]{DoddsEden2010,ripperda20,Dexter20}. Recent observations by the GRAVITY collaboration \cite{Gravity2018} have been able to pin down the motion of near-infrared flares originating near the last stable circular orbit of Sgr A*.
Reconnection layers originate naturally in the ergosphere of rotating black holes and produce plasmoids/flux ropes that are filled with energized plasma with an energy budget that can exceed the energy originally stored in the magnetic field.
In this paper we have assumed that the plasma rotates circularly around the black hole. This assumption may be relaxed in order to treat more complex scenarios in which reconnection occurs in non-circular orbits. In this case, the plasma could approach the event horizon even when the black hole spin is not particularly high, expanding the parameter space region where magnetic reconnection can extract black hole energy.
Another situation that could increase the efficacy of magnetic reconnection is the simultaneous presence of equatorial and non-equatorial current sheets \cite{ripperda20}, which may result in an increase of the extracted power to some degree.
Finally, for reconnecting magnetic fields that have a significant radial component, particle acceleration owing to the reconnection electric field can increase the rate of energy extraction and the overall efficiency of the reconnection process.
\begin{acknowledgments}
We gratefully acknowledge discussions with Lorenzo Sironi, Daniel Gro\v{s}elj, Russell Kulsrud, Manasvi Lingam, Yi-Hsin Liu, Joonas N\"attil\"a, Kyle Parfrey, Bart Ripperda, Daniel Siegel, and Yajie Yuan. L.C. acknowledges support by the NASA ATP NNX17AG21G and NSF PHY-1903412 grants. F.A.A. acknowledges support by the Fondecyt-Chile Grant No. 1180139.
\end{acknowledgments}
In this article we show that the adele class space\footnote{More specifically the sector ${\mathbb Q}^\times\backslash{\mathbb A}_{\mathbb Q}/\hat{\mathbb Z}^*$ corresponding to the trivial Grossencharacter.} of ${\mathbb Q}$ admits a natural structure of tropical curve. We follow the strategy outlined in \cite{CC,CC1} and investigate the algebraic geometric structure of the
Scaling Site\footnote{These results have been announced in \cite{CC2}.} ${\mathscr S}$ obtained from the arithmetic site ${\mathscr A}$ by extension of scalars from the Boolean semifield ${\mathbb B}$ to the tropical semifield $\R_+^{\rm max}$ ({\it cf.}~Figure~\ref{scalingpic}). As a Grothendieck topos ${\mathscr S}$ is described as ${[0,\infty)\rtimes{\N^{\times}}}$: the topos of $\N^{\times}$-equivariant sheaves (of sets) on the half-line and our first result (Theorem \ref{scaltop}) states that the isomorphism classes of points of this topos form the basic sector of the adele class space of ${\mathbb Q}$
\begin{thm*}\label{scaltopintro}The space of points of the topos ${[0,\infty)\rtimes{\N^{\times}}}$
is canonically isomorphic to ${\mathscr A}(\R_+^{\rm max})\simeq {\mathbb Q}^\times\backslash{\mathbb A}_{\mathbb Q}/\hat{\mathbb Z}^*$.
\end{thm*}
This result provides the missing geometric structure on the adele class space since the topos ${[0,\infty)\rtimes{\N^{\times}}}$ inherits, from its construction by extension of scalars, a natural sheaf ${\mathcal O}$ of regular functions. We call {\em Scaling Site} the semi-ringed topos
\[
{\mathscr S}:=\left({[0,\infty)\rtimes{\N^{\times}}},{\mathcal O}\right)
\]
so obtained. The sections of the sheaf ${\mathcal O}$ are convex, piecewise affine functions with integral slopes. In Appendix \ref{apptropic} we review the well known results on the localization of zeros of analytic functions showing in which sense the tropical half-line $(0,\infty)$, endowed with the sheaf of convex piecewise affine functions with integral slopes, provides a suitable structure for the localization (both in the archimedean and non-archimedean case) of zeros of analytic functions in the punctured unit disk. The new component supplied with the scaling site is the action of $\N^{\times}$ by multiplication on the tropical half-line $[0,\infty)$. On analytic functions this action is given by the transformation $f(z)\mapsto f(z^n)$ for $n\in \N^{\times}$, {\it i.e.\/}\ the action of the degree $n$ endomorphism $z\mapsto z^n$ on the punctured unit disk. \newline
The structure sheaf ${\mathcal O}$ of ${\mathscr S}$ is a sheaf of semirings
of ``characteristic one" ({\it i.e.\/}\ of semirings in which $1+1=1$) and the naturalness of this structure is justified at the conceptual level (see Appendix \ref{appzmax}) by two facts. First, the endomorphisms of any object admit a {\em canonical} structure of semiring in any category with finite products and coproducts when the canonical morphisms from coproducts to products are isomorphisms. Second, passing from rings to semirings only adds one more object to the list of finite fields ${\mathbb F}_q$, namely the Boolean semifield ${\mathbb B}$, and only one object to the list of fields whose multiplicative group is cyclic, {\it i.e.\/}\ the semifield ${\Z_{\rm max}}$ whose multiplicative group is infinite cyclic. Both ${\mathbb B}$ and ${\Z_{\rm max}}$ are semirings of characteristic one.
\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{scalingpic26.pdf}
\end{center}
\caption{The extension of scalars from ${\mathscr A}$ to ${\mathscr S}$ \label{scalingpic} }
\end{figure}
In section \ref{sectsheafstalks} we describe the stalks of the structure sheaf ${\mathcal O}$ and show (Theorem \ref{structure3}) that the points ${\mathscr S}(\R_+^{\rm max})$ of the scaling site defined over $\R_+^{\rm max}$ coincide with the points
${\mathscr A}(\R_+^{\rm max})$ of the arithmetic site defined over the same semifield. As stated in \cite{CC,CC1} a long term goal of this project is to develop an adequate version of the Riemann-Roch theorem in characteristic $1$, suitable to transplant the pRH proof of Weil to the Riemann zeta function. In this paper we test this idea by restricting our geometric structure to the periodic orbits of the scaling flow, {\it i.e.\/}\ to the points of ${[0,\infty)\rtimes{\N^{\times}}}$ over the image of ${\rm Spec\,}{\mathbb Z}$ ({\it cf.}~Figure~\ref{scalingpic} and \cite{CC1}, \S 5.1). We find that for each prime $p$ the corresponding circle of length $\log p$ is endowed with a quasi-tropical structure which turns this orbit into a variant $C_p={\mathbb R}_+^*/p^{\mathbb Z}$ of the classical Jacobi description ${\mathbb C}^*/q^{\mathbb Z}$ of an elliptic curve. The structure sheaf ${\mathcal O}_p$ of $C_p$ is obtained by restriction of ${\mathcal O}$ to $C_p$ and its sections are periodic functions $f(p\lambda)=f(\lambda)$, $\lambda \in {\mathbb R}_+^*$, which are convex, piecewise affine and whose derivatives take values in the group $H_p\subset {\mathbb R}$ of rational numbers with denominators a power of $p$. When suitably understood in conceptual terms using Cartier divisors, the notions of rational functions, divisors, etc. on $C_p$ are all meaningful. The global rational functions form a semifield ${\mathcal K}(C_p)$ (of characteristic one). A new feature of this construction is that the degree of a divisor can be any real number. We introduce an invariant $\chi(D)\in {\mathbb Z}/(p-1){\mathbb Z}$ for divisors $D$ on $C_p$ and determine, in Theorem \ref{thmjaccp}, the precise structure of the quotient ${\rm Div}(C_p)/{\mathcal P}$ of the abelian group of divisors by the subgroup of principal divisors
\begin{thm*}\label{thmjaccpintro} The map $(\deg,\chi)$ is an isomorphism of abelian groups
$$
(\deg,\chi):{\rm Div}(C_p)/{\mathcal P}\to {\mathbb R}\times ({\mathbb Z}/(p-1){\mathbb Z}).
$$
\end{thm*}
We develop, in analogy with the non-archimedean version established in \cite{Tate}, the theory of theta functions on $C_p$, starting with
the following infinite sums as the analogues\footnote{We use the notation $x\vee y$ for the max of two real numbers} of the infinite products defining classical theta functions
$$
\theta(\lambda):=\sum_{m=0}^\infty \left(0 \vee (1-p^{m}\lambda)\right)
+\sum_{m=1}^\infty \left(0 \vee (p^{-m}\lambda-1)\right).
$$
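Note that for any fixed $\lambda>0$ only finitely many terms of these sums are non-zero. Moreover, reindexing the two sums yields directly the quasi-periodicity
$$
\theta(p\lambda)=\theta(\lambda)+\lambda-1\,,\,~\forall \lambda\in {\mathbb R}_+^* \, ,
$$
which plays, in this tropical setting, the role of the functional equation of the classical theta functions.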
We define $\theta$-functions $\Theta_{h,\mu}$ for $h\in H_p$, $h>0$ and $\mu\in {\mathbb R}_+^*$. They are obtained by applying to the basic theta function $\theta(\lambda)$ defined above the symmetries associated to the various incarnations (arithmetic, relative, absolute, geometric) of the ``Frobenius" operator in this context. This part is discussed in detail in \S \ref{sectsymm}. The main output ({\it cf.}~Theorem \ref{thmtheta1}) is provided by the following
\begin{thm*} Any function $f\in {\mathcal K}(C_p)$ is canonically expressed in terms of theta
functions associated to the principal divisor of $f$, and a constant $c\in{\mathbb R}$
$$
f(\lambda):=\sum_i \Theta_{h_i,\mu_i}(\lambda)-\sum_j \Theta_{h'_j,\mu'_j}(\lambda)-h\lambda +c.
$$
\end{thm*}
For each divisor $D$ on $C_p$ we define the corresponding Riemann-Roch problem with solution space $H^0(D):=H^0(C_p,{\mathcal O}(D))$. We introduce the continuous dimension ${{\mbox{Dim}_\R}}(H^0(D))$ of this $\R_{\rm max}$-module using a limit of normalized topological dimensions and find that ${{\mbox{Dim}_\R}}(H^0(D))$ is a real number. The topological dimension used in this part is the Lebesgue covering dimension which assigns to any topological space $X$ an integer ${{\mbox{dim}_{\rm top}}}(X)\in \{0,\ldots ,\infty\}$ counting the minimal overlap of refinements of open covers. The appearance of arbitrary positive real numbers as continuous dimensions of ${{\mbox{Dim}_\R}}(H^0(D))$ is due to the density in ${\mathbb R}$ of the subgroup $H_p\subset {\mathbb Q}$ and the fact that continuous dimensions are defined as limits
$$
{{\mbox{Dim}_\R}}(H^0(D)):=\lim_{n\to \infty} p^{-n}{{\mbox{dim}_{\rm top}}}(H^0(D)^{p^n})
$$
of normalized dimensions $p^{-n}{{\mbox{dim}_{\rm top}}}(H^0(D)^{p^n})$ where $H^0(D)^{p^n}$ is a natural filtration of $H^0(D)$ involving the $p$-adic norms of the derivatives. We interpret this result as the characteristic $1$ counterpart of the statement for matroid $C^*$-algebras and the type II normalized traces as in \cite{dix}. The continuous dimensions, which can take arbitrary positive {\em real} values, appear when passing to the von Neumann algebra of type II obtained as the weak closure of the $C^*$-algebra using the trace to perform the completion. Finally, in Theorem \ref{RRperiodic} we prove that the Riemann-Roch formula holds for $C_p$
\begin{thm*}\label{RRperiodicintro}
Let $D\in {\rm Div}(C_p)$ be a divisor, then the limit
$
{{\mbox{Dim}_\R}}(H^0(D))
$ exists and
one has the Riemann-Roch formula:
$$
{{\mbox{Dim}_\R}}(H^0(D))-{{\mbox{Dim}_\R}}(H^0(-D))=\deg(D)\,,\,~\forall D\in {\rm Div}(C_p).
$$
\end{thm*}
By comparing the periodic orbit $C_p$ with a tropical elliptic curve and our Riemann-Roch theorem with the tropical Riemann-Roch theorem of \cite{BN,GK,MZ} and its variants we find several fundamental differences. First, for an elliptic tropical curve $C$ given by a circle of length $L$, the structure of the group ${\rm Div}(C)/{\mathcal P}$ of divisor classes is inserted into an exact sequence of the form
$
0\to {\mathbb R}/L{\mathbb Z}\to {\rm Div}(C)/{\mathcal P}\stackrel{\deg}{\to} {\mathbb Z}\to 0
$ ({\it cf.} \cite{MZ}\!\! ),
while for the periodic orbit $C_p$ the group of divisor classes is ${\rm Div}(C_p)/{\mathcal P}\simeq
{\mathbb R}\times ({\mathbb Z}/(p-1){\mathbb Z})$. The second fundamental difference is the appearance of continuous dimensions in our Riemann-Roch theorem.
The source for these differences is seen when one compares the structure sheaf of $C_p$
with that of the elliptic tropical curve $C:={\mathbb R}/L{\mathbb Z}$, $L=\log p$. Let us use for $C_p$ the variable $u=\log\lambda$, so that the periodicity condition $f(p\lambda)=f(\lambda)$ becomes invariance under translation by $\log p$. Then the local sections of the structure sheaf of $C_p$ are in particular piecewise affine in the parameter $\lambda$
and this condition is expressed, in the variable $u$, by the piecewise vanishing of $\Delta_2f$, where $\Delta_2$ is the elliptic translation invariant operator
\begin{equation}\label{maineq}
\Delta_2=\lambda^2\left(\frac{\partial}{\partial \lambda}\right)^2, \ \ \Delta_2(f):=(D_u^2-D_u)f, \ \ \ D_u:=\frac{\partial}{\partial u}.
\end{equation}
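Indeed, with $u=\log \lambda$ one has $\lambda\, \partial/\partial \lambda = D_u$, and hence
$$
\lambda^2\left(\frac{\partial}{\partial \lambda}\right)^2=\left(\lambda\frac{\partial}{\partial \lambda}\right)^2-\lambda\frac{\partial}{\partial \lambda}=D_u^2-D_u \, ,
$$
which accounts for the form of \eqref{maineq}.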
On the other hand, the local sections of the structure sheaf of $C$ are in particular piecewise affine in the parameter $u$, and this condition is expressed, in the variable $u$, by the piecewise vanishing of $D_u^2\, f$. Thus one readily sees that the difference between the two sheaves is due to the presence of the sub-principal term $- D_u$ in \eqref{maineq}.
\subsection*{Notations} For any abelian ordered group $H$ we denote by $H_{\rm max}=H\cup \{-\infty\}$ the semifield obtained from $H$ by applying the max-plus construction {\it i.e.\/}\ the addition is given by the max and the multiplication by the addition in $H$. Since ${\mathbb R}_{\rm max}$ is isomorphic to $\R_+^{\rm max}$ by the exponential map ({\it cf.} \cite{Gaubert}) we shall pass freely from the ``additive" notation $\R_{\rm max}$ to the ``multiplicative" one $\R_+^{\rm max}$.
\vspace*{-.5cm}
\section{The topos ${[0,\infty)\rtimes{\N^{\times}}}$}
In this section we define the topos underlying the scaling site ${\mathscr S}$ as a Grothendieck site, {\it i.e.\/}\ as a small category $\mathscr C$ endowed with a Grothendieck topology $J$. In \S \ref{sectextS} we shortly explain its structure as naturally arising from the arithmetic site ${\mathscr A}$ by extension of scalars from ${\mathbb B}$ to $\R_+^{\rm max}$. In \S \ref{sectcatC} we provide the definition of the small category $\mathscr C$ and in \S \ref{sectGtop} we describe its Grothendieck topology.
\subsection{Extension of scalars}\label{sectextS}
The arithmetic site ${\mathscr A}$ of \cite{CC,CC1} is defined using the action of $\N^{\times}$ by Frobenius endomorphisms ${\rm Fr}_n$ on the semifield ${\Z_{\rm max}}$ of characteristic one. To define the extension of scalars from ${\mathbb B}$ to $\R_+^{\rm max}$ we consider the semiring ${\mathcal R}({\mathbb Z})={\Z_{\rm max}}\hat\otimes_{\mathbb B}\R_{\rm max}$
obtained as the multiplicatively cancellative reduction of the tensor product ${\Z_{\rm max}}\otimes_{\mathbb B}\R_{\rm max}$ and we endow ${\mathcal R}({\mathbb Z})$ with the $\R_{\rm max}$-linear endomorphisms ${\rm Fr}_n\otimes {\mbox{Id}}$. Then, by applying the Legendre transform we identify ${\mathcal R}({\mathbb Z})$ with the semiring of convex piecewise affine functions on
${\mathbb R}_+$ with slopes in ${\mathbb Z}\subset {\mathbb R}$ and only finitely many discontinuities of the derivative. These functions are endowed with the pointwise
operations of functions taking values in $\R_{\rm max}$.
The operation of reduction from ${\Z_{\rm max}}\otimes_{\mathbb B}\R_{\rm max}$ to ${\mathcal R}({\mathbb Z})={\Z_{\rm max}}\hat\otimes_{\mathbb B}\R_{\rm max}$ is obtained as described in \cite{CC1} Lemma 6.20 and Proposition 6.21. More precisely, the
elements of ${\mathcal R}({\mathbb Z})$ are given by the convex hull $C$ of the union of finitely many quadrants of the form $(x_j,y_j)-Q$, for $Q={\mathbb R}_+\times {\mathbb R}_+$, where $x_j\in {\mathbb Z}$ and $y_j\in {\mathbb R}$. To determine this convex hull it is enough to know which half planes $P\subset {\mathbb R}^2$ contain it, and any such half-plane has the form
$$
P_{\lambda,u}:=\{(x,y)\in{\mathbb R}^2\mid \lambda x+y\leq u\}, \qquad P^v:=\{(x,y)\in{\mathbb R}^2\mid x\leq v\}
$$
where $\lambda \in {\mathbb R}_+$ and $u,v\in{\mathbb R}$.
Thus $C$ is uniquely determined by the function
\begin{equation*}\label{hdefn0}
\ell_C(\lambda):=\inf \{u\in{\mathbb R}\mid C\subset P_{\lambda,u}\}
\end{equation*}
and this function is given in terms of the finitely many vertices $(x_j,y_j)$ of the polygon $C$ by the formula
\begin{equation}\label{hdefn}
\ell_C(\lambda)=\max_j \lambda x_j+y_j.
\end{equation}
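As a simple example, let $C$ be the convex hull of the union of the two quadrants $(1,0)-Q$ and $(-1,1)-Q$; its vertices are $(1,0)$ and $(-1,1)$, and formula \eqref{hdefn} gives
$$
\ell_C(\lambda)=\max\{\lambda,\, 1-\lambda\} \, ,
$$
a convex piecewise affine function with slopes in ${\mathbb Z}$ and a single discontinuity of the derivative at $\lambda=1/2$.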
\begin{figure
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.3]{gathmannminus4.pdf}
\caption{An element $C$ of ${\mathcal R}({\mathbb Z})$}
\label{elem}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\includegraphics[scale=0.5]{gathmannplus5.pdf}
\caption{The Legendre transform $\ell_C(\lambda)$}
\label{lege}
\end{subfigure}
\end{figure}
Note that $\ell_C(\lambda)$ is meaningful also for $\lambda=0$ and that $\lim_{\lambda\to \infty}\, \ell_C(\lambda) /\lambda =\max x_j=\inf \{v\in {\mathbb R}\mid C\subset P^v\}$.
One then obtains the required identification
\begin{lem} \label{legendrelem}
The map $L:C\mapsto \ell_C$ where $\ell_C(\lambda)$, for $\forall \lambda\in {\mathbb R}_+$ has been defined in \eqref{hdefn}, is an isomorphism of ${\Z_{\rm max}}\hat\otimes_{\mathbb B}\R_{\rm max}$ with the semiring ${\mathcal R}({\mathbb Z})$ of continuous\footnote{Continuity is automatic for convex functions on open intervals, see \cite{Rudin}, Theorem 3.2.} convex piecewise affine functions on
${\mathbb R}_+$ with slopes in ${\mathbb Z}\subset {\mathbb R}$ and only finitely many discontinuities of the derivative. These functions are endowed with the pointwise
operations of functions with values in $\R_{\rm max}$.
\end{lem}
\proof It follows by the formula \eqref{hdefn} that the function $\ell_C$ belongs to ${\mathcal R}({\mathbb Z})$ since the slopes $x_j\in {\mathbb Z}$, and the discontinuities of the derivative only occur when there exists a pair of vertices $(x_i,y_i)\neq (x_j,y_j)$ such that $\lambda(x_i-x_j)=y_j-y_i$. Moreover $\ell_C$ is convex by construction as a supremum of finitely many affine functions. The map $C\mapsto \ell_C$ is evidently injective. The surjectivity is a consequence of the fact that an element $f\in {\mathcal R}({\mathbb Z})$ is a finite supremum of affine functions $\lambda\mapsto a\lambda+b$ with $a\in {\mathbb Z}$ and $b\in {\mathbb R}$, and thus of the form $\ell_C$ where $C$ is the convex hull of the union of quadrants $(a,b)-Q$. The pointwise
operations of functions with values in ${\mathbb R}_{\rm max}$ are given for the addition by the rule $(f,g)\mapsto f\vee g$, where $(f\vee g)(\lambda):=f(\lambda)\vee g(\lambda)=\max\{f(\lambda), g(\lambda)\}$. This corresponds to the $\max$ in \eqref{hdefn} and thus to the convex hull of the union in terms of the convex sets $C$. This shows that the map $L:C\mapsto \ell_C$ is additive. It is also multiplicative, {\it i.e.\/}\ $\ell_{C+C'}=\ell_C+\ell_{C'}$. This follows using \eqref{hdefn} and the identity $\max(A+B)=\max(A)+\max(B)$ holding for any two finite subsets $A,B\subset {\mathbb R}$.\endproof
\begin{prop}\label{propextscal} $(i)$~Under the isomorphism $L$ the endomorphism ${\rm Fr}_n\otimes {\mbox{Id}}$ of ${\Z_{\rm max}}\hat\otimes_{\mathbb B}\R_{\rm max}$ corresponds to the action of $\N^{\times}$ on ${\mathbb R}_+$ by multiplication.\newline
$(ii)$~The following map identifies the half line $[0,\infty)$ with the space of characters of ${\mathcal R}({\mathbb Z})$
$$
[0,\infty)\ni\lambda\mapsto \chi_\lambda \in {\mbox{Hom}}_{\R_{\rm max}}({\mathcal R}({\mathbb Z}),\R_{\rm max}), \qquad \chi_\lambda(f):=f(\lambda).
$$
\end{prop}
\proof $(i)$~We use multiplicative notations both for ${\Z_{\rm max}}$ and $\R_{\rm max}$ and represent elements of
${\Z_{\rm max}}\hat\otimes_{\mathbb B}{\mathbb R}_{\rm max}$ in terms of finite sums $\sum q^{x_j}\otimes_{\mathbb B} q^{y_j}$ with $x_j\in {\mathbb Z}$ and $y_j\in {\mathbb R}$. In these terms
the isomorphism $L:{\Z_{\rm max}}\hat\otimes_{\mathbb B}{\mathbb R}_{\rm max}\to {\mathcal R}({\mathbb Z})$ is such that
$$
L(\sum q^{x_j}\otimes_{\mathbb B} q^{y_j})(\lambda)=\max\{\lambda x_j+y_j\}
$$
With $X=\sum q^{x_j}\otimes_{\mathbb B} q^{y_j}$ one has
$$
L\left(({\rm Fr}_n\otimes {\mbox{Id}})(X)\right)(\lambda)=
L(\sum q^{nx_j}\otimes_{\mathbb B} q^{y_j})(\lambda)=\max\{n\lambda x_j+y_j\}=L(X)(n\lambda).
$$
$(ii)$~Let $\iota\in {\mathcal R}({\mathbb Z})$ be the function $\iota(\lambda)=\lambda$. An element $\rho\in {\mbox{Hom}}_{\R_{\rm max}}({\mathcal R}({\mathbb Z}),\R_{\rm max})$ is uniquely specified by $\rho(\iota)\in \R_{\rm max}$ and since $0\vee \iota=\iota$ and $\rho(0)=0$ (as the morphism $\rho$ preserves the multiplicative unit) one has $\rho(\iota)=\lambda\in [0,\infty)\subset \R_{\rm max}$. By multiplicativity one gets $\rho(k\iota)=k\lambda$ for any $k\in {\mathbb Z}$ and by $\R_{\rm max}$-linearity that $\rho(k\iota +y)=k\lambda+y$ for $y\in {\mathbb R}$. By additivity one then gets that for any $f=\vee(x_j\iota+y_j)\in {\mathcal R}({\mathbb Z})$, $\rho(f)=\rho(\vee(x_j\iota+y_j))=\vee (\lambda x_j+y_j)=f(\lambda)$. \endproof
\begin{rem}{\rm One has in general, for a semiring $R$ of characteristic $1$, a natural isomorphism
\begin{equation}\label{resiso}
{\rm Res}:{\mbox{Hom}}_{\R_{\rm max}}(R\hat\otimes_{\mathbb B}\R_{\rm max},\R_{\rm max})\simeq {\mbox{Hom}}_{\mathbb B}(R,\R_{\rm max})
\end{equation}
given by restriction of $\chi\in {\mbox{Hom}}_{\R_{\rm max}}(R\hat\otimes_{\mathbb B}\R_{\rm max},\R_{\rm max})$ to the canonical image of $R$.
Taking $R={\Z_{\rm max}}$, this shows that the space ${\mbox{Hom}}_{\R_{\rm max}}({\Z_{\rm max}}\hat\otimes_{\mathbb B}\R_{\rm max},\R_{\rm max})$ of characters of $ {\mathbb Z}_{\rm max}\hat\otimes_{\mathbb B}{\mathbb R}_{\rm max}$ is the same as ${\mbox{Hom}}_{\mathbb B}({\Z_{\rm max}},\R_{\rm max})$.}
\end{rem}
\subsection{The small category $\mathscr C$}\label{sectcatC}
The topos ${[0,\infty)\rtimes{\N^{\times}}}$ is defined by assigning a small category $\mathscr C$ endowed with a Grothendieck topology $J$. We first describe $\mathscr C$. The objects of $\mathscr C$ are the (possibly empty) bounded open intervals
$\Omega\subset [0,\infty)$ including those of the form $[0,a)$ for $a>0$. The morphisms between two objects of $\mathscr C$ are defined by
$$
{\mbox{Hom}}_{\mathscr C}(\Omega,\Omega')=\{n\in \N^{\times}\mid n\Omega\subset \Omega'\}
$$
if $\Omega\neq \emptyset$. By definition ${\mbox{Hom}}_{\mathscr C}(\emptyset,\Omega'):=\{*\}$ {\it i.e.\/}\ the one point set, for any object $\Omega'$ of $\mathscr C$. Thus the empty set is the initial object of $\mathscr C$.
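As a simple example of this definition, one has
$$
{\mbox{Hom}}_{\mathscr C}\big((1,2),(3,8)\big)=\{n\in \N^{\times}\mid (n,2n)\subset (3,8)\}=\{3,4\} \, ,
$$
since the conditions $n\geq 3$ and $2n\leq 8$ select exactly these two integers.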
The following lemma shows that pullbacks exist in the category $\mathscr C$
\begin{lem}\label{lemcatC}
Let $\Omega_j\neq \emptyset$ ($j=1,2$) and consider two morphisms $\phi_j:\Omega_j\stackrel{n_j}{\to} \Omega$ given by integers $n_j\in {\mbox{Hom}}_{\mathscr C}(\Omega_j,\Omega)$. Let $n={\rm lcm}(n_j)$ be their least common multiple, $n=a_jn_j$, and let $\Omega':=\{\lambda \in [0,\infty)\mid a_j\lambda \in \Omega_j, \ j=1,2\}$. Then $\Omega'$ is an object of $\mathscr C$, and if it is non-empty one has $a_j\in {\mbox{Hom}}_{\mathscr C}(\Omega',\Omega_j)$ and $(\Omega',a_j)$ is the pullback of the $\phi_j$. When $\Omega'=\emptyset$ the pullback of the $\phi_j$ is the initial object of $\mathscr C$.
\end{lem}
\proof By construction $\Omega'=\cap_{j=1}^2 a_j^{-1}\Omega_j$ is an intersection of bounded open intervals and is thus an object of $\mathscr C$. Let $\psi_j:W\stackrel{k_j}{\to} \Omega_j$ be morphisms such that $\phi_1\circ \psi_1=\phi_2\circ \psi_2$. If $W\neq \emptyset$ it contains a $\lambda\neq 0$ and one has $n_1k_1=n_2k_2=mn$ for a unique $m\in \N^{\times}$. One has $k_j=ma_j$ and $k_jW\subset \Omega_j$ so that $mW\subset \Omega'$. Moreover the map $\psi_j:W\stackrel{k_j}{\to} \Omega_j$ is the composite of $W\stackrel{m}{\to} \Omega'$ with $a_j\in {\mbox{Hom}}_{\mathscr C}(\Omega',\Omega_j)$.
This shows that $(\Omega',a_j)$ is the pullback of the $\phi_j$. It also shows that if there exists $W\neq \emptyset$ and morphisms $\psi_j:W\stackrel{k_j}{\to} \Omega_j$
such that $\phi_1\circ \psi_1=\phi_2\circ \psi_2$, then $\Omega'\neq \emptyset$. Otherwise {\it i.e.\/}\ if this implies $W=\emptyset$ then one easily sees that the empty set is indeed the pullback. \endproof
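As a concrete illustration of Lemma \ref{lemcatC}, take $\Omega=(2,10)$, $\Omega_1=(1,2)$, $\Omega_2=(1,3)$ with $n_1=2$, $n_2=3$. Then $n={\rm lcm}(2,3)=6$, $a_1=3$, $a_2=2$, and
$$
\Omega'=3^{-1}(1,2)\cap 2^{-1}(1,3)=(1/3,2/3)\cap(1/2,3/2)=(1/2,2/3) \, ,
$$
and indeed $3\,\Omega'=(3/2,2)\subset \Omega_1$ while $2\,\Omega'=(1,4/3)\subset \Omega_2$.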
\subsection{The topology $J$ on $\mathscr C$}\label{sectGtop}
A Grothendieck topology $J$ ({\it cf.}~ \cite{MM} Definition III.2.1) on a small category $\mathscr C$ associates to every object
$\Omega$ of the category a collection $J(\Omega)$ of sieves of $\Omega$, ({\it i.e.\/}\ of families, stable under right composition, of morphisms with codomain $\Omega$), such that:
\vspace{.05in}
$\blacktriangleright$~The maximal sieve $\{f\mid {\mbox{Codom}} f=\Omega\}$ belongs to $J(\Omega)$
\vspace{.05in}
$\blacktriangleright$~$S\in J(\Omega)$, $h\in {\mbox{Hom}}_{\mathscr C}(\Omega',\Omega)$ $\Rightarrow$ $h^*(S)\in J(\Omega')$, where
$
h^*(S):=\{f\mid h\circ f\in S\}
$
$\blacktriangleright$~For $S\in J(\Omega)$, and any sieve $R$ of $\Omega$
$$
h^*(R)\in J({\rm Dom}\, h), \ \forall h\in S\ \Rightarrow R\in J(\Omega).
$$
When the small category $\mathscr C$ admits pullbacks one can associate a Grothendieck topology $J$ to a basis $K$, {\it i.e.\/}\ a function which assigns to any object $\Omega$ a collection $K(\Omega)$ of families of morphisms with codomain $\Omega$ by the condition
$$
S\in J(\Omega) \iff \exists R\in K(\Omega), \ \ R\subset S.
$$
The above three conditions on $J$ are derived from the following three conditions on $K$: ({\it cf.}~ \cite{MM} Definition III.2.2):
\begin{enumerate}
\item For any isomorphism $f$ with range $\Omega$ the singleton $\{f\}$ is a covering.
\item The pullback of a covering of $\Omega$ by any morphism $\Omega'\to\Omega$ is a covering of $\Omega'$.
\item Given a covering $(\Omega_j)_{j\in I}$ of $\Omega$ and for each $j\in I$ a covering $\Omega_{ij}$ of $\Omega_j$, the family of composites $\Omega_{ij}\to \Omega_j \to \Omega$ is a covering of $\Omega$.
\end{enumerate}
\begin{prop}\label{proptop}
$(i)$~For each object $\Omega$ of $\mathscr C$, let $K(\Omega)$ be the collection of all ordinary covers $\{\Omega_i\subset \Omega, i\in I\mid \cup \Omega_i=\Omega\}$ of $\Omega$. Then $K$ defines a basis for a Grothendieck topology $J$ on $\mathscr C$.\newline
$(ii)$~The Grothendieck topology $J$ is subcanonical.\newline
$(iii)$~The category $\mathfrak{Sh}(\mathscr C,J)$ of sheaves of sets on $(\mathscr C,J)$ is canonically isomorphic to the category of $\N^{\times}$-equivariant sheaves of sets on $[0,\infty)$.
\end{prop}
\proof $(i)$~The only isomorphisms in $\mathscr C$ are the identity maps, thus one verifies 1. To check the condition 2., we let $\Omega' \stackrel{n}{\to} \Omega$ be a morphism in $\mathscr C$ and
$\{\Omega_i\subset \Omega, i\in I\mid \cup \Omega_i=\Omega\}$ a covering of $\Omega$. Then it follows from Lemma \ref{lemcatC} that the pullback of the cover is given by
$$
\left(\pi_2:\Omega_i\times_\Omega \Omega'\to \Omega' \right)\simeq \left(n^{-1}\Omega_i\cap \Omega'\to \Omega' \right).
$$
This defines a covering of $\Omega'$. Finally the condition 3. is a standard fact on ordinary covers of a topological space (here chosen to be $[0,\infty)$).\newline
$(ii)$~We prove that any representable presheaf is a sheaf on $(\mathscr C,J)$. For a fixed object $\Omega$ of $\mathscr C$ and an arbitrary open subset $U$ of $[0,\infty)$ one sets
$$
\Gamma(U):={\mbox{Hom}}_{\mathscr C}(U,\Omega)=\{n\in \N^{\times}\mid nU\subset \Omega\}.
$$
This determines the subsheaf of the constant sheaf $\N^{\times}$ which is given by the local condition around $\lambda$: $\{n\in \N^{\times}\mid n\lambda \in \Omega\}$. \newline
$(iii)$~An $\N^{\times}$-equivariant sheaf (of sets) on $[0,\infty)$ gives by restriction to $\mathscr C$ an object of $\mathfrak{Sh}(\mathscr C,J)$. Conversely let ${\mathcal F}$ be an object of $\mathfrak{Sh}(\mathscr C,J)$, and $U$ an arbitrary open subset of $[0,\infty)$. Let $\{\Omega_i\subset U, i\in I\mid \cup \Omega_i=U\}$ be a covering of $U$ by bounded open intervals. Take
the limit $\varprojlim {\mathcal F}(\Omega_i\cap\Omega_j)$ in $\frak{ Sets}$ of the diagram ${\mathcal F}(\Omega_i\cap\Omega_j)$ indexed by pairs $(i,j)\in I^2$, $\Omega_i\cap \Omega_j\neq \emptyset$ and arrows $(i,j)\to (i,i)$, $(i,j)\to (j,j)$. This is the equalizer of the two maps
$$
\prod_{i\in I} {\mathcal F}(\Omega_i)\stackrel{p_\ell}{\rightrightarrows}\prod_{i,j}{\mathcal F}(\Omega_i\cap\Omega_j).
$$
Since ${\mathcal F}$ is an object of $\mathfrak{Sh}(\mathscr C,J)$ the above equalizer does not depend upon the choice of the covering of $U$ by bounded open intervals, and defines a sheaf of sets on $[0,\infty)$ endowed with an action of $\N^{\times}$ compatible with the action of $\N^{\times}$ on $[0,\infty)$.\endproof
\begin{rem}\label{empty3}{\rm The setting ${\mbox{Hom}}_{\mathscr C}(\emptyset,\Omega)= \{*\}$ is imposed if one wants the Grothendieck topology $J$ to be subcanonical. Indeed, any sheaf evaluated on the empty set gives the one point set $\{*\}$.
}
\end{rem}
\section{The points of the topos ${[0,\infty)\rtimes{\N^{\times}}}$} \label{sectptsss}
In this section we investigate the structure of the points of the topos ${[0,\infty)\rtimes{\N^{\times}}}$ and prove Theorem \ref{scaltop}
which provides a canonical bijection between the space of these points (up to isomorphism) and the sector ${\mathbb Q}^\times\backslash{\mathbb A}_{\mathbb Q}/\hat{\mathbb Z}^*$ of the adele class space of ${\mathbb Q}$.
\subsection{Flatness and continuity}
It follows from \cite{MM} Corollary VII.5.4, that the points of the topos $\mathfrak{Sh}(\mathscr C,J)$ correspond by means of an equivalence of categories to continuous, flat functors $F:\mathscr C\longrightarrow \frak{ Sets}$. Moreover again from \cite{MM} Theorem VII.6.3, one knows that a functor $F:\mathscr C\longrightarrow \frak{ Sets}$ is flat iff it is filtering, {\it i.e.\/}\ $F$ fulfills the three conditions reported in the following definition
\begin{defn} \label{defnfiltering} A functor $F:\mathscr C\longrightarrow \frak{ Sets}$ is filtering iff
\begin{enumerate}
\item $F(C)\neq \emptyset$ for some object $C$ of $\mathscr C$.
\item Given $a_j\in F(C_j)$, $j=1,2$, there exists an object $C$ of $\mathscr C$, an element $a\in F(C)$ and morphisms $u_j:C\to C_j$ such that $F(u_j)a=a_j$.
\item Given two morphisms $u,v:C\to D$ in $\mathscr C$ and $a\in F(C)$ such that $F(u)a=F(v)a$, there exists an object $B$ of $\mathscr C$, an element $b\in F(B)$ and a morphism $w:B\to C$ of $\mathscr C$ such that $u\circ w=v\circ w$ and $F(w)b=a$.
\end{enumerate}
\end{defn}
By \cite{MM}, Lemma VII.5.3, a flat functor $F:\mathscr C\longrightarrow \frak{ Sets}$ is continuous iff it sends covering sieves to epimorphic families.
\begin{lem}\label{lemcont1} Let $F:\mathscr C\longrightarrow \frak{ Sets}$ be a flat functor and let $J$ be the Grothendieck topology on $\mathscr C$ generated by a basis $K$. Then $F$ is continuous iff for any object $U$ of $\mathscr C$ and any covering $(U_j\to U)_{j\in I}\in K(U)$, the family of maps $F(U_j\to U):F(U_j)\to F(U)$ is jointly surjective.
\end{lem}
\proof Since any covering sieve $S\in J(U)$ contains a covering from the basis, the condition of the Lemma implies that $F$ sends covering sieves to epimorphic families. Conversely let $R\in K(U)$, then the associated covering sieve $(R)\in J(U)$
$$
(R):=\{f\circ g\mid f\in R, \ {\rm Dom} \, f={\mbox{Codom}}\, g\}
$$
gives an epimorphic family $F(f\circ g)$ iff the family $F(f)$ is jointly surjective.\endproof
\subsection{The point $\mathfrak{p}_H$ associated to a rank one subgroup of ${\mathbb R}$}
The next Proposition shows that any (non-trivial) rank one subgroup $H\subset {\mathbb R}$ defines a point of the topos ${[0,\infty)\rtimes{\N^{\times}}}$.
\begin{prop} \label{proppoint}
$(i)$~Let $H$ be a (non-trivial) rank one subgroup of ${\mathbb R}$, then the equality $F_H(V):=V\cap H \cap (0,\infty)$ defines a flat continuous functor $F_H: \mathscr C\longrightarrow \frak{ Sets}$.
$(ii)$~The map $H\mapsto \mathfrak{p}_H$ which associates to a rank one subgroup of ${\mathbb R}$ the point of the topos ${[0,\infty)\rtimes{\N^{\times}}}$ represented by the flat continuous functor $F_H$ provides an injection of the space of (non-trivial) rank one subgroups of ${\mathbb R}$ into the space of points of the topos ${[0,\infty)\rtimes{\N^{\times}}}$ up to isomorphism.
\end{prop}
\proof $(i)$~We set $H_+:=H \cap (0,\infty)$. Let $V \stackrel{n}{\to} W$ be a morphism in $\mathscr C$, then since $n H_+\subset H_+$ the equality $F_H(V)=V\cap H_+$ defines a subfunctor of the covariant functor $\mathscr C\longrightarrow \frak{ Sets}$, $V\mapsto V$. Given a covering $(U_j\subset U)_{j\in I}\in K(U)$ the family of maps $F_H(U_j\to U):F_H(U_j)\to F_H(U)$ is jointly surjective since $U\cap H_+=\cup(U_j\cap H_+)$, thus Lemma \ref{lemcont1} shows that $F_H$ is continuous. We show that $F_H$ is filtering. Since $H$ is non-trivial, $H_+\neq \emptyset$ and this gives condition $1$. Next, given $h_j\in V_j\cap H_+$ we let $h\in H_+$, $n_j\in \N^{\times}$ be such that $h_j=n_j h$. Let $V$ be an open interval containing $h$ and such that $n_j V\subset V_j$. This defines an object $V$ of $\mathscr C$. One has $h\in V\cap H_+=F_H(V)$ and the morphisms $u_j:V \stackrel{n_j}{\to} V_j$ fulfill $F_H(u_j)h=h_j$. This gives condition $2$. Finally, if $u:C \stackrel{n}{\to} D$, $v:C \stackrel{m}{\to} D$ are two morphisms in $\mathscr C$ and $a\in F_H(C)$ is such that $F_H(u)a=F_H(v)a$, then since $a>0$ one gets that $n=m$ and thus the condition $3$ holds. \newline
$(ii)$~Let $F_H(V):=V\cap H_+$ be the continuous flat functor associated to $H\subset {\mathbb R}$. Given a point $\lambda\in (0,\infty)$, we let $V_j$ be a basis of neighborhoods of $\lambda$ of bounded open intervals. Then one has
$$
\varprojlim_{U\ni \lambda} F_H(U)\neq \emptyset \iff \cap F_H(V_j)\neq \emptyset \iff \lambda \in H.
$$
This shows that one can recover the subgroup $H\subset {\mathbb R}$ from the continuous flat functor $F_H$. Moreover it shows that a morphism of functors from $F_H$ to $F_{H'}$ exists iff $H\subset H'$ and hence that the isomorphism class of the point $\mathfrak{p}_H$ uniquely determines $H\subset {\mathbb R}$.\endproof
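As a simple illustration, for $H={\mathbb Z}$ one has $F_{\mathbb Z}(V)=V\cap \N^{\times}$ for every object $V$ of $\mathscr C$, while for $H=H_p={\mathbb Z}[1/p]$ one gets $F_{H_p}(V)=V\cap H_p\cap (0,\infty)$. In the first case the criterion used in the proof of $(ii)$ detects for instance that $\frac32\notin {\mathbb Z}$: the object $V=(\frac54,\frac74)$ contains $\frac32$ and satisfies $F_{\mathbb Z}(V)=\emptyset$, so that $\varprojlim_{U\ni \frac32} F_{\mathbb Z}(U)=\emptyset$.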
\subsection{Classification of points of the topos ${[0,\infty)\rtimes{\N^{\times}}}$}
The main result of this subsection (Theorem~\ref{scaltop}) states the existence of a canonical isomorphism between the points $\mathscr A(\R_+^{\rm max})$ of the arithmetic site over $\R_+^{\rm max}$ and the points of the topos ${[0,\infty)\rtimes{\N^{\times}}}$. To this end we shall need to state first several technical lemmas.
\begin{lem}\label{lemflatcont1}Let $F:\mathscr C\longrightarrow \frak{ Sets}$ be a continuous flat functor. Then the following facts hold\newline
$(i)$~Let $U_j\subset V$ ($j=1,2$) be objects of $\mathscr C$ with $U_1\cap U_2=\emptyset$. Then the images $F(U_j\to V)F(U_j)\subset F(V)$ are disjoint.\newline
$(ii)$~Let $V$ be an object of $\mathscr C$ and $x\in F(V)$. Then there exists a unique $\lambda\in V$ such that for any object $U$ of $\mathscr C$, $U\subset V$, containing $\lambda$ one has $x\in F(U\to V)F(U)\subset F(V)$.
\end{lem}
\proof $(i)$~Since $F$ is flat it commutes with fibered products. The fibered product of the two maps $U_j\to V$ is the empty set. The object $\emptyset$ of $\mathscr C$ admits the empty cover. Thus by continuity one has $F(\emptyset)=\emptyset$. One can also give the following argument. Let $a_j\in F(U_j)$ with $ F(U_1\to V)(a_1)=F(U_2\to V)(a_2)=z$. Then the flatness of $F$ gives an object $W$ of $\mathscr C$, an element $c\in F(W)$ and morphisms $W \stackrel{n_j}{\to} U_j$ such that $F(W \stackrel{n_j}{\to} U_j)(c)=a_j$. By composition with $U_j\to V$ one gets $F(W \stackrel{n_j}{\to} V)(c)=z$. The third filtering condition on $F$ implies, since $\N^{\times}$ is simplifiable, that $n_1=n_2$ which contradicts $n_jW\subset U_j$ and $U_1\cap U_2=\emptyset$. \newline
$(ii)$~We show the existence of $\lambda$, its uniqueness then follows from $(i)$ since distinct points have disjoint neighborhoods given by objects of $\mathscr C$. Let $V=\cup W_j$ where $W_j$ is an increasing family of bounded open intervals such that $\overline{ W_j}\subset W_{j+1}$. Then the continuity of $F$ gives an interval $W=W_j$, with $\overline{W}\subset V$ such that $x\in F(W\to V)F(W)$, {\it i.e.\/}\ $x=F(W\to V)z$, $z\in F(W)$. Using a cover ${\mathcal U}$ of $W$ by bounded open intervals one obtains, by continuity of $F$, an interval $I_1\subset W$ of diameter $<1/2$ and $z_1\in F(I_1)$ such that $z= F(I_1\to W)z_1$. By induction one gets a decreasing sequence of intervals $I_k\subset I_{k-1}\subset W$ of diameter $<1/2^k$ and $z_k\in F(I_k)$ such that $z_{k-1}= F(I_k\to I_{k-1})z_k$. Let $\lambda$ be the unique limit point of the sequence $I_k$, {\it i.e.\/}\ the limit of any sequence $\lambda_k\in I_k$. One has $\lambda\in\overline{W}\subset V$. Let $U\subset V$ be an object of $\mathscr C$ containing $\lambda$. Then $U$ is an open neighborhood of $\lambda$ and there exists $k$ such that $I_k\subset U$. One then gets
$$
x=F(W\to V)z=F(I_k\to V)z_k=F(U\to V)F(I_k\to U)z_k\in F(U\to V)F(U)
$$
\endproof
In what follows we shall denote by $\lambda_V:F(V)\to V$ the map defined in Lemma \ref{lemflatcont1}.
\begin{lem}\label{lemflatcont2}Let $F:\mathscr C\longrightarrow \frak{ Sets}$ be a continuous flat functor. \newline
$(i)$~The maps $\lambda_V:F(V)\to V$ define a natural transformation from $F$ to the functor $\mathscr C\longrightarrow \frak{ Sets}$, $V\mapsto V$.\newline
$(ii)$~The maps $\lambda_V:F(V)\to V$ are injective when $0\notin V$.
\end{lem}
\proof $(i)$~Let $U\stackrel{n}{\to} V$ be a morphism in $\mathscr C$. Let $x\in F(U)$, $y=F(U\stackrel{n}{\to} V)x\in F(V)$. For any object $W\subset U$ of $\mathscr C$ containing $\lambda_U(x)$ one has $x\in F(W\to U)F(W)$ and thus
$$
y\in F(U\stackrel{n}{\to} V)F(W\to U)F(W)= F(W\stackrel{n}{\to} V)F(W)
=F(nW\to V)F(W\stackrel{n}{\to} nW)F(W).
$$
This shows that $y\in F(nW\to V)F(nW)$. Thus if $\lambda_V(y)\neq n \lambda_U(x)$ one obtains a contradiction by Lemma \ref{lemflatcont1} $(i)$.
We have thus shown that the following diagram commutes
\begin{equation*}\label{1o}
\xymatrix{
F(U) \ar[d]^{\lambda_U} \ar[rr]^{F(U\stackrel{n}{\to} V)}&& \ar[d]^{\lambda_V} F(V) \\
U \ar[rr]^{n}&& V
}
\end{equation*}
$(ii)$~Let $x_j\in F(V)$. By the flatness of $F$ there exists an object $W$ of $\mathscr C$, an element $c\in F(W)$ and morphisms $W \stackrel{n_j}{\to} V$ such that $F(W \stackrel{n_j}{\to} V)(c)=x_j$. Since $0\notin V$ by hypothesis, one has $\lambda_V(x_1)\neq 0$. Assume now that $\lambda_V(x_1)=\lambda_V(x_2)$. By $(i)$ one has $\lambda_V(x_j)=n_j\lambda_W(c)$ and this implies $\lambda_W(c)\neq 0$ (since $\lambda_V(x_1)\neq 0$) and $n_1=n_2$. One then gets $x_1=x_2$ since one has $x_j=F(W \stackrel{n_j}{\to} V)(c)$.\endproof
\begin{lem}\label{lemflatcont3}Let $F:\mathscr C\longrightarrow \frak{ Sets}$ be a continuous flat functor. \newline
$(i)$~Let $\lambda>0$ be a positive real number. Then, for objects $V$ of $\mathscr C$, one has
$$
\exists V\mid \lambda\in \lambda_V(F(V))\iff \forall V\ni \lambda, \ \lambda\in \lambda_V(F(V))
$$
$(ii)$~The subset of $(0,\infty)$ defined by the above condition is of the form $H\cap (0,\infty)$, where $H\subset {\mathbb R}$ is a rank one subgroup.
\end{lem}
\proof $(i)$~It is enough to show the implication $\Rightarrow$. Let $V$ and $x\in F(V)$ be such that $\lambda=\lambda_V(x)$. Then for any object $U$ of $\mathscr C$, $U\subset V$, containing $\lambda$ one has $x\in F(U\to V)F(U)\subset F(V)$. Let then $W$ be an object of $\mathscr C$ containing $\lambda$. Let $U=W\cap V$ and $z\in F(U)$ be such that $x=F(U\to V)z$. One has by Lemma \ref{lemflatcont2} $(i)$, $\lambda_W(F(U\to W)z)=\lambda_U(z)$ and $\lambda_U(z)=\lambda_V(F(U\to V)z)=\lambda_V(x)=\lambda$. Thus one gets $\lambda\in \lambda_W(F(W))$ as required.
$(ii)$~Let $E=\{\lambda>0\mid \lambda\in \lambda_V(F(V))\,,\,~\forall V\ni \lambda\}$. Let $\lambda_j\in E$, $j=1,2$, let $V$ be an object of $\mathscr C$ containing the $\lambda_j$, and
$x_j\in F(V)$ such that $\lambda_j=\lambda_V(x_j)$. By the flatness of $F$ there exists an object $W$ of $\mathscr C$, an element $c\in F(W)$ and morphisms $W \stackrel{n_j}{\to} V$ such that $F(W \stackrel{n_j}{\to} V)(c)=x_j$. By Lemma \ref{lemflatcont2} $(i)$, one has $\lambda_V(x_j)=n_j\lambda_W(c)$. By $(i)$ one has $\lambda_W(c)\in E$. This shows that given any two elements $\lambda_j\in E$ there exists $\lambda \in E$ and integers $n_j\in \N^{\times}$ such that $n_j\lambda =\lambda_j$, $j=1,2$. Moreover Lemma \ref{lemflatcont2} $(i)$ shows that $\lambda\in E\Rightarrow n\lambda \in E$, $\forall n\in \N^{\times}$. Thus $E$ is an increasing union of subsets of the form $\lambda_k \N^{\times}$. Let
$H=\cup \lambda_k{\mathbb Z}$ be the corresponding increasing union of subgroups of ${\mathbb R}$. Then $H$ is a rank one subgroup of ${\mathbb R}$ and $E=(0,\infty)\cap H$ by construction. \endproof
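As an illustration, for the functor $F_H$ of Proposition \ref{proppoint} one checks directly that $\lambda_V(x)=x$ for any $x\in F_H(V)=V\cap H_+$, since $x\in F_H(U\to V)F_H(U)$ for every object $U\subset V$ containing $x$. The set $E$ defined in the above proof is then $E=H\cap (0,\infty)$, so that Lemma \ref{lemflatcont3} recovers in this case the subgroup $H$ defining the functor $F_H$.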
\begin{lem}\label{lemflatcont4}Let $F:\mathscr C\longrightarrow \frak{ Sets}$ be a continuous flat functor. \newline
$(i)$~One has
$$
\exists V\mid 0\in \lambda_V(F(V))\iff \forall V, \ \lambda_V(F(V))=V\cap \{0\}
$$
$(ii)$~If the above equivalent conditions do not hold then there exists a rank one subgroup $H\subset {\mathbb R}$ and an isomorphism of functors $F\simeq F_H$.
\end{lem}
\proof $(i)$~Let $V$ be such that $0\in \lambda_V(F(V))$. One then has $0\in V$. Moreover the proof of Lemma \ref{lemflatcont3} $(i)$ applies and shows that for any object $W$ of $\mathscr C$ containing $0$ one has $0\in \lambda_W(F(W))$. Let $W$ be an object of $\mathscr C$ and assume that some $\lambda>0$ belongs to
$\lambda_W(F(W))$. Let $U$ be an object of $\mathscr C$ containing both $V$ and $W$. Then by Lemma \ref{lemflatcont2} $(i)$, one has $\{0,\lambda\}\subset \lambda_U(F(U))$. Let $x_j\in F(U)$ with $\lambda_U(x_1)=0$, $\lambda_U(x_2)=\lambda$. Then the flatness of $F$ gives an object $U'$ of $\mathscr C$, an element $c\in F(U')$ and morphisms $U' \stackrel{n_j}{\to} U$ such that $F(U'\stackrel{n_j}{\to} U)(c)=x_j$. By Lemma \ref{lemflatcont2} $(i)$, one has $\lambda_U(x_j)=n_j\lambda_{U'}(c)$. But $\lambda_U(x_1)=0$ implies that $\lambda_{U'}(c)=0$ and this contradicts $\lambda_U(x_2)=\lambda\neq 0$. Thus it follows that for any object $W$ of $\mathscr C$, $\lambda_W(F(W))$ contains at most the element $0$, and it contains $0$ if and only if $0\in W$. \newline
$(ii)$~If the condition of $(i)$ does not hold, it follows that for any object $V$ of $\mathscr C$ one has $0\notin \lambda_V(F(V))$. It follows that the canonical map $\rho_V=F(V\cap (0,\infty)\to V): F(V\cap (0,\infty))\to F(V)$ is surjective. But by Lemma \ref{lemflatcont2}, $(ii)$, and the commutation with the localization $\lambda_V$ this map is injective. By Lemma \ref{lemflatcont3} $(ii)$ there exists a rank one subgroup $H\subset {\mathbb R}$ such that for any $U$ not containing $0$ one has $\lambda_U(F(U))=U\cap H$. By Lemma \ref{lemflatcont2}, $(ii)$, the composite
$$
\lambda_{V\cap (0,\infty)}\circ \rho_V^{-1}: F(V)\to V\cap (0,\infty)\cap H
$$
gives an isomorphism of functors $F\simeq F_H$. \endproof
\begin{lem}\label{lemflatcont5}Let $F:\mathscr C\longrightarrow \frak{ Sets}$ be a continuous flat functor such that $\lambda_V(F(V))=V\cap \{0\}$, $\forall V$. \newline
$(i)$~One has $F(V)=\emptyset$ if $0\notin V$ and if $0\in V$ then the canonical map
$p_V$ from $X:=\varprojlim_{W\ni 0} F(W)$ to $F(V)$ is bijective.\newline
$(ii)$~There exists a unique action $n\mapsto X(n)$ of $\N^{\times}$ on $X$ such that for any morphism $U \stackrel{n}{\to} V$ of objects containing $0$ the following diagram commutes :
\begin{equation}\label{1o1}
\xymatrix{
X\ar[d]^{p_U} \ar[rr]^{X(n)}&& \ar[d]^{p_V} X \\
F(U) \ar[rr]^{F(U\stackrel{n}{\to} V)}&& F(V)
}
\end{equation}
$(iii)$~With the above notations, the action of $\N^{\times}$ on $X$ defines a point of the topos ${\widehat{\N^{\times}}}$.\newline
$(iv)$~Let $H$ be an abstract rank one ordered group. The following defines a flat continuous functor $F'_H:\mathscr C\longrightarrow \frak{ Sets}$
$$
F'_H(V)=\emptyset \ \text{if} \ 0\notin V, \ \ F'_H(V)=H_+ \ \text{if} \ 0\in V.
$$
\end{lem}
\proof $(i)$~If $0\notin V$ one has $\lambda_V(F(V))=\emptyset$ and hence $F(V)=\emptyset$. Let $U\subset V$ with $0\in U$. The map $F(U\to V):F(U)\to F(V)$ is surjective since $\lambda_V(F(V))=\{0\}$. Let us show that it is injective. Let $x_j\in F(U)$ be such that $F(U\to V)(x_j)=z$. By the flatness of $F$ there exists an object $W$ of $\mathscr C$, an element $c\in F(W)$ and morphisms $W \stackrel{n_j}{\to} U$ such that $F(W \stackrel{n_j}{\to} U)(c)=x_j$. One then has $F(W \stackrel{n_j}{\to} V)(c)=z$ and the flatness of $F$ shows, since $\N^{\times}$ is simplifiable, that $n_1=n_2$ so that $x_1=x_2$. Thus the map $F(U\to V):F(U)\to F(V)$ is bijective and the projective limit
$X:=\varprojlim_{W\ni 0} F(W)$ is such that all maps $p_V: X\to F(V)$ are bijective. \newline
$(ii)$~The inclusions $U\subset V$, $nU\subset nV$ form a commutative square with the maps $U \stackrel{n}{\to} nU$, $V \stackrel{n}{\to} nV$. It follows that the map from $X$ to $X$ such that
$$
F(U \stackrel{n}{\to} nU)p_U(x)=p_{nU}(X(n)x)
$$
is independent of the choice of $U$ containing $0$ and turns \eqref{1o1} into commutative diagrams. \newline
$(iii)$~The set $X$ is non-empty since otherwise one would have $F(V)=\emptyset$, $\forall V$. Let $x_j\in X$ and $u_j\in F(U)$ ($0\in U$) such that $p_U(x_j)=u_j$. By the flatness of $F$ there exists an object $W$ of $\mathscr C$, an element $c\in F(W)$ and morphisms $W \stackrel{n_j}{\to} U$ such that $F(W \stackrel{n_j}{\to} U)(c)=u_j$. One has $0\in W$. Let $x\in X$ such that $p_W(x)=c$, using \eqref{1o1} one gets $X(n_j)x=x_j$. Thus the action of $\N^{\times}$ on $X$ verifies the second filtering condition. We now check the third filtering condition. Let $x\in X$ and $n_1\neq n_2$ be such that $X(n_1)x=X(n_2)x$. Let $0\in U$, $u=p_U(x)$. Let $V\supset n_jU$. One then gets an equality of the form
$$
F(U \stackrel{n_1}{\to} V)(u)=F(U \stackrel{n_2}{\to} V)(u)
$$
and the flatness of $F$ shows that $n_1=n_2$. This shows that the action of $\N^{\times}$ on $X$ verifies the three filtering conditions.\newline
$(iv)$~One checks that $F'_H$ is continuous, its flatness follows from the rank one property of $H$. \endproof
We are now ready to define a map $\Theta$ from points of ${\mathscr A}(\R_+^{\rm max})$ to points of the topos ${[0,\infty)\rtimes{\N^{\times}}}$. As shown in \cite{CC1} Theorem 3.8, the points of the arithmetic site ${\mathscr A}$ over $\R_+^{\rm max}$ form the union of two sets: the set ${\mathscr A}({\mathbb B})\subset {\mathscr A}(\R_+^{\rm max})$ of isomorphism classes of points of ${\widehat{\N^{\times}}}$, and the set of the non-trivial rank one subgroups $H\subset {\mathbb R}$. To a point of ${\mathscr A}({\mathbb B})\subset {\mathscr A}(\R_+^{\rm max})$ associated with an abstract rank one ordered group $H$ we assign the point $\mathfrak{q}_H$ of ${[0,\infty)\rtimes{\N^{\times}}}$ associated to the flat continuous functor $F'_H$ of
Lemma \ref{lemflatcont5}, $(iv)$.
Next, to the point of ${\mathscr A}(\R_+^{\rm max})\setminus {\mathscr A}({\mathbb B})$ corresponding to the rank one subgroup $H\subset {\mathbb R}$, we associate the point $\mathfrak{p}_H$ of Proposition \ref{proppoint}.
\begin{thm}\label{scaltop}The map $\Theta$ defines a canonical isomorphism of ${\mathscr A}(\R_+^{\rm max})$ with the points of the topos ${[0,\infty)\rtimes{\N^{\times}}}$.
\end{thm}
\proof The map $\Theta$ is well defined. Lemma \ref{lemflatcont5} shows that it is bijective from points of ${\mathscr A}({\mathbb B})\subset {\mathscr A}(\R_+^{\rm max})$ to
points of ${[0,\infty)\rtimes{\N^{\times}}}$ such that the associated flat continuous functor $F:\mathscr C\longrightarrow \frak{ Sets}$ fulfills the hypothesis of Lemma \ref{lemflatcont5}. Proposition \ref{proppoint} shows that $\Theta$ is injective from points of ${\mathscr A}(\R_+^{\rm max})\setminus {\mathscr A}({\mathbb B})$ to
points of ${[0,\infty)\rtimes{\N^{\times}}}$. Finally, Lemma \ref{lemflatcont4} shows that $\Theta$ is surjective.\endproof
\subsection{Representation of points as filtering colimits of representables}\label{sectrep}
Let $\mathscr C$ be a small category. Any covariant functor $F:\mathscr C\longrightarrow \frak{ Sets}$ which is
representable, {\it i.e.\/}\ of the form $y_I(C)={\mbox{Hom}}_\mathscr C(I,C)$ for some object $I$ of $\mathscr C$, is flat, and any flat functor is obtained as a filtering colimit of such representable functors. In this subsection we describe such representations for the flat continuous functors associated to points of the topos ${[0,\infty)\rtimes{\N^{\times}}}$. This result will be used in the description of the stalks of the sheaves given in \S~\ref{sectsheaf}.
\begin{lem} \label{limlim1}
Let $H\subset {\mathbb R}$ be a rank one subgroup of ${\mathbb R}$. Let $h_j\in H_+$ be a sequence of elements and $n_j\in\N^{\times}$ be such that $n_jh_{j+1}= h_j$ and $H=\cup h_j{\mathbb Z}$. Let $I_j$ be bounded open intervals with $h_j\in I_j$ for $j\geq 1$, such that
\begin{equation}\label{gofast}
n_j\overline{ I_{j+1}}\subset I_j, \ \forall j\geq 1, \ \
\,(\prod_1^{k-1} n_i)\, {\rm Diameter}(I_k)\to 0, \ \ \text{when}\, \ k\to \infty.
\end{equation}
Then the limit $\varinjlim y_{I_j}$ of the representable functors $y_{I_j}(V):={\mbox{Hom}}_\mathscr C(I_j,V)$ defines the point
$\mathfrak{p}_H$ of the topos ${[0,\infty)\rtimes{\N^{\times}}}$.
\end{lem}
\proof We show that the functor $\varinjlim y_{I_j}$ is simply given by
$$
\varinjlim y_{I_j}(V)=\varinjlim( h_j^{-1} V\cap \N^{\times})\sim \varinjlim( V\cap h_j \N^{\times})=V\cap H_+.
$$
It is enough to prove that the natural inclusion, due to $h_j\in I_j$
$$
\varinjlim y_{I_j}(V)\subset \varinjlim( h_j^{-1} V\cap \N^{\times})
$$
(in both cases one uses multiplication by $n_j$ to organize the inductive system) is a bijection.
Indeed, let $n\in h_k^{-1} V\cap \N^{\times}$: we show that for $j$ large enough one has $n\prod_k^{j-1} n_i\in y_{I_j}(V)$. Let $\epsilon>0$ be a positive real number such that the neighborhood $W$ of $h_k n$ of radius $\epsilon$ is contained in $V$. Then for $j$ large enough, using the hypothesis \eqref{gofast}, one gets that $n\prod_k^{j-1} n_i\, I_j\subset W$ (since it contains $h_kn$ and is of small enough diameter) and hence $n\prod_k^{j-1} n_i\in y_{I_j}(V)$. \endproof
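As a concrete instance of the data $(h_j,n_j,I_j)$, for $H={\mathbb Q}$ one may take $h_j=1/j!$, $n_j=j+1$ and for $I_j$ the open interval centered at $h_j$ of radius $2^{-j}/j!$. One then has $n_jh_{j+1}=h_j$, $H=\cup h_j{\mathbb Z}$, and $n_j\overline{I_{j+1}}\subset I_j$ since $n_j\overline{I_{j+1}}$ is the closed interval centered at $h_j$ of radius $2^{-(j+1)}/j!$. Moreover
$$
\big(\prod_1^{k-1} n_i\big)\, {\rm Diameter}(I_k)=k!\,\frac{2^{1-k}}{k!}=2^{1-k}\to 0
$$
so that the hypothesis \eqref{gofast} is fulfilled.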
\begin{prop}\label{proplim} Let $H\subset {\mathbb R}$ be a rank one subgroup of ${\mathbb R}$. Let $h_j\in H_+$, $n_j\in\N^{\times}$ and $I_j$ be bounded open intervals as in Lemma \ref{limlim1}. Then, the pullback part of the point $\mathfrak{p}_H$ of the topos ${[0,\infty)\rtimes{\N^{\times}}}$ is given by the functor which associates to an object ${\mathcal F}$ of $\mathfrak{Sh}(\mathscr C,J)$ the following colimit
\begin{equation}\label{gofast1}
\varinjlim_k {\mathcal F}(I_k), \quad {\mathcal F}(I_{k+1} \stackrel{n_k}{\to} I_k):{\mathcal F}(I_k)\to {\mathcal F}(I_{k+1}).
\end{equation}
\end{prop}
\proof The colimit \eqref{gofast1} defines the pullback part $f^*$ of a point of the topos of contravariant functors $\mathscr C\longrightarrow \frak{ Sets}$, which is defined as a filtering colimit of the points associated to the objects $I_k$ of $\mathscr C$. To show that the corresponding geometric morphism from the topos of sets to $\hat{\mathscr C}$ factors through $\mathfrak{Sh}(\mathscr C,J)$ it is enough to show ({\it cf.}~\cite{MM} Lemma VII.5.3) that the composite $f^*\circ y$ with the Yoneda embedding sends each covering sieve to an epimorphic family of functions. Lemma \ref{limlim1} shows that for any object $V$ of $\mathscr C$ and the associated object of $\hat{\mathscr C}$: ${\mathcal F}=y(V):={\mbox{Hom}}_\mathscr C(\bullet, V)$ one has
$$
\varinjlim_k {\mathcal F}(I_k)= \varinjlim_k {\mbox{Hom}}_\mathscr C(I_k,V)=V\cap H_+.
$$
Thus $f$ fulfills the condition $(iii)$ of \cite{MM} Lemma VII.5.3, and $f^*$ is the pullback part of the point $\mathfrak{p}_H$ of the topos ${[0,\infty)\rtimes{\N^{\times}}}$. \endproof
Next, we consider the limit case of Lemma \ref{limlim1} when all the $h_j$ are $0$.
\begin{lem} \label{limlim2}
Let $n_j\in\N^{\times}$, and $I_j$ be bounded open intervals containing $0\in[0,\infty)$, for $j\geq 1$, such that \eqref{gofast} holds.
Then the limit $\varinjlim y_{I_j}$ of the representable functors $y_{I_j}(V):={\mbox{Hom}}_\mathscr C(I_j,V)$ defines the point $\mathfrak{q}_H$
of the topos ${[0,\infty)\rtimes{\N^{\times}}}$ associated to the abstract ordered rank one group $H:=\cup (\prod_1^k n_j)^{-1}{\mathbb Z}$.
\end{lem}
\proof Let $F=\varinjlim y_{I_j}$. One has $F(V)=\emptyset$ when $0\notin V$. Let $m_j:=\prod_1^{j-1} n_\ell$ and $H:=\cup\, m_j^{-1}{\mathbb Z}$. We denote by $\iota(j,k):=k/m_j\in H$ the element of $H$ given by the image of $k\in {\mathbb Z}$ by the canonical injection ${\mathbb Z}\to H$ associated to the $j$-th copy of ${\mathbb Z}$ in the colimit. One has by construction
\begin{equation}\label{iotak}
\iota(j,k)=\iota(j+1, n_j\, k)\,,\,~\forall j, \ k\in {\mathbb Z} .
\end{equation}
Let $V$ be a bounded open interval with $0\in V$. One then obtains an injection
$$
\alpha:F(V)\to H_+, \ \ \alpha(I_j\stackrel{k}{\to} V):=\iota(j,k)
$$
by using the compatibility with the inductive limits, {\it i.e.\/}\
$$
\alpha\left( (I_j\stackrel{k}{\to} V)\circ (I_{j+1}\stackrel{n_j}{\to} I_j) \right)
=\iota(j+1, n_j\, k)=\iota(j,k).
$$
Let us show that $\alpha$ is surjective.
Given $k\in \N^{\times}$ and $j>0$ we prove that for $\ell$ large enough one has $k\prod_j^{\ell-1} n_i\in y_{I_\ell}(V)$. Let $\epsilon>0$ such that the neighborhood $W$ of $0$ of radius $\epsilon$ is contained in $V$. Then for $\ell$ large enough, using the hypothesis \eqref{gofast}, one gets that $k\prod_j^{\ell-1} n_i\, I_\ell\subset W$ (since it contains $0$ and is of small enough diameter) and hence $k\prod_j^{\ell-1} n_i\in y_{I_\ell}(V)$. \endproof
\begin{prop}\label{proplimbis} Let $H$ be an abstract rank one ordered group. Let $n_j\in\N^{\times}$, $I_j$ be bounded open intervals as in Lemma \ref{limlim2} such that $H:=\cup (\prod_1^k n_j)^{-1}{\mathbb Z}$. Then, the pullback part of the point $\mathfrak{q}_H$ of the topos ${[0,\infty)\rtimes{\N^{\times}}}$ is given by the functor which associates to an object ${\mathcal F}$ of $\mathfrak{Sh}(\mathscr C,J)$ the colimit \eqref{gofast1}.
\end{prop}
The proof is the same as the one of Proposition \ref{proplim}.
\section{The structure sheaf ${\mathcal O}$ of the scaling site} \label{sectsheaf}
We define the structure sheaf ${\mathcal O}$ of the scaling site ${\mathscr S}$ as the $\N^{\times}$-equivariant sheaf on $[0,\infty)$ of semirings ${\mathcal O}(U)$ of continuous convex functions $f(\lambda)\in \R_{\rm max}$, such that for any $\lambda \in U$ there exists an open interval $V$ containing $\lambda$, with $f$ affine and with slope $f'\in {\mathbb Z}$ in the complement $V\setminus \{\lambda\}$.
This condition is local and hence defines a sheaf. One endows this sheaf with the following action ${\mathcal O}(V\stackrel{n}{\to}W): {\mathcal O}(W)\to {\mathcal O}(V)$ of $\N^{\times}$
\begin{equation}\label{equivO}
{\mathcal O}(V\stackrel{n}{\to}W)(f)(\lambda):= f(n\lambda)\,,\,~\forall \lambda \in V.
\end{equation}
This action is compatible with the semiring structure and with the integrality property of the slopes and thus defines a semi-ring in the topos ${[0,\infty)\rtimes{\N^{\times}}}$.
\begin{defn}\label{defnscalsite} The scaling site ${\mathscr S}$ is the semi-ringed topos given by the topos ${[0,\infty)\rtimes{\N^{\times}}}$ endowed with the structure sheaf ${\mathcal O}$.
\end{defn}
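As a simple example of a local section, for any object $U$ of $\mathscr C$ the function $f(\lambda)=0\vee (2\lambda-3)$ belongs to ${\mathcal O}(U)$: it is continuous, convex, and affine with slopes $0$ and $2$ outside the point $\lambda=\frac32$. The action \eqref{equivO} of $n=2$ transforms it into the function $\lambda\mapsto f(2\lambda)=0\vee(4\lambda-3)$, which again fulfills the integrality condition on the slopes.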
\subsection{The stalks of the structure sheaf ${\mathcal O}$}\label{sectsheafstalks}
The next result determines the structure of the stalks of ${\mathcal O}$.
\begin{thm} \label{structure2} $(i)$~At the point $\mathfrak{p}_H$ of the topos ${[0,\infty)\rtimes{\N^{\times}}}$ associated to the rank one subgroup $H\subset {\mathbb R}$ the stalk of the structure sheaf ${\mathcal O}$ is the semiring ${\mathcal R}_H$ of germs, at $\lambda=1$, of $\R_{\rm max}$-valued, piecewise affine convex functions $f(\lambda)$ with slopes in $H$.\newline
$(ii)$~The stalk of the structure sheaf ${\mathcal O}$ at the point $\mathfrak{q}_H$ of ${[0,\infty)\rtimes{\N^{\times}}}$ associated to the abstract rank one ordered group $H$ is the semiring ${\mathcal Z}_H$ associated by the max-plus construction to the totally ordered group ${\mathbb R}\times H$ and endowed with the lexicographic order.
\end{thm}
\proof $(i)$~To evaluate the stalk of the structure sheaf ${\mathcal O}$ at the point $\mathfrak{p}_H$ we use the description given by \eqref{gofast1}. We let $h_j$, $n_j$, $I_j$ as in Proposition \ref{proplim} and evaluate the colimit
\begin{equation*}\label{gofast2}
{\mathcal O}_{\mathfrak{p}_H}=\varinjlim_k {\mathcal O}(I_k), \ \ {\mathcal O}(I_{k+1} \stackrel{n_k}{\to} I_k):{\mathcal O}(I_k)\to {\mathcal O}(I_{k+1}).
\end{equation*}
We define a map $\rho:{\mathcal O}_{\mathfrak{p}_H}\to {\mathcal R}_H$ by associating to $(j,f)$, $f\in {\mathcal O}(I_j)$, the germ at $\lambda=1$ of the function $\lambda\mapsto f(\lambda h_j)$. This function is defined in the neighborhood of $\lambda=1$ given by $\{\lambda\mid h_j\lambda \in I_j\}$, and it is a piecewise affine, continuous, convex function with slopes in $h_j{\mathbb Z}\subset H$. Thus its germ at $\lambda=1$ is an element $\rho(j,f)\in {\mathcal R}_H$. Next we prove that this construction is compatible with the colimit, {\it i.e.\/}\ that
$$
\rho(j,f)=\rho(j+1,{\mathcal O}(I_{j+1} \stackrel{n_j}{\to} I_j)(f))
\,,\,~\forall f\in {\mathcal O}(I_j)$$
where ${\mathcal O}(I_{j+1} \stackrel{n_j}{\to} I_j)$ is defined in \eqref{equivO}. One has
$$
{\mathcal O}(I_{j+1} \stackrel{n_j}{\to} I_j)(f)(\lambda)=f(n_j\lambda)\,,\,~\forall \lambda \in I_{j+1}
$$
and thus, using $n_j h_{j+1}=h_j$, one has for any $f\in {\mathcal O}(I_j)$
$$
\rho(j+1,{\mathcal O}(I_{j+1} \stackrel{n_j}{\to} I_j)(f))(\lambda)={\mathcal O}(I_{j+1} \stackrel{n_j}{\to} I_j)(f)(\lambda h_{j+1})=f(n_j\lambda h_{j+1})=f(\lambda h_j)=\rho(j,f)(\lambda).
$$
Thus the map $(j,f)\mapsto \rho(j,f)$ is compatible with the colimit and determines a map $\rho:{\mathcal O}_{\mathfrak{p}_H}\to {\mathcal R}_H$
which is easily shown to be an isomorphism of semirings.\newline
$(ii)$~Let $(n_j)$ and $I_j$ be as in Lemma \ref{limlim2} and such that $H=\cup (\prod_1^j n_\ell)^{-1}{\mathbb Z}$. We let, as above, $m_j:=\prod_1^{j-1} n_\ell$ so that $H=\cup\, m_j^{-1}{\mathbb Z}$. We set $\iota(j,k):=k/m_j\in H$
so that \eqref{iotak} follows by construction.
Then by Proposition \ref{proplimbis},
the stalk of the structure sheaf ${\mathcal O}$ at the point $\mathfrak{q}_H$ is the colimit ${\mathcal O}_{\mathfrak{q}_H}=\varinjlim {\mathcal O}(I_j)$. We define a map $\delta:{\mathcal O}_{\mathfrak{q}_H}\to H$ as follows. We associate to $(j,f)$, with $f\in {\mathcal O}(I_j)$, the element
$\delta(j,f):=\iota(j,k)$ where $k=f'(0)\in {\mathbb Z}$ is the derivative of $f$ at $0\in I_j$. One then has
$$
\delta(j+1,{\mathcal O}(I_{j+1} \stackrel{n_j}{\to} I_j)(f))=\iota\left(j+1,(f(n_j\lambda))'_{\lambda=0}\right)=\iota(j+1,n_jf'(0))=\iota(j,f'(0))=\delta(j,f).
$$
This shows that the map $\delta$ is well defined. Similarly, the equality $\alpha(j,f):=f(0)$ defines a map $\alpha:{\mathcal O}_{\mathfrak{q}_H}\to \R_{\rm max}$ and the pair $\rho=(\alpha,\delta)$ gives a map ${\mathcal O}_{\mathfrak{q}_H}\to {\mathcal Z}_H$ which is both injective and surjective. One easily checks that this map is an isomorphism of semirings, where on ${\mathcal Z}_H$ the multiplication corresponds to $(x,h)\bullet (x',h')=(x+x',h+h')$ and the addition to
$$
(x,h)\vee (x',h'):=\begin{cases} (x,h)\ \text{if}\ x>x'\\(x',h')\ \text{if}\ x'>x\\
(x,h \vee h') \ \text{if}\ x=x'.\end{cases}
$$
\endproof
Next, we describe the semiring ${\mathcal R}_H$ of germs at $\lambda=1$ of $\R_{\rm max}$-valued piecewise affine (continuous) convex functions $f(\lambda)$ with slopes in $H$. The germ of $f$ is determined by the triple $(x,h_+,h_-)$, with $x\in {\mathbb R}$ and $h_\pm\in H$ given by $x=f(1)$, $f(1\pm \epsilon)=x
\pm h_\pm \epsilon$, for $\epsilon \geq 0$ small enough. The triples $(x,h_+,h_-)$ obtained from elements of ${\mathcal R}_H$ are characterized by the condition $h_+\geq h_-$ which corresponds to the convexity of the function $f(1\pm \epsilon)=x
\pm h_\pm \epsilon$ for $\epsilon \geq 0$ small enough. The only other element of the semiring ${\mathcal R}_H$ corresponds to the germ of the constant function $-\infty$. This function plays the role of the ``zero" element for the following algebraic rules applied to the non-zero elements of the semiring. The ``addition" $\vee$ is given by the max between two germs and hence it is described by the formula
\begin{equation*}\label{additRh}
(x,h_+,h_-)\vee (x',h'_+,h'_-):=\begin{cases} (x,h_+,h_-)\ \text{if}\ x>x'\\(x',h'_+,h'_-)\ \text{if}\ x'>x\\
(x,h_+\vee h'_+, h_-\wedge h'_-) \ \text{if}\ x=x'.\end{cases}
\end{equation*}
The ``product" of two germs is given by their sum and hence it is described by the formula
\begin{equation*}\label{prodRh}
(x,h_+,h_-)\bullet (x',h'_+,h'_-):=(x+x',h_++h'_+,h_-+h'_-).
\end{equation*}
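As a simple numerical illustration of these rules, take $H=H_3$ and consider the germs $f=(0,1,-1)$ and $g=(0,\frac13,0)$. Then one gets
$$
f\vee g=(0,1\vee \tfrac13,\,-1\wedge 0)=(0,1,-1), \qquad f\bullet g=(0,1+\tfrac13,\,-1+0)=(0,\tfrac43,-1)
$$
and both results fulfill the convexity condition $h_+\geq h_-$.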
The conditions on the functions $f(H)$ on subgroups $H\subset {\mathbb R}$ which define the structure sheaf are expressed locally in terms of the scaling flow by the two derivatives $D_\pm$ where
\begin{equation}\label{dpm}
D_\pm(f)(H):=\lim_{\epsilon\to 0\pm}\frac{f((1+\epsilon)H)-f(H)}{\epsilon}
\end{equation}
and can be written in the form
\begin{equation}\label{dpm1}
D_\pm(f)(H)\in H \,,\,~\forall H , \ \ D_+(f)(H)\geq D_-(f)(H) \,,\,~\forall H.
\end{equation}
Indeed, taking a neighborhood of $H$ of the form $\{\lambda H\mid \lambda\in V\}$ where $V$ is a neighborhood of $1$, and considering the function $g(\lambda):=f(\lambda H)$, condition \eqref{dpm1} becomes
$
\lambda\partial^\pm_\lambda g(\lambda)\in \lambda H
$, where the $\partial^\pm_\lambda$ are the directional derivatives. Thus the condition on the functions becomes $\partial^\pm_\lambda g(\lambda)\in H$ and
$\partial^+_\lambda g(\lambda)\geq\partial^-_\lambda g(\lambda)$ which is the characterization of the germs in a neighborhood of $H$.
\subsection{The points of ${\mathscr S}$ over $\R_+^{\rm max}$}
The next Theorem states that the process of extension of scalars from ${\mathscr A}$ to ${\mathscr S}$ does not affect the points over $\R_+^{\rm max}$.
\begin{thm} \label{structure3} The canonical projection from the set ${\mathscr S}(\R_{\rm max})$ of the points of the scaling site ${\mathscr S}$ defined over $\R_+^{\rm max}$
to the points of the topos ${[0,\infty)\rtimes{\N^{\times}}}$ is bijective.
\end{thm}
The proof follows from the next Lemma.
\begin{lem}\label{homtormax} $(i)$~The map $(x,h_+,h_-)\mapsto x$ is the only element of ${\mbox{Hom}}_{\R_{\rm max}}({\mathcal R}_H,\R_{\rm max})$.\newline
$(ii)$~The map $(x,h_+)\mapsto x$ is the only element of ${\mbox{Hom}}_{\R_{\rm max}}({\mathcal Z}_H,\R_{\rm max})$.
\end{lem}
\proof $(i)$~Let $\phi\in{\mbox{Hom}}_{\R_{\rm max}}({\mathcal R}_H,\R_{\rm max})$. First, we notice that the $\R_{\rm max}$-linearity shows that the image by $\phi$ of the constant function with value $x$ is $\phi(x,0,0)=x$. Then, we see that for any germ $f$ which is not identically $-\infty$, there exists a constant function $g<f$. One then has
$$
f\vee g=f\Rightarrow \phi(f)=\phi(f\vee g)=\phi(f)\vee \phi(g)\Rightarrow \phi(f)\geq \phi(g).
$$
This shows that one cannot have $\phi(f)=-\infty$. The same argument shows that for any elements $f,g$ of ${\mathcal R}_H$, one has $f<g\Rightarrow \phi(f)\leq \phi(g)$ and it follows that
$$
x<x'\Rightarrow \phi(x,h_+,h_-)\leq\phi(x',h'_+,h'_-).
$$
Then, since any germ $(x,h_+,h_-)$ lies, near $\lambda=1$, between the constant functions $x-\epsilon$ and $x+\epsilon$ for any $\epsilon>0$, and since $\phi(x,0,0)=x$, one gets $\phi(x,h_+,h_-)=x$.
$(ii)$~The proof is similar to that of $(i)$. \endproof
\subsection{The sheaf of fractions and Cartier divisors}
Cartier divisors on a scheme are defined as the global sections of the sheaf ${\mathcal K}^\times/{\mathcal O}^\times$ quotient of the sheaf of multiplicative groups of the rings of fractions ${\mathcal K}$ of the scheme by the sub-sheaf ${\mathcal O}^\times$ of invertible elements of the structure sheaf ({\it cf.} \cite{Hart} pp. 140-141). We adapt this notion to the present context of characteristic $1$.
\begin{prop}\label{propfunct} Let $\mathfrak{p}_H$ be the point of the topos ${[0,\infty)\rtimes{\N^{\times}}}$ associated to a rank one subgroup $H\subset {\mathbb R}$.
The semiring ${\mathcal R}_H$ of germs, at $\lambda=1$, of $\R_{\rm max}$-valued, piecewise affine, continuous convex functions $f(\lambda)$ with slopes in $H$ is multiplicatively cancellative and its semifield of fractions ${\rm Frac}\,{\mathcal R}_H$ is the semifield of germs, at $\lambda=1$, of $\R_{\rm max}$-valued, piecewise affine, continuous functions $f(\lambda)$ with slopes in $H$ and endowed with the operations of the max and the addition of germs.
\end{prop}
\proof To show that ${\mathcal R}_H$ is multiplicatively cancellative one considers an equation of the form $f+h=g+h$ for $f,g,h\in {\mathcal R}_H$ and notices that, since these functions take finite values at every point, one gets $f=g$. It follows that two pairs $(f,g)$ and $(h,k)$ of elements of ${\mathcal R}_H$ define the same element of the associated semifield of fractions ${\rm Frac}\,{\mathcal R}_H$ if and only if one has $f+k=g+h$. This is equivalent to writing $f-g=h-k$ and hence to the equality of the functions obtained as pointwise differences. Next, the addition in the semiring ${\mathcal R}_H$ is given by the pointwise supremum and one needs to check that its extension to the associated semifield of fractions ${\rm Frac}\,{\mathcal R}_H$ coincides with the pointwise supremum for functions. This follows from the equality, valid for real numbers $x,y,z,t$ and obtained from translation invariance of $\vee$
$$
(x-y)\vee (z-t)=\left( (x+t)\vee (y+z)\right)-(y+t).
$$ \endproof
As in the case of ${\mathcal R}_H$ the germ of a function $f$ is characterized by the triple $(x,h_+,h_-)$ with $x\in {\mathbb R}$ and $h_\pm\in H$ given by $x=f(1)$, $f(1\pm \epsilon)=x
\pm h_\pm \epsilon$, for $\epsilon \geq 0$ small enough. The triples $(x,h_+,h_-)$ obtained from elements of ${\rm Frac}\,{\mathcal R}_H$ correspond to arbitrary values of $(x,h_+,h_-)\in {\mathbb R}\times H\times H$. As above the only other element of ${\rm Frac}\,{\mathcal R}_H$ corresponds to the germ of the constant function $-\infty$ which plays the role of the ``zero" element. The algebraic rules for the other elements are given for the ``addition" $\vee$ of two germs by the max of the two germs and hence by
\begin{equation}\label{additRhbis}
(x,h_+,h_-)\vee (x',h'_+,h'_-):=\begin{cases} (x,h_+,h_-)\ \text{if}\ x>x'\\(x',h'_+,h'_-)\ \text{if}\ x'>x\\
(x,h_+\vee h'_+, h_-\wedge h'_-) \ \text{if}\ x=x'.\end{cases}
\end{equation}
The ``product" of the germs is given by the sum of the two germs and hence by
\begin{equation}\label{prodRhbis}
(x,h_+,h_-)\bullet (x',h'_+,h'_-):=(x+x',h_++h'_+,h_-+h'_-).
\end{equation}
\subsection{The order at a point}
\begin{defn}\label{site2} Let $\mathfrak{p}_H$ be the point of the topos ${[0,\infty)\rtimes{\N^{\times}}}$ associated to a rank one subgroup $H\subset {\mathbb R}$ and let $f$ be an element in the stalk of ${\mathcal K}$ at $\mathfrak{p}_H$. Then, the order of $f$ at $H$ is defined as ${\rm Ord}(f) = h_+-h_-\in H\subset {\mathbb R}$, where $h_\pm = \lim_{\epsilon\to 0\pm}\frac{f((1+\epsilon)H)-f(H)}{\epsilon}$.
\end{defn}
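For instance, the element of the stalk of ${\mathcal K}$ at $\mathfrak{p}_H$ given by the germ at $\lambda=1$ of the function $\lambda\mapsto 0\vee h(\lambda-1)$, for $h\in H$, $h>0$, corresponds to the triple $(0,h,0)$ and has order ${\rm Ord}(f)=h$, while the germs with $h_+=h_-$, {\it i.e.\/}\ the invertible elements of the stalk of ${\mathcal O}$, are exactly those of order $0$.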
\begin{prop}\label{ordercomp} For any two elements $f,g$ in the stalk of ${\mathcal K}$ at $\mathfrak{p}_H$ one has
\begin{equation}\label{ordcomp1}
{\rm Ord}(f\vee g)\geq {\rm Ord}(f)\wedge {\rm Ord}(g)
\end{equation}
\begin{equation}\label{ordcomp2}
{\rm Ord}(f+ g)= {\rm Ord}(f)+ {\rm Ord}(g).
\end{equation}
\end{prop}
\proof The germ of $f\vee g$ is given by the max of the two germs and hence by
\eqref{additRhbis}.
Writing $h={\rm Ord}(f)$ and $h'={\rm Ord}(g)$, when $x\neq x'$ the inequality \eqref{ordcomp1} follows from $h\geq h\wedge h'$, $h'\geq h\wedge h'$. When $x=x'$ one has
${\rm Ord}(f\vee g)=(h_+\vee h'_+)-(h_-\wedge h'_-)$. Thus the inequality \eqref{ordcomp1} follows from the general fact
$$
(a\vee b)-(c\wedge d)\geq (a-c)\vee(b-d)\geq (a-c)\wedge (b-d)\,,\,~\forall a,b,c,d\in {\mathbb R}.
$$
Note that one needs the $\wedge$ in \eqref{ordcomp1} to take care of the cases $x\neq x'$, but when $x=x'$ the $\vee$ works.
Finally, the germ of $f +
g$ is given by the sum of the two germs and hence \eqref{ordcomp2} holds. \endproof
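As a numerical check, let $h\in H$, $h>0$, and take germs with $x=x'=0$, say $f=(0,2h,0)$ and $g=(0,h,-h)$, so that ${\rm Ord}(f)={\rm Ord}(g)=2h$. Then $f\vee g=(0,2h,-h)$ gives ${\rm Ord}(f\vee g)=3h\geq 2h={\rm Ord}(f)\wedge {\rm Ord}(g)$, while $f+g=(0,3h,-h)$ gives ${\rm Ord}(f+g)=4h={\rm Ord}(f)+{\rm Ord}(g)$.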
\section{The periodic orbits $C_p$}
Let $p$ be a prime and consider the subspace $C_p$ of points of ${[0,\infty)\rtimes{\N^{\times}}}$ corresponding to subgroups $H\subset {\mathbb R}$ which are abstractly isomorphic to the subgroup $H_p\subset {\mathbb Q}$ of fractions with denominator a power of $p$. In this section we study the (quasi)-tropical structure of these curves, develop the theory of theta-functions, and finally formulate a Riemann-Roch problem and establish a Riemann-Roch formula.
\begin{lem}\label{periodp} $(i)$~The map ${\mathbb R}_+^*\to C_p$, $\lambda\mapsto \lambda H_p$ induces the isomorphism $\eta_p: {\mathbb R}_+^*/p^{\mathbb Z}\to C_p$.\newline
$(ii)$~The pull-back by $\eta_p$ of the (restriction to $C_p$ of the) structure sheaf ${\mathcal O}$ is the sheaf ${\mathcal O}_p$ on ${\mathbb R}_+^*/p^{\mathbb Z}$ of piecewise affine (continuous), convex functions with slopes in $H_p$.
\end{lem}
\proof $(i)$~For $\lambda\in {\mathbb R}_+^*$ one has $\lambda H_p\in C_p$. An abstract isomorphism of ordered groups $\phi:H_p\to H\subset {\mathbb R}$ is given by multiplication by $\phi(1)=\lambda$. Since $pH_p=H_p$ the map ${\mathbb R}_+^*\to C_p$ induces a surjective map $\eta_p: {\mathbb R}_+^*/p^{\mathbb Z}\to C_p$. We show that $\eta_p$ is injective. If $\lambda H_p=\lambda' H_p$, then $\mu H_p=H_p$ for $\mu=\lambda/\lambda'$. Thus $\mu=a/p^n\in H_p$ and the same result holds for $\mu^{-1}$, thus $\mu$ is a power of $p$.
$(ii)$~The conditions on the functions $f(H)$ on subgroups $H\subset {\mathbb R}$ which define the structure sheaf are expressed locally in terms of the scaling flow by the two derivatives $D_\pm$ of \eqref{dpm} and can be written in the form \eqref{dpm1}. On the function $g=f\circ \eta_p$,
$g(\lambda):=f(\lambda H_p)$, condition \eqref{dpm1} becomes
$
\lambda\partial^\pm_\lambda g(\lambda)\in \lambda H_p
$, $\lambda\partial^+_\lambda g(\lambda)\geq\lambda\partial^-_\lambda g(\lambda)$ (where the $\partial^\pm_\lambda$ are the directional derivatives) or equivalently: $\partial^\pm_\lambda g(\lambda)\in H_p$ and $\partial^+_\lambda g(\lambda)\geq\partial^-_\lambda g(\lambda)$ thus one gets $(ii)$.\endproof
\subsection{Divisors}
\begin{prop}\label{cartierdiv} $(i)$~The sheaf of quotients of the sheaf of semirings ${\mathcal O}_p$ is the sheaf ${\mathcal K}_p$ on ${\mathbb R}_+^*/p^{\mathbb Z}$ of piecewise affine, continuous functions with slopes in $H_p$, endowed with the two operations max, $+$. \newline
$(ii)$~The quotient sheaf of Cartier divisors ${\mathcal{C}a\mathcal{C}\ell}(C_p):={\mathcal K}_p^\times/{\mathcal O}_p^\times$ is isomorphic to the sheaf $\mathscr{D}iv(C_p)$ of naive divisors, {\it i.e.\/}\ of maps $H\mapsto D(H)\in H$, such that
$$
\forall \lambda\in{\mathbb R}_+^*, \ \exists V \ \text{open}, \ \lambda\in V: \ D(\mu H_p)=0\,,\,~\forall \mu \in V, \ \mu\neq \lambda.
$$
\end{prop}
\proof $(i)$~The proof is the same as for Proposition \ref{propfunct}.
$(ii)$~A germ $(x,h_+,h_-)\in {\mathcal R}_H$ is invertible for the multiplicative structure given by \eqref{prodRhbis} iff one has $h_+=h_-$, thus the map given by the order as in Definition \ref{site2} is an isomorphism.\endproof
\begin{defn}\label{defndiv} A {\em divisor} on $C_p$ is a global section of the sheaf
${\mathcal{C}a\mathcal{C}\ell}(C_p)\simeq \mathscr{D}iv(C_p)$. The collection of these global sections is denoted by ${\rm Div}(C_p)$.
\end{defn}
By Proposition \ref{cartierdiv}, a divisor on $C_p$ is uniquely specified by a global section of $\mathscr{D}iv(C_p)$ and, by compactness, these are the maps $H\in C_p\mapsto D(H)\in H$ with finite support, where the support is defined as
\begin{equation*}\label{support}
{\rm Support }(D):=\{H\mid D(H)\neq 0\}.
\end{equation*}
In other words a divisor $D$ on $C_p$ is a section, vanishing everywhere except on a finite subset of $C_p$, of the projection on the base from the total space of the bundle formed of pairs $(H,h)$ where $H\subset {\mathbb R}$ is a subgroup abstractly isomorphic to the subgroup $H_p\subset {\mathbb Q}$ and $h\in H$.
The sheaf ${\mathcal K}_p$ has global sections and they form the semifield ${\mathcal K}(C_p):= H^0({\mathbb R}_+^*/p^{\mathbb Z},{\mathcal K}_p)$.
\begin{prop}\label{propdiv}
$(i)$~The divisors ${\rm Div}(C_p)$ form an abelian group under pointwise addition.\newline
$(ii)$~The condition $D'(H)\geq D(H)$, $\forall H\in C_p$, defines a partial order on the group ${\rm Div}(C_p)$.\newline
$(iii)$~The following map defines a surjective group homomorphism compatible with the partial order
\begin{equation*}\label{degg}
\deg: {\rm Div}(C_p)\to {\mathbb R},\quad \deg(D):=\sum_H D(H)\in{\mathbb R}.
\end{equation*}
$(iv)$~The map which associates to $f\in {\mathcal K}^\times(C_p)$ the principal divisor
\begin{equation}\label{princdiv}
(f):=\sum_H (H,{\rm Ord}_H(f))
\end{equation}
determines a group homomorphism $ {\mathcal K}^\times(C_p)\to {\rm Div}(C_p)$.\newline
$(v)$~The subgroup ${\mathcal P}\subset {\rm Div}(C_p)$ of principal divisors is contained in the
kernel of $\deg:{\rm Div}(C_p)\to {\mathbb R}$
\begin{equation*}\label{order}
\sum_{H} {\rm Ord}_H(f)=0\,,\,~\forall f\in {\mathcal K}(C_p).
\end{equation*}
\end{prop}
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{ellchar1.pdf}
\end{center}
\caption{A function $f\in{\mathcal K}^\times(C_p)$. \label{tropfct} }
\end{figure}
\proof $(i)$~By construction ${\rm Div}(C_p)$ is the direct sum of the groups $H$, with $H\in C_p$.\newline
$(ii)$~Each group $H$ is ordered and the condition $D'(H)\geq D(H)$, $\forall H\in C_p$ defines the natural partial order on their direct sum.\newline
$(iii)$~The statement follows from the fact that the groups $H$ are subgroups of ${\mathbb R}$ and their union is ${\mathbb R}$. \newline
$(iv)$~For any $f\in {\mathcal K}^\times(C_p)$ the sum in \eqref{princdiv} is finite and defines a divisor. Moreover the formula \eqref{ordcomp2} shows that one obtains a group homomorphism.\newline
$(v)$~A global section $f\in {\mathcal K}(C_p)$ is a real-valued function $f:{\mathbb R}_+^*\to {\mathbb R}$ which is piecewise affine, continuous, with slopes in $H_p$ and fulfills $f(p\lambda)=f(\lambda)$ $\forall \lambda\in{\mathbb R}_+^*$ ({\it cf.}~Figure~\ref{tropfct}). Such a function is uniquely determined by its restriction to the fundamental domain $[\lambda_0,p\lambda_0]$ and the only constraint on this restriction is the periodicity: $f(p\lambda_0)=f(\lambda_0)$. Let $\lambda_0<\lambda_1<\ldots <\lambda_{n-1}<\lambda_n=p\lambda_0$ be a finite sequence of positive real numbers such that $f$ is affine with slope $h_j\in H_p$ on the interval $I_j=[\lambda_{j-1},\lambda_j]$. One has for any $j\in \{1,\ldots ,n\}$
$$
f(\lambda_j)=f(\lambda_0)+\sum_1^j (\lambda_i-\lambda_{i-1})h_i.
$$
By applying the equality above when $j=n$, and using the periodicity property $f(\lambda_n)=f(p\lambda_0)=f(\lambda_0)$ one obtains
$$
\sum_1^n (\lambda_i-\lambda_{i-1})h_i=0.
$$
By using $\lambda_n=p\lambda_0$ one also has
$$
(\lambda_1-\lambda_0)h_1+(\lambda_2-\lambda_1)h_2+\ldots +(\lambda_n-\lambda_{n-1})h_n=\sum_1^{n-1} \lambda_i(h_i-h_{i+1})+\lambda_0(ph_n-h_1).
$$
For $1\leq i\leq n-1$ one has $\lambda_i(h_i-h_{i+1})=-{\rm Order}(f)(\lambda_i)$ where we set ${\rm Order}(f)(\lambda):={\rm Ord}_{\lambda H_p}f$. Moreover, notice that $ph_n$ is the slope of the function $f$ in the interval $(1/p) I_n$ so that the order of $f$ at $\lambda_0$ is $\lambda_0(h_1-ph_n)$. Then, the above equality shows that the sum of all orders must vanish. \endproof
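As an illustration of $(v)$, take $p=3$, $\lambda_0=1$, $n=2$ and let $f$ be the global section which is affine with slope $h_1=1$ on $[1,2]$ and with slope $h_2=-1$ on $[2,3]$; the periodicity constraint holds since $1\cdot 1+1\cdot(-1)=0$. One gets ${\rm Ord}_{2H_3}(f)=2(h_2-h_1)=-4$ while, since $ph_2=-3$ is the slope of $f$ on $\frac13 I_2$, ${\rm Ord}_{H_3}(f)=\lambda_0(h_1-ph_2)=4$, so that the sum of the orders vanishes as stated.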
Next, we consider the problem of constructing a global section $f\in {\mathcal K}(C_p)$ whose divisor is an assigned divisor of degree zero.
Let $J_p:=(p-1)H_p \subset H_p$ be the principal ideal generated by the integer $(p-1)$ in the ring $H_p$. One has the following exact sequence of rings
\begin{equation*}\label{rings}
0\to J_p\to H_p\stackrel{\chi}{\to} {\mathbb Z}/(p-1){\mathbb Z}\to 0
\end{equation*}
where $\chi(a/p^n):=a$ mod. $(p-1)$, for any $a\in {\mathbb Z}$ and $n\in {\mathbb N}$; this is well defined since $p\equiv 1$ mod. $(p-1)$.
\begin{prop}\label{propdiv1}
$(i)$~Let $H\in C_p$, then there exists a unique map $\chi_H:H\to H_p/(p-1)H_p\simeq {\mathbb Z}/(p-1){\mathbb Z}$ such that for any $\lambda\in {\mathbb R}_+^*$ with $H=\lambda H_p$, one has $\chi_H=\chi\circ \lambda^{-1}$.\newline
$(ii)$~The group ${\mathcal P}$ of principal divisors is contained in the kernel of the group homomorphism
\begin{equation*}\label{jacmap}
\chi: {\rm Div}(C_p)\to {\mathbb Z}/(p-1){\mathbb Z},\quad \chi(D):=\sum_H \chi_H(D(H)).
\end{equation*}
\end{prop}
\proof $(i)$~Given $H\in C_p$, the elements $\lambda\in {\mathbb R}_+^*$ such that $H=\lambda H_p$ provide maps $\lambda^{-1}:H\to H_p$ which differ from each other by multiplication by a power of $p$, thus the corresponding map $H\to H_p/(p-1)H_p\simeq {\mathbb Z}/(p-1){\mathbb Z}$ is independent of the choice of $\lambda$. \newline
$(ii)$~We use the notations of the proof of Proposition \ref{propdiv} $(v)$. The support of the principal divisor $(f)$ is contained in $\{\lambda_j H_p\mid 0\leq j\leq n-1\}$. For $1\leq j\leq n-1$, one has ${\rm Order}(f)(\lambda_j)=\lambda_j(h_{j+1}-h_j)$ and thus
$$
\chi_{\lambda_jH_p}({\rm Ord}_{\lambda_jH_p}(f))=\chi(h_{j+1}-h_j)=\chi(h_{j+1})-\chi(h_j).
$$
For $j=0$: ${\rm Order}(f)(\lambda_0)=\lambda_0(h_{1}-ph_n)$ and thus $$\chi_{\lambda_0H_p}({\rm Ord}_{\lambda_0H_p}(f))=\chi(h_{1}-ph_n)=\chi(h_1)-\chi(h_n).
$$
Thus one gets $\chi((f))=0$ as required. \endproof
\begin{thm}\label{thmjaccp} The map defined by the pair of homomorphisms
\begin{equation}\label{jacmapbis}
(\deg,\chi):{\rm Div}(C_p)/{\mathcal P}\to {\mathbb R}\times ({\mathbb Z}/(p-1){\mathbb Z})
\end{equation}
is an isomorphism of abelian groups.
\end{thm}
\proof We show first that any divisor $D$ such that $\deg(D)=0$ and $\chi(D)=0$ is principal. Indeed, the divisor $D$ can be written in the form
$$
D=\sum_{0\leq j\leq n-1}(\lambda_j H_p, a_j\lambda_j), \qquad \lambda_0<\lambda_1<\ldots <\lambda_{n-1}<p\lambda_0
$$
where the elements $a_j\in H_p$ are such that $\sum_0^{n-1}a_j\lambda_j=0$. Thus one is given the intervals $I_j=[\lambda_{j-1},\lambda_j]$ with $\lambda_n=p\lambda_0$. In order to show that $D$ is principal one needs to find elements $h_j\in H_p$ for $1\leq j\leq n$, giving the slope of $f$ on $I_j$ and hence such that
$$
a_0=h_1-ph_n,\ a_1=h_2-h_1, \ a_2=h_3-h_2, \ \ldots ,a_{n-1}=h_n-h_{n-1}.
$$
The solution of the above system of equations is unique and of the form
$$
h_n=\sigma/(1-p),\ h_{n-1}=h_n-a_{n-1}, \ldots ,h_1=h_2-a_1, \quad \sigma :=\sum a_j.
$$
Thus it takes values in $H_p$ if and only if the element $\sigma$ is divisible by
$p-1$ in $H_p$, {\it i.e.\/}\ iff $\chi(D)=0$. If this is the case one defines the function $f$ to be affine with slope $h_j$ in the interval $I_j=[\lambda_{j-1},\lambda_j]$ and normalized by $f(\lambda_0)=0$.
One has, using $\lambda_n=p\lambda_0$
$$
f(\lambda_n)=\sum_1^n(\lambda_j-\lambda_{j-1})h_j=-\lambda_0 h_1+\sum_1^{n-1}\lambda_j(h_j-h_{j+1})+\lambda_n h_n=-\sum_0^{n-1}a_j\lambda_j=0.
$$
This argument shows that one can extend $f$ by periodicity so that $f(p\lambda)=f(\lambda)$ and obtain a continuous, piecewise affine function with slopes in $H_p$. Moreover, by construction the divisor of $f$ is $D$, since the discontinuities of the derivative take place at the $\lambda_j$'s and are given by the $a_j$'s.
It remains to show that the restriction of the map $\chi$ to the subgroup ${\rm Div}_0(C_p)\subset {\rm Div}(C_p)$ of divisors of degree $0$ is surjective onto ${\mathbb Z}/(p-1){\mathbb Z}$. Given $m\in{\mathbb Z}/(p-1){\mathbb Z}$, it is enough to find an increasing finite sequence of real numbers $\lambda_j>0$, $\lambda_0<\lambda_1<\ldots<\lambda_n=p\lambda_0$ and elements $a_j\in H_p$ such that $\sum_0^{n-1}a_j\lambda_j=0$ and $\sum_0^{n-1}a_j=m$ mod. $(p-1)H_p$. Both the group $H_p\subset {\mathbb R}$ and the subset $H_p^{(m)}:=\{x\in H_p\mid x=m \ \text{mod.}\ (p-1)H_p\}$ are dense in ${\mathbb R}$. This shows that one can fix arbitrarily the $(\lambda_j)$ and all the $a_j$ except for one of them, say $a_k$, and then choose $a_k\in H_p$ with preassigned $\chi(a_k)\in {\mathbb Z}/(p-1){\mathbb Z}$ such that for a choice of $\lambda'_k$ close to $\lambda_k$ one gets
$$
\sum_0^{n-1}a_j\lambda'_j=0, \qquad \sum_0^{n-1}\chi(a_j)=m \in {\mathbb Z}/(p-1){\mathbb Z}.
$$
The same argument also shows that one can arbitrarily prescribe both $\deg(D)$ and $\chi(D)$ for divisors $D\in {\rm Div}(C_p)$.
Since the group law on divisors is given by pointwise addition of sections, both maps $\deg:{\rm Div}(C_p)\to {\mathbb R}$ and $\chi:{\rm Div}(C_p)\to {\mathbb Z}/(p-1){\mathbb Z}$ are group homomorphisms and one obtains the isomorphism of groups \eqref{jacmapbis}.\endproof
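As an illustration of the above construction, take $p=3$, $n=2$, $\lambda_0=1$, $\lambda_1=2$ and $D=(H_3,a_0\lambda_0)+(2H_3,a_1\lambda_1)$ with $a_0=-4$, $a_1=2$. One has $\deg(D)=a_0\lambda_0+a_1\lambda_1=0$ and $\sigma=a_0+a_1=-2$ is divisible by $p-1=2$, so that $\chi(D)=0$. The formulas above give $h_2=\sigma/(1-p)=1$ and $h_1=h_2-a_1=-1$, and one checks $a_0=h_1-ph_2=-4$ as required. The divisor $D$ is thus the principal divisor of the function $f$ with $f(1)=0$ which is affine with slope $-1$ on $[1,2]$ and with slope $1$ on $[2,3]$.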
\subsection{Symmetries}\label{sectsymm}
The scaling site ${\mathscr S}$ inherits from its construction by extension of scalars, as in Section \ref{sectextS}, several symmetries such as the arithmetic Frobenius associated to the automorphisms of $\R_+^{\rm max}\sim \R_{\rm max}$ or the absolute Frobenius associated to the Frobenius endomorphisms of the semirings of characteristic one of the structure sheaf. These symmetries induce
symmetries of the curves $C_p$ and in this subsection we describe them as operators acting on the structure sheaf ${\mathcal O}_p$ of these curves. Since the divisors on $C_p$ are Cartier divisors, one then obtains, as a byproduct, induced actions on
${\rm Div}(C_p)$ and on its quotient ${\rm Div}(C_p)/{\mathcal P}$.
\subsubsection*{Arithmetic Frobenius}\label{sectaritfrob}
The analogue of the arithmetic Frobenius ${\rm Fr}^a$ on sections $f$ of ${\mathcal O}_p$ is defined as follows
\begin{equation*}\label{arithfrob}
{\rm Fr}^a_\mu(f)(\lambda):=\mu \, f(\mu^{-1}\lambda)\,,\,~\forall \lambda, \mu\in {\mathbb R}_+^*.
\end{equation*}
This operator preserves the properties of $f$ ($f$ is convex, piecewise affine with slopes in $H_p$ and periodic $f(p\lambda)=f(\lambda)$) as well as the algebraic operations $(\vee,+)$. Thus it defines automorphisms denoted by ${\rm Fr}^a_\mu$. On a function $f$ which is locally written as $\vee (n_j\lambda +a_j)$ the action of ${\rm Fr}^a_\mu$ replaces the $a_j$'s by $\mu a_j$ and this corresponds to the arithmetic Frobenius. The induced action on the stalks of ${\mathcal O}_p$ associates to a germ at $\lambda$ a germ at $\mu\lambda$ given by $x\mapsto \mu \, f(\mu^{-1}x)$, for $x$ near $\mu\lambda$. One thus obtains a morphism from germs at $H$ to germs at $\mu H$
\begin{equation}\label{arithfrob1}
{\rm Fr}^a_\mu:{\mathcal R}_H\to {\mathcal R}_{\mu H}, \qquad (x,h_+,h_-)\mapsto (\mu x,\mu h_+,\mu h_-).
\end{equation}
Indeed, one has $f((1+\epsilon)H)\sim f(H)+\epsilon h_+$, for $\epsilon>0$ small and thus
$$
{\rm Fr}^a_\mu(f)((1+\epsilon)\mu H)=\mu \, f((1+\epsilon)H)\sim \mu f(H)+\mu \epsilon h_+.
$$
Next, we list some properties of the induced action by ${\rm Fr}^a_\mu$ on divisors. We use the notation
$D=\sum (H_j,h_j)$, with $h_j\in H_j$ for the divisor with support on the set $\{H_j\}$ and such that $D(H_j)=h_j$ $\forall j$.
\begin{lem}\label{arithfrob2} The action induced by ${\rm Fr}^a_\mu$ on divisors is given by
$$
D=\sum (H_j,h_j)\mapsto {\rm Fr}^a_\mu(D)=\sum (\mu H_j,\mu h_j).
$$
This action preserves the homomorphism $\chi:{\rm Div}(C_p)\to {\mathbb Z}/(p-1){\mathbb Z}$ as well as the subgroup ${\mathcal P}$ of principal divisors and it acts on the degree of a divisor by multiplication by $\mu$.
\end{lem}
\proof The first part of the statement follows from \eqref{arithfrob1} which shows that the singularity of the derivative of $f$ at $H$ gives a singularity of the derivative of ${\rm Fr}^a_\mu(f)$ at $\mu H$, while the order is multiplied by $\mu$. The value of $\chi(H_j,h_j)$ is obtained as $\chi(k)$ where $H_j=\lambda H_p$, $h_j=\lambda k$. One can choose the same $k$ for the pair $(\mu H_j,\mu h_j)$ and this implies that $\chi({\rm Fr}^a_\mu(D))=\chi(D)$. The degree of $D=\sum (H_j,h_j)$ is $\deg(D)=\sum h_j$ and the degree of ${\rm Fr}^a_\mu(D)$ is $\sum \mu h_j=\mu \deg(D)$. Since ${\mathcal P}={\rm Ker}(\deg,\chi)$ this group is preserved by the action of ${\rm Fr}^a_\mu$.\endproof
\subsubsection*{Relative Frobenius}\label{sectrelfrob}
Let $h\in H_p$, $h>0$. One defines the operator ${\rm Fr}^r_h$ acting on functions by
\begin{equation*}\label{relatfrob}
{\rm Fr}^r_h(f)(\lambda):= f(h\lambda)\,,\,~\forall \lambda\,,\,~\forall h\in H_p^+.
\end{equation*}
This operator acts on a section $f$ of ${\mathcal O}_p$ locally written as $f=\vee (n_j\lambda +a_j)$ by replacing the $n_j$ by $hn_j$, while leaving the $a_j$ unchanged. In particular it is $\R_{\rm max}$-linear.
Since the powers $p^n$ act trivially in view of the periodicity of $f$, the operator ${\rm Fr}^r_h$ only depends upon the class of $h$ in the quotient $H_p^+/p^{\mathbb Z}$ which is the multiplicative monoid $\N_{(p)}^{\times}$ of positive integers relatively prime to $p$. The induced action on the stalks associates to a germ at $\lambda$, the germ at $h^{-1}\lambda$ given by $x\mapsto f(hx)$ for $x$ near $h^{-1}\lambda$. One thus obtains a morphism from germs at $H$ to germs at $h^{-1} H$
\begin{equation}\label{relatfrob1}
{\rm Fr}^r_h:{\mathcal R}_H\to {\mathcal R}_{h^{-1} H}, \qquad (x,h_+,h_-)\mapsto ( x,h_+, h_-)\in \R_{\rm max}\times h^{-1} H\times h^{-1} H
\end{equation}
as follows from the identity ${\rm Fr}^r_h(f)((1+\epsilon)h^{-1} H)=f((1+\epsilon)H)$.
\begin{lem}\label{relatfrob2} The action induced by ${\rm Fr}^r_h$ on divisors is given by
$$
D=\sum (H_j,h_j)\mapsto {\rm Fr}^r_h(D)=\sum (h^{-1}H_j, h_j).
$$
This action preserves the degree $\deg\circ {\rm Fr}^r_h=\deg$, the subgroup ${\mathcal P}$ of principal divisors and it acts on the invariant $\chi$ by multiplication by $\chi(h)\in{\mathbb Z}/(p-1){\mathbb Z}$.
\end{lem}
\proof The first part of the statement follows from \eqref{relatfrob1}. One has $\deg({\rm Fr}^r_h(D))=\sum h_j=\deg(D)$. The value of $\chi(H_j,h_j)$ is obtained as $\chi(k)$ where $H_j=\lambda H_p$, $h_j=\lambda k$. One then has $h^{-1}H_j=h^{-1}\lambda H_p$ and $h_j=(h^{-1}\lambda)h k$,
so that $\chi(h^{-1}H_j,h_j)=\chi(hk)=\chi(h)\chi(k)$. This argument also shows that ${\mathcal P}={\rm Ker}(\deg,\chi)$ is preserved by the action of ${\rm Fr}^r_h$.\endproof
\subsubsection*{Absolute Frobenius}\label{sectabsfrob}
This operator acts on sections $f$ of ${\mathcal O}_p$ by composition with the Frobenius of $\R_{\rm max}$ which, in this logarithmic notation, multiplies the function $f$ by a constant. In order to preserve the property of the slopes in $H_p$ one takes this constant to be an element in $H_p$. More precisely, the action of the absolute Frobenius on functions is given, for any $h\in H_p^+$, by the formula
\begin{equation*}\label{absfrob}
{\rm Fr}_h(f)(\lambda):= h\,f(\lambda)\,,\,~\forall \lambda\,,\,~\forall h\in H_p^+.
\end{equation*}
Its properties follow from the properties of the two previous operators since
\begin{equation}\label{absfrob1}
{\rm Fr}_h= {\rm Fr}^a_h\circ {\rm Fr}^r_h={\rm Fr}^r_h\circ {\rm Fr}^a_h\,,\,~\forall h\in H_p^+.
\end{equation}
Indeed, one has ${\rm Fr}^a_h\circ {\rm Fr}^r_h(f)(\lambda)=hf(h^{-1}h\lambda)=hf(\lambda)$ and similarly ${\rm Fr}^r_h\circ {\rm Fr}^a_h={\rm Fr}_h$.
\begin{lem}\label{absfrob2} The action induced by ${\rm Fr}_h$ on divisors is given by
$$
D=\sum (H_j,h_j)\mapsto {\rm Fr}_h(D)=\sum (H_j, hh_j).
$$
This action is trivial on the points, {\it i.e.\/}\ it fixes ${\rm Support }(D)$. It preserves the subgroup ${\mathcal P}$ of principal divisors, acts on the invariant $\chi$ by multiplication by $\chi(h)\in{\mathbb Z}/(p-1){\mathbb Z}$ and on the degree by multiplication by $h\in H_p^+\subset {\mathbb R}_+^*$.
\end{lem}
\proof This follows from \eqref{absfrob1} using Lemmas \ref{arithfrob2} and \ref{relatfrob2}.
\endproof
Notice that each point $H\in C_p$ determines a morphism
$$
f\in {\mathcal K}(C_p)\mapsto p_H(f)=f(H)\in \R_{\rm max}\simeq \R_+^{\rm max}
$$
and the absolute Frobenius is compatible with this morphism: $p_H({\rm Fr}_h(f))={\rm Fr}_h(p_H(f))$.
In particular it acts trivially on the spectrum.
\subsection{Theta functions}
In this subsection we provide an explicit construction of elements of ${\mathcal K}(C_p)$ with assigned divisor, by introducing (tropical) analogues of theta functions.
We proceed by analogy with the construction of theta functions for elliptic curves $E_t(k)=k^\times/t^{\mathbb Z}$ over non-archimedean local fields $k$ as in \cite{Tate}. In that case the theta function is defined as
\begin{equation}\label{tatetheta}
\theta(w,t)=\sum_{\mathbb Z} (-1)^n t^{\frac{n^2-n}{2}} w^n=(1-w)\prod_1^\infty(1-t^m)(1-t^m w)(1-t^m w^{-1})
\end{equation}
and satisfies the functional equation: $-w\theta(tw,t)=\theta(w,t)$. Its relation with the standard theta function $\vartheta _1(u,q)$ is given by the equation: $\vartheta _1(u,q)=i\sqrt[4]{q}\, e^{-iu}\theta(e^{2iu},q^2)$.
This suggests transposing \eqref{tatetheta} naively in order to define theta functions for $C_p={\mathbb R}_+^*/p^{\mathbb Z}$. In our framework, the role of the product is replaced by addition, thus the following infinite sums replace the infinite products on the left
$$
\prod_0^\infty (1-t^m w)\rightsquigarrow f_+(\lambda):=\sum_0^\infty \left(0 \vee (1-p^{m}\lambda)\right)
$$
$$
\prod_1^\infty (1-t^m w^{-1})\rightsquigarrow f_-(\lambda):=\sum_1^\infty \left(0 \vee (p^{-m}\lambda-1)\right).$$
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{elltheta2.pdf}
\end{center}
\caption{The function $\theta$ for $p=3$. \label{thetafct} }
\end{figure}
\begin{lem}\label{lemtheta1} $(i)$~The functions $f_\pm(\lambda)$, $\lambda\in (0,\infty)$ are convex (continuous), piecewise affine with slopes in $H_p$ and
\begin{equation}\label{theta1}
f_+(p\lambda)-f_+(\lambda)=-\left(0 \vee (1-\lambda)\right), \qquad
f_-(p\lambda)-f_-(\lambda)=\left(0 \vee (\lambda-1)\right).
\end{equation}
$(ii)$~The function $\theta(\lambda):=f_+(\lambda)+f_-(\lambda)$, $\lambda\in (0,\infty)$ ({\it cf.}~Figure~\ref{thetafct}), is convex (continuous), piecewise affine with slopes in $H_p$ and fulfills the equation: $\theta(p\lambda)=\theta(\lambda)+\lambda-1$, $\forall \lambda\in (0,\infty)$.\newline
$(iii)$~One has
$$
\vert\theta(\lambda)-\left(\frac{1}{p-1}\lambda -\log \lambda/\log p\right)\vert \leq 1 \qquad \forall \lambda\in (0,\infty).
$$
\end{lem}
\proof $(i)$~Notice that the sum $\sum_0^\infty \left(0 \vee (1-p^{m}\lambda)\right)$ has only finitely many non-zero terms since $p^{m}\lambda>1$ for $m$ large enough. Each of these terms is
convex continuous, piecewise affine with slopes in $H_p$ and thus the same property holds for $f_+$. Moreover, the difference $f_+(p\lambda)-f_+(\lambda)$ is given by the single term corresponding to $m=0$, {\it i.e.\/}\ $f_+(p\lambda)-f_+(\lambda)=-\left(0 \vee (1-\lambda)\right)$. Similarly for
$f_-(\lambda)=\sum_1^\infty \left(0 \vee (p^{-m}\lambda-1)\right)$ one gets the term $\left(0 \vee (\lambda-1)\right)$ in $f_-(p\lambda)-f_-(\lambda)$. Thus we obtain \eqref{theta1}.\newline
$(ii)$~The first part of the statement follows from $(i)$. Moreover one has
$$
\theta(p\lambda)-\theta(\lambda)=f_+(p\lambda)-f_+(\lambda)+f_-(p\lambda)-f_-(\lambda)
=-\left(0 \vee (1-\lambda)\right)+\left(0 \vee (\lambda-1)\right)=\lambda-1
$$
since for any real number $x$ one has $(0\vee x)-(0\vee -x)=x$.\newline
$(iii)$~Let $g(\lambda):=\frac{1}{p-1}\lambda -\log \lambda/\log p$. Then
$
g(p\lambda)-g(\lambda)=\lambda-1
$.
Thus the function $k(\lambda)=\theta(\lambda)-g(\lambda)$ fulfills $k(p\lambda)=k(\lambda)$. This periodicity shows that $\vert k(\lambda)\vert\leq\max_{[1,p]}\vert k(u)\vert$. Since $\theta(u)=0$, $\forall u \in [1,p]$ one just needs to check that $\vert g(u)\vert \leq 1$, $\forall u \in [1,p]$. In this interval the convex function $g$ varies between its value at the end points: $\frac{1}{p-1}$ and its minimum $g(\lambda)$ at $\lambda=\frac{p-1}{\log p}\in [1,p]$, whose value
is $g(\frac{p-1}{\log p})=(1-\log(p-1)+\log\log p)/\log p\geq -1$.
\endproof
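As a sanity check, the statements of Lemma \ref{lemtheta1} are easy to verify numerically. The following Python sketch (illustrative only; all names are ours) truncates the defining sums of $f_\pm$, which is harmless since all but finitely many terms vanish, and tests the functional equation of $(ii)$ and the bound of $(iii)$ for $p=3$:
\begin{verbatim}
import numpy as np

p = 3.0

def f_plus(lam, terms=80):
    # truncation of sum_{m>=0} (0 v (1 - p^m lam)); terms with p^m lam >= 1 vanish
    m = np.arange(terms)
    return np.maximum(0.0, 1.0 - p**m * lam).sum()

def f_minus(lam, terms=80):
    # truncation of sum_{m>=1} (0 v (p^-m lam - 1)); terms with p^-m lam <= 1 vanish
    m = np.arange(1, terms + 1)
    return np.maximum(0.0, p**(-m) * lam - 1.0).sum()

def theta(lam):
    return f_plus(lam) + f_minus(lam)

for lam in (0.05, 0.7, 4.2, 31.0):
    # theta(p lam) = theta(lam) + lam - 1, part (ii)
    assert abs(theta(p * lam) - theta(lam) - (lam - 1.0)) < 1e-9
    # |theta(lam) - (lam/(p-1) - log lam / log p)| <= 1, part (iii)
    g = lam / (p - 1.0) - np.log(lam) / np.log(p)
    assert abs(theta(lam) - g) <= 1.0
\end{verbatim}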
We define, for $h\in H_p, h>0$ and $\mu\in {\mathbb R}_+^*$, the function
\begin{equation*}\label{defnthet}
\Theta_{h,\mu}(\lambda):=\mu\, \theta(\mu^{-1}h\lambda).
\end{equation*}
It is a convex continuous, piecewise affine function with slopes in $H_p$ and fulfills the equation
\begin{equation}\label{propthet}
\Theta_{h,\mu}(p\lambda)=\Theta_{h,\mu}(\lambda)+h\lambda -\mu
\end{equation}
since by Lemma \ref{lemtheta1} $(ii)$ one has
$$
\mu\, \theta(\mu^{-1}hp\lambda)=\mu\, \theta(\mu^{-1}h\lambda)+\mu\,\mu^{-1}h\lambda-\mu.
$$
It follows that
\begin{equation*}\label{propthet1}
\Theta_{ph,\mu}(\lambda)=\Theta_{h,\mu}(\lambda)+h\lambda -\mu.
\end{equation*}
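Continuing the sketch above (reusing \texttt{theta} and \texttt{p}), equation \eqref{propthet} can be checked in the same way; the sample values of $h$ and $\mu$ are arbitrary, with $h$ taken to be an integer so that it lies in $H_p^+$:
\begin{verbatim}
def Theta(h, mu, lam):
    return mu * theta(h * lam / mu)

h, mu = 2.0, 1.7
for lam in (0.3, 1.1, 5.0):
    # Theta(p lam) = Theta(lam) + h lam - mu
    assert abs(Theta(h, mu, p * lam) - Theta(h, mu, lam) - (h * lam - mu)) < 1e-9
\end{verbatim}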
To a pair of labels $(h,\mu)$ we associate the divisor which is everywhere zero except on the subgroup $H=\mu h^{-1} H_p$ (abstractly isomorphic to $H_p$) where it takes the value $\mu\in H$. We let
\begin{equation*}\label{defndivthet}
\delta(h,\mu):=(\mu h^{-1} H_p, \mu).
\end{equation*}
Note that by construction one has $\delta(ph,\mu)=\delta(h,\mu)$, so that $\delta(h,\mu)$ only depends upon the class of $h$ modulo the multiplication by powers of $p$. Such a class is uniquely specified as that of an integer $m>0$, prime to $p$.
Given a simple divisor, {\it i.e.\/}\ a positive divisor supported on a single $H$, and $\mu\in H$, $\mu>0$, one can find $\lambda>0$ such that $H=\lambda H_p$ and thus $h\in H_p$ with $\mu =h\lambda$. One then has $\delta(h,\mu)=(H,\mu)$. The choice of $\lambda$ (and hence of $h$) is unique only up to multiplication by a power of $p$.
The classical description of elliptic functions in terms of theta functions admits the following counterpart in our framework, which provides, in particular, another proof of Theorem \ref{thmjaccp} by an explicit construction of an $f\in {\mathcal K}(C_p)$ with given divisor.
\begin{prop}\label{proptheta1}
Let $D=D_+-D_-\in {\rm Div}(C_p)$ be a divisor ($D_\pm\in {\rm Div}^+$) and $(h_i,\mu_i)\in H_p^+\times {\mathbb R}_+^*$, $(h'_j,\mu'_j)\in H_p^+\times {\mathbb R}_+^*$ such that $D_+=\sum \delta(h_i,\mu_i)$ and $D_-=\sum \delta(h'_j,\mu'_j)$. Then, if $\deg(D)=0$ and $h\in H_p$ fulfills $(p-1)h=\sum h_i-\sum h'_j$, the following function
\begin{equation}\label{propthet2}
f(\lambda):=\sum_i \Theta_{h_i,\mu_i}(\lambda)-\sum_j \Theta_{h'_j,\mu'_j}(\lambda)-h\lambda
\end{equation}
is continuous, piecewise affine with slopes in $H_p$, fulfills $f(p\lambda)=f(\lambda)$ $\forall \lambda \in {\mathbb R}_+^*$ and one has: ${\rm Div}(f)=D$.
\end{prop}
\proof By applying the equation \eqref{propthet}, one has
$$
f(p\lambda)-f(\lambda) =\sum_i(h_i\lambda -\mu_i)-\sum_j(h'_j\lambda -\mu'_j)-(p-1)h\lambda=0
$$
since by hypothesis $\sum_i \mu_i-\sum_j\mu'_j=\deg(D)=0$ and $(p-1)h=\sum h_i-\sum h'_j$. Moreover, the divisor of $f$ is equal to $D$ by construction since each theta function $\Theta_{h_i,\mu_i}$ (resp. $\Theta_{h'_j,\mu'_j}$) contributes
with the term $\delta(h_i,\mu_i)$ (resp. $\delta(h'_j,\mu'_j)$) while $-h\lambda $ does not contribute at all.\endproof
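As an illustration of Proposition \ref{proptheta1}, the following lines (continuing the sketches above) assemble the function \eqref{propthet2} for a degree-zero divisor with a single positive and a single negative simple part, and confirm its invariance under $\lambda\mapsto p\lambda$; the chosen labels are arbitrary:
\begin{verbatim}
h1, h2, mu = 5.0, 1.0, 2.0        # D_+ = delta(h1, mu), D_- = delta(h2, mu)
h = (h1 - h2) / (p - 1.0)         # (p - 1) h = sum h_i - sum h'_j, here h = 2

def f(lam):
    return Theta(h1, mu, lam) - Theta(h2, mu, lam) - h * lam

for lam in (0.2, 0.9, 3.7):
    assert abs(f(p * lam) - f(lam)) < 1e-9   # f is p^Z-periodic
\end{verbatim}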
\begin{thm}\label{thmtheta1}
Let $f\in {\mathcal K}(C_p)$. Then $f$ admits the following canonical decomposition
\begin{equation}\label{thmthet2}
f(\lambda)=\sum_i \Theta_{h_i,\mu_i}(\lambda)-\sum_j \Theta_{h'_j,\mu'_j}(\lambda)-h\lambda +c
\end{equation}
where $c\in {\mathbb R}$, $(p-1)h=\sum h_i-\sum h'_j$ and $h_i\leq \mu_i<ph_i$, $h'_j\leq \mu'_j<ph'_j$.
\end{thm}
\proof Let $D=(f)$ be the principal divisor of $f$, and $D=D_+-D_-$ its decomposition with
$(h_i,\mu_i)\in H_p^+\times {\mathbb R}_+^*$, $(h'_j,\mu'_j)\in H_p^+\times {\mathbb R}_+^*$ such that $D_+=\sum \delta(h_i,\mu_i)$ and $D_-=\sum \delta(h'_j,\mu'_j)$. Since $\delta(ph,\mu)=\delta(h,\mu)$ we can choose the $h_i$ and $h'_j$ in such a way that $\mu h^{-1}\in [1,p)$ for all of them, which gives $h_i\leq \mu_i<ph_i$, $h'_j\leq \mu'_j<ph'_j$. Proposition \ref{proptheta1} then shows that the function defined in \eqref{propthet2} belongs to ${\mathcal K}(C_p)$ and has the same divisor as $f$, thus it differs from $f$ by a constant $c\in{\mathbb R}$ and one gets \eqref{thmthet2}. \endproof
\subsection{Riemann-Roch theorem of type II}
In this subsection we define a Riemann-Roch problem for divisors on the curves $C_p$ and prove a Riemann-Roch formula. To this end, we introduce the notion of continuous dimension for the $\mathbb R_{\rm max}$-modules $H^0(C_p,{\mathcal O}(D))$ associated to divisors $D$ on $C_p$.
It follows from Proposition \ref{cartierdiv} that
the notion of divisors on $C_p$ coincides with the notion of global section of the
sheaf ${\mathcal{C}a\mathcal{C}\ell}(C_p)={\mathcal K}_p^*/{\mathcal O}_p^*$ of Cartier divisors.
In analogy with the classical case, a Cartier divisor $D$ described by local sections $f_i\in {\mathcal K}_p^*(W_i)/{\mathcal O}_p^*(W_i)$, defines a subsheaf ${\mathcal O}(D)$ of ${\mathcal K}_p$. This is the sheaf of ${\mathcal O}_p$-modules generated on $W_i$ by the (multiplicative) inverses of the $f_i$, {\it i.e.\/}\ in the additive notation of this tropical set-up by the functions $-f_i$. The sections of ${\mathcal O}(D)$ are given locally by the rational functions $f\in {\mathcal K}_p(W)$ which satisfy the inequality $D+(f)\geq 0$, where $(f)$ denotes the principal divisor associated to $f$. By construction, ${\mathcal O}(D)$ is a sheaf of ${\mathcal O}_p$-modules, and in particular its global sections define a module over $\R_{\rm max}$
\begin{equation}\label{rrproblem}
H^0(D):=\Gamma(C_p,{\mathcal O}(D)) =\{f\in {\mathcal K}(C_p)\mid D+(f)\geq 0\}.
\end{equation}
It follows from \eqref{ordcomp1} that $f,g\in H^0(D)\Rightarrow f\vee g\in H^0(D)$. The constant function $-\infty$
is, by convention, contained in all the $H^0(D)$ and plays the role of the $0$-element. We use the notation $H^0(D)=0$, rather than $H^0(D)=\{-\infty\}$, to mean that $H^0(D)$ does not contain any other $f$.
\begin{lem}\label{periodrr} $(i)$~If $\deg(D)<0$ then $H^0(D)=0$. \newline
$(ii)$~If $\deg(D)>0$ then $H^0(D)\neq 0$.
\end{lem}
\proof $(i)$~The condition $D+(f)\geq 0$ implies that $\deg(D+(f))\geq 0$ and hence $\deg(D)\geq 0$.
$(ii)$~Assume $\deg(D)=\lambda >0$, set $H=\lambda H_p$ and let $P_\lambda$ be the positive divisor which vanishes for $H'\in C_p$, $H'\neq H$ and takes the value $\lambda$ at $H$. By construction $\lambda\in H$ and $P_\lambda$ is an effective ({\it i.e.\/}\ positive) divisor such that $\deg(P_\lambda)=\lambda$. Moreover one has $\chi(P_\lambda)=1\in {\mathbb Z}/(p-1){\mathbb Z}$. Let $m\in \{1,\ldots, p-1\}$ be an integer congruent to $\chi(D)$ mod. $p-1$, then it follows that the divisor $D'=D-mP_{\lambda/m}$ fulfills $\deg(D')=0$ and $\chi(D')=0$ and hence it is principal. Let $f\in {\mathcal K}_p$ with $D'+(f)=0$. One has $D+(f)=mP_{\lambda/m}\geq 0$ and thus $f\in H^0(D)\neq 0$.\endproof
The slopes of the functions $f\in {\mathcal K}(C_p)$ can have arbitrarily small real size since the group $H_p\subset {\mathbb R}$ is dense. This shows that when $\deg(D)>0$, \eqref{rrproblem} will in general yield an infinite dimensional space of solutions. However, notice that the group $H_p$ is discrete when embedded diagonally in ${\mathbb Q}_p\times {\mathbb R}$, and this fact allows one to obtain a natural norm on sections of ${\mathcal K}_p$ by implementing the $p$-adic norm of the slopes.
In the following, we choose to normalize the $p$-adic norm $\vert h\vert_p\geq 0$, $\forall h\in H_p$ so that $\vert p\vert_p=1/p$.
Let $f$ be a continuous, piecewise affine function on ${\mathbb R}_+^*$ with slopes $h_\pm(u)\in H_p$ and such that $f(pu)=f(u)$. The slope $h_\pm(p\lambda)$ of $f$ at $p\lambda$ is $h_\pm(\lambda)/p$ since
$$
h_\pm(p\lambda):=\lim_{\epsilon\to 0\pm} \frac{f(p\lambda+\epsilon)-f(p\lambda)}{\epsilon}=
\lim_{\delta\to 0\pm} \frac{f(p\lambda+p\delta)-f(p\lambda)}{p\delta}=\frac 1p
\lim_{\delta\to 0\pm} \frac{f(\lambda+\delta)-f(\lambda)}{\delta}.
$$
The value of $\vert h_\pm(\lambda)\vert_p/\lambda$ is unchanged if one replaces $\lambda$ by $p\lambda$ since the $p$-adic norm of $h_\pm(\lambda)/p$ is $p\vert h_\pm(\lambda)\vert_p$.
\begin{defn}\label{pnorm} Let $f\in {\mathcal K}(C_p)$. We set
\begin{equation}\label{defnnp}
\Vert f\Vert_p :=\max \{\vert h(\lambda)\vert_p/\lambda\mid \lambda\in {\mathbb R}_+^*\}\end{equation}
where $h(\lambda)\in H_p$ is the\footnote{at a point of discontinuity of the slopes one takes the max of the two values $\vert h_\pm(\lambda)\vert_p/\lambda$ in \eqref{defnnp}} slope of $f$ at $\lambda$, and $\vert h(\lambda)\vert_p$ its $p$-adic norm.
\end{defn}
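To make Definition \ref{pnorm} concrete, the sketch below evaluates $\Vert f\Vert_p$ from slope samples on $[1,p)$, which suffices by the invariance of $\vert h(\lambda)\vert_p/\lambda$ under $\lambda\mapsto p\lambda$; the sample slopes are hypothetical and, for the purposes of the computation, are represented as rationals whose denominators are powers of $p$:
\begin{verbatim}
from fractions import Fraction

def padic_norm(h, p=3):
    # |h|_p for a nonzero rational h, normalised so that |p|_p = 1/p
    h = Fraction(h)
    v, num, den = 0, h.numerator, h.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return float(p) ** (-v)

# sample (lambda, slope) pairs on [1, p)
slopes = [(1.0, Fraction(-2)), (1.4, Fraction(1, 3)), (2.3, Fraction(4))]
print(max(padic_norm(h) / lam for lam, h in slopes))
# = 3/1.4 > 1: the non-integral slope 1/3 pushes the norm above 1
\end{verbatim}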
One has the following compatibility with the semiring structure of ${\mathcal K}(C_p)$
\begin{prop}\label{pnormcomp} Let $f,g\in {\mathcal K}(C_p)$. One has\newline
$(i)$~$\Vert f\vee g\Vert_p\leq \max\{\Vert f\Vert_p,\Vert g\Vert_p\}$. \newline
$(ii)$~$\Vert f + g\Vert_p\leq \max\{\Vert f\Vert_p,\Vert g\Vert_p\}$.\newline
$(iii)$~$\Vert p^a f\Vert_p=p^{-a}\Vert f\Vert_p$. \newline
$(iv)$~$\Vert f\Vert_p\leq 1$ iff the restriction of $f$ to $[1,p]$ has integral slopes. \newline
$(v)$~Let $D\in {\rm Div}(C_p)$ be a divisor. The following formula defines an increasing filtration on $H^0(D)$ by $\R_{\rm max}$-submodules
\begin{equation*}\label{defnfilt}
H^0(D)^\rho:=\{f\in H^0(D)\mid \Vert f\Vert_p\leq \rho\}.
\end{equation*}
\end{prop}
\proof $(i)$~At a point $\lambda\in {\mathbb R}_+^*$ the set of slopes (there can be two) of $f\vee g$ is a subset of the union of the sets of slopes of $f$ and $g$, thus one obtains the stated inequality.
$(ii)$~At a point $\lambda\in {\mathbb R}_+^*$, the slope of $f+g$ is the sum of the slopes of $f$ and $g$ and the ultrametric inequality for the $p$-adic norm gives the required result.
$(iii)$~The equality follows from the equality $\vert p^a x\vert_p=p^{-a}\vert x\vert_p$.
$(iv)$~Using the invariance of $\vert h(\lambda)\vert_p/\lambda$ under $\lambda\mapsto p\lambda$, one has
$$
\Vert f\Vert_p :=\max \{\vert h(\lambda)\vert_p/\lambda\mid \lambda \in [1,p]\}.
$$
Note that the value $\vert h_-(1)\vert_p$ corresponding to the ingoing slope at $\lambda=1$ is taken into account for $\lambda\in[1,p]$ near $p$ as the limit of the values $\vert h(\lambda)\vert_p/\lambda$ when $\lambda\to p$, since $h_-(p)=h_-(1)/p$.
If the restriction of $f$ to $[1,p]$ has integral slopes, one has $\vert h(\lambda)\vert_p\leq 1$ for all $\lambda\in[1,p]$ and thus $\Vert f\Vert_p\leq 1$. Conversely, if $\Vert f\Vert_p\leq 1$ one has $\vert h(\lambda)\vert_p<p$ for all $\lambda\in[1,p)$ and thus, since $h(\lambda)\in H_p$, one gets $h(\lambda)\in {\mathbb Z}$ for all $\lambda\in[1,p)$ as required.
$(v)$~It follows from $(i)$ and $(ii)$ that $H^0(D)^\rho$ is an $\R_{\rm max}$-submodule of $H^0(D)$, moreover one easily sees that $H^0(D)^\rho\subset H^0(D)^{\rho'}$ for $\rho<\rho'$.
\endproof
The next step is to define the continuous dimension of the module $H^0(D)$ using the filtration by the submodules $H^0(D)^\rho$ by means of a formula of the form
$$
{{\mbox{Dim}_\R}}(H^0(D)):=\lim_{\rho\to \infty}\frac 1 \rho {\mbox{dim}}(H^0(D)^\rho),
$$
where, on the right hand side, one uses a suitable notion of integer valued dimension for $\R_{\rm max}$-modules. In our context, the most natural notion is
that of ``topological dimension'' which counts the number of real parameters on which a general element depends. The original definition of such a dimension is due to Lebesgue. We use the reference \cite{Pears}.
\begin{defn}\label{defntopdim} Let $X$ be a topological space. The {\em topological dimension} ${{\mbox{dim}_{\rm top}}}(X)$ of $X$ is the smallest integer $n$ such that every open cover ${\mathcal U}$ of $X$ admits a refinement ${\mathcal V}$ such that every point of $X$ is in at most $n+1$ elements of ${\mathcal V}$.
\end{defn}
When working with the modules $H^0(D)$ we use the topology of uniform convergence for $\R_{\rm max}$-valued functions on the interval $[1,p]$ (or equivalently by periodicity on ${\mathbb R}_+^*$). The distance defining this topology is given by
\begin{equation}\label{distop}
d(f,g)= \max_{x\in [1,p]} \vert f(x)-g(x)\vert.
\end{equation}
We define
\begin{equation}\label{rr1}
{{\mbox{Dim}_\R}}(H^0(D)):=\lim_{n\to \infty} p^{-n}{{\mbox{dim}_{\rm top}}}(H^0(D)^{p^n})
\end{equation}
Our next goal is to prove that the above limit exists and one obtains a Riemann-Roch formula.
\begin{thm}\label{RRperiodic}
$(i)$~Let $D\in {\rm Div}(C_p)$ be a divisor with $\deg(D)\geq 0$. Then the limit in \eqref{rr1} converges and one has
${{\mbox{Dim}_\R}}(H^0(D))=\deg(D)$.\newline
$(ii)$~The Riemann-Roch formula holds
\begin{equation*}\label{rr2}
{{\mbox{Dim}_\R}}(H^0(D))-{{\mbox{Dim}_\R}}(H^0(-D))=\deg(D)\qquad \forall D\in {\rm Div}(C_p).
\end{equation*}
\end{thm}
The proof of Theorem \ref{RRperiodic} will be given later and follows by combining Lemma \ref{lemdivrel} below with the understanding of $H^0(D)$ in the special case discussed in Lemma \ref{Enp}.
\begin{lem}\label{lemdivrel} Let $D\in {\rm Div}(C_p)$ and $f\in {\mathcal K}(C_p)$. Then\newline
$(i)$~For $D'=D+(f)$ and for any non-negative integer $n$ such that $\Vert f\Vert_p\leq p^n$, the map $H^0(D)\to H^0(D')$, $\xi\mapsto \xi-f$ induces an isomorphism of $H^0(D)^{p^n}$ with $H^0(D')^{p^n}$.\newline
$(ii)$~The absolute Frobenius $f\mapsto p^n f$ determines a twisted isomorphism of $\R_{\rm max}$-modules
$$
F_{p^n}:H^0(D)^{p^n}\to H^0( p^n D)^{1}.
$$
This map preserves the topological dimension.
\end{lem}
\proof $(i)$~By Lemma \ref{pnormcomp}, $(ii)$, one has $\Vert \xi-f\Vert_p\leq \max\{\Vert \xi\Vert_p,\Vert f\Vert_p\}$ so that if $\Vert f\Vert_p\leq p^n$ one derives
$$
\xi \in H^0(D)^{p^n}\iff \Vert \xi\Vert_p\leq p^n\iff \Vert \xi-f\Vert_p\leq p^n\iff
\xi -f\in H^0(D')^{p^n}.
$$
$(ii)$~The twisting is by the Frobenius ${\rm Fr}_{p^n}\in {\rm Aut}(\R_{\rm max})$ and occurs since $F_{p^n}(f+a)=F_{p^n}(f)+p^n a$ for any scalar $a\in \R_{\rm max}$. This operation does not affect the topological dimension. The principal divisor $(p^nf)$ is equal to $p^n\times (f)$ and one has, using Proposition \ref{pnormcomp} $(iii)$
$$
D+(f)\geq 0\iff p^n D+(p^nf)\geq 0, \ \ \Vert f\Vert_p\leq p^n\iff \Vert p^n f\Vert_p\leq 1.
$$
Thus the map $f\mapsto p^n f$ gives a twisted isomorphism of $\R_{\rm max}$-modules as stated.
\endproof
Next, we determine the topological dimension of the $\R_{\rm max}$-module ${\mathcal E}_{N,p}:=H^0(D)^{1}$ which is associated to the divisor $D:=(H_p,N)$, where $N>0$ is an integer.
\begin{lem}\label{Enp} $(i)$~The module ${\mathcal E}_{N,p}$ is the $\R_{\rm max}$-module of
convex (continuous), piecewise affine functions on $[1,p]$ with integral slopes, such that $f(1)=f(p)$ and $-f'_+(1)+pf'_-(p)\leq N$.\newline
$(ii)$~Let $a\in \{1,\ldots,N-p\}$ and denote by $b=E((N-a)/p)\geq 1$ the integer part of $(N-a)/p$. Let
$\phi_a(x):=\max\{-a(x-1),b(x-p)\}$, for $x\in [1,p]$. Then $\phi_a\in {\mathcal E}_{N,p}$.\newline
$(iii)$~For $\epsilon > 0$, let $\Delta_{N-p}^\epsilon:=\{(t_1,\ldots,t_{N-p})\mid 0<t_1<\ldots <t_{N-p}<\epsilon\}$. The following map is continuous and injective for $\epsilon$ sufficiently small ($\phi_0:=0$ by convention)
$$h:{\mathbb R}\times \Delta_{N-p}^\epsilon\to {\mathcal E}_{N,p},\qquad h(t_0,\ldots ,t_{N-p}):=\vee_0^{N-p} (\phi_{N-p-j}-\sum_0^jt_i).$$
$(iv)$~${{\mbox{dim}_{\rm top}}}({\mathcal E}_{N,p})=N-p+1$.
\end{lem}
\proof
$(i)$~By Proposition \ref{pnormcomp} $(iv)$, the condition $\Vert f\Vert_p\leq 1$ means that the restriction of $f$ to $[1,p]$ has integral slopes. The condition $D+(f)\geq 0$ means that $f$ is convex (continuous), piecewise affine inside $[1,p]$ and that $N+{\rm Ord}_{H_p} f\geq 0$. These properties imply $(i)$ since
$$
{\rm Ord}_{H_p} f=f'_+(1)-pf'_-(p), \qquad N+{\rm Ord}_{H_p} f\geq 0\iff -f'_+(1)+pf'_-(p)\leq N.
$$
$(ii)$~By construction, the function $f=\phi_a$ is continuous, convex, piecewise affine with integral slopes and $\phi_a(1)=\phi_a(p)=0$. Moreover, at the point $x=\frac{a+b p}{a+b}$ where $-a(x-1)=b(x-p)$, the slope of $\phi_a$ changes from $-a$ to $b$. The point $x$ is inside the interval $(1,p)$ since $a>0,b>0$. Thus the slope of $\phi_a$ is $-a$ near $1$ and $b$ near $p$. The condition $-f'_+(1)+pf'_-(p)\leq N$ is fulfilled since $a+pb\leq N$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{hphifunct.pdf}
\end{center}
\caption{The function $h(t_0,\ldots ,t_{N-p})$ with $N-p=7$. \label{trop3} }
\end{figure}
\newline
$(iii)$~Let $\epsilon \leq \frac{p-1}{N-p+1}$. Then on $[1,1+\epsilon]$ all the $\phi_a(x)$'s coincide with $-a(x-1)$ since with $b=E((N-a)/p)\geq 1$ and $a\in \{1,\ldots,N-p\}$ one has
$$
\frac{a+b p}{a+b}-1=\frac{b (p-1)}{a+b}=(p-1)(1+a/b)^{-1}\geq \frac{p-1}{N-p+1}.
$$
For $0\leq j<N-p$, let $\psi_j:=\phi_{N-p-j}-\sum_0^jt_i$ and let $\psi_{N-p}=-\sum_0^{N-p}t_i$ be a constant function.
For $0\leq j\leq N-p$ one has
$$\psi_j(x)=(j+p-N)(x-1)-\sum_0^jt_i \,,\,~\forall x\in [1,1+\epsilon].$$ For $0\leq j<N-p$, the $x$ coordinate of the point where the line $$L_j:=\{(x,y)\mid y=(j+p-N)(x-1)-\sum_0^jt_i\}$$ meets $L_{j+1}$ is $1+ t_{j+1}\in [1,1+\epsilon]$. Thus one has
$$
\psi_j(x)\geq \psi_{j+1}(x)\,,\,~\forall x\in [1,1+t_{j+1}], \ \
\psi_j(x)\leq \psi_{j+1}(x)\,,\,~\forall x\in [1+t_{j+1},1+\epsilon].
$$
Thus $h(t_0,\ldots ,t_{N-p})(x)=\psi_0(x)\,,\,~\forall x\in [1,1+t_{1}]$ and for $1\leq j\leq N-p-1$, $h(t_0,\ldots ,t_{N-p})(x)=\psi_j(x)$, for $x\in [1+t_j,1+t_{j+1}]$ ({\it cf.}~Figure~\ref{trop3}). It follows that $h(x)$ passes from the slope $j+p-N$ to the slope $j+1+p-N$ at the point $1+ t_{j+1}\in [1,1+\epsilon]$. This statement still holds for $j=0$ and $j=N-p-1$. In the latter case, one needs to check that in a small interval $[1+t_{N-p},1+t_{N-p}+\delta]$, $h=\psi_{N-p}$ and this follows as $t_{N-p}<\epsilon$. The value of the function $h(t_0,\ldots ,t_{N-p})$ at $\lambda=1$ is $-t_0$, thus one recovers all the parameters $t_j$, $0\leq j\leq N-p$ from the function $h(t_0,\ldots ,t_{N-p})$ and this shows that the map $h:{\mathbb R}\times \Delta_{N-p}^\epsilon\to {\mathcal E}_{N,p}$ is injective. \newline
$(iv)$~Let $k={{\mbox{dim}_{\rm top}}}({\mathcal E}_{N,p})$. The above construction of $h(t_0,\ldots ,t_{N-p})$ shows that $k\geq N-p+1$. Let $f\in{\mathcal E}_{N,p}$ be non-constant; then one has $\beta=f_-'(p)\geq 1$ and $\alpha=-f_+'(1)>0$ while $\alpha+p\beta\leq N$. To determine the topological dimension of ${\mathcal E}_{N,p}$ one can fix the values of $\alpha$ and $\beta$ and count the number of real parameters on which the element $f\in {\mathcal E}_{N,p}$ depends. The possible values of the slope of $f$ in the interval $[1,p]$ are in the interval
$[f'(1),f'(p)]= [-\alpha,\beta]$ and $f$ is determined, up to an additive constant, by the points where the slope changes, which gives at most $\alpha+\beta$ points. Since the last of these points is determined by the previous ones in view of the periodicity of $f$, one sees that the number of free parameters for the turning points is $\alpha+\beta-1$. Since this argument determines $f$ up to an additive constant, $f$ depends on at most $\alpha+\beta$ real parameters. Moreover $\alpha+\beta= \alpha+p\beta-(p-1)\beta\leq N-(p-1)$. Thus the topological dimension $k$ of ${\mathcal E}_{N,p}$ is at most $\alpha+\beta\leq N-p+1$ and since $k\geq N-p+1$ one gets the equality.
In Appendix \ref{appA} we shall provide a more detailed description of the $\R_{\rm max}$-module ${\mathcal E}_{N,p}$ making the above qualitative argument more precise. \endproof
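The family $\phi_a$ of Lemma \ref{Enp} $(ii)$ is easily generated numerically; the sketch below (illustrative, for $N=12$ and $p=3$) checks that each $\phi_a$ vanishes at both endpoints and respects the slope budget $-f'_+(1)+pf'_-(p)\leq N$:
\begin{verbatim}
N, p = 12, 3

def phi(a, x):
    b = (N - a) // p                  # b = E((N - a)/p)
    return max(-a * (x - 1.0), b * (x - p))

for a in range(1, N - p + 1):
    b = (N - a) // p
    assert b >= 1 and a + p * b <= N  # slope budget: a + p b <= N
    assert phi(a, 1.0) == 0.0 and phi(a, float(p)) == 0.0   # f(1) = f(p) = 0
\end{verbatim}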
\proof {\it (of Theorem \ref{RRperiodic})}\newline
$(i)$~Assume first that $\deg(D)=0$. Then one has $H^0(D)=0$ except when there exists a non-trivial solution to $D+(f)\geq 0$. In that case $D$ is equivalent to $0$ and $H^0(D)$ consists of the constant functions which form the module $\R_{\rm max}$ whose topological dimension is $1$. Thus
the formula \eqref{rr1} gives in all cases that ${{\mbox{Dim}_\R}}(H^0(D))=0$. Assume now that $\delta=\deg(D)>0$ and let $\epsilon>0$. We show that
$$
\underline{\lim}_{n\to \infty}p^{-n}{{\mbox{dim}_{\rm top}}}(H^0(D)^{p^n})\geq \delta-\epsilon, \
\overline{\lim}_{n\to \infty}p^{-n}{{\mbox{dim}_{\rm top}}}(H^0(D)^{p^n})\leq \delta+\epsilon
$$
where $\underline{\lim}$ and $\overline{\lim}$ denote respectively the lim inf and the lim sup. Let $\alpha_j\in H_p$ ($j=1,2$), $\alpha_j>0$, be such that $\delta-\epsilon\leq \alpha_1< \delta$, $\delta< \alpha_2\leq \delta+\epsilon$ and that $\chi(\alpha_j)=\chi(D)$. Then let (as in Lemma \ref{periodrr}, $(ii)$) $P_j\in {\rm Div}^+(C_p)$ be positive divisors such that $\chi(P_j)=0$ with $\deg(P_1)=\delta-\alpha_1$, $\deg(P_2)=\alpha_2-\delta$. One has $\deg(\alpha_1\{1\}+P_1)=\delta$ and $\chi(\alpha_1\{1\}+P_1)=\chi(D)$. Thus by applying Theorem \ref{thmjaccp} we get the existence of a function $f_1\in {\mathcal K}(C_p)$ such that $\alpha_1\{1\}+P_1=D+(f_1)$ and a function $f_2\in {\mathcal K}(C_p)$ such that $D+(f_2)+P_2= \alpha_2\{1\}$. Using the natural injective maps, isometric for the distance \eqref{distop}, given by the inclusions
$$H^0(\alpha_1\{1\})\subset H^0(\alpha_1\{1\}+P_1)=H^0(D+(f_1))\stackrel{+f_1}{\to}H^0(D)$$
one obtains, by Lemma \ref{lemdivrel} $(i)$
$${{\mbox{dim}_{\rm top}}}(H^0(\alpha_1\{1\})^{p^n})\leq {{\mbox{dim}_{\rm top}}}(H^0(D)^{p^n})$$
as soon as $\Vert f_1\Vert_p\leq p^n$.
Similarly, one derives
$$
{{\mbox{dim}_{\rm top}}}(H^0(D)^{p^n})={{\mbox{dim}_{\rm top}}}(H^0(D+(f_2))^{p^n})\leq {{\mbox{dim}_{\rm top}}}(H^0(\alpha_2\{1\})^{p^n})
$$
as soon as $\Vert f_2\Vert_p\leq p^n$.
Thus the result will follow provided we show that for any $\alpha\in H_p$, $\alpha>0$ one has
$$
\lim_{n\to \infty} p^{-n}{{\mbox{dim}_{\rm top}}}(H^0(\alpha\{1\})^{p^n})=\alpha.
$$
By Lemma \ref{lemdivrel} $(ii)$, one has
${{\mbox{dim}_{\rm top}}}(H^0(\alpha\{1\})^{p^n})={{\mbox{dim}_{\rm top}}}(H^0(\alpha p^n\{1\})^{1})$ and for $n$ large enough so that $\alpha p^n$ is an integer, Lemma \ref{Enp} $(iv)$ shows that ${{\mbox{dim}_{\rm top}}}(H^0(\alpha p^n\{1\})^{1})=\alpha p^n -p+1$. Thus one gets as required
$$
\lim_{n\to \infty} p^{-n}{{\mbox{dim}_{\rm top}}}(H^0(\alpha\{1\})^{p^n})=\lim_{n\to \infty} p^{-n} (\alpha p^n -p+1)=\alpha.
$$
$(ii)$~Follows from $(i)$ and the fact that $H^0(D)=0$ if $\deg(D)<0$. \endproof
\section{Introduction}
\label{sec:intro}
Flow states that remain exactly the same under some transformation can enrich our understanding of the basic physical mechanisms behind the flow phenomena we observe. Such solutions are known variously as invariant solutions, recurrent flows or exact coherent structures, and include equilibria, periodic orbits, and convecting versions of the same.
The calculation of such solutions promises an understanding of complex flow phenomena without the use of modelling or approximation. Where such solutions are unstable, chaotic flows can be thought of as a series of transitions from the proximity of one solution to another. This line of thinking has led to the view that statistical properties of these chaotic systems may be approximated using an expansion over (relative) periodic orbits \citep[e.g.][]{kawahara2001periodic,cvitanovic2005chaos}. Examples of previous investigations searching for steady states in incompressible flows used methods such as continuation \citep{continuation_keller}, selective frequency damping \citep{aakervik2006steady}, and an adjoint-based approach \citep{farazmand}. The numerical computation of exact steady solutions was extended to compressible flows by \cite{yamouni2013}, who used a Newton method. On the other hand, this type of work was first carried out with the aim of seeking non-steady states by \cite{nagata1990three}, and since then, the search for such flow solutions has remained exclusively applied to incompressible flows. For further details of similar investigations, the reader is referred to the reviews by \cite{doi:10.1146/annurev-fluid-120710-101228} and \cite{cvitanovic2013recurrent}. More recently, \cite{farazmand} extended the available methods to obtain steady and travelling wave solutions using an adjoint-based approach. With his framework, convergence to an exact solution is guaranteed regardless of the flow state used as the initial condition. This is an advantage over numerical Newton-Raphson methods, which require a starting point sufficiently close to the sought solution for convergence, although the latter converge faster in such a case.
\subsection{Cavity Flows}
\label{subsec:cavity_flows}
We apply our newly developed framework to the well studied case of flow over a two-dimensional open cavity at a Reynolds number $Re_D=2000$, which is defined as $Re_D=\rho_\infty U_\infty D/ \mu_\infty$. The subscript $\infty$ indicates free-stream values and D represents the cavity depth. Despite the relatively simple geometry of this case, the limited computational resources available to early investigators forced them to focus on the incompressible flow mechanisms, and insights into the compressible events present in cavity flows were mostly accessible through experimental research. In one of these investigations, \cite{rossiter_modes} documented and studied in detail for the first time the self-sustained flow oscillations of compressible origin, which are now commonly known as Rossiter modes. The origin of these periodic events resides in Kelvin-Helmholtz instabilities which grow along the separated shear layer, impinging on the cavity's trailing edge. This flow impingement radiates an acoustic wave which also propagates upstream and, due to the high receptivity of the leading edge (see section \ref{subsec:stability_of_equilibria}), fuels the appearance of new shear layer instabilities. This Mach-number dependent flow-acoustic interaction triggers these new instabilities and governs the overall sound directivity.
One of the first relevant computational studies which simulated two-dimensional cavity flows using compressible DNS was carried out by \cite{rowley2002self}. They performed a large parametric study (changing $Re_D$, $M$, $L/D$, etc.) and documented in thorough detail the dynamics in each case. For low aspect ratio cavities (such as the present case) they found that the dynamics were governed by a shear layer (Rossiter) mode. At higher aspect ratios, the cavity flow abandons the shear layer mode and undergoes a transition towards a wake mode type of motion. In a parametric sense, our results extend their database further for cavities of aspect ratio $L/D=3$.
The stability of several open cavity flow configurations has been widely studied in past investigations. \cite{vassilios_gls} highlights this well-known flow geometry in his review of global linear stability. \cite{brs2008} carried out BiGlobal instability analysis \citep{theofilis2003algorithm} on various compressible 2D and 3D cavity flows, where for the three-dimensional cases they used the two-dimensional mean flow as the base state. There they discovered that the spanwise instabilities were independent of the Mach number, their origin being convective (rather than acoustic) in nature. This finding motivated the three-dimensional stability analysis of cavity flows assuming incompressible flow, such as that of \cite{devicente2014}. From the neutral stability curves shown in \cite{meseguergarrido2014} (their figure 7), for an aspect ratio of $L/D=3$, the critical Reynolds number above which the three-dimensional cavity flow becomes unstable is $Re_D\approx800$. This implies that if all the flow configurations studied in this article were extended to a three-dimensional space, they would exhibit an unstable spanwise mode. In that case, the 2D and 3D modes would interact, causing a frequency modulation with respect to their isolated behaviour \citep{brs2008}.
Previous studies have tried to understand the behaviour of Rossiter modes in compressible cavity flows with a relatively high Reynolds number \citep[e.g.][]{kegerise2004mode}. Under such conditions, the two-dimensional convective instabilities dominate and exert a strong modulation over the Rossiter modes, which makes the understanding of these shear layer modes extremely difficult. For example, in the work of \cite{yamouni2013}, the fact that their cavity flow became more unstable with an increasing Mach number was reported as surprising. In that regard, we restrict our analysis to a two-dimensional space, where the Reynolds number is below the threshold where convective instabilities start to appear. With this choice, we may examine physical mechanisms governing the two-dimensional shear layer modes, which are exclusively compressible in origin.
\cite{kervik2007} investigated the linear stability of an incompressible steady solution of a two-dimensional open cavity flow. Despite their assumption of incompressible flow with a relatively low Reynolds number ($Re=350$), this equilibrium solution was unstable due to the cavity's large aspect ratio ($L/D\approx 25$), which caused the separated flow to undergo transition towards a wake mode type periodic cycle \citep{rowley2002self}. A similar study was carried out by \cite{luchini_2007}, this time investigating the instability in a 2D cylinder's wake, also using several incompressible steady solutions as the base flow for the stability problem. So far, the use of linear stability analysis using compressible invariant solutions has only been considered by \cite{yamouni2013}. With an open cavity flow with an aspect ratio of $L/D=1$ and the Reynolds number being $Re_D=7500$, their flow was already unstable in the incompressible regime, which prevented them from isolating the origin of the instability as a function of the Mach number.
In this article, we apply our framework to a two-dimensional open cavity flow at $Re_D = 2000$ and $M=0.5$ as a central case. As we will see, this particular flow solution naturally decays very slowly towards a limit cycle. Hence, we use our method to drive the flow state directly to the periodic orbit, skipping this long transient. After this reference periodic solution is computed, we use it as the initial guess to compute the equivalent periodic solution at neighbouring Mach numbers, keeping the Reynolds number $Re_D$ constant. The steady invariant solutions associated with some of these periodic orbits are also computed. The numerical method related to the computation of compressible invariant solutions is introduced in section \ref{sec:num_method}. Both families of equilibrium and periodic solutions are thoroughly detailed in sections \ref{sec:periodic_family} and \ref{sec:equilibria}, highlighting the effects of a changing Mach number on the flow dynamics. Moreover, a linear stability analysis is performed on these steady solutions in section \ref{sec:equilibria} in order to unveil the physical mechanisms which make these flow equilibria unstable for high Mach numbers.
A dataset including the calculated equilibria and periodic solutions has been made available \citep{cavity_sol_dataset} and animations of the flow solutions are available in the online supplement.
\section{Numerical Method}
\label{sec:num_method}
The process of finding exact flow solutions is reduced to a simple optimisation exercise, where the optimisation parameters are the state variables $Q_0\left(\vec{x}\right)=Q\left(\vec{x},t_0\right)$ at the initialisation of the direct numerical simulation (DNS). The numerical framework used to obtain the exact flow solutions couples an in-house compressible DNS solver \citep[HiPSTAR -][]{rsand} with the L-BFGS optimisation algorithm \citep{lbfgsb}. Starting from an initial condition (i.e. an arbitrary flow snapshot), the DNS provides the value of a chosen cost function to be optimised, as well as its gradients with respect to all the control parameters. The value of the cost function, alongside the gradients, is fed to the optimisation algorithm, which returns a better estimate of the initial state variables. This simple algorithm is then repeated until some stopping criterion is met, where the nature of the solution found by the algorithm (steady or time-periodic) depends exclusively on the chosen cost function.
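To make the coupling concrete, a minimal sketch of this outer loop is given below using SciPy's implementation of L-BFGS. The function \texttt{evolve} is a stand-in for one DNS run returning the cost and its gradient (in the actual framework this is a HiPSTAR call); here it is replaced by a toy quadratic so that the snippet runs as is:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
target = rng.standard_normal(100)     # toy stand-in for an exact solution

def evolve(q0):
    # Stand-in for one DNS run over the horizon: returns (J, dJ/dq0).
    diff = q0 - target
    return 0.5 * diff @ diff, diff

res = minimize(evolve, np.zeros(100), jac=True, method="L-BFGS-B")
assert np.allclose(res.x, target)     # the optimiser recovers the state
\end{verbatim}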
The cavity flow simulation is arranged into a four-block setup, where the grid is of Cartesian type and continuous up to fourth order across blocks. With the coordinate origin at the lower-left corner of the cavity and using the cavity depth $D$ as the reference length, the domain ranges from -20 to 20 in the streamwise direction, and from 0 to 10 in the vertical direction; restricting the DNS resolution to the vicinity of the cavity. This discretisation leads to a total of 480 000 grid points, where the resolution of the cavity is $800 \times 300$ points. Note that this grid resolution is much higher than the one used by \cite{rowley2002self} and it has been shown to produce grid independent results in \cite{mythesis}. For a complete description of the flow governing equations the reader is referred to appendix \ref{app:gov_eq}.
\subsection{Steady Solutions}
\label{subsec:steady_solutions}
To find a steady flow solution, we require a cost function which penalises the change of the flow-field throughout the time integration with respect to the initial state. Such a function can be written as \begin{equation}
\mathcal{J}\left(Q\left(\vec{x},t\right),Q_0\left(\vec{x},t_0\right)\right) = \int_{T}\int_{\Omega}\frac{1}{2}\left|Q\left(\vec{x},t\right) - Q_0\left(\vec{x},t_0\right)\right|^2 \mathrm{d}\vec{x} \mathrm{d}t,
\label{eq:steady_cost_f}
\end{equation}
where $T$ is the duration of the time integration, $\Omega$ is the computational domain and $\left|\cdot\right|$ indicates an appropriate norm. In order to drive the cost function towards its minimum using the L-BFGS method, it is necessary to compute the gradients of the cost function with respect to the initial condition, which are written as
\begin{equation}
\frac{D\mathcal{J}}{DQ_0} = \frac{\partial \mathcal{J}}{\partial Q} \frac{\mathrm{d}Q}{\mathrm{d}Q_0} + \frac{\partial \mathcal{J}}{\partial Q_0}.
\label{eq:grad_fwd_steady}
\end{equation}
In this particular case, the gradients can be computed straight from the DNS without any further cost as\begin{equation}
\left. \frac{D\mathcal{J}}{DQ_0}\right|_{\vec{x}} = \int_{T}- \left[Q\left(\vec{x},t\right) - Q_0\left(\vec{x},t_0\right)\right] \mathrm{d}t,
\end{equation} since the first term of the right hand side of (\ref{eq:grad_fwd_steady}) cancels out. From a preliminary study, it was found that longer horizons provide better gradients with more information about the leading instability of the initial state. Hence, increasing the length of this horizon resulted in a flow-field matching the initial state for a longer time. Note that this would only be the case when the steady solutions are unstable. Finally, the stopping criterion used consists in the variations of the cost function $\mathcal{J}$ being of the order of the numerical precision used. All the equilibria presented later on were computed using their corresponding periodic orbit at the same Mach number as the starting point.
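Since the gradient is a running integral of the mismatch, cost and gradient can be accumulated in a single forward run. A schematic version (with \texttt{step} standing for one hypothetical DNS time step acting on a flattened state vector) reads:
\begin{verbatim}
import numpy as np

def steady_cost_and_grad(q0, step, n_steps, dt):
    # J = int_T 0.5 |Q - Q0|^2 dt and dJ/dQ0 = -int_T (Q - Q0) dt
    q, J, grad = q0.copy(), 0.0, np.zeros_like(q0)
    for _ in range(n_steps):
        q = step(q)                   # advance the state one time step
        diff = q - q0
        J += 0.5 * dt * diff @ diff
        grad -= dt * diff
    return J, grad
\end{verbatim}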
\subsection{Periodic Solutions}
\label{subsec_6:periodic_solutions}
In order to search for a time-periodic flow solution, we use a less constrained cost function,
\begin{equation}
\mathcal{J}\left(Q\left(\vec{x},T\right),Q_0\left(\vec{x},t_0\right)\right) = \int_{\Omega}\frac{1}{2}\left|Q\left(\vec{x},T\right) - Q_0\left(\vec{x},t_0\right)\right|^2 \mathrm{d}\vec{x}.
\end{equation}
Note that in contrast to equation (\ref{eq:steady_cost_f}) there is no time integral, so the cost is now just the squared norm of the difference between the initial and final flow states. Following the same reasoning as before, the gradients also follow directly from the DNS,
\begin{equation}
\left.\frac{D\mathcal{J}}{DQ_0}\right|_{\vec{x}} = - \left[Q\left(\vec{x},T\right) - Q_0\left(\vec{x},t_0\right)\right].
\end{equation}
Here, the time horizon $T$ is not fixed and so also has to be optimised. One possible way to find a good initial estimate of $T$ is to seek a globally periodic pattern in the flow-field with distributed monitor points across the entire flow domain. After performing a Fourier transform of each individual signal, all monitor points should show an energy peak at a common frequency, allowing an initial guess of $T$. To optimise the horizon length, instead of computing the gradient of the cost function with respect to $T$, which would introduce a new optimisation variable, $T$ is adjusted at every new iterate. This is achieved by overrunning the horizon $T$ by a minimal amount, and finding which nearby time-step has the minimum cost. For this approach to work, the initial time horizon guess must be close to the period of the final flow orbit. The convergence criterion used for these periodic solutions is $\mathcal{J}/\left\lVert Q_0 \right\rVert< 10^{-12}$.
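A schematic of the corresponding evaluation, including the horizon adjustment by a small overrun, is sketched below (\texttt{step} is again a hypothetical single DNS time step; \texttt{window} is the number of time steps scanned on either side of the current horizon guess):
\begin{verbatim}
import numpy as np

def periodic_cost_and_grad(q0, step, n_steps, window=5):
    # March to just before the current horizon guess, then scan the
    # neighbouring steps for the one minimising the endpoint mismatch.
    q = q0.copy()
    for _ in range(n_steps - window):
        q = step(q)
    best = None
    for n in range(n_steps - window, n_steps + window + 1):
        diff = q - q0
        J = 0.5 * diff @ diff
        if best is None or J < best[0]:
            best = (J, -diff, n)      # J, dJ/dQ0 = -(Q(T) - Q0), new horizon
        q = step(q)
    return best
\end{verbatim}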
\section{Periodic Solution at M=0.5}
\label{sec:PS_M050}
Before delving into the analysis of the evolution of periodic solutions across Mach number, we first describe the periodic orbit found for $M=0.5$ and $Re_D=2000$. This solution was found first and will be referred to when comparing other orbits.
The analysis carried out in this section breaks down the periodic trajectory into more fundamental intervals. To characterise the periodic orbits, we define the norm
\begin{equation}
\left\lVert \alpha \right\rVert = \int_{\Omega} \alpha\left(\vec{x},t\right)^2 W_i\left(\vec{x}\right)\text{d}\vec{x},
\label{eq:periodic_norm}
\end{equation}
where $\alpha$ is a space-time dependent flow quantity, and $W_i\left(\vec{x}\right)$ is a spatial function which is zero outside the vicinity of the cavity, preventing spurious sensitivity to the boundary conditions. Here, the function $W_i\left(\vec{x}\right)$ is defined as
\begin{equation}
W_i\left(\vec{x}\right) = \begin{cases}
1 & \quad \text{if} \ \vec{x} \in \left[\left(-1.5,0\right),\left(4.5,4\right)\right]\\
0 & \quad \text{if} \ \vec{x} \notin \left[\left(-1.5,0\right),\left(4.5,4\right)\right]\\
\end{cases}.
\end{equation}
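On the discrete grid, \eqref{eq:periodic_norm} amounts to a masked quadrature. A minimal version for a field \texttt{alpha} sampled on a tensor-product grid \texttt{(x, y)} reads (names illustrative; trapezoidal weights accommodate the stretched grid):
\begin{verbatim}
import numpy as np

def windowed_norm(alpha, x, y):
    # ||alpha|| = int alpha^2 W_i dx dy, with W_i the indicator of the
    # box [-1.5, 4.5] x [0, 4] around the cavity
    X, Y = np.meshgrid(x, y, indexing="ij")
    W = ((X >= -1.5) & (X <= 4.5) & (Y >= 0.0) & (Y <= 4.0)).astype(float)
    return np.trapz(np.trapz(alpha**2 * W, y, axis=1), x)
\end{verbatim}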
To analyse the physics of the periodic orbits, we have selected dilatation ($\nabla \cdot \vec{u}$), which highlights the compressible events, and viscous dissipation rate \citep[$\varepsilon$ - see][]{kundu_book}, which should emphasise the flow phenomena with strong shear, such as vortex merging. Additionally, we use vorticity ($\nabla \times \vec{u}$) and kinetic energy ($e_{kin}$), which are often used to characterise periodic orbits in incompressible flows. Figure \ref{fig:phase_portrait_Re_2000_M050} shows a phase portrait of the periodic solution for $Re_D=2000$ and $M=0.5$, projected onto these four variables. The locations of the key physical events occurring in the solution are highlighted with symbols. These symbols are labelled chronologically from $a$ to $e$ in figure \ref{fig:diss_vs_vort}, where $a$ indicates the vortex impingement on the trailing edge of the cavity, which is the instant where compressible effects reach their maximum. Also in figure \ref{fig:phase_portrait_Re_2000_M050}, the grey lines show how the flow evolves from the initial condition towards the periodic trajectory. It can be seen that the periodic orbit is an attractor which almost represents the complete behaviour of the flow. This follows as a consequence of the relatively low Reynolds number alongside a strong acoustic shear layer feedback mechanism which stabilises the flow onto this periodic behaviour. Hence, the flow lacks sufficient energy to `jump' to another state and sits indefinitely in the close vicinity of this stable orbit.
\begin{figure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./Re_2000_M050_a}
\label{fig:diss_vs_dil}
\end{subfigure}
~\hspace{0.02\textwidth}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./Re_2000_M050_b}
\label{fig:ke_vs_dil}
\end{subfigure}\\
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./Re_2000_M050_c}
\label{fig:ke_vs_vort}
\end{subfigure}
~\hspace{0.02\textwidth}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./Re_2000_M050_d}
\label{fig:diss_vs_vort}
\end{subfigure}
\caption{4D representation of the periodic orbit at $M=0.5$. The grey line shows the natural flow evolution from the vicinity of the initial condition towards the periodic orbit. The red symbols indicate key reference points (colour online).}
\label{fig:phase_portrait_Re_2000_M050}
\end{figure}
\begin{figure}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\input{./Re_2000_M050_vort_a}
\label{fig:vort_a}
\end{subfigure}
~\hspace{0.00\textwidth}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\input{./Re_2000_M050_vort_b}
\label{fig:vort_b}
\end{subfigure}
~\hspace{0.00\textwidth}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\input{./Re_2000_M050_vort_c}
\label{fig:vort_c}
\end{subfigure}\\
~\hspace{0.00\textwidth}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\input{./Re_2000_M050_vort_d}
\label{fig:vort_d}
\end{subfigure}
~\hspace{0.00\textwidth}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\input{./Re_2000_M050_vort_e}
\label{fig:vort_e}
\end{subfigure}
~\hspace{0.00\textwidth}
\begin{subfigure}{0.32\textwidth}
\vspace{-2.1cm}
\centering
\input{./Re_2000_M050_vort_colourbar}
\end{subfigure}
\caption{Contours of instantaneous $z$-vorticity at the time steps represented by the red symbols in figure \ref{fig:phase_portrait_Re_2000_M050} (colour online). That is, \textbf{(a)} corresponds to the mark labelled with $a$, \textbf{(b)} with the mark labelled with $b$ and so on.}
\label{fig:vort_series}
\end{figure}
To further illustrate the flow behaviour, snapshots of the vorticity field are shown in figure \ref{fig:vort_series}, where the subfigures are ordered chronologically with the sub-caption matching the instants labelled in figure \ref{fig:diss_vs_vort}. At the instant $a$, the vortex located in the downstream end of the cavity impinges onto the trailing edge (figure \ref{fig:vort_a}). This vortex was also observed in, for example, \cite{brs2008}, and remains in that location throughout the entire periodic orbit. For this reason, it will be referred to as the stationary vortex. At this instant, the vortex is slightly stretched in the vertical direction, which is why it impinges on the trailing edge of the cavity. This stretching is the result of the previous shear layer vortex merging with this stationary vortex, which has led to a low density (and high momentum) area on top of this stationary vortex. Hence, this vortex impingement radiates a low density acoustic wave. Additionally, another vortex is attached to the leading edge of the cavity. As mentioned, this is the instant where the compressible effects are highest, mostly, but not entirely, due to the vortex impingement occurring at the trailing edge of the cavity. Figure \ref{fig:cartoon_a} shows how the sound radiation is mainly generated in the upstream direction. This is due to two aligned dipoles in perfect synchronisation. The stronger dipole is located at the trailing edge of the cavity and its origin is the impingement of the stationary vortex on the cavity's trailing edge, whereas the weaker dipole is caused by the clockwise rotation of the vortex currently attached to the leading edge of the cavity. Note that this dipole interaction partially cancels the sound radiation in the downstream direction.
\begin{figure}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\input{./Re_2000_M050_dil_a_cartoon}
\label{fig:cartoon_a}
\end{subfigure}
~\hspace{0.00\textwidth}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\input{./Re_2000_M050_dil_b_cartoon}
\label{fig:cartoon_b}
\end{subfigure}
~\hspace{0.00\textwidth}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\input{./Re_2000_M050_dil_c}
\label{fig:dil_c}
\end{subfigure}\\
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\input{./Re_2000_M050_dil_d}
\label{fig:dil_d}
\end{subfigure}
~\hspace{0.00\textwidth}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\input{./Re_2000_M050_dil_e}
\label{fig:dil_e}
\end{subfigure}
~\hspace{0.00\textwidth}
\begin{subfigure}{0.32\textwidth}
\vspace{-2.5cm}
\centering
\input{./Re_2000_M050_dil_colourbar}
\end{subfigure}
\caption{\textbf{(a)} and \textbf{(b)} - Illustration of the leading edge and shear layer dipole interaction at instants $a$ and $b$. \textbf{(c)} to \textbf{(e)} - Snapshots of the dilatation field at instants $c$, $d$ and $e$.}
\label{fig:dil_d_e}
\end{figure}
\begin{itemize}
\item[From $a$ to $b$] the intensity of the impingement decreases rapidly, which causes the trailing edge dipole to dissipate while the shear layer dipole gains in intensity. Additionally, the upstream vortex detaches from the leading edge of the cavity. As this vortex moves downstream, a new vortex is formed in the upstream lower corner of the cavity. This vortex rotates counter-clockwise and induces a downwash along the upstream vertical wall of the cavity, which expands the flow as it interacts with the incoming boundary layer. This results in a new dipole attached to the leading edge, which also contributes to the amplification of the acoustic wave reflected from the trailing edge at $a$ (figure \ref{fig:cartoon_b}). When the flow reaches $b$, dilatation and kinetic energy are at their minimum values (figure \ref{fig:ke_vs_dil}). At this point, the trailing edge dipole has vanished, temporarily leaving the shear layer and leading edge dipoles as the leading noise sources.
\item[From $b$ to $c$] the vortex in the shear layer keeps moving forward, with its corresponding dipole now decaying in intensity, but still further amplifying the upstream propagating sound wave. The main contributor to this amplification is now the dipole at the leading edge. The instant $b$ can be seen as the start of a compressible interaction between the shear layer and stationary vortices, where their counter-rotating behaviour compresses the flow between the two vortices, creating a high-density spot. Note that the norm of the dilatation field increases due to this phenomenon (figure \ref{fig:diss_vs_dil}). Also, the norm of vorticity (figure \ref{fig:ke_vs_vort}) reaches minimum values due to the weak interaction of vortical structures with the trailing edge (figure \ref{fig:vort_c}).
\item[From $c$ to $d$] the shear layer and stationary vortices collide and start a merging process. This interaction gains in intensity continuously until the orbit arrives at $d$. This phenomenon is reflected in figure \ref{fig:diss_vs_dil}, where the viscous dissipation rate increases suddenly and reaches a maximum at the instant $d$. The flow compression occurring downstream from the shear layer vortex has grown further and impinges on the trailing edge at $d$. This impingement reflects a high-density wave that also propagates upstream. At the leading edge of the cavity, a new vortex starts forming at $c$ due to the suction caused by the vortex at the upstream lower corner of the cavity. Again, the counter-rotation between the leading edge vortex and the shear layer vortex creates another high-density spot in between them, which further amplifies the upstream propagating high-density wave. The instantaneous dilatation field again shows two dipoles, at the shear layer and trailing edge, in perfect synchronisation radiating noise in the upstream direction (figure \ref{fig:dil_d}). Note that this time, the dipoles show opposite sign with respect to $a$. Hence, $d$ could be seen as the phase counterpart of $a$.
\item[From $d$ to $e$,] soon after $d$, the core of the shear layer vortex gets absorbed by the stationary vortex. As the vortex merging completes, the norm of the viscous dissipation rate experiences a sudden drop (figure \ref{fig:diss_vs_dil}). Meanwhile, the leading edge vortex continues growing in size. As a result of this merging, the stationary vortex is stretched in the vertical direction as it keeps rotating clockwise towards the trailing edge, which causes the trailing edge dipole to slowly invert in sign. At the same time, the shear layer dipole keeps contributing to the upstream sound radiation. Also, the clockwise rotation of the leading edge vortex pushes the flow upwards along the upstream vertical cavity wall, compressing the flow. This phenomenon is observed in the dilatation field (figure \ref{fig:dil_e}) as a weak dipole located at the leading edge, which keeps amplifying the upstream travelling wave. Note that this same amplification mechanism occurred with opposite phase in $c$ (figure \ref{fig:cartoon_b}).
\item[From $e$ to $a$,] the stationary vortex starts interacting with the trailing edge as it keeps rotating clockwise. This interaction is also observed in figure \ref{fig:diss_vs_dil}, where the norm of the dilatation field increases monotonically from $e$ to $a$. During this interval, the leading edge vortex has grown in size considerably, up to about half the cavity length, just before it detaches from the leading edge again at $a$. These two physical phenomena occur in perfect synchronisation, leading to the double dipole, which radiates the acoustic wave upstream.
\end{itemize}
\section{Family of Periodic Solutions Across Mach Number}
\label{sec:periodic_family}
The non-dimensional character of our numerical framework permits us to use a periodic orbit with Mach number $M_0$ as an initial guess in the search for a new periodic orbit at neighbouring Mach numbers $M_0 \pm \delta$. In particular, the orbits were continued from the periodic trajectory found at $M=0.5$, both in ascending and descending order with increments of 0.05 in Mach number. Given the large variations in the flow quantities plotted in figure \ref{fig:Mach_range} for Mach numbers above 0.65, the step size in Mach number was reduced to 0.01. The range of Mach numbers studied covers $M=0.25$ to $M=0.8$. At the lower end (below $M\approx 0.35$), the periodic solution ceases to exist due to the low compressibility of the system, and the flow settles into a steady state. This phenomenon is also reflected in figure \ref{fig:Mach_range}, where the steady (see later section \ref{sec:equilibria}) and periodic solutions are seen to collapse in this lower Mach number regime. On the other hand, for immediately higher Mach numbers, the interaction between the compressible and convective phenomena shown previously for the $M=0.5$ solution results in a family of periodic solutions which are stable.
\newlength\machfigheight
\setlength\machfigheight{2.5cm}
\begin{figure}
\begin{subfigure}{0.49\textwidth}
\centering
\subcaption{}
\vspace{-0.25cm}
\input{./T_vs_M}
\label{fig:T_vs_M}
\end{subfigure}
~\hspace{0.0\textwidth}
\begin{subfigure}{0.49\textwidth}
\vspace{0.06cm}
\centering
\subcaption{}
\vspace{-0.65cm}
\input{./dil_vs_M}
\label{fig:dil_vs_M}
\end{subfigure}\\
\begin{subfigure}{0.49\textwidth}
\centering
\subcaption{}
\vspace{-0.05cm}
\input{./ke_vs_M}
\label{fig:ke_vs_M}
\end{subfigure}
~\hspace{0.0\textwidth}
\begin{subfigure}{0.49\textwidth}
\centering
\subcaption{}
\vspace{-0.05cm}
\input{./vort_vs_M}
\label{fig:vort_vs_M}
\end{subfigure}\\
\begin{subfigure}{0.49\textwidth}
\vspace{0.03cm}
\centering
\subcaption{}
\vspace{-0.325cm}
\input{./diss_vs_M}
\label{fig:diss_vs_M}
\end{subfigure}
~\hspace{0.0\textwidth}
\begin{subfigure}{0.49\textwidth}
\vspace{0.1cm}
\centering
\subcaption{}
\vspace{-0.05cm}
\input{./Re_mom_vs_M}
\label{fig:Re_thet_vs_M}
\end{subfigure}
\caption{\textbf{(a)} - Time period of the orbits as a function of Mach number. The red solid line shows the periods predicted by Rossiter's formula. \textbf{(b)} to \textbf{(f)} - Time-averaged quantities for the periodic orbits (black circles) and steady solutions (red triangles) across Mach number. The open circles represent purely numerical output of the algorithm without any physical interpretation. Colour online.}
\label{fig:Mach_range}
\end{figure}
Figure \ref{fig:Mach_range} shows the period $T$ alongside other time-averaged quantities of interest as functions of Mach number. The periods shown in figure \ref{fig:T_vs_M} are compared with the predictions calculated using Rossiter's semi-empirical formula \citep{rossiter_modes}
\begin{equation}
T_M = \frac{L}{U_\infty} \frac{M+1/\kappa_M}{n-\gamma_M},
\label{eq:rossiter_formula}
\end{equation}
where $n$ is an integer, $L$ is the cavity length and $\kappa_M$ and $\gamma_M$ are empirical constants\footnote{These empirical constants were calculated based on the periods obtained for the periodic orbits at Mach numbers 0.5 and 0.55. After solving the system of two equations with $\kappa_M$ and $\gamma_M$ as unknowns, we arrive at $\kappa_M = 0.6096$ and $\gamma_M = 0.5003$.}. These results correspond to the second cavity mode ($n=2$ - see subsection \ref{subsec:mom_t_and_s} for a description of the cavity mode selection mechanism). The agreement between the periods of the orbits presented in this work and the predictions of Rossiter's formula is remarkably good in the central section of figure \ref{fig:T_vs_M} ($0.35 \leq M \leq 0.65$). This particular range of Mach numbers shows a smooth and monotonic behaviour across all the plots in figure \ref{fig:Mach_range}. Note that the solutions at $M=0.25$ and $M=0.3$ are steady flow solutions, for which the periods shown in \ref{fig:T_vs_M} are purely numerical outputs of the algorithm without physical interpretation. As mentioned in the previous section for the $M=0.5$ orbit, this family of periodic solutions arises from the self-sustained compressible feedback mechanism characteristic of cavity flows. The vortex impingement on the trailing edge of the cavity radiates an upstream travelling acoustic wave that interacts with the oncoming new shear layer vortex. These two phenomena mutually benefit from each other due to their phase synchronisation along the entire orbit. When moving away from the reference $M=0.5$ solution in Mach number, the change in the propagation speed of the upstream travelling acoustic wave results in a phase modification of the interaction between the acoustic wave and the shear layer. For clarity in the following analysis, we refer to a synchronised interaction as `in phase', where the dipoles associated with the shear layer vortex and the vortex impingement at the trailing edge have the same sign and enhance the upstream travelling acoustic wave (for example figures \ref{fig:cartoon_a} or \ref{fig:dil_d}). Conversely, we say that the interaction occurs `out of phase' when these two dipoles have opposite sign and lessen the intensity of the radiated sound wave. The smooth and monotonic increase of the quantities shown in figure \ref{fig:Mach_range} suggests that the higher flow compressibility associated with a higher Mach number dominates the average behaviour of the periodic orbits up to $M=0.65$. From this point onwards, the phase of the interaction appears to become a dominant phenomenon, leading to the oscillatory behaviour of the mean quantities observed mainly in figures \ref{fig:vort_vs_M} and \ref{fig:diss_vs_M}. Additionally, the rate of increase in average dilatation reduces from $M=0.65$ to $M=0.7$ due to the opposite phase of the acoustic wave and the shear layer dipoles, which partially cancels out the compressible phenomena. Note that this phase coupling might also vary as a function of Reynolds number (number of vortices in the shear layer) and cavity length (travelling distance of the upstream propagating acoustic wave). As the acoustic wave keeps decreasing its propagation speed (increasing the Mach number), the phase of the acoustic event becomes favourable again for the two physical mechanisms to work in synchronisation. This behaviour is reflected as a pronounced increase in the average dilatation from $M=0.7$ to $M=0.76$.
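As a quick sanity check of formula (\ref{eq:rossiter_formula}), the following Python sketch evaluates the predicted period using the empirical constants quoted in the footnote; the cavity length and free-stream velocity are placeholders and must be set to the non-dimensional values used by the actual solver.
\begin{verbatim}
def rossiter_period(M, n=2, kappa=0.6096, gamma=0.5003,
                    L=1.0, U_inf=1.0):
    """Period of the n-th Rossiter mode at Mach number M."""
    return (L / U_inf) * (M + 1.0 / kappa) / (n - gamma)

# Predicted second-mode periods, in units of L/U_inf:
for M in (0.4, 0.5, 0.6):
    print(f"M = {M:.2f}: T = {rossiter_period(M):.3f}")
\end{verbatim}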
As discussed later, the higher flow compressibility allows the shear layer not only to modulate the upstream travelling acoustic wave but also to radiate sound of comparable magnitude. This phenomenon, alongside its phase synchronisation with the already existing acoustic wave, is responsible for the sudden and steep changes in the periodic orbits above $M=0.75$ observed in figure \ref{fig:Mach_range}. Moreover, it is also worth tracking the evolution across Mach number of the boundary layer's momentum thickness $\Theta$ at the flow separation point, which is often used to characterise this type of flow. Variations in this particular quantity produce considerable changes in the amplitude of the above-described shear layer oscillations, and also slight modulations of the characteristic frequency. In particular, the amplitude of these oscillations appears to increase with rising $L/\Theta$, with the largest differences expressed in terms of $\overline{\left\lVert e_{kin} \right\rVert}$. Hence, the almost monotonic decrease in $Re_\Theta$ for $M>0.5$ shown in figure \ref{fig:Re_thet_vs_M} agrees with the steep increase in $\overline{\left\lVert e_{kin} \right\rVert}$ shown in figure \ref{fig:ke_vs_M} for the highest Mach number orbits. Furthermore, the trajectory's period also shows a close relation with $Re_\Theta$, where a decrease in $L/\Theta$ (increase in $Re_\Theta$) yields a longer period (see section \ref{subsec:mom_t_and_s} and also appendix \ref{appA}). Especially for $M>0.65$, an oscillatory behaviour opposite to that of the orbit's period as a function of Mach number (figure \ref{fig:T_vs_M}) is reflected in figure \ref{fig:Re_thet_vs_M}. More precisely, Mach number ranges which present a steep increase in the orbit's period correspond to a steep decrease in $Re_\Theta$ over the same range. Hence, it appears that the optimisation algorithm uses the incoming boundary layer thickness to balance the frequency of each flow trajectory, maintaining it within the proximity of the predictions from Rossiter's formula (\ref{eq:rossiter_formula}).
\begin{figure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./diss_vs_dil_all}
\label{fig:diss_vs_dil_all}
\end{subfigure}
~\hspace{0.02\textwidth}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./ke_vs_dil_all}
\label{fig:ke_vs_dil_all}
\end{subfigure}\\
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./diss_vs_vort_all}
\label{fig:diss_vs_vort_all}
\end{subfigure}
~\hspace{0.02\textwidth}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./ke_vs_vort_all}
\label{fig:ke_vs_vort_all}
\end{subfigure}
\caption{4D representation of the family of solutions across Mach number, ranging from 0.25 to 0.65. The equilibrium solutions from Mach numbers 0.25 to 0.65 (ordered with increasing kinetic energy) are represented with red dots.}
\label{fig:phase_portrait_Re_2000_all}
\end{figure}
\subsection{Low Mach number regime}
Figure \ref{fig:phase_portrait_Re_2000_all} shows the phase portraits of the periodic orbits up to $M=0.65$. Note that three of the main features described in the previous section can be identified in figure \ref{fig:diss_vs_dil_all}. Instants $a$ and $b$ (vortex impingement, and shear layer vortex moving forward before interacting with the stationary vortex, respectively) correspond to the local maximum and minimum values of dilatation in the low viscous dissipation zone of the orbit. Instant $c$ in figure \ref{fig:phase_portrait_Re_2000_M050} (the beginning of the merging of the shear layer and stationary vortices) merges with $b$ for Mach numbers above 0.5, as the difference between flow and sound velocity decreases. On the other hand, instant $d$ (maximum intensity of the vortex merging) always follows the absolute maximum value of the viscous dissipation rate, which peaks at $M=0.6$. Interestingly, at Mach numbers 0.35 and 0.4 the strongest compressible event (maximum dilatation) is the vortex merging. As the Mach number increases further, the maximum value of the norm of dilatation shifts to the instant corresponding to the vortex impinging on the trailing edge. This transition is strongly related to the physical mechanisms that cause the bifurcation of this family of periodic solutions from the steady solutions.
As shown in figure \ref{fig:M025_vort}, the solution at $M=0.30$ sits at perfect equilibrium, whereas a periodic trajectory exists for immediately higher Mach numbers (figure \ref{fig:phase_portrait_Re_2000_all}). Figure \ref{fig:M035_vort} shows the instantaneous vorticity field of the $M=0.35$ orbit at the point where the norm of the dilatation field is highest. When the steady solution becomes unstable, the weak leading edge vortex gets absorbed by the stationary vortex. During this merging process, similarly to the $M=0.5$ solution, the counter rotation of the two vortices originates a local flow compression, located downstream of the leading edge vortex (also called shear layer vortex). This phenomenon is responsible for the highest norm of the dilatation field and coincides in time with the highest norm of the viscous dissipation. From a density field perspective, this vortex counter-rotation leads to a high density spot sitting between the two vortices, whereas the vortices have an associated low density area on top of each one of them. Also note that the flow-field shown in figure \ref{fig:M035_vort} is the $M=0.35$ equivalent of the one shown in figure \ref{fig:vort_d} for the $M=0.5$ case. The reason why the vortex impingement is not as relevant from the compressible point of view is that the shear layer vortex is not strong enough to withstand the orbit of the stationary vortex and it dissipates very rapidly. However, as the Mach number increases, the leading edge vortex gains in strength, which eventually makes the low-density area impingement the flow event with the highest compressibility. In addition, it is worth pointing out that the absolute minima of vorticity shown in figure \ref{fig:diss_vs_vort_all} represent the start of the merging between the shear layer and stationary vortices (instant $c$). Moreover, figure \ref{fig:ke_vs_vort_all} shows how the kinetic energy varies significantly with Mach number. Even though this is partially induced by the drop in $\Theta$, the overall change in the shape of the projection of the orbit also suggests that the phase of the interaction between the shear layer and the acoustic wave begins to dominate the flow behaviour. Furthermore, the maximum value of the kinetic energy norm appears to reach saturation from $M=0.6$ to $M=0.65$, which shows that this interaction is shifting towards an out-of-phase synchronisation.
\begin{figure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./Re_2000_M025_vort}
\label{fig:M025_vort}
\end{subfigure}
~\hspace{0.02\textwidth}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./Re_2000_M035_vort}
\label{fig:M035_vort}
\end{subfigure}\vspace{-4.60cm}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\input{./M_025_035_colourbar}
\end{subfigure}\vspace{3.75cm}
\caption{Instantaneous contours of $z$-vorticity at Mach numbers 0.30 \textbf{(a)} and 0.35 \textbf{(b)}.}
\label{fig:M_025_035}
\end{figure}
\begin{figure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./diss_vs_dil_all2}
\label{fig:diss_vs_dil_all2}
\end{subfigure}
~\hspace{0.02\textwidth}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./ke_vs_dil_all2}
\label{fig:ke_vs_dil_all2}
\end{subfigure}\\
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./diss_vs_vort_all2}
\label{fig:diss_vs_vort_all2}
\end{subfigure}
~\hspace{0.02\textwidth}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./ke_vs_vort_all2}
\label{fig:ke_vs_vort_all2}
\end{subfigure}
\caption{4D representation of the family of solutions across Mach number, ranging from 0.65 to 0.75.}
\label{fig:phase_portrait_Re_2000_all2}
\end{figure}
\subsection{Phase-dominated Mach number regime}
Figure \ref{fig:phase_portrait_Re_2000_all2} shows the phase-dominated Mach number range ($0.65<M<0.75$). Unlike the lower Mach number orbits represented in figure \ref{fig:phase_portrait_Re_2000_all}, the averaged norms of vorticity and viscous dissipation rate do not increase monotonically with Mach number (figure \ref{fig:Mach_range}). Instead, they describe an oscillatory behaviour related to the phase of the interaction between the acoustic wave and the shear layer. Figure \ref{fig:diss_vs_dil_all2} highlights how the phase of this interaction modifies the orbit at the vortex impingement on the trailing edge. As mentioned earlier in section \ref{sec:PS_M050}, this phenomenon is represented as the top left loop observed in the periodic orbits shown in figure \ref{fig:diss_vs_dil_all2}, which slowly unfolds as the Mach number increases. For the orbit at $M=0.74$, the beginning and the maximum intensity of the vortex impingement are distinctively represented as two sharp corners. Overall, the presence of more abrupt and complex features in these phase plots is strongly linked to higher flow compressibility, which permits the appearance of new dominant flow mechanisms. To give insight into the interaction between the shear layer and the upstream travelling acoustic wave, figure \ref{fig:dil_max_65_and_74} shows the dilatation field at the maximum of the norm of dilatation (instant $a$) for Mach numbers 0.65 and 0.74. In figure \ref{fig:dil_65_max}, the interaction occurs in opposite phase, where the shear layer dipole induces a slight curvature in the upstream propagating sound wave. As the Mach number keeps increasing (figure \ref{fig:dil_74_max}), the speed of sound decreases relative to the flow velocity, bringing this interaction further out of phase. On the other hand, the greater flow compressibility in this scenario permits the shear layer dipole to grow, and it now radiates sound of comparable magnitude to the vortex impingement on the cavity's trailing edge. In essence, this shear layer dipole has progressed from slightly modifying the upstream propagating acoustic wave to generating its own sound wave of similar magnitude. Hence, as the speed of sound decreases and the magnitude of the shear layer dipole grows, the combination of the two acoustic waves slowly shifts the main sound radiation towards a more vertical direction. For comparative purposes, the dilatation contours in these figures show remarkable agreement with those shown by \cite{rowley2002self}, despite their different configuration of $L/D=2$.
Returning to figure \ref{fig:phase_portrait_Re_2000_all2}, the vortex impingement now corresponds to the highest value of dilatation and also lies in the vicinity of the highest kinetic energy, which differs from the lower Mach number range shown in figure \ref{fig:phase_portrait_Re_2000_all}. Similarly to the previous Mach number range, the maximum intensity of the vortex merging is also identified as the instant of maximum viscous dissipation rate in figures \ref{fig:diss_vs_dil_all2} and \ref{fig:diss_vs_vort_all2}. These two figures also show the shear layer vortex travelling downstream, right after detaching from the leading edge, as the minimum point in dilatation, vorticity and viscous dissipation. Shortly before that instant, the trajectory projected onto viscous dissipation and vorticity briefly appears to become independent of Mach number, where all the trajectories collapse. Also, as seen before in figure \ref{fig:ke_vs_M}, the average kinetic energy follows an increasing quasi-linear trend up to $M=0.8$. This is also reflected in figures \ref{fig:ke_vs_dil_all2} and \ref{fig:ke_vs_vort_all2}, where the horizontal displacement of the orbits as a function of Mach number is considerably larger than observed in figure \ref{fig:ke_vs_vort_all}. This phenomenon was found to be strongly related to the substantial drop in $\Theta$ for these periodic solutions.
\begin{figure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./Re_2000_M065_dil_max}
\label{fig:dil_65_max}
\end{subfigure}
~\hspace{0.02\textwidth}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./Re_2000_M074_dil_max}
\label{fig:dil_74_max}
\end{subfigure}\vspace{-5.6cm}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\input{./M_065_074_dil_max_colourbar}
\end{subfigure}\vspace{5.5cm}
\caption{Snapshots of the dilatation field at the maximum value of the norm of dilatation for Mach numbers 0.65 \textbf{(a)} and 0.74 \textbf{(b)}.}
\label{fig:dil_max_65_and_74}
\end{figure}
\begin{figure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./diss_vs_dil_all3}
\label{fig:diss_vs_dil_all3}
\end{subfigure}
~\hspace{0.02\textwidth}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./ke_vs_dil_all3}
\label{fig:ke_vs_dil_all3}
\end{subfigure}\\
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./diss_vs_vort_all3}
\label{fig:diss_vs_vort_all3}
\end{subfigure}
~\hspace{0.02\textwidth}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./ke_vs_vort_all3}
\label{fig:ke_vs_vort_all3}
\end{subfigure}
\caption{4D representation of the family of solutions across Mach number, ranging from 0.75 to 0.8.}
\label{fig:phase_portrait_Re_2000_all3}
\end{figure}
\subsection{High Mach number regime}
The trajectories corresponding to the highest Mach numbers ($0.75<M<0.8$) from the family of solutions shown in figure \ref{fig:Mach_range} are gathered in figure \ref{fig:phase_portrait_Re_2000_all3}. In this regime, the high flow compressibility alongside the phase of the flow-acoustic interaction favours the appearance of a new acoustic wave caused by the dipole associated with the shear layer vortex. This new acoustic wave will be referred to as the shear layer acoustic wave. As mentioned earlier, the acoustic radiation from the trailing edge arises from the impingement of both low and high density flow events at this location. The low density events are related to the shear layer vortex which merges with the stationary vortex, whereas the high density ones result from the counter-rotation of the stationary and shear layer vortices, which compresses the flow between them. For the lowermost Mach number orbits ($M<0.65$), the higher speed of sound relative to the flow velocity results in a favourable (in phase) synchronisation of the shear layer and trailing edge dipoles, which further enhances the upstream radiated acoustic wave. In the intermediate Mach number regime, the propagation speed of sound is lower and the shear layer dipole grows in magnitude. These two phenomena partially cancel out the upstream sound radiation (out of phase synchronisation), which shifts the leading sound radiation towards the vertical. In the current Mach number range, the large flow compressibility has reinforced the shear layer dipole as a leading contributor to the overall sound radiation of the system. In addition, the speed of sound continues to decrease, which increases the lag of the interaction between the acoustic wave radiated from the trailing edge and the shear layer. Recalling figure \ref{fig:dil_vs_M}, the time-averaged norm of dilatation experiences a sudden drop after $M=0.76$, reaching a local minimum at $M=0.78$ and finally increasing rapidly again up to the upper end of our Mach number range. This non-monotonic behaviour differs from the orbits at the lower Mach numbers and is also reflected in figures \ref{fig:diss_vs_dil_all3} and \ref{fig:ke_vs_dil_all3}. In these figures we observe that the oscillations in the averaged norm of dilatation are not caused by a single event with a radically higher or lower compressibility, but by the entire orbit, which raises and lowers the flow compression as a whole. The origin of these global changes resides in both the increasing flow compressibility as the Mach number is raised and the phase of the interaction between the acoustics and the shear layer dipole. Note that this interaction also occurs at the lower Mach numbers, but there, due to the weaker shear layer dipole, these changes in dilatation were masked by the flow compressibility increasing steadily with Mach number.
Similarly, all quantities with the exception of the norm of kinetic energy do not follow a constant trend but instead oscillate. Conversely, the overall kinetic energy keeps increasing as the Mach number is raised, but this time the phenomenon has nothing to do with the changing flow compressibility. As we will see, the cause of this phenomenon is the monotonic drop in $\Theta$ seen in figure \ref{fig:Re_thet_vs_M}. For a more intuitive interpretation of these physical events, figure \ref{fig:dil_max_vort} shows snapshots of the dilatation field at Mach numbers 0.76, 0.78 and 0.8 at the point of maximum norm of the vorticity field (figure \ref{fig:phase_portrait_Re_2000_all3}). The positions of these snapshots are highlighted in figure \ref{fig:ke_vs_vort_all3} with red square symbols on the flow orbit, and they correspond to the instant when the vortex impingement (low density event) occurs. In the same figure, the red circles show the opposite phase to the plots in figure \ref{fig:dil_max_vort}, which indicates the impingement of the high density area onto the trailing edge. The interference between the upstream propagating acoustic wave and the shear layer acoustic wave, which we observed for the first time in figure \ref{fig:dil_74_max}, grows continuously through $M=0.76$ (figure \ref{fig:dil_max_vort_M076}), reaching the anti-phase of the interaction at $M=0.78$. At this point, both acoustic waves cancel each other out in the vicinity of the cavity (figure \ref{fig:dil_max_vort_M078}). This phenomenon causes the sudden drop in the average dilatation field from $M=0.76$ to $M=0.78$. From this Mach number onwards, the shear layer becomes more energetic, and the phase of the interaction slowly becomes favourable again. Note that the features of the purely convecting events (i.e.~vortices) are barely altered throughout this interval. Additionally, despite the considerable difference in Mach number, the shape and alignment of the dipoles related to convective phenomena are remarkably similar to the ones illustrated in figure \ref{fig:cartoon_a} for the $M=0.5$ case. Furthermore, the qualitative agreement of figure \ref{fig:dil_max_vort_M080} with the data shown by \cite{rowley2002self} (their figure 6) is excellent.
\begin{figure}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\input{./dil_max_vort_M076}
\label{fig:dil_max_vort_M076}
\end{subfigure}
~\hspace{0.00\textwidth}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\input{./dil_max_vort_M078}
\label{fig:dil_max_vort_M078}
\end{subfigure}
~\hspace{0.00\textwidth}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\input{./dil_max_vort_M080}
\label{fig:dil_max_vort_M080}
\end{subfigure}\vspace{-4.15cm}
\begin{subfigure}[b]{\textwidth}
\centering
\hspace{-0.15cm}
\input{./dil_max_vort_colourbar}
\end{subfigure}
\vspace{3.0cm}
\caption{Instantaneous contours of the dilatation field at Mach numbers 0.76 \textbf{(a)}, 0.78 \textbf{(b)} and 0.8 \textbf{(c)}. These instants correspond to the maximum norm of the vorticity field along the periodic orbit.}
\label{fig:dil_max_vort}
\end{figure}
\subsection{Momentum thickness effect and stability}
\label{subsec:mom_t_and_s}
In the family of periodic orbits presented above, the optimisation algorithm induced a progressive drop in the incoming boundary layer thickness as the Mach number was raised (figure \ref{fig:Re_thet_vs_M}). In order to isolate the effect of this phenomenon on the periodic orbits, we have computed an additional periodic solution at $M=0.8$ with the same incoming boundary layer as the initial orbit at $M=0.5$. The $Re_\Theta$ of this new orbit is 64.37, whereas the continued orbit at the same Mach number presents a $Re_\Theta=23.06$. After a preliminary observation of these orbits \citep{mythesis}, we have concluded that the orbits describe trajectories of identical shape, where the new periodic solution presents a negative shift in its average norms of dilatation and kinetic energy. The fact that the shape of both trajectories is the same suggests that the physical mechanisms which govern the flow remain unchanged. In addition, the period of the new orbit is almost identical to that of the respective continued orbit (see appendix \ref{appA}), which confirms that both sets of solutions correspond to the same Rossiter mode. On the other hand, the thicker incoming boundary layer in the new orbit is the physical cause of the negative shift in dilatation, and mainly also in kinetic energy. Bear in mind that this phenomenon was also observed in figure \ref{fig:ke_vs_vort_all3}, where all the periodic solutions from this particular Mach number range exhibit a progressive shift in the kinetic energy norm as the boundary layer thickness is reduced. One of the primary effects of this smaller $Re_\Theta$ in the continued flow trajectory (caused only by the decrease of $\Theta$) is a more unstable character of the shear layer \citep{brs2008}. This enhanced instability yields a stronger leading edge vortex which, following the flow mechanisms described earlier in section \ref{sec:PS_M050}, eventually travels downstream and impinges onto the cavity's trailing edge, radiating a stronger acoustic wave.
In order to assess the stability of the periodic solutions presented in this section, we introduced random noise perturbations across the entire flow-field for several flow orbits. Despite the amplitude of these disturbances ranging up to $10\%$ of $Q$ in some situations, the flow-field eventually adjusted back to the unperturbed trajectory in every case, stabilised by the flow-acoustic feedback mechanism. In addition, bear in mind that random noise perturbations are essentially strong numerical point-to-point oscillations in the flow-field. For this reason, they might be interpreted as spurious oscillations by the high-order explicit filter applied by the DNS code \citep{rsand}, which partially removes these disturbances, contributing to the stability of the periodic orbits. Instead, to perturb the periodic solutions more efficiently, we now introduce an initial body force disturbance in the streamwise and vertical momentum state variables. The spatial domain of activity for this perturbation is defined as
\begin{equation}
W_f \left(\vec{x}\right) = \mathrm{e}^{-\frac{\left(x-x_0\right)^2+\left(y-y_0\right)^2}{0.05}},
\end{equation}
where both streamwise and vertical momentum components experience an external forcing of up to $-0.1\rho_\infty u_\infty$ at the centre of the Gaussian function, at $\vec{x}_0=\left(1,1\right)$. The exact location of this forcing has been carefully chosen to alter the flow-field at the shear layer and in the proximity of the cavity's leading edge. If any exists, an unstable shear layer mode would exhibit its highest receptivity values at these particular spatial regions (see section \ref{subsec:stability_of_equilibria}). The flow trajectories we perturb are the flow solution at $M=0.8$ continued from the original periodic trajectory at $M=0.5$ (named M080), and the one obtained straight from the developed flow (M080-fd). The perturbed orbits are denoted with the prefix `p-' (i.e. p-M080 for the perturbed M080 and so on).
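A minimal Python sketch of this localised body-force disturbance is given below; the grid arrays and momentum fields are hypothetical placeholders for the solver's state, and the amplitude follows the value quoted above.
\begin{verbatim}
import numpy as np

def gaussian_window(x, y, x0=1.0, y0=1.0, width=0.05):
    # Spatial window W_f centred at (x0, y0), as defined above.
    return np.exp(-((x - x0)**2 + (y - y0)**2) / width)

def perturb_momentum(rho_u, rho_v, x, y, rho_inf=1.0, u_inf=1.0):
    # Apply the body-force disturbance to both momentum components,
    # peaking at -0.1 * rho_inf * u_inf at the Gaussian's centre.
    forcing = -0.1 * rho_inf * u_inf * gaussian_window(x, y)
    return rho_u + forcing, rho_v + forcing
\end{verbatim}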
\begin{figure}
\begin{subfigure}[b]{0.55\textwidth}
\centering
\subcaption{}
\vspace{-0.175cm}
\input{./stability_M080_fft}
\label{fig:stability_M080_fft}
\end{subfigure}
~\hspace{0.0\textwidth}
\begin{subfigure}[b]{0.43\textwidth}
\centering
\subcaption{}
\input{./stability_M080_zoom}
\label{fig:stability_M080_zoom}
\end{subfigure}
\caption{\textbf{(a)} Frequency spectrum of the perturbed flow trajectories at $M=0.8$ and the exact (continued) solution also at $M=0.8$. \textbf{(b)} Final stage of p-M080 decaying to M080.}
\label{fig:stability_M080_more}
\end{figure}
Figure \ref{fig:stability_M080_fft} displays the Fourier-transformed time signals of the norm of the dilatation field for M080, p-M080 and p-M080-fd. For simplicity, M080-fd has not been represented as it yields almost identical results to M080. Indeed, due to the high receptivity of the area surrounding the cavity's leading edge, this perturbation initially triggers the first Rossiter mode for both p-M080 and p-M080-fd, which appears as a peak at $St\approx 0.103$. Taking a closer look at how p-M080 decays to M080 (figure \ref{fig:stability_M080_zoom}), we can spot the influence of the first Rossiter mode on the trajectory described by p-M080. Once the perturbed state p-M080 reaches the vicinity of M080, it commences an oscillation about the M080 flow solution with both the frequency and the exponential decay rate associated with this stable first Rossiter mode. The stability of these Rossiter modes is related to the mode selection phenomenon also discussed by \cite{brs2008}. This flow mechanism consists of the cavity flow `choosing' one Rossiter mode to govern the shear layer dynamics, while all the remaining modes decay exponentially. This mode selection is not fully understood yet and, according to the literature, it appears to depend on parameters such as the Reynolds number or the cavity's aspect ratio. \cite{brs2008} reported for their 2M06 case that the shear layer oscillated with the first Rossiter mode. Similarly to the present investigation, in the initial stages of time marching they also observed an additional Rossiter mode (in their case the second mode) with relevant activity, which eventually decayed completely. Furthermore, in a potential three-dimensional scenario, they suggested that the interaction with the spanwise modes might affect the selection of the dominating Rossiter mode.
\subsection{Overall Sound and Directivity}
\label{subsec:oaspl_dir}
So far in the present section, we have seen how the overall behaviour of the periodic orbits changes as a consequence of the phase modification of the interaction between the shear layer and the trailing edge acoustic radiation. In particular, for the higher Mach numbers $\left(M>0.65\right)$ the sound radiation of the shear layer is of comparable magnitude to the radiation from the trailing edge. This leads to an enhanced interaction which, as seen in the above dilatation plots, has an effect on the energy and directivity of the overall sound radiation. To characterise the sound (or noise) radiation for the current family of periodic orbits we define the overall sound pressure level as \begin{equation}
\mathrm{OSPL} = 10 \log_{10} \left( \frac{\int^{\infty}_{-\infty} \left| \mathcal{F} \left( p'\left(t\right) \right) \right|^2 \, \mathrm{d}f}{\left(p_{ref}\right)^2}\right),
\label{eq:ospl}
\end{equation}
where $p'\left(t\right)$ are the pressure fluctuations at the measurement point, $p_{ref}$ is the reference pressure level set as $2 \cdot 10^{-5}$ and $\mathcal{F}$ indicates a Fourier transform. To evaluate the sound directivity, the OSPL was computed at 35 equally distributed monitor points along an arc of radius 5, each of them separated by 5 degrees. The centre of the arc is located at coordinates 1.5 and 1 in the streamwise and vertical directions, respectively. The OSPL values of the most representative periodic orbits are gathered in figure \ref{fig:oaspl_polar}, where angles lower than 90 degrees and greater than 90 degrees correspond to upstream and downstream sound radiation, respectively. The OSPL values are normalised with the maximum upstream propagating OSPL value from the orbit at $M=0.5$.
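A sketch of the OSPL evaluation at a single monitor point is shown below, assuming a pressure signal \texttt{p} sampled at a constant interval \texttt{dt}; windowing and normalisation conventions would need to match the actual post-processing used here.
\begin{verbatim}
import numpy as np

def ospl(p, dt, p_ref=2e-5):
    # Power spectrum of the pressure fluctuations p'(t) ...
    p_fluct = p - np.mean(p)
    spectrum = np.abs(np.fft.rfft(p_fluct))**2
    df = 1.0 / (len(p) * dt)   # frequency resolution
    # ... integrated over frequency and referenced to p_ref.
    return 10.0 * np.log10(np.sum(spectrum) * df / p_ref**2)
\end{verbatim}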
\begin{figure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./oaspl_polar}
\label{fig:oaspl_polar1}
\end{subfigure}
~\hspace{0.0\textwidth}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./oaspl_polar2}
\label{fig:oaspl_polar2}
\end{subfigure}
\caption{OSPL directivity across Mach number.}
\label{fig:oaspl_polar}
\end{figure}
In section \ref{sec:PS_M050} we observed how the shear layer and leading edge dipoles enhance the acoustic wave originally radiated by the trailing edge at $M=0.5$. At the same time, the interaction of the shear layer and trailing edge dipoles partially cancels out the sound radiation in the downstream direction (figure \ref{fig:cartoon_a}). This behaviour agrees with the content of figure \ref{fig:oaspl_polar1}, where the OSPL values reach higher magnitudes at the upstream propagating angles, and the downstream propagation is severely mitigated by the shear layer dipole. The quasi-circular shape of the plots at Mach numbers 0.4 and 0.5 indicates that this upstream sound radiation is almost entirely governed by the cavity's trailing edge noise. Note that this is not the case for $M=0.6$ and $M=0.65$, where the shear layer is compressible enough to start radiating sound. As the Mach number is further increased, the higher flow compressibility leads to stronger dipoles at the leading edge and shear layer. This phenomenon, in combination with a lower speed of sound, forces the above-mentioned interaction to drift slowly out of synchronisation. The OSPL values for Mach numbers 0.6 and 0.65 show how the dipole located at the leading edge cancels out the upstream sound propagation in the vicinity of the wall. Furthermore, the high intensity of the shear layer dipole is reflected in the more pronounced lobe at about $140^{\circ}$. Note that as the Mach number is increased, the leading sound radiation direction shifts monotonically towards a higher angle. This trend also applies to the higher Mach numbers (figure \ref{fig:oaspl_polar2}). At Mach numbers from 0.65 to 0.70, the dipole interaction remains out of synchronisation, which leads to a substantial reduction of the acoustic radiation in the upstream direction. As the interaction becomes favourable again from $M=0.70$ to $M=0.76$, the upstream sound radiation increases in the $30^{\circ}$ to $60^{\circ}$ range. This is solely caused by the higher radiation of the shear layer dipole in this Mach number range. As shown in figure \ref{fig:dil_max_vort}, for Mach numbers above $M=0.76$ the sound radiation is split into two different waves, where now the shear layer dipole is responsible for the upstream propagating acoustics. This is observed in figure \ref{fig:oaspl_polar2} as two main distinct lobes for $M>0.76$. The sound radiation from the cavity's trailing edge is cancelled out almost completely in the upstream direction as it interacts with the shear layer acoustics, which forces its propagation in the vertical direction.
\section{Equilibrium Solutions}
\label{sec:equilibria}
In the preceding section we have shown how the periodic solutions are dominated by compressible events. As the Mach number was decreased down to the incompressible regime ($M \approx 0.30$), the amplitude of this periodic behaviour decayed to zero, giving rise to a steady state. Hence, as shown in figure \ref{fig:Mach_range}, this incompressible limit can be seen as a bifurcation point between the families of periodic and steady (equilibrium) solutions. In the sequel we show that, away from the lower Mach number regime ($M\gtrsim 0.35$), these steady solutions are unstable, where a perturbation suffices to trigger a transition towards the limit cycle (figure \ref{fig:steady_to_periodic_transition}). Bear in mind that the steady solutions are not seen in a naturally evolving flow at compressible Mach numbers (where the compressible effects are not negligible, $M \gtrsim 0.30$). Interestingly, the flow topology of these equilibrium states matches the flow features shown earlier in figure \ref{fig:M025_vort}. Similarly to the periodic orbits, these solutions consist of a weak shear layer vortex which counter-rotates with respect to the stationary vortex, located at the downstream end of the cavity. From the same figure, we also see that the stationary and shear layer vortices interact to maintain a fragile equilibrium. A small perturbation crossing the shear layer would trigger the merging process between these two vortices, as we saw in section \ref{sec:PS_M050}. These particular flow features remain almost unaltered throughout the entire Mach number range, where only the norms of dilatation (figure \ref{fig:dil_vs_M}) and kinetic energy (figure \ref{fig:ke_vs_M}) show significant changes. The flow variations reflected in these two norms are strongly linked to the higher flow compressibility associated with a higher Mach number. Similarly to the periodic orbits, the counter-rotating character of the shear layer and stationary vortices gives rise to a flow compression at the mid-point between the two vortices. Just downstream of this location, the stationary vortex further accelerates the flow on top of the shear layer, which causes a flow expansion. However, the strongest compressible phenomenon in this family of equilibrium solutions is the flow compression in the vicinity of the trailing edge of the cavity, which emanates from the constant (this time steady) flow impingement onto the trailing edge. Thus, the increase in the norm of dilatation is caused by the strongest density gradients in the flow, whereas the increase in kinetic energy arises mainly from the stronger compressible effects above the stationary vortex.
\begin{figure}
\begin{subfigure}[b]{0.42\textwidth}
\centering
\subcaption{}
\vspace{-0.3cm}
\input{./steady_to_periodic_M050}
\label{fig:steady_to_periodic_transition}
\end{subfigure}
~\hspace{0.0\textwidth}
\begin{subfigure}[b]{0.55\textwidth}
\centering
\subcaption{}\hspace{-0.90cm}
\input{./transition_cost_vs_time}
\label{fig:transition_cost}
\end{subfigure}
\caption{\textbf{(a)} Transition from the steady solution to its associated periodic orbit at $M=0.5$. \textbf{(b)} Evolution of the cost function in the initial stage of the transition from steady to periodic flow.}
\label{fig:M050_steady_to_periodic}
\end{figure}
\subsection{Stability of Equilibrium Solutions}
\label{subsec:stability_of_equilibria}
The bifurcation between the families of steady and periodic solutions was illustrated in figure \ref{fig:Mach_range}. In this figure, it is straightforward to notice how the periodic and steady families get progressively closer as we descend in Mach number and completely merge for $M\leq0.3$. Thus, in order to determine the nature of the bifurcation, we require a detailed stability analysis of these equilibrium solutions. To this end, we make use of both forward (non-linear) and adjoint (linear) Navier-Stokes solvers available in our code \citep[see][for a description of the adjoint framework used]{mythesis}. Forward and adjoint stability analyses yield information about the system's response and receptivity, respectively, and they can also be combined to reveal the so-called `{wave maker}' or core of the instability. For a detailed introduction to global stability analysis and the use of forward and adjoint global modes, the reader is referred to the reviews by \cite{vassilios_gls} and \cite{luchini_adjoint}. To perform the stability analysis, we use dynamic mode decomposition (DMD) to obtain the eigenvalues and eigenvectors (modes) of the system straight from the data produced by our numerical solvers, without any further modifications. This approach seeks a flow decomposition of the form \begin{equation}
\tilde{Q}\left(t\right) = Q_0 + \sum_{n=1}^{N_{mod}} \Psi_n e^{\mu_n t} + c.c.,
\label{eq_2:gm_decomposition}
\end{equation} where $c.c.$ indicates complex conjugate and $\Psi_n$ and $\mu_n$ are the system's eigenvectors and eigenvalues. According to \cite{dmd}, DMD recovers the global modes of a linear process, whereas for the non-linear case ``it identifies the dominant frequencies and their associated spatial structures''. Since we lack a linearised Navier-Stokes solver, the forward global modes are approximated assuming linear behaviour of the full non-linear Navier-Stokes equations in the initial stages of time marching. Note that for unstable steady solutions, the flow undergoes a transition from a steady state to periodic orbit. Consequently, extending the dataset too much might result in the DMD framework accounting for non-linear dynamics related to the limit cycle, whereas a short dataset might not produce converged results. On the other hand, the adjoint equations are linear, which allows extending the adjoint dataset as required to obtain converged adjoint dynamic modes. Hence, since the spectrum recovered from the forward and adjoint simulations should coincide, we use the adjoint eigenvalues as the reference and also to check the approximated forward modes.
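For reference, a minimal sketch of the standard SVD-based DMD algorithm \citep{dmd} applied to a snapshot matrix is given below; this illustrates how the decomposition (\ref{eq_2:gm_decomposition}) can be extracted from data, and is not the actual post-processing tool used here.
\begin{verbatim}
import numpy as np

def dmd(X, rank):
    # X: snapshot matrix of shape (n_dof, n_snapshots),
    # sampled at a constant time interval.
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vh.conj().T[:, :rank]
    A_tilde = U.conj().T @ X2 @ V / s   # reduced linear operator
    lam, W = np.linalg.eig(A_tilde)     # discrete-time eigenvalues
    Phi = X2 @ V / s @ W                # dynamic modes
    return lam, Phi
\end{verbatim}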
\subsubsection{Global Stability Analysis at $M=0.5$}
From figure \ref{fig:M050_steady_to_periodic} we can see that the equilibrium solution at $M=0.5$ is unstable. Figure \ref{fig:steady_to_periodic_transition} shows the evolution of this unstable solution, where shortly after the beginning of the simulation, the flow instability experiences low amplitude oscillations which grow exponentially in time. This amplitude growth continues until the flow-field approaches the vicinity of the periodic solution. There, the amplitude's growth rate decays rapidly, where the flow saturates and follows closely the trajectory described by the periodic orbit. Therefore, in order to approximate the forward global modes using DMD, we must establish a time interval where the global behaviour of the system can be approximated as linear. Figure \ref{fig:transition_cost} shows the value of the cost function (\ref{eq:steady_cost_f}) over time. The initial steep increase in the cost function is the result of the flow solver reacting to an initial condition which was computed externally. For instance, the boundary conditions and derivative routines adapt the steady solution to satisfy the physical conditions imposed at the boundaries, and also fit the flow-field onto a polynomial function of the same order as the derivative routines. After this initial `rejection' is overcome, one would expect the divergence of the flow-field from the initial steady solution to become much more drastic once the non-linear effects start taking place. We estimate the start of the non-linear flow behaviour from figure \ref{fig:transition_cost} as the point where the trajectory of the cost function abandons the linear trend exhibited approximately from 12 to 47 time units. On the other hand, the converged eigenvalue spectrum obtained through the adjoint simulations used 801 snapshots sampled equidistantly across a time span of 220 time units. To avoid unwanted influence from the boundary conditions, the snapshot sequences (both adjoint and forward datasets) were cropped down to the vicinity of the cavity.
\begin{figure}
\centering
\input{./spectrum_M050}
\caption{Eigenvalue spectrum for the steady solution at $M=0.50$. The red circular markers show the eigenvalues calculated through the adjoint simulations. The blue squared symbols represent the approximate eigenvalues obtained using the non-linear forward simulations. The smaller squared symbols show purely numerical modes (noise), which are a product of the insufficiently long forward snapshot sequence.}
\label{fig:M050_spectrum}
\end{figure}
Figure \ref{fig:M050_spectrum} shows the eigenspectrum calculated using both forward and adjoint snapshot sequences. These eigenvalues have been transformed to the continuous-time form using the relations $\mu_r = \operatorname{\mathbb{R}e}\lbrace \log\left(\lambda_n\right)\rbrace/\Delta t$ and $\mu_i = \operatorname{\mathbb{I}m}\lbrace \log\left(\lambda_n\right)\rbrace/\left(2 \pi \Delta t\right)$, indicating growth rate and Strouhal number ($St = fD/U_\infty$), respectively. The resulting leading (most unstable) eigenvalues yielded by both forward and adjoint approaches are very similar. The exact values produced by the adjoint DMD for this particular mode are $\mu_r=0.042644$ and $\mu_i=0.246062$, where the forward DMD results are just $3.33\%$ off. Also note the closeness of both approaches on the secondary eigenvalue at $\mu_r=-0.007837$ and $\mu_i=0.153670$. The remaining unmatched eigenvalues predicted by the forward DMD appear as a direct consequence of an insufficiently long snapshot sequence to produce a fully converged spectrum. This phenomenon was also observed when studying the convergence of the adjoint spectrum, where most of the damped eigenvalues ($\mu_r < 0$) further decayed as the snapshot sequence was extended. These stable eigenvalues (with $\mu_r < -0.5$) have been excluded from the above plot due to their lack of dynamical relevance. Hence, figure \ref{fig:M050_spectrum} can be seen as a validation of both forward and adjoint compressible Navier-Stokes solvers, alongside the DMD post-processing tool.
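The conversion from the discrete-time eigenvalues $\lambda_n$ to growth rate and Strouhal number quoted above amounts to the following short computation, with \texttt{dt} the constant snapshot sampling interval:
\begin{verbatim}
import numpy as np

def to_continuous(lam, dt):
    mu_r = np.log(np.abs(lam)) / dt            # growth rate
    mu_i = np.angle(lam) / (2.0 * np.pi * dt)  # Strouhal number
    return mu_r, mu_i
\end{verbatim}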
Figure \ref{fig:M050_modes_and_wavemaker} presents the forward and adjoint modes, and the wave maker corresponding to the momentum components of the most unstable eigenvalue. Since the current equilibrium solution is unstable, the forward mode reveals the laminar shear layer and the reattached laminar boundary layer as the leading amplifiers of this instability. Apart from the amplitude, the locations where the mode switches sign reveal the underlying physical mechanisms which drive the dynamics. In essence, loci of opposite sign represent opposite flow behaviour, such as an increase or decrease of pressure, momentum, density, etc. For reference, the streamlines calculated from the velocity field of the steady flow solution are superimposed on the contour plots in figure \ref{fig:M050_modes_and_wavemaker}. Curiously, the momentum mode shapes highlight the close relationship between the vortices located in the shear layer and at the downstream end of the cavity. If we analyse the streamwise momentum component in detail, we observe that it switches sign in the proximity of the vertical centreline of the vortices located in the shear layer and at the downstream end of the cavity. This phenomenon suggests that both vortices are compressed and expanded in the streamwise direction at the eigenvalue's frequency. Further, as also seen in figure \ref{fig:M050_a}, similar sign changes occur across the laminar shear layer in the vertical direction, revealing its unstable character. Additionally, the vertical momentum component also shows relevant activity along the laminar shear layer. Figure \ref{fig:M050_d} uncovers patches of opposite increase in vertical momentum, where one of the sign changes takes place just between the above-mentioned shear layer and downstream vortices. Hence, the combination of the streamwise and vertical momentum components induces fluctuations in the laminar shear layer which affect both the shear layer and downstream vortices. These two vortices begin an oscillating motion with increasing amplitude, resulting in the downstream vortex absorbing the shear layer vortex. As detailed in previous sections, the extra streamwise momentum resulting from this merging process impinges onto the cavity's trailing edge, which radiates an acoustic wave upstream, initiating the Rossiter mode. From this point onwards, the system can no longer be approximated as linear and begins the last stage of its transition towards the stable limit cycle (figure \ref{fig:steady_to_periodic_transition}).
\begin{figure}
\newlength\spacingfigmodeone
\setlength\spacingfigmodeone{-0.25cm}
\newlength\spacingfigmodetwo
\setlength\spacingfigmodetwo{-0.00cm}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\vspace{\spacingfigmodeone}
\input{./M050_forward_mu}
\label{fig:M050_a}
\end{subfigure}
~\hspace{0.00\textwidth}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\vspace{\spacingfigmodeone}
\input{./M050_adj_mu}
\label{fig:M050_b}
\end{subfigure}
~\hspace{0.00\textwidth}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\vspace{\spacingfigmodeone}
\input{./M050_u_wavemaker}
\label{fig:M050_c}
\end{subfigure}\\
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\vspace{\spacingfigmodeone}
\input{./M050_forward_mv}
\label{fig:M050_d}
\end{subfigure}
~\hspace{0.00\textwidth}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\vspace{\spacingfigmodeone}
\input{./M050_adj_mv}
\label{fig:M050_e}
\end{subfigure}
~\hspace{0.00\textwidth}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\vspace{\spacingfigmodeone}
\input{./M050_v_wavemaker}
\label{fig:M050_f}
\end{subfigure}\vspace{-6.4cm}\\
\begin{subfigure}[b]{0.32\textwidth}
\centering
\input{./M050_fwd_adj_mode1_colourbar}
\end{subfigure}\hspace{0.35\textwidth}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\input{./M050_wavemaker_colourbar}
\end{subfigure}\vspace{6.0cm}
\caption{Streamwise (top) and vertical (bottom) components of the forward $\left( \textbf{(a)}, \textbf{(d)}\right)$ and adjoint $\left( \textbf{(b)}, \textbf{(e)}\right)$ momentum modes associated with the leading eigenvalue for the equilibrium solution at $M=0.5$. Figures \textbf{(c)} and \textbf{(f)} show the structural sensitivity maps of the streamwise and vertical momentum components, calculated as $\big|\Psi^*_{m_u^*}\big| \cdot \big|\Psi_{\rho u}\big|$ and $\big|\Psi^*_{m_v^*}\big| \cdot \big|\Psi_{\rho v}\big|$, respectively. The contour levels of these two plots are normalised with $6\cdot10^{-6}$.}
\label{fig:M050_modes_and_wavemaker}
\end{figure}
\begin{figure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./U_monitor_transition_M050}
\label{fig:U_monitor}
\end{subfigure}
~\hspace{0.02\textwidth}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\subcaption{}
\input{./frequency_evolution_M050}
\label{fig:freq_evolution_M050}
\end{subfigure}\\
\begin{center}
\begin{subfigure}[b]{0.67\textwidth}
\centering
\subcaption{}
\input{./transition_M050_fft}
\label{fig:transition_M050_fft}
\end{subfigure}
\end{center}
\caption{\textbf{(a)} Time signal of the streamwise momentum at $\vec{x}=\left(2.25,1\right)$. \textbf{(b)} Estimated flow-acoustic feedback loop frequency evolution across time. Red dashed and blue dash-dotted horizontal lines indicate the frequencies associated with the unstable mode and the periodic limit cycle, respectively. \textbf{(c)} Power spectrum of the time signal shown in \textbf{(a)}.}
\label{fig:Transition_M050}
\end{figure}
So far, we know the physical mechanisms associated with the unstable eigenvalue, which trigger a transition from a steady state towards a periodic orbit in the present 2D cavity flow. In other words, we only know that our steady flow solution is unstable and how it reacts to that instability. In order to understand the origin of the instability, we recover the adjoint mode of that unstable eigenvalue. The adjoint modes can be interpreted as receptivity maps which indicate how to most efficiently trigger their respective forward mode. Hence, the adjoint mode shown in figures \ref{fig:M050_b} and \ref{fig:M050_e} reveals the receptivity of its forward mode. Similarly to the forward mode, the streamwise and vertical adjoint momentum components (figures \ref{fig:M050_b} and \ref{fig:M050_e}, respectively) present considerable activity. In contrast to the behaviour exhibited by the forward mode, the adjoint mode shows increasing amplitude in the upstream direction, highlighting the vicinity of the leading edge as particularly receptive. This high receptivity, linked to the separated shear layer and the incoming boundary layer, is responsible for the global instability. Any perturbation of the state vector at a location where the corresponding adjoint mode component is non-zero will set off the instability growth. For the current equilibrium solution, the numerics of our compressible Navier-Stokes solver, alongside the numerical precision of the flow solution, suffice to trigger this unstable mode. Following the concept of structural sensitivity (or wave maker), we can now combine the forward and adjoint modes. Such an analysis reveals the area of the flow-field which acts as `the driver of the oscillation' \citep{luchini_adjoint}, highlighting the regions with both high receptivity and response. These spatial maps are also referred to as sensitivity to a localised feedback \citep{luchini_2007}. Figures \ref{fig:M050_c} and \ref{fig:M050_f} show the wave maker regions for the streamwise and vertical momentum components. These structural sensitivity maps expose predominantly both the shear layer and the incoming laminar boundary layer as the origin of the instability. In particular, the areas of greatest intensity are located in the proximity of the cavity's leading edge, where the adjoint mode peaks. In addition, as observed in figure \ref{fig:M050_c}, the laminar boundary layer only contributes to the onset of the instability in the streamwise direction. In fact, the point of highest magnitude is located just above the leading edge, before the incoming laminar boundary layer separates from the wall. Moreover, the high structural sensitivity patches exhibited along the shear layer in figure \ref{fig:M050_f} are also strongly linked to the flow structures located underneath. The two weakest (yet non-negligible) patches are located in the shear layer, precisely above the cores of the downstream and shear layer vortices. Overall, for this distinct geometry, the wave maker region is distributed along the incoming boundary layer and shear layer, peaking in intensity at the cavity's leading edge and then decaying towards the trailing edge. In particular, the activity shown in these two wave maker components vanishes rapidly after the cavity's trailing edge, because the convective-acoustic feedback mechanism does not hold past the trailing edge.
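Given the forward and adjoint momentum modes, the structural sensitivity maps of figures \ref{fig:M050_c} and \ref{fig:M050_f} reduce to a pointwise product of mode magnitudes, as sketched below; the mode arrays are placeholders for the DMD output.
\begin{verbatim}
import numpy as np

def wavemaker(psi_adj, psi_fwd, norm=6e-6):
    # |adjoint mode| * |forward mode|, normalised as in the
    # caption of the modes figure.
    return np.abs(psi_adj) * np.abs(psi_fwd) / norm
\end{verbatim}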
Analogously to the acoustic-convective feedback loop occurring in periodic cavity solutions, any minimal flow imbalance which encounters the trailing edge will reflect a weak acoustic wave upstream. This upstream travelling acoustic wave eventually reaches the proximity of the leading edge of the cavity, setting the unstable mode in motion. Hence, once the unstable mode is active, it will further disturb the flow-field, producing at the following impingement on the trailing edge a more energetic acoustic wave than the preceding one. This feedback loop continues fuelling the flow instability until the linear flow dynamics collapse. Once this occurs, the natural frequency of the flow-acoustic feedback loop progressively evolves from the frequency of the (linear) unstable mode of $St\approx0.246$ to the characteristic frequency of its corresponding limit cycle (Rossiter mode) of $St\approx0.233$. A similar disagreement between the frequencies associated with the unstable eigenmode and the non-linear periodic solution was also observed by \cite{sipp2007global} in an incompressible flow over a square cavity. Figure \ref{fig:U_monitor} shows the streamwise momentum time signal through this transition, captured by a monitor point located at the shear layer. At this location, the unstable forward mode is considerably active, which permits the estimation of the dominant frequency by tracking the local minima (or maxima) of this signal across time. Since there is only one single unstable mode, the frequency should asymptotically approach the frequency of this unstable eigenvalue, which would govern the long time behaviour of the system while the dynamics remain linear. Figure \ref{fig:freq_evolution_M050} shows this frequency estimation as a function of time, where it appears to become stable after approximately 100 time units. The fact that the initial strong frequency oscillations ceased is an indication that all the stable modes have decayed significantly, leaving the unstable mode as the only one active in the system. At this point in time, we readily observe that the approximated frequency has already abandoned the frequency associated with the unstable mode, which confirms that the dynamics have stopped being governed by linear flow mechanisms \citep{devicente2014}. Furthermore, consider that the linear behaviour ceases even before the merging of the shear layer and downstream vortices begins, which occurs approximately after 135 time units. After this, the flow undergoes the last stage of the transition, where the frequency slowly decays to the Rossiter mode's frequency. Contrary to \cite{brs2008}, the transition in this case does not seem to trigger other Rossiter modes (figure \ref{fig:transition_M050_fft}). The origin of this different behaviour is believed to reside in the different base flows employed. Note that here we make use of a steady exact flow solution as the base flow, whereas, in their study, they employed an averaged flow-field. The average flow is not itself a steady solution, so the flow reacts to that artificial flow as if it were a perturbation to an exact solution, triggering other leading Rossiter modes as seen also in subsection \ref{subsec:mom_t_and_s}.
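A minimal sketch of this frequency estimate is given below, assuming a time vector \texttt{t} and monitor signal \texttt{u}; one frequency sample is produced per oscillation period from the spacing of successive local maxima.
\begin{verbatim}
import numpy as np

def track_frequency(t, u):
    # Indices of local maxima of the monitored signal.
    peaks = np.where((u[1:-1] > u[:-2]) & (u[1:-1] > u[2:]))[0] + 1
    t_peaks = t[peaks]
    freq = 1.0 / np.diff(t_peaks)              # one estimate per cycle
    t_mid = 0.5 * (t_peaks[1:] + t_peaks[:-1]) # estimate time stamps
    return t_mid, freq
\end{verbatim}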
\subsubsection{Stability Evolution across Mach Number}
\begin{figure}
\centering
\input{./EGV_vs_M}
\caption{Eigenspectrum of the family of equilibrium solutions across Mach number. The red dashed lines show the rise of the unstable branch at Mach numbers 0.3, 0.35 and 0.4. The blue dash-dotted lines represent the evolution of a decaying branch as the Mach number is increased. The thin black dashed line highlights the branch appearing due to noise in the stable steady states.}
\label{fig:EGV_vs_M}
\end{figure}
In the present section, we evaluate how the changes in flow compressibility and the associated acoustic speed translate into the stability of these equilibrium solutions. In order to carry out this stability analysis across Mach number, we use the eigenspectrum and eigenmodes resulting from our adjoint Navier-Stokes framework and focus the present analysis on the receptivity.
Figure \ref{fig:EGV_vs_M} shows the eigenvalue spectrum for the steady solutions from Mach numbers $0.30$ to $0.65$. For clarity, the eigenspectrum of the stable $M=0.25$ solution has not been included, since all the eigenvalues corresponding to that Mach number lack dynamical importance ($\mu_r \lesssim -20 $). From figure \ref{fig:EGV_vs_M}, notice that the steady solutions at $M=0.3$ and $M=0.35$ are stable, with all their eigenvalues satisfying $\mu_r < 0$. In contrast, all the other steady states are globally unstable, since they have at least one eigenvalue with $\mu_r>0$. Similarly, \cite{brs2008} also observed that their 2D base flow was stable at $M=0.35$ but, as in the present investigation, it became unstable when the Mach number was raised to $M=0.6$ with the rest of the parameters unchanged. Recalling the periodic orbits documented earlier, a stable periodic flow solution was also found at $M=0.35$ when descending in Mach number from $M=0.4$. This shows that both steady and periodic flow solutions are stable at this particular Mach number, and the flow will evolve to one or the other depending on the initial condition chosen. Additionally, this suggests that the transition from steady to periodic solutions does not occur at the same Mach number as the transition in the opposite direction. At the same time, this can be interpreted as the cavity flow being compressible enough to maintain the self-sustained oscillations (when descending in Mach number), but not enough to trigger them (when ascending).
The coexistence of these two stable solutions at the same Mach number indicates bistability, which is characteristic of a subcritical Hopf bifurcation. Figure \ref{fig:bifurcation} shows an illustration of this type of bifurcation with the data points from the families of periodic and steady flow solutions. With ascending Mach number, the equilibrium solutions are stable up to the subcritical bifurcation point, located between Mach numbers 0.35 and 0.40. At this bifurcation, two unstable branches emerge, giving rise to unstable equilibrium and periodic solutions. Hence, rather than following either of the unstable branches past the bifurcation point, the system undergoes a transition towards the corresponding stable limit cycle at the same Mach number. Once the flow reaches this periodic orbit, further variations in Mach number will displace the system along the family of periodic solutions as described earlier in section \ref{sec:periodic_family}. However, if the Mach number is reduced below the lower limit of the bistability range, the periodic orbit ceases to be stable, which leads to a rapid decay towards its corresponding steady state at the same Mach number. Thus, the bistability leads to a hysteresis loop around this bistable range (see figure \ref{fig:bifurcation}), which results in the two different transitions described above, from a steady state to a periodic orbit and vice versa. Unfortunately, determining the exact Mach numbers at which these transitions occur would require a finer sampling of flow solutions.
\begin{figure}
\centering
\input{./bifurcation}
\caption{Illustration of the subcritical Hopf bifurcation of the periodic and steady families of solutions across Mach number. The solid lines represent stable flow states, whereas the dashed lines show unstable flow configurations. The red and black coloured lines indicate the periodic and steady families of solutions, respectively. Also with the same colour coding, the circle symbols show the actual flow solutions computed in this investigation. The hysteresis loop is depicted with the thick grey solid lines, where the arrows show the loop direction. For this illustration, the Mach number range where bistability occurs is represented with the red coloured horizontal axis.}
\label{fig:bifurcation}
\end{figure}
Going back to figure \ref{fig:EGV_vs_M}, in an attempt to shed light on the origin of the instability as a function of Mach number, the leading eigenvalue can be traced back from the unstable to the stable regime, where its eigenmode can be analysed. Note that the eigenvalues with a positive growth rate from the solutions ranging from $M=0.4$ to $M=0.65$ are remarkably aligned in the unstable plane. Intuitively, it might be tempting to relate these unstable eigenvalues to the leading (stable) eigenvalue of the $M=0.3$ solution, which is placed just below. If that were the case, the leading eigenvalue corresponding to the $M=0.35$ equilibrium solution should be placed somewhere in between; but this is not the case. Studying figure \ref{fig:EGV_vs_M} in more detail, we may observe that the branch containing this leading eigenvalue at $M=0.3$ (highlighted by the blue dash-dotted line) has an analogous branch at $M=0.35$, which is further damped and of much higher frequency. In addition, it is worth highlighting that the eigenvalues aligned in a parabolic shape at Mach numbers 0.3 and 0.35 are believed to be associated with the noise present in those simulations \citep{Bagheri:2014}.
\begin{figure}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\input{./M030_adj_umom_red_branch_1}
\label{fig:M030_adj_lead1}
\end{subfigure}
~
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\input{./M030_adj_umom_red_branch_2}
\label{fig:M030_adj_lead4}
\end{subfigure}
~
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\input{./M035_adj_umom_red_branch_1}
\label{fig:M035_adj_lead1}
\end{subfigure}\vspace{-0.25cm}\\
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\input{./M035_adj_umom_red_branch_2}
\label{fig:M035_adj_lead2}
\end{subfigure}
~
\begin{subfigure}[b]{0.32\textwidth}
\centering
\subcaption{}
\input{./M040_adj_umom_red_branch}
\label{fig:M040_lead}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\vspace{-2.5cm}
\centering
\input{./unstable_egv_evolution_colourbar}
\end{subfigure}
\vspace{0.5cm}
\caption{Evolution of the streamwise momentum receptivity of some of the eigenvalues from the unstable branch across Mach number. The first mode is shown in figures \textbf{(a)}, \textbf{(c)} and \textbf{(e)} at Mach numbers 0.30, 0.35 and 0.4, respectively. The fourth mode at $M=0.3$ and the second mode at $M=0.35$ are also shown in \textbf{(b)} and \textbf{(d)}.}
\label{fig:unstable_egv_evolution}
\end{figure}
Figure \ref{fig:EGV_vs_M} also shows a rising eigenvalue branch from $M=0.3$ to $M=0.4$, highlighted with red dashed lines. In contrast to the preceding blue dash-dotted decaying branch, this new branch becomes less stable as the Mach number of the steady solution is increased, moving towards a lower-frequency regime. At $M=0.4$ its leading eigenvalue presents a positive growth rate, sitting in the unstable plane. For reference, this branch is also denoted as the unstable branch, and its eigenvalues are sorted numerically with decreasing characteristic frequency. Figure \ref{fig:unstable_egv_evolution} displays the evolution of some of the eigenvalues from this unstable branch across Mach number from a streamwise momentum receptivity point of view. The first and fourth modes at $M=0.3$ are shown in figures \ref{fig:M030_adj_lead1} and \ref{fig:M030_adj_lead4}, respectively. There, the areas of high receptivity appear to shift continuously from the incoming laminar boundary layer towards the shear layer as the frequency increases along the branch. Additionally, as seen in figure \ref{fig:EGV_vs_M}, the eigenvalues from this branch get closer together as the Mach number is raised. As reflected in figures \ref{fig:M035_adj_lead1} and \ref{fig:M035_adj_lead2}, this phenomenon also occurs in their respective eigenvectors, where the first and second adjoint modes at $M=0.35$ show a very similar receptivity (in both amplitude and shape) in the streamwise momentum component. Hence, if this approaching behaviour were to continue, the first and second modes would eventually merge. This would result in only three eigenvalues forming the unstable branch, which is precisely the case observed at $M=0.4$. Unfortunately, a finer sampling of equilibrium solutions across Mach number would be required to rigorously demonstrate this potential eigenvalue merging, which is believed to have a relevant influence on the onset of the unstable eigenvalue.
Lastly, figure \ref{fig:M040_lead} reveals the adjoint streamwise momentum component of the unstable mode at $M=0.4$. Inspecting the progression of this particular adjoint mode across Mach number, we detect how the dominant activity of this mode moves gradually from the shear layer towards the proximity of the leading edge and incoming boundary layer, where it halts. This phenomenon agrees with the adjoint momentum components of the unstable mode at $M=0.5$ as shown earlier in figure \ref{fig:M050_modes_and_wavemaker}, which exhibit a remarkably similar receptivity to the corresponding mode at Mach number 0.4. Perhaps the only noticeable difference between figures \ref{fig:M050_b} and \ref{fig:M040_lead} resides in the content of the free-stream, where a slightly more distinct downstream reflected wave appears with increasing flow compressibility. Note that an analogous Mach number effect was shown in the Rossiter modes taking place in time-periodic flow solutions. In the same manner, once the unstable mode of the steady solution is active, a higher flow compressibility results in a stronger upstream-travelling acoustic wave reflected off the trailing edge.
This phenomenon reinforces the acoustic-convective feedback loop, yielding a larger growth rate $\mu_r$. In addition, the close relationship between the periodic orbits and the stability of the steady solutions is manifested in the evolution of the frequency of this unstable mode. Similarly to the periodic flow trajectories, the non-dimensional frequency $\mu_i$ of the unstable mode decreases as the Mach number rises (see table \ref{table:freq_vs_M}). The reason behind this behaviour is simply the reduction in the propagation speed of sound relative to the convective flow velocity, which increases the lag of the acoustic-convective feedback loop. Relating these results to previous studies, the decreasing trend in the unstable mode's frequency as a function of Mach number was also observed by \cite{yamouni2013}. Analogously to our findings, they observed the appearance of new unstable modes as the Mach number was raised (five extra modes at $M=0.5$ and seven at $M=0.9$). In their investigation, the constant presence of unstable modes of purely convective origin is prone to exert a frequency and growth rate modulation over the exclusively compressible unstable modes, and vice versa. In fact, the proportional increase in growth rate as a function of Mach number was reported as surprising when comparing their results with the literature. Hence, in the present work, we have illuminated the physical mechanisms causing the increase in growth rate and the appearance of additional unstable modes with Mach number, from an exclusively compressible receptivity point of view.
\begin{table}
\centering
\begin{tabular}{cccc}
\hline
\hspace{0.5cm}$M_\infty$\hspace{0.5cm} & $St_{periodic}$ & $\mu_i$ & $\mu_r$ \\ \hline \hline
0.30 & $-$ & \hspace{0.5cm}0.824514\hspace{0.5cm} & -0.176693\\
0.35 & \hspace{0.5cm}0.24871\hspace{0.5cm} & 0.412341 & -0.053106 \\
0.40 & 0.24472 & 0.256365 & \hspace{0.5cm}0.019984\hspace{0.5cm} \\
0.45 & 0.23923 & 0.251265 & 0.031612 \\
0.50 & 0.23355 & 0.246062 & 0.042644 \\
0.55 & 0.22822 & 0.241150 & 0.048355 \\
0.60 & 0.22319 & 0.236504 & 0.052891 \\
0.65 & 0.21792 & 0.231924 & 0.055347 \\
\hline
\end{tabular}
\caption{Comparison across Mach number of the non-dimensional frequency associated with the periodic solutions and the first eigenvalue of the unstable branch of the corresponding steady solution.}
\label{table:freq_vs_M}
\end{table}
\section{Conclusions}
A newly developed framework to compute steady and periodic compressible flow solutions has been successfully applied to a two-dimensional open cavity flow. With this method, we have computed a family of compressible periodic flow solutions across Mach number for the first time. We have also calculated a family of equilibrium solutions using the same open cavity configuration. The Reynolds number based on the cavity depth was carefully chosen to avoid any convective instabilities in the flow-field, restricting the system to be driven only by purely compressible self-sustained oscillations. With this setup, we were able to show how the two families of periodic and steady solutions collapse in the quasi-incompressible regime ($M\approx0.30$), which demonstrates that flow compressibility has a destabilising effect in cavity flows.
To shed light on this phenomenon, a thorough analysis of the evolution of the compressible events across Mach number has been carried out. Particular emphasis was put on the flow-acoustic shear layer interaction, which dominates the dynamics. This interaction was shown to have a major effect on the overall sound radiation.
Furthermore, these periodic flow solutions are stable even under large disturbances, unlike the steady solutions, which transition towards the corresponding limit cycle.
To analyse the bifurcation point, a detailed stability analysis of the equilibrium solutions was also carried out. Dynamic mode decomposition of an adjoint simulation snapshot sequence was used to recover the adjoint global modes and eigenspectrum. The forward global modes were also similarly found for the steady solution at $M=0.5$, using DMD over a non-linear forward dataset, assuming linear flow behaviour in the initial stages of time marching.
The forward approximated modes revealed the laminar shear layer as the leading amplifier of the instability. The respective adjoint modes highlighted the leading edge and incoming boundary layer as the regions with the highest receptivity.
Additionally, the evolution of the frequency corresponding to the unstable mode was tracked throughout the steady-to-periodic transition at $M=0.5$. This showed that the system quickly leaves the linear regime, even before the shear layer and downstream vortices begin the merging process. The wave maker region was computed showing activity in the boundary layer and shear layer, with maximum intensity at the leading edge, only to decay towards the trailing edge.
The evolution of the eigenvalue instability was analysed across Mach number. This analysis showed that the transitions from steady to periodic flow and vice versa do not occur at the same Mach number, indicating a subcritical Hopf bifurcation of the steady and periodic families. This shows that locally stable periodic and steady solutions may coexist over a short Mach number range.
The eigenvalue branch which eventually becomes unstable as the Mach number is raised was also identified. The lack of both forward modes and a finer sampling of solutions across Mach number permits us only to hypothesise on the onset of the unstable character, and analyse such modes from a receptivity point of view. Hence, a forward stability analysis and a finer sampling of steady flow solutions (in particular from $M=0.35$ to $M=0.40$) are indicated for future work. A dataset containing both families of solutions is available online \citep{cavity_sol_dataset}.
\section{Introduction}
\noindent
The symmetric group $S_n$ may be viewed as the subgroup of the general linear group $GL_n(\mathbb{C})$ consisting of permutation matrices. We may therefore consider the restriction to $S_n$ of irreducible $GL_n(\mathbb{C})$ representations. Let $S^\lambda$ denote the irreducible representation of $\mathbb{C}S_n$ indexed by the partition $\lambda$ (so necessarily $n$ is the size of $\lambda$). Let $\mathbb{S}^\lambda$ denote the Schur functor associated to a partition $\lambda$, so that $\mathbb{S}^\lambda(\mathbb{C}^n)$ is an irreducible representation of $GL_n(\mathbb{C})$, provided that $l(\lambda) \leq n$. Let us write $[M]$ for the image of a module in the Grothendieck ring of $\mathbb{C}S_n$-modules. Thus, the restriction multiplicities $a_{\mu}^{\lambda}$ are defined via
\[
[\mbox{Res}_{S_n}^{GL_n}(\mathbb{S}^\lambda(\mathbb{C}^n))] = \sum_{\mu \vdash n} a_{\mu}^{\lambda}[S^\mu].
\]
Although a positive combinatorial formula for the restriction multiplicities is not currently known, there is an expression using plethysm of symmetric functions (see \cite{macdonald} Chapter 1 Section 8 for background about plethysm). Let us write $s_\lambda$ for the Schur functions (indexed by partitions $\lambda$). The complete symmetric functions, $h_n$, are the Schur functions indexed by the one-part partitions $(n)$. We recall the Schur functions $s_\lambda$ form an orthonormal basis of the ring of symmetric functions with respect to the usual inner product, denoted $\langle -,- \rangle$ (see \cite{macdonald} Chapter 1 Section 4). Let $f[g]$ denote the plethysm of a symmetric function $f$ with another symmetric function $g$. Then,
\[
a_\mu^\lambda = \langle s_\lambda, s_\mu[1 + h_1 + h_2 + \cdots] \rangle,
\]
see \cite{gay} or Exercise 7.74 of \cite{stanley}. We will need to consider the Lyndon symmetric function,
\[
L_n = \frac{1}{n} \sum_{d\mid n}\mu(d)p_d^{n/d},
\]
where $\mu(d)$ is the M\"{o}bius function and $p_d$ is the $d$-th power-sum symmetric function. It is important for us that $L_n$ is the $GL(V)$ character of the degree $n$ components of the free Lie algebra on $V$ (see the first proof of Theorem 8.1 of \cite{reutenauer}, which proves this to deduce a related result). For convenience we define the total Lyndon symmetric function $L = L_1 + L_2 + \cdots$; this is the character of the (whole) free Lie algebra on $V$.
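Since $L_n$ is a finite M\"{o}bius sum, its expansion in the power-sum generators is easy to compute mechanically; the following sketch (Python with sympy, where the symbols \texttt{p1, p2, \ldots} stand for the $p_d$) is one minimal way to do so. Converting from the power-sum basis to the Schur basis, as the inner products below require, needs a symmetric-function library and is not shown.
\begin{verbatim}
from sympy import symbols, Rational, divisors, factorint

def mobius(n):
    # Moebius function via the prime factorization of n
    f = factorint(n)
    if any(e > 1 for e in f.values()):
        return 0
    return (-1) ** len(f)

def lyndon(n):
    # L_n = (1/n) * sum_{d | n} mu(d) * p_d^(n/d)
    p = {d: symbols('p%d' % d) for d in divisors(n)}
    return Rational(1, n) * sum(mobius(d) * p[d] ** (n // d)
                                for d in divisors(n))

print(lyndon(6))  # p1**6/6 - p2**3/6 - p3**2/6 + p6/6
\end{verbatim}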
\newline \newline \noindent
Instead of asking for the restriction coefficients $a_\mu^\lambda$, we may ask the inverse question: how can one express $[S^\mu]$ in terms of $[\mbox{Res}_{S_n}^{GL_n}(\mathbb{S}^\lambda(\mathbb{C}^n))]$? This question was recently answered by Assaf and Speyer in \cite{AS}. For a partition $\mu = (\mu_1, \mu_2, \ldots)$ of any size, let $\mu[n]$ denote $(n-|\mu|, \mu_1, \mu_2, \ldots)$ (a partition of $n$ provided that $n \geq |\mu|+ \mu_1$). Assaf and Speyer showed
\[
[S^{\mu[n]}] = \sum_{\lambda} b_{\lambda}^\mu [\mathbb{S}^\lambda(\mathbb{C}^n)],
\]
where
\[
b_\lambda^\mu = (-1)^{|\mu| - |\lambda|}\sum_{\mu / \nu \mbox{ vert. strip}}
\langle s_{\nu^\prime}, s_{\lambda^\prime}[L] \rangle.
\]
The notation $\mu / \nu \mbox{ vert. strip}$ means that the diagram of $\mu$ may be obtained from the diagram of $\nu$ by adding boxes, no two in the same row, and primes indicate dual partitions.
\newline \newline \noindent
It is more convenient to work with
\[
M_n^{\mu} = \mbox{Ind}_{S_{|\mu|} \times S_{n - |\mu|}}^{S_n} (S^\mu \boxtimes \mathbf{1}),
\]
which decompose into the irreducible $S^{\nu[n]}$ via the Pieri rule:
\[
[M_n^{\mu}] = \sum_{\mu / \nu \mbox{ horiz. strip}} [S^{\nu[n]}].
\]
Here, $\mu / \nu \mbox{ horiz. strip}$ means that the diagram of $\mu$ may be obtained from the diagram of $\nu$ by adding boxes, no two in the same column. The formula for $b_\lambda^\mu$ is equivalent to the following statement (see Theorem 3 and Proposition 5 of \cite{AS}):
\[
[M_n^{\mu}] = \sum_{\lambda} (-1)^{|\mu| - |\lambda|} \langle s_{\mu^\prime}, s_{\lambda^\prime}[L] \rangle [\mathbb{S}^\lambda(\mathbb{C}^n)].
\]
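As an aside, the horizontal strips governing the Pieri expansion above are easy to enumerate: $\mu / \nu$ is a horizontal strip exactly when $\nu$ interlaces $\mu$, that is, $\mu_{i+1} \leq \nu_i \leq \mu_i$ for all $i$. A small illustrative sketch (Python; the function name is ours):
\begin{verbatim}
from itertools import product

def horizontal_strips(mu):
    # All partitions nu with mu/nu a horizontal strip; the
    # interlacing bounds automatically force nu to be a partition.
    bounds = [range(mu[i + 1] if i + 1 < len(mu) else 0, mu[i] + 1)
              for i in range(len(mu))]
    return [tuple(x for x in nu if x) for nu in product(*bounds)]

print(horizontal_strips((2, 1)))
# [(1,), (1, 1), (2,), (2, 1)], matching the Pieri rule for M_n^{(2,1)}
\end{verbatim}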
The purpose of this note is to give a categorification of this answer, namely a (minimal) resolution of $M_n^\mu$ by restrictions of $\mathbb{S}^\lambda(\mathbb{C}^n)$; this is accomplished in Theorem \ref{final_resolution}. Along the way, this explains the presence of the character of the free Lie algebra in the formula, and constructs projective resolutions in the category of $\mathcal{F}$-modules (over $\mathbb{Q}$) introduced by Wiltshire-Gordon in \cite{JWG}.
\section*{Acknowledgements}
\noindent
The author would like to thank Gurbir Dhillon for helpful comments on this paper.
\section{The Resolution}
\noindent
We begin by calculating the cohomology of the free Lie algebra on a fixed vector space. Although this result is very well known, it is instrumental in what follows, so we include it for completeness.
\newline \newline \noindent
Let $L$ be the free Lie algebra on $V = \mathbb{C}^m$. Then $\mathfrak{g} = L^{\oplus n} = L \otimes \mathbb{C}^n$ is again a Lie algebra. It has an action of $S_n$ by permuting the $L$ summands, coming from an action of $GL_n(\mathbb{C})$ that does not respect the Lie algebra structure. We consider the Lie algebra cohomology of $\mathfrak{g}$ (with coefficients in the trivial module).
\newline \newline \noindent
Recall that the Lie algebra cohomology is $\mbox{Ext}_{U(\mathfrak{g})}^{i}(\mathbb{C}, \mathbb{C})$. We first consider the case $n=1$, so $\mathfrak{g} = L$. Now $U(L)$ is just the tensor algebra of $V$, which we denote $T(V)$. We therefore have a (graded) free resolution
\[
0 \xrightarrow{} T(V) \otimes V \xrightarrow{d_1} T(V) \xrightarrow{d_0} \mathbb{C} \xrightarrow{} 0.
\]
Here, $d_1(x \otimes v) = xv$ (product in $T(V)$), while $d_0$ simply projects onto the degree zero component. Crucially, $GL(V)$ acts by automorphisms on $L$ (which was the free Lie algebra on $V$), and the above complex is equivariant for this action. The Lie algebra cohomology is given by the cohomology of the complex
\[
0 \leftarrow \hom_{T(V)}(T(V) \otimes V, \mathbb{C}) \xleftarrow{d_1^*} \hom_{T(V)}(T(V), \mathbb{C}) \leftarrow 0.
\]
We easily see the differential $d_1^*$ is zero because any element of $\hom_{T(V)}(T(V), \mathbb{C})$ is zero on a positive degree element of $T(V)$, but the image of $d_1$ is contained in degrees greater than or equal to $1$. We thus conclude that $H^0(L, \mathbb{C}) = \mathbb{C}$, and $H^1(L, \mathbb{C}) = V^*$, with all higher cohomology vanishing. Next, we obtain the Lie algebra cohomology of $\mathfrak{g} = L^{\oplus n}$.
\begin{proposition} \label{coh_prop}
For $0 \leq i \leq n$:
\[
H^i(\mathfrak{g}, \mathbb{C}) = \mbox{\emph{Ind}}_{S_i \times S_{n-i}}^{S_n}((V^*)^{\otimes i} \otimes \varepsilon_i \boxtimes \mathbb{C}^{\otimes (n-i)}),
\]
where $\varepsilon_i$ is the sign representation of $S_i$, and $\mathbb{C}^{\otimes (n-i)}$ is the trivial representation of $S_{n-i}$. Further, for $i > n$, the cohomology $H^i(\mathfrak{g}, \mathbb{C})$ vanishes.
\end{proposition}
\begin{proof}
We apply the K\"{u}nneth theorem in an $S_n$-equivariant way. The sign representation $\varepsilon_i$ arises because of the Koszul sign rule (cohomology is only graded commutative).
\end{proof}
\noindent
Now let us compute the Lie algebra cohomology of $\mathfrak{g}$ using the Chevalley-Eilenberg complex \cite{weibel}. Recall that the $i$-th cochain group is
\[
\hom_{\mathbb{C}}(\bigwedge\nolimits^i (\mathfrak{g}), \mathbb{C})
\]
and the differential $d$ is given by the formula
\[
d(f)(x_1, \ldots, x_{k+1}) = \sum_{i<j} (-1)^{j-i}f([x_i, x_j], x_1, \ldots, \hat{x}_i, \ldots, \hat{x}_j, \ldots , x_{k+1}),
\]
where hats indicate omitted arguments.
This differential is $GL(V) \times S_n$-equivariant and homogeneous in terms of the grading on $\mathfrak{g}$ (the grading corresponds to the action of $\mathbb{C}^\times = Z(GL(V))$).
\newline \newline \noindent
Note that $\mathfrak{g}$ is graded in strictly positive degrees. As an algebraic representation of $GL(V)$, the $i$-th cochain group,
\[
\hom_{\mathbb{C}}(\bigwedge\nolimits^i (\mathfrak{g}), \mathbb{C})
\]
is contained in degrees $\leq -i$. This means that if we are interested only in the degree $-i$ component of the cohomology, we may truncate the Chevalley-Eilenberg complex after $i$ steps. Thus, if we write a subscript $-i$ to indicate the degree $-i$ component of a $GL(V)$ representation, we obtain the following.
\begin{proposition} \label{res_prop}
The complex (with differential inherited from the Chevalley-Eilenberg complex)
\[
0 \leftarrow\hom_{\mathbb{C}}(\bigwedge\nolimits^i (\mathfrak{g}), \mathbb{C})_{-i} \leftarrow \hom_{\mathbb{C}}(\bigwedge\nolimits^{i-1} (\mathfrak{g}), \mathbb{C})_{-i} \leftarrow \cdots \leftarrow \hom_{\mathbb{C}}(\bigwedge\nolimits^1 (\mathfrak{g}), \mathbb{C})_{-i} \leftarrow \hom_{\mathbb{C}}(\bigwedge\nolimits^0 (\mathfrak{g}), \mathbb{C})_{-i} \leftarrow 0
\]
has cohomology $H^i(\mathfrak{g}, \mathbb{C})$ on the far left, and zero elsewhere.
\end{proposition}
\section{Resolving the $\mathbb{C}S_n$-modules $M_n^\mu$} \label{sec3}
\noindent
Let us take the multiplicity space of the $GL(V)$-irreducible $\mathbb{S}^{\mu^\prime}(V^*)$.
\begin{proposition} \label{next_prop}
The $\mathbb{S}^{\mu^\prime}(V^*)$ multiplicity space in the cohomology $H^i(\mathfrak{g}, \mathbb{C})$, for $i = |\mu|$, is
\[
M_n^{\mu} = \mbox{\emph{Ind}}_{S_i \times S_{n-i}}^{S_n}({S}^\mu \boxtimes \mathbf{1})
\]
\end{proposition}
\begin{proof}
We apply Schur-Weyl duality to Proposition \ref{coh_prop}, noting that $S^\lambda \otimes \varepsilon_i = S^{\lambda^\prime}$:
\[
H^i(\mathfrak{g}, \mathbb{C}) = \mbox{Ind}_{S_i \times S_{n-i}}^{S_n}((V^*)^{\otimes i} \otimes \varepsilon_i \boxtimes \mathbf{1}) = \mbox{Ind}_{S_i \times S_{n-i}}^{S_n}(\bigoplus_{\lambda \vdash i}\mathbb{S}^{\lambda}(V^*)\otimes S^{\lambda^\prime} \boxtimes \mathbf{1}).
\]
Hence, the $\mathbb{S}^{\mu^\prime}(V^*)$ multiplicity space is $\mbox{Ind}_{S_i \times S_{n-i}}^{S_n}({S}^\mu \boxtimes \mathbf{1})$.
\end{proof}
\noindent
Because the complex we constructed in Proposition \ref{res_prop} is $GL(V)$ equivariant, taking cohomology commutes with taking the $\mathbb{S}^{\mu^\prime}(V^*)$ multiplicity space. We immediately obtain the following.
\begin{theorem} \label{final_resolution}
Consider the complex of $S_n$ representations
\[
\hom_{GL(V)}\left(\mathbb{S}^{\mu^\prime}(V^*), \hom_{\mathbb{C}}(\bigwedge\nolimits^i (\mathfrak{g}), \mathbb{C})\right)
\]
for $|\mu| \geq i \geq 0$ with maps induced by the differential of the Chevalley-Eilenberg complex. This is a resolution of $M_n^\mu$ by representations restricted from $GL_n(\mathbb{C})$.
\end{theorem}
\begin{proof}
This is immediate from Proposition \ref{next_prop} and Proposition \ref{res_prop}.
\end{proof}
\noindent
Should we wish to resolve the irreducible $S^\mu$, rather than $M_n^{\mu}$, we simply take $n = |\mu|$ so that $M_n^\mu = S^\mu$.
\newline \newline \noindent
We now take the Euler characteristic of our complex, viewed as an element of the Grothendieck ring of $\mathbb{C}S_n$-modules tensored with the Grothendieck ring of $GL(V)$-modules; we view the latter as the ring of symmetric functions. In the language of symmetric functions, the Schur function $s_\lambda$ corresponds to the irreducible representation $\mathbb{S}^\lambda(V)$ (strictly speaking, we must quotient out $s_\lambda$ for $\lambda$ with more parts than $m = \dim(V)$, but this will never be an issue). We express the cohomology groups in terms of symmetric functions; as in the proof of Proposition \ref{next_prop}, Schur-Weyl duality gives
\[
H^i(\mathfrak{g}, \mathbb{C}) = \mbox{Ind}_{S_i \times S_{n-i}}^{S_n}(\bigoplus_{\lambda \vdash i}\mathbb{S}^{\lambda}(V^*)\otimes S^{\lambda^\prime} \boxtimes \mathbf{1}).
\]
Letting $\lambda = \mu^\prime$ and passing to Grothendieck rings, this becomes $\sum_{\mu \vdash i} s_{\mu^\prime}(x^{-1}) [M_n^\mu]$, where a Schur function indicates a representation of $GL(V)$ (the inverted variables account for the dualised space $V^*$).
Calculating the Euler characteristic directly from the cochain groups, we consider the $i$-th exterior power of $\mathfrak{g} = L \otimes \mathbb{C}^n$,
\begin{equation} \label{chain_form}
\bigwedge\nolimits^i (\mathfrak{g}) = \bigoplus_{\lambda \vdash i} \mathbb{S}^{\lambda^\prime}(L) \otimes \mathbb{S}^\lambda(\mathbb{C}^n)
\end{equation}
which gives $\sum_{\lambda \vdash i} [S^\lambda(\mathbb{C}^n)] s_{\lambda^\prime}[L](x)$. The actual chain groups are the duals of these exterior powers, so we replace the symmetric function variables $x$ with their inverses $x^{-1}$. When we introduce a factor of $(-1)^{|\lambda|-|\mu|}$ from the signs in the Euler characteristic, we obtain
\[
\sum_{\lambda} (-1)^{|\lambda|-|\mu|} [\mathbb{S}^\lambda(\mathbb{C}^n)] s_{\lambda^\prime}[L](x^{-1}).
\]
Thus the coefficient of $[\mathbb{S}^\lambda(\mathbb{C}^n)]$ in $[M_{n}^{\mu}]$ is the coefficient of $s_{\mu^\prime}(x^{-1})$ in $(-1)^{|\mu|-|\lambda|}s_{\lambda^\prime}[L](x^{-1})$, which gives us
\[
[M_n^{\mu}] = \sum_{\lambda} (-1)^{|\mu|-|\lambda|} [\mathbb{S}^\lambda(\mathbb{C}^n)] \langle s_{\lambda^\prime}[L], s_{\mu^\prime}\rangle .
\]
This provides an alternative proof of the formula from \cite{AS} for expressing the irreducible representation $S^{\mu[n]}$ of $S_n$ in terms of restrictions $\mbox{Res}_{S_n}^{GL_n(\mathbb{C})}(\mathbb{S}^{\lambda}(\mathbb{C}^{n}))$. This construction addresses a remark of Assaf and Speyer by explaining the presence of the character of the free Lie algebra (namely, $L$) in the formula.
\section{Application to $\mathcal{F}$-modules}
\noindent
Let $\mathcal{F}$ denote the category of finite sets. An $\mathcal{F}$-module is a functor from $\mathcal{F}$ to vector spaces over a fixed field. These were introduced in \cite{JWG}, and their homological algebra was studied over $\mathbb{Q}$.
An $\mathcal{F}$-module consists of an $S_n$-module for each $n$ together with suitably compatible maps between them. (This is because the image of an $n$-element set carries an action of $Aut(\{1,2,\ldots,n\}) = S_n$.) When $\mu$ is a partition different from $(1^k)$ (i.e. not a single column), $M_n^\mu$ (considered for fixed $\mu$ but varying $n$) defines an irreducible $\mathcal{F}$-module, by demanding that an $n$-element set in $\mathcal{F}$ map to $M_n^\mu$ (see Theorem 5.5 of \cite{JWG}). Furthermore, in this category, objects obtained by restricting $\mathbb{S}^{\lambda}(\mathbb{Q}^n)$ to $S_n$ are projective (see Definition 4.8 and Proposition 4.12 of \cite{JWG}). Our resolution (provided we replace all instances of $\mathbb{C}$ with $\mathbb{Q}$) therefore gives a projective resolution of these simple $\mathcal{F}$-modules $M_n^\mu$. This resolution is in fact minimal (in the sense that each step in the projective resolution is as small as possible). This follows from the following two facts. Firstly, the $r$-th term in the resolution of $M_n^\mu$ is a sum of $\mbox{Res}_{S_n}^{GL_n(\mathbb{Q})}(\mathbb{S}^{\lambda}(\mathbb{Q}^n))$ with $|\lambda| = |\mu|-r$, which is a consequence of Equation \ref{chain_form}. In particular, such a module with fixed $\lambda$ can only appear in one step of the resolution. Secondly, a theorem of Littlewood (Theorem XI of \cite{lw}) states that the restriction multiplicity $a_{\mu}^\lambda$ is equal to $\delta_{\mu, \lambda}$ if $|\mu| \geq |\lambda|$. Thus, the $[\mbox{Res}_{S_n}^{GL_n(\mathbb{Q})}(\mathbb{S}^{\lambda}(\mathbb{Q}^n))]$ are linearly independent elements of the Grothendieck ring of $S_n$-modules, provided $n$ is sufficiently large. Furthermore, the $[\mbox{Res}_{S_n}^{GL_n(\mathbb{Q})}(\mathbb{S}^{\lambda}(\mathbb{Q}^n))]$ should only occur in the resolution in order of decreasing $|\lambda|$ (as in our resolution). Together with Observation 4.25 of \cite{JWG}, which provides a projective resolution of certain $\mathcal{F}$-modules $D_k$ (which can be thought of as substitutes for $M_n^\mu$ when $\mu = (1^k)$), we obtain minimal projective resolutions of all finitely-generated $\mathcal{F}$-modules over $\mathbb{Q}$.
\bibliographystyle{alpha}
\section{Introduction} \vspace{-\parskip}
Bulges of early-type spirals
and elliptical galaxies comprise primarily old low-mass stars,
which account for more than half of the total stellar mass in
the local Universe (Fukugita, Hogan, \& Peebles 1998).
These stars collectively generate a long-lasting feedback
via mass injection from evolved stars
(mainly red giant branch stars and planetary nebulae)
in the form of
stellar winds and energy input from Type Ia SNe.
Because of the SN heating, the ISM should be mostly in
X-ray-emitting plasma inside galactic bulges,
where little cool gas is present (e.g., \citealt{MB1971, Sage07}).
Observations have shown that the X-ray-inferred gas mass and energy
are far less than these empirical predictions
(e.g., \citealt{Sato99, David06}).
In other words, the bulk of stellar feedback expected
is not observed \citep{Wang07}.
This ``missing'' stellar feedback problem becomes particularly acute
in so-called low $L_X/L_B$ (i.e., the ratio of X-ray luminosity to blue
band luminosity) bulge-dominated galaxies
(typically Sa spirals, S0, and low mass ellipticals).
After removing the contribution from point sources
in those relatively deep {\sl Chandra} observations,
the remaining ``diffuse'' X-ray component generally shows a soft spectrum,
indicating a thermal origin \citep{Irwin02,OEPT03},
and its luminosity is only a few percent of the expected
Type Ia SNe energy input \citep{Li07a, Li07b}. The inferred total mass of
the X-ray-emitting gas falls far short of what is deduced from the
stellar mass loss over the galaxy's lifetime \citep{David06}.
The presence of a bulge-wide outflow may solve the
``missing'' stellar feedback problem (e.g. \citealt{Tang08}).
The 1D solution of an SN-heated bulge outflow, however, has problems.
The physical state of the gas outflow mainly depends on the mass and
energy input rates as well as the gravitational field.
Within those low $L_X/L_B$ galactic bulges of typical mass and
energy input rates, a hot bulge wind\footnote{Hereafter
in our paper we use the term {\sl wind} specifically for
a supersonic outflow and {\sl outflow} for an outflow
in either supersonic or subsonic state.}
should be present theoretically (e.g., \citealt{MB1971,Ciot91}).
However, observations of the diffuse hot ISM are apparently
at odds with the theoretical wind models in nearly all aspects.
Firstly, the predicted X-ray luminosity in a bulge wind scenario is
a few orders of magnitude smaller than the observed (e.g., \citealt{Tang08}).
Secondly, the expected wind temperature is about $\sim$1\,keV or higher,
while the observation-inferred gas temperatures are substantially
lower (e.g., \citealt{David06,Li07a}).
Thirdly, the estimated X-ray surface brightness profile of the wind should be
steeper than that of starlight \citep{Ciot91}, but the observed profiles of
diffuse emission distributions are fairly extended in most low
$L_X/L_B$ bulges or ellipticals
(e.g. fig.~7 in \citealt{Sarazin01}; fig.~5 in \citealt{Li07b}).
Furthermore, the predicted mean iron abundance of the diffuse hot gas
is 3--7 times solar because of the Type Ia SN enrichment
(e.g., \citealt{Ciot91,Sato99}), whereas the observed spectra usually
indicate near- or sub-solar iron abundance.
Some, if not all, of the listed discrepancies may arise from various
oversimplifications of the existing 1D models
(e.g., \citealt{MB1971,WC1983,LM1987,Ciot91}).
In such models the mechanical energy input of SNe is always treated
as pure thermal energy smoothly injected into the ISM.
In reality, however, SNe, sporadic in both time and space,
should naturally produce inhomogeneity in the ISM.
The density and temperature inhomogeneity may significantly affect
the X-ray spectrum and luminosity,
which are proportional to the density square.
Explosive energy injection in a hot tenuous medium can be
transported away in form of sound wave, so the SN heating
is not local (e.g., \citealt{Tang05}).
Furthermore, whether or not the SN ejecta of each individual
SNR can be well mixed with the surrounding
material is crucial to address
the apparent low abundance puzzle of the hot gas.
These effects need to be quantified in order to
correctly interpret the existing observations.
In this work, we present a pilot study to explore
the properties of hot gas in galactic bulges by
conducting 3D hydrodynamic simulations.
In these simulations, SNe are randomly generated that statistically
and spatially follow the stellar light distribution.
Based on the mean temperature and density of the surrounding medium,
we adaptively determine the appropriate sizes of individual SNRs
and generate their density, temperature, and velocity profiles
from a library of 1D simulated SNR templates.
We then plant such structured SNR seeds into the 3D simulation grid and
let them evolve. We terminate the simulation when it reaches
a statistically steady state.
The 3D simulations not only provide us with the
dynamical structures of SNRs, but also enable us to trace the
thermal and chemical states of the bulge wind material.
The organization of the paper is as follows.
In \S2 we describe the main physical ingredients of
the bulge wind model and the numerical methods.
The results are presented in \S3 and discussed in \S4.
We summarize our results and conclusions in \S 5.
\section{ Model and Method} \vspace{-\parskip}
\subsection{Model Basics} \vspace{-\parskip}
We model hot gas inside a galactic bulge that originates
from continuous stellar injection
in the form of stellar winds, and is heated by sporadic Type Ia SNe.
The dynamics of the hot gas is described by the following equations:
\begin{eqnarray}
&\ &\parder{\rho}{t} + \nabla\cdot(\rho\mathbf{v}) = \dot{\rho}_*(r)+
\dot{\rho}_{\scriptscriptstyle SN}(\mathbf{r},t), \\
&\ &\parder{\rho\mathbf{v}}{t} + \nabla\cdot(\rho\mathbf{v}\mathbf{v})+
\nabla P = -\rho\nabla\Phi, \\
&\ &\parder{\rho E}{t} + \nabla\cdot[(\rho E + P)\mathbf{v}]=
-\rho\mathbf{v}\cdot\nabla\Phi + S_{\scriptscriptstyle SN}(\mathbf{r},t) + S_{*}(r)
-n_t n_e \Lambda(T), \\
&\ & P = n k T
\end{eqnarray}
where $\rho$, $\,\mathbf{v}$, $\,P$, $T$, and $E$ denote density, velocity vector,
pressure, temperature, and total specific energy of the hot gas;
and $\Phi$ is the gravitational potential field;
$n_t$ and $n_e$ are the number density of ions and electrons;
$n=n_t + n_e$ is the total number density; $\Lambda(T)$ is the normalized
cooling function taken from \citet{Sutherland93}, assuming an
optically thin plasma with solar abundance;
$\dot{\rho}_*$ and $S_*$ denote the mass and energy
input from evolved stars.
The mass and energy input from individual SNe,
$\dot{\rho}_{\scriptscriptstyle SN}(\mathbf{r},t)$ and $S_{\scriptscriptstyle SN}(\mathbf{r},t)$,
are explicitly expressed as a function of position and time.
We adopt parameters appropriate to the bulge of the Milky Way.
The stellar mass distribution of the bulge follows the
potential-density pair of the Hernquist profile \citep{Hernquist1990}:
\begin{equation}\label{henprof}
\Phi_{\scriptscriptstyle bulge}(r) = -\frac{GM_{\scriptscriptstyle bulge}}{r+r_s}, \ \
\rho_{\scriptscriptstyle bulge} (r)=\frac{M_{\scriptscriptstyle bulge}}{2\pi}\frac{r_s}{r}\frac{1}{(r+r_s)^3},
\end{equation}
where $r_s$ is the scale radius and $M_{\scriptscriptstyle bulge}$ is the total mass of the bulge.
Here we set the $M_{\scriptscriptstyle bulge}$ and $r_s$ to be $2.4\times 10^{10}\,{\rm M}_\odot$
and 0.42\,kpc (e.g., \citealt{Kent1992, Zhao1994, Blum1995, Wolfire1995, LepineL00}).
Other components of our Galaxy, such as the disk and dark matter halo,
have only little effects on the gas dynamics within the Galactic bulge
and are thus ignored in our simulations.
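For concreteness, the sketch below (Python; the unit conventions are ours) evaluates the Hernquist pair of equation (\ref{henprof}) for the adopted bulge parameters, together with the analytic enclosed mass $M(<\!r)=M_{\scriptscriptstyle bulge}\,r^2/(r+r_s)^2$, which is convenient for distributing the stellar inputs over a grid.
\begin{verbatim}
import numpy as np

G   = 4.30091e-6   # gravitational constant [kpc (km/s)^2 / Msun]
M_b = 2.4e10       # bulge mass [Msun]
r_s = 0.42         # Hernquist scale radius [kpc]

def phi(r):        # potential [(km/s)^2]
    return -G * M_b / (r + r_s)

def rho_star(r):   # stellar density [Msun / kpc^3]
    return M_b / (2.0 * np.pi) * r_s / (r * (r + r_s) ** 3)

def m_enc(r):      # analytic enclosed mass [Msun]
    return M_b * r ** 2 / (r + r_s) ** 2
\end{verbatim}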
The stellar mass loss from evolved stars,
$\dot{\rho}_*(r)$, follows the stellar mass distribution.
The total stellar mass loss rate of the bulge ($\dot{M}=\int\!\dot{\rho}dV$)
is constrained by current theoretical predictions and related observations,
although the true value cannot be
observed directly inside the bulge. It is inferred from our knowledge
of the stellar population and evolution, together with observations
of the mass loss of similar stars in the solar neighborhood.
Estimates of the stellar mass loss rate may vary
by more than a factor of two.
Assuming a single stellar population with a standard
Salpeter initial mass function, \citet{Ciot91} found that
the stellar mass loss rate can be approximated as
\begin{equation} \label{eq:mdot}
\dot{M}=0.25L_{10}t_{10}^{-1.3}{\rm M}_\odot\,\yr^{-1},
\end{equation}
where $L_{10}$ is the current optical blue-band luminosity
in units of $10^{10}L_{B,\odot}$ of the stellar population and
$t_{10}$ is its age in units of 10$\,$Gyr.
Adopting a blue-band luminosity of
$2\times 10^{9}L_{B,\odot}$ (\citealt{Cox00}, p571) and an age of 10\,Gyr
for the bulge, we have $\dot{M} \simeq 0.05\,{\rm M}_\odot\,\yr^{-1}$.
\citet[fig.~22]{Mar05} directly relates mass loss to the total mass
of a stellar population (see also \citealt{Tang08}),
which gives $\dot{M} \simeq 0.07 \rm M_\odot yr^{-1}$.
Another estimate based on observations of asymptotic giant
branch stars gives $\dot{M}=0.64 L_{10}$\,${\rm M}_\odot\rm yr^{-1}$,
consistent with the stellar mass loss rate inferred for a sample of
nine ellipticals from mid-IR observations \citep{Athey02}.
Thus, $\dot{M}$ for the Galactic bulge can be as high as
0.13$\,{\rm M}_\odot\,\yr^{-1}$. We therefore run models with different
mass loss rates, as discussed in \S2.3.
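As a quick arithmetic check of the numbers quoted above, equation (\ref{eq:mdot}) and the AGB-based scaling give (Python):
\begin{verbatim}
L10, t10 = 0.2, 1.0                  # 2e9 L_B,sun and 10 Gyr
mdot_eq6 = 0.25 * L10 * t10 ** -1.3  # -> 0.05  Msun/yr
mdot_agb = 0.64 * L10                # -> 0.128 Msun/yr (~0.13)
\end{verbatim}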
The energy feedback of the stellar bulge is dominated by the mechanical
energy from Type Ia SNe. Following convention, we assume that each SN
releases $10^{51}\rm~ergs$ of mechanical energy.
We take the Type Ia SN rate of E-S0 galaxies to be 0.12 SNu
(SNu is defined as one SN per $\rm 10^{10}\,L_{B,\odot}$
per century; \citealt{CappEva1999}; \citealt{Cox00}, p467),
which gives about one SN per 3000 year in our Galactic bulge.
The energy input from the stellar wind, $S_{*}(r)$,
is assumed to be thermalized to the stellar kinematic
temperature $T_* \equiv \mu m_p \sigma^2 /3k \simeq 3\times10^5\,$K,
corresponding to a stellar velocity dispersion around 100 km$\,\rm s^{-1}$
\citep{Eckart1993}. Overall this energy is almost negligible
compared with that from SNe.
Each SN also ejects an adopted (Chandrasekhar) mass of 1.4\,$\rm M_\odot$.
Though the total amount of SN ejecta is much less than that of the mass loss
from evolved stars, the SN ejecta contribute most of the metals,
especially iron. In order to trace how these iron-rich ejecta
mix with the stellar wind material, we additionally incorporate
a separate advection equation
\begin{equation}\label{eq:ironfrac}
\parder{\rho \chi_i}{t} + \nabla\cdot(\rho \chi_i \mathbf{v})=0,
\end{equation}
where $\chi_i$ is the mass fraction of the $i$th component,
with the constraint $\sum_i \chi_i = 1$.
In the present simulations we have two components,
the iron mass and the rest of gas mass.
The iron mass from SNe, assumed to be 0.7\,$\rm M_\odot$ per SN
\citep{Nomoto1984}, is part of the SN ejecta.
The iron mass fraction in stellar wind material
is $f_{Fe,\odot}=0.1\%$ (i.e., the nominal solar).
The iron abundance can trace the SN enrichment.
Any zone with iron abundance greater than the solar
value is enriched by SNe.
Hereafter we refer to the iron ejecta as coming purely from SNe.
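Equation (\ref{eq:ironfrac}) is a standard passive-scalar advection law for the mass fractions; a minimal first-order upwind sketch in 1D (Python; the periodic wrapping and $v>0$ are assumed purely for illustration, and FLASH's mass-scalar machinery is more general) reads:
\begin{verbatim}
import numpy as np

def advect_fraction_1d(rho_new, q, v, dx, dt):
    # q = rho*chi is the conserved tracer density; update it with an
    # upwind flux difference (valid for v > 0, periodic wrapping),
    # then convert back to a mass fraction with the updated density.
    flux = q * v
    q_new = q - dt / dx * (flux - np.roll(flux, 1))
    return q_new / rho_new
\end{verbatim}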
The simulations are performed with FLASH \citep{Fryxell00},
an Eulerian astrophysical hydrodynamics code
with the adaptive mesh refinement (AMR) capability,
developed by the FLASH Center at the University of Chicago.
FLASH solves the Euler equations and uses the piecewise-parabolic method
to deal with compressible flows with shocks.
We take advantage of the AMR capability
to accurately include the heating of individual SNRs.
\vspace{-0.5\parskip}
\subsection{One Dimensional Model} \vspace{-\parskip}
To help set up the 3D simulations, we first simulate a 1D model.
Assuming that the energy and mass inputs are continuous in time and
spherically symmetric in space, we may simplify Eqs.~(1)--(3) into a 1D problem.
For a specific galactic bulge, the {\it crossing time} ---the time required
for the gas to flow from the center to the outer boundary---
is a few million years (assuming a size of a few kpc for the bulge),
significantly shorter than the evolutionary time scale of
the stellar energy and mass input rates.
Therefore, the problem can be regarded
as time-independent. The bulge outflow can reach a steady state
if radiative cooling does not affect the dynamics.
We run the 1D simulations by using a gas-free initial condition
and by continuing to add corresponding energy and mass in each
zone. FLASH handles the mass and energy inputs
with an operator split method. At each time step it first solves
the Eulerian equations without the source terms,
then explicitly updates the solution to account for
the corresponding source terms.
To properly conserve the mass, momentum, and energy,
we implement this procedure in three steps:
first, we update the gas density according to the amount of mass input
in that step; next, we modify the gas velocity to satisfy
momentum conservation; finally, we modify the gas temperature
to conserve the total energy.
We verified that this implementation exactly reproduces
the analytical solution of a star cluster wind \citep{Canto00}.
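A minimal single-zone sketch of this three-step update (Python; the variable names are ours, with \texttt{dm} and \texttt{de} the mass and energy added per unit volume during the step) could read:
\begin{verbatim}
def inject_sources(rho, v, e_tot, dm, de):
    # rho: density; v: one velocity component; e_tot: specific total
    # energy. The injected gas carries no net momentum.
    rho_new = rho + dm                    # (1) mass conservation
    v_new = rho * v / rho_new             # (2) momentum conservation
    e_new = (rho * e_tot + de) / rho_new  # (3) total-energy conservation
    # The temperature then follows from the thermal part,
    # e_new - 0.5 * v_new**2, via the equation of state.
    return rho_new, v_new, e_new
\end{verbatim}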
The system eventually evolves to a steady state.
Such a steady outflow solution can be analytically
derived without including cooling \citep{WC1983}.
If the 1D bulge outflow solution has a sonic point,
a subsonic outflow can then develop into a supersonic outflow
(i.e., a bulge wind). The final state of such a wind does not
depend on the specific initial condition.
Under certain conditions (e.g., due to significant radiative cooling
or low specific energy input; see \S4.2 for more discussion),
the sonic point may not exist and the gas outflow may be sensitive
to the boundary condition as well as the initial condition.
The use of the outflow (sometimes called zero-gradient)
boundary condition would then introduce an artificial force, caused by
the leveled-off pressure, that would produce perturbations
propagating inwards on a time scale comparable to the crossing time.
This situation would also occur if a simulation region were too
small to include the sonic point (if present).
Thus we use the 1D solution to make sure that the sonic point is
included in the 3D simulation domain (a cubic box).
\vspace{-0.5\parskip}
\subsection{Three Dimensional simulations}
\vspace{-\parskip}
Two 3D simulations are performed to examine the
properties of galactic bulge winds.
Their key parameters are listed in
Table~\ref{T:para} for quick reference.
The major difference between these two simulations is the mass loss rate:
$0.05\,{\rm M}_\odot\,\yr^{-1}$ for Model A (the 3D reference model)
and $0.1\,{\rm M}_\odot\,\yr^{-1}$ for Model B,
representing the uncertainty in the mass loss rate (\S2.1)
or extra mass loading expected in galactic bulges
(\S4.2; see also Li \& Wang 2009 in preparation).
The highest spatial resolutions are $\sim$ 3.9 and 4.9\,pc
respectively for the two models. The effective single-grid
resolutions of the simulations are $1024^3$ and $2048^3$ zones.
The steady wind flow established
in the 1D model is used as the initial flow with
an iron mass fraction of the solar value ($f=0.1\%$).
In the 3D realizations, SNe explode randomly
according to a Poisson process with the mean overall rate.
Their spatial distribution statistically follows
the stellar density distribution.
It would be computationally very expensive, if even possible,
to simulate the evolution of each SNR on sub-parsec scales
within the bulge-wide flow.
Instead, we adaptively plant individual structured SNR seeds
into the 3D simulation grid and then let them evolve.
We do not simply adopt the Sedov solution,
which is generally not appropriate for an SNR evolving
in a hot tenuous medium \citep{Tang05},
especially when the dynamics of the SN ejecta are considered.
According to a scaling scheme detailed in
a separate paper \citep{Tang09},
the structure of an SNR can be scaled from a template SNR simulated in
a different ambient medium setting.
We have constructed a library of template SNRs from 1D simulations,
assuming a selection of ambient gas temperatures and densities.
Each entry of this library consists of the density, temperature,
and velocity profiles at a particular age and a forward shock radius.
We apply the scaling scheme to dynamically generate
the profiles of each SNR seed. Specifically, we select a spherical
region around each SN location within which the density and
pressure are sufficiently smooth, using L\"{o}hner's
error (FLASH User's Guide; \citealt{Lohner87}) as the estimator.
We find that at least 500 zones
(i.e., more than 5 points for the radial profiles)
are needed to reasonably well represent a structured SNR seed.
This in turn requires the minimum embedding radius to be
at least 20\,pc, given the spatial resolution that is achievable
in our simulations. With the embedding radius determined,
we calculate the mean density and gas-mass-weighted temperature
of the enclosed gas to find the most suitable template in the library
and to form the required SNR seed \citep{Tang09}.
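While the scaling rules themselves are those of \citet{Tang09}, the template selection can be sketched as a nearest-neighbour lookup in the (density, temperature) plane (Python; the library layout is hypothetical):
\begin{verbatim}
import numpy as np

def pick_template(library, n_mean, T_mean):
    # Each library entry stores the ambient state ('n', 'T') and the
    # radial density, temperature, and velocity profiles of a 1D
    # template SNR; pick the entry closest in log(n)-log(T) space.
    keys = np.array([[np.log10(e['n']), np.log10(e['T'])]
                     for e in library])
    target = [np.log10(n_mean), np.log10(T_mean)]
    return library[int(np.argmin(((keys - target) ** 2).sum(axis=1)))]
\end{verbatim}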
The planting of an SNR seed also takes a few steps.
First we refine the affected region to the highest refinement
level.
Then we normalize the SNR structures to ensure
conservation of mass, momentum, and energy within that
region (see Appendix A for details).
Finally, the innermost region that
encloses 0.7$\,{\rm M}_\odot$ is traced as the pure iron ejecta of the embedded SNR.
This embedding procedure is well parallelized and allows
for linear scaling up to at least 1024 processors.
\begin{deluxetable}{cccccc}
\tabletypesize{\footnotesize}
\tablecaption{Model Parameters}
\tablewidth{0pt}
\tablehead{
Model&$\dot{M}$ &$\dot{E}_{sn}$ &$r_{\rm sonic}$ &$\Delta \rm L $ & L\\
& ($\rm{\rm M}_\odot\,\yr^{-1}$) & ($10^{40}{\rm ergs\,s^{-1}}$) & (kpc) & (pc) & (kpc)
}
\startdata
A & 0.05 & 1.1 & 1.0 & 3.9 & 4 \\
B & 0.1 & 1.1 & 1.8 & 4.9 & 10
\enddata
\label{T:para}
\end{deluxetable}
To save computing time, one may simulate only one
octant of the bulge by adopting a reflecting
boundary condition at the surfaces across the center.
A test run, however, shows that
a reflecting boundary condition introduces correlated
wave interactions when an SN explodes near the reflecting boundaries.
This effect is not physical and difficult to quantify.
It is most serious near the bulge center where three reflecting
boundaries intersect and the stellar density,
hence the rate of the SNe, is the highest.
Thus we resort to simulating the whole bulge,
which is centered inside the simulation domain.
However, we only simulate one octant at full resolution,
while the highest resolution in the rest of the grid is degraded by
a factor of four, except for regions where SNRs seeds have just
been embedded. These regions are forced to have the full resolution
in all octants, and are held for $10^5$ years
before returning to the default AMR.
We use refinement estimators acting on the density and pressure
to determine whether a block needs to be refined or derefined,
adopting the default criterion suggested in the FLASH User's Guide
(i.e., refining a block if any estimator is greater
than 0.8 and derefining it if all estimators are less than 0.2).
Regions outside the sonic radius
(which is obtained from the 1D model) are allowed to have
a lower refinement level that gradually decreases with radius.
This approach circumvents the reflection boundary problem
at the expense of about 60\% more computing time, which is acceptable.
In addition, it allows
us to examine the resolution effect within a single run.
A statistically steady state of such a 3D simulation can be reached after
a few crossing times.
We quantify the establishment of the steady state of a 3D bulge wind by
examining the relative variation of its global quantities such
as the total mass and energy.
Here we define the variation as the change of
a given quantity relative to its initial value, i.e.,
the relative difference between 3D and 1D.
The variation of the total mass within 2.0\,kpc of Model A is
shown in Fig.~\ref{F:massen_evol}a by the solid line.
Compared to its initial value, the total mass increases to
$\sim 7\%$ on average and fluctuates around this value with a
period of $\sim$\,10\,Myr, comparable to the flow crossing time.
As expected, the mass variation within the inner 1.2\,kpc radius,
displayed as the dashed line in the figure, has a shorter fluctuation
period. The expected iron mass fraction,
if fully mixed with stellar wind material, is
0.35\% ($\sim 3.5$ times the solar abundance) in Model A.
We show the variation of the iron abundance (in the solar units)
in Fig.~\ref{F:massen_evol}c.
It takes about 5\,Myr for the hot gas inside 2\,kpc to gain the
expected iron mass. The variation of the iron abundance is smaller than
that of total mass, mainly because it actually only reflects
the ratio of total iron mass to the total gas mass.
By introducing random SN events in the 3D simulations,
the globally conserved quantities are no longer constant as
they should be in a 1D spherical steady flow.
Only when a hydrodynamic steady state is established in the simulations
(i.e., the fluctuation has reached a statistically stable level) is
the comparison between 1D and 3D results meaningful.
We check the resolution effect based primarily on X-ray luminosity, which
is particularly sensitive to the density structure in the simulations.
We find that the X-ray luminosity difference between the high resolution octant
and the other seven octants is rather small.
This is partly because the majority of the emission arises from
individual SNRs which are resolved at the same resolution in all the octants.
To examine the resolution effect more directly, we resume Model A
with an increased spatial resolution by a factor of two,
and let it only evolve for 0.1\,Myr, limited by the available computing time.
This simulation produces finer structures of the bulge gas,
and the resultant X-ray luminosity increases about 3\% in the high resolution
octant and about 10\% in the other seven octants.
This demonstrates that our results are quite robust and are only slightly
affected by the spatial resolution.
\section{Results} \vspace{-\parskip}
In this section we present the gas properties extracted from the
3D hydrodynamical simulations. We first detail the results of
Model A (Fig.~\ref{F:bgw0_structure}) and then present Model B
(Fig.~\ref{F:dblm_structure}) for comparison.
Data near the outer region are excluded in our analysis to
avoid any potential artifacts introduced by the assumed outer
boundary condition of the simulations.
We show time-dependent gas properties such as global
structures, individual SNRs, and X-ray luminosities
as well as various time-averaged measurements.
The average is made over a time span ranging from 15 to 30\,Myr,
when the simulations have reached quasi steady states
(see Fig.~\ref{F:massen_evol}).
\subsection{Structures}\vspace{-\parskip}
Fig.~\ref{F:bgw0_structure} shows snapshots of the simulated density,
temperature, pressure, and iron ejecta mass fraction of Model A
in the $z=2$\,pc plane.
Sporadic SN events produce non-uniformity in the bulge wind.
The prominent features are various shell-like and filamentary
density structures. These shell-like structures of SNRs are easily
identified in the outer region, where the explosions are less frequent
and each SNR can evolve individually to a large volume
before colliding with others.
Individual SNRs near the bulge center appear more compact
because of the high gas density and pressure
and frequent interactions with adjacent remnants.
Evolved remnants tend to be dispersed and advected outward.
Higher temperature regions in general represent low-density
interiors of SNRs.
The distribution of the pressure is much smoother than those of
the density and temperature (Fig.~\ref{F:bgw0_structure}d),
as has also been found for the SN driven ISM in the Galaxy
\citep{Avillez05,MacLow05,Joung06} and in starbursts \citep{Joung08}.
The shell-like structures in
the pressure map correspond to the expanding blastwaves.
Inside each SNR, the pressure is nearly uniform,
as expected from the \citet{Sedov59} solution.
The spatial distribution of iron ejecta is far from uniform,
as illustrated in Fig.~\ref{F:bgw0_structure}c.
Regions with the lowest iron mass fraction are primarily filled with
stellar wind material with little mixing with the iron ejecta.
Gas with an intermediate iron mass fraction represents iron ejecta
diluted by constantly injected stellar wind material
(or mildly by numerical mixing).
Though diluted and fragmented, the iron ejecta
are advected out of the bulge, hardly mixing with the bulk of
stellar wind material. Hence, the ISM is not uniformly enriched
by SN ejecta within the bulge.
Similar results are also present in
Model B (see Fig.~\ref{F:dblm_structure}).
Fig.~\ref{F:bgw0_xaxis} shows sample density, temperature,
and velocity profiles of the hot gas at two representative times.
Individual troughs in the density profiles represent SNR interiors,
which are bounded by peaks (i.e., the expanding shells of SNRs).
The temperature profiles are nearly
anti-correlated with the density profiles: a peak in temperature usually
corresponds to a trough in density at the same locus,
consistent with the smooth pressure profiles
(e.g., \citealt{MacLow05,Joung06}).
As SNRs evolve, the loci of peaks and troughs change with time.
Multiple evolved SNRs appear to be wave-like.
This wave-like flow driven by sporadic SNe produces the fluctuations
in the globally conserved quantities (Fig.~\ref{F:massen_evol}).
We demonstrate the evolution of a few SNRs
(labeled as $\rm I$, $\rm I\!I$, and $\rm I\!I\!I$)
very close to the bulge center in Fig.~\ref{F:gcsnr}
taken from the high resolution study.
The forward blastwave of SNR $\rm I$ is evident in the density panel
at $10^3$\,year.
The pure iron core has a radius of about 10\,pc at this time.
The blastwave expands faster toward the lower-right region,
where a lower density cavity has been created by an earlier
SNR (labeled as $\rm I\!I$).
SNR $\rm I$ is unusual in that it occurs right at the bulge center.
Because of the high stellar wind injection rate there, its iron ejecta
are diluted quickly. SNRs away from the center are less affected
by the stellar wind dilution. SNR $\rm I\!I$, for example,
at an age of $2\times 10^4$ years still has an iron mass
fraction of $\sim 6\%$, corresponding to 60 times the solar abundance.
The collision between SNRs $\rm I$ and $\rm I\!I$
is also evident in the lower-right subpanel of each group.
The evolved shapes of these SNRs are asymmetric
because of both the inhomogeneous environment and
interactions with other SNRs.
\subsection{Time-Averaged Distributions}\vspace{-\parskip}
Fig.~\ref{F:bgw0_phase} shows the time-averaged gas mass and volume
distributions in the high resolution octant
as functions of temperature and density in two
regions: $0<r<0.6$\,kpc and $0.6<r<1.2$\,kpc.
The distributions in both regions are broad.
As expected, the mass distribution is biased toward
the lower temperature and higher density side
relative to the volume distribution. Relatively cold dense gas
(e.g., distributed in the upper-left corner of Fig.~\ref{F:bgw0_phase}a)
occupies negligible volume, lying outside
the 1\% contour level of the gas volume distribution.
To see the results more quantitatively, we show their marginal
distributions in temperature and density respectively
(Fig.~\ref{F:bgw0_pdfnt}).
The volume distribution resembles a power law
with an index of roughly $-$2 at the high-temperature end.
The mass distribution is similar to the volume
distribution but peaks at a lower temperature.
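These distributions are, operationally, two-dimensional histograms over the
simulation cells, weighted by cell volume or cell mass. A minimal sketch of
this bookkeeping in Python, assuming per-cell temperature, number density, and
volume arrays are available (the array and function names are hypothetical):
\begin{verbatim}
import numpy as np

def phase_histogram(T, n, cell_vol, nbins=100):
    """Volume- and mass-weighted 2D histograms in (log T, log n).

    T, n, cell_vol: per-cell temperature [K], number density
    [cm^-3], and volume [cm^3] (hypothetical inputs).
    """
    logT, logn = np.log10(T), np.log10(n)
    mass = n * cell_vol  # proportional to the cell gas mass
    H_vol, xe, ye = np.histogram2d(logT, logn, bins=nbins,
                                   weights=cell_vol)
    H_mass, _, _ = np.histogram2d(logT, logn, bins=(xe, ye),
                                  weights=mass)
    # Normalize each histogram to unit total
    return H_vol / H_vol.sum(), H_mass / H_mass.sum(), xe, ye
\end{verbatim}
The marginal distributions in Fig.~\ref{F:bgw0_pdfnt} follow by summing the
normalized histograms along one axis.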
Fig.~\ref{F:bgw0_dem} shows the differential emission measure (EM)
as a function of temperature within a radius of 1.2\,kpc.
The EMs from the low- and high-resolution
regions are nearly identical around the peak value.
The bulk of the broad EM distribution,
peaked at $\sim 3.7\times10^{6}$\,K,
can be approximated by a log-normal distribution,
with small deviations mostly at the high temperature end.
The grey region in the figure encloses the 50\% intervals
below and above the mean of the whole region.
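The differential EM is likewise a one-dimensional histogram of
$n_e n_{\rm H}\,dV$ over temperature. A minimal sketch under the same
assumptions as above, taking $n_e \simeq 1.2\,n_{\rm H}$ for a fully ionized
plasma of roughly solar composition:
\begin{verbatim}
import numpy as np

def differential_em(T, nH, cell_vol, bins):
    """dEM/dlogT: histogram of n_e*n_H*dV in log T bins [cm^-3]."""
    ne = 1.2 * nH              # assumed fully ionized, ~solar He/H
    em = ne * nH * cell_vol    # emission measure per cell
    hist, edges = np.histogram(np.log10(T), bins=bins, weights=em)
    return hist / np.diff(edges), edges

# The log-normal approximation quoted above corresponds to fitting
# a Gaussian to dEM/dlogT as a function of log10(T).
\end{verbatim}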
\subsection{X-ray Emission and Spectra\label{emspec}}
\vspace{-\parskip}
\begin{deluxetable}{c|cccc}
\tabletypesize{\footnotesize}
\tablewidth{0pt} \tablecolumns{5}
\tablecaption{\label{T:luminosity}Time-averaged X-ray Luminosities}
\tablehead{Energy band (keV) & 0.2-0.5 & 0.5-2.0 & 2.0-10 & Iron K$\alpha$\\
(${\rm ergs\,s^{-1}}$) &($10^{36}$) &($10^{36}$) &($10^{36}$) &($10^{32}$)}
\startdata
(A) 1D: $\rm Z_{\odot} / 3.5\,Z_{\odot}$ & 0.092/0.22 & 0.72/2.42 & 0.022/0.066 & 0.095/0.31 \\ \hline
(A) 3D: $\rm Z_{\odot} / 3.5\,Z_{\odot}$ & 0.39/1.18 & 1.22/4.14 & 0.021/0.062 & 4.1/13.8\\
\hline \hline
(B) 1D: $\rm Z_{\odot} / 1.8\,Z_{\odot}$ & 3.44/5.32 & 10.8/18.5& 0.03/0.05 & \nodata/\nodata \\ \hline
(B) 3D: $\rm Z_{\odot} / 1.8\,Z_{\odot}$ & 26.8/49.8& 29.09/50.4& 0.066/0.10 & 7.9/13.5\\
\enddata
\tablecomments{For each model the X-ray emissions are calculated
using two metallicities: 1.0 and 3.5 solar for Model A;
1.0 and 1.8 solar for Model B.}
\end{deluxetable}
We calculate the luminosities and spectra using the standard
software package {\small XSPEC} based on the EM distributions.
For the hot gas considered here, we adopt the standard {\small MEKAL} model
\citep{Mewe85,Liedahl95}
for plasma in collisional ionization equilibrium.
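Schematically, each band luminosity in Table~\ref{T:luminosity} follows from
folding the EM distribution with a band-integrated emissivity
$\Lambda_{\rm band}(T)$ supplied by the plasma code. A minimal sketch, in
which \texttt{emissivity\_band} is a hypothetical interpolation of a
precomputed {\small MEKAL} emissivity table:
\begin{verbatim}
import numpy as np

def band_luminosity(em_per_bin, T_centers, emissivity_band):
    """L_band = sum_i EM_i * Lambda_band(T_i)  [ergs/s].

    em_per_bin: EM in each temperature bin [cm^-3];
    emissivity_band: hypothetical callable returning the
    band-integrated emissivity [ergs cm^3 s^-1].
    """
    return float(np.sum(em_per_bin * emissivity_band(T_centers)))
\end{verbatim}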
Fig.~\ref{F:spectrum} shows a synthesized spectrum of Model A
assuming a solar abundance.
The luminosities in a few bands of Model A
are listed in the 4th row of Table \ref{T:luminosity}.
The corresponding spectra and luminosities of Model B are presented
in Fig.~\ref{F:spectrum} and in Table \ref{T:luminosity} as well.
The luminosity of Model B in the low energy band increases
dramatically; e.g., in the 0.2-0.5\,keV band it
is more than 60 times larger than that of Model A.
This increase is due to the combination of higher density and
lower temperature (hence higher emissivity)
of the weak bulge wind (see \S4.2 discussion).
Of course, the X-ray emission depends on the metal abundance as well.
If an abundance expected for stellar material fully mixed with SN
ejecta were adopted, the luminosities in the three bands would
increase by a factor of a few, as listed in Table \ref{T:luminosity}.
Since the iron ejecta are in fact not well mixed with
the surrounding material, we examine the effect of
the non-uniform metal distribution on X-ray emission.
The normalized EM distribution as a function of the iron abundance
is plotted in Fig.~\ref{F:emz} as the solid black line.
It shows that about 80\% of the EM comes from material
with an iron abundance of less than 1.5 times solar.
The corresponding distribution of the luminosity in the
0.3-2 keV band nearly follows that of the EM.
But the distribution of the luminosity in the 2.0-5.0 keV band is
affected considerably by gas with higher iron mass fractions
(corresponding to SNR interiors that in general have higher temperatures).
Thus the majority of the X-ray emission from the bulge wind
comes from the stellar wind material
that is hardly enriched by SN ejecta.
In the following, we thus adopt the solar abundance for
our calculations.
Fig.~\ref{F:bgw0_lxt} shows the time variation of the 0.3-2.0 keV
luminosity of Model A in two concentric regions:
an inner region with $r < 0.6$\,kpc,
and an outer region with $0.6 < r < 1.2$\,kpc.
The luminosity in the inner region has a larger fluctuation
(up to a factor of 3) than that in the outer region.
The X-ray emission from the region with $r>1.2$\,kpc is negligible.
The total 0.3-2.0 keV X-ray luminosity is only
$\sim 10^{36} \rm~ergs\,s^{-1}$ with a fluctuation less than
a factor of two. Thus overall the X-ray emission of the
diffuse gas is not significantly affected by the sporadic SN heating.
Fig.~\ref{F:surbx} illustrates the 0.3-2.0 keV surface brightness
profile at several representative times.
The brightness varies by more than an order of magnitude near the center.
Compared with the stellar surface density profile (gray line),
the X-ray profiles are generally steeper.
Fig.~\ref{F:emap3b} shows the X-ray surface brightness maps in
three representative bands (0.3-0.7, 0.7-2.0, and 2.0-5.0 keV).
The maps in the two lower energy bands do not show
significant structures; those small features are typically not
associated with individual SNRs. In the 2.0-5.0 keV map,
individual SNRs are recognizable because they are the primary
source of the hard X-ray emission.
\subsection{Energy Distribution } \vspace{-\parskip}
From the 3D realization, we can further
quantify the thermal and kinetic energies, as well as
the turbulent motion, of the hot gas.
Fig.~\ref{F:energyform} shows the distributions of
kinetic and thermal energies as a function of temperature
at four different regions.
The energy distribution peaks at $5\times 10^6$\,K.
At temperatures below $5\times10^6$\,K,
the energy is dominated by the thermal energy
near the center ($0<r\leq 0.4$\,kpc),
while the thermal and kinetic energies contribute almost equally
at large radii ($r>1.2$\,kpc).
Note that for an ideal gas with ratio of specific heats $\gamma=5/3$,
the Mach number is 1.34 (i.e., supersonic flow)
when its thermal energy equals
its kinetic energy. The thermal energy of much hotter gas,
primarily low-density hot SNR ejecta,
is always much larger than its kinetic energy, which means that
the SNR ejecta leave the simulation region subsonically.
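For reference, the quoted Mach number follows from equating the kinetic and
thermal energy densities of an ideal gas:
\[
\frac{1}{2}\rho u^2 = \frac{P}{\gamma-1}
\;\Longrightarrow\;
{\cal M} \equiv \frac{u}{c_s}
= \sqrt{\frac{2}{\gamma(\gamma-1)}}
\simeq 1.34
\quad {\rm for}\ \gamma = 5/3,
\]
where $c_s = (\gamma P/\rho)^{1/2}$ is the adiabatic sound speed.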
The difference between the total kinetic energy and its radial
component reveals the energy contained in non-radial motion.
Inside the scale radius ($0<r\leq 0.4$\,kpc), $\sim$ 30\%
of the kinetic energy is in non-radial motion. This fraction
is less than 2\% in the outer shell ($1.2<r\leq 1.6$\,kpc).
As a whole, $>$ 80\% of the thermal and kinetic energy of the bulge wind
is stored in gas with temperatures between $10^{6.5}$\,K and $10^{7.5}$\,K.
The hotter gas ($T>10^{7.5}$\,K) contains $<$ 5\% of the total energy.
\section{Discussion} \vspace{-\parskip}
\subsection{Comparison between the 1D Model and 3D Simulation}
\vspace{-\parskip}
Fig.~\ref{F:radial_prof} compares averaged radial profiles for the
density, temperature, velocity, and pressure in a few snapshots of
Model A with the corresponding 1D results.
The velocity and pressure profiles averaged in radial bins follow the
1D results remarkably well. In the 3D simulation there is no unique
location of the sonic point to divide the supersonic flow from
subsonic flow since the velocities and temperatures vary greatly
from point to point. Nevertheless, the average hot gas outflow
does become supersonic at $\rm \sim 1.0\,kpc$, close to the 1D
sonic point (marked by the arrow).
The density and temperature profiles
deviate significantly from the corresponding
1D results in the inner region ($r<r_s$).
In particular, the density profile tends to be more
centrally peaked in the 3D simulations.
This temporary accumulation of stellar wind material is
largely due to the continuous mass
injection and the lack of prompt heating.
The variation of the density profiles, especially near the center,
is closely coupled with the realization of SN events.
At large radii the density and temperature profiles
become nearly identical to the 1D results.
Since the radiation of the bulge wind is negligible,
the identical radial profiles at large radii
are expected due to the conservation of mass and energy.
Thus the nature of sporadic SN explosions does not affect the
overall gas dynamics on large scales.
In Fig.~\ref{F:radial_prof_res} we compare the
profiles of the octant at full resolution to profiles of three
low resolution octants. The figure suggests that the profiles show
numerical convergence at a level better than the random variance
among realizations.
On average the gas temperature in the 3D simulation has
a lower and flatter profile than that in the corresponding
1D result. In the 1D model, the gas temperature is the highest
at the center and decreases outwards monotonically. In contrast,
the average gas temperature in a 3D simulation can be much lower
at the center than in the surroundings,
because of non-uniformly distributed SN heating.
The distribution of SN heating depends on the ambient medium.
An SN that explodes in a dense environment, such as the bulge center,
heats a small region to a rather high temperature.
The small amount of overheated gas advects outward
and carries a large fraction of the SN energy with it,
while leaving much of the central gas unheated.
The unheated gas accumulates around the bulge center,
resulting in a relatively steep density profile near the
bulge center (see Fig.~\ref{F:radial_prof} panel a).
The large density gradient makes it even harder for the SN
heating to be uniformly distributed near the center,
so it tends to be transported to outer low density regions
(see also \citealt{Hnatyk99}). Thus a low-temperature inner region
naturally forms under the sporadic SN heating scenario.
It is also worth noting that the temperature profile depends on
the weighting methods, as shown in Fig.~\ref{F:cmpT}.
The EM-weighted value, which closely resembles that
inferred from X-ray observations, is relatively low because
it is primarily determined by the low-temperature
denser material (see \S3.2).
In comparison, the temperature profiles from different
weighting methods are identical in the 1D model.
This is because in the 1D model, all quantities such as
temperature, density, and pressure are monotonic functions of radius.
Therefore the density has a one-to-one correspondence to the temperature,
which also explains the single-line gas distribution of the 1D model
in the temperature and density space (Fig.~\ref{F:bgw0_phase}).
The integrated gas properties of Model A are also significantly
different from those of the 1D model, such as the EM distributions,
spectra, X-ray luminosities, etc.
In the 1D model the EM peaks at $\sim8\times10^{6}$\,K
and is truncated sharply at low or high temperatures
(three-dots-dashed magenta line in Fig.~\ref{F:bgw0_dem}),
while the EM distribution of Model A peaks at a lower
temperature and is much broader.
But the difference between the spectral shape of Model A and
the corresponding 1D model is small in the 0.7-3.0\,keV band
(see Fig.~\ref{F:spectrum}). Model A has slightly
higher (about 30\% more) X-ray luminosity in this band.
At lower ($<0.7$\,keV) and higher energy bands
($>3.0$\,keV), Model A gives much higher photon fluxes,
especially for some line features, such as OVII (22.0\AA, 0.56\,keV)
and helium-like iron K$\alpha$ ($\rm FeXXV\,K\alpha$; 6.7\,keV),
largely due to the much broader temperature distribution.
It is worth noting that the $\rm FeXXV\,K\alpha$ line emission
in Model B increases, while the line is missing in its corresponding
1D model because of the relatively low and narrow
gas temperature distribution of the 1D model.
\subsection{Observational Implications} \vspace{-\parskip}
There are two direct predictions from the 3D simulations
that help to partly explain the observational puzzles
faced by the 1D wind model.
First, the relatively low emission-weighted gas temperature
is consistent with the results inferred from the diffuse
X-ray emission in galactic bulges and elliptical
galaxies (e.g, \citealt{Sarazin01,David06,Li07a}).
Second, the SN Ia ejecta are not fully mixed with the
stellar wind material inside the galactic bulge, and most of the X-ray
emission is contributed by the stellar wind material shocked by
SN blastwaves. Thus the X-ray spectra in general reflect only
the metal abundance of the stellar wind material,
which is consistent with the apparent solar metallicity inferred
from the diffuse X-ray emission in a large sample of ellipticals
(e.g., \citealt{Humphrey06}).
In addition, the broadening of the gas temperature distribution
can also affect the determination of metallicity.
A low metal abundance could be obtained by fitting the X-ray
spectra of such gas with an isothermal plasma model.
This effect is demonstrated in Fig.~\ref{F:simspec}.
We simulate the X-ray spectrum based on the {\small MEKAL} model
in {\small XSPEC}, adopting the approximate log-normal
EM distribution (see \S3.2) and a solar abundance.
The data points in the upper panel
show a simulated spectrum of about $600$ photons.
If we fit this simulated spectrum with a single-temperature
{\small MEKAL} model, the resulting abundance is only about half
solar ($0.49\pm0.17$). The fit is statistically acceptable
(with $\chi^2=42.4/41$), partly due to the small counting statistics.
If the simulated spectrum contains $10^4$ photons instead,
the fit is no longer acceptable (with $\chi^2=213/91$)
and the best fit tends to give a much lower abundance
(about one-third solar).
A two-temperature component model can significantly improve the fit,
and the abundances are in general less than one solar,
although they are not strongly constrained.
Thus the abundance is likely underestimated
by fitting an X-ray spectrum of gas with broad temperature and density
distributions (see also \citealt{Strickland98}).
Given the low resolution and small signal-to-noise ratio of
X-ray spectra typically available for diffuse hot gas,
it is difficult to distinguish gas with a broad temperature distribution
from an isothermal plasma, especially within a narrow energy band.
In Fig.~\ref{F:spectrum} we plot an arbitrarily normalized
spectrum of a single-temperature hot plasma (0.8\,keV for Model A
and 0.4 keV for Model B,
where the EM distribution of the corresponding 1D model peaks).
The spectra of the 3D simulation and the isothermal plasma model
are similar in the 0.5--2.0\,keV band.
This approximation gives a potential shortcut to fit the X-ray
observations coarsely with only one or two gas components,
which are not necessarily able to reveal the actual physical and chemical state
of the ISM. With higher resolution spectra, line diagnostics
may provide additional useful information to reveal the gas properties.
However, the models still predict far lower X-ray luminosities
than observed. The observed diffuse X-ray luminosity
in the 0.5--2~keV band of our Galactic bulge
and M\,31 bulge is about $10^{38}\rm~ergs~s^{-1}$
\citep{Shirey01,TOKM04,Muno04,Li07a}.
Using the reference mass and energy input rates,
the bolometric X-ray
luminosity predicted by the 1D wind model is no more than
$10^{36}\rm~ergs~s^{-1}$. The inhomogeneous
structures of the gas density and temperature in Model A
increase the luminosity in the 0.5-2\,keV band
by no more than half an order of magnitude (Table~\ref{T:luminosity}).
This under-prediction of the observed luminosity remains
a serious problem for the models.
The parameter with the strongest influence on the luminosity
of the bulge wind is the stellar mass loss rate.
As demonstrated in Model B which has a doubled mass input rate,
its mean gas density increases while the temperature decreases.
The X-ray emission
increases by a factor of $\sim 20$ in the 0.5-2.0\,keV band.
This enhancement is due to the increase of both density and emissivity.
To quantitatively understand this trend, let us first consider
a scaling relation of the bulge wind.
The specific heating $\beta = \dot{E}/\dot{M}$ determines
the mean central gas temperature of the wind solution.
In a steady flow, the velocity $u \propto \beta^{0.5}$
(e.g. \citealt{WC1983}), so long as gravitational potential
energy remains unimportant.
We have $\rho \propto \dot{M} \beta^{-0.5}$ from
the mass conservation equation,
and $T \propto \beta$ from the energy conservation equation.
For gas with a temperature between 0.5 and 1.0\,keV,
the X-ray emissivity in the 0.5--2.0\,keV band
is roughly inversely proportional to the temperature
[e.g., $\Lambda(T) \propto T^{-0.7}$ based on the {\small MEKAL} model].
The total X-ray emission is then $L\propto \rho^2 \Lambda(T)
\propto \dot{M}^{3.7} \dot{E}^{-1.7}$.
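For clarity, the intermediate steps of this scaling are
\[
L \;\propto\; \rho^{2}\,\Lambda(T)
\;\propto\; \bigl(\dot{M}\beta^{-1/2}\bigr)^{2}\,\beta^{-0.7}
\;=\; \dot{M}^{2}\,\beta^{-1.7}
\;=\; \dot{M}^{3.7}\,\dot{E}^{-1.7},
\]
using $\beta = \dot{E}/\dot{M}$, $\rho \propto \dot{M}\beta^{-1/2}$, and
$\Lambda(T) \propto T^{-0.7} \propto \beta^{-0.7}$.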
For winds with the same $\beta$ the velocity and temperature
profiles are the same, and density profiles differ only in their
normalization, which is proportional to $\dot{M}$.
We thus use $\beta/\beta_*$
to denote the separate change of either $\dot{M}$ or $\dot{E}$,
e.g., $L\propto \dot{M}^{3.7}$ is equivalent to
$L\propto (\beta/\beta_*)^{-3.7}$ for a fixed energy input,
and $L\propto \dot{E}^{-1.7}$ to $L\propto (\beta/\beta_*)^{-1.7}$
for a fixed mass input.
Fig.~\ref{F:en_vs_em} shows the luminosity in the 0.5--2 keV
as a function of $\beta$ based on a suite of 1D simulations.
In this plot the luminosities are normalized to that of a
1D reference model ($L_*\sim 10^{36}\rm ergs\,s^{-1}$;
corresponding to Model A).
The scaling relations of $L$ versus $\dot{M}$ and $\dot{E}$
closely match the simulations when $\beta/\beta_*$ is larger than 0.5.
In the adopted galactic bulge,
if the specific energy approaches one-third of the reference value,
the simulated luminosities increase sharply and no longer follow the
scaling relation, because the gravitational potential becomes
dynamically important for such a small $\beta$.
The corresponding results of Models A and B are also plotted in
Fig.~\ref{F:en_vs_em} to show the effects of gas inhomogeneity.
The ratio of the 0.5-2.0 keV luminosity of Model A
to that of the corresponding 1D model is about 2,
and the ratio is about 3 for Model B.
If $\beta$ is fixed at the reference value of Model A,
the velocity and temperature profiles of the wind should
remain the same, and $L_X \propto \dot{M}^2$.
Hence, both $\dot{E}$ and $\dot{M}$
need to increase by an order of magnitude in order to
boost the X-ray luminosity to match that observed in the M\,31 bulge.
Although a bulge wind with a reduced $\beta$ can significantly increase
the diffuse X-ray emission as demonstrated in Model B,
it is unlikely to be the best solution.
In the presence of a reasonable dark matter halo with
properties like that of the Milky Way galaxy (e.g.,
$M_{halo} \simeq 10^{12}{\rm M}_\odot$, $r_{virial} \simeq 250$\,kpc,
an NFW profile with a concentration parameter of 15),
$\sim 120\%$ of the available energy input in Model B
is required for the escape of all the stellar ejecta
to beyond the virial radius. Thus the bulge wind in Model B
is not energetic enough to escape the galaxy potential well
and the bulge wind material must accumulate within
the virial radius. This accumulated material may form
a considerable circum-galactic medium (CGM) that could
eventually quench the bulge wind \citep{Tang08}.
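As a check on the halo side of this energy budget, the specific energy needed
to lift gas from the halo center to the virial radius of the quoted NFW halo
can be sketched as follows (a minimal calculation that neglects the stellar
bulge contribution to the potential, so the true requirement is somewhat
higher):
\begin{verbatim}
import numpy as np

G = 4.301e-6  # gravitational constant [kpc (km/s)^2 / Msun]

def nfw_lift_energy(M_vir=1e12, r_vir=250.0, c=15.0):
    """Specific energy [(km/s)^2] to lift gas from the center
    to the virial radius of an NFW halo (text parameters)."""
    rs = r_vir / c
    m_c = np.log(1.0 + c) - c / (1.0 + c)  # NFW mass factor
    # NFW potential: phi(r) = -(G M_vir/m_c) ln(1 + r/rs)/r,
    # with central value phi(0) = -G M_vir / (m_c rs).
    phi_center = -G * M_vir / (m_c * rs)
    phi_vir = -(G * M_vir / m_c) * np.log(1.0 + c) / r_vir
    return phi_vir - phi_center

# ~1.1e5 (km/s)^2, an effective "lift velocity" of ~340 km/s
print("%.3g" % nfw_lift_energy())
\end{verbatim}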
Under the interaction between the bulge wind and the CGM,
the gas outflow might be subsonic in the vicinity of galactic
bulges and of other ellipticals with low $L_X/L_B$ ratios.
The subsonic state is necessary to explain both the
large luminosity (compared to the wind model prediction)
and the extent of the diffuse X-ray emission.
Even if a bulge wind has the energy to escape the halo
virial radius, the wind can be reverse-shocked and stalled by the CGM.
When the reverse shock propagates inward within the sonic point,
the bulge wind turns into a globally subsonic outflow.
Such a subsonic state can be quasi-stable for a long time,
given a proper treatment of the outer boundary of the galactic flow \citep{Tang08}.
A proper 3D simulation including the entire CGM is possible,
though computationally very expensive at present.
Other effects, which are ignored here for simplicity,
can further affect the X-ray luminosity of the galactic bulges.
A vertically collimated bulge wind can be a promising way to
significantly boost the X-ray luminosity, particularly for objects
such as M\,31 with a considerable mass in the galactic disk.
Motivated by the observed bipolar diffuse X-ray emission in the M\,31
bulge \citep{Li07a}, we have found in preliminary simulations that a
vertically collimated wind, confined by the surrounding gaseous disk,
will have significantly higher X-ray emission
(by up to an order of magnitude).
The major reason is that the density falls off much more slowly in the
collimated wind than in the spherical wind.
We plan to address this issue in a separate paper
(Tang \& Wang in preparation).
\section {Summary}\vspace{-\parskip}
In this work, we have explored the properties of the structured
hot gas created by sporadic SN explosions inside a galactic bulge
by conducting detailed 3D hydrodynamical simulations.
Our main results are as follows:
\begin{itemize}
\item A galactic wind may be generated in a galactic bulge
with the standard empirical stellar mass loss and Type Ia SN rate.
The gas properties fluctuate in time particularly
in the central region lying within the sonic radius of the wind,
where individual SN explosions strongly influence
the density and temperature distributions.
At larger radii, the spherically averaged profiles of the 3D simulations
follow those of the 1D models. Therefore the 1D treatment of
a galactic bulge wind flow is a reasonable approximation
on large scales.
\item Sporadic SN explosions produce 3D filamentary and
shell-like structures in the gas.
These structures result in broad density and temperature
distributions, compared to the 1D model. Furthermore,
the relatively low temperature of the structures leads
to an emission measure-weighted temperature that is
significantly lower than the expected value inferred
from the specific heating and has a relatively flat
radial distribution throughout the bulge region,
consistent with observations.
\item Iron ejected by SNe does not mix well with the surrounding gas
within the bulge region and has a relatively high temperature
and low density, so it contributes primarily to emission in the
energy band $>$ 2\,keV. The diffuse soft X-ray emission
comes from shells associated with SN blastwaves,
which are hardly enriched by SN ejecta and have a metallicity
close to that of the ambient gas, which originates in stellar winds.
This, together with the temperature broadening, helps to explain the
apparent sub-solar abundance of the soft X-ray-emitting gas
in galactic bulges/ellipticals.
\item
Compared with the 1D spherical wind model,
the structured hot gas in 3D simulations can boost
the X-ray emission in an intermediate energy band
(e.g., 0.5-2.0\,keV) by a factor of a few.
This increase is more significant at the lower or
higher energy bands due to the broad distributions
of both temperature and density.
In order to produce the luminosity and surface
brightness distribution similar to the observed
diffuse X-ray emission, the bulge outflow likely
needs to be in a subsonic state and/or
an angularly confined configuration.
\end{itemize}
\vspace{-\parskip}
\section*{Acknowledgments}\vspace{-\parskip}
The software used in this work was in part developed
by the DOE-supported ASC / Alliance Center for Astrophysical
Thermonuclear Flashes at the University of Chicago.
Simulations were performed at the Pittsburgh Supercomputing Center
supported by the NSF.
This project is supported by
NASA through grants SAO TM7-8005X and NNG07AH28G.
\section{Introduction}\label{sec:intro}
The cycle 4 {\it Spitzer Space Telescope} Legacy project ``The Gould Belt: Star Formation in the Solar Neighborhood'' (PID: 30574; PI: L.E. Allen) completed the {\it Spitzer} survey of the large, nearby star-forming regions begun by the c2d Legacy Project \citep{Evans2003, Evans2009}. The cloud with the least prior study included in the survey is the cloud we have designated as ``Auriga'' which lies on the Perseus-Auriga border. This cloud has also been designated the California Molecular Cloud by \cite{Ladaetal2009} since it extends from the California Nebula in the west to the LkH$\alpha$~ 101 region and associated NGC 1529 cloud in the east. We adopt the name Auriga-California Molecular Cloud (AMC) to encompass both nomenclatures.
Despite the AMC's proximity to two of the most well-examined star-forming clouds, Taurus-Auriga and Perseus, it is a relatively unstudied region. Several dark nebulae were noted along its length by \cite{Lynds1962}, and CO associated with many Lynds objects was measured by \cite{Ungerechts1987}, who note the presence of a CO ``cloud extending from the California nebula (NGC 1499) in Perseus along NGC 1579 and LkH$\alpha$~ 101 well into Auriga'' (their cloud 12). Only very recently has a giant molecular cloud been unambiguously associated with the series of Lynds nebulae through high resolution extinction maps by \cite{Ladaetal2009}, who placed its distance firmly within the Gould Belt (GB) at $450 \pm 23$ pc. At this distance, the cloud's extent of 80 pc and mass of $\sim10^5 \ M_{\odot}$ rivals that of the Orion Molecular Cloud (L1641) as the most massive in the Gould Belt. For the remainder of this paper, we adopt this distance of 450 pc for the entire AMC. This is consistent with the distance of $510^{+100}_{-40}$ pc found by \cite{Wolketal2010} in their study of LkH$\alpha$~ 101 with Chandra. We note that this distance differs from that adopted by \cite{Gutermuthetal2009} for LkH$\alpha$~ 101 of 700 pc.
We have mapped a significant fraction of the AMC with the Infrared Array Camera (IRAC; \citealt{Fazio2004}) and the Mid-Infrared Photometer for \textit{Spitzer}\ (MIPS; \citealt{Rieke2004}) on board the \textit{Spitzer Space Telescope} \citep{Werner2004}, with a total overlapping coverage of 2.5 deg$^2$ in the four IRAC bands (3.6, 4.5, 5.8 and 8.0 \micron) and 10.47 deg$^2$ in the three MIPS bands (24, 70, and 160 \micron). The mapped areas are not all contiguous and were chosen to include the areas with $\rm{A}_V$~$> 3$, as given by the \cite{Dobashi2005} extinction maps. The goal of these observations is to identify and characterize the young stellar object (YSO) and substellar object populations. The data presented here are the first mid-IR census of the YSO population in this region. The area around LkH$\alpha$~ 101 and its associated cluster was observed as part of a survey of 36 clusters within 1 kpc of the Sun with \textit{Spitzer}\ by \cite{Gutermuthetal2009} and those data have been incorporated into our dataset through the c2d pipeline.
More recently, the AMC has been observed by the {\it Herschel Space Observatory} at 70 -- 500 \micron, and by the Caltech Submillimeter Observatory with the Bolocam 1.1 mm camera \citep{Harveyetal2013}. These observations characterize the diffuse dust emission and the cooler Class 0 and Class I objects which can be bright in the far-IR. We do not analyze the large scale structure of the cloud in this paper as \cite{Harveyetal2013} present such an analysis with the \textit{Herschel}~observations, which are more contiguous and have a higher resolution than our MIPS observations. \cite{Harveyetal2013} also include a comparison to these MIPS data and so further analysis is not required here.
We describe the observations and data reduction (briefly as it is well-documented elsewhere) in $\S$ \ref{sec:obs}. In $\S$ \ref{sec:yso}, we describe the source statistics, the criteria for identifying and classifying YSO candidates and we compare the YSO population to other clouds. The SEDs and disk properties of YSOs are modeled in $\S$ \ref{sec:sed}. We characterize the spatial distribution of YSOs in $\S$ \ref{sec:spatial} and summarize our findings in $\S$ \ref{sec:summary}.
\section{Observations and Data Reduction}\label{sec:obs}
\begin{figure*}
\includegraphics[width=6 in,clip=True,trim=2.45cm 7cm 3.85cm 8cm]{fig_Auriga_hbf_3.jpg}
\caption{Integrated \textit{Spitzer}\ mapped areas from the Gould Belt Survey and other projects. The grey boxed area shows the MIPS coverage; the white boxes show the IRAC coverage (with the sub-regions labelled); and the hatched black box shows the non-GBS survey data in the field from \cite{Gutermuthetal2009}. These regions are schematic to give a general picture of the layout of the coverage and to identify the subregions. The greyscale is the extinction map of \cite{Dobashi2005}. Contours show the $A_V$ levels of 1, 3 and 5 mag.}\label{fig:areacoverage}
\end{figure*}
The areas mapped are shown in Figure \ref{fig:areacoverage}. The MIPS coverage is more contiguous than the IRAC coverage due to the mapping modes of the two instruments. Observations were designed to cover regions with $\rm{A}_V$~$> 3$ within the extinction maps of \cite{Dobashi2005}. All areas were observed twice with IRAC and MIPS cameras with the AORs and dates of the observations compiled in Tables \ref{iracobssum} and \ref{mipsobssum}. The two epochs were compared to remove transient asteroids that are numerous at the low ecliptic latitude of these observations.
The GBS survey data and the LkH$\alpha$~ 101 data from \cite{Gutermuthetal2009} were processed through the c2d pipeline. Details of the data processing are available in \cite{Evans2007}. Briefly, the data processing starts with a check of the images whereupon image corrections are made for obvious problems. Mask files are created to remove problematic pixels. The individual frames are then mosaicked together, with one mosaic created for each epoch and one joint mosaic as well. Sources are detected in each mosaic and then re-extracted from the stack of individual images which include the source position. Finally, the source lists for each wavelength are band-merged, and sources not detected at some wavelengths are ``band-filled'' to find appropriate fluxes or upper limits at the positions which had clear detections at other wavelengths.
As noted by \cite{Harvey2008}, the details of this data reduction are essentially the same as that of the original c2d datasets except that the input to the c2d pipeline are products of later versions of the \textit{Spitzer}\ BCD pipeline. The c2d processing of IRAC data was described by \cite{Harvey2006}, and the MIPS data processing was described by \cite{Young2005} and \cite{Rebull2007}. \cite{Harvey2007} describe additional reduction processes which we have used for the AMC data.
\section{Star-forming Objects in the AMC}
\label{sec:yso}
\begin{figure
\includegraphics[width=3.5 in, clip=True,trim=2.5cm 0cm 3cm 0cm]{AUR_reg1cde_IRAC24_MIPS1.jpg}
\caption[AUR_reg1cde_IRAC24_MIPS1.pdf]{
False colour image with 4.5 \micron\ (blue), 8 \micron\ (green), and 24 \micron\ (red) of the IRAC 1cde fields with YSO positions overlaid.
(Similar figures for other IRAC regions are shown in Figures \ref{fig:rgb2} -- \ref{fig:rgb4}.)}
\label{fig:rgb1}
\end{figure}
\begin{figure*
\includegraphics[width=4.5 in, clip=True,trim=0cm 0.5cm 0cm 0.5cm]{AUR_reg2a_IRAC24_MIPS1.jpg}\centering
\caption[AUR_reg2a_IRAC24_MIPS1.pdf]{
False colour image with 4.5 \micron\ (blue), 8 \micron\ (green), and 24 \micron\ (red) of the IRAC 2a field with YSO positions overlaid.
(Similar figures for other IRAC regions are shown in Figures \ref{fig:rgb1}, \ref{fig:rgb3}, and \ref{fig:rgb4}.)}
\label{fig:rgb2}
\end{figure*}
\begin{figure*
\begin{centering}
\includegraphics[width=3in, clip=True, trim=0cm 0.25cm 0cm 0.25cm]{AUR_reg3a_IRAC24_MIPS1.jpg}
\includegraphics[width=3.5in, clip=True, trim=0cm 2cm 0cm 2cm]{AUR_reg4a_IRAC24_MIPS1.jpg}
\includegraphics[width=2.2in, clip=True, trim=1cm 0cm 1.5cm 0cm]{AUR_reg2b_IRAC24_MIPS1.jpg}
\includegraphics[width=2.2in, clip=True, trim=0.5cm 0cm 1.25cm 0cm]{AUR_reg5_IRAC24_MIPS1.jpg}
\includegraphics[width=1.8in, clip=True, trim=2.25cm 0cm 3cm 0cm]{AUR_north_IRAC24_MIPS1.jpg}
\end{centering}
\caption{
False colour image with 4.5 \micron\ (blue), 8 \micron\ (green), and 24 \micron\ (red) of the IRAC fields 3a, 4a, 2b, 5, and North (left to right, top to bottom) with YSO positions overlaid. These regions contain only a few YSOs each.
(Similar figures for other IRAC regions are shown in Figures \ref{fig:rgb1}, \ref{fig:rgb2}, and \ref{fig:rgb4}.)}
\label{fig:rgb3}
\end{figure*}
\begin{figure*
\begin{centering}
\includegraphics[scale=0.6, clip=True, trim=0.5cm 0cm 1.25cm 0cm]{AUR_reg1a_IRAC24_MIPS1.jpg}
\includegraphics[scale=0.6, clip=True, trim=0cm 0cm 0.5cm 0cm]{AUR_reg1b_IRAC24_MIPS1.jpg}
\includegraphics[scale=0.6, clip=True, trim=1cm 0cm 1.5cm 0cm]{AUR_reg3b_IRAC24_MIPS1.jpg}
\includegraphics[scale=0.6, clip=True, trim=0.25cm 0cm 0.75cm 0cm]{AUR_reg4b_IRAC24_MIPS1.jpg}
\end{centering}
\caption{
False colour image with 4.5 \micron\ (blue), 8 \micron\ (green), and 24 \micron\ (red) of the IRAC fields 1a, 1b, 3b, and 4b (left to right, top to bottom). These regions do not contain YSOs.
(Similar figures for other IRAC regions are shown in Figures \ref{fig:rgb1} -- \ref{fig:rgb3}.)}
\label{fig:rgb4}
\end{figure*}
Figures \ref{fig:rgb1} -- \ref{fig:rgb4} show RGB mosaics for the IRAC covered regions using 4.5 \micron\ (blue), 8.0 \micron\ (green) and 24 \micron\ (red) data with the positions of YSOs overlaid. The diffuse 8.0 \micron\ emission is strongly concentrated at the eastern edge of the cloud, near the well-known object LkH$\alpha$~ 101. The LkH$\alpha$~ 101 data are taken from and have been discussed by \cite{Gutermuthetal2009}.
\subsection{YSO Selection}
\label{sec:ysoselect}
The majority of objects in our fields are not YSOs. The maps are contaminated by background/foreground stars and background galaxies. We have selected our YSO candidates (YSOcs) by various methods, augmenting the list where possible based on data outside the \textit{Spitzer}\ IRAC/MIPS wavelength bands. The fundamental criteria use IRAC, MIPS and 2MASS data \citep{Cutri2003} and are based on identification of infrared excess and brightness limits below which the probability of detection of external galaxies becomes high. The total number of sources is 704,045. In regions observed by both IRAC and MIPS,
the YSOc selection follows that of \cite{Harvey2008}. We refer to these as IRAC+MIPS YSOcs. For objects with upper limits on the MIPS 24 \micron\ flux, we follow the method outlined by \cite{Harvey2006}. We refer to these as IRAC-only YSOcs. In regions observed only by MIPS and not IRAC, we have used the formalism of \cite{Rebull2007}, except we use a tighter 2MASS K$_{\rm{S}}$ cut of [K$_{\rm{S}}$] $< 13.5$. This tighter magnitude cut removed objects that were similar in color and magnitude to others that had already been eliminated. We further remove galaxies from the MIPS-only source list by including photometry from the Wide-field Infrared Survey Explorer (WISE; \citealt{Wright2010}) and applying color cuts suggested by \cite{Koenig2012} (see their Figure 7) and requiring the WISE Band 2 magnitude criterion of [4.6] $<$ 12. We refer to these as MIPS-only YSOcs. Note that the MIPS-only YSOcs were not observed with IRAC, as opposed to the IRAC-only YSOcs which were observed, but not detected, with MIPS.
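As a concrete illustration, the two magnitude criteria quoted above can be
applied as simple array masks (a sketch with hypothetical array names; the
\cite{Koenig2012} color cuts are applied separately and omitted here for
brevity):
\begin{verbatim}
import numpy as np

def mips_only_brightness_mask(Ks, w2):
    """Brightness criteria for MIPS-only YSOcs (see text):
    2MASS Ks < 13.5 mag and WISE band 2 [4.6] < 12 mag."""
    Ks, w2 = np.asarray(Ks), np.asarray(w2)
    return (Ks < 13.5) & (w2 < 12.0)
\end{verbatim}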
\begin{figure*
\includegraphics[width=7 in]{cc_dia.jpg}
\caption{IRAC colors of the sources in the the regions observed with IRAC. Stars are in blue; YSOs are in red; and ``other sources'' (e.g., galaxies) are in green.
The boxed region on the right panel marks the approximate domain of Class II sources identified by \cite{Allen2004}.}\label{fig:cc_dia}
\end{figure*}
\begin{figure*
\includegraphics[width=5.5 in,clip=True, trim=0.1cm 0cm 2.5cm 0.25cm]{multicolor_yso.jpg}\centering
\caption{Color-magnitude and color-color diagrams for the AMC (left), the SWIRE dataset resampled to match our sensitivities and measured extinction (middle), and the full SWIRE dataset (right). The black dash-dot lines show soft boundaries for YSO candidates whereas the red dash-dot lines show hard limits, fainter than which objects are not included as YSO candidates.}\label{fig:colmag}
\end{figure*}
Figure~\ref{fig:cc_dia} shows the IRAC color-magnitude and color-color diagrams relevant for classifying IRAC-only sources.
The different domains occupied by stars, YSOcs, and other (e.g., extragalactic) sources are shown.
For sources in regions observed by both IRAC and MIPS, Figure~\ref{fig:colmag} shows the color and magnitude boundaries used to remove sources that are likely extragalactic. This identification is done by comparing the observed fluxes and colors to results from the SWIRE extragalactic survey \citep{Surace2004}. The sources in the AMC field are compared to a control catalogue from the SWIRE dataset that is resampled to match our sensitivity limits and the extinction level derived for the AMC. (See \citealt{Evans2007} for a complete description.)
Finally, we vetted the YSOcs through individual inspection of the \textit{Spitzer}\ maps (and optical images where available), and determined that 24 of the original 159 IRAC+MIPS YSOcs, 14 of the original 17 IRAC-only YSOcs, and 56 (26 based on WISE and other photometric criteria) of the original 84 MIPS-only YSOcs were unlikely to be YSOs. Henceforth we refer to the list of vetted YSOcs, totalling 166, as YSOs to distinguish them from the raw unvetted list. While we have undergone an extensive process to construct a list of sources that are very likely to be YSOs, we stress that these YSOs have not been confirmed spectroscopically.
Table~\ref{tbl:SourceSummary} lists the final source counts for objects in the observed fields.
The IRAC and MIPS fluxes of the IRAC+MIPS and IRAC-only YSOs are listed in Table~\ref{tbl:irac}.
The 70 \micron\ fluxes have been listed where available. (There are fewer YSOs with fluxes at 70 \micron\ because of the lower sensitivity and, in some cases, the bright background.) The fluxes of MIPS-only vetted YSOs are listed in Table~\ref{tbl:wisemips} with their WISE and MIPS fluxes (and IRAC fluxes where available).
In Tables~\ref{tbl:irac} and~\ref{tbl:wisemips}, we have noted which YSOs are in regions of low column density ($N_{\rm{H2}} < 5 \times 10^{21}$ cm$^{-2}$) according to the column density maps by \cite{Harveyetal2013}, as these are more likely to be contaminants than YSOs in regions of high column density.
We compare our final YSO source list to those found for LkH$\alpha$~ 101 in \cite{Gutermuthetal2009}. All 103 YSOs in \cite{Gutermuthetal2009} are identified as sources in our catalogue, with positions that agree to within a couple of tenths of an arcsecond. Where this work and \cite{Gutermuthetal2009} provide fluxes, they agree in the shorter IRAC bands (IRAC1-3) typically to within 0.05 -- 0.1 mag. At IRAC4 and MIPS1, the agreement is typically within 0.2 mag. These differences are what one might expect for PSF-fitting (used here) versus aperture fluxes (used by \citealt{Gutermuthetal2009}) at wavelengths where there is substantial diffuse emission. (Recall that we have incorporated their dataset into our own.) Therefore no previously identified sources have been missed in this study, and our measurements agree well with those of \cite{Gutermuthetal2009}.
Note, however, that the different classification methods used in this work and by \cite{Gutermuthetal2009} each yield a different total number of YSOs in this region;
we have identified 42 YSOs whereas \cite{Gutermuthetal2009} identified 103. Our total breaks down into 7 YSOs identified here that were not identified by \cite{Gutermuthetal2009} and 35 YSOs shared between the two lists. (The c2d pipeline identified 47 YSOcs that were listed as YSOs by \cite{Gutermuthetal2009}, but 12 were removed during the vetting process.)
The major source of this discrepancy is that we require
photometry with S/N $\geq 3$ in all four IRAC bands (and, where available,
the MIPS 24 \micron\ band) to identify YSO candidates.
Therefore, our results do not contradict those in \cite{Gutermuthetal2009}, rather we believe that the stringent criteria used here have excluded some YSOs. We keep these criteria for consistency with other c2d and \textit{Spitzer}\ GBS observations and analyses, but note the limitations in such a bright region.
\begin{figure*
\includegraphics[width=6.5 in, clip=True,trim=0.5cm 10.3cm 0.5cm 10.5cm]{fig_stardust_sources_gal_irac.jpg}
\caption{Sources with SEDs consistent with a reddened stellar photosphere and a dust component (IR excess) but for which detections with S/N $\geq 3$ across all 4 IRAC bands,
required to be considered a YSOc, did not exist (see text).
The positions of these sources are plotted against the 160 \micron\ greyscale (colorbar units are MJy\,sr$^{-1}$). The striking over-density at the center of LkH$\alpha$~ 101 compared to other IRAC+MIPS regions (marked by black lines) suggests that we are missing veritable YSOs in this region. The robust set of measurements required to identify whether a source is a likely YSO or background galaxy is difficult to attain in this region of very bright emission.}
\label{fig:stardust}
\end{figure*}
The diffuse emission problem is isolated to the immediate vicinity of LkH$\alpha$~ 101. To demonstrate this point, in Figure~\ref{fig:stardust} we have plotted the location of all the sources having an SED consistent with being a reddened stellar photosphere and an associated dust component, which do not have S/N $\geq 3$ at all IRAC bands.
The SEDs of these sources are classified as `star+dust' in our catalogue.
Of the 56 YSOs listed by \cite{Gutermuthetal2009} that were not identified as YSOs in this work, the majority of them (34 of 56) have a `star+dust' SED.
There is a total of 465 `star+dust' sources without robust 4-band IRAC fluxes in the AMC field.
These sources are relatively evenly distributed throughout the field, with the exception of a striking over-density at the center of LkH$\alpha$~ 101 compared to other IRAC regions. Therefore, we believe this over-density is an effect of the difficulty in getting detections with S/N $\geq 3$ across 4 bands in the bright LkH$\alpha$~ 101 region and not that there are significantly fewer YSOs than suggested by \cite{Gutermuthetal2009}.
\cite{Harveyetal2013} identified 60 YSOs in the AMC with \textit{Herschel}/PACS, 49 of which are also identified in this work. Four of these \textit{Spitzer}-identified YSOs are members of pairs of YSOs that are blended in the \textit{Herschel}\ images. \textit{Herschel}\ is more sensitive to the rising- and flat-spectrum sources, i.e., of the other 45 \textit{Spitzer}-identified YSOs that are also detected in the \textit{Herschel}\ maps, most (76\%) are Class I/F objects, and the remaining 24\% are Class IIs.
\subsection{YSO classification}\label{sec:classification}
\begin{figure*
\includegraphics[width=4.5 in, clip=True,trim=1cm 0.5cm 0cm 0cm]{fig_alpha_hist.pdf}
\includegraphics[width=2 in, clip=True,trim=3cm 3.5cm 3.5cm 5cm]{fig_pie_chart5.pdf}
\caption{Left: Distribution of $\alpha$ values (the slope of the SED in the IR) used to determine the `class' of the YSOs in the AMC.
The vertical dotted lines mark the boundaries between the different classes as defined by \cite{Greene1994}. Right: Pie chart for the AMC showing the percentage of sources in each SED class. Green is Class I; blue is Flat; red is Class II; and yellow is Class III (colors are the same as in Figure~\ref{fig:ysodistribution}).}
\label{fig:alpha_hist}
\end{figure*}
The YSOs are classified according to the slope of their SED in the infrared (see \citealt{Evans2009} for a description). The spectral index, $\alpha$, is given by
\begin{equation}
\label{eq:alpha}
\alpha \equiv {{d \log(\lambda S(\lambda))} \over {d \log(\lambda)}}
\end{equation}
and determined by fitting the photometry between 2 \micron\ and 24 \micron. The distribution of $\alpha$ values is shown in Figure~\ref{fig:alpha_hist} along with the relative number of YSOs in each SED class. The majority of YSOs identified in the cloud are Class II objects (55\%). The percentage of sources in each SED class for the AMC is strikingly similar to that of Perseus (23\%, 11\%, 58\%, and 8\% for Class Is, Fs, IIs and IIIs, respectively; \citealt{Evans2009}).
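Operationally, $\alpha$ is the least-squares slope of
$\log(\lambda S(\lambda))$ against $\log\lambda$ over the available
2--24~\micron\ photometry. A minimal sketch, with the \cite{Greene1994} class
boundaries hard-coded (array names are hypothetical):
\begin{verbatim}
import numpy as np

# Class boundaries in alpha (Greene et al. 1994):
# I: alpha >= 0.3;   F: 0.3 > alpha >= -0.3;
# II: -0.3 > alpha >= -1.6;   III: alpha < -1.6
BOUNDS = [(0.3, "I"), (-0.3, "F"), (-1.6, "II")]

def spectral_index(wav_um, S_lambda):
    """Slope of log(lambda*S_lambda) vs log(lambda), 2-24 um."""
    wav_um, S_lambda = np.asarray(wav_um), np.asarray(S_lambda)
    sel = (wav_um >= 2.0) & (wav_um <= 24.0)
    x = np.log10(wav_um[sel])
    y = np.log10(wav_um[sel] * S_lambda[sel])
    return np.polyfit(x, y, 1)[0]

def sed_class(alpha):
    for bound, label in BOUNDS:
        if alpha >= bound:
            return label
    return "III"
\end{verbatim}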
Table~\ref{tbl:ages} lists the breakdown of Class Is, Fs, and IIs for the AMC and other clouds in the GB and c2d surveys to estimate their relative ages. We did not include Class IIIs in this analysis since this population is typically incomplete in \textit{Spitzer}\ surveys (e.g., see discussions in \citealt{Harvey2008,Evans2009,Gutermuthetal2009}) due to their weak IR excess. This simplifies the comparison to other clouds where the completeness limits may vary. We compared the ratio of Class Is and Fs to Class IIs, $N_{\rm{I+F}}$/$N_{\rm{II}}$, for the different cloud populations in other GB and c2d surveys which use the same classification scheme. We also include YSOs in the OMC identified with \textit{Spitzer}\ by \cite{Megeathetal2012}; since they use a different classification scheme, however, we have re-calculated the $\alpha$ values for their sample. The Class I/F lifetime is relatively short compared to the Class II lifetime, and therefore a higher ratio indicates a younger population (see discussion in \citealt{Evans2009}). The high number of Class Is and Fs suggests that the AMC is relatively young compared to other clouds.
Finally, we also compared the number of YSOs per square degree in the AMC (11.5~deg$^2$)\footnote{Here we use the total coverage of IRAC + MIPS1, the five bands used to identify YSOs. This differs from the overlapping MIPS1, MIPS2 and MIPS3 coverage of 10.47~deg$^2$ described in Section~\ref{sec:intro}.} to that in the OMC (14~deg$^2$). The OMC is forming vastly more stars: it has 237 YSOs per deg$^2$, whereas the AMC has only 13 YSOs per deg$^2$, a factor of about 20 fewer.
Even if we only compare the number of YSOs in the OMC with 4 band photometry (as this was the source of the discrepancy between the total number of YSOs around LkH$\alpha$~ 101 identified in this work and by \citealt{Gutermuthetal2009}, who use a similar identification method to \citealt{Megeathetal2012}), this still suggests that there are at least a factor of $\sim$15 more YSOs in the OMC than in the AMC.
Despite the differences in identification methods used for the OMC and for the AMC, it is clear that the OMC is forming far more stars than the AMC. The YSOs in the OMC are also much more strongly concentrated than those in the AMC, despite both clouds having comparable sizes and masses.
We note that \cite{Ladaetal2009} attribute the difference between the amount of star formation to the different amounts of material at high $\rm{A}_V$/column density.
\section{Spectral Energy Distribution Modeling}\label{sec:sed}
Optical data of the YSOs were downloaded from the USNO NOMAD catalogue (Zacharias et al. 2004). SEDs of the YSOs are shown in Figures~\ref{fig:classIF1} and \ref{fig:classIF2} (Class Is and Class Fs), \ref{fig:classII1} -- \ref{fig:classII3} (Class IIs) and \ref{fig:classIII} (Class IIIs). We were able to perform relatively detailed modelling of the stellar and dust components of the Class II and Class III sources (YSOs which are not heavily obscured by dust). The luminosities of sources in the earlier classes are presented in \cite{Dunham2013}. The majority of the Class II and Class III sources are likely in the physical stage where the stellar source and circumstellar disk are no longer enshrouded by a circumstellar envelope. We note that the observed ``class'' does not always correspond to the associated physical stage of the YSO (see discussion in \citealt{Evans2009}) and that some Class IIs may be sources, viewed pole-on, with circumstellar envelopes that are only beginning to dissipate. Conversely, an edge-on disk without an envelope could look like a Class I object.
Our SED modelling methods follow those used by \cite{Harvey2007} (and similar works since, e.g., \citealt{Merin2008}, \citealt{Kirk2009}) to model the SEDs. The stellar spectrum of a K7 star was fit to the SEDs by normalizing it to the de-reddened fluxes in the shortest available IR band of J, K or IRAC1. We use the extinction law of \cite{WeingartnerDraine2001} with $\rm{R}_V$$ = 5.5$ to calculate the de-reddened fluxes. The $\rm{A}_V$~value was estimated by matching the de-reddened fluxes with the stellar spectrum. In eight cases, we used an A0 spectrum when the K7 spectrum was unable to produce a reasonable fit. The use of only two stellar spectra is of course over-simplified; how-
\begin{figure*
\includegraphics[width=6 in,clip=True, trim=1cm 0cm 1cm 1cm]{Aur_SEDs_I_0-fig.pdf}\centering
\caption{SEDs of Class I and Flat sources. The YSO ID, from Tables~\ref{tbl:irac} and~\ref{tbl:wisemips}, is shown in the upper right of each panel along with the Class (I or F) of the YSO.}
\label{fig:classIF1}
\end{figure*}
\begin{figure*
\includegraphics[width=6 in,clip=True, trim=1cm 5.5cm 1cm 1cm]{Aur_SEDs_I_1-fig.pdf}\centering
\caption{\it continued from Figure~\ref{fig:classIF1}.}
\label{fig:classIF2}
\end{figure*}
\begin{figure*
\includegraphics[width=6 in,clip=True, trim=1cm 0cm 1cm 1cm]{Aur_SEDs_II_0-fig.pdf}\centering
\caption{SEDs of Class II sources. The YSO ID, from Tables~\ref{tbl:irac} and~\ref{tbl:wisemips}, is shown in the upper right of each panel. The observed fluxes are plotted with unfilled circles. The de-reddened fluxes are plotted with filled circles. The grey line plots the model stellar spectrum fit to the shorter wavelengths. The black line shows the median SED of T Tauri stars in Taurus (with error bars denoting quartiles of the distribution, \citealt{DAlessioetal1999}) normalized to the B band flux and J band flux of the K7 and A0 stellar spectrum models, respectively.}\label{fig:classII1}
\end{figure*}
\begin{figure*
\includegraphics[width=6 in,clip=True, trim=1cm 0cm 1cm 1cm]{Aur_SEDs_II_1-fig.pdf}\centering
\caption{\it continued from Figure~\ref{fig:classII1}.}
\label{fig:classII2}
\end{figure*}
\begin{figure*
\includegraphics[width=6 in,clip=True, trim=1cm 5.5cm 1cm 1cm]{Aur_SEDs_II_2-fig.pdf}\centering
\caption{\it continued from Figure~\ref{fig:classII1}.}
\label{fig:classII3}
\end{figure*}
\begin{figure*
\includegraphics[width=6 in,clip=True, trim=1cm 8cm 1cm 1cm]{Aur_SEDs_III_0-fig.pdf}\centering
\caption{SEDs of Class III sources. The YSO ID, from Tables~\ref{tbl:irac} and~\ref{tbl:wisemips}, is shown in the upper right of each panel. The observed fluxes are plotted with unfilled circles. The de-reddened fluxes are plotted with filled circles. The grey line plots the model stellar spectrum fit to the shorter wavelengths. The black line shows the median SED of T Tauri stars in Taurus (with error bars denoting quartiles of the distribution, \citealt{DAlessioetal1999}) normalized to the B band flux and J band flux of the K7 and A0 stellar spectrum models, respectively.}\label{fig:classIII}
\end{figure*}
\noindent ever, it produces adequate results for the purposes of this study. More exact spectral typing is difficult with only the photometric data presented here and the uncertainties in $\rm{A}_V$. We nevertheless obtain a broad overview of the disk population with the applied assumptions. Tables~\ref{tbl:diskpropII} and~\ref{tbl:diskpropIII} list the stellar spectrum, the $\rm{A}_V$~value, and stellar luminosity ($L_{\rm{star}}$) used for the stellar models of each source's SED for the Class II and Class III YSOs, respectively.
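A minimal sketch of the de-reddening and template-normalization steps
described above, assuming the per-band $A_\lambda/A_V$ ratios from the
\cite{WeingartnerDraine2001} $\rm{R}_V$$ = 5.5$ law are supplied by the caller
(they are not reproduced here):
\begin{verbatim}
import numpy as np

def deredden(flux, A_lambda_over_AV, A_V):
    """Correct observed fluxes for extinction A_lambda =
    (A_lambda/A_V) * A_V."""
    ratio = np.asarray(A_lambda_over_AV)
    return np.asarray(flux) * 10.0 ** (0.4 * ratio * A_V)

def normalize_template(template_flux, dereddened_flux, iband):
    """Scale a stellar template (e.g. K7) to the de-reddened
    flux in the shortest available IR band (J, Ks, or IRAC1)."""
    scale = dereddened_flux[iband] / template_flux[iband]
    return scale * np.asarray(template_flux)
\end{verbatim}
In practice, $A_V$ and the template normalization are adjusted together until
the de-reddened short-wavelength fluxes match the stellar spectrum.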
\subsection{Second order SED parameters $\alpha_{\rm{excess}}$~and $\lambda_{\rm{turnoff}}$}
\label{sec:transitiondisks}
The first order SED parameter $\alpha$ is used as a primary diagnostic of the excess and circumstellar environment and to separate the YSOs into different ``classes'' (\S~\ref{sec:classification}). Once we have a model of the stellar source, however, we are able to characterize the circumstellar dust better. For each source we determined the values of $\alpha_{\rm{excess}}$\ and $\lambda_{\rm{turnoff}}$\ defined by \cite{Cieza2007} and \cite{Harvey2007} and used in many works since. $\lambda_{\rm{turnoff}}$\ is the longest measured wavelength before an excess greater than 80\% of the stellar model is observed. If no excess $>$ 80\% is observed, then $\lambda_{\rm{turnoff}}$\ is set to 24 \micron. $\alpha_{\rm{excess}}$\ is the slope of the SED at wavelengths longward of $\lambda_{\rm{turnoff}}$. $\alpha_{\rm{excess}}$\ is not calculated for YSOs with $\lambda_{\rm{turnoff}}$~$= 24$ \micron\, as there are not enough data points to determine the slope of the excess. These parameters provide a better characterization of the excess since $\alpha$ can include varying contributions from the stellar and dust components.
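A sketch of how these two parameters can be extracted from an observed SED
and its fitted stellar model (wavelengths assumed sorted and in \micron; array
names are hypothetical):
\begin{verbatim}
import numpy as np

def turnoff_and_excess_slope(wav_um, F_obs, F_star):
    """lambda_turnoff: longest band before the observed flux
    first exceeds the stellar model by >80%; alpha_excess:
    slope of log(lambda*F_obs) vs log(lambda) from there on."""
    wav_um = np.asarray(wav_um)
    excess = np.asarray(F_obs) / np.asarray(F_star) - 1.0
    above = np.nonzero(excess > 0.8)[0]
    if above.size == 0:
        return 24.0, None   # no strong excess out to 24 um
    i0 = above[0]
    lam_to = wav_um[max(i0 - 1, 0)]  # band shortward of excess
    sel = wav_um >= lam_to           # includes the turnoff band
    if sel.sum() < 2:
        return lam_to, None
    x = np.log10(wav_um[sel])
    y = np.log10(wav_um[sel] * np.asarray(F_obs)[sel])
    return lam_to, np.polyfit(x, y, 1)[0]
\end{verbatim}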
\begin{figure}[h]
\includegraphics[width=3.5 in,clip=True, trim=1cm 0cm 0cm 0cm]{fig_l_exc_vs_a_exc.pdf}\centering
\caption{Distribution of $\alpha_{\rm{excess}}$\ and $\lambda_{\rm{turnoff}}$\ for Class II and Class III sources. The Class IIIs with $\lambda_{\rm{turnoff}}$$ = 24$~\micron\ (IDs 15, 19, 80, and 148) are not shown as those sources typically do not have excess measured across a wide enough range to calculate reliable values of $\alpha_{\rm{excess}}$.}
\label{fig:l_exc_vs_a_exc}
\end{figure}
Figure~\ref{fig:l_exc_vs_a_exc} shows the distribution of $\alpha_{\rm{excess}}$~and $\lambda_{\rm{turnoff}}$~for the Class IIs and Class IIIs. Class II and Class III YSOs with long $\lambda_{\rm{turnoff}}$\ and positive $\alpha_{\rm{excess}}$\, (YSOs 2, 24, 58, 64, 74, 102, 108, 113, 115, and 133 in the 8 \micron\ bin and YSOs 145, 150, 162, and 165 in the 12 \micron\ bin of Figure~\ref{fig:l_exc_vs_a_exc}) are good classical transition disk candidates;
the lack of near-IR excess but large mid-IR excess is a sign of a deficit of material close to the star within a substantial disk. \cite{Cieza2012} have recently done a study on the transition disks in the AMC, Perseus and Taurus and identify six transition disk candidates in the AMC, three of which are also in our list of candidates (YSOs 58, 102 and 115). Of their remaining candidates, two were debris-like disks (YSOs 11 and 54) and the other was not identified in our YSO list.
The larger distribution of $\alpha_{\rm{excess}}$\ for sources with longer $\lambda_{\rm{turnoff}}$\ is consistent with distributions found for other disk populations (e.g., \citealt{Cieza2007,Alcala2008,Harvey2008,Merin2008}).
\subsection{Disk luminosities}\label{sec:diskprop}
Figure~\ref{fig:Ldisk} shows the ratio of the disk luminosities to stellar luminosities for the Class II and Class III sources. The disk luminosity is the integral of the observed excesses. (The excess at a given wavelength is calculated by subtracting the flux of the stellar model at that wavelength from the observed flux). The distribution of $L_{\rm{disk}}$$/$$L_{\rm{star}}$\ for Class II and III sources in the AMC is similar to that found for other \textit{c2d} and GB surveys with \textit{Spitzer}\ (Serpens: \citealt{Harvey2007}, IC 5146: \citealt{Harvey2008}, Chameleon II: \citealt{Alcala2008}, Lupus: \citealt{Merin2008}, and the Cepheus Flare: \citealt{Kirk2009}). We find the Class III sources in the regions typically occupied by sources with passive disks and debris disks (e.g., 0.02 $<$ $L_{\rm{disk}}$$/$$L_{\rm{star}}$ $<$ 0.08 for passive disks; \citealt{KenyonHartmann1987}). The low disk luminosity may be attributable to the lack of mid-IR excess at IRAC wavelengths in these sources' SEDs.
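A minimal sketch of this measurement (assuming, for illustration, a common wavelength grid for the observed and model fluxes; in practice $L_{\rm{star}}$\ comes from the full model spectrum rather than the sampled bands alone):
\begin{verbatim}
import numpy as np

def disk_to_star_ratio(wav, f_obs, f_star):
    # trapezoidal integral of the positive excess, normalized by
    # the integral of the stellar model on the same grid (F_lambda)
    excess = np.clip(f_obs - f_star, 0.0, None)
    return np.trapz(excess, wav) / np.trapz(f_star, wav)
\end{verbatim}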
\begin{figure}[h]
\includegraphics[width=3.5 in,clip=True, trim=1cm 0cm 0cm 0cm]{fig_Ldisk_hist.pdf}
\caption{The ratio of the disk luminosity to the stellar luminosity for Class II and Class III sources. Also shown are the typical boundaries found for accreting disks, passive disks and debris disks \citep{KenyonHartmann1987}.}
\label{fig:Ldisk}
\end{figure}
\subsection{Questionable Class III sources}
\label{sec:classIIIs}
It is possible that some of the Class III sources identified here are field giants.
\cite{Oliveira2009} followed up on 150 \textit{Spitzer}-identified YSOs in Serpens and obtained 78 optical spectra with sufficient signal-to-noise. They showed that there were at least 20 giant contaminants in this list, 18 of which were identified as Class III sources. The more scattered spatial distribution of Class IIIs throughout the AMC is consistent with the idea that they are contaminants. Additionally, five of our Class III objects (YSOs 11, 141, 144, 148, 164) have very high luminosities ($> 100$ L$_\odot$). Four of these objects (YSOs 141, 144, 148, 164), as well as YSO 149 which is not of particularly high luminosity, are quite removed from the areas of high extinction towards the AMC (see Figure~\ref{fig:ysodistribution} in the following section) and lie in regions of low column density ($N_{\rm{H2}}< 5 \times 10^{21}$ cm$^{-2}$, see \S~\ref{sec:ysoselect}).
\section{Spatial Distribution of Star Formation}
\label{sec:spatial}
\begin{figure*}
\includegraphics[width=6.5 in, clip=True, trim=1cm 9.9cm 0.75cm 4.5cm]{fig_classes.jpg}\centering
\caption{Left: The positions of YSOs and IRAC fields in Auriga. The greyscale is the MIPS 160\,$\mu$m map (colorbar units are MJy\,sr$^{-1}$) and the YSOs are marked according to their classification: green circles denote Class~Is; blue $+$s denote Class~Fs; red $\times$s denote Class~IIs; yellow triangles denote Class~IIIs. The magenta diamonds mark the Class III sources of high luminosities that are likely contaminants (see \S~\ref{sec:classIIIs}). IRAC fields are outlined in black and labelled. (Note that some YSOs fall beyond the 160 \micron\ coverage because it is slightly offset from the 24 \micron\ coverage that is used for YSO identification.)
Right: Close-up of the region around LkH$\alpha$\,101. The greyscale is the log (base 10) of the flux (colorbar units are $\log$(MJy\,sr$^{-1}$)). The centre of the field is entirely saturated. As is evident, there are some YSOs outside the IRAC coverage area. This list of MIPS-only YSOs has been trimmed by using WISE data to remove more objects that are likely background galaxies. }
\label{fig:ysodistribution}
\end{figure*}
The spatial distribution of IRAC/MIPS-identified YSOs by class is shown in Figure \ref{fig:ysodistribution}. A close-up of the region surrounding the LkH$\alpha$~ 101 cluster and the cluster extension along the filament is also included so the relatively densely clustered YSOs can be better distinguished. Figure \ref{fig:ysodistribution} shows that the bulk of star formation in the AMC has been concentrated
in this southern region of the cloud;
the majority of the identified YSOs (79\%) are in this area. (Note that the number of YSOs in that region is a lower limit as it is likely that a significant number of YSOs in the LkH$\alpha$~ 101 region are not identified, see discussion at the end of $\S$ \ref{sec:ysoselect}.)
\subsection{Identification of YSO groups}
We performed a clustering analysis on the identified Class I, F, and II sources in the AMC to identify the densest regions of YSOs and the largest groups. The details of the analysis are described in \cite{Masiunas2012}. We omit the Class III sources from the analysis to avoid the risk of including field giants (see for example \S~\ref{sec:classIIIs}).
We performed a minimum spanning tree (MST) analysis to identify groups of YSOs within the region.
This analysis connects YSOs by the minimum distance to the next YSO to form a ``branch'' \citep{CartwrightWhitworth2004}.
Figure~\ref{fig:brkpt} shows the cumulative distribution function (CDF) of the branch lengths between YSOs. This is used to determine the MST critical branch length, $\rm{L}_{\rm{crit}}$, that defines the transition from the branch lengths in the denser regions to the branch lengths in the sparser regions \citep{Gutermuthetal2009}. Therefore $\rm{L}_{\rm{crit}}$\ is based on relative overdensities of objects. We measure an $\rm{L}_{\rm{crit}}$\ of 210\arcsec\ for the AMC.
Group memberships are defined by members which are all connected by branches of lengths less than $\rm{L}_{\rm{crit}}$. The boundary of a group is defined where the branch length between adjacent sources exceeds $\rm{L}_{\rm{crit}}$.
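The two steps can be sketched as follows (our illustration, not the \cite{Masiunas2012} code; the fraction of points used in each linear fit is an arbitrary choice). Groups then follow by cutting all branches longer than $\rm{L}_{\rm{crit}}$\ and taking connected components.
\begin{verbatim}
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_branches(x, y):
    # sorted branch lengths of the Euclidean MST of YSO positions
    d = squareform(pdist(np.column_stack([x, y])))
    return np.sort(minimum_spanning_tree(d).data)

def l_crit(branches, frac=0.25):
    # intersect straight-line fits to the two ends of the CDF
    n = len(branches)
    cdf = np.arange(1, n + 1) / n
    k = max(3, int(frac * n))
    a1, b1 = np.polyfit(branches[:k], cdf[:k], 1)    # steep (dense) end
    a2, b2 = np.polyfit(branches[-k:], cdf[-k:], 1)  # shallow (sparse) end
    return (b2 - b1) / (a1 - a2)
\end{verbatim}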
Figure~\ref{grps} shows that we have extracted four groups with 10 or more members (marked by colored convex hulls) and three groups with 5--9 members (marked with magenta circles). Table~\ref{tbl:grps} lists the properties of these groups.
The position of the group is given by its geometric center. The group's effective radius, $R_{\rm{eff}}$, defines the radius of a circle with the same area as the convex hull containing the group members.
The maximum radial distance to a member from the median position gives $R_{\rm{circ}}$, therefore a circle with this radius would contain all group members. Finally, the elongation of the group is determined by comparing $R_{\rm{circ}}$\ to $R_{\rm{eff}}$\ and represented by the aspect ratio, $R_{\rm{circ}}$$^2/$$R_{\rm{eff}}$$^2$.
The MST analysis on the full cloud recovers the clustering
surrounding LkH$\alpha$~ 101. The cluster subtends a larger area than that measured in \cite{Gutermuthetal2009} confirming their claim that there was star formation extended beyond their field of view. The star formation is mostly extended along the North-South direction of the cluster and therefore we measure a more elongated group than measured by \cite{Gutermuthetal2009}.
This is still the largest group in the AMC in terms of area and the number of members.
\begin{figure}[h]
\includegraphics[width=3.0 in,clip=True,trim=0.8cm 0.25cm 0.2cm 0.65cm]{fig_auriga_mst_brkpt.pdf}
\caption{Cumulative distribution function (CDF) of MST branch lengths (asterisks). The solid lines represent linear fits to each end of the CDF. The dot-dash line marks $\rm{L}_{\rm{crit}}$\ where the solid lines meet. The solid lines follow the CDF in the dense regions (steep line) and the sparser regions (shallow line). }
\label{fig:brkpt}
\end{figure}
\begin{figure*}[h]
\includegraphics[width=6.5 in,clip=True,trim=0.3cm 9.4cm 0.3cm 2.8cm]{fig_grps-ann.pdf}\centering
\caption{We extract four groups with 10 or more members (colored convex hulls) and three groups with 5--9 members (magenta circles) using an MST analysis. The right hand panel shows the enlarged southern region of the cloud where most of the groups are located. The red numbers adjacent to the groups correspond to the group number listed in Table~\ref{tbl:grps}.}
\label{grps}
\end{figure*}
As discussed in $\S$~\ref{sec:ysoselect}, our analysis is likely to have underestimated the number of YSOs in the region around LkH$\alpha$~ 101. To check the consistency of our analysis with \cite{Gutermuthetal2009}, we ran the MST analysis on both YSO lists within the \cite{Gutermuthetal2009} area of 4-channel IRAC coverage. This leaves us with 41 of the YSOs presented here and 102 of those presented in \cite{Gutermuthetal2009}. (There is one bright YSO in \citealt{Gutermuthetal2009} that lies just outside their 4-channel IRAC coverage to the south. It was only observed at IRAC1 and IRAC3.)
We get an $\rm{L}_{\rm{crit}}$~of 120\arcsec~for our cropped list of YSOs and an $\rm{L}_{\rm{crit}}$~of 73\arcsec~for the cropped \cite{Gutermuthetal2009} YSO list. (Note that running the analysis on the cropped field, which is dense compared to the rest of the cloud, yields a smaller $\rm{L}_{\rm{crit}}$\ than when the analysis is run on the whole cloud. This is expected as $\rm{L}_{\rm{crit}}$\ is based on overdensities, as discussed above.) The ratio of the $\rm{L}_{\rm{crit}}$~values for the two YSO lists ($73/120 = 0.61$) agrees with our expectation that it should scale inversely with the square root of the density, and hence with the cropped YSO counts ($\sqrt{41/102} = 0.63$). Therefore we report that the derived properties are consistent with those measured by \cite{Gutermuthetal2009}. (Differences are expected as shown by \cite{Gutermuthetal2009} with their comparisons among several shared regions.) However, the missing YSOs at the centre of the cluster complicate any further comparison with their results.
\subsection{Comparison of grouped and non-grouped YSOs}
We find that 76\% (113 of 149) of the Class I, Class F, and Class II sources are in groups. Rather than compare the class fractions, given by $N_{\rm{I+F}}$/$N_{\rm{II}}$ in Table~\ref{tbl:grps}, we directly compare the underlying distribution of $\alpha$ to determine whether the distribution of YSOs within groups is consistent with the whole cloud. We get the same result for each group:
a KS test on the $\alpha$ distribution of the group and the $\alpha$ distribution of the whole cloud shows that we cannot reject the hypothesis that they are drawn from the same sample (p-values $>$ 0.13). (We also did a KS test for each group with the extended population and found the same result.)
Similarly, we compared the properties of disks within groups and those not in groups by performing a KS test on the distributions of disk luminosities (p-value of 0.08), $\alpha_{\rm{excess}}$\ (p-value of 0.9), and $\lambda_{\rm{turnoff}}$\ (p-value of 0.9) and find no evidence that the two populations are drawn from different parent populations.
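These comparisons amount to two-sample KS tests of the kind sketched below (the arrays are placeholders standing in for the measured distributions):
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

alpha_group = np.array([-1.2, -0.8, -1.0, -0.5, -1.4])  # placeholder
alpha_cloud = np.random.normal(-1.0, 0.4, size=149)     # placeholder

stat, p = ks_2samp(alpha_group, alpha_cloud)
if p > 0.05:
    print("cannot reject a common parent distribution (p=%.2f)" % p)
\end{verbatim}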
\section{Summary}
\label{sec:summary}
We observed the AMC with IRAC and MIPS aboard the \textit{Spitzer Space Telescope} and identify 138 YSOs in the cloud. As our IRAC coverage is segmented, we complemented our more contiguous MIPS coverage with WISE data to further eliminate galaxies from the sample, leaving 28 MIPS-only YSOs and bringing the total number of YSOs in the AMC to 166. We classified the YSOs based on the spectral slope of their SEDs between 2 \micron\ and 24 \micron\ and find 37 Class I objects, 21 Class F objects (flat spectrum sources), 91 Class II objects, and 17 Class III objects. The high fraction of Class Is and Class Fs suggests that the AMC is relatively unevolved compared to other star-forming clouds. Despite the similarity in cloud properties between the AMC and the OMC, there is a distinct difference in the star formation properties. Star formation in the AMC is likewise concentrated along its filament; however, the AMC is forming a factor of about 20 fewer stars than the OMC. \cite{Ladaetal2009} find that there is much less material at high density in the AMC than in the OMC and attribute the difference in star formation to this. Further studies of the star formation and YSO population in the AMC are needed to highlight the differences between the two clouds given their similar age.
We modelled the SEDs of the Class II and Class III sources and their excesses by first fitting a K7 stellar spectrum to the optical and near-IR fluxes. The spectrum is normalized to the 2MASS flux (or the IRAC1 flux when 2MASS is unavailable) and we use an $\rm{A}_V$~value to match the spectrum of the stellar model to the de-reddened observed optical fluxes. An A0 stellar spectrum is used in the eight cases where a K7 spectrum is unable to provide a reasonable fit. Fitting a stellar spectrum allows us to measure the disk luminosities and characterize the excess. The excesses of the Class II and Class III sources were further parameterized by $\lambda_{\rm{turnoff}}$, the longest wavelength before an excess greater than 80\% is measured, and $\alpha_{\rm{excess}}$, the slope of the SED at wavelengths longward of $\lambda_{\rm{turnoff}}$. $\lambda_{\rm{turnoff}}$\ is a useful tracer for the proximity of dust to the star and consequently we identify fourteen classical transition disk candidates.
The bulk of the star formation in the AMC is in the southern region of the cloud.
We performed a clustering analysis to quantify the densest areas of star formation and to identify groups within the cloud.
We find four groups with 10 or more members all in the region around LkH$\alpha$~ 101 and its adjoining filament. We find three smaller groups with 5 -- 9 members scattered throughout the cloud. The largest group is that around LkH$\alpha$~ 101 and contains 49 members. We note that there are likely even more YSOs in this group since our YSO identification criteria of S/N $\geq 3$ in IRAC1-4 and MIPS1 are difficult to attain in this bright region.
\acknowledgements
We thank the referee whose comments and suggestions greatly helped improve the paper and its clarity.
H.B.F gratefully acknowledges research support from an NSERC Discovery Grant.
This research made use of APLpy, an open-source plotting package for Python hosted at http://aplpy.github.com.
This research made use of Montage, funded by the National Aeronautics and Space Administration's Earth Science Technology Office, Computation Technologies Project, under Cooperative Agreement Number NCC5-626 between NASA and the California Institute of Technology. Montage is maintained by the NASA/IPAC Infrared Science Archive.
\bibliographystyle{apj}
\renewcommand{\baselinestretch}{1}
\topmargin -0.6cm
\oddsidemargin=-0.75cm
\hsize=16.8cm
\setlength{\textheight}{247mm}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\def\mbox{e}{\mbox{e}}
\def{\rm sgn}{{\rm sgn}}
\def\raise0.3ex\hbox{$\;>$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}}{\;
\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;}
\def\raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}}{\;
\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;}
\def\rm MeV{\rm MeV}
\def\rm eV{\rm eV}
\thispagestyle{empty}
\begin{titlepage}
\begin{center}
\rightline{hep-ph/9610526}
\hfill FTUV/96-74\\
\hfill IFIC/96-83\\
\vskip 0.3cm
\large
{\bf The MSW conversion of solar neutrinos
and random matter density perturbations\footnote{Invited talk
presented by A. Rossi at {\it 17th Int. Conf. on
Neutrino Physics and Astrophysics },
Helsinki, Finland, 13-20 June 1996. To appear in the Proceedings.}}
\end{center}
\normalsize
\begin{center}
{\bf H. Nunokawa, A. Rossi, and J. W. F. Valle}
\end{center}
\begin{center}
{\it Instituto de F\'{\i}sica Corpuscular - C.S.I.C.\\
Departament de F\'{\i}sica Te\`orica, Universitat de Val\`encia\\}
{\it 46100 Burjassot, Val\`encia, SPAIN }\\
\end{center}
\vglue 0.3cm
\begin{center}
{\bf V. B. Semikoz}
\end{center}
\begin{center}
{\it Institute of Terrestrial Magnetism, the Ionosphere and Radio Wave
Propagation of the Russian Academy of Sciences\\}
{\it Izmiran, Troitsk, Moscow region, 142092 RUSSIA}
\end{center}
\vglue 2cm
\begin{center}
{\bf Abstract}
\end{center}
We present a generalization of the resonant
neutrino conversion in matter,
including a random component in the matter density profile.
The study is focused on the effect of such matter
perturbations upon both large and
small mixing angle MSW solutions to the solar neutrino problem.
This is carried out both for the active-active $\nu_e \rightarrow \nu_{\mu,\tau}$
as well as active-sterile $\nu_e \rightarrow \nu_s$ conversion channels.
We find that the small mixing MSW solution is much more stable
(especially in $\delta m^2$) than the large mixing solution.
Future solar neutrino
experiments, such as Borexino, could probe solar matter density noise
at the few percent level.
\vfill
\end{titlepage}
\renewcommand{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\newpage
{\bf 1.} The comparison among the
present experimental results on the observation of the solar neutrinos
strongly points to a deficit of neutrino flux (dubbed
the Solar Neutrino
Problem (SNP)).
The most recent averaged data \cite{cl} of
the chlorine, gallium
\footnote{For the gallium result we have taken the weighted average of GALLEX
$R^{exp}_{Ga}= (77\pm8\pm5)$SNU and SAGE
$R^{exp}_{Ga}= (69\pm 11\pm 6)$SNU data.} and Kamiokande
experiments are:
\begin{equation}
\label{data}
R_{Cl}^{exp}= (2.55 \pm 0.25) \mbox{SNU}, \,\,\,\,
R_{Ga}^{exp}= (74 \pm 8) \mbox{SNU}, \,\,\,\,
R_{Ka}^{exp}= (0.44 \pm 0.06) R_{Ka}^{BP95}
\end{equation}
where $R_{Ka}^{BP95}$ is the prediction according to the
most recent Standard Solar Model (SSM) by
Bahcall-Pinsonneault (BP95) \cite{SSM}.
It is now understood that the SNP cannot be explained
through astrophysical/nuclear solutions \cite{CF,BFL}.
From the particle physics point of view, however,
the resonant neutrino conversion
(the Mikheyev-Smirnov-Wolfenstein (MSW) effect) \cite{MSW} seems
to explain successfully the present experimental situation
\cite{FIT,smirnov,Cala,cl}.
This talk deals with the stability of the MSW solution with
respect to the possible presence of random perturbations in the solar
matter density \cite{NRSV}.
We recall that
in Ref.\cite{KS} the effect of periodic matter density perturbations
added to a mean matter density $\rho$
upon resonant neutrino conversion was investigated.
There are also a number of papers which address similar effects by different
approaches \cite{AbadaPetcov,BalantekinLoreti}.
Here we consider the effect of random
matter density perturbations $\delta \rho(r)$, characterised by an
{\sl arbitrary} wave number $k$,
\begin{equation}
\delta \rho (r) = \int dk \delta \rho(k)\sin kr \:.
\end{equation}
Moreover, as in Ref.\cite{BalantekinLoreti},
we assume that the perturbation $\delta \rho$
has Gaussian distribution with the spatial correlation function
$\langle \xi^2 \rangle$ defined as
\begin{equation}\label{correlator}
\langle \delta \rho(r_1)\delta \rho(r_2)\rangle = 2\rho^2\langle
\xi^2\rangle L_0 \delta (r_1 - r_2)\, , \,\,\,\,\,\,
\langle
\xi^2\rangle \equiv
\frac{\langle \delta \rho^2\rangle}{\rho^2}\, .
\end{equation}
The correlation length $L_0$ obeys the following relation:
\begin{equation}\label{size}
l_{\rm free} \ll L_0 \ll \lambda_m
\end{equation}
where
$l_{\rm free}\sim 10$ cm
is the mean free path
of the electrons
in the solar medium
and $\lambda_m$ is the neutrino matter wavelength.
For the sake of discussion, in the following
we fix $L_0$ as follows:
\begin{equation}\label{L0}
L_0 = 0.1 \times \lambda_m \,.
\end{equation}
The SSM in itself cannot account for the existence of
density perturbations, since it is based on hydrostatic
evolution equations. On the other hand, the present
helioseismology observations cannot
exclude the existence of a few percent
level of matter density fluctuations.
Therefore, in what follows we assume, on phenomenological grounds,
such levels for $\xi$, up to 8\%.
Before generalizing the MSW scenario
to account for the presence of such matter density fluctuations
in the interior of the sun, we first
give a quick reminder of the main features of the MSW effect.
\vspace{0.4cm}
{\bf 2.}
The resonant conversion of neutrinos in a matter background is due
to the coherent neutrino scattering off matter constituents \cite{MSW}.
This determines an effective matter potential $V$ for neutrinos.
In the rest frame of the unpolarized matter, the potential
is given, in the framework of the Standard Model, by
\begin{equation}\label{poten}
V = \frac{\sqrt{2}G_F}{m_p} \rho Y
\end{equation}
where $G_F$ is the Fermi constant and
$Y$ is a number which depends on the \hbox{neutrino } type and on the chemical
content of the medium. More precisely, $Y= Y_e - \frac{1}{2}Y_n$ for
the $\nu_e$ state, $Y= -\frac{1}{2}Y_n$ for \hbox{$\nu_\mu$ } and \hbox{$\nu_\tau$ } and $Y=0$
for the sterile $\nu_s$ state, where $Y_{e,n}$ denotes the electron and
neutron number per nucleon.
For the matter density $\rho$,
one usually considers the {\it smooth} distribution,
as given by the SSM \cite{SSM,turck,CDF}.
For given mass difference
$\delta m^2$ and neutrino mixing $\theta$ in vacuum,
the neutrinos $\nu_e$'s,
created in the inner region of the sun, where the
$\rho$ distribution is maximal,
can be completely converted into $\nu_y$ ($y= \mu$, $\tau$ or $s$),
while travelling to the solar surface. \\
This requires two conditions \cite{MSW}:
1) - the resonance condition. Neutrinos of given energy
$E$ experience the resonance if the energy splitting in the vacuum
$\delta m^2 \cos 2 \theta / 2E$
is compensated by the effective matter potential
difference $\Delta V_{ey} = V_e - V_y$. It is helpful to define the
following dynamical factor $A_{ey}$
\begin{equation}
\label{afactor}
A_{ey}(r) = \frac{1}{2} [\Delta V_{ey} (r)
- \frac{\delta m^2}{2E} \cos2 \theta]
\end{equation}
which vanishes at the resonance, $A_{ey}=0$.
This condition determines
the value \\
$\rho_{res} = m_p\cos 2 \theta\, \delta m^2 /(2 \sqrt{2} G_F (Y_e-Y_y) E)$ which, in turn, implies a resonance layer $\Delta r$.
2) - The adiabatic condition.
At the resonance layer, the neutrino conversion $\nu_e\to \nu_y$
is efficient if the propagation is adiabatic.
This can be nicely expressed
by requiring the neutrino wavelength $\lambda_m$ to
be smaller than $\Delta r$ \cite{MSW},
\begin{eqnarray}
\label{alfamsw}
\alpha_r & = & \Delta r/(\lambda_m)_{res} \equiv
\frac{\delta m^2 \sin^2 2 \theta R_0}{4\pi E \cos 2\theta} > 1\, ,
\,\,\,
R_0 \approx 0.1 R_{\odot} \, ,\\
\lambda_m & = & \frac{\pi}{\sqrt{ A_{ey}^2 +
(\delta m^2)^2\sin^2 2\theta/(16 E^2)}}\,, \,\,\,\,\,\, \Delta r = 2 \rho_{res}
\tan 2\theta
|\mbox{d}\rho/\mbox{d}r|^{-1}\,. \nonumber
\end{eqnarray}
\vspace{0.4cm}
{\bf 3.}
Now we re-formulate the neutrino evolution equation
accounting for a fluctuation
term $\delta \rho$ superimposed on the main profile $\rho$.
The perturbation level $\xi =\frac{\delta \rho}{\rho}$
induces a corresponding
random component $\Delta V_{ey} \xi$ for the matter potential.
The evolution for the $\nu_e-\nu_y$
system is governed by
\begin{equation}
\label{ev1}
i \frac{d}{dt}\matr{\nu_e} {\nu_y} =
\mat{H_{e}} {H_{e y}}
{H_{ey}} {H_{y}}\matr{\nu_e} {\nu_y},
\end{equation}
where the entries of the Hamiltonian matrix are given by
\begin{equation}
\label{matdef}
H_e= 2 [A_{ey}(t) + \tilde{A}_{ey}(t)], ~~~~ H_y=0,
~~~~ H_{ey}=\frac{\delta m^2}{4E} \sin2 \theta,
~~~~ \tilde{A}_{ey}(t) = \frac{1}{2} \Delta V_{ey}(t) \xi\, .
\end{equation}
Here the matter potentials read as:
\begin{equation}
\label{vex}
\Delta V_{e\mu (\tau)}(t) = \frac{\sqrt{2} G_F}{m_p} \rho(t) (1-Y_n)\, ,
\,\,\,\,\,\,\,
\Delta V_{es}(t) = \frac{\sqrt{2} G_F}{m_p} \rho(t) (1-\frac{3}{2}Y_n)
\end{equation}
for the
$\nu_e\rightarrow \nu_{\mu,\tau}$ and
$\nu_e\rightarrow\nu_{s}$ conversions, respectively.
(The neutral matter relation $Y_e =1-Y_n$ has been used.)
The system (\ref{ev1}) has to be rewritten averaging over the
random density distribution, taking into account that
for the random component we have:
\begin{equation}
\label{den_noise}
\langle \tilde{A}_{ey}^{2n+1} \rangle \! = \!0, ~~~~
\langle \tilde{A}_{ey}(t)\tilde{A}_{ey}(t_{1}) \rangle \!= \!
\kappa\delta (t - t_{1}) , ~~~~
\kappa(t)\!= \! \langle \tilde{A}_{ey}^2(t)\rangle L_0 \!=\! \frac{1}{2}
\Delta V^2_{ey}(t)
\langle \xi^2\rangle L_0 .
\end{equation}
We have obtained (see \cite{NRSV} for more details)
the following system:
\begin{eqnarray}
\label{sys1}
\dot{\cal{P}}(t) &= &2 H_{ey} \cal{I}(t) \nonumber \\
\dot{\cal{R}}(t) & = & -2A_{ey}(t) \cal{I}(t) -2 \kappa(t)\cal{R}(t)
\nonumber \\
\dot{\cal{I}}(t) & = & 2A_{ey}(t) \cal{R}(t) -2 \kappa(t)\cal{I}(t)
- H_{ey} (2 \cal{P}(t)-1) \, ,
\end{eqnarray}
where $ \cal {P}(t)= \langle | \nu_e |^2 \rangle$,
$\cal {R}(t)=\langle \mbox{Re}(\nu_y \nu_e^*) \rangle $ and
$\cal {I}(t)=\langle \mbox{Im}(\nu_y \nu_e^*) \rangle$. $\,\,$
Now the ``dynamics'' is governed by one more quantity, i.e. the
noise parameter $\kappa$, besides the factor $A_{ey}$. The
quantity $\kappa$ can be interpreted as the energy quantum associated
with the matter density perturbation.
However, let us note that the MSW resonance condition,
i.e. $A_{ey}(t) =0$, remains unchanged
due to the random nature of the matter perturbations.
The comparison between the noise parameter $\kappa$ in
\Eq{den_noise} and $A_{ey}(t)$ shows that
$\kappa(t) < A_{ey}(t)$, for $\xi \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}}$
few \%, except at the resonance region.
As a result, the density
perturbation can have its maximal effect just at the resonance.
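For illustration, the averaged system (\ref{sys1}) can be integrated numerically as sketched below. All numbers are placeholders rather than the exact BP95 inputs used for Fig. 1: we take $\delta m^2=10^{-5}$ eV$^2$, $E=1$ MeV, a small vacuum mixing and $\xi=4\%$, with the standard exponential approximation to the solar density profile (the integration is slow because the matter phase oscillates rapidly).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

R_SUN = 6.96e10                    # cm
DM2_4E = 1.27e-7                   # delta m^2/(4E) in cm^-1 for
                                   # delta m^2 = 1e-5 eV^2, E = 1 MeV
S2, C2 = np.sin(0.1), np.cos(0.1)  # sin, cos of 2*theta (theta = 0.05)
XI2 = 0.04 ** 2                    # <xi^2> for a 4% noise level

def dV(r):
    # Delta V_ey(r): exponential approximation to the solar profile,
    # normalized to ~ 3.9e-7 cm^-1 at the centre (placeholder)
    return 3.9e-7 * np.exp(-10.54 * r / R_SUN)

def rhs(r, y):
    P, Re, Im = y
    A = 0.5 * (dV(r) - 2.0 * DM2_4E * C2)     # A_ey(r)
    lam = np.pi / np.hypot(A, DM2_4E * S2)    # matter wavelength
    kap = 0.5 * dV(r)**2 * XI2 * 0.1 * lam    # kappa, with L0 = 0.1 lam
    H = DM2_4E * S2                           # H_ey
    return [2*H*Im, -2*A*Im - 2*kap*Re, 2*A*Re - 2*kap*Im - H*(2*P - 1)]

sol = solve_ivp(rhs, [0.0, R_SUN], [1.0, 0.0, 0.0], rtol=1e-8, atol=1e-12)
print("averaged survival probability:", sol.y[0, -1])
\end{verbatim}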
Furthermore, one can find the analogue of condition 2)
(see Eq. (\ref{alfamsw})) for the noise to give rise to sizeable
effects. Since the noise term acts as a damping term in the system
(\ref{sys1}), the corresponding noise length scale
$1/\kappa$ must be much smaller than the thickness of the resonance
layer $\Delta r$. In other words, the
following {\it adiabaticity} condition
\begin{equation}
\label{alfa}
\tilde{\alpha}_r= \Delta r\, \kappa_{res} > 1 \, ,\,\,\,\,\,
\tilde{\alpha}_r \approx \alpha_r\, \frac{\xi^2}{\tan^2 2\theta} \, .
\end{equation}
is also necessary.
For the range of parameters we are considering, $\xi \sim 10^{-2}$
and $\tan^2 2\theta\geq 10^{-3}-10^{-2}$, and due to the r.h.s. of
(\ref{size}), one finds $\tilde{\alpha}_r \leq \alpha_r$.
This relation can be
rewritten as $\kappa_{res} < \delta H_{res}$, where $\delta H_{res}$
is the level splitting between the energies of the neutrino mass
eigenstates at resonance. This shows that the noise energy quantum
is unable to ``excite'' the system, causing the
level crossing (even at the resonance) \cite{KS}.
In other words, it never
violates the MSW adiabaticity condition.
From Eq. (\ref{alfa}) it follows also that, in the adiabatic regime
$\alpha_r >1$, the smaller
the mixing angle value the larger
the effect of the noise. Finally, as already noted above,
the MSW non-adiabaticity $\alpha_r <1$
is always transmitted to $\tilde{\alpha}_r < 1$. As a result,
under our assumptions the fluctuations are expected to be
ineffective in the non-adiabatic MSW regime.
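As an order-of-magnitude illustration (rounded numbers, not a fit): for the small mixing solution with $\delta m^2\simeq 10^{-5}$ eV$^2$, $\sin^2 2\theta\simeq 7\cdot 10^{-3}$ and $E=1$ MeV, Eq. (\ref{alfamsw}) gives $\alpha_r\approx 2$, i.e. adiabatic conversion, while for $\xi=4\%$ Eq. (\ref{alfa}) gives $\tilde{\alpha}_r\approx \alpha_r\,\xi^2/\tan^2 2\theta\approx 0.4$, not far from the borderline $\tilde{\alpha}_r\sim 1$. This is why a few percent noise level already produces a visible effect in this regime (cf. Fig. 1a).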
\vspace{0.4cm}
{\bf 4.}
All this preliminary discussion is illustrated in Fig. 1.
For definiteness we take the BP95 SSM \cite{SSM}
as the reference model.
We plot $\cal{P}$ as a function
of $E/\delta m^2$ for different values of the noise parameter $\xi$.
For comparison, the standard MSW case $\xi=0$ is also shown
(lower solid curve).
One can see that in both cases of small and large mixing
(Fig. 1a and Fig. 1b, respectively), the effect of the matter
density noise is to raise the bottom of the pit (see
dotted and dashed curves).
In other words, the noise weakens
the MSW suppression in the adiabatic-resonant
regime, whereas its effect is negligible in
the non-adiabatic region.
The relative increase
of the survival probability $\cal{P}$ is larger for the case
of small mixing (Fig. 1a) as already guessed on the basis of
Eq. (\ref{alfa}).
We have also drawn pictorially (solid vertical line) the
position, in the $\cal{P}$ profile,
where $^7$Be neutrinos fall for the relevant
$\delta m^2 \sim 10^{-5}$ eV$^2$, to visualize
that these intermediate energy neutrinos are the ones most likely
to be affected by the matter noise.
\vspace{0.4cm}
{\bf 5.} Let us
analyse the possible impact of this
scenario in the determination of solar neutrino parameters
from the experimental data.
For that we have performed the standard $\chi^2$ fit in the $(\sin^2
2 \theta, \delta m^2)$ parameter space.
The results of the fitting
are shown in Fig. 2 where the 90\%
confidence level (C.L.) areas are drawn for different
values of $\xi$.
Fig. 2a and Fig. 2b refer to the cases of $\nu_e
\to \nu_{\mu,\tau}$ and $\nu_e
\to \nu_{s}$ conversion, respectively.
One can observe that the small-mixing region is almost stable,
with a slight shift
down of $\delta m^2$ values and a slight shift of
$\sin ^2 2\theta$ towards larger values.
The large mixing area is also pretty stable, exhibiting
the tendency to shift to smaller $\delta m^2$ and $\sin^2 2 \theta$.
The smaller $\delta m^2$ values compensate for the
weakening of the MSW suppression due to the presence of
matter noise, so that a larger portion of
the neutrino energy spectrum can be converted.
The presence of the matter
density noise makes the data fit a little poorer:
for the $\nu_e \to \nu_{\mu,\tau}$ transition,
$\chi^2_{min}= 0.1$ for $\xi=0$
becomes $\chi^2_{min}= 0.8$ for $\xi=$ 4\% and even
$\chi^2_{min}= 2$ for $\xi=$ 8\%.
The same holds in the
case of transition into a sterile state (Fig. 2b):
$\chi^2_{min}= 1$ for $\xi=0$
becomes $\chi^2_{min}= 3.6$ for $\xi=$ 4\% and
$\chi^2_{min}= 9$ for $\xi=$ 8\%.
In conclusion
we have shown that the MSW
solution to the SNP exists for any realistic levels of matter density noise
($\xi\leq 4\%$).
Moreover the MSW solution is essentially stable in mass ($4\cdot 10^{-6}
\mbox{eV}^2 <\delta m^2< 10^{-5}\mbox{eV}^2$ at 90\% CL), whereas
the mixing appears more sensitive to the level of fluctuations.
\vspace{0.4cm}
{\bf 6.}
We can reverse our point of view, wondering whether the solar
neutrino experiments can be a tool to get information on the
level of matter noise in the sun.
In particular, the
future Borexino experiment \cite{borex},
aiming to detect the $^7$Be neutrino flux, could be
sensitive to the presence of solar matter fluctuations.
In the relevant MSW parameter region for the noiseless case,
the Borexino signal cannot be definitely predicted
(see Fig. 3a). Within the present allowed C.L. regions (dotted line)
the expected rate, $Z_{Be}\!=\!R^{pred}_{Be}/R^{BP95}_{Be}$ (solid lines),
is in the range $0.2\div 0.7$.
On the other hand, when the matter density noise is switched on, e.g.
$\xi= 4\%$ (see Fig. 3b), the minimal
allowed value for $Z_{Be}$ becomes higher, $Z_{Be}\!\geq \!0.4$.
Hence, if the MSW mechanism is responsible for the
solar neutrino deficit and the Borexino
experiment detects a low signal, say $Z_{Be}\raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 0.3$
(with good accuracy), this will imply that a 4\% level of matter
fluctuations in the central region of the sun is unlikely.
The same argument can be applied to
\hbox{$\nu_e$ } $\rightarrow$ \hbox{$\nu_{s}$ } resonant conversion, whenever future
large detectors such
as Super-Kamiokande
and/or the Sudbury Neutrino Observatory (SNO)
establish through, e.g. the measurement of the charged to neutral
current ratio,
that the deficit of solar neutrinos is due to this kind of transition.
The expected signal in Borexino is very small, $Z_{Be} \approx 0.02$, for
$\xi =0$ (see Fig. 3c).
On the other hand with $\xi=4\%$,
the minimum expected Borexino signal is 10 times higher than in the
noiseless case, so that if Borexino detects a rate $Z_{Be} \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 0.1$
(see Fig. 3d) this would again exclude noise levels above $4\%$.
Let us notice that the Super-Kamiokande and SNO experiments, being sensitive
only to the higher energy boron neutrinos, probably
do not offer a similar possibility
to probe such matter fluctuations in the sun.
The previous discussion, which certainly deserves a more accurate
analysis
involving also the theoretical uncertainties in the
$^7$Be neutrino flux, shows the close link between neutrino physics and
solar physics.
\vspace{0.4cm}
This work has been supported by
the grant N. ERBCHBI CT-941592 of the Human Capital and Mobility
Program.
\vspace{0.3cm}
\section*{Introduction}
Starting with the seminal works \cite{Beil} and \cite{BGG} on derived categories of coherent sheaves
on projective spaces the techniques involving derived categories have been applied to a variety
of problems in algebraic geometry. Among recent examples one could mention
the relation between semiorthogonal decompositions of derived categories and birational geometry
(see \cite{BO}, \cite{B-flop}), as well as Bridgeland's theory
of stability conditions (see \cite{Bridge}).
However, there are still some open problems in which not much progress
was made since the 80's. Among them is the problem of describing derived categories of coherent
sheaves on homogeneous varieties. The method of Beilinson in \cite{Beil} was generalized by
Kapranov to the case of quadrics and to partial flag varieties for series $A_n$ (see
\cite{Kap}). Furthermore, it was
realized that the relevant structure is that of a {\it full exceptional collection}, a notion that can be
formulated for an arbitrary triangulated category (see \cite{Rud-sem}). Namely, this is a collection
of objects $E_1, \ldots, E_n$ generating the entire triangulated category with the following vanishing conditions:
$$\operatorname{Hom}^*(E_j,E_i)=0 \text{ for }i<j, \ \operatorname{Hom}^{\neq0}(E_i,E_i)=0,\ \operatorname{Hom}^0(E_i,E_i)=k,$$
where $k$ is the ground field
(which we always assume to be algebraically closed of
characteristic zero). For a smooth projective variety $X$ over $k$ we denote by
$\mathop{\mathrm D\kern0pt}\nolimits^b(X)$ the bounded derived category of coherent sheaves on $X$.
It has been conjectured that for every homogeneous variety $X$ of a semisimple algebraic group
the category $\mathop{\mathrm D\kern0pt}\nolimits^b(X)$ admits a full exceptional
collection (of vector bundles).
However, the only homogeneous varieties of simple groups for which this is known
(other than the examples mentioned above) are:
\noindent the isotropic Grassmannian of $2$-dimensional planes in a symplectic
$2n$-dimensional space (see \cite{Kuz-iso});
\noindent the isotropic Grassmannian of $2$-dimensional planes in an orthogonal
$2n+1$-dimensional space (see \cite{Kuz-iso});
\noindent the full flag variety for the symplectic and the orthogonal groups (see \cite{Sam});
\noindent the isotropic Grassmannians of a $6$-dimensional symplectic space
(see \cite{Sam});
\noindent the isotropic Grassmannian of $5$-dimensional planes in a $10$-dimensional orthogonal
space and a certain Grassmannian for type $G_2$ (see \cite{Kuz-hs}).
In the case of the Cayley plane, the minimal homogeneous variety for $E_6$, an exceptional
collection of $27$ vector bundles, that is conjectured to be full, was constructed in \cite{Manivel}.
In the present paper we construct full exceptional collections of vector bundles in the derived categories
of coherent sheaves of the Lagrangian Grassmannians $LG(4,8)$ and $LG(5,10)$, see
Theorems \ref{exc-col-thm}, \ref{full-thm} and \ref{L5-col-thm}. Note that the situation
is radically different from the previously known cases of classical type in that we have to consider
homogeneous bundles corresponding to reducible representations of the isotropy group.
The new exceptional bundles are constructed as successive extensions of appropriate Schur
functors of the universal quotient bundle.
Checking that the collections we construct are full is done in both cases ``by brute force".
One needs therefore to find a more conceptual proof before
trying to generalize our results to other Lagrangian Grassmannians.
It seems plausible that an exceptional collection $(E_1,\ldots,E_n)$ in $\mathop{\mathrm D\kern0pt}\nolimits^b(X)$ such that
classes of $E_i$ generate the Grothendieck group $K_0(X)$, is automatically full.
Thus, it would be enough to have $n=\operatorname{rk} K_0(X)$.
Recall that a full triangulated subcategory ${\cal C}\subset{\cal D}$ generated by an exceptional collection
in ${\cal D}$ is {\it admissible} (see \cite{Bondal}, Thm. 3.2). By definition, this means that the
inclusion functor ${\cal C}\to{\cal D}$ admits left and right adjoint functors ${\cal D}\to{\cal C}$.
To check that ${\cal C}={\cal D}$ is equivalent to showing that the right orthogonal ${\cal C}^{\perp}\subset{\cal D}$
is zero, where ${\cal C}^{\perp}=\{A\in{\cal D}\ |\ \operatorname{Hom}_{{\cal D}}({\cal C},A)=0\}$. It is known that
${\cal C}^{\perp}$ is also admissible. Thus, the above statement
would follow from the Nonvanishing conjecture of A.~Kuznetsov (see \cite{Kuz-Hoch}, Conjecture 9.1 and Corollary 9.3) that a nonzero admissible subcategory should have nonzero Hochschild homology.
\vspace{2mm}
\noindent
{\it Acknowledgments}.
The work of the first author was partially supported by the NSF grant DMS-0601034.
Part of this work was done while the second author was visiting the University of Oregon and
the IHES.
He gratefully acknowledges the hospitality and support of both institutions.
\section{Applications of the Bott's theorem in the case of Lagrangian Grassmannians}
Let $V$ be a symplectic vector space of dimension $2n$.
Consider the Largangian Grassmannian $LG(V)$ of $V$
(we also use the notation $LG(n,2n)$).
We have the basic exact sequence of vector bundles on $LG(V)$
\begin{equation}\label{basic-seq}
0\to \mathcal U\to V\otimes{\cal O} \to Q\to 0
\end{equation}
where $\mathcal U=Q^{\ast}$ is the tautological subbundle, and $Q$ is the tautological quotient-bundle.
We set ${\cal O}(1)=\wedge^n Q$. This is an ample generator of the Picard group of $LG(V)$.
It is well known that the canonical line bundle on $LG(V)$ is isomorphic to ${\cal O}(-n-1)$.
The variety $LG(V)$ is a homogeneous space for the symplectic group $\operatorname{Sp}(V)=\operatorname{Sp}(2n)$. Namely,
it can be identified with $\operatorname{Sp}(2n)/P$, where $P$ is the maximal parabolic associated with the
simple root $\a_n$. Here we use the standard numbering of the vertices in the Dynkin diagram
$C_n$ as in \cite{Bour}. Recall that the semisimple part of $P$ is naturally identified with
$\operatorname{GL}(n)$. Thus, to every representation of $\operatorname{GL}(n)$ one can associate a homogeneous vector
bundle on $LG(V)$. This correspondence is compatible with tensor products and the standard
representation of $\operatorname{GL}(n)$ corresponds to $Q$. For our purposes it will be convenient to
identify the maximal torus of $\operatorname{Sp}(2n)$ with that of $\operatorname{GL}(n)\subset P$. One can easily check
that under this identification the half-sum of all the positive roots of $\operatorname{Sp}(2n)$ is equal to
$$\rho=n\epsilon_1+(n-1)\epsilon_2+\ldots+\epsilon_n,$$
where $(\epsilon_i)$ is the standard basis of the weight lattice corresponding to $\operatorname{GL}(n)$.
Note that with respect to this basis the roots of $\operatorname{Sp}(2n)$ are $\pm\epsilon_i$ and
$\pm\epsilon_i\pm\epsilon_j$. Thus, a weight $x_1\epsilon_1+\ldots+x_n\epsilon_n$ is singular for $\operatorname{Sp}(2n)$
if and only if
either there exists $i$ such that $x_i=0$, or there exist $i\neq j$ such that $x_i=\pm x_j$.
The Weyl group $W$ of $\operatorname{Sp}(2n)$ is the semidirect product of $S_n$ and ${\Bbb Z}_2^n$
acting by permutations and sign changes $x_i\mapsto -x_i$.
A weight $x_1\epsilon_1+\ldots+x_n\epsilon_n$ is dominant for $\operatorname{Sp}(2n)$ if and only if
$x_1\ge x_2\ge \ldots\ge x_n\ge 0$.
For a dominant weight $\lambda=(a_1,\ldots,a_n)$ of $\operatorname{GL}(n)$ (where $a_1\ge a_2\ge\ldots\ge a_n$),
let $S^{\lambda}$ denote the corresponding Schur functor (sometimes we omit the tail of
zeros in $\lambda$). Note that by definition, $S^{(a_1+1,\ldots,a_n+1)}=\det\otimes S^{(a_1,\ldots,a_n)}$. Hence,
$$S^{(a_1+1,\ldots,a_n+1)}Q\simeq S^{(a_1,\ldots,a_n)}Q(1).$$
Our main computational tool is Bott's theorem on cohomology of homogeneous vector bundles.
In the case of the Lagrangian Grassmannian $LG(V)$ it states the following.
\begin{thm} (Theorem $IV'$ of \cite{Bott})
\begin{enumerate}
\item If $\lambda+\rho$ is singular then $H^{\ast}(LG(V),S^{\lambda}Q)=0$;
\item if $\lambda+\rho$ is non-singular and $w\in W$ is an element of minimal length $\ell$ such that
$\mu=w(\lambda+\rho)-\rho$ is dominant for $\operatorname{Sp}(2n)$, then $H^i(LG(V),S^{\lambda}Q)=0$ for $i\neq\ell$ and
$H^{\ell}(LG(V),S^{\lambda}Q)$ is an irreducible representation of $\operatorname{Sp}(2n)$ with the highest weight $\mu$.
\end{enumerate}
\end{thm}
Below we will often abbreviate
$H^{\ast}(LG(V),?)$ to $H^{\ast}(?)$.
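Since this computation recurs throughout the paper, we record a small sketch automating it (ours, for illustration; it only uses the description of the Weyl group of $\operatorname{Sp}(2n)$ recalled above, together with the standard fact that for a regular weight the minimal length of $w$ equals the number of positive roots on which $\lambda+\rho$ is negative):
\begin{verbatim}
def bott_sp(lam):
    # lam: weight (a_1,...,a_n) of GL(n). Returns None if all
    # cohomology of S^lam Q on LG(n,2n) vanishes, else (l, mu) with
    # H^l = irreducible Sp(2n)-module of highest weight mu.
    n = len(lam)
    rho = list(range(n, 0, -1))               # (n, n-1, ..., 1)
    v = [a + r for a, r in zip(lam, rho)]     # lam + rho
    if any(x == 0 for x in v) or len({abs(x) for x in v}) < n:
        return None                           # lam + rho is singular
    ell = sum(x < 0 for x in v)                                # 2e_i
    ell += sum(v[i] < v[j] for i in range(n) for j in range(i+1, n))
    ell += sum(v[i] + v[j] < 0 for i in range(n) for j in range(i+1, n))
    w_v = sorted((abs(x) for x in v), reverse=True)  # dominant rep.
    return ell, [x - r for x, r in zip(w_v, rho)]
\end{verbatim}
For instance, for $S^2Q^{\ast}=S^{((0)^{n-1},-2)}Q$ the call \verb|bott_sp((0,)*(n-1)+(-2,))| returns $(1,(0)^n)$, i.e. $H^1(S^2Q^{\ast})=k$ and all other cohomology vanishes, as used repeatedly below.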
\begin{lem}\label{coh-lem}
One has
\noindent
(i) $H^{\ast}({\cal O}(i))=0$ for $i\in [-n,-1]$; $H^{>0}({\cal O})=0$ and $H^0({\cal O})=k$.
\noindent
(ii) $H^{\ast}(\wedge^k Q(i))=0$ for $k\in [1,n-1]$ and $i\in [-n-1,-1]$. Also,
for $k\in [1,n-1]$ one has $H^{>0}(\wedge^k Q)=0$ and
$H^0(\wedge^k Q)$ is an irreducible representation of $\operatorname{Sp}(2n)$ with the highest weight
$((1)^k,(0)^{n-k})$ ($k$ $1$'s).
\end{lem}
\noindent {\it Proof} . (i) We have in this case $\lambda+\rho=(n+i,\ldots,1+i)$ which is singular for $i\in [-n,-1]$.
For $i=0$ we have $\lambda+\rho=\rho$.
\noindent
(ii) The bundle $\wedge^k Q$ corresponds to the weight $((1)^k,(0)^{n-k})$.
Thus, $\wedge^k Q(i)$ corresponds to $\lambda=((1+i)^k,(i)^{n-k})$, so
$\lambda+\rho=(n+1+i,\ldots, n-k+2+i, n-k+i,\ldots,1+i)$. In the case when $i\in [-n-1, -n-2+k]$ or
$i\in [-n+k,-1]$ one of the coordinates is zero. On the other hand, for $i=-n-1+k$ the
sum of the $k$th and $(k+1)$st coordinates is zero. Hence, $\lambda+\rho$ is singular for $i\in [-n-1,-1]$.
\qed\vspace{3mm}
When computing the $\operatorname{Ext}$-groups on $LG(V)$ between the bundles of the form $S^{\lambda}Q$
it is useful to observe that
$$(S^{(a_1,\ldots,a_n)}Q)^{\ast}\simeq S^{(a_1-a_n,a_1-a_{n-1},\ldots,0)}Q(-a_1).$$
To compute the tensor products of the Schur functors we use the Littlewood-Richardson rule.
\begin{lem}\label{gen-exc-lem} Assume that $n\ge 3$.
\noindent (i)
One has $\operatorname{Hom}^*(\wedge^k Q,\wedge^l Q(i))=0$ for $i\in [-n,-1]$ and $k,l\in [0,n-2]$.
Also, $\operatorname{Hom}^*(\wedge^k Q,\wedge^l Q)=0$ for $k,l\in [0,n-2]$ and $k>l$.
All the bundles $\wedge^k Q$ are exceptional.
\noindent (ii)
For $k<n$ one has $\operatorname{Hom}^*(\wedge^k Q,\wedge^{k+1}Q)=V$ (concentrated in degree $0$).
Furthermore, the natural map $Q\to\underline{\operatorname{Hom}}(\wedge^k Q,\wedge^{k+1}Q)$ induces an
isomorphism on $H^0$.
\end{lem}
\noindent {\it Proof} . (i) Recall that
$$\wedge^k Q^{\ast}=\wedge^{n-k} Q(-1)=S^{((1)^{n-k},(0)^k)}Q(-1).$$
Therefore, for $k>l$, $k+l\neq n$, the tensor product
$\wedge^k Q^{\ast}\otimes \wedge^l Q\simeq \wedge^{n-k}Q\otimes \wedge^l Q(-1)$ decomposes into
direct summands of the form $S^{\lambda}Q$ with $\lambda=((1)^a,(0)^b,(-1)^c)$, where $b>0$ and $c>0$.
It is easy to see that in this case $\lambda+((i)^n)+\rho$ will be singular for $i\in [-n,0]$. Furthermore,
even if $k+l=n$ but $l<k<n-1$, we claim that the weights $\lambda+((i)^n)+\rho$ will still be singular
for $i\in[-n,0]$. Indeed, this follows easily from the fact that $\lambda=((1)^a,(0)^b,(-1)^c)$ with $c>0$, and
either $b>0$ or $c>1$ or $a>1$.
Hence, $\operatorname{Hom}^*(\wedge^k Q,\wedge^lQ(i))=0$, where $i\in [-n,0]$, $n>k>l\ge 0$ and $(k,l)\neq (n-1,1)$.
Using Serre duality we deduce the needed vanishing for the case $k<l$.
In the case when $k=l$ the tensor product
$\wedge^k Q^{\ast}\otimes\wedge^k Q\simeq \wedge^{n-k}Q\otimes\wedge^k Q(-1)$ will contain exactly one summand isomorphic
to ${\cal O}$, and the other summands of the same form as above with $c>0$. The same argument as
before shows that $\operatorname{Hom}^*(\wedge^kQ,\wedge^kQ(i))=0$ for $i\in [-n,-1]$ and that
$\operatorname{Hom}^*(\wedge^kQ,\wedge^kQ)=k$ (concentrated in degree $0$).
\noindent
(ii) The tensor product $\wedge^k Q^{\ast}\otimes\wedge^{k+1}Q\simeq\wedge^{n-k}Q\otimes\wedge^{k+1}Q(-1)$
decomposes into the direct sum of $Q$ and of summands
of the form $S^{\lambda}Q$ with $\lambda=((1)^a,(0)^b,(-1)^c)$, where $c>0$. In the latter case
the weight $\lambda+\rho$ is singular, so these summands do not contribute to cohomology.
\qed\vspace{3mm}
Next, for $k\in [1,n-3]$
consider the vector bundle $R_k:=S^{(2,(1)^k)}Q$, so that we have a direct sum decomposition
$$Q\otimes \wedge^{k+1} Q= \wedge^{k+2} Q\oplus R_k.$$
One can check that $R_k$ itself is not exceptional but in the next section
we are going to construct a related exceptional bundle on $LG(V)$.
\begin{lem}\label{R-van-lem}
For $1\le k\le n-3$, $0\le l\le n-2$ and $-n\le i\le -1$ one has
$$\operatorname{Hom}^*(\wedge^l Q, R_k(i))=\operatorname{Hom}^*(R_k,\wedge^l Q(i))=0.$$
Furthermore, for $l>k+1$ one has $\operatorname{Hom}^*(\wedge^l Q, R_k)=0$, while
for $l<k$ one has $\operatorname{Hom}^*(R_k,\wedge^l Q)=0$.
\end{lem}
\noindent {\it Proof} .
By the Littlewood-Richardson rule, the tensor product
$\wedge^l Q^*\otimes R_k=\wedge^{n-l} Q\otimes R_k(-1)$ decomposes into direct summands
of the form $S^{\lambda}$, where $\lambda$ has one of the following types:
\noindent
(i) $\lambda=(1,(0)^{k+n-l},(-1)^{l-k-1})$, provided $l\ge k+1$ (note that $k+n-l\ge 3$);
\noindent
(ii) $\lambda=((1)^a,(0)^b,(-1)^c)$, where $1\le a\le k+1$, $a+b\ge k+1$, $a+b+c=n$, $2a+b=k+n-l+2$;
\noindent
(iii) $\lambda=(2,(1)^a,(0)^b,(-1)^c)$, where $a\le k$, $a+b\ge k$, $a+b+c=n-1$, $2a+b=k+n-l-1$.
\noindent
In case (i) the weight $\lambda+((i)^n)+\rho$ will be singular for $i\in [-n-1,-1]$. In the case
$l>k+1$ it will also be singular for $i=0$. Next, let us consider case (ii).
If $b>0$ then the weight $\lambda+((i)^n)+\rho$ will be singular for $i\in [-n-1,-1]$ and if in addition
$c>0$ then it will be also singular for $i=0$. Note that the case $c=0$ occurs only when $l\le k+1$.
In the case $b=0$ we should have $a=k+1$, so $2\le a\le n-2$, which implies that $\lambda+((i)^n)+\rho$ is
singular for $i\in [-n-1,0]$. Finally, let us consider case (iii).
If $a>0$, $b>0$ and $c>0$ then the weight $\lambda+((i)^n)+\rho$ will be singular for $i\in [-n-1,0]$.
The case $c=0$ can occur only when $l\le k$. In the case $b=0$ we should have $a=k$,
so $c=n-k-1\ge 2$ which implies that the above weight is still singular for $i\in [-n-1,0]$.
In the case $a=0$ we have $b=k+n-l-1\ge 2$, so we deduce that the above weight will be singular
for $i\in [-n,0]$. Note that the case $a=0$ can occur only for $l\ge k$.
The above analysis shows the vanishing of $\operatorname{Hom}^*(\wedge^l Q, R_k(i))$ for $i\in [-n,-1]$, as well
as vanishing of $\operatorname{Hom}^*(\wedge^l Q, R_k)$ for $l>k+1$ and of $\operatorname{Hom}^*(\wedge^l Q, R_k(-n-1))$ for
$l<k$. Applying Serre duality we deduce the remaining assertions.
\qed\vspace{3mm}
Note that in the above lemma we have skipped the calculation of $\operatorname{Hom}^*(R_k,\wedge^l Q)$ and
$\operatorname{Hom}(\wedge^l Q,R_k)$ for $l=k$ and $l=k+1$. This will be done in the following lemma,
where we also prove a number of other auxiliary statements.
Let us consider a natural map $f:V\otimes\wedge^{k+1}Q\to R_k$ induced by the projection
$Q\otimes\wedge^{k+1} Q\to R$ and the map $V\otimes {\cal O}\to Q$.
\begin{lem}\label{Bott-lem}
Assume $1\le k\le n-3$. Then one has
\noindent
(i) $\operatorname{Hom}^{>0}(\wedge^{k+1} Q,R_k)=0$ and $\operatorname{Hom}^0(\wedge^{k+1} Q,R_k)=V$.
The map $f$ induces an isomorphism on $\operatorname{Hom}^*(\wedge^{k+1} Q, ?)$.
\noindent
(ii) One has $\operatorname{Hom}^{>0}(\wedge^k Q,R_k)=0$. Also,
the natural map $Q\otimes Q\to\underline{\operatorname{Hom}}(\wedge^k Q,R_k)$ induces an isomorphism on $H^0$, so that
$\operatorname{Hom}^0(Q,R_k)\simeq V\otimes V/k$.
\noindent
(iii) $\operatorname{Hom}^1(R_k,\wedge^k Q)=k$,
$\operatorname{Hom}^1(R_k,\wedge^{k+1} Q)=V$, $\operatorname{Hom}^{\neq 1}(R_k,\wedge^k Q)=
\operatorname{Hom}^{\neq 1}(R_k,\wedge^{k+1} Q)=0$.
The natural map $S^2Q^{\ast}\to\underline{\operatorname{Hom}}(R_k,\wedge^k Q)$ induces an isomorphism on $H^1$.
\noindent
(iv) $\operatorname{Hom}^{>1}(R_k,R_k)=0$, $\operatorname{Hom}^0(R_k,R_k)=k$, $\operatorname{Hom}^1(R_k,R_k)=V\otimes V/k$.
\noindent
(v) $H^{\ast}(Q^{\ast}\otimes S^2Q^{\ast})=0$.
\noindent
(vi) $H^i(Q^{\ast}\otimes Q\otimes S^2Q^{\ast})=0$ for $i\neq 1$.
\end{lem}
\noindent {\it Proof} . (i) By the Littlewood-Richardson rule we have
$$\wedge^{k+1} Q^{\ast}\otimes R_k\simeq
\wedge^{n-k-1} Q(-1)\otimes S^{(2,(1)^k)}Q\simeq
Q\oplus \ldots
$$
where the remaining summands correspond to highest weights
$\lambda=(a_1,\ldots,a_n)$ such that $a_n=-1$.
For such $\lambda$ the weight $\lambda+\rho$ is singular,
hence these summands do not contribute to cohomology. Thus, the unique embedding of
$Q$ into $\underline{\operatorname{Hom}}(\wedge^{k+1} Q,R_k)$ induces an isomorphism on cohomology.
This immediately implies the result (recall that $H^{\ast}(Q)=V$ by Lemma \ref{coh-lem}).
\noindent (ii) Applying the Littlewood-Richardson rule again we find
$$\wedge^k Q^{\ast}\otimes R_k\simeq S^2Q\oplus\wedge^2Q\oplus\ldots$$
where the remaining summands correspond to highest weights $\lambda=(a_1,\ldots,a_n)$
with $a_n=-1$. The sum of the first two terms is exactly
the image of the natural embedding $Q\otimes Q\to\underline{\operatorname{Hom}}(Q,R)$.
\noindent
(iii) We have
$$R_k^{\ast}\otimes \wedge^k Q\simeq S^2Q^{\ast}\oplus\ldots$$
where all the remaining summands correspond to highest weights $\lambda=(a_1,\ldots,a_n)$
such that either $a_n=-1$ or $(a_{n-1},a_n)=(-1,-2)$. In both cases $\lambda+\rho$ is singular,
hence these summands do not contribute to cohomology.
For $S^2Q^{\ast}=S^{((0)^{n-1},-2)}Q$ one has
$\lambda+\rho=(n,\ldots,2,-1)$. Hence, applying a simple reflection
we get exactly $\rho$. This means that only $H^1$ is nonzero, and it is one-dimensional.
Similarly,
$$R_k^{\ast}\otimes \wedge^{k+1}Q\simeq S^{(1,(0)^{n-2},-2)}Q\oplus\ldots$$
where the remaining summands have singular $\lambda+\rho$.
For $\lambda=(1,(0)^{n-2},-2)$ we have
$\lambda+\rho=(n+1,\ldots,2,-1)$. This differs by a single reflection from $\rho+(1,(0)^{n-1})$.
Hence only $H^1$ is nonzero and $H^1(R_k^{\ast}\otimes\wedge^{k+1}Q)\simeq V$.
\noindent (iv) We have
$$R_k^{\ast}\otimes R_k\simeq S^{((2)^{n-2},1,0)}Q(-2)\otimes S^{(2,1)}Q\simeq
S^{(2,(0)^{n-2},-2)}Q\oplus S^{(1,1,(0)^{n-3},-2)}Q\oplus {\cal O}\oplus\ldots,$$
where the remaining terms do not contribute to cohomology.
The first two terms contribute only to $H^1$. Namely, the corresponding weights
$\lambda+\rho$ differ by a single reflection from $\rho+(2,(0)^{n-1})$ and $\rho+(1,1,(0)^{n-2})$,
respectively.
\noindent (v)
We have
$$Q^{\ast}\otimes S^2Q^{\ast}\simeq S^3Q^{\ast}\oplus S^{(2,1)}Q^{\ast}\simeq
S^{((0)^{n-1},3)}Q\oplus S^{((0)^{n-2},-1,-2)}Q.$$
In both cases $\lambda+\rho$ is singular.
\noindent (vi)
We have
$$Q\otimes Q^{\ast}\otimes S^2Q^{\ast}=Q\otimes S^3Q^{\ast}\oplus Q\otimes S^{(2,1)}Q^{\ast}
\simeq (S^{((0)^{n-1},-2)}Q)^{\oplus 2}\oplus\ldots,
$$
where the remaining summands do not contribute to cohomology.
For the first summand we have $\lambda+\rho=(n,\ldots,2,-1)$ which is obtained by applying
a simple reflection to a dominant weight. Hence, the cohomology is concentrated in degree $1$.
\qed\vspace{3mm}
\section{A family of exceptional vector bundles on $LG(V)$}
Let us fix $k\in [1,n-3]$.
The natural map $f:V\otimes\wedge^{k+1} Q\to Q\otimes\wedge^{k+1} Q$ is surjective, so we
obtain an exact sequence of vector bundles
\begin{equation}\label{SR-seq}
0\to S_k\to V\otimes\wedge^{k+1} Q\stackrel{f}{\to} R_k\to 0.
\end{equation}
Using the composite nature of $f$ we also get an exact sequence
\begin{equation}\label{S-seq}
0\to Q^{\ast}\otimes\wedge^{k+1} Q\to S_k\to \wedge^{k+2} Q\to 0.
\end{equation}
We have a natural embedding of vector bundles
$$\wedge^k Q\hookrightarrow \underline{\operatorname{Hom}}(Q,\wedge^{k+1} Q)=Q^{\ast}\otimes\wedge^{k+1} Q\hookrightarrow S_k.$$
Now we define $E_k$ to be the quotient $S_k/\wedge^k Q$, so that we have an exact sequence
\begin{equation}\label{QSE-seq}
0\to \wedge^k Q\to S_k\to E_k\to 0.
\end{equation}
\begin{lem}\label{split-lem}
The exact sequence \eqref{QSE-seq} splits canonically, so we have
$S_k\simeq\wedge^k Q\oplus E_k$. Furthermore, the bundles
$\wedge^k Q$ and $E_k$ are orthogonal to each other, i.e.,
$$\operatorname{Hom}^*(\wedge^k Q,E_k)=\operatorname{Hom}^*(E_k,\wedge^k Q)=0.$$
\end{lem}
\noindent {\it Proof} . First, we claim that $\operatorname{Hom}^0(S_k,\wedge^k Q)=k$ and $\operatorname{Hom}^i(S_k,\wedge^k Q)=0$
for $i\neq 0$.
Indeed, this follows immediately from the exact sequence \eqref{SR-seq} and from
Lemma \ref{Bott-lem}(iii) since $\operatorname{Hom}^*(\wedge^{k+1} Q,\wedge^k Q)=0$
by Lemma \ref{gen-exc-lem}. Next, using the vanishing of $\operatorname{Hom}^*(\wedge^{k+2} Q,\wedge^k Q)$
and the exact sequence \eqref{S-seq} we see that the embedding
$Q^{\ast}\otimes\wedge^{k+1} Q\hookrightarrow S_k$ induces an isomorphism
on $\operatorname{Hom}^*(?,\wedge^k Q)$. Hence, the nonzero morphism $S_k\to\wedge^k Q$ restricts
to the nonzero morphism $Q^{\ast}\otimes\wedge^{k+1} Q\to\wedge^k Q$, unique up to scalar.
The latter morphism is proportional to the natural contraction operation. Hence,
its restriction to $\wedge^k Q\subset Q^{\ast}\otimes\wedge^{k+1} Q$ is nonzero. Therefore, we get
a splitting of \eqref{QSE-seq}.
The vanishing of $\operatorname{Hom}^*(E_k,\wedge^k Q)$ also follows. On the other hand, from the exact
sequence \eqref{SR-seq}, using Lemma
\ref{Bott-lem}(ii) we get $\operatorname{Hom}^0(\wedge^k Q,S_k)=k$ and $\operatorname{Hom}^i(\wedge^k Q,S_k)=0$ for
$i\neq 0$. This implies that $\operatorname{Hom}^*(\wedge^k Q, E_k)=0$.
\qed\vspace{3mm}
By the above lemma there is a unique morphism $S_k\to\wedge^k Q$ extending the identity
morphism on $\wedge^k Q\subset S_k$. Pushing forward the extension given by \eqref{SR-seq}
under this morphism we get an extension
\begin{equation}\label{FR-seq}
0\to \wedge^k Q\to F_k\to R_k\to 0.
\end{equation}
Furthermore, we also get an exact sequence
\begin{equation}\label{EF-seq}
0\to E_k\to V\otimes \wedge^{k+1}Q\to F_k\to 0.
\end{equation}
Let us recall the definition of the mutation operation.
For an exceptional pair $(A,B)$ in a triangulated category ${\cal D}$,
the right mutation is a pair $(B,R_{B}A)$, where
$R_{B}A$ is defined to be a cone of the triangle
\begin{equation}\nonumber
\dots \longrightarrow R_{B}A[-1] \longrightarrow A\longrightarrow \operatorname{Hom} _{{\cal D}}^{\bullet}(A,B)^{\ast}\otimes B\longrightarrow
R_{B}A\longrightarrow \dots \quad .
\end{equation}
The pair $(B,R_B A)$ is again exceptional.
\begin{thm}\label{exc-thm} Let $k\in [1,n-3]$.
The bundle $F_k$ is the unique nontrivial extension of $R_k$ by $\wedge^k Q$.
The bundles $E_k$ and $F_k$ are exceptional, and $F_k$ is the right mutation of
$E_k$ through $\wedge^{k+1}Q$. Also, one has $F_k^{\ast}(1)\simeq E_{n-2-k}$.
\end{thm}
\noindent {\it Proof} . {\bf Step 1}. $\operatorname{Hom}^*(\wedge^{k+1}Q,E_k)=\operatorname{Hom}^*(F_k,\wedge^k Q)=0$. Indeed, the first vanishing
follows immediately from the exact sequence \eqref{SR-seq}. The second vanishing
follows from the exact sequence
\eqref{EF-seq} since $\operatorname{Hom}^*(\wedge^{k+1}Q,\wedge^k Q)=0$ and $\operatorname{Hom}^*(E_k,\wedge^k Q)=0$ by Lemma
\ref{split-lem}.
\noindent
{\bf Step 2}. $F_k$ is a nontrivial extension of $R_k$ by $\wedge^k Q$ (recall that by Lemma
\ref{Bott-lem}(iii) there is a unique such extension). Indeed,
otherwise we would have a surjective map $F_k\to\wedge^k Q$ which is impossible by Step 1.
\noindent
{\bf Step 3}. $E_k$ is isomorphic to $F_{n-2-k}^{\ast}(1)$.
We have $Q^{\ast}\otimes\wedge^{k+1}Q\simeq \wedge^k Q\oplus R^{\ast}_{n-k-2}(1)$. Therefore, from
the exact sequence \eqref{S-seq} we get an exact sequence
$$0\to R^{\ast}_{n-2-k}(1)\to E_k\to \wedge^{k+2}Q\to 0.$$
We claim that it does not split. Indeed, otherwise we would get an inclusion
$\wedge^{k+2}Q\hookrightarrow E_k$ which is impossible since $\operatorname{Hom}(\wedge^{k+1}Q,\wedge^{k+2}Q)\neq 0$
but $\operatorname{Hom}(\wedge^{k+1}Q,E_k)=0$. Comparing this with the extension \eqref{FR-seq} for $n-2-k$
instead of $k$ we get the result.
\noindent
{\bf Step 4}. The natural map
$$H^0(Q\otimes Q)\otimes H^1(S^2 Q^{\ast})\to H^1(Q\otimes Q\otimes S^2 Q^{\ast})$$
is an isomorphism. Indeed, it is easy to check using Bott's theorem that both sides are isomorphic to
$V^{\otimes 2}/k$, so it is enough to check surjectivity. Therefore, it suffices to check
surjectivity of the maps
$$H^0(Q)\otimes H^1(S^2 Q^{\ast})\to H^1(Q\otimes S^2 Q^{\ast}) \ \text{and}$$
$$H^0(Q)\otimes H^1(Q\otimes S^2 Q^{\ast})\to H^1(Q\otimes Q\otimes S^2 Q^{\ast}).$$
Using the exact sequence \eqref{basic-seq} we deduce this from the vanishing
of $H^2(Q^{\ast}\otimes S^2 Q^{\ast})$ and $H^2(Q^{\ast}\otimes Q\otimes S^2 Q^{\ast})$
(see Lemma \ref{Bott-lem}(v),(vi)).
\noindent
{\bf Step 5}. The composition map
$$\operatorname{Hom}^0(\wedge^k Q,R_k)\otimes\operatorname{Hom}^1(R_k,\wedge^k Q)\to\operatorname{Hom}^1(R_k,R_k)$$
is an isomorphism. Note that by Lemma \ref{Bott-lem}(ii),(iii),(iv),
both sides are isomorphic to $V^{\otimes 2}/k=S^2V\oplus \wedge^2 V/k$, so it is enough
to check surjectivity. Let us define the natural morphisms
$$\a:S^2Q^{\ast}\to R_k^{\ast}\otimes \wedge^k Q,$$
$$\b:Q\otimes Q\to \wedge^k Q^{\ast}\otimes R_k,$$
as follows. Consider the Koszul complex for the symmetric algebra $S^*Q$
$$0\to\wedge^{k+2}Q\stackrel{d_1}{\to}Q\otimes\wedge^{k+1} Q\stackrel{d_2}{\to} S^2Q\otimes\wedge^k Q\to\ldots$$
Then $R_k$ can be identified with the image of $d_2$ (or cokernel of $d_1$).
In particular, we have a natural
embedding $R_k\to S^2Q\otimes\wedge^k Q$ which induces $\a$ by duality.
On the other hand, the natural projection $Q\otimes\wedge^{k+1} Q\to R_k$ gives rise to
the composed map
$$Q\otimes Q\otimes\wedge^k Q\stackrel{\operatorname{id}_Q\otimes\mu_k}{\to} Q\otimes\wedge^{k+1}Q\to R_k$$
where $\mu_k:Q\otimes\wedge^k Q\to\wedge^{k+1}Q$ is given by the exterior product. The map $\b$
is obtained from the above map by duality.
The morphisms $\a$ and $\b$ can be combined into a map
$$\gamma:S^2Q^{\ast}\otimes Q\otimes Q\stackrel{\a\otimes\b}{\to}R_k^{\ast}\otimes \wedge^k Q\otimes \wedge^k Q^{\ast}\otimes R_k\to
R_k^{\ast}\otimes R_k,$$
where the last arrow is induced by the trace map on $\wedge^k Q$.
By Step 4, it remains to check that the maps $\a$, $\b$ and $\gamma$ induce isomorphisms on
cohomology. In fact, we are going to prove that all these maps are embeddings of a direct summand
by constructing the maps $p_{\a}$, $p_{\b}$ and $p_{\gamma}$ in the opposite direction such that
$p_{\a}\circ\a$, $p_{\b}\circ\b$ and $p_{\gamma}\circ\gamma$ are proportional to identity. To this end we use the Koszul complex for the exterior algebra $\wedge^*Q$
$$\ldots\to S^2 Q\otimes\wedge^k Q\stackrel{\delta_2}{\to}Q\otimes\wedge^{k+1}Q\stackrel{\delta_1}{\to}\wedge^{k+2}Q\to 0$$
We can identify $R_k$ with the kernel of $\delta_1$ (or the image of $\delta_2$). Hence,
we have a natural map $S^2 Q\otimes\wedge^k Q\to R_k$. By duality this corresponds to a map
$$p_{\a}:R_k^{\ast}\otimes\wedge^k Q\to S^2Q^{\ast}.$$
On the other hand, we have a natural embedding
$$R_k\to Q\otimes\wedge^{k+1}Q\to Q\otimes Q\otimes\wedge^k Q$$
that gives rise to a map $p_{\b}:\wedge^k Q^{\ast}\otimes R_k\to Q\otimes Q$.
Combining $p_{\a}$ and $p_{\b}$ we obtain a map
$$p_{\gamma}:R_k^{\ast}\otimes R_k\to R_k^{\ast}\otimes\wedge^k Q\otimes\wedge^kQ^{\ast}\otimes R_k\to S^2Q^{\ast}\otimes Q\otimes Q.$$
A routine calculation proves our claim about the compositions $p_{\a}\circ\a$, $p_{\b}\circ\b$ and
$p_{\gamma}\circ\gamma$.
\noindent
{\bf Step 6}. Now we can prove that $F_k$ is exceptional (and hence, $E_k$ is also exceptional
by Step 3).
Applying the functor $\operatorname{Hom}^*(F_k,?)$ to the exact sequence
\eqref{FR-seq} and using Step 1 we get isomorphisms $\operatorname{Hom}^i(F_k,F_k)\simeq \operatorname{Hom}^i(F_k,R_k)$.
Next, applying the functor $\operatorname{Hom}^*(?,R_k)$ to the same sequence we get a long exact sequence
$$\ldots\to\operatorname{Hom}^{i-1}(R_k,\wedge^k Q)\to\operatorname{Hom}^i(R_k,R_k)\to \operatorname{Hom}^i(F_k,R_k)\to\operatorname{Hom}^i(R_k,\wedge^k Q)\to\ldots$$
It remains to apply Lemma \ref{Bott-lem}(iii) and Step 5 to conclude that $\operatorname{Hom}^i(F_k,R_k)=0$ for $i>0$ and
$\operatorname{Hom}^0(F_k,R_k)=k$.
\noindent
{\bf Step 7}. To check that $F_k$ is the right mutation of $E_k$ through $\wedge^{k+1}Q$ it remains
to prove that $\operatorname{Hom}^i(E_k,\wedge^{k+1}Q)=0$ for $i\neq 0$ and that $\operatorname{Hom}^0(E_k,\wedge^{k+1}Q)\simeq V$.
Applying the functor $\operatorname{Hom}^*(?,\wedge^{k+1}Q)$ to the sequence \eqref{SR-seq} we get
by Lemma \ref{Bott-lem}(iii) an exact sequence
$$0\to V\to \operatorname{Hom}^0(S_k,\wedge^{k+1}Q)\to V\to 0$$
along with the vanishing of $\operatorname{Hom}^{>0}(S_k,\wedge^{k+1}Q)$. Since $S_k=\wedge^k Q\oplus E_k$,
the assertion follows.
\qed\vspace{3mm}
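Note that, in view of Step 7 of the above proof, the exact sequence \eqref{EF-seq} is precisely the triangle defining the right mutation: since $\operatorname{Hom}^{\bullet}(E_k,\wedge^{k+1}Q)$ is concentrated in degree $0$, the triangle takes the form
$$E_k\to \operatorname{Hom}^0(E_k,\wedge^{k+1}Q)^{\ast}\otimes\wedge^{k+1}Q\simeq V\otimes\wedge^{k+1}Q\to F_k,$$
where we use the identification $V\simeq V^{\ast}$ given by the symplectic form.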
We are going to compute some $\operatorname{Hom}$-spaces involving the bundles $E_k$ that we will need
later.
\begin{lem}\label{E-van-lem}
Assume that $l\in [0,n-2]$ and $k\in [1,n-3]$. Then
for $i\in [-n,-1]$ one has
$$\operatorname{Hom}^*(\wedge^l Q, E_k(i))=\operatorname{Hom}^*(E_k,\wedge^l Q(i))=0.$$
For $l>k$ one has $\operatorname{Hom}^*(\wedge^l Q,E_k)=0$, while for $l<k$ one has
$\operatorname{Hom}^*(E_k,\wedge^l Q)=0$ (recall that for $l=k$ both these spaces vanish by
Lemma \ref{split-lem}).
\end{lem}
\noindent {\it Proof} . It is enough to check similar assertions with $S_k$ instead of $E_k$.
Using the exact sequence \eqref{SR-seq} we reduce the required vanishing for $i\in [-n,-1]$
to Lemmas \ref{gen-exc-lem}(i) and \ref{R-van-lem}. To prove the remaining vanishings
we use in addition the fact that $\operatorname{Hom}^*(\wedge^{k+1}Q, S_k)=0$ that
follows from Lemma \ref{Bott-lem}(i).
\qed\vspace{3mm}
\section{The case of $LG(4,8)$}
Now let us assume that $V$ is $8$-dimensional. Let $E=E_1$.
\begin{thm}\label{exc-col-thm}
The following collection on $LG(4,8)$ is exceptional:
$$({\cal O},E,Q,\wedge^2Q,{\cal O}(1),Q(1),\wedge^2Q(1),\ldots,{\cal O}(4),Q(4),\wedge^2Q(4)).$$
\end{thm}
\noindent {\it Proof} . We already know that all these bundles are exceptional.
The required orthogonality conditions follow from
Lemma \ref{gen-exc-lem}, Lemma \ref{split-lem}, and Lemma \ref{E-van-lem}.
\qed\vspace{3mm}
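Note that this collection consists of $4+3\cdot 4=16=2^4$ vector bundles, which is exactly the rank of the Grothendieck group $K_0(LG(4,8))$ (the number of Schubert cells). This is a necessary condition for the collection to be full; fullness is established in Theorem \ref{full-thm} below.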
\begin{lem}\label{full-lem}
Let ${\cal C}\subset \mathop{\mathrm D\kern0pt}\nolimits^b(LG(4,8))$ be the triangulated subcategory generated by the exceptional
collection in Theorem \ref{exc-col-thm}. Then the following bundles belong to ${\cal C}$:
\noindent
(i) $Q^{\ast}(j)$, $j=0,\ldots,4$;
\noindent
(ii) $S^2Q(j)$, $j=0,\ldots,4$;
\noindent
(iii) $Q\otimes\wedge^2 Q(j)$, $j=0,\ldots,3$;
\noindent
(iv) $Q\otimes Q^{\ast}(j)$, $j=1,\ldots,4$.
\end{lem}
\noindent {\it Proof} . {\bf Step 1}. $Q^{\ast}(j),S^2Q^{\ast}(j)\in{\cal C}$ for $j=0,\ldots,4$.
Indeed, the fact that $Q^{\ast}(j)\in{\cal C}$ follows immediately from \eqref{basic-seq}. Similarly,
the assertion for $S^2Q^{\ast}(j)$ follows from the exact sequence
\begin{equation}\label{S2Q*-seq}
0\to S^2Q^{\ast}\to S^2V\otimes{\cal O}\to V\otimes Q\to\wedge^2Q\to 0
\end{equation}
obtained from \eqref{basic-seq}.
\vspace*{0.1cm}
\noindent
{\bf Step 2}. $Q\otimes Q(j)\in{\cal C}$ for $j=1,2,3,4$. This follows from the exact sequence
\begin{equation}\label{S2Q-seq}
0\to \wedge^2 Q^{\ast}\to V\otimes Q^{\ast}\to S^2V\otimes{\cal O}\to S^2Q\to 0,
\end{equation}
dual to \eqref{S2Q*-seq}, since $\wedge^2Q^{\ast}=\wedge^2Q(-1)$ and $Q^{\ast}(j)\in{\cal C}$ by Step 1.
\vspace*{0.1cm}
\noindent
{\bf Step 3}. $\wedge^2Q\otimes\wedge^2Q(2)\in{\cal C}$.
It follows from the basic sequence \eqref{basic-seq} that $\wedge^4 V\otimes{\cal O}(3)$ has
a filtration with the subsequent quotients ${\cal O}(4)$, $Q^{\ast}\otimes Q^{\ast}(4)$,
$\wedge^2Q\otimes\wedge^2Q(2)$, $Q\otimes Q(2)$ and ${\cal O}(2)$. All of them except for
$\wedge^2Q\otimes\wedge^2Q(2)$ belong to ${\cal C}$, by Steps 1 and 2. This
implies the assertion.
\vspace*{0.1cm}
\noindent
{\bf Step 4}. $Q^{\ast}\otimes Q(j)\in{\cal C}$ for $j=1,2,3,4$. Indeed, tensoring \eqref{basic-seq} with $Q$
we get an exact sequence
$$0\to Q^{\ast}\otimes Q\to V\otimes Q\to Q\otimes Q\to 0,$$
so the assertion follows from Step 2.
\vspace*{0.1cm}
\noindent
{\bf Step 5}. $Q\otimes\wedge^2Q\in{\cal C}$. First, observe that $S_1=Q\oplus E\in{\cal C}$.
Now the exact sequence \eqref{SR-seq} shows that $R_1\in{\cal C}$.
But $Q\otimes\wedge^2Q=\wedge^3Q\oplus S^{(2,1)}Q=Q^{\ast}(1)\oplus R_1$, so it is in ${\cal C}$ (recall
that $Q^{\ast}(1)\in{\cal C}$ by Step 1).
\vspace*{0.1cm}
\noindent
{\bf Step 6}. $Q\otimes\wedge^2Q(j-1),Q\otimes S^2Q(j)\in{\cal C}$ for $j=1,2,3,4$.
Consider the exact sequence
$$0\to Q\otimes\wedge^2 Q\to V\otimes Q^{\ast}\otimes Q(1)\to S^2V\otimes Q(1)\to Q\otimes S^2Q(1)\to 0$$
obtained by tensoring \eqref{S2Q-seq} with $Q(1)$.
Using Steps 4 and 5 we deduce that $Q\otimes S^2Q(1)\in{\cal C}$. Note that the subcategory
${\cal C}$ is admissible, so it is closed under passing to direct summands. Since
$Q\otimes S^2Q(1)=S^3Q(1)\oplus S^{(2,1)}Q(1)$, we derive that $S^{(2,1)}Q(1)\in{\cal C}$.
This implies that $Q\otimes\wedge^2 Q(1)=Q^{\ast}(2)\oplus S^{(2,1)}Q(1)\in{\cal C}$ (where
$Q^{\ast}(2)\in{\cal C}$ by Step 1). Now we tensor the above exact sequence by
${\cal O}(1)$ and iterate the above argument.
\vspace*{0.1cm}
\noindent
{\bf Step 7}. $Q\otimes S^3Q(2)\in{\cal C}$.
Consider the exact sequence
\begin{equation}\label{S3Q-seq}
0\to Q(-1)\to V\otimes\wedge^2 Q(-1)\to S^2V\otimes Q^{\ast}\to S^3V\otimes{\cal O}\to S^3Q\to 0
\end{equation}
obtained from \eqref{basic-seq}. Tensoring it with $Q(2)$ and using Steps 2, 4 and 6
we deduce the assertion.
\vspace*{0.1cm}
\noindent
{\bf Step 8}. $S^2Q\otimes S^2Q(2)\in{\cal C}$.
We have $S^2Q\otimes S^2Q(2)=Q\otimes S^3Q(2)\oplus S^{(2,2)}Q(2)$. Hence, by Step 7,
it is enough to check that $S^{(2,2)}Q(2)\in{\cal C}$. But $S^{(2,2)}Q(2)$ is a direct summand in
$\wedge^2Q\otimes\wedge^2Q(2)$, so the assertion follows from Step 3.
\vspace*{0.1cm}
\noindent
{\bf Step 9}. $S^4Q(1)\in{\cal C}$.
This follows immediately from the exact sequence
$$0\to{\cal O}\to V\otimes Q\to S^2V\otimes\wedge^2Q\to S^3V\otimes Q^{\ast}(1)\to S^4V\otimes{\cal O}(1)\to S^4Q(1)\to 0$$
deduced from \eqref{basic-seq}.
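Note that this is the resolution
$$0\to\wedge^4Q^{\ast}\otimes{\cal O}\to \wedge^3Q^{\ast}\otimes V\otimes{\cal O}\to \wedge^2Q^{\ast}\otimes S^2V\otimes{\cal O}\to Q^{\ast}\otimes S^3V\otimes{\cal O}\to S^4V\otimes{\cal O}\to S^4Q\to 0$$
tensored with ${\cal O}(1)$ and rewritten using the isomorphisms $\wedge^iQ^{\ast}(1)\simeq\wedge^{4-i}Q$ (recall that $\wedge^4Q={\cal O}(1)$ on $LG(4,8)$).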
\vspace*{0.1cm}
\noindent
{\bf Step 10}. $\wedge^2Q\otimes S^2Q(1)\in{\cal C}$.
Consider the exact sequence
$$0\to\wedge^2 Q^{\ast}\to\wedge^2 V\otimes{\cal O}\to V\otimes Q\to S^2Q\to 0$$
deduced from \eqref{basic-seq}. Tensoring it with $S^2 Q(2)$ we get the exact sequence
$$0\to \wedge^2 Q\otimes S^2Q(1)\to
\wedge^2 V\otimes S^2Q(2)\to V\otimes Q\otimes S^2Q(2)\to S^2Q\otimes S^2Q(2)\to 0.$$
Here all the nonzero terms except for the first one belong to ${\cal C}$ by
Steps 2, 6 and 8, so the assertion follows.
\noindent
{\bf Step 11}. Finally, we are going to deduce that $Q\otimes Q\in{\cal C}$. Tensoring \eqref{S3Q-seq} by
$Q(1)$ we get an exact sequence
$$0\to Q\otimes Q\to V\otimes Q\otimes\wedge^2 Q\to S^2V\otimes Q^{\ast}\otimes Q(1)\to S^3V\otimes Q(1)\to Q\otimes S^3Q(1)\to 0.$$
All the nonzero terms except for the first and the last belong to ${\cal C}$ by Steps 4 and 5.
Thus, it is enough to check that $Q\otimes S^3Q(1)\in{\cal C}$. We have
$Q\otimes S^3Q(1)= S^4Q(1)\oplus S^{(3,1)}Q(1)$. It remains to observe that
$S^4Q(1)\in{\cal C}$ by Step 9, while $S^{(3,1)}Q(1)\in{\cal C}$ as a direct summand of
$\wedge^2 Q\otimes S^2Q(1)$ which is in ${\cal C}$ by Step 10.
\qed\vspace{3mm}
\vspace*{0.5cm}
\begin{thm}\label{full-thm}
The exceptional collection on $LG(4,8)$ considered in Theorem \ref{exc-col-thm} is full.
\end{thm}
\noindent {\it Proof} . \ Recall that $Q$ is dual to the universal subbundle $\mathcal U=\mathcal U_4\subset V\otimes{\cal O}$.
Taking the dual of the collection in question we obtain the
collection
\begin{equation}\label{eq:dualcollection}
(\wedge^2\mathcal U _4(-4),\mathcal U _4(-4),\mathcal O(-4),\ldots,
\wedge^2\mathcal U _4(-1),\mathcal U _4(-1),\mathcal O(-1),\wedge^2\mathcal U _4,\mathcal U _4,E^{\ast},\mathcal O)
\end{equation}
that generates the admissible triangulated subcategory ${\cal C}^{\ast}\subset\mathop{\mathrm D\kern0pt}\nolimits ^{b}(LG(4,8))$.
It is enough to check that ${\cal C}^{\ast}=\mathop{\mathrm D\kern0pt}\nolimits ^{b}(LG(4,8))$. Consider the
diagram with $p$ and $\pi$ being natural projections:
\begin{diagram}
&&{\rm F}_{1,4,8}&&\\
&\ldTo{\pi}&&\rdTo{p}&\\
{\mathbb P}^7&&&& LG(4,8)
\end{diagram}
Here ${\rm F}_{1,4,8}$ is the partial flag variety consisting of pairs
$(l\subset U)$, where $l$ is a line in a Lagrangian subspace $U\subset V$. The
variety ${\rm F}_{1,4,8}$ is naturally embedded into the product
${\mathbb P}^7\times LG(4,8)$. Let us denote by $i:{\rm F}_{1,4,8}\hookrightarrow {\mathbb P}^7\times
LG(4,8)$ the natural embedding. Consider the fiber $\pi^{-1}(x)$ over a
point $x$ in $\mathbb P ^7$. The variety $\pi^{-1}(x)$ is isomorphic to the
Lagrangian Grassmannian $LG(3,6)$. There is a rank
three vector bundle $\mathcal U _3$ on ${\rm F}_{1,4,8}$ such that its
restriction to any fiber $\pi^{-1}(x)$ is isomorphic to the universal bundle over
this fiber. Recall that the derived category of coherent sheaves on
$\pi^{-1}(x)$ has a full exceptional collection:
\begin{equation}\label{LG36-col}
((\mathcal U _3\otimes \mathcal O _{\pi}(-3))|_{\pi^{-1}(x)},\mathcal O _{\pi}(-3)|_{\pi^{-1}(x)},(\mathcal U
_3\otimes \mathcal O _{\pi}(-2))|_{\pi^{-1}(x)},\mathcal O _{\pi}(-2)|_{\pi^{-1}(x)},\ldots,\mathcal U _3|_{\pi^{-1}(x)},\mathcal O _{\pi^{-1}(x)}).
\end{equation}
Here $\mathcal O _{\pi}(-1)$ is a line bundle that is isomorphic to ${\rm
det} \ \mathcal U _3$. Therefore, by Theorem 3.1 of \cite{Sam},
the category $\mathop{\mathrm D\kern0pt}\nolimits ^{b}({\rm F}_{1,4,8})$ has a semiorthogonal decomposition:
\begin{equation}
\mathop{\mathrm D\kern0pt}\nolimits ^{b}({\rm F}_{1,4,8}) = \langle \pi ^{\ast}\mathop{\mathrm D\kern0pt}\nolimits ^{b}(\mathbb P ^7)\otimes
\mathcal U _3\otimes \mathcal O _{\pi}(-3),\pi ^{\ast}\mathop{\mathrm D\kern0pt}\nolimits ^{b}(\mathbb P
^7)\otimes \mathcal O _{\pi}(-3),\ldots,\pi ^{\ast}\mathop{\mathrm D\kern0pt}\nolimits ^{b}(\mathbb P ^7)\otimes
\mathcal U _3,\pi ^{\ast}\mathop{\mathrm D\kern0pt}\nolimits ^{b}(\mathbb P ^7)\rangle .
\end{equation}
There is a short exact sequence of vector bundles on ${\rm F}_{1,4,8}$:
\begin{equation}\label{eq:seq1}
0\rightarrow {\pi}^{\ast}\mathcal O (-1)\rightarrow p^{\ast}\mathcal U _4\rightarrow
\mathcal U _3\rightarrow 0.
\end{equation}
Taking determinants we get an isomorphism of line bundles $p^{\ast}\mathcal O (-1) =
\pi ^{\ast}\mathcal O (-1)\otimes \mathcal O _{\pi}(-1)$.
Therefore, we can replace $\mathcal O_{\pi}(i)$ by $p^{\ast}\mathcal O(i)$ in the above semiorthogonal
decomposition. Thus, to prove the statement it
is sufficient to show that all the subcategories
$$p_{\ast}(\pi ^{\ast}\mathop{\mathrm D\kern0pt}\nolimits ^{b}(\mathbb P ^7)\otimes\mathcal U_3)\otimes \mathcal O(j),
p_{\ast}(\pi ^{\ast}\mathop{\mathrm D\kern0pt}\nolimits ^{b}(\mathbb P ^7))\otimes\mathcal O(j), \ \text{ for }j=-3,\ldots,0$$
are contained in ${\cal C}^{\ast}$.
The functor $p_{\ast}\pi^{\ast}:\mathop{\mathrm D\kern0pt}\nolimits ^{b}({\mathbb P}^7)\rightarrow \mathop{\mathrm D\kern0pt}\nolimits
^{b}(LG(4,8))$ can be computed using the Koszul resolution of
the sheaf $i_{\ast}\mathcal O _{{\rm F}_{1,4,8}}$ on ${\mathbb P}^7\times LG(4,8)$:
\begin{equation}\label{eq:Koszulres}
0 \rightarrow {\pi}^{\ast}\mathcal O (-4)\otimes p^{\ast}\mathcal O (-1) \rightarrow \dots
\rightarrow {\pi}^{\ast}\mathcal O (-2)\otimes \wedge ^{2}p^{\ast}\mathcal U _4\rightarrow {\pi}^{\ast}\mathcal O (-1)\otimes
p^{\ast}\mathcal U _4\rightarrow \mathcal O
\rightarrow i_{\ast}{\mathcal O _{{\rm F}_{1,4,8}}} \rightarrow 0
\end{equation}
Using this resolution we immediately check the inclusion
$$p_{\ast}(\pi ^{\ast}\mathop{\mathrm D\kern0pt}\nolimits ^{b}(\mathbb P ^7))\otimes\mathcal O(j)\subset
\langle\mathcal O(j-1),\wedge ^3\mathcal U _4(j)=\mathcal U _4^{\ast}(j-1), \wedge ^2\mathcal U _4(j),\mathcal U_4(j),\mathcal O(j)\rangle.$$
By Lemma \ref{full-lem}(i), for $j=-3,\ldots,0$ the right-hand side belongs to ${\cal C}^{\ast}$.
Next, using the sequence \eqref{eq:seq1} we see that to prove the inclusion
$p_{\ast}(\pi ^{\ast}\mathop{\mathrm D\kern0pt}\nolimits ^{b}(\mathbb P ^7)\otimes\mathcal U_3)\otimes \mathcal O(j)\subset{\cal C}^{\ast}$ it is enough to check
that
$$\langle\mathcal U_4(j-1), \mathcal U_4\otimes\mathcal U _4^{\ast}(j-1), \mathcal U_4\otimes\wedge ^2\mathcal U _4(j),\mathcal U_4\otimes\mathcal U_4(j),\mathcal U_4(j)\rangle
\subset{\cal C}^{\ast}$$
for $j=-3,\ldots,0$. It remains to apply Lemma \ref{full-lem} (and dualize).
\qed\vspace{3mm}
\noindent{\it Another version of the proof}.
We can simplify computations in the above argument by using a different semiorthogonal
decomposition of $\mathop{\mathrm D\kern0pt}\nolimits^b({\rm F}_{1,4,8})$:
$$\mathop{\mathrm D\kern0pt}\nolimits ^{b}({\rm F}_{1,4,8}) = \langle \pi ^{\ast}\mathop{\mathrm D\kern0pt}\nolimits ^{b}(\mathbb P ^7)\otimes
\mathcal U _3\otimes p^{\ast}\mathcal O(-3),\pi ^{\ast}\mathop{\mathrm D\kern0pt}\nolimits ^{b}(\mathbb P
^7)\otimes p^{\ast}\mathcal O(-3),\ldots,
\pi ^{\ast}\mathop{\mathrm D\kern0pt}\nolimits ^{b}(\mathbb P ^7),\pi ^{\ast}\mathop{\mathrm D\kern0pt}\nolimits ^{b}(\mathbb P ^7)\otimes
\mathcal U _3^{\ast}\rangle.$$
The restriction of this decomposition to the fiber $\pi^{-1}(x)\simeq LG(3,6)$ is
the exceptional collection obtained
from collection \eqref{LG36-col} by the right mutation of $\mathcal U_3|_{\pi^{-1}(x)}$ through
${\cal O}$. In the same way as above we check that
$$p_{\ast}(\pi ^{\ast}\mathop{\mathrm D\kern0pt}\nolimits ^{b}(\mathbb P ^7))\otimes\mathcal O(j)\subset{\cal C}^{\ast} \ \text{ for }j=-3,\ldots,0,$$
$$p_{\ast}(\pi ^{\ast}\mathop{\mathrm D\kern0pt}\nolimits ^{b}(\mathbb P ^7)\otimes\mathcal U_3)\otimes \mathcal O(j)\subset{\cal C}^{\ast} \ \text{ for }j=-1,-2,-3, \ \text{and}$$
$$p_{\ast}(\pi^{\ast}\mathcal O(i)\otimes\mathcal U_3^{\ast})\in{\cal C}^{\ast} \ \text{ for }i=-6,\ldots,0.$$
The point is that this will only require using (easy) Steps 1,2,4,5 and 6 of Lemma \ref{full-lem}.
Thus, if we consider the semiorthogonal decomposition
$$\mathop{\mathrm D\kern0pt}\nolimits^b({\rm F}_{1,4,8})=\langle \AA, \langle\pi^{\ast}\mathcal O(1)\otimes\mathcal U_3^{\ast}\rangle\rangle,$$
where $\AA=\langle\pi^{\ast}\mathcal O(1)\otimes\mathcal U_3^{\ast}\rangle^{\perp}$, then
$p_{\ast}\AA\subset{\cal C}^{\ast}$. By adjointness, it follows that for an object
$E\in\mathop{\mathrm D\kern0pt}\nolimits^b(LG(4,8))$ such that $\operatorname{Hom}^*(E,{\cal C}^{\ast})=0$,
one has $p^{\ast}E\in\langle\pi^{\ast}\mathcal O(1)\otimes\mathcal U_3^{\ast}\rangle$,
i.e., $p^{\ast}E\simeq W^{\bullet}\otimes \pi^{\ast}\mathcal O(1)\otimes\mathcal U_3^{\ast}$ for a graded vector space
$W^{\bullet}$. Hence, $E\simeq W^{\bullet}\otimes p_{\ast}(\pi^{\ast}\mathcal O(1)\otimes\mathcal U_3^{\ast})$.
Finally, using resolution \eqref{eq:Koszulres} and the dual of sequence \eqref{eq:seq1}
one can compute that
$$p_{\ast}(\pi^{\ast}\mathcal O(1)\otimes\mathcal U_3^{\ast})\simeq\wedge^2\mathcal U_4^{\ast}\simeq\wedge^2\mathcal U_4(1).$$
Thus, $E\simeq W^{\bullet}\otimes\wedge^2\mathcal U_4(1)$. But $\operatorname{Hom}^*(\wedge^2\mathcal U_4(1),\wedge^2\mathcal U_4(-4))\neq 0$
by Serre duality, so the condition $\operatorname{Hom}^*(E,{\cal C}^{\ast})=0$ implies that $W^{\bullet}=0$.
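Here we use the standard facts that $\dim LG(4,8)=10$ and that the canonical bundle of $LG(4,8)$ is ${\cal O}(-5)$, so that Serre duality gives
$$\operatorname{Hom}^{10}(\wedge^2\mathcal U_4(1),\wedge^2\mathcal U_4(-4))\simeq
\operatorname{Hom}^0(\wedge^2\mathcal U_4(-4),\wedge^2\mathcal U_4(-4))^{\ast}\simeq k.$$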
\qed\vspace{3mm}
\section{The case of $LG(5,10)$}
In this section we assume that $n=5$ (so $V$ is $10$-dimensional).
It turns out that in this case the exceptional bundles constructed so far
do not generate the entire derived category $\mathop{\mathrm D\kern0pt}\nolimits^b(LG(5,10))$. We are going to construct
another exceptional bundle on $LG(5,10)$ starting from the bundle $T=S^{(3,1,1)}Q$.
Let us denote by $\omega_i$ the $i$th fundamental weight of the root system $C_5$.
For a dominant weight $\lambda$ we denote by $V(\lambda)$ the corresponding irreducible representation of
$\operatorname{Sp}(10)$ (for example, $V(\omega_1)=V$, $V(\omega_2)=\wedge^2 V/k$, $V(2\omega_1)=S^2V$).
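For orientation in the computations below, let us record the dimensions of these representations:
$$\dim V(\omega_1)=10,\qquad \dim V(\omega_2)=\binom{10}{2}-1=44,\qquad \dim V(2\omega_1)=\binom{11}{2}=55.$$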
\begin{lem}\label{T-lem}
Assume that $n=5$.
\noindent
(i) $\operatorname{Hom}^*(\wedge^iQ,T(j))=0$ for $i\in [0,3]$, $j\in [-5,-1]$. Also, $\operatorname{Hom}^*(T,{\cal O})=0$.
\noindent
(ii) $\operatorname{Hom}^*(R_1,T(j))=0$ for $j\in [-5,-1]$.
\noindent
(iii) $\operatorname{Hom}^*(T,T(-3))=0$.
\noindent
(iv) $\operatorname{Hom}^i(T,T)=0$ for $i>2$, $\operatorname{Hom}^2(T,T)=V(2\omega_1+\omega_2)\oplus V(\omega_1+\omega_3)$,
$\operatorname{Hom}^1(T,T)=V^{\otimes 2}/k\oplus S^2V$, $\operatorname{Hom}^0(T,T)=k$.
\noindent
(v) $\operatorname{Hom}^i(\wedge^3Q,T)=0$ for $i>0$ and $\operatorname{Hom}^0(\wedge^3Q,T)=S^2V$. Also,
$\operatorname{Hom}^i(T,\wedge^3Q)=0$ for $i\neq 1,2$, $\operatorname{Hom}^1(T,\wedge^3Q)=k$ and $\operatorname{Hom}^2(T,\wedge^3Q)=\wedge^2V/k$.
\noindent
(vi) $\operatorname{Hom}^i(\wedge^2Q,T)=0$ for $i>0$ and $\operatorname{Hom}^0(\wedge^2Q,T)=V(3\omega_1)\oplus V(\omega_1+\omega_2)$.
Also, $\operatorname{Hom}^i(T,\wedge^2Q)=0$ for $i\neq 2$ and $\operatorname{Hom}^2(T,\wedge^2Q)=V$.
\noindent
(vii) $\operatorname{Hom}^i(Q,T)=0$ for $i>0$ and $\operatorname{Hom}^0(Q,T)=V(2\omega_1+\omega_2)\oplus V(\omega_1+\omega_3)$.
Also, $\operatorname{Hom}^i(T,Q)=0$ for $i\neq 2$ and $\operatorname{Hom}^2(T,Q)=k$.
\noindent
(viii) $\operatorname{Hom}^i(R_1,T)=0$ for $i>1$, $\operatorname{Hom}^1(R_1,T)=V(2\omega_1+\omega_2)\oplus V(\omega_1+\omega_3)$ and
$\operatorname{Hom}^0(R_1,T)=V^{\otimes 2}/k$. Also, $\operatorname{Hom}^i(T,R_1)=0$ for $i\neq 1,2$, $\operatorname{Hom}^1(T,R_1)=k$ and
$\operatorname{Hom}^2(T,R_1)=V^{\otimes 2}/k$.
\noindent
(ix) $\operatorname{Hom}^*(T,S^2Q^{\ast})=0$.
\noindent
(x) $\operatorname{Hom}^i(T,S^3Q^{\ast})=0$ for $i\neq 4$.
\noindent
(xi) $\operatorname{Hom}^i(T,R_3)=0$ for $i\neq 1$.
\noindent
(xii) $\operatorname{Hom}^i(T,\wedge^3Q\otimes Q^{\ast})=0$ for $i\neq 2$.
\noindent
(xiii) $\operatorname{Hom}^4(T,\wedge^3Q\otimes\wedge^2Q^{\ast})=0$.
\end{lem}
The proof is a straightforward application of Bott's theorem.
By part (viii) of the above Lemma, we have a canonical nonsplit extension of vector bundles
\begin{equation}\label{PT-seq}
0\to R_1\to P\to T\to 0
\end{equation}
\begin{lem}\label{P-lem}
(i) The map $\operatorname{Hom}^1(R_1,Q)\to\operatorname{Hom}^2(T,Q)$ induced by \eqref{PT-seq} is an isomorphism.
\noindent
(ii) The map $\operatorname{Hom}^1(R_1,\wedge^2Q)\to\operatorname{Hom}^2(T,\wedge^2Q)$ induced by \eqref{PT-seq} is an isomorphism.
\noindent
(iii) The map $\operatorname{Hom}^1(R_1,R_1)\to\operatorname{Hom}^2(T,R_1)$ induced by \eqref{PT-seq} is an isomorphism.
\noindent
(iv) The map $\operatorname{Hom}^1(R_1,T)\to\operatorname{Hom}^2(T,T)$ induced by \eqref{PT-seq} is an isomorphism, while
the map $\operatorname{Hom}^0(R_1,T)\to\operatorname{Hom}^1(T,T)$ is injective.
\noindent
(v) One has $\operatorname{Hom}^*(P,Q)=\operatorname{Hom}^*(P,\wedge^2Q)=\operatorname{Hom}^*(P,R_1)=\operatorname{Hom}^{>1}(P,P)=0$
and $\operatorname{Hom}^1(P,P)=S^2V$, $\operatorname{Hom}^0(P,P)=k$.
Also, $\operatorname{Hom}^i(P,\wedge^3Q)=0$ for $i\neq 1$ and $\operatorname{Hom}^1(P,\wedge^3 Q)=k$.
\end{lem}
\noindent {\it Proof} . (i) We have to check that the natural map
$$\operatorname{Hom}^1(R_1,Q)\otimes\operatorname{Hom}^1(T,R_1)\to\operatorname{Hom}^2(T,Q)$$
is an isomorphism. Note that both sides are one-dimensional (see Lemma \ref{Bott-lem}(iii) and
Lemma \ref{T-lem}(vii),(viii)), so it is enough to
check that this map is nonzero. We have natural embeddings
$S^2Q^{\ast}\to R_1^{\ast}\otimes Q$ and $S^2Q^{\ast}\to T^{\ast}\otimes R_1$ inducing
isomorphisms on $H^1$. Let us consider the induced map
$$\a:S^2Q^{\ast}\otimes S^2Q^{\ast}\to T^{\ast}\otimes Q.$$
Note that
$$S^2Q^{\ast}\otimes S^2Q^{\ast}=S^4Q^{\ast}\oplus S^{(2,2)}Q^{\ast}\oplus S^{(3,1)}Q^{\ast},$$
where the first two terms have zero cohomology while the last term has one-dimensional $H^2$.
Thus, it is enough to check that the restriction of $\a$ to $S^{(3,1)}Q^{\ast}$ is nonzero and
that the natural map
$$H^1(S^2Q^{\ast})\otimes H^1(S^2Q^{\ast})\to H^2(S^2Q^{\ast}\otimes S^2Q^{\ast})$$
between one-dimensional spaces is nonzero. Let us start by splitting the exact sequence
\eqref{S2Q*-seq} into two short exact sequences
\begin{equation}\label{K-eq}
0\to S^2Q^{\ast}\to S^2V\otimes{\cal O}\to K\to 0
\end{equation}
\begin{equation}\label{K2-eq}
0\to K\to V\otimes Q\to \wedge^2 Q\to 0
\end{equation}
Then \eqref{K-eq} induces the surjections
$H^0(K)\to H^1(S^2Q^{\ast})$ and
$H^1(K\otimes S^2Q^{\ast})\to H^2(S^2Q^{\ast}\otimes S^2Q^{\ast})$
(by the vanishing of $H^1({\cal O})$ and $H^2(S^2Q^{\ast})$).
Hence, it is enough to check that the natural map
$$H^0(K)\otimes H^1(S^2Q^{\ast})\to H^1(K\otimes S^2Q^{\ast})$$
is an isomorphism (note that both sides are isomorphic to $S^2V\oplus k$).
Now the sequence \eqref{K2-eq} induces embeddings
$H^0(K)\to V\otimes H^0(Q)$ and $H^1(K\otimes S^2Q^{\ast})\to V\otimes H^1(Q\otimes S^2Q^{\ast})$
(by the vanishing of $H^0(\wedge^2Q\otimes S^2Q^{\ast})$). Hence, we are reduced to proving
that the map
$$H^0(Q)\otimes H^1(S^2Q^{\ast})\to H^1(Q\otimes S^2Q^{\ast})$$
is an isomorphism. But this follows from the exact sequence \eqref{basic-seq} and the vanishing
of $H^*(Q^{\ast}\otimes S^2Q^{\ast})$.
It remains to check that the restriction of $\a$ to $S^{(3,1)}Q^{\ast}\subset S^2Q^{\ast}\otimes S^2Q^{\ast}$
is nonzero (where we can just think of $Q$ as a vector space).
Let us view $T$ (resp., $R_1$) as the image of the Koszul differential
$S^2Q\otimes\wedge^3Q\to S^3Q\otimes \wedge^2Q$ (resp., $Q\otimes\wedge^2Q\to S^2Q\otimes Q$).
Then the embedding $S^2Q^{\ast}\hookrightarrow T^{\ast}\otimes R_1$ corresponds to the composed map
\begin{equation}\label{TR1-eq}
S^2Q^{\ast}\otimes T\to S^2Q^{\ast}\otimes S^3Q\otimes \wedge^2Q\to Q\otimes\wedge^2Q\to R_1,
\end{equation}
where the second arrow is induced by the natural map $S^2Q^{\ast}\otimes S^3Q\to Q$.
On the other hand, the embedding $S^2Q^{\ast}\hookrightarrow R_1^{\ast}\otimes Q$ corresponds to
the natural map $S^2Q^{\ast}\otimes R_1\to Q$ induced by the embedding $R_1\to S^2Q\otimes Q$.
Thus, $\a$ corresponds to the composed map
$$
\a':S^2Q^{\ast}\otimes S^2Q^{\ast}\otimes T\to S^2Q^{\ast}\otimes S^2Q^{\ast}\otimes S^3Q\otimes\wedge^2Q\to
S^2Q^{\ast}\otimes Q\otimes\wedge^2Q\to S^2Q^{\ast}\otimes S^2Q\otimes Q\to Q,
$$
where the third arrow is induced by the Koszul differential.
Let us choose a basis $e_1,\ldots,e_n$ for $Q$
and define an element $t\in T$ by
$$t=e_4^2e_1\otimes (e_2\wedge e_3)+e_4^2e_2\otimes (e_3\wedge e_1)+e_4^2e_3\otimes (e_1\wedge e_2),$$
where we view $T$ as a subbundle in $S^3Q\otimes \wedge^2Q$. Then one can compute
the induced functional on $S^2Q^{\ast}\otimes S^2Q^{\ast}$
$$x\mapsto \langle \a'(x\otimes t),e_3^{\ast}\rangle=2\langle x, (e_4e_2)\wedge(e_4e_1)\rangle,$$
where $x\in S^2Q^{\ast}\otimes S^2Q^{\ast}$. Now we observe that $S^{(3,1)}Q$ can be identified
with the image of $\wedge^2(S^2Q)$ under
the natural map $\b:S^2Q\otimes S^2Q\to S^3Q\otimes Q$ given by
$$\b(f\otimes(v_1v_2))=(fv_1)\otimes v_2+(fv_2)\otimes v_1.$$
Finally we compute that
$$\b((e_4e_2)\wedge(e_4e_1))=(e_4^2e_2)\otimes e_1-(e_4^2e_1)\otimes e_2\neq 0,$$
which finishes the proof.
\noindent
(ii) Since both the source and the target are isomorphic to $V$, it is enough to check surjectivity.
Furthermore, it suffices to prove that the composition map
$$\operatorname{Hom}^1(T,R_1)\otimes\operatorname{Hom}^1(R_1,Q)\otimes\operatorname{Hom}^0(Q,\wedge^2Q)\to\operatorname{Hom}^2(T,\wedge^2Q)$$
is surjective. By part (i), this reduces to surjectivity of the composition map
$$\operatorname{Hom}^2(T,Q)\otimes\operatorname{Hom}^0(Q,\wedge^2Q)\to\operatorname{Hom}^2(T,\wedge^2Q).$$
Looking at the exact sequence \eqref{K2-eq}, we see that this would follow from
the vanishing of $\operatorname{Hom}^3(T,K)$. But this vanishing follows from the exact sequence \eqref{K-eq}
since $\operatorname{Hom}^*(T,{\cal O})=\operatorname{Hom}^*(T,S^2Q^{\ast})=0$ (see Lemma \ref{T-lem}(i),(ix)).
\noindent
(iii) Both the source and the target are isomorphic to $V^{\otimes 2}/k$ (see Lemma
\ref{Bott-lem}(iv) and Lemma \ref{T-lem}(viii)), so it suffices to
check surjectivity. By part (ii), it is enough to prove that the map
\begin{equation}\label{TR1-map}
\operatorname{Hom}^2(T,\wedge^2Q)\otimes V\to\operatorname{Hom}^2(T,R_1)
\end{equation}
is surjective. Let us first check that $S^2V\subset\operatorname{Hom}^2(T,R_1)$ is in the image.
The exact sequence \eqref{basic-seq} induces a long exact sequence
$$\ldots\to \operatorname{Hom}^2(T,\wedge^2Q\otimes Q^{\ast})\to \operatorname{Hom}^2(T,\wedge^2Q)\otimes V\stackrel{f}{\to}
\operatorname{Hom}^2(T,\wedge^2Q\otimes Q)\to\ldots$$
Using Bott's theorem one can check that $\operatorname{Hom}^2(T,\wedge^2Q\otimes Q^{\ast})$ does not contain
any factors isomorphic to $S^2V$, so the restriction of $f$ to $S^2V$ is an embedding.
On the other hand,
$$\operatorname{Hom}^2(T,\wedge^2Q\otimes Q)=\operatorname{Hom}^2(T,R_1)\oplus\operatorname{Hom}^2(T,\wedge^3Q),$$
where the second factor is $\wedge^2V/k$, hence, $S^2V$ projects nontrivially to $\operatorname{Hom}^2(T,R_1)$.
It remains to check that $\wedge^2V/k\subset\operatorname{Hom}^2(T,R_1)$ is in the image of the map \eqref{TR1-map}.
It suffices to prove that it is in the image of the map
$$\operatorname{Hom}^2(T,Q\otimes Q)\otimes V\to\operatorname{Hom}^2(T,R_1),$$
or even
$$\operatorname{Hom}^2(T,Q)\otimes H^0(\wedge^2Q)\to\operatorname{Hom}^2(T,R_1).$$
We have a natural map
$$\gamma:S^2Q^{\ast}\otimes \wedge^2Q^{\ast}\to T^{\ast}\otimes Q,$$
such that its composition with the embedding $T^{\ast}\otimes Q\hookrightarrow S^2Q^{\ast}\otimes \wedge^3Q^{\ast} \otimes Q$
(induced by the surjection $S^2Q\otimes\wedge^3Q\to T$) is the identity map on $S^2Q^{\ast}$ tensored
with the natural embedding $\wedge^2Q^{\ast}\to\wedge^3Q^{\ast}\otimes Q$. Note that this implies that
$\gamma$ itself is an embedding. Hence, $\gamma$ induces an isomorphism on $H^2$.
Next, we claim that the composition map
$$H^2(S^2Q^{\ast}\otimes\wedge^2Q^{\ast})\otimes H^0(\wedge^2Q)\to H^2(S^2Q^{\ast}\otimes\wedge^2Q^{\ast}\otimes\wedge^2Q)$$
is surjective. Indeed, it is enough to check this with $H^0(\wedge^2Q)$ replaced by $\wedge^2 V$.
Then the exact sequence
$$0\to S^2Q^{\ast}\to V\otimes Q^{\ast}\to \wedge^2 V\otimes {\cal O}\to\wedge^2Q\to 0$$
shows that this follows from the vanishing of
$H^3(S^2Q^{\ast}\otimes\wedge^2Q^{\ast}\otimes Q^{\ast})$ and $H^4(S^2Q^{\ast}\otimes\wedge^2Q^{\ast}\otimes S^2Q^{\ast})$,
both of which are easily checked using Bott's theorem.
Now it remains to prove that the composed map
$$S^2Q^{\ast}\otimes\wedge^2Q^{\ast}\otimes\wedge^2Q\stackrel{\gamma\otimes\operatorname{id}}{\to} T^{\ast}\otimes Q\otimes\wedge^2Q\to T^{\ast}\otimes R_1$$
induces an embedding on $H^2$. It is enough to prove that the kernel of this map is $S^2Q^{\ast}$.
Using the embedding of $T^{\ast}$ into $S^2Q^{\ast}\otimes \wedge^3Q^{\ast}$ this reduces to checking
that the composition of the natural maps
$$\wedge^2Q^{\ast}\otimes\wedge^2Q\to \wedge^3Q^{\ast}\otimes Q\otimes\wedge^2Q\to\wedge^3Q^{\ast}\otimes R_1$$
has ${\cal O}$ as a kernel. Replacing this map by its composition with the
embedding $\wedge^3Q^{\ast}\otimes R_1\hookrightarrow\wedge^3Q^{\ast}\otimes S^2Q\otimes Q$ we see that it is enough
to prove the following fact from linear algebra.
Suppose we have a linear map $A:\wedge^2Q\to \wedge^2Q$ such that the induced map
$$\wedge^3Q\to Q\otimes\wedge^2Q\stackrel{\operatorname{id}\otimes A}{\to} Q\otimes\wedge^2Q\stackrel{d}{\to} S^2Q\otimes Q$$
is zero, where $d$ is the Koszul differential. Then $A$ is proportional to the identity. To prove this statement
we recall
that the kernel of $d$ is exactly $\wedge^3Q\subset Q\otimes\wedge^2Q$. Thus, the condition on $A$ is that
$\operatorname{id}_Q\otimes A$ preserves $\wedge^3Q\subset Q\otimes\wedge^2Q$. Let us fix some basis $(e_i)$ of $Q$ and
let $\partial_i:\wedge^3Q\to\wedge^2Q$ be the odd partial derivatives corresponding to the dual basis
of $Q^{\ast}$. Consider
$$e_1\otimes A(e_2\wedge e_3)+e_2\otimes A(e_3\wedge e_1)+ e_3\otimes A(e_1\wedge e_2)=\eta\in\wedge^3Q\subset Q\otimes \wedge^2Q.$$
Contracting with $e_3^*$ in the first factor of the tensor product $Q\otimes\wedge^2Q$ we obtain
$A(e_1\wedge e_2)=\partial_3\eta$. Hence,
$\partial_3A(e_1\wedge e_2)=\partial_3^2\eta=0$. In a similar way
$\partial_i A(e_1\wedge e_2)=0$ for $i>2$. It follows that $A(e_1\wedge e_2)$ is proportional to $e_1\wedge e_2$.
Thus, for every pair of linearly independent elements $x,y\in Q$, $A(x\wedge y)$ is proportional to $x\wedge y$.
Comparing $A((x+x')\wedge y)$ with $A(x\wedge y)$ and $A(x'\wedge y)$ we see that all the proportionality
coefficients coincide, hence $A$ is proportional to the identity.
\noindent
(iv) We have $\operatorname{Hom}^1(R_1,T)\simeq\operatorname{Hom}^2(T,T)$ (see Lemma \ref{T-lem}(iv),(viii)), so for the first assertion
it is enough to check the surjectivity.
By part (ii), it suffices to check that the map
$$\operatorname{Hom}^2(T,\wedge^2Q)\otimes\operatorname{Hom}^0(\wedge^2Q,T)\to\operatorname{Hom}^2(T,T)$$
is surjective. Furthermore, it is enough to prove that the map
$$\operatorname{Hom}^2(T,\wedge^2Q)\otimes\operatorname{Hom}^0(\wedge^2Q,\wedge^3Q)\otimes\operatorname{Hom}^0(\wedge^3Q,T)\to\operatorname{Hom}^2(T,T)$$
is surjective. We are going to do this in two steps: first, we will check that the map
\begin{equation}\label{Twe23-eq}
\operatorname{Hom}^2(T,\wedge^2Q)\otimes V\to \operatorname{Hom}^2(T,\wedge^3Q)
\end{equation}
is surjective, and then we will show the surjectivity of
\begin{equation}\label{Twe2T-eq}
\operatorname{Hom}^2(T,\wedge^3Q)\otimes\operatorname{Hom}^0(\wedge^3Q,T)\to\operatorname{Hom}^2(T,T)
\end{equation}
From the exact sequence \eqref{basic-seq} we get the following long exact sequence
$$0\to S^3Q^{\ast}\to S^3V\otimes{\cal O}\to S^2V\otimes Q\to V\otimes\wedge^2Q\to\wedge^3Q\to 0.$$
Thus, the surjectivity of \eqref{Twe23-eq} follows from the vanishing of
$\operatorname{Hom}^3(T, Q)$, $\operatorname{Hom}^4(T,{\cal O})$ and $\operatorname{Hom}^5(T,S^3Q^{\ast})$ (see Lemma \ref{T-lem}(i),(vii),(x)).
To deal with \eqref{Twe2T-eq}
we use the natural embedding $S^2Q\to\wedge^3Q^{\ast}\otimes T$ inducing an isomorphism
on $H^0$. Note also that since $\wedge^3Q\otimes S^2Q\simeq T\oplus R_3$, Lemma \ref{T-lem}(xi) implies
that the projection $T^{\ast}\otimes\wedge^3Q\otimes S^2Q\to T^{\ast}\otimes T$ induces an isomorphism on $H^2$.
Thus, we are reduced to showing the surjectivity of
$$\operatorname{Hom}^2(T,\wedge^3 Q)\otimes H^0(S^2Q)\to \operatorname{Hom}^2(T,\wedge^3Q\otimes S^2Q).$$
It suffices to prove the surjectivity of the maps
$$\operatorname{Hom}^2(T,\wedge^3 Q)\otimes V\to\operatorname{Hom}^2(T,\wedge^3 Q\otimes Q),$$
$$\operatorname{Hom}^2(T,\wedge^3 Q\otimes Q)\otimes V\to\operatorname{Hom}^2(T,\wedge^3 Q\otimes S^2Q).$$
The exact sequence \eqref{basic-seq} shows that the surjectivity of the first map follows from
the vanishing of $\operatorname{Hom}^3(T,\wedge^3 Q\otimes Q^{\ast})$ (see Lemma \ref{T-lem}(xii)). Similarly, for
the second map we use the exact sequence
$$0\to\wedge^2 Q^{\ast}\to\wedge^2 V\otimes{\cal O}\to V\otimes Q\to S^2Q\to 0$$
along with the vanishing of $\operatorname{Hom}^3(T,\wedge^3Q)$
and $\operatorname{Hom}^4(T,\wedge^3Q\otimes\wedge^2Q^{\ast})$ (see Lemma \ref{T-lem}(v),(xiii)).
Now let us prove the injectivity of the map $\operatorname{Hom}^0(R_1,T)\to\operatorname{Hom}^1(T,T)$. We have a natural
embedding $S^2Q\to R_1^{\ast}\otimes T$ inducing isomorphism on $H^0$ and an embedding
$S^2Q^{\ast}\to T^{\ast}\otimes R_1$ inducing isomorphism on $H^1$. We claim that the
composed map
\begin{equation}\label{S2T-map}
S^2Q\otimes S^2Q^{\ast}\to T^{\ast}\otimes T
\end{equation}
induces an embedding on $S^{(2,0,0,0,-2)}Q\subset S^2Q\otimes S^2Q^{\ast}$.
To prove this we can replace $Q$ by a vector space with a basis $e_1,\ldots,e_5$. Let
$e_1^*,\ldots,e_5^*$ be the dual basis of $Q^{\ast}$.
It is enough to check that the lowest weight vector $e_1^2\otimes (e_5^*)^2$ maps to a nonzero element
of $T^{\ast}\otimes T$ under \eqref{S2T-map}. By definition, this endomorphism of $T$ is the composition
of the map
$$T\to S^3Q\otimes\wedge^2Q\stackrel{\partial_5^2\otimes\operatorname{id}}{\to} Q\otimes\wedge^2Q\to R_1$$
with the map
$$R_1\to Q\otimes\wedge^2Q\stackrel{e_1^2}{\to} S^2Q\otimes Q\otimes\wedge^2Q\to S^3Q\otimes\wedge^2Q\to T.$$
Viewing $T$ as a direct summand of $S^2Q\otimes\wedge^3Q$ we obtain from the first (resp., second) map
a map $f:S^2Q\otimes\wedge^3Q\to R_1$ (resp., $g:R_1\to S^2Q\otimes\wedge^3Q$).
Identifying $R_1$ with $Q\otimes\wedge^2Q/\wedge^3Q$ we can write
$$f(t\otimes (x\wedge y\wedge z))=\partial_5^2(tx)\otimes(y\wedge z)+\partial_5^2(ty)\otimes(z\wedge x)+\partial_5^2(tz)\otimes(x\wedge y)
\mod\wedge^3Q,$$
$$g(x\otimes (y\wedge z)\mod\wedge^3Q)=2(e_1x)\otimes(e_1\wedge y\wedge z)+(e_1y)\otimes(e_1\wedge x\wedge z)+
(e_1z)\otimes(e_1\wedge y\wedge x),$$
where $t\in S^2Q$ and $x,y,z\in Q$ (for appropriate rescaling of $g$). Hence,
$$gf((e_4e_5)\otimes(e_1\wedge e_2\wedge e_5))=2g(e_4\otimes(e_1\wedge e_2)\mod\wedge^3Q)=2e_1^2\otimes(e_1\wedge e_4
\wedge e_2)\neq 0.$$
Thus, the map \eqref{S2T-map} induces
an embedding on $H^1$. So we are reduced to checking that the natural map
$$H^0(S^2Q)\otimes H^1(S^2Q^{\ast})\to H^1(S^2Q\otimes S^2Q^{\ast})$$
is an isomorphism. Since both sides are isomorphic to $S^2V$, it is enough to prove surjectivity.
The exact sequence \eqref{S2Q-seq}
shows that this follows from the vanishing of $H^2(Q^{\ast}\otimes S^2Q^{\ast})$
and $H^3(\wedge^2Q^{\ast}\otimes S^2Q^{\ast})$, which can be checked using Bott's theorem.
\noindent
(v) The vanishing of $\operatorname{Hom}^*(P,Q)$, $\operatorname{Hom}^*(P,\wedge^2Q)$ and $\operatorname{Hom}^*(P,R_1)$
follows directly from parts (i)--(iv) along with the computation of the relevant spaces in
Lemmas \ref{Bott-lem} and \ref{T-lem}. Similarly, we derive that $\operatorname{Hom}^0(P,T)=k$,
$\operatorname{Hom}^1(P,T)=S^2V$ and $\operatorname{Hom}^i(P,T)=0$ for $i>1$. Now one computes $\operatorname{Hom}^*(P,P)$
by applying the functor $\operatorname{Hom}(P,?)$
to the exact sequence \eqref{PT-seq} and using the vanishing of $\operatorname{Hom}^*(P,R_1)$.
To compute $\operatorname{Hom}^*(P,\wedge^3Q)$ it remains to check that the map
$$\operatorname{Hom}^1(R_1,\wedge^3Q)\to\operatorname{Hom}^2(T,\wedge^3Q)$$
induced by \eqref{PT-seq} is an isomorphism. Since both sides are isomorphic to
$\wedge^2V/k$, it is enough to prove surjectivity.
But this follows immediately from part (ii) along
with the surjectivity of the map \eqref{Twe23-eq} proved in part (iv).
\qed\vspace{3mm}
By part (v) of the above Lemma, we have a canonical nonsplit extension of vector bundles
\begin{equation}\label{PG-seq}
0\to\wedge^3 Q\to G\to P\to 0
\end{equation}
\begin{thm}\label{G-thm}
The vector bundle $G$ is exceptional and $\operatorname{Hom}^*(G,\wedge^3Q)=0$.
\end{thm}
\noindent {\it Proof} . First, applying the functor $\operatorname{Hom}(?,\wedge^3Q)$ to the sequence \eqref{PG-seq}
and using Lemma \ref{P-lem}(v)
we find that $\operatorname{Hom}^*(G,\wedge^3 Q)=0$. Next, applying the functor $\operatorname{Hom}(G,?)$ to this
sequence we derive isomorphisms $\operatorname{Hom}^i(G,G)\simeq\operatorname{Hom}^i(G,P)$.
Recall that $\operatorname{Hom}^*(\wedge^3Q,R_1)=0$ by Lemma \ref{R-van-lem}. Hence,
applying the functor $\operatorname{Hom}(\wedge^3Q,?)$ to the sequence \eqref{PT-seq} and using
Lemma \ref{T-lem}(v) we obtain that $\operatorname{Hom}^i(\wedge^3Q,P)=0$ for $i>0$ and $\operatorname{Hom}^0(\wedge^3Q,P)=S^2V$.
Thus, using the sequence \eqref{PG-seq} again along with the computation of $\operatorname{Hom}^*(P,P)$
(see Lemma \ref{P-lem}(v))
we see that it is enough to check that the natural map
$$\operatorname{Hom}^0(\wedge^3Q,P)\otimes\operatorname{Hom}^1(P,\wedge^3Q)\to\operatorname{Hom}^1(P,P)$$
is an isomorphism. Since $\operatorname{Hom}^*(P,R_1)=\operatorname{Hom}^*(\wedge^3Q,R_1)=0$ (see Lemma \ref{P-lem}(v)),
the exact sequence \eqref{PT-seq} gives an isomorphism of the above map with
$$\operatorname{Hom}^0(\wedge^3Q,T)\to\operatorname{Hom}^1(P,T)$$
induced by a nonzero element in $\operatorname{Hom}^1(P,\wedge^3Q)$.
Since the natural map $\operatorname{Hom}^1(T,\wedge^3Q)\to\operatorname{Hom}^1(P,\wedge^3Q)$ is an isomorphism (as we have
seen in the proof of Lemma \ref{P-lem}(v)), the above map factors
as the composition of the map
$$\operatorname{Hom}^0(\wedge^3Q,T)\stackrel{f}{\to}\operatorname{Hom}^1(T,T)$$
induced by a nonzero element in $\operatorname{Hom}^1(T,\wedge^3Q)$ followed by the map $h$ in the exact
sequence
$$0\to \operatorname{Hom}^0(R_1,T)\stackrel{g}{\to} \operatorname{Hom}^1(T,T)\stackrel{h}{\to} \operatorname{Hom}^1(P,T)\to 0.$$
Thus, it is enough to check that the images of the maps $f$ and $g$
are complementary in $\operatorname{Hom}^1(T,T)$. Since $\operatorname{Hom}^0(R_1,T)=V^{\otimes 2}/k$,
$\operatorname{Hom}^0(\wedge^3Q,T)=S^2V$, while $\operatorname{Hom}^1(T,T)=V^{\otimes 2}/k\oplus S^2V$ (see Lemma
\ref{T-lem}(iv),(v),(viii)), it suffices to prove that the images of $S^2V$ under $f$ and $g$
have trivial intersection. Note that we have a natural embedding $S^2Q\to R_1^{\ast}\otimes T$ (resp.,
$S^2Q\to \wedge^3Q^{\ast}\otimes T$)
inducing an embedding of $S^2V$ into $\operatorname{Hom}^0(R_1,T)$ (resp., into $\operatorname{Hom}^0(\wedge^3Q,T)$).
On the other hand, a nonzero element in $\operatorname{Hom}^1(T,\wedge^3Q)$ is the image of the nonzero
element in $H^1(S^2Q^{\ast})$ with respect to the embedding $S^2Q^{\ast}\to T^{\ast}\otimes\wedge^3Q$.
Furthermore, we have seen at the end of the proof of Lemma \ref{P-lem}(iv) that the natural
map $H^0(S^2Q)\otimes H^1(S^2Q^{\ast})\to H^1(S^2Q\otimes S^2Q^{\ast})$ is an isomorphism.
Thus, it is enough to prove that the natural maps
$$\a:S^2Q^{\ast}\otimes S^2Q\to (T^{\ast}\otimes\wedge^3Q)\otimes (\wedge^3Q^{\ast}\otimes T)\to T^{\ast}\otimes T \text{ and}$$
$$\b:S^2Q^{\ast}\otimes S^2Q\to (T^{\ast}\otimes R_1)\otimes (R_1^{\ast}\otimes T)\to T^{\ast}\otimes T$$
induce linearly independent maps on $H^1$.
In fact, since $H^1(S^2Q^{\ast}\otimes S^2Q)$ comes from the summand
$S^{(2,0,0,0,-2)}Q\subset S^2Q^{\ast}\otimes S^2Q$, generated by the lowest weight vector
$v=(e_5^*)^2\otimes e_1^2$ (where $(e_i)$ is the basis of $Q$),
it suffices to check that $\a(v)$ and $\b(v)$ are not proportional in $T^{\ast}\otimes T$.
Recall that in the proof of Lemma \ref{T-lem}(iv) we have
constructed the maps $f:S^2Q\otimes\wedge^3Q\to R_1$ and $g:R_1\to S^2Q\otimes\wedge^3Q$
such that $gf$ is a multiple of the composition
$$S^2Q\otimes\wedge^3Q\to T\stackrel{\b(v)}{\to} T\to S^2Q\otimes\wedge^3Q.$$
On the other hand, $\a(v)$ is given by the following composition
$$T\to S^2Q\otimes\wedge^3Q\stackrel{\partial_5^2}{\to}\wedge^3Q\stackrel{e_1^2}{\to} S^2Q\otimes\wedge^3Q\to T.$$
Let us denote by $\pi:S^2Q\otimes\wedge^3Q\to S^2Q\otimes\wedge^3Q$ the projection with the image $T$,
given by
$$\pi(ab\otimes(x\wedge y\wedge z))=\frac{3}{5}ab\otimes(x\wedge y\wedge z)+
\left(ax\otimes (b\wedge y\wedge z)+bx\otimes (a\wedge y\wedge z)+c.p.(x,y,z)\right),$$
where $a,b,x,y,z\in Q$ and the terms $c.p.(x,y,z)$ are obtained by cyclically permuting $x,y,z$.
Then we are reduced to checking that $gf$ is not proportional to the composition
$$h:S^2Q\otimes\wedge^3Q\stackrel{\pi}{\to}S^2Q\otimes\wedge^3Q\stackrel{\partial_5^2}{\to}\wedge^3Q\stackrel{e_1^2}{\to} S^2Q\otimes\wedge^3Q\stackrel{\pi}{\to}S^2Q\otimes\wedge^3Q.$$
To this end we compute
$$\frac{1}{2}gf(e_4e_5\otimes (e_2\wedge e_3\wedge e_5))=g(e_4\otimes(e_2\wedge e_3)\mod\wedge^3Q)=
2e_1e_4\otimes(e_1\wedge e_2\wedge e_3)-e_1e_2\otimes (e_1\wedge e_3\wedge e_4)+e_1e_3\otimes (e_1\wedge e_2\wedge e_4),$$
$$\frac{25}{2}h(e_4e_5\otimes (e_2\wedge e_3\wedge e_5))=
3e_1^2\otimes(e_2\wedge e_3\wedge e_4)+2e_1e_4\otimes(e_1\wedge e_2\wedge e_3)+2e_1e_2\otimes(e_1\wedge e_3\wedge e_4)
-2e_1e_3\otimes (e_1\wedge e_2\wedge e_4),$$
which are clearly not proportional.
\qed\vspace{3mm}
\begin{lem}\label{end-R-lem}
On $LG(5,10)$ one has $\operatorname{Hom}^*(R_1,R_1(i))=0$ for $i\in [-5,-1]$.
\end{lem}
\noindent {\it Proof} . The proof is similar to that of Lemma \ref{Bott-lem}(iv) and is left to the reader.
\qed\vspace{3mm}
\begin{thm}\label{L5-col-thm}
Let us consider the following two blocks:
$$\AA=({\cal O},Q,\wedge^2 Q,F_1,\wedge^3 Q,G) \ \text{ and }\ \ {\cal B}=({\cal O},Q,\wedge^2 Q,F_1,\wedge^3 Q).$$
Then $(\AA,{\cal B}(1),{\cal B}(2),\AA(3),{\cal B}(4),{\cal B}(5))$ is a full exceptional collection in $\mathop{\mathrm D\kern0pt}\nolimits^b(LG(5,10))$.
\end{thm}
\noindent {\it Proof} . The required semiorthogonality conditions not involving $G$ follow from the fact that $F_1$
is the right mutation of $E_1$ through $\wedge^2Q$ and from Lemmas \ref{gen-exc-lem}, \ref{split-lem},
\ref{E-van-lem} and \ref{end-R-lem}.
Using Serre duality and sequences \eqref{PT-seq} and \eqref{PG-seq} we can
reduce all the remaining semiorthogonality conditions to Lemmas
\ref{T-lem}, \ref{P-lem} and Theorem \ref{G-thm} (for $\operatorname{Hom}^*(G(3),G)=0$ we need in addition
the vanishing of $\operatorname{Hom}^*(\wedge^3Q(3),\wedge^3Q)$ and $\operatorname{Hom}^*(R_1(3),R_1)$ that follows from
Lemmas \ref{gen-exc-lem} and \ref{end-R-lem}).
Now let us prove that our exceptional collection is full.
Following the method of proof of Theorem \ref{full-thm} (involving the partial isotropic flag manifold
${\rm F}_{1,5,10}$ and the relative analog of our collection for $LG(4,8)$)
one can reduce this to checking that the subcategory ${\cal C}$
generated by our exceptional collection contains the subcategories
$${\cal P}\otimes\mathcal O(j), {\cal P}\otimes Q(j), {\cal P}\otimes \wedge^2 Q(j), {\cal P}\otimes Q\otimes Q, {\cal P}\otimes Q\otimes\wedge^2 Q,$$
where $j=0,\ldots,4$ and ${\cal P}=\langle {\cal O}, Q,\wedge^2 Q,\wedge^3 Q, Q^{\ast}(1), {\cal O}(1)\rangle$.
This gives the following list of objects that have to be in ${\cal C}$:
\noindent
(i) ${\cal O}(j)$, $Q(j)$, $\wedge^2Q(j)$ for $j=0,\ldots,5$;
\noindent
(ii) $Q\otimes Q(j)$, $\wedge^3Q(j)$, $Q\otimes\wedge^2Q(j)$, $Q\otimes\wedge^3Q(j)$, $\wedge^2Q\otimes\wedge^2Q(j)$,
$\wedge^2Q\otimes\wedge^3Q(j)$ for $j=0,\ldots,4$;
\noindent
(iii) $Q^{\ast}(j)$, $Q^{\ast}\otimes Q(j)$, $Q^{\ast}\otimes\wedge^2 Q(j)$ for $j=1,\ldots, 5$;
\noindent
(iv) $Q\otimes Q\otimes Q$, $Q\otimes Q\otimes\wedge^2Q$, $Q\otimes Q\otimes\wedge^3Q$, $Q^{\ast}\otimes Q\otimes Q(1)$,
$Q\otimes\wedge^2Q\otimes\wedge^2Q$, $Q\otimes\wedge^2Q\otimes\wedge^3Q$, $Q^{\ast}\otimes Q\otimes\wedge^2Q(1)$.
The fact that all these objects belong to ${\cal C}$ follows from Lemmas \ref{L5-gen-1},
\ref{L5-gen-4}--\ref{L5-gen-8} below.
\qed\vspace{3mm}
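As a consistency check, note that the collection consists of
$$2\cdot 6+4\cdot 5=32=2^5$$
vector bundles, which equals the rank of $K_0(LG(5,10))$ (the number of Schubert cells), and that the twists involved range over ${\cal O},\ldots,{\cal O}(5)$, in agreement with the standard fact that the canonical bundle of $LG(5,10)$ is ${\cal O}(-6)$.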
In the following Lemmas we often use the fact that ${\cal C}$ is closed under direct
summands (as an admissible subcategory). Also, by a resolution of $S^nQ$ we mean
the exact sequence
$$\ldots\to\wedge^2Q^{\ast}\otimes S^{n-2}V\otimes{\cal O}\to Q^{\ast}\otimes S^{n-1}V\otimes{\cal O}\to S^nV\otimes{\cal O}\to S^nQ\to 0.$$
By the standard filtration of $\wedge^k(V\otimes{\cal O})$ we mean the filtration associated with exact sequence
\eqref{basic-seq}. This filtration has vector bundles $\wedge^iQ^{\ast}\otimes\wedge^{k-i}Q$ as consecutive quotients.
Recall also that $\wedge^5Q={\cal O}(1)$, so we have isomorphisms
$\wedge^iQ^{\ast}(1)\simeq\wedge^{5-i}Q$.
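For instance, for $n=2$ the resolution of $S^2Q$ is precisely the exact sequence \eqref{S2Q-seq},
$$0\to \wedge^2 Q^{\ast}\to V\otimes Q^{\ast}\to S^2V\otimes{\cal O}\to S^2Q\to 0,$$
while the standard filtration of $\wedge^2(V\otimes{\cal O})$ has the consecutive quotients $\wedge^2Q^{\ast}$, $Q^{\ast}\otimes Q$ and $\wedge^2Q$.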
\begin{lem}\label{L5-gen-1}
(i) For $j=0,\ldots, 5$ the following objects are in ${\cal C}$: ${\cal O}(j)$, $Q(j)$, $\wedge^2Q(j)$,
$\wedge^3Q(j)$, $Q\otimes\wedge^2Q(j)$,
$Q^{\ast}(j)$, $Q^{\ast}\otimes\wedge^2Q(j)$, $S^2Q^{\ast}(j)$.
\noindent
(ii) For $j=1,\ldots,5$ the following objects are in ${\cal C}$: $Q^{\ast}\otimes Q^{\ast}(j)$,
$Q^{\ast}\otimes Q(j)$, $Q\otimes Q(j)$, $S^nQ(j)$ for $n\ge 2$.
\noindent
(iii) For $j=0,\ldots,4$ one has $Q\otimes\wedge^3Q(j)\in{\cal C}$ and $Q^{\ast}\otimes\wedge^3Q(j)\in{\cal C}$.
\noindent
(iv) For $j=1,\ldots,4$ one has
$\wedge^3Q\otimes\wedge^2Q(j-1)=\wedge^2Q^{\ast}\otimes\wedge^2Q(j)\in{\cal C}$ and $S^2Q\otimes\wedge^2Q(j)\in{\cal C}$.
\noindent
(v) For $j=1,\ldots,5$ and for $n\ge 2$ one has $Q\otimes S^nQ(j)\in{\cal C}$ and $Q^{\ast}\otimes S^nQ(j)\in{\cal C}$.
\noindent
(vi) For $j=1,\ldots,5$ the following objects are in ${\cal C}$:
$Q\otimes Q\otimes Q(j)$, $Q^{\ast}\otimes Q\otimes Q(j)$, $Q^{\ast}\otimes Q^{\ast}\otimes Q(j)$ and $Q^{\ast}\otimes Q^{\ast}\otimes Q^{\ast}(j)$.
\end{lem}
\noindent {\it Proof} . (i) To check the assertion for $Q\otimes\wedge^2Q(j)$ we
observe that $R_1=S^{(2,1)}Q$ is contained in
$\langle Q,F_1\rangle$, as follows from the exact
sequence \eqref{FR-seq}. This implies that
$Q\otimes\wedge^2Q(j)=\wedge^3Q(j)\oplus S^{(2,1)}Q(j)$ belongs to ${\cal C}$ for $j=0,\ldots,5$.
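Explicitly, for $k=1$ the sequence \eqref{FR-seq} specializes to
$$0\to Q\to F_1\to R_1\to 0,$$
which exhibits $R_1$ as an object of $\langle Q,F_1\rangle$.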
The assertions for $Q^{\ast}(j)$ and $Q^{\ast}\otimes\wedge^2Q(j)$ follow from the sequence \eqref{basic-seq}.
The assertion for $S^2Q^{\ast}(j)$ follows by considering the dual sequence to the resolution of $S^2Q$.
\noindent
(ii) Use the decomposition $Q^{\ast}\otimes Q^{\ast}(j)=S^2Q^{\ast}(j)\oplus\wedge^2Q^{\ast}(j)=
S^2Q^{\ast}(j)\oplus\wedge^3Q(j-1)$ and (i). Then use sequence \eqref{basic-seq}.
For $S^nQ(j)$ the assertion is checked using part (i) and the resolution of $S^nQ$.
\noindent
(iii) To prove the assertion for $Q\otimes\wedge^3Q(j)$
use the isomorphism $Q\otimes\wedge^3Q(j)\simeq Q\otimes\wedge^2Q^{\ast}(j+1)$ and consider the standard filtration of
$\wedge^3(V\otimes{\cal O})$ tensored with ${\cal O}(j+1)$ (and then use part (i)).
For the second assertion use sequence \eqref{basic-seq}.
\noindent
(iv) To check that $\wedge^2Q^{\ast}\otimes\wedge^2Q(j)\in{\cal C}$
use the standard filtration of $\wedge^4(V\otimes{\cal O})$ tensored with ${\cal O}(j)$. Next, to derive that
$S^2Q\otimes\wedge^2Q(j)\in{\cal C}$
use the resolution of $S^2Q$.
\noindent
(v) For $Q\otimes S^nQ(j)$ use the resolution for $S^nQ$ tensored with $Q(j)$ and parts
(i), (ii) and (iii). For $Q^{\ast}\otimes S^nQ(j)$ use sequence \eqref{basic-seq} and part (ii).
\noindent
(vi) The assertion for $Q\otimes Q\otimes Q(j)$ follows from
the decomposition $Q\otimes Q\otimes Q(j)=Q\otimes \wedge^2Q(j)\oplus Q\otimes S^2Q(j)$
and parts (i) and (v). The rest follows using sequence \eqref{basic-seq} and part (ii).
\qed\vspace{3mm}
\begin{lem}\label{L5-gen-2}
(i) One has $S^{3,1,1}Q\in{\cal C}$ and $S^{3,1,1}Q(3)\in{\cal C}$.
\noindent
(ii) One has $S^2Q\otimes\wedge^3Q\in{\cal C}$ and $S^2Q\otimes\wedge^3Q(3)\in{\cal C}$.
\noindent
(iii) One has $\wedge^2 Q^{\ast}\otimes\wedge^3Q\in{\cal C}$ and $\wedge^2 Q^{\ast}\otimes\wedge^3Q(3)\in{\cal C}$.
\end{lem}
\noindent {\it Proof} . (i) First, exact sequence \eqref{PG-seq} shows that $P,P(3)\in{\cal C}$. Next,
exact sequence \eqref{PT-seq} shows that $T,T(3)\in{\cal C}$, where
$T=S^{3,1,1}Q$.
\noindent
(ii) Since we have the decomposition
$$S^2Q\otimes\wedge^3Q=S^{3,1,1}Q\oplus S^{2,1,1,1}Q,$$
part (i) shows that it is enough to check the analogous assertion for $S^{2,1,1,1}Q$.
But $S^{2,1,1,1}Q$ is a direct summand in $Q\otimes\wedge^4Q=Q\otimes Q^{\ast}(1)$, so
the statement follows from Lemma \ref{L5-gen-1}(ii).
\noindent
(iii) This follows from (ii) using the resolution of $S^2Q$ and Lemma \ref{L5-gen-1}(iii).
\qed\vspace{3mm}
\begin{lem}\label{L5-gen-3}
(i) For $j=1,2,3$, $\wedge^3Q\otimes\wedge^3Q(j)\in{\cal C}$ if and only if $\wedge^2Q\otimes\wedge^2Q(j)\in{\cal C}$.
\noindent
(ii) For $j=1,\ldots,4$, $\wedge^2Q\otimes\wedge^2Q(j)\in{\cal C}$ if and only if $S^2Q\otimes S^2Q(j)\in{\cal C}$.
\noindent
(iii) For $j=1,\ldots,4$, the following conditions are equivalent:
\begin{enumerate}
\item[(1)] $\wedge^2Q^{\ast}\otimes\wedge^3Q(j-1)\in{\cal C}$;
\item[(2)] $\wedge^2Q^{\ast}\otimes S^2Q(j)\in{\cal C}$;
\item[(3)] $Q\otimes Q\otimes\wedge^2Q(j)\in{\cal C}$.
\end{enumerate}
\end{lem}
\noindent {\it Proof} .
(i) Use the standard filtration of $\wedge^5(V\otimes{\cal O})$ tensored with ${\cal O}(j+1)$ and Lemma \ref{L5-gen-1}(ii).
\noindent
(ii) Use the decompositions
$$S^2Q\otimes S^2Q=Q\otimes S^3Q\oplus S^{2,2}Q,\ \ \wedge^2Q\otimes\wedge^2Q=Q\otimes\wedge^3Q\oplus S^{2,2}Q$$
and Lemma \ref{L5-gen-1}(iii),(v).
\noindent
(iii) First, the equivalence of (1) and (2) follows by considering
the resolution of $S^2Q$ and using Lemma \ref{L5-gen-1}(iii). Next,
we observe that
$$\wedge^2Q^{\ast}\otimes Q\otimes Q(j)=\wedge^2Q^{\ast}\otimes\wedge^2Q(j)\oplus\wedge^2Q^{\ast}\otimes S^2Q(j)$$
and that $\wedge^2Q^{\ast}\otimes\wedge^2Q(j)\in{\cal C}$ for $j=1,\ldots,4$ by Lemma \ref{L5-gen-1}(iv).
Therefore, (2) is equivalent to $\wedge^2Q^{\ast}\otimes Q\otimes Q(j)\in{\cal C}$.
On the other hand, sequence \eqref{basic-seq} and Lemma \ref{L5-gen-1}(i) imply that in condition (3)
we can replace $Q\otimes Q\otimes\wedge^2Q(j)$ with $Q^{\ast}\otimes Q\otimes\wedge^2Q(j)$.
Now the equivalence of (2) and (3) follows by considering
the standard filtration of $\wedge^3(V\otimes{\cal O})$ tensored with $Q(j)$ and using Lemma \ref{L5-gen-1}(i),(iii).
\qed\vspace{3mm}
\begin{lem}\label{L5-gen-4}
(i) For $j=0,1,2,3$ one has $\wedge^3Q\otimes\wedge^3Q(j-1)\in{\cal C}$, $S^2Q\otimes\wedge^3Q(j)\in{\cal C}$ and
$S^2Q\otimes S^2Q(j+1)\in{\cal C}$.
\noindent
(ii) For $j=1,2,3,4$ one has $\wedge^2Q\otimes Q\otimes Q(j)\in{\cal C}$ and $\wedge^2Q\otimes Q^{\ast}\otimes Q(j)\in{\cal C}$.
\noindent
(iii) One has $\wedge^2Q\otimes\wedge^2Q(j)\in{\cal C}$, $\wedge^3Q\otimes\wedge^3Q(j-1)\in{\cal C}$ for $j=1,\ldots,4$, and
$S^2Q\otimes\wedge^3Q(j)\in{\cal C}$ for $j=0,\ldots,4$.
\noindent
(iv) One has $\wedge^3Q\otimes Q\otimes Q(j)\in{\cal C}$ and $\wedge^3Q\otimes Q^{\ast}\otimes Q(j)\in{\cal C}$
for $j=0,1,2,3$.
\noindent
(v) One has $\wedge^3Q\otimes\wedge^2Q\otimes Q(j)\in{\cal C}$ for $j=1,2$.
\end{lem}
\noindent {\it Proof} . (i) For $j=0$ and $j=3$ the first assertion follows from Lemma \ref{L5-gen-2}(iii).
By Lemma \ref{L5-gen-3}(iii) this implies
that $S^2Q\otimes\wedge^3Q\in{\cal C}$ and $S^2Q\otimes\wedge^3Q(3)\in{\cal C}$. Next, using
the resolution for $S^2Q$ and Lemma \ref{L5-gen-1}(v)
we obtain $S^2Q\otimes S^2Q(1)\in{\cal C}$ and $S^2Q\otimes S^2Q(4)\in{\cal C}$.
By Lemma \ref{L5-gen-3}(ii), this implies that $\wedge^2Q\otimes\wedge^2Q(1)\in{\cal C}$ and
$\wedge^2Q\otimes\wedge^2Q(4)\in{\cal C}$. By Lemma \ref{L5-gen-3}(i), it follows that $\wedge^3Q\otimes\wedge^3Q(1)\in{\cal C}$,
which also leads to $S^2Q\otimes\wedge^3Q(2)\in{\cal C}$ and $S^2Q\otimes S^2Q(3)\in{\cal C}$ as before.
On the other hand, combining Lemma \ref{L5-gen-3}(i) with Lemma \ref{L5-gen-2}(iii)
we also get $\wedge^2Q\otimes\wedge^2Q(2)\in{\cal C}$.
By Lemma \ref{L5-gen-3}(ii), this implies that $S^2Q\otimes S^2Q(2)\in{\cal C}$. Considering the resolution
for $S^2Q$ this leads to $S^2Q\otimes\wedge^3Q(1)\in{\cal C}$ and $\wedge^3Q\otimes\wedge^3Q\in{\cal C}$ as before.
\noindent
(ii) The first assertion immediately follows from (i) and from Lemma \ref{L5-gen-3}(iii). The second
follows from the first using sequence \eqref{basic-seq}.
\noindent
(iii) This follows from (i), (ii) and Lemma \ref{L5-gen-3}(i).
\noindent
(iv) Using sequence \eqref{basic-seq} and Lemma \ref{L5-gen-1}(iii) we see that it is enough
to show that $\wedge^3Q\otimes Q\otimes Q(j)\in{\cal C}$. To this end we use the decomposition
$$\wedge^3Q\otimes Q\otimes Q(j)=\wedge^3Q\otimes\wedge^2Q(j)\oplus\wedge^3Q\otimes S^2Q(j),$$
part (iii) and Lemma \ref{L5-gen-1}(iv).
\noindent
(v) We start with the isomorphism $\wedge^3Q\otimes\wedge^2Q\otimes Q(j)\simeq\wedge^2Q^{\ast}\otimes\wedge^2Q\otimes Q(j+1)$.
Now the assertion follows by considering the standard filtration of $\wedge^4(V\otimes{\cal O})$
tensored with $Q(j+1)$ and using parts (ii), (iv) and
Lemma \ref{L5-gen-3}(ii).
\qed\vspace{3mm}
\begin{lem}\label{L5-gen-5}
(i) For $j=1,2,3$ one has $\wedge^2Q\otimes\wedge^2Q\otimes Q(j)\in{\cal C}$.
\noindent
(ii) One has $\wedge^2Q\otimes\wedge^2Q\otimes \wedge^2Q(2)\in{\cal C}$.
\noindent
(iii) One has $\wedge^2Q\otimes\wedge^2Q\otimes S^2Q(2)\in{\cal C}$.
\noindent
(iv) One has $\wedge^3Q\otimes\wedge^2Q(4)\in{\cal C}$.
\end{lem}
\noindent {\it Proof} . (i) Suppose first that $j=1$.
Then considering the filtration of $\wedge^5(V\otimes{\cal O})\otimes Q(2)$ and using Lemma \ref{L5-gen-1}(vi), as well
as the fact that $Q^{\ast}\otimes Q^{\ast}\otimes\wedge^2Q(3)\in{\cal C}$ (which is a consequence of Lemma
\ref{L5-gen-4}(ii)), we reduce ourselves to showing that $\wedge^3Q\otimes\wedge^3Q\otimes Q(1)\in{\cal C}$.
Now the isomorphism $\wedge^3Q\otimes\wedge^3Q\otimes Q(1)\simeq \wedge^3Q\otimes\wedge^2Q^{\ast}\otimes Q(2)$
and the standard filtration of $\wedge^3Q\otimes\wedge^3(V\otimes{\cal O})(2)$ show that it is enough to check that the following
objects are in ${\cal C}$:
$$\wedge^3Q\otimes\wedge^2Q\otimes Q^{\ast}(2),\ \wedge^3Q\otimes\wedge^3Q(2),\ \wedge^3Q\otimes\wedge^3Q^{\ast}(2).$$
For the second and the third this follows from Lemma \ref{L5-gen-4}(iii) and Lemma \ref{L5-gen-1}(iv),
respectively.
For the first object this follows from Lemmas \ref{L5-gen-1}(iv) and \ref{L5-gen-4}(v) using \eqref{basic-seq}.
Now consider the case $j=2$ or $j=3$.
By sequence \eqref{basic-seq} and Lemma \ref{L5-gen-4}(iii), it is enough to
prove that $\wedge^2Q\otimes\wedge^2Q\otimes Q^{\ast}(j)\in{\cal C}$. Now the standard filtration of $\wedge^2Q\otimes\wedge^3(V\otimes{\cal O})(j)$
shows that it is enough to check that the following objects are in ${\cal C}$:
$$\wedge^2Q\otimes\wedge^3Q(j),\ \wedge^2Q\otimes\wedge^3Q^{\ast}(j),\ \wedge^2Q\otimes\wedge^2Q^{\ast}\otimes Q(j).$$
But this follows from Lemmas \ref{L5-gen-1}(iv), \ref{L5-gen-4}(iii) and \ref{L5-gen-4}(v), respectively.
\noindent
(ii) First, considering the standard filtration of $\wedge^5(V\otimes{\cal O})\otimes\wedge^2Q(3)$,
we reduce ourselves to showing that the following objects are in ${\cal C}$:
$$\wedge^3Q\otimes\wedge^3Q\otimes\wedge^2Q(2),\ Q\otimes Q\otimes\wedge^2Q(2),\ Q^{\ast}\otimes Q^{\ast}\otimes\wedge^2Q(4).$$
For the second and the third this follows from Lemma \ref{L5-gen-4}(ii). Now using the isomorphism
$\wedge^3Q\otimes\wedge^3Q\otimes\wedge^2Q(2)\simeq\wedge^3Q\otimes\wedge^2Q^{\ast}\otimes\wedge^2Q(3)$ and the standard filtration of
$\wedge^3Q\otimes\wedge^4(V\otimes{\cal O})(3)$ we are led to showing that the following objects are in ${\cal C}$:
$$\wedge^3Q\otimes\wedge^3Q\otimes Q^{\ast}(3),\ \wedge^3Q\otimes\wedge^2Q\otimes Q(2),\ \wedge^3Q\otimes Q(2),\ \wedge^3Q\otimes Q^{\ast}(4).$$
For the second object this follows from Lemma \ref{L5-gen-4}(v), while for the last two it follows
from Lemma \ref{L5-gen-1}(iii). Thus, it remains to check that $\wedge^3Q\otimes\wedge^3Q\otimes Q^{\ast}(3)\in{\cal C}$.
Using the standard filtration of $\wedge^5(V\otimes{\cal O})\otimes Q^{\ast}(4)$ we see
that it is enough to verify that the following objects are in ${\cal C}$:
$$\wedge^2Q\otimes\wedge^2Q\otimes Q^{\ast}(3),\ Q\otimes Q\otimes Q^{\ast}(3),\ Q^{\ast}(3),\ Q^{\ast}(5),\
Q^{\ast}\otimes Q^{\ast}\otimes Q^{\ast}(5).$$
For the second and the last object this follows from Lemma \ref{L5-gen-1}(vi).
On the other hand, using \eqref{basic-seq}, part (i) and Lemma \ref{L5-gen-4}(iii)
we see that $\wedge^2Q\otimes\wedge^2Q\otimes Q^{\ast}(3)\in{\cal C}$.
\noindent
(iii) First, using the resolution for $S^2Q$ we reduce the problem to
showing that $\wedge^2Q\otimes\wedge^2Q\otimes\wedge^2Q^{\ast}(2)\in{\cal C}$ (here we also use
part (i), sequence \eqref{basic-seq} and Lemma \ref{L5-gen-4}(iii)).
Next, the standard filtration of $\wedge^2Q\otimes\wedge^4(V\otimes{\cal O})(2)$ shows that it is enough to check that
the following objects are in ${\cal C}$:
$$\wedge^2Q\otimes\wedge^3Q\otimes Q^{\ast}(2),\ \wedge^2Q\otimes \wedge^2Q\otimes Q(1),\ \wedge^2Q\otimes Q^{\ast}(3),\ \wedge^2Q\otimes Q(1).$$
For the last two objects this follows from Lemma \ref{L5-gen-1}(i).
For the second object the assertion follows from part (i).
Finally, to check that $\wedge^2Q\otimes\wedge^3Q\otimes Q^{\ast}(2)\in{\cal C}$ we use
sequence \eqref{basic-seq}, Lemma \ref{L5-gen-4}(v) and Lemma \ref{L5-gen-1}(iv).
\noindent
(iv) Let us start with the decomposition
$$\wedge^3Q\otimes\wedge^2Q(4)={\cal O}(5)\oplus S^{2,1,1,1}Q(4)\oplus S^{2,2,1}Q(4).$$
Now observe that $S^{2,1,1,1}Q(4)$ is a direct summand in $Q\otimes\wedge^4Q(4)=Q\otimes Q^{\ast}(5)$ which
is in ${\cal C}$ by Lemma \ref{L5-gen-1}(ii), while
$S^{2,2,1}Q(4)$ is a direct summand in $S^2Q\otimes S^2Q\otimes Q(4)$. Using the resolution of $S^2Q$
we reduce ourselves to checking that the following objects are in ${\cal C}$:
$$S^2Q\otimes\wedge^2Q^{\ast}\otimes Q(4),\ S^2Q\otimes Q(4),\ S^2Q\otimes Q^{\ast}\otimes Q(4).$$
For the second object this follows from Lemma \ref{L5-gen-1}(v).
Using \eqref{basic-seq} we can replace the third object by
$S^2Q\otimes Q\otimes Q(4)=S^2Q\otimes\wedge^2Q(4)\oplus S^2Q\otimes S^2Q(4)$ which is in ${\cal C}$
by Lemmas \ref{L5-gen-1}(iv) and \ref{L5-gen-4}(i).
Next, we use the isomorphism
$S^2Q\otimes\wedge^2Q^{\ast}\otimes Q(4)\simeq S^2Q\otimes\wedge^3Q\otimes Q(3)$ and the resolution of $S^2Q$
to reduce the problem to showing that the following objects are in ${\cal C}$:
$$\wedge^2Q^{\ast}\otimes\wedge^3Q\otimes Q(3),\ \wedge^3Q\otimes Q(3),\ \wedge^3Q\otimes Q^{\ast}\otimes Q(3).$$
The second and third objects are in ${\cal C}$ by Lemmas \ref{L5-gen-1}(iii) and
\ref{L5-gen-4}(iv), respectively.
For the first object we use the isomorphism
$\wedge^2Q^{\ast}\otimes\wedge^3Q\otimes Q(3)\simeq \wedge^3Q\otimes\wedge^3Q\otimes Q(2)$ and the standard filtration of
$\wedge^5(V\otimes{\cal O})\otimes Q(3)$ to reduce ourselves to proving that the following objects are in ${\cal C}$:
$$\wedge^2Q\otimes\wedge^2Q\otimes Q(2),\ Q\otimes Q\otimes Q(2),\ Q^{\ast}\otimes Q^{\ast}\otimes Q(4).$$
For the first object this follows from (i), and for the second and the third---from
Lemma \ref{L5-gen-1}(vi).
\qed\vspace{3mm}
\begin{lem}\label{L5-gen-6}
(i) One has $S^3Q\otimes S^3Q(2)\in{\cal C}$.
\noindent
(ii) One has $\wedge^2Q\otimes\wedge^2Q\in{\cal C}$.
\noindent
(iii) One has $Q\otimes Q\in{\cal C}$.
\end{lem}
\noindent {\it Proof} . (i) Consider the decomposition
$$S^3Q\otimes S^3Q(2)=S^6Q(2)\oplus S^{5,1}Q(2)\oplus S^{4,2}Q(2)\oplus S^{3,3}Q(2).$$
By Lemma \ref{L5-gen-1}(ii), we have $S^6Q(2)\in{\cal C}$.
Next, we observe that $S^{5,1}Q(2)$ is a direct summand in $S^4Q\otimes\wedge^2Q(2)$ and
use the resolution of $S^4Q$ to deduce that this object is in ${\cal C}$ from the inclusions
$Q\otimes\wedge^2Q(1)\in{\cal C}$, $Q^{\ast}\otimes\wedge^2Q(2)\in{\cal C}$, $\wedge^2Q^{\ast}\otimes\wedge^2Q(2)\in{\cal C}$, $\wedge^3Q^{\ast}\otimes\wedge^2Q(2)\in{\cal C}$, that follow from Lemmas \ref{L5-gen-1}(i), \ref{L5-gen-1}(iv) and
\ref{L5-gen-4}(iii). Finally, we note that $S^{4,2}Q(2)\oplus S^{3,3}Q(2)$ is a direct summand in
$$\wedge^2Q\otimes\wedge^2Q\otimes Q\otimes Q(2)=\wedge^2Q\otimes\wedge^2Q\otimes\wedge^2Q(2)\oplus\wedge^2Q\otimes\wedge^2Q\otimes S^2Q(2)$$
which is in ${\cal C}$ by Lemma \ref{L5-gen-5}(ii),(iii).
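Let us record, as a consistency check, the Littlewood--Richardson decompositions behind the direct summand claims used in part (i) (the rank of $Q$ being large enough for all the partitions involved):
$$S^4Q\otimes\wedge^2Q\simeq S^{5,1}Q\oplus S^{4,1,1}Q,\qquad
\wedge^2Q\otimes\wedge^2Q\simeq S^{2,2}Q\oplus S^{2,1,1}Q\oplus S^{1,1,1,1}Q,$$
and, again by the Littlewood--Richardson rule, $S^{4,2}Q$ appears as a direct summand of $S^{2,2}Q\otimes S^2Q$ while $S^{3,3}Q$ appears as a direct summand of $S^{2,2}Q\otimes\wedge^2Q$.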
\noindent
(ii) We use the isomorphism $\wedge^2Q\otimes\wedge^2Q\simeq\wedge^3Q^{\ast}\otimes\wedge^3Q^{\ast}(2)$
and then use the resolution of $S^3Q$ twice to relate this to $S^3Q\otimes S^3Q(2)$ which is in ${\cal C}$
by part (i). It remains to check that the objects
that appear in between, namely,
$$S^3Q(2),\ S^3Q\otimes Q^{\ast}(2),\ S^3Q\otimes\wedge^2Q^{\ast}(2),\
\wedge^3Q^{\ast}(2),\ Q^{\ast}\otimes\wedge^3Q^{\ast}(2),
\ \wedge^2Q^{\ast}\otimes\wedge^3Q^{\ast}(2),$$
are all in ${\cal C}$. For the last three objects this follows from Lemma \ref{L5-gen-1}(i),(iv), while
for the first three one has to use the resolution of $S^3Q$ to reduce to the objects we have already
dealt with.
\noindent
(iii) The standard filtration of $\wedge^5(V\otimes{\cal O})(1)$ reduces the problem to showing that
$\wedge^2Q\otimes\wedge^2Q$ and $\wedge^3Q\otimes\wedge^3Q$
are in ${\cal C}$ (where we also use Lemma \ref{L5-gen-1}(ii)). It remains to apply
part (ii) and Lemma \ref{L5-gen-4}(iii).
\qed\vspace{3mm}
\begin{lem}\label{L5-gen-7}
(i) One has $S^3Q\otimes S^2Q(1)\in{\cal C}$.
\noindent
(ii) One has $\wedge^2Q\otimes S^2Q\in{\cal C}$.
\noindent
(iii) One has $\wedge^2Q\otimes Q\otimes Q\in{\cal C}$.
\noindent
(iv) One has $\wedge^3Q\otimes\wedge^2Q\otimes Q\in{\cal C}$.
\end{lem}
\noindent {\it Proof} . (i) Consider the decomposition
$$S^3Q\otimes S^2Q(1)=S^5Q(1)\oplus S^{4,1}Q(1)\oplus S^{3,2}Q(1).$$
By Lemma \ref{L5-gen-1}(ii), we have $S^5Q(1)\in{\cal C}$.
On the other hand, $S^{4,1}Q(1)$ is a direct summand in $S^3Q\otimes\wedge^2Q(1)$.
The resolution of $S^3Q$ relates the latter object to $Q^{\ast}\otimes\wedge^2Q(1)$,
$\wedge^2Q^{\ast}\otimes\wedge^2Q(1)$ and $\wedge^3Q^{\ast}\otimes\wedge^2Q(1)$ which are all
in ${\cal C}$ (for the last one use Lemma \ref{L5-gen-6}(ii)). Finally,
$S^{3,2}Q(1)$ is a direct summand in $\wedge^2Q\otimes\wedge^2Q\otimes Q(1)$ which is in ${\cal C}$ by
Lemma \ref{L5-gen-5}(i).
\noindent
(ii) Using the resolution for $S^3Q$ we can relate $\wedge^2Q\otimes S^2Q=\wedge^3Q^{\ast}\otimes S^2Q(1)$
with $S^3Q\otimes S^2Q(1)$, which is in ${\cal C}$ by part (i). The objects appearing in between,
namely, $Q^{\ast}\otimes S^2Q(1)$ and $\wedge^2Q^{\ast}\otimes S^2Q(1)$ are in ${\cal C}$,
by Lemmas \ref{L5-gen-1}(vi), \ref{L5-gen-2}(ii).
\noindent
(iii) Since $\wedge^2Q\otimes Q\otimes Q=\wedge^2Q\otimes\wedge^2Q\oplus\wedge^2Q\otimes S^2Q$, this follows from
part (ii) and Lemma \ref{L5-gen-6}(ii).
\noindent
(iv) Considering the filtration of $\wedge^4(V\otimes{\cal O})\otimes Q(1)$ we reduce ourselves to showing that
the following objects are in ${\cal C}$:
$$\wedge^2Q\otimes Q\otimes Q,\ Q\otimes Q,\ Q^{\ast}\otimes Q(2),\ Q^{\ast}\otimes Q\otimes\wedge^3Q(1).$$
Now the assertion follows from part (iii) and Lemmas \ref{L5-gen-6}(iii), \ref{L5-gen-1}(ii) and
\ref{L5-gen-4}(iv).
\qed\vspace{3mm}
\begin{lem}\label{L5-gen-8}
(i) One has $S^2Q\otimes S^4Q(1)\in{\cal C}$.
\noindent
(ii) One has $S^2Q\otimes Q\in{\cal C}$.
\noindent
(iii) One has $Q\otimes Q\otimes Q\in{\cal C}$.
\noindent
(iv) One has $\wedge^2Q\otimes\wedge^2Q\otimes Q\in{\cal C}$.
\end{lem}
\noindent {\it Proof} . (i) Consider the decomposition
$$S^2Q\otimes S^4Q(1)=S^6Q(1)\oplus S^{5,1}Q(1).$$
By Lemma \ref{L5-gen-1}(ii), we have $S^6Q(1)\in{\cal C}$. On the other hand,
$S^{5,1}Q(1)$ is a direct summand in $\wedge^2Q\otimes S^4Q(1)$. Using
the resolution of $S^4Q$ we reduce the problem to checking that
the following objects are in ${\cal C}$:
$$Q\otimes\wedge^2Q,\ \wedge^2Q\otimes\wedge^2Q,\ \wedge^2Q^{\ast}\otimes\wedge^2Q(1),\ Q^{\ast}\otimes\wedge^2Q(1),\
\wedge^2Q(1),$$
which follows from our previous work (for the second object use Lemma \ref{L5-gen-6}(ii)).
\noindent
(ii) Tensoring the resolution for $S^4Q$ with $S^2Q(1)$ we get an exact sequence
\begin{align*}
&0\to S^2Q\otimes Q\to V\otimes S^2Q\otimes\wedge^2Q\to S^2V\otimes S^2Q\otimes\wedge^3Q\to S^3V\otimes S^2Q\otimes Q^{\ast}(1)\to \\
&S^4V\otimes S^2Q(1)\to S^2Q\otimes S^4Q(1)\to 0
\end{align*}
By part (i), one has $S^2Q\otimes S^4Q(1)\in{\cal C}$. Next, $S^2Q\otimes Q^{\ast}(1)$ and $S^2Q(1)$ are in ${\cal C}$
by Lemma \ref{L5-gen-1}(v),(ii). Finally, $S^2Q\otimes\wedge^2Q$ and $S^2Q\otimes\wedge^3Q$
are in ${\cal C}$ by Lemmas \ref{L5-gen-7}(ii) and \ref{L5-gen-4}(iii).
Hence, $S^2Q\otimes Q\in{\cal C}$.
\noindent
(iii) This follows from the decomposition
$$Q\otimes Q\otimes Q=S^2Q\otimes Q\oplus \wedge^2Q\otimes Q,$$
part (ii) and Lemma \ref{L5-gen-1}(i).
\noindent
(iv) This is proved by the same method as the case $j=1$ of Lemma \ref{L5-gen-5}(i),
using part (iii).
\qed\vspace{3mm}
\section{Megaverse: technical details}
\subsection{Interface}
The Megaverse platform provides a vectorized version of the OpenAI Gym interface \citeappendix{gym} for interacting with the environments. This is a natural extension of the standard Gym interface to parallel simulators: since a single Megaverse instance simulates experience for $M$ agents in $N$ environments per step, the \textsc{step()} function accepts a vector of $N\times M$ actions, and returns a vector of $N\times M$ observations, rewards, and episode termination flags. The only difference with the original OpenAI Gym design is that Megaverse environments do not require \textsc{reset()} calls, except right after initialization. Individual simulated environments can have different episode durations, so we reset these individual environments automatically instead of relying on external code to perform resets. This has an additional performance benefit: we do not have to synthesize the last observation of the episode, which is never seen by the agents.
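For illustration, a typical rollout loop with this interface looks as follows (a minimal sketch: the constructor name and its arguments below are placeholders rather than the exact Megaverse API):
\begin{verbatim}
# Sketch of a rollout over a vectorized Megaverse instance; the
# constructor and its arguments are illustrative placeholders.
env = MegaverseEnv("TowerBuilding", num_envs=64, num_agents=2)
obs = env.reset()   # needed only once, right after initialization

for step in range(1000):
    # one action per (environment, agent) pair: 64 * 2 = 128 actions
    actions = [env.action_space.sample() for _ in range(64 * 2)]
    obs, rewards, dones, infos = env.step(actions)
    # no manual resets: finished environments restart automatically
\end{verbatim}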
Although the Megaverse core engine is written in C++, the high-level interface is also available through Python bindings \citeappendix{pybind11}.
\subsection{Observation and action spaces}
Megaverse provides mechanisms to configure custom observation and action spaces for individual scenarios, although all Megaverse-8 scenarios use the same unified observation and action spaces to streamline and simplify experimentation. The observations are provided as $128 \times 72$ RGB images, and this is the only sensory input received by the agents. On top of the synthesized views of the 3D world, the observations can also contain additional information about the environment. We implement a simple pixel-space GUI consisting of geometric primitives rendered in the agent's field of view. These can play the role of various bars and indicators, such as health bars or team affiliation flags for team-based multi-agent scenarios. In Megaverse-8 scenarios we only use this GUI to notify agents about the remaining time in the episode.
Table \ref{tab:actions} describes the agent's affordances. At each step the agents can independently choose walking and gaze directions, and whether to jump or interact with an object. OpenAI Gym represents this action space using a tuple of discrete action spaces: \textsc{tuple(Discrete(3), Discrete(3), Discrete(3), Discrete(3), Discrete(2), Discrete(2))}. In our implementation the policy network outputs six distinct probability vectors, which we interpret as independent categorical action distributions, although the action space can also be flattened into a single discrete action space with $324$ options.
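For illustration, this action space can be assembled and flattened with the standard Gym spaces API as follows (a minimal sketch; the variable names are ours):
\begin{verbatim}
import numpy as np
from gym.spaces import Tuple, Discrete

action_space = Tuple([
    Discrete(3),  # moving: no-op / forward / backward
    Discrete(3),  # strafing: no-op / left / right
    Discrete(3),  # turning: no-op / turn left / turn right
    Discrete(3),  # vertical gaze: no-op / look up / look down
    Discrete(2),  # jumping: no-op / jump
    Discrete(2),  # object interaction: no-op / interact
])

# flattened size: 3 * 3 * 3 * 3 * 2 * 2 = 324 discrete options
num_actions = int(np.prod([s.n for s in action_space.spaces]))
assert num_actions == 324
\end{verbatim}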
\begin{table*}[ht]
\centering
\setlength{\tabcolsep}{3mm}
\small
\begin{tabular}{l @{\hspace{2em}} c @{\hspace{2em}} r }
\toprule
\textbf{Action head} & \textbf{Number of actions} & \textbf{Comment} \\
\toprule
Moving & 3 & no-action / forward / backward \\
Strafing & 3 & no-action / left / right \\
Turning & 3 & no-action / turn left / turn right \\
Vertical gaze direction & 3 & no-action / look up / look down \\
Jumping & 2 & no-action / jump \\
Object interaction & 2 & no-action / interact \\
\midrule
Total number of possible actions & 324 & \\
\bottomrule
\end{tabular}
\caption{Megaverse-8 action space.}
\label{tab:actions}
\end{table*}
\section{Megaverse-8}
Please refer to the project website for detailed videos demonstrating the environments: {\small{\url{www.megaverse.info}}}.
\subsection{Reward functions}
Table \ref{tab:rewards} describes the reward functions in Megaverse-8 scenarios, as seen by the learning algorithm. Besides the dense rewards that facilitate learning and exploration, Megaverse-8 environments also provide a single sparse reward (true objective) per episode that measures the real progress on the task. In all environments except TowerBuilding the true objective takes the value $+1$ when the task is successfully completed and $0$ otherwise. In the TowerBuilding scenario the true objective is to maximize the height of the structure built during the episode.
In addition to task completion (true objective) results reported in the main paper, we also report dense rewards achieved by the agents in our experiments, see Figures \ref{fig:single-reward} and \ref{fig:multi-reward}.
\begin{table*}[ht]
\centering
\setlength{\tabcolsep}{3mm}
\small
\begin{tabular}{l @{\hspace{2em}} c @{\hspace{2em}} r }
\toprule
\textbf{Scenario} & \textbf{Dense reward} & \textbf{True objective} \\
\toprule
\makecell[l]{ObstaclesEasy \\ ObstaclesHard} & \makecell{$+1$ reached target location \\ $+0.5$ collected a green diamond \\ $+5$ all agents reached the target } & \makecell[r]{$+1$ (success) all agents reached the target \\ $0$ (failure) episode timed out} \\
\midrule
Collect & \makecell{$+1$ collecting green diamond \\ $-1$ collecting red diamond \\ $+5$ collected all green diamonds \\ $-0.5$ agent fell into the void } & \makecell[r]{$+1$ (success) collected all green diamonds \\ $0$ (failure) episode timed out} \\
\midrule
Sokoban & \makecell{$+1$ moved box onto target \\ $-1$ moved box from the target \\ $+10$ moved all boxes to targets } & \makecell[r]{$+1$ (success) moved all boxes to targets \\ $0$ (failure) episode timed out} \\
\midrule
HexExplore & \makecell{$+5$ found a pink diamond } & \makecell[r]{$+1$ (success) found a pink diamond \\ $0$ (failure) episode timed out} \\
\midrule
HexMemory & \makecell{$+1$ collected a matching object \\ $-1$ collected a non-matching object } & \makecell[r]{$+1$ (success) collected all matching objects \\ $0$ (failure) episode timed out} \\
\midrule
Rearrangement & \makecell{$+1$ moved object to a correct position \\ $+10$ all objects in correct positions } & \makecell[r]{$+1$ (success) all objects in correct positions \\ $0$ (failure) episode timed out} \\
\midrule
TowerBuilding & \makecell{$+0.1$ entered building zone with an object \\ $+0.05(h + 2^h)$ placed an object in the building zone \\ $h$ - height at which the object was placed } & \makecell[r]{$+h_{max}$ where $h_{max}$ is the max height of the tower} \\
\bottomrule
\end{tabular}
\caption{Megaverse-8 scenarios dense rewards and final objectives.}
\label{tab:rewards}
\end{table*}
\section{Experimental details}
\subsection{Performance analysis}
Table \ref{tab:hw} provides information about the hardware configuration of the systems used for performance measurements. We focused on commodity hardware commonly used for deep learning experimentation.
While in the main paper we report performance figures measured only in the ObstaclesHard scenario, Table \ref{tab:env_performance} provides information about the sampling throughput in all Megaverse-8 environments. Values represent the sampling throughput averaged over three minutes. In order to conduct the measurements we used a number of parallel Megaverse processes equal to the number of physical CPU cores, with $64$ environments simulated in parallel in each process. Performance varies because different scenarios generate environments with different numbers of geometric primitives and interactive objects. HexMemory and HexExplore environments are based on hexagonal mazes and therefore cannot benefit from the voxel-grid-based optimizations that allow fast collision checking based on axis-aligned bounding boxes.
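A measurement of this kind can be sketched as follows (illustrative code in the spirit of the rollout sketch above; the environment interface names are assumptions):
\begin{verbatim}
import time

# Average sampling throughput (observations per second) over a fixed
# wall-clock window; `env` is a vectorized instance as sketched above.
def measure_throughput(env, num_envs, num_agents, seconds=180):
    frames, start = 0, time.time()
    while time.time() - start < seconds:
        actions = [env.action_space.sample()
                   for _ in range(num_envs * num_agents)]
        env.step(actions)
        frames += num_envs * num_agents
    return frames / (time.time() - start)
\end{verbatim}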
\begin{table*}[h]
\centering
\setlength{\tabcolsep}{3mm}
\small
\begin{tabular}{l @{\hspace{4em}} c @{\hspace{2em}} c @{\hspace{2em}} c}
\toprule
& System $\#1$ & System $\#2$ & System $\#3$ \\
\toprule
Processor & AMD Ryzen 9 3900X & Intel Xeon Gold 6154 & Intel Xeon Platinum 8280 \\
Base frequency & 3.8 GHz & 3.0 GHz & 2.7 GHz \\
Physical cores & 12 & 36 & 48 \\
Logical cores & 24 & 72 & 96 \\
\midrule
RAM & 64 GB & 256 GB & 320 GB \\
\midrule
GPUs & 1 x NVidia RTX3090 & 4 x NVidia RTX 2080Ti & 8 x NVidia RTX 2080Ti \\
GPU memory & 24GB GDDR6x & 11GB GDDR6 & 11GB GDDR6 \\
\midrule
OS & Arch (Jan 2021, Rolling) & Ubuntu 18.04 64-bit & Ubuntu 18.04 64-bit \\
GPU drivers & NVidia 460.32.03 & NVidia 440.95.01 & NVidia 450.102.04 \\
\bottomrule
\end{tabular}
\caption{Hardware configurations used for performance measurements (training and sampling performance).}
\label{tab:hw}
\end{table*}
\begin{table}
\centering
\setlength{\tabcolsep}{3mm}
\small
\begin{tabular}{l @{\hspace{2em}} r }
\toprule
\textbf{Scenario} & \textbf{Simulation throughput, obs/sec} \\
\toprule
ObstaclesEasy & $1.27\times 10^6$ \\
ObstaclesHard & $1.15\times 10^6$ \\
Collect & $8.55\times 10^5$ \\
Sokoban & $1.16\times 10^6$ \\
HexExplore & $6.5\times 10^5$ \\
HexMemory & $5.9\times 10^5$ \\
Rearrangement & $1.28\times 10^6$ \\
TowerBuilding & $1.22\times 10^6$ \\
\bottomrule
\end{tabular}
\caption{Sampling throughput in Megaverse-8 scenarios measured on System $\#3$ (8-GPU node).}
\label{tab:env_performance}
\end{table}
\subsection{RL experiments: setup and parameters}
In all experiments in the paper we used asynchronous PPO (APPO) implementation provided by Sample Factory \citeappendix{petrenko2020sf}. Unless stated otherwise, all experiments use Action Conditional Contrastive Predictive Coding (CPC|A) \citeappendix{guo2018neural}.
For the policy network we use a small convnet model similar to the VizDoom model in \citeappendix{petrenko2020sf} with a 2-layer GRU core \citeappendix{gru_Kyunghyun}.
Table \ref{tab:hyperparams} lists the learning algorithm hyperparameters.
\subsection{Additional RL experiments}
To further investigate the training performance of RL agents on Megaverse-8 tasks we conduct a series of additional experiments. First, we extended the training of the APPO+CPC|A agent in single-agent Megaverse-8 environments to $10^{10}$ environment steps (Figure \ref{fig:10b-run}). Except in TowerBuilding, we did not see a significant increase in the agents' performance, which suggests that Megaverse-8 remains a challenging benchmark even in the virtually unlimited sample regime (note that training for $10^{10}$ frames in Megaverse is equivalent to training for $4\times 10^{10}$ frames in DeepMind Lab or Atari due to frameskip). Instead of insufficient data, the agents are limited by their exploration abilities and the cognitive capacity of relatively simple models. Thus Megaverse-8 environments can be a promising test bed for advanced exploration algorithms and policy architectures.
We also evaluated a single APPO+CPC|A agent trained on all eight Megaverse-8 environments simultaneously (Figure \ref{fig:multi-task}). The agent was trained on a total of $2\times 10^9$ frames of experience, which is equivalent to $2.5 \times 10^8$ frames on each of the environments. The results demonstrate that positive transfer can be challenging due to the diversity of Megaverse-8 tasks, although ultimately, combining experience from a diverse set of tasks and incorporating curricula can be instrumental in training capable multipurpose intelligent agents.
\begin{table}
\centering
\setlength{\tabcolsep}{3mm}
\small
\begin{tabular}{l | @{\hspace{3em}} r }
\toprule
Learning rate & $10^{-4}$ \\
Action repeat (frameskip) & $1$ (no frameskip) \\
Framestack & No \\
Discount $\gamma$ & $0.997$ \\
Optimizer & Adam \citeappendix{adam} \\
Optimizer settings & $\beta_1=0.9$, $\beta_2=0.999$, $\epsilon=10^{-6}$ \\
Gradient norm clipping & 1.0 \\
\midrule
Num parallel Megaverse processes & 6 \\
Num envs simulated per process & 80 \\
Total number of environments & 480 \\
\midrule
Rollout length $T$ & 32 \\
Batch size, samples & 2048 \\
Number of training epochs & 1 \\
\midrule
V-trace parameters & $\Bar{\rho}=\Bar{c}=1$ \\
PPO clipping range & $[1.1^{-1}, 1.1]$ \\
\midrule
Exploration loss coefficient & $0.001$ \\
Critic loss coefficient & $0.5$ \\
\bottomrule
\end{tabular}
\caption{Hyperparameters in Megaverse-8 experiments.}
\label{tab:hyperparams}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{figures/supp/single-agent-plate-reward.pdf}
\caption{Total episodic reward achieved by the agents in single-agent scenarios (see reward shaping scheme in Table \ref{tab:rewards}).
Here the results are averaged over three random seeds.}
\label{fig:single-reward}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{figures/supp/multi-agent-plate-reward.pdf}
\caption{Total episodic reward achieved by the agents in multi-agent scenarios (see reward shaping scheme in Table \ref{tab:rewards}).
Here the results are averaged over three random seeds.}
\label{fig:multi-reward}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{figures/supp/10bn_single-agent-true.pdf}
\includegraphics[width=0.95\textwidth]{figures/supp/10bn_single-agent-raw.pdf}
\caption{Single agent with CPC|A training sessions extended to $10^{10}$ frames. Both task completion (true objective) and episodic rewards are reported. Here the results are averaged over three random seeds.}
\label{fig:10b-run}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{figures/supp/multi-task-true.pdf}
\includegraphics[width=0.95\textwidth]{figures/supp/multi-task-raw.pdf}
\caption{Performance of a single agent trained on all eight Megaverse-8 tasks. Both task completion (true objective) and episodic rewards are reported. Here the results are averaged over five random seeds.}
\label{fig:multi-task}
\end{figure}
\section{Introduction}
Models with vector bosons and scalars in a hidden sector naturally arise in supersymmetric extensions of the standard model as well as in superstring phenomenological studies. They also have cosmological implications concerning gravitational wave production and dark matter abundance (see \cite{JR1} and references therein). Regarding this last issue, the hidden massive gauge boson could play the role of dark matter \cite{Nelson:2011sf} or could be the messenger between the visible and dark sectors \cite{Arkani}. Also,
when the hidden sector has a $U(1)$ symmetry the corresponding gauge boson may have a very weak kinetic interaction with photons in the visible sector \cite{holdom}, which could lead to observable effects in experiments like those on light shining through the wall, laser polarization and strong electromagnetic fields \cite{JR1}. Furthermore, when the hidden $U(1)$ gauge symmetry is spontaneously broken the classical field equations exhibit the well-honored Nielsen-Olesen vortex solutions that can play the role of dark strings in an astrophysical context, as proposed in \cite{Vachaspati}-\cite{ Hyde}.
In view of the various areas in which the hidden sector could play an important role in explaining physical phenomena, it is of interest to undertake the detailed study that we present in this work, where we construct vortex solutions of two Abelian Higgs models associated with visible and hidden sectors weakly coupled through a gauge mixing interaction. In particular, we analyze how the effects of the hidden sector
depend not only on the strength of the mixing between the two $U(1)$ gauge bosons but also on the relative strength of the gauge coupling constants and on the scalar potential parameters, including the case in which one of the $U(1)$ gauge symmetries remains unbroken. Another relevant subject that we analyze concerns vortex decay. In the ordinary Abelian Higgs model vortex configurations with $n>1$ units of magnetic flux could decay into elementary ($n=1$) vortices depending on the value of the Landau parameter \cite{JR}. We study this issue for configurations in which both hidden and visible vortices exist, and determine how the mixing affects the decay scenario.
The plan of the paper is as follows: we introduce the model in section 2, extending the Nielsen-Olesen ansatz to include the hidden sector, leading to a coupled system of four radial field equations. In section 3 we consider the case in which the visible sector gauge symmetry is unbroken and discuss analytically how the spontaneous breaking of the hidden sector gauge symmetry is communicated to the visible sector. Then, in section 4 we analyze numerically the case in which both the visible and hidden sectors gauge symmetries are broken studying the dependence of the vortex solutions on the gauge mixing parameter (section 4.2) and on the gauge coupling constants (section 4.3) using both a variational approach and a shooting method. Vortex decay is studied in section 5 and a discussion of the relevance of the model in connection with superconductivity is presented in section 6. Section 7 gives a summary and discussion of our results.
\section{The model}
We consider a model with two $U(1)$ gauge fields, $A_\mu$ and $G_\mu$, each one coupled to complex scalars, $\phi$ and $\psi$ respectively, with dynamics governed by the following Lagrangian in $3+1$ space-time dimensions
\begin{equation}
\mathcal L=-\frac{1}{4} F_{\mu\nu}F^{\mu\nu}- \frac{1}{2}| D^\mu_A\phi|^2-V(\phi)-\frac{\chi}2F_{\mu\nu}G^{\mu\nu}-\frac{1}{4} G_{\mu\nu}G^{\mu\nu}- \frac{1}{2}| D^\mu_G\psi|^2 -V(\psi).
\label{1}
\end{equation}
Here
\begin{equation}
\begin{array}{ll}
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu \;, & G_{\mu\nu} = \partial_\mu G_\nu - \partial_\nu G_\mu\\
~
\\
D_{A}^\mu\phi=(\partial^\mu-ie A^\mu)\phi\;, & D_{G}^\mu\psi=(\partial^\mu-ie_h G^\mu) \psi
\end{array}
\end{equation}
and $V(\phi), V(\psi)$ are given by
\begin{equation}
V(\phi) =\frac{\lambda}4 \left(|\phi|^2-\phi_0^2\right)^2\, , \;\;\;\;\;\;
V(\psi)=\frac{\lambda_{\rm h}}4 \left(|\psi|^2-\psi_0^2\right)^2
\label{trez}
\end{equation}
In our convention the fields $A_\mu$ and $\phi$ belong to the visible sector, while $G_\mu$ and $\psi$ belong to the hidden one. The strength of the mixing between the two gauge fields is parameterized by $\chi$, which could be either positive or negative. Theoretical and observational constraints at present seem to favor a small value of this parameter \cite{chismall,Essig:2013lka}. Although in principle $\chi$ is a free parameter, we show in the Appendix that consistency of the boundary conditions leading to the existence of finite energy vortex solutions requires $\chi^2<1$.
We are interested in static configurations with $A_0 = G_0 = 0$ for which the energy density $\mathcal{E}$ associated to Lagrangian \eqref{1} takes the form
\begin{equation}
\mathcal{E}=\frac{B_i B_i}2+\frac{B_{hi}B_{hi}}2+\chi B_iB_{hi}+\frac{1}{2} |D^\mu_A\phi|^2+\frac{1}{2} |D^\mu_G\psi|^2+ V(\phi)+V(\psi) . \label{energy1}
\end{equation}
with the magnetic fields of the visible and hidden sectors defined as
\begin{equation}
B^i = \varepsilon^{ijk}\partial_jA_k \;, \;\;\;\; B^{\,i}_{\!h} = \varepsilon^{ijk}\partial_jG_k.
\end{equation}
Due to the choice of symmetry breaking potentials, both gauge fields acquire masses given by $m_A^2=e^2 \phi_0^2$ and $m_G^2= e_h^2\psi_0^2$. Concerning the scalars, their masses are given by $m_\phi^2=2\lambda \phi_0^2$ and $m_\psi^2=2\lambda_h \psi_0^2$ according to the Brout-Englert-Higgs mechanism.
It will be convenient for later use to define dimensionless coordinates, coupling constants and fields according to
\begin{eqnarray}
r\rightarrow r / \left(e\phi_0\right), \,\,\, A_i\rightarrow \phi_0 A_i, \,\,\, \phi \rightarrow \phi_0\phi,\,\,\, G_i\rightarrow \phi_0 G_i,\,\,\, \psi \rightarrow \phi_0\psi.
\label{units}
\end{eqnarray}
With this, the energy per unit length $E/\ell$ in the $z$ direction, and with $A_z=G_z=0$, reads
\begin{eqnarray}
\frac{E}{\ell} &=& \phi_0^2\int d^2 x\left\{\frac{B_iB_i}2+\frac{B_{hi}B_{hi}}2 +\frac{1}2 \left|\partial_i \phi -iA_i \phi\right|^2 +\frac{1}2 \left|\partial_i \psi -ie_rG_i \psi \right|^2\right.
\nonumber\\
&& + \left. \chi B_iB_{ hi} +V(|\phi|)+V(|\psi|)\right\} \equiv \phi_0^2\int \!\!\! d^2 x \,\tilde{\mathcal E},
\label{redefenergy}
\end{eqnarray}
where $e_r= e_h/ e$ and $\ell$ defines the length scale, $\ell = 1/e\phi_0$. The symmetry breaking potentials are now given by
\begin{equation}
V(|\phi|)=\frac{\kappa^2}8 \left(|\phi|^2-1\right)^2, \,\,\,\,\,\, V(|\psi|)=\frac{\beta^2 e_r^2}8 \left(|\psi|^2-\frac{\mu^2}{e_r^2}\right)^2.
\end{equation}
Here we have defined the dimensionless parameter $\kappa^2=2\lambda/ e^2$, which is related to the {\it Landau parameter} in the Ginzburg-Landau theory of superconductivity. The parameter $\beta^2=2\lambda_h/ e_h^2$ is its hidden analogue. Concerning the parameter $\mu$, it corresponds to the ratio of the hidden and visible gauge vector masses, $\mu =m_G/m_A= e_r \psi_0/\phi_0$.
In the ordinary Abelian Higgs model, $\kappa <1$ corresponds to Type I superconductivity and $\kappa>1$ to Type II superconductivity. The limiting value $\kappa=1$ is usually
called the Bogomolny point (for the ordinary Abelian Higgs model), at which a lower bound for the energy can be derived \cite{dVS, Bogo}. The bound is saturated whenever the gauge and scalar fields satisfy a system of coupled first order equations, and the energy is then proportional to the number of quantized units of magnetic flux of the vortex solutions.
After the redefinitions stated in eq.~\eqref{units}
the visible gauge and scalar field masses become $m_A= 1$ and $m_\phi= \kappa = \sqrt{{2\lambda}{/e^2}}$, respectively. Concerning the hidden Higgs mass, it takes the value $m_\psi= \sqrt{2\lambda_h \mu^2/e_h^2}$.
We are interested in finding static axially symmetric solutions of the field equations, so it will be convenient to consider
polar coordinates $(r,\varphi,z)$ and search for $z$ independent field configurations. We then propose the well-honored Nielsen-Olesen \cite{NO} ansatz both for the visible and the hidden sectors
\begin{equation}
\begin{array}{ll|l}
\phi= \rho(r) e^{in\varphi}, & A_\varphi= n\frac{\alpha(r)}r, & A_r = 0,\,\,\,\,\, A_z=0,\,\,\,\,\,\,\,\,\,\,\, n\in \mathds{Z}. \\
~
\\
\psi= p(r) e^{ik\varphi}, & G_\varphi= k\frac{\gamma(r)}{e_rr}, & G_r = 0,\,\,\,\,\, G_z=0,\,\,\,\,\,\,\,\,\,\,\, k\in \mathds{Z}.
\end{array}
\label{ansatz1}
\end{equation}
Inserting this ansatz, the energy density (\ref{redefenergy}) in terms of the redefined coordinates
and parameters \eqref{units} takes the form
\begin{eqnarray}
\tilde{\mathcal E}\!\!&=&\!\!\frac{n^2}{2r^2}\left(\frac{d\alpha}{dr}\right)^2+\frac{k^2}{2e_r^2 r^2}\left(\frac{d \gamma}{dr}\right)^2+\chi \frac{nk}{e_r r^2}\frac{d \gamma}{dr} \frac{d \alpha}{dr}+\frac{1}2\left(\frac{d \rho}{dr}\right)^2+\frac{1}2\left(\frac{d p}{dr}\right)^2 \nonumber\\
&&\!\! +\frac{n^2}{2r^2} \left(1-\alpha\right)^2 \rho^2+\frac{k^2}{2r^2} \left(1-\gamma \right)^2 p^2+\frac{\kappa^2}8\left(\rho^2-1\right)^2+\frac{\beta^2 e_r^2}8\left(p^2-\frac{\mu^2}{e_r^2}\right)^2\!\!.\nonumber\\
\label{redefenergy2}
\end{eqnarray}
Finite energy density requires the following behavior of fields at the origin and at infinity
\begin{eqnarray}
\rho(0) = p(0) = 0 \; , && \lim_{r \to \infty} \rho(r) = 1 \;, \;\;\; \lim_{r \to \infty} p(r) = \frac{\mu}{e_r} \nonumber\\
\alpha(0) = \gamma(0) = 0 \; , && \lim_{r \to \infty} \alpha(r) = \lim_{r \to \infty} \gamma(r) = 1
\label{boundary}
\end{eqnarray}
Using the asymptotic behavior and the fact that finite energy requires covariant derivatives for both scalars to vanish at infinity one finds that the magnetic flux in the visible and hidden sectors can be written in terms of the scalar fields in the form
\begin{eqnarray}
\Phi_A &=& \oint_{{\cal C}_\infty} \!\!\!A_\mu dx^\mu = \frac{i}{e|\phi_0|^2} \oint_{{\cal C}_\infty} \!\!\! \phi^* \partial_\mu \phi\, dx^\mu \,\,\,\,= \frac{2\pi n}e \,, \;\;\; n \in \mathds{Z}
\label{primera}\\
\Phi_G &= & \oint_{{\cal C}_\infty} \!\!\!G_\mu dx^\mu = \frac{i}{e_h|\psi_0|^2} \oint_{{\cal C}_\infty} \!\!\! \psi^* \partial_\mu \psi\, dx^\mu \!= \frac{2\pi k}{e_h} \,, \;\;\; k \in \mathds{Z}
\label{segunda}
\end{eqnarray}
Here the fluxes are written in terms of the original fields introduced in eqs.\eqref{1}-\eqref{trez}, i.e.~before redefining coordinates, coupling constants and fields.
Given ansatz \eqref{ansatz1}, the field equations for the model take the form
\begin{eqnarray}
&& \hphantom{-}n \alpha''+\chi\frac{k}{e_r} \gamma'' -\chi\frac{k}{e_r} \frac{\gamma'}r-n\frac{\alpha'}r -n\left(\alpha-1\right)\rho^2 =0. \label{diffeq1} \\
&& \hphantom{-} \frac{k}{e_r} \gamma''+n\chi \alpha''-\frac{k}{e_r}\frac{\gamma'}r-\chi n\frac{\alpha'}r-e_r k\left(\gamma-1\right)p^2=0. \label{diffeq2} \\
&&\hphantom{-}\frac{1}r \frac{d}{dr}\left(r \rho'\right)-\frac{n^2}{r^2}\left(1-\alpha\right)^2 \rho-\frac{\kappa^2}2\left(\rho^2-1\right)\rho=0.\label{order field} \label{diffeq3}\\
&&\hphantom{-}\frac{1}r \frac{d}{dr}\left(r p'\right)-\frac{k^2}{r^2}\left(1-\gamma\right)^2 p-\frac{\beta^2 e_r^2}2\left(p^2-\frac{\mu^2}{e_r^2}\right)p=0.\label{disorder field}
\end{eqnarray}
where the prime indicates from now on a derivative with respect to $r$.
Equations (\ref{diffeq1})-(\ref{disorder field}) decouple in the asymptotic regime, where analytic solutions can easily be found. The asymptotic behavior of $\alpha(r)$ and $\gamma(r)$ is encoded in the equation
\begin{equation}
\left[ r\frac{d}{dr} \left(\frac{1}r \frac{d}{dr}\right) \right] F_{\pm }=\frac{1}{\sqrt{C_{\pm}}} F_{\pm },
\end{equation}
where $F_\pm$ are linear combinations of $\alpha$ and $\gamma$, and $C_\pm$ are coefficients depending on $\chi$ and $\mu$ (see the appendix for details). Finite energy per unit length solutions require
$
\chi^2 <1
$.
Thus, in order to have finite energy vortex solutions, the parameter $\chi$, which controls the mixing between the visible and hidden sectors, should satisfy $|\chi|<1$.
Due to the presence of the gauge kinetic mixing, no first-order Bogomolny equations \cite{dVS, Bogo} can be found when $\chi \ne 0$, except for a very particular case \cite{betti}. Evidently, if the fields and parameters in the visible and the hidden sector are identified (which also implies identifying the number of units of magnetic flux in the ansatz), expression \eqref{redefenergy} becomes the same as that of the ordinary
Abelian Higgs model apart from an overall factor $1/2$ and a shift in the gauge coupling constant $e \to e/\sqrt{1 - \chi^2}$. Hence, in this very special case one finds the usual Bogomolny equations, with the Bogomolny point separating Type I and Type II superconductivities shifted accordingly, $\kappa ^2 \to (1 - \chi^2)\kappa^2$. We shall not discuss this case, in which visible and hidden sectors become indistinguishable, since it escapes the main interest of our work.
\section{One unbroken $U(1)$ symmetry}
Let us start by studying the existence and stability of vortex solutions when one of the $U(1)$ gauge groups remains unbroken.
A related discussion has been presented in \cite{betti}, but we include the analysis here for completeness and to
highlight certain features of the model that we consider of interest.
Let us assume that the visible $U(1)$ gauge group remains unbroken (we could have chosen the other way around as well).
The simplest way to achieve this is by eliminating the visible scalar sector, so that all $\phi$-dependent terms in Lagrangian \eqref{1} are absent.
The energy density then reads
\begin{equation}
{\mathcal E}_{U(1)} = \frac{B_iB_i}2+\frac{B_{ hi}B_{hi}}2+ \chi B_iB_{hi} +\frac{1}2 \left|\partial_i \psi -ie_h G_i \psi\right|^2
+V(|\psi|)
\label{originalE}
\end{equation}
We now redefine the visible magnetic field as
\begin{equation}
B_i = \tilde B_i - \chi B_{hi},
\label{Bredef}
\end{equation}
so that the energy density ${\mathcal E}_{U(1)}$ becomes
\begin{equation}
{\mathcal E}_{U(1)} = (1 - \chi^2) \frac{B_{hi}B_{hi}}2+\frac{\tilde B_i\tilde B_i}2 +\frac{1}2 \left|\partial_i \psi -ie_h G_i \psi\right|^2
+V(|\psi|).
\label{enB}
\end{equation}
Now, a redefinition of the hidden vector field as
\begin{equation}
G_i = \frac{G'_i}{\sqrt{1- \chi^2}},
\end{equation}
leads to $B_{hi}=B'_{hi}/\sqrt{1-\chi^2}$. We can rewrite the energy (\ref{enB}) in terms of the new fields as
\begin{equation}
{\mathcal E}_{U(1)} = \frac{{B'_{{h}i}}B'_{{h}i}}2+\frac{\tilde B_i \tilde B_i}2 +\frac{1}2 \left|\partial_i \psi -ie_{\rm eff}G'_i \psi\right|^2
+V(|\psi|),
\label{enB2}
\end{equation}
where we have defined an effective coupling constant $e_{\rm eff}$ for the hidden gauge field
%
\begin{equation}
e_{\rm eff} = \frac{e_h}{\sqrt{1 - \chi^2}}.
\label{bis}
\end{equation}
Let us note that in terms of the redefined fields the energy density is the sum of two uncoupled terms: the one corresponding to the hidden sector coincides with the ordinary Nielsen-Olesen vortex energy density, while the other one is just a Maxwell term for the $\tilde B$ magnetic field. In this form, the energy density can be written as a sum of squares whenever the coupling constants are adjusted to fulfill the Bogomolny condition
\begin{eqnarray}
\begin{aligned}
E/\ell=\psi_0^2\int d^2 x\frac{1}4\left\{\left(G'_{ij}\pm \varepsilon_{ij}\left(\psi^a\psi^a-1\right) \right)^2 + \left(D_i\psi^a \mp \varepsilon^{ab}\varepsilon_{ij}D_j\psi^b\right)^2\right.\\
+ \left. 4\left(\frac{\beta_h}2-\frac{1}2\right)\left(\psi^a\psi^a-1\right)^2\pm \left( \varepsilon_{ij}G'_{ij}\mp \varepsilon^{ab}\varepsilon_{ij}\partial_i\left(\psi^aD_j\psi^b\right)\right)
+ \tilde B_i \tilde B^i \right\}.
\end{aligned}
\end{eqnarray}
where we have moved to dimensionless variables, $r\rightarrow r/(e_{\rm{eff}} \psi_0)$, $G'_i\rightarrow \psi_0 G'_i$, $\psi \rightarrow \psi_0\psi$, $\tilde A_i\rightarrow \psi_0\tilde A_i$.
The energy is then bounded from below:
\begin{equation}
E/\ell\geq \psi_0^2\frac{2\pi}{e_{\rm eff}}k, \,\,\,\,\,\,\,\,\,\,\,\,\, k\in \mathds{Z}.
\end{equation}
The bound is saturated when the following set of Bogomolny equations is satisfied
\begin{eqnarray}
G'_{ij}&=&\mp \varepsilon_{ij}\left(\psi^a\psi^a-1\right).\\
D_i\psi^a&=&\pm \varepsilon^{ab}\varepsilon_{ij} D_j\psi^b.\\
\frac{1}2\varepsilon_{ij}\tilde F_{ij}&=&0.
\end{eqnarray}
Thus, the configuration of minimum energy is the one where $\tilde B=0$. Going back to the original field of eq.~(\ref{Bredef}),
\begin{equation}
B=-\chi B_h.
\label{identity}
\end{equation}
This result shows that even in the absence of symmetry breaking, the mixing between the visible and the hidden gauge fields forces the former
to form a vortex with the same winding number $k$ as the broken gauge field, hence having a quantized magnetic flux
\begin{equation}
\Phi = \oint_{{\cal C}_\infty} \!\!\!A_\mu dx^\mu =-\frac{\chi}{e_{{{\rm{eff}}}} } 2\pi k.
\end{equation}
Relation \eqref{identity} between both gauge fields implies that even in the absence of a symmetry breaking of the visible sector, the kinetic gauge mixing forces the magnetic field to have an exponential decay controlled by the hidden gauge field mass. Moreover, since in this case the visible magnetic field $B$ is related to the hidden one according to $B = -\chi B_h$, its strength is diminished by the kinetic mixing parameter.
This result could have interesting phenomenological implications if this model is considered as providing a mixing of hidden and visible cosmic strings in the early universe.\footnote{It has been noted that cosmic strings produced during phase transitions could seed primordial magnetic fields \cite{Vachaspati:1991nm}. One could think in a physical scenario where dark strings are formed during phase transition of the hidden sector, and as a consequence of the mixing, visible cosmic strings are formed, which in turn could seed a primordial magnetic field.}
Note that a similar topological effect for the dark and visible magnetic charge relation can take place, as described in \cite{Brummer:2009cs}.
When $B$ is an external field, $\tilde B=0$ is no longer a solution, and the role of the kinetic mixing is to lower the magnetic energy of the visible sector, as noted in \cite{betti}.
\section{Numerical results}
We shall first solve equations \eqref{diffeq1}-\eqref{disorder field} using a simple and effective variational approach that has been shown to render the energy of vortex solutions with similar accuracy as more elaborate methods \cite{hill}. Using this approach we shall analyze the dependence of the energy on the kinetic mixing parameter $\chi$ and the gauge coupling constants.
We shall also solve the field equations using an asymptotic shooting method in order to obtain accurate profiles of the gauge and scalar field vortex configurations.
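Before turning to the variational analysis, let us illustrate how the shooting method can be set up for the radial system \eqref{diffeq1}-\eqref{disorder field}. The following Python lines give a minimal sketch (the parameter values, integration range and tolerances are illustrative choices of ours, not the code used for the figures): the profiles are integrated from a small radius outwards and the free small-$r$ coefficients are adjusted to match the boundary conditions \eqref{boundary}. Note that solving for $\alpha''$ and $\gamma''$ requires inverting a $2\times2$ matrix whose determinant is proportional to $1-\chi^2$, consistently with the condition $|\chi|<1$ found above.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import root

# Illustrative shooting sketch; parameter values, integration range
# and tolerances are arbitrary choices.
n, k, chi, er, kappa, beta, mu = 1, 1, 0.3, 1.0, 1.0, 1.0, 1.0

def rhs(r, y):
    al, dal, ga, dga, rho, drho, p, dp = y
    # inhomogeneous parts of the two mixed gauge-field equations
    b1 = chi*(k/er)*dga/r + n*dal/r + n*(al - 1)*rho**2
    b2 = (k/er)*dga/r + chi*n*dal/r + er*k*(ga - 1)*p**2
    # solve the 2x2 linear system for alpha'' and gamma''; its
    # determinant is n*(k/er)*(1 - chi**2), nonzero iff |chi| < 1
    M = np.array([[n, chi*k/er], [chi*n, k/er]])
    d2al, d2ga = np.linalg.solve(M, [b1, b2])
    d2rho = (-drho/r + n**2*(1 - al)**2*rho/r**2
             + 0.5*kappa**2*(rho**2 - 1)*rho)
    d2p = (-dp/r + k**2*(1 - ga)**2*p/r**2
           + 0.5*beta**2*er**2*(p**2 - mu**2/er**2)*p)
    return [dal, d2al, dga, d2ga, drho, d2rho, dp, d2p]

r0, R = 1e-3, 20.0

def mismatch(s):
    # small-r behavior: alpha ~ a r^2, gamma ~ b r^2,
    # rho ~ c r^|n|, p ~ d r^|k|
    a, b, c, d = s
    y0 = [a*r0**2, 2*a*r0, b*r0**2, 2*b*r0,
          c*r0**abs(n), abs(n)*c*r0**(abs(n)-1),
          d*r0**abs(k), abs(k)*d*r0**(abs(k)-1)]
    yR = solve_ivp(rhs, (r0, R), y0, rtol=1e-8, atol=1e-10).y[:, -1]
    return [yR[0] - 1, yR[2] - 1, yR[4] - 1, yR[6] - mu/er]

sol = root(mismatch, x0=[0.5, 0.5, 0.5, 0.5])
print("shooting coefficients (a, b, c, d):", sol.x)
\end{verbatim}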
\subsection{Variational analysis}
The idea is to combine powers of exponentials to engineer functions $\rho, \alpha, p$ and $\gamma$
with the short- and long-distance behavior imposed by conditions \eqref{boundary}
\begin{equation}
\begin{array}{ll}
\alpha(r)= \left(1-e^{-u r}\right)^2, & \rho(r)= \left(1-e^{-hr}\right)^{|n|} \\
\gamma(r)=\left(1-e^{-f r}\right)^2, &p(r)=\frac{\mu}{e_r} \left(1-e^{-v r}\right)^{|k|}. \label{ansatfi}
\end{array}
\end{equation}
Variational parameters $u$ and $f$ are related to the visible and hidden gauge field masses respectively while $h$ and $v$ are related to the masses of the visible and hidden Higgs fields. In terms of these variational parameters $\tilde{\mathcal E}$ takes the form:
\begin{eqnarray}
\tilde{\mathcal E}\!\!\!\!&=&\!\!\!\!\frac{k^2}{2e_r^2}\left(\frac{e^{-4fr}}{r^2}\left(\mu ^2 \left(1-2 e^{f r}\right)^2
\left(1-e^{-r v}\right)^{2 |k|}+4 f^2
\left(e^{f r}-1\right)^2\right)\right.\nonumber\\
&&\!\!\!\!\left. +\,{4 v^2\mu^2}e^{-2vr} \left(1-e^{-rv}\right)^{2|k|-2}\vphantom{\frac{e^{-4fr}}{r^2}}\right)+
\frac{n^2}2\left(\frac{e^{-4ur}}{r^2}\left( \left(1-2 e^{u r}\right)^2
\left(1-e^{-hr}\right)^{2 |n|}\right.\right.\nonumber \\ \nonumber
&&\!\!\!\!\left. \left.
+4 u^2
\left(e^{u r}-1\right)^2\right)+{4 h^2}e^{-2hr} \left(1-e^{-hr}\right)^{2|n|-2}\vphantom{\frac{e^{-4fr}}{r^2}}\right)\\ \nonumber
&&\!\!\!\!+ nk \frac{4uf\chi }{e_r r^2}\left(e^{rv}-1\right)\left(e^{fr}-1\right) e^{-2r(f+u)}+ \frac{\beta^2 }{8} \frac{\mu^4}{e_r^2} \left( \left(1-e^{-rv}\right)^{2 |k|}-1\right)^2\nonumber\\
&&\!\!\!\!+\frac{\kappa^2 }{8} \left(\left(1-e^{-h r}\right)^{2
|n|}-1\right)^2
\label{energyvar}
\end{eqnarray}
Apart from the variational parameters, there are seven free parameters which should be chosen on physical grounds: $\kappa$ and $\beta$, related to the Landau parameters for both the visible and hidden sector, $e_r = e_h/e$ related to gauge coupling constants, $\mu = m_G/m_A$, the ratio of gauge field masses, $\chi$ which measures $A_\mu$ and $G_\mu$ mixing strength and $n$, $k$, the number of units of visible and hidden magnetic fluxes.
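The minimization itself is straightforward to reproduce; the following Python lines give a minimal sketch (the trial profiles and the reduced energy density are the ones written above, while the integration cutoff, tolerances and default parameter values are illustrative choices of ours):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

# Trial profiles and reduced energy density as in the text;
# cutoffs and default parameter values below are illustrative.
def density(r, q, n, k, chi, er, kappa, beta, mu):
    u, f, h, v = q
    a    = (1 - np.exp(-u*r))**2                # alpha(r)
    g    = (1 - np.exp(-f*r))**2                # gamma(r)
    rho  = (1 - np.exp(-h*r))**abs(n)           # rho(r)
    p    = (mu/er)*(1 - np.exp(-v*r))**abs(k)   # p(r)
    da   = 2*u*np.exp(-u*r)*(1 - np.exp(-u*r))
    dg   = 2*f*np.exp(-f*r)*(1 - np.exp(-f*r))
    drho = abs(n)*h*np.exp(-h*r)*(1 - np.exp(-h*r))**(abs(n)-1) if n else 0.0
    dp   = (mu/er)*abs(k)*v*np.exp(-v*r)*(1 - np.exp(-v*r))**(abs(k)-1) if k else 0.0
    return (n**2*da**2/(2*r**2) + k**2*dg**2/(2*er**2*r**2)
            + chi*n*k*da*dg/(er*r**2)
            + 0.5*drho**2 + 0.5*dp**2
            + n**2*(1 - a)**2*rho**2/(2*r**2)
            + k**2*(1 - g)**2*p**2/(2*r**2)
            + kappa**2*(rho**2 - 1)**2/8
            + beta**2*er**2*(p**2 - mu**2/er**2)**2/8)

def energy(q, n=1, k=1, chi=0.1, er=1.0, kappa=1.0, beta=1.0, mu=1.0):
    f = lambda r: 2*np.pi*r*density(r, q, n, k, chi, er, kappa, beta, mu)
    return quad(f, 1e-8, 60.0, limit=300)[0]

best = minimize(energy, x0=[1, 1, 1, 1], method="Nelder-Mead")
print("E per unit length (units of phi_0^2):", best.fun)
\end{verbatim}
For $\chi=0$ the two sectors decouple, so the minimization can be checked against the known Abelian Higgs model results.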
To start with, and in order to test our variational approach, we have considered the case in which there is no mixing ($\chi = 0$), for which we
have a direct comparison with
very accurate numerical results \cite{JR,dVS}. We found that there is excellent
agreement between those results and ours. As an example, the exact $n=1$ vortex energy
per unit length at the Bogomolny point is $E/{\ell}=|\phi_0|^2$, while that obtained in ref.~\cite{JR} using a refined variational method is $E/{\ell} = 1.00000 |\phi_0|^2$. Concerning our simpler variational approach, we obtained $E/{\ell} = 1.01823 |\phi_0|^2$. In short, we trust the results of our variational calculation.
When the mixing between hidden and visible vector fields is so small that it can be ignored, the visible and hidden terms in the
model defined by Lagrangian (\ref{1})
decouple, and there exist two unrelated vortex solutions with winding numbers $n$ and $k$, respectively. Recall that when there is just one gauge field and one complex scalar and the Landau parameter is larger than the value it takes at the Bogomolny point, a vortex with winding number $n>1$ decays into separated vortices
(see for example \cite{JR}); this is then true for each of the decoupled sectors referred to above.
As we shall see, non-negligible values of the kinetic mixing parameter $\chi$ can have a great impact on the existence of vortex solutions and their behavior.
This is also the case for different values of the
hidden gauge coupling constant $e_r$ and/or the hidden gauge field mass appearing in the $\mu$ parameter.
\subsection{Changing $\chi$}
Here we study vortex stability as a function of the mixing parameter $\chi$. As highlighted in the previous section, when the mixing parameter vanishes we are left with two uncoupled vortices; if their Landau parameters $\kappa, \beta$ are greater than one and the corresponding winding numbers are greater than one, they become unstable, decaying into configurations of smaller winding numbers.
We shall show here that if the mixing is non-negligible, the stability conditions change and instability can take place without requiring that both $\kappa, \beta >1$ simultaneously.
To see this we fix $\kappa$ to the value it takes at the stability critical point (the Bogomolny point \cite{dVS, Bogo}) in the absence of mixing, when the theory reduces to two uncoupled Abelian Higgs models exhibiting independent vortex solutions. We then study the energy as a function of the hidden Landau parameter $\beta$ for different values of $\chi$.
For the case $\chi n k >0 $, our results are shown in figure
(\ref{fig:fig1}), where we plot the energy as a function of $\beta$ for a (2,2) vortex configuration compared to twice the energy of a (1,1) configuration, for different values of $\chi$. Here our notation $(n,k)$ stands for the energy given by equation (\ref{energyvar}) with winding number $n$ in the visible sector and $k$ in the hidden one. We see that as $\chi$ grows, the critical point beyond which the instability sets in moves to lower and lower values of $\beta$.
When $\chi < 0$ and $nk > 0$ the situation changes drastically. One can easily see this by considering the particular limiting case $\chi \rightarrow -1$, $nk >0$, with all other physical parameters of the visible and hidden sectors identical. With this choice the gauge fields are indistinguishable, and hence the first three terms in \eqref{redefenergy2} cancel out, so that the energy is smaller than in the case in which both signs coincide. Since there is no contribution to the energy from the visible and hidden field strengths, one should expect that the total energy could become negative in some region of physical parameters and vortex-like solutions will cease to exist. Our numerical analysis confirms that this is indeed what happens, as can be seen in figure~(\ref{fig:negative_chi}) where, as $\chi$ approaches $-1$, the energy becomes smaller, until, in the region $\chi \gtrsim -1$, it eventually becomes negative.
If $\chi$ is still positive but $nk <0$, {\it{i.e.}} the magnetic fluxes of the hidden and visible sectors have opposite signs, the variational analysis shows in figure~(\ref{fig:fig4}) that the free energy diminishes as $\chi$ grows towards one. This means that, when the mixing parameter is not negligible, it is energetically favorable to form vortices of opposite magnetic fluxes.
\subsection{Changing the ratio $e_h/e \equiv e_r$}
When the gauge couplings of the visible and hidden sectors are different, the conclusion concerning the stability of vortices is similar to that in subsection (4.2). To see this, let us fix the visible gauge coupling to $e=1$ and vary the corresponding hidden one.
We again choose to study the energy of a (2,2) vortex and compare it with twice the energy of a (1,1) vortex.
We show in figures (\ref{fig:fig2})-(\ref{fig:fig3}) the energy as a function of $\beta$ for $e_r>0$ (i.e.\ when both coupling constants have the same sign). Figure (\ref{fig:fig2}) shows that for $e_h$ and $\chi$ very small ($e_h = \chi = 10^{-4}$) the critical stability point does not change with respect to the case without mixing. In contrast, when $e_h$ grows beyond
the value $e_h = 1$, the critical point moves to the right, as can be seen in Figure (\ref{fig:fig3}) for $e_h=10,20$. Thus, as was to be expected, vortex stability is significantly affected only for large hidden gauge couplings.
In the case ${\rm sign \,}e \ne {\rm sign \,}e_h$ (e.g.\ $e_r=-1$), interesting phenomena take place for a suitable choice of the remaining parameters. To see this, let us consider a $\mathcal {CP}$ transformation of one of the fields, say $\tilde G_\mu\equiv \mathcal{CP}( G_\mu)=-G_\mu$, and choose $\tilde G_\mu= A_\mu$. Then it is possible to get a cancellation of the kinetic terms for both vector fields when the physical parameters are chosen to be $\chi=\mu=1$\footnote{Note that the condition
$|\chi|<1$, previously found from asymptotic consistency, does not hold in the present case.}. One could think of the above situation as describing a mixing between a gauge field from the visible sector and an anti-hidden gauge field from the hidden sector (of course, this requires a definition of the hidden field's antiparticles).
Now, when the gauge field kinetic terms cancel out, the field equation for the visible gauge field (which is identical to the $\mathcal{CP}$-transformed hidden one) reduces to
\begin{equation}
ie\phi^* (\partial^\mu-ie A^\mu)\phi = 0,
\label{x}
\end{equation}
so that just using the scalar field ansatz \eqref{ansatz1} one has, from the angular equation
\begin{equation}
(\partial_\varphi-ie A_\varphi)\phi = i\rho(r)(n - eA_\varphi) = 0,
\label{xx}
\end{equation}
leading to
\begin{equation}
A_\varphi= \frac{n}{r} = \tilde G_\varphi.
\label{esta}
\end{equation}
The singularity at the origin of both fields shows that, in the case under study, there are no regular gauge field solutions. Note that this singular solution for the gauge fields has been obtained
without any reference to the scalar field radial solution $\rho(r)$, since the corresponding field equation is completely decoupled from the gauge field and depends only on the symmetry breaking potential. The only remnant of the gauge-scalar field interaction is the winding number $n$ appearing in eq.\,\eqref{esta} because of the phase in the scalar field ansatz.
If one inserts the solution \eqref{esta} in the
field equation for the Higgs scalar,
\begin{equation}
D_\mu D^\mu\phi = \frac{\delta V[\phi]}{\delta \phi^*},
\end{equation}
one just gets
\begin{equation}
D_r D^r\phi = \frac{\delta V[\phi]}{\delta \phi^*},
\end{equation}
or, since $A_r=0$,
\begin{equation}
\rho'' + \frac{1}{r} \rho' - \frac{\kappa^2}{2}(\rho^2 - 1) \rho = 0.
\label{simp}
\end{equation}
(The same result can be obtained by setting $\alpha =1$ in eq.\,\eqref{diffeq3}.)
Comparing eq.\,\eqref{simp} with the one corresponding to global vortices (see for example \cite{global, Vilenkin}),
one can see that the only difference between the two is that,
since there is no gauge field in the global $U(1)$ model, its radial scalar field equation contains an extra term proportional to $n^2$,
which in our model is canceled precisely by the contribution of $A_\varphi$. It is precisely due to the presence of this $n^2$ term that the global vortex energy diverges logarithmically \cite{hill}.
To see whether there is any energy divergence in our case, we insert ansatz (\ref{ansatfi}) and the value of $A_\varphi$ given in (\ref{esta}) in the energy per unit length given by (\ref{redefenergy}). We get (for $n=1$)
\begin{equation}
\frac{E}{\phi_0^2\ell}=2\pi \left(\frac{1}8+\frac{89 \kappa^2}{1152 \mu^2} \right).
\end{equation}
Hence, for any value of the variational parameter $\mu$, the above expression is finite. The minimum of the energy corresponds to $\mu \rightarrow \infty$, so that $\rho$ becomes trivial, $\rho=1$, and $\phi=\phi_0 e^{in\varphi}$, an expression which is ill-defined at the origin. Thus the energy per unit length vanishes, and no regular non-trivial vortex solution exists. The same conclusion holds for arbitrary $n$. This result could have been obtained just by using the ordinary Bogomolny equations and replacing $\alpha(r)=1$, which forces $\rho=1$.
Another interesting result corresponds to the case $e_r=-1$. Indeed, choosing the ansatz radial functions $\gamma(r)=\alpha(r)$ and $\mu=\chi \rightarrow 1$, a cancellation of the kinetic terms for the gauge fields $\gamma$ and $\alpha$ also takes place. Moreover, once again singular solutions for the gauge fields exist,
but consistency requires in this case an inverted magnetic flux condition, imposing $n = -k$.
\subsection{Radial dependence of fields}
In order to discuss the radial field profiles and their dependence
on the free parameters of the theory, we shall follow two different numerical approaches: namely, the variational approach already discussed and a shooting method.
We start by varying the kinetic mixing parameter $\chi$, setting the rest of the parameters to unity, $e_r=\beta=\kappa=\mu=1$, and the winding numbers to $k=n=1$, so that visible and hidden fields are indistinguishable.
In fig.~(\ref{fig:fig6}-a) we plot the visible magnetic field, obtained using the shooting method, as a function of $r$ for several values of the kinetic mixing parameter $\chi$. We can conclude that increasing $\chi$ makes the magnitude of the magnetic field decrease, thus lowering the visible magnetic energy.
Fig.~(\ref{fig:fig6}-b) shows the hidden magnetic field as a function of $r$ for several values of $\chi$, using the shooting solution. Since the visible and hidden fields are indistinguishable, we obtain the same profile as for the visible field.
In fig.~(\ref{fig:higgs}) we compare the visible and hidden scalar fields as functions of $r$ for several values of $\chi$. From this graph we conclude that as the kinetic mixing parameter increases, the field reduces its asymptotic value.
Further, we have studied the behavior of the solution under changes of the mass ratio parameter $\mu$, which has phenomenological relevance.
Note that for fixed $m_A=1$, increasing $\mu$ is equivalent to making the vacuum value of the hidden Higgs field larger than the visible one.
We have again taken $e_r=\beta=\kappa = 1$, and a small value $\chi=10^{-4}$. The results we report were obtained using the variational approach, since for large $\mu$ it is more appropriate than the shooting method. We plot in figure (\ref{fig:Bmu}) the visible magnetic field as a function of $r$ for several values of $\mu$. The plot suggests that when $\mu\geq 10$ the visible magnetic field changes, both in magnitude and penetration depth. This interesting result shows that a shorter range of the hidden field enforces a shortening of the visible range, i.e. that the nonlinear terms involving the slowly decaying hidden field affect the visible one. In the $\mu \lesssim 5$ range, where the shooting and the variational methods are both applicable, their results coincide, showing that the visible magnetic field has the same behavior as the one where the visible sector has no mixing with a hidden sector.
We have also studied the field behavior under changes in $\chi/e_r$. From the analysis of the previous sections one can see that this ratio can be regarded as an effective kinetic mixing, which we shall call $\chi_{\rm eff}\equiv \chi/e_r$. In particular, using $\chi_{\rm eff}$ instead of just $\chi$ allows one to consider more realistic values of the latter.
The profiles of the visible magnetic field for different values of $\chi_{\rm eff}$ are shown in fig.~(\ref{fig:chieff}). Keeping $\chi$ fixed to $\chi=10^{-5}$, we considered different values of $e_r$. Our results show that for a small $\chi_{\rm eff}$ ($e_r \gg \chi$) the magnetic field shows no departure from the behavior corresponding to the absence of kinetic gauge mixing with a hidden sector. However, as $\chi_{\rm eff}$ grows ({\it{i.e.}} $e_r \lesssim \chi$) the magnetic field decreases, but it has a slower decay as $r$ grows. For the curves of fig.~(\ref{fig:chieff}) we have fixed the rest of the physical parameters to unity. Note that a value of $\chi_{\rm eff} > 1$ can be achieved by choosing small values of the kinetic mixing, for instance $\chi=10^{-7}$ and $e_r=10^{-8}$.
\subsection{Vortex decay into elementary configurations}
Vortices with winding numbers $(n,k)$ could be unstable and decay into lower energy configurations, when available, as is the case in
the ordinary Abelian Higgs model \cite{JR}. Indeed, in the absence of the hidden sector, the energy density in the type-II superconductivity vortex regime ($\kappa>1$) is proportional to
the winding number squared, say $n^2$. Thus, a vortex with winding number $n=2$ will decay into two vortices of winding number $n=1$. We already studied the stability of the vortices on general grounds in sections 4.2 and 4.3.
When the mixing with the hidden sector is considered, the energy is no longer proportional to the two available winding numbers, $n^2, k^2$, but will also depend on the contribution of the mixing term, which is related to the winding numbers $n$ and $k$ through the field strengths and also depends on the values of the parameters $\chi$ and $e_r$. In fact, we have seen in section 3.2 that vortex decay depends crucially on the sign of $\chi$.
We shall consider two types of elementary vortex configurations: the $(1,0)$ one carrying just one unit of visible magnetic flux and the $(0,1)$ carrying instead just one unit of hidden magnetic flux. Then, starting with an $(n,k)$ configuration we shall analyze
under which conditions such a configuration could decay into one with $n$ elementary vortices of type $(1,0)$ and $k$ elementary vortices of type $(0,1)$.
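In other words, the decay is energetically allowed whenever
\begin{equation*}
E_{(n,k)} > n\, E_{(1,0)} + k\, E_{(0,1)},
\end{equation*}
and the comparisons below simply evaluate the two sides of this inequality for representative values of the parameters.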
Let us consider for definiteness the unbroken symmetry case discussed in our previous section. Taking for instance $\phi=0$, $\kappa= 0$, one can construct a $k(0,1)$ configuration with $k$ spatially superimposed hidden vortices of unit flux. Then, a vortex configuration of the type $n(1,0)+k(0,1)$ can be formed by considering this configuration and one where the role of visible and hidden fields is inverted and $k$ is replaced by $n$.
We illustrate the decay from the $(n,k)$ configuration as described above in table~(\ref{tab:tabla1}) by comparing the energies of the $(2,2)$ configuration with that of the $2(0,1)+2(1,0)$ one, for different values of $\chi$ and of the Landau parameters $\kappa, \beta$. As we can see, for small values ($\chi \sim 10^{-6}$) the decay of the configuration $(2,2)$ into the elementary ones takes place approximately at the critical value the Landau parameters would have if the mixing were absent, that is, for $\kappa=\beta \sim 0.8$. Now, as the mixing parameter grows, the decay takes place at lower and lower values of the Landau parameters. For instance, for $\chi\geq 0.5$ the decay of the vortex $(2,2)$ already occurs at $\kappa=\beta=0.6$.\\
\begin{table}[t]
\begin{adjustwidth}{1cm}{1cm}
\begin{center}
\begin{tabular}{ r r r r }
& $\kappa=\beta=0.6$ & \,\,\,\,\, & \,\,\,\,\,\,\,\, $\kappa=\beta=0.8$
\end{tabular}
\begin{tabular}{ | c | c | c ||| c | c |}
\hline
$\chi$ & (2,2) & $2(0,1)+2(1,0)$ & (2,2) & $2(0,1)+2(1,0)$ \\
\hline
\hline
$10^{-6}$ & 3.2007 & 3.3100& 3.7194 & 3.7163 \\
\hline
$10^{-3}$ & 3.2016 & 3.3100 & 3.7207 & 3.7164 \\
\hline
$10^{-1}$ & 3.2806 & 3.3034 & 3.8380 & 3.7088 \\
\hline
0.5 & 3.5569 & 3.1277 & 4.2366 & 3.5060 \\
\hline
\end{tabular}
\end{center}
\caption{\footnotesize{Energy of the $(2,2)$ configuration (second and fourth columns) and that of the $2(0,1) + 2(1,0)$ (third and fifth columns) for different values of the kinetic mixing parameter $\chi$, and two different values of the Landau parameters $\kappa=\beta$. The rest of the physical parameters have been fixed to $e_r=\mu=1$.}}
\label{tab:tabla1}
\end{adjustwidth}
\end{table}
Let us note that one can reach the same conclusion by varying $e_r$ while keeping the kinetic mixing small, as discussed when we studied the radial field profiles in terms of the effective mixing parameter $\chi_{\rm eff}$. Note that for a phenomenologically acceptable, very small kinetic mixing parameter ($\chi \sim 10^{-6}$ or lower), the effect described above takes place when the hidden gauge coupling constant is very small,
$e_h/e \lesssim 10^{-6}$.
Finally, we have investigated the effect of changes of the vector field masses on the decay scenario. In table~(\ref{tab:tabla_mass}) the energies of a $(2,2)$ and a $2(0,1)+2(1,0)$ configuration are compared for several values of the mass ratio $\mu$ and for two different points in the $(\kappa, \beta)$ plane. One can see that increasing $\mu$ does not affect the stability of the vortices.
\begin{table}[t]
\begin{adjustwidth}{1cm}{1cm}
\begin{center}
\begin{tabular}{ r r r r }
& $\kappa=\beta=0.77$ & \,\,\,\,\, & \,\,\,\,\,\,\,\, $\kappa=\beta=0.8$
\end{tabular}
\begin{tabular}{ | c | c | c ||| c | c |}
\hline
$\mu$ & (2,2) & $2(0,1)+2(1,0)$ & (2,2) & $2(0,1)+2(1,0)$ \\
\hline
\hline
$10^{-3}$ &1.82266 &1.85818 &1.85975 &1.85818 \\
\hline
$0.1$ &1.84087& 1.87647& 1.87832& 1.87676 \\
\hline
$1.0$ & 3.64538 & 3.68768 & 3.71954& 3.71635 \\
\hline
20 & 730.8801 &733.6594 & 750.5002 & 745.1293 \\
\hline
\end{tabular}
\end{center}
\caption{\footnotesize{Energy of the $(2,2)$ configuration (second and fourth columns) and that of the $2(0,1) + 2(1,0)$ (third and fifth columns) for different values of the ratio of vector field masses, $\mu$, and two different values of the Landau parameters $\kappa=\beta=\left\{ 0.77, 0.8 \right\}$. The rest of the physical parameters have been fixed to $e_r=1$ and $\chi=10^{-4}$.}}
\label{tab:tabla_mass}
\end{adjustwidth}
\end{table}
\section{The field behavior in connection with superconductivity}
In view of the connection between the Landau-Ginzburg phenomenological theory for superconductors \cite{Ginzburg:1950sr} and the Abelian Higgs Model, superconductivity is a possible arena to test whether the mixing between the hidden and visible sectors could have a phenomenological impact. In this section we intend to give a brief and qualitative discussion on this issue.
If one looks for measurable quantities that may be affected by the gauge mixing in the superconductivity context, the length scales of the theory are the natural candidates to analyze.
In ordinary superconductivity (i.e., in the absence of a hidden sector) there are two characteristic lengths. One of them is the penetration depth of the external magnetic field, $\ell$. In the language we have been using, it is given by the inverse of the effective mass of the gauge field, thus $\ell= m_A^{-1}$. The other one is the characteristic length for the Cooper pairs, known as the {\it{coherence length}}, $\xi$, which in our notation would be $\xi=m_{\varphi}^{-1}$. These two lengths can be combined into one via the Landau parameter, defined in our model as $\kappa=\ell^2/\xi^2$.
Thus, within a phenomenological Ginzburg-Landau approach, there is only one free parameter, the Landau parameter, which, after the redefinitions of section 2, is given by $\kappa=\sqrt{2\lambda}/{e}$.
The results obtained in subsection 4.4 imply that when $\chi$ (or $\chi_{\rm eff}$) approaches unity the visible fields get greatly modified, as it happens for large values of the gauge boson mass ratio $\mu$.
This means that depending on the values of the physical parameters ($\mu, e_r, \chi, \kappa, \beta$) the energy of a superconductor can get modified, thus affecting the behavior of a superconducting sample, in particular the exclusion of the magnetic field from it.
In order to analyze this issue we shall study the energy density behavior as a function of $r$ in the context of superconductivity, when a mixing of visible photons with massive hidden photons through the kinetic mixing is present. We shall assume for simplicity that the energy density in the superconductor sample is governed - within the Ginzburg-Landau approach - by the usual free energy density, composed just of the visible magnetic field, the kinetic energy of the supercurrent and the condensation energy of the Cooper pairs. The existence of a hidden sector will be taken into account by inserting in such free energy the solutions obtained by the minimization of the complete visible-hidden model, eq.~(\ref{energy1}).
Then, the free energy density in the superconductor is taken as
\begin{equation}
\mathcal F^{visible}_{s}= \frac{B^2}2+\frac{1}2 |\partial_i \phi -i A_i \phi |^2+\frac{\kappa^2}8\left(|\phi|^2-1\right)^2.
\label{free_energy}
\end{equation}
Note that with our conventions the {\it Landau parameter} is just $\kappa$ and the Bogomolny point is $\kappa=1$.
We show in figure~(\ref{fig:energy_mass}) the energy density (\ref{free_energy}) as a function of $r$ for several values of the hidden vector field mass. The continuous solid line in the figure corresponds to the case of an ordinary superconductor (i.e. in the absence of a hidden sector). As we can see, when the parameter $\mu$ is small ($\mu\lesssim 15$) there is no appreciable change of the free energy compared to the case where there is no mixing with a hidden sector. As this parameter grows, we observe a departure from the ordinary superconductor curve. This result agrees with those reported in section 4. For high values of $\mu$ the visible magnetic field increases its amplitude, thus increasing the magnetic energy, but its penetration depth decreases. A similar conclusion should be reached by considering the energy density for different values of $\chi_{\rm eff}$.
\begin{table}[t]
\begin{adjustwidth}{1cm}{1cm}
\begin{center}
\begin{tabular}{ | c | c | }
\hline
$\chi$ & $\sigma/2\pi$ \\
\hline
\hline
$10^{-3}$ & 0.000003 \\
\hline
$10^{-2}$ & 0.000036 \\
\hline
$10^{-1}$ & 0.000709 \\
\hline
0.85 & 0.003882 \\
\hline
0.95 & 0.04066 \\
\hline
\end{tabular}
\end{center}
\caption{Surface energy for different values of the kinetic mixing parameter. As $\chi $ increases, the surface energy also increases. The remaining parameters have been fixed as $\kappa=\beta=e_r=\mu=1$.}
\label{tab:tabla2}
\end{adjustwidth}
\end{table}
The surface energy between normal and superconducting samples is a relevant quantity in superconductivity since its sign unequivocally defines the transition between type-I and type-II superconductivity. The minimum of the surface energy occurs at the point where the free energy attains its minimum (where the Bogomolny bound is saturated), which for a normal Nielsen-Olesen vortex, in dimensionless variables, is $\kappa=1$.
We have numerically studied the two dimensional surface energy $\sigma$ associated to the visible sector of our model, given by
\begin{equation}
\sigma=2\pi \int_0^\infty \left(\frac{1}2 \left(B(r)-\frac{\kappa}{2}\right)^2-\frac{\kappa^2}8 \rho(r)^4\right)r dr
\label{dddd}
\end{equation}
We see from this equation that $\sigma$ vanishes when $B(r)=\frac{\kappa}2 \left(1-\rho(r)^2\right)$, which is indeed the Bogomolny equation for the ordinary Abelian Higgs model, holding
when $\kappa = 1$.
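Indeed, writing the integrand of eq.~\eqref{dddd} as a product of two factors makes this explicit:
\begin{equation*}
\frac12\left(B-\frac{\kappa}{2}\right)^2-\frac{\kappa^2}{8}\,\rho^4
=\frac12\left(B-\frac{\kappa}{2}\left(1-\rho^2\right)\right)\left(B-\frac{\kappa}{2}\left(1+\rho^2\right)\right),
\end{equation*}
and the branch compatible with the boundary conditions $B\to 0$, $\rho \to 1$ at infinity is $B=\frac{\kappa}{2}(1-\rho^2)$.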
As stated above, the visible magnetic and scalar fields in eq.~\eqref{dddd} correspond to the solutions of the complete set \eqref{diffeq1}-\eqref{disorder field} that we found using an improved shooting method, in order to refine the accuracy of the calculation. To determine how the surface energy measured in experiment could be affected by the existence of a hidden sector, we have varied the free parameters of our model and made use of the equation for the surface energy. We show in table~(\ref{tab:tabla2}) the value of the surface energy when $\kappa=1$ for different values of the parameter $\chi$. The rest of the phenomenological parameters were fixed to $\beta=e_r=\mu=1$. We clearly see that increasing the value of $\chi$ makes the surface energy at $\kappa=1$ grow. This result can be interpreted as a shift in the value of the limiting point between type-I and type-II superconductivity, supporting our previous statement on the non-existence of first order Bogomolny equations.
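Once the radial profiles are available on a grid, the evaluation of eq.~\eqref{dddd} reduces to a one dimensional quadrature. A minimal sketch (Python; the array names \texttt{r}, \texttt{B}, \texttt{rho}, assumed sampled from the shooting solution, are illustrative) is:
\begin{verbatim}
import numpy as np

def surface_energy(r, B, rho, kappa):
    # Integrand of eq. (dddd); the returned value is sigma / (2 pi)
    integrand = (0.5 * (B - kappa / 2.0)**2
                 - kappa**2 / 8.0 * rho**4) * r
    return np.trapz(integrand, r)
\end{verbatim}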
\begin{table}[t]
\begin{adjustwidth}{1cm}{1cm}
\begin{center}
\begin{tabular}{ | c | c | }
\hline
$\chi_{\rm eff}$ & $\sigma/2\pi$ \\
\hline
\hline
$0.01$ & 0.000242 \\
\hline
$0.1$ & 0.007925 \\
\hline
$0.9$ & 0.058660 \\
\hline
1.0 & 0.069072 \\
\hline
1.25 & 0.102120 \\
\hline
\end{tabular}
\end{center}
\caption{Surface energy for different values of the effective kinetic mixing parameter. As $\chi_{\rm eff}$ increases, the surface energy also increases. For all the values shown here we have considered a kinetic mixing of $\chi=10^{-5}$. The remaining parameters have been fixed as $\kappa=\beta=\mu=1$.}
\label{tab:tabla3}
\end{adjustwidth}
\end{table}
From the experimental point of view such large values of the kinetic mixing parameter $\chi$
have been ruled out \cite{Essig:2013lka}. In view of this, we have computed the surface energy in terms of the {\it{effective}} kinetic mixing $\chi_{\rm eff}\equiv \chi/e_r$. In this way we can consider more realistic values of $\chi$ by taking a small value for the hidden gauge coupling compared to the visible one. In Table~(\ref{tab:tabla3}) we show the surface energy, eq.~(\ref{dddd}), for different values of the effective kinetic mixing. One can see that even for small kinetic mixing the surface energy now changes appreciably.
Concerning the visible magnetic field profiles, the results plotted in figure~(\ref{fig:Bmu}) suggest that the point at which the surface energy vanishes also changes significantly as $\mu$ grows.
\section{Summary and discussion}
In this work we have analyzed a gauge theory with a visible and a hidden sector whose dynamics is governed by two Abelian Higgs Lagrangians coupled through a gauge kinetic mixing. Imposing the usual cylindrically symmetric Nielsen-Olesen ansatz for gauge and scalar fields we have arrived at a system of four coupled radial equations which we have solved numerically.
We started by studying the case in which the $U(1)$ gauge symmetry is unbroken in one of the two sectors. This was achieved by not including the corresponding complex scalar in the Lagrangian. We found that even in this case the kinetic gauge mixing forces the existence of vortex configurations in the unbroken sector as well, with an associated magnetic field decaying exponentially at infinity with the same decay length as the one in the broken symmetry sector.
Interestingly, again in the case in which one $U(1)$ symmetry is unbroken, the gauge and scalar self-interaction coupling constants satisfy a relation which depends on the value of the gauge mixing parameter $\chi$, and first order Bogomolny equations exist in the broken sector. The fact that the two field strengths are proportional (with a proportionality factor $e\chi/e_h$) explains why both magnetic fields have the same exponential decay.
This is a relevant result that could in principle be exploited, considering for instance primordial magnetic field generation by dark superconducting strings in the early universe \cite{Vachaspati:1991nm}.
Concerning the case in which both $U(1)$ gauge symmetries are broken, we have found that the relevant parameters controlling stability are
$\chi n k$ (with $n$ and $k$ the units of magnetic flux) and the ratio of the gauge couplings $e_r = e_h/e$. Our numerical analysis shows that
for growing values of $\chi n k >0$ and $e_r>0$, the instability regime starts at lower values of
the hidden sector Landau parameter.
If $\chi$ is instead positive but $nk <0$, with $e_r$ positive, we find that the energy gets reduced as the parameter $\chi$ grows, the opposite of the $\chi n k>0$ case.
We also studied the dependence of the solutions on the gauge coupling constant ratio $e_r$. To this end we considered the case of small $\chi \sim 10^{-4}$ so as to detect the individual dependence on $e_r$. When $e_r>0$, for very small values of $e_r$ ($e_r = 10^{-4}$) the critical stability point does
not change significantly compared to the case with no mixing. In contrast, when $e_r > 1$ the critical stability point moves to the right as $e_r$ grows.
Interesting phenomena take place when ${\rm sign \,}e \ne {\rm sign \,}e_h$ (i.e. $e_r<0$) together with suitable choices of the remaining parameters. In particular, if the $\mathcal{CP}$ transformed hidden gauge field is equal to the visible one, $\mathcal{CP}( G_\mu)=A_\mu$, the kinetic terms for both vector fields cancel out for $\chi \rightarrow 1$ and $m_A = m_G$. This identification can be interpreted in terms of a mixing between a photon from the visible sector and an anti-hidden photon from the hidden sector (of course this requires a definition of the hidden field's antiparticles). With the gauge kinetic terms absent, one finds a solution of the form $\phi = \phi_0\exp(in\varphi)$ and $A_\varphi = n/r$. That is, both fields are singular at the origin but the singularities cancel out when computing the energy per unit length.
We have found that both hidden and visible magnetic fields reduce their magnitude when $\chi$ or $\chi_{\rm eff}$ approaches unity. With respect to
changes in $\mu$, the variational method shows observable effects in the visible magnetic field when $\mu \gtrsim 15$. Concerning the hidden magnetic field, it grows significantly for $\mu \gtrsim 1$.
Concerning the decay of $(n,k)$ vortices, we have studied the case in which the final configuration is a combination of $n$ $(1,0)$ and $k$ $(0,1)$ elementary vortices. The conclusion is that as the gauge mixing parameter grows, the decay takes place at lower and lower values of the hidden Landau parameter while the visible one is kept fixed. The same holds if one varies
$\chi_{\rm eff}$ or $1/e_r$. Using a phenomenologically acceptable kinetic mixing parameter ($\chi \sim 10^{-6}$) the effect described above takes place when the hidden gauge coupling constant satisfies $e_h/e \lesssim 10^{-6}$.
We have also presented a qualitative discussion of the results from previous sections in connection with superconductivity. As expected, for small $\chi$ the results remain unchanged with respect to the case in which no hidden sector is present. We have shown that the mass ratio $\mu$ and effective gauge kinetic mixing $\chi_{\rm eff}$ are the relevant parameters to study the hidden sector effect on a superconductor sample. Concerning the former, we found that the energy density grows when $\mu$ increases, but the effective penetration length is reduced.
In the normal superconducting theory the surface energy is zero at the Bogomolny point $\kappa=1$. However, in the presence of a gauge mixing, when $\chi$ or $\chi_{\rm eff}$ approach unity the surface energy changes its behavior and does not vanish for $\kappa=1$.
We conclude that in view of the very rich structure of the vortex solution space that we have found, it would be worthwhile to analyze the role of the vortex configurations in cosmology, hidden photon search and supersymmetric extensions. We expect to discuss these issues in a future work.
\section*{Acknowledgements}
P.A. was supported by FONDECYT project 11121403 and Anillo ACT 1102. F.A.S. is financially supported by CONICET, ANPCIT, UNLP and CICBA grants.
We are especially thankful to E. Moreno for his useful comments and help with the numerical calculations, to J.~Jaeckel for reading the manuscript and for his valuable comments, to J.~Gamboa for his suggestion and encouragement to look into this subject, and to A. Ringwald and J. Redondo for their participation in earlier stages of this work. We are also thankful to G. D\"uring, G. Lozano and E. Mu\~noz for discussions and comments.
\newpage
\section*{Appendix: Asymptotic behavior of the radial fields}
We find numerical solutions of the radial equations eqs.~(\ref{diffeq1})-(\ref{disorder field}) by implementing a shooting method to match the solutions of these equations in the limit $r\rightarrow \infty$. In order to find the analytical asymptotic solutions of these equations, we start by defining the functions $\tilde \alpha= \alpha-1$, $\tilde \gamma= \gamma-1$, $\tilde \rho=\rho-1$ and $\tilde p=p-\frac{\mu}{e_r}$, such that in the limit $r\rightarrow \infty$ they all satisfy
\begin{eqnarray}
\lim_{r \to \infty} \tilde \rho(r) = 0 \;, \;\;\; \lim_{r \to \infty} \tilde p(r) =0, \nonumber\\
\lim_{r \to \infty} \tilde \alpha(r) =0 \;, \;\;\; \lim_{r \to \infty} \tilde \gamma(r) = 0.
\label{boundary2}
\end{eqnarray}
With these redefinitions, eqs.~(\ref{diffeq1})-(\ref{disorder field}) take in the asymptotic limit the form
\begin{eqnarray}
\left[ r\frac{d}{dr} \left(\frac{1}r \frac{d}{dr}\right) \right] \left(n \tilde \alpha+\frac{k}{e_r}\chi \tilde \gamma \right) -n \tilde \alpha &=& 0,
\label{44}\\
\left[ r\frac{d}{dr} \left(\frac{1}r \frac{d}{dr}\right) \right] \left( \frac{k}{e_r} \tilde \gamma +n \chi \tilde \alpha \right) - \frac{k}{e_r}\mu^2 \tilde \gamma &=& 0, \label{45}\\
\tilde \rho''+\frac{\tilde \rho'}{r}-\kappa^2\tilde \rho&=&0,\\
\tilde p''+\frac{\tilde p'}{r}-\left(\beta \mu\right)^2\tilde p&=&0.
\end{eqnarray}
The solutions for $\tilde \rho$ and $\tilde p$ are
\begin{eqnarray}
\tilde \rho(r)&=& D_1 K_0 (\kappa r)+D_2 I_0(\kappa r),\\
\tilde p(r)&=& E_1 K_0(\mu \beta r)+ E_2 I_0(\mu \beta r).
\end{eqnarray}
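(The boundary conditions \eqref{boundary2} select the exponentially decaying Bessel functions, forcing $D_2=E_2=0$; the same reasoning will discard the exponentially growing $I_1$ pieces of the gauge field solutions written below.)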
Making $n \tilde \alpha \to \tilde \alpha$ and $(k/e_r)\, \tilde \gamma \to \tilde \gamma$, eqs.~\eqref{44}-\eqref{45} become
\begin{eqnarray}
\left[ r\frac{d}{dr} \left(\frac{1}r \frac{d}{dr}\right) \right] \left( \tilde \alpha+\chi \tilde \gamma \right) - \tilde \alpha &=& 0,\\
\left[ r\frac{d}{dr} \left(\frac{1}r \frac{d}{dr}\right) \right] \left( \tilde \gamma +\chi \tilde \alpha \right) -\mu^2 \tilde \gamma &=& 0,
\end{eqnarray}
which can be combined into the equation
\begin{eqnarray}
\left[ r\frac{d}{dr} \left(\frac{1}r \frac{d}{dr}\right) \right] \left( \tilde \alpha \left(A+\chi B \right) + \tilde \gamma \left(B+ \chi A \right) \right) = A \tilde \alpha +B \mu^2 \tilde \gamma, \label{desacoplo}
\end{eqnarray}
where $A, B$ are arbitrary constants.
We now introduce a constant $C$ through
\begin{eqnarray}
A+\chi B&=& CA,\\
B+\chi A&=& CB\mu^2,
\end{eqnarray}
and solve for the ratio $A/B$, finding
\begin{equation}
\frac{A_\pm}B= \frac{\mu^2-1}{2\chi}\pm \frac{1}{2\chi} \sqrt{\left(1-\mu^2\right)^2+4\mu^2 \chi^2}.
\end{equation}
so that $C_\pm$ can be written as
\begin{equation}
C_{\pm}= \frac{1}{2\mu^2}\left(\mu^2+1\pm \sqrt{\left(1-\mu^2\right)^2+4\mu^2 \chi^2}\right). \label{ce}
\end{equation}
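(For completeness: the two relations above give $A/B = \chi/(C-1)$ and $A/B = (C\mu^2-1)/\chi$, so that $C$ satisfies the quadratic equation
\begin{equation*}
\mu^2 C^2-\left(1+\mu^2\right) C+1-\chi^2=0,
\end{equation*}
whose two roots are precisely the $C_\pm$ of eq.~\eqref{ce}.)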
With this, eq.~(\ref{desacoplo}) becomes (for $\chi \neq 1$)
\begin{equation}
\left[ r\frac{d}{dr} \left(\frac{1}r \frac{d}{dr}\right) \right] F_{\pm}(r)= \frac{1}{C_\pm}F_{\pm},
\label{bessels}
\end{equation}
where the functions $F_\pm(r)$ are defined as
\begin{equation}
F_{\pm} (r)= \frac{A_\pm} B \tilde \alpha +\mu^2 \tilde \gamma. \label{ffunction}
\end{equation}
The solution of equation (\ref{bessels}) is then
\begin{eqnarray}
F_+ (r) &=& A_1 r K_1\left(\frac{r}{\sqrt{C_+}}\right) +A_2 r I_1\left(\frac{r}{\sqrt{C_+}}\right), \\
F_- (r) &=& B_1 r K_1\left(\frac{r}{\sqrt{C_-}}\right)+B_2 r I_1\left(\frac{r}{\sqrt{C_-}}\right).
\end{eqnarray}
From this result one gets $\tilde \alpha$ and $\tilde \gamma$ in the asymptotic limit $r\rightarrow \infty$:
\begin{eqnarray}
\tilde \alpha&=& n\frac{F_+ - F_-}{A_+/B- A_-/B},\\
\tilde \gamma&=& \frac{k}{e_r} \frac{\left((A_-/B) F_+ - (A_+/B) F_-\right)}{\mu^2\left(A_+/B- A_-/B \right)}.
\end{eqnarray}
Now, in order to have exponential decay for the massive fields at $r\rightarrow \infty$ one should impose $C_\pm >0$, which in turn implies
\begin{equation}
\left(1+\mu^2\right)^2 > \left(1-\mu^2\right)^2+4\mu^2 \chi^2,
\end{equation}
or, simplifying, $4\mu^2 > 4\mu^2\chi^2$, i.e.
\begin{equation}
\chi^2 <1. \label{condition}
\end{equation}
This is an important result showing that, in order to have finite energy vortex solutions, the parameter $\chi$ controlling the mixing between the visible and the hidden sectors should satisfy $|\chi| <1$.
\section{Introduction}
Given an initial quantum state $\rho$, that may be either a pure state or a density matrix, and assuming factorizability of the total Hilbert space to which it belongs, $\mathcal{H}=\mathcal{H}_{A}\otimes\mathcal{H}_{B}\otimes\mathcal{H}_{C}\otimes\dots$, we first define the reduced density matrix $\rho_{A}$ as $\rho_{A}= Tr_{\mathcal{H}_{A}^{c}}\rho$, where $\mathcal{H}_{A}^{c}=\mathcal{H}_{B}\otimes\mathcal{H}_{C}\otimes\dots$ is the complementary of $\mathcal{H}_{A}$ in $\mathcal{H}$. The Entanglement Entropy ( EE ) is the von Neumann entropy of $\rho_{A}$:
\[
S_A=-Tr_{\mathcal{H}_{A}}(\rho_A\log\rho_A).
\]
It is easy to show that $S_A$ corresponds to some measure of the entanglement between degrees of freedom inside $\mathcal{H}_{A}$ and degrees of freedom outside, hence the name.
A simple formula exists for the holographic computation of EE, if factorizability of the total Hilbert space is induced by dividing the space ( or more covariantly a space-like surface ) into non intersecting subspaces $A$, $B$, $C$, $\dots$, each one defining the corresponding Hilbert space $\mathcal{H}_{A}$, $\mathcal{H}_{B}$, $\mathcal{H}_{C}$, $\dots$ for the corresponding local degrees of freedom, and if a holographic description of the quantum mechanical theory is possible in terms of Einstein gravity, with $\rho$ represented by a classical metric solution. Ryu and Takayanagi for the static case at fixed boundary time \cite{Ryu:2006bv}, and later Hubeny, Rangamani and Takayanagi ( HRT ) for the covariant generalization \cite{Hubeny:2007xt}, proposed the following:
\begin{equation}\label{ryutak}
S_A=\frac{\mathcal{A}(\Sigma(A))}{4 G_N}.
\end{equation}
In the above equation $G_N$ is the Newton constant in the gravitational $d+1$ dimensional theory, and $\mathcal{A}(\Sigma(A))$ is the area of the extremal codimension two space-like surface $\Sigma(A)$ homologous to $A$ and such that, at the boundary of the gravitational manifold, $\partial \Sigma(A)=\partial A$. If the gravitational geometry is static, the above formula simplifies in the Euclidean signature by using a minimal surface \footnote{We will call the formula (\ref{ryutak}) and the corresponding surface $\Sigma(A)$ Ryu-Takayanagi or HRT depending on whether they are used for static spacetimes in Euclidean signature or for generically Lorentzian time dependent ones.}. In the static case a proof of the Ryu-Takayanagi formula was provided by \cite{Lewkowycz:2013nqa} \footnote{Although based on some assumptions, notably on the analytic continuation for the geometry dual to the replica trick, see \cite{Prudenziati:2014tta}.}, but one is still lacking for the time dependent covariant case. Nonetheless we will assume, throughout the paper, that (\ref{ryutak}) is valid.
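As a standard illustration of (\ref{ryutak}), consider the vacuum of a two dimensional CFT dual to pure AdS$_3$ in Poincar\'e coordinates, $ds^2=R^2\left(dz^2-dt^2+dx^2\right)/z^2$. The minimal curve anchored to a boundary interval of length $l$ at fixed time is the semicircle $z^2+x^2=(l/2)^2$, with regularized length $\mathcal{A}=2R\log(l/\epsilon)$, so that
\begin{equation*}
S_A=\frac{2R}{4G_N}\log\frac{l}{\epsilon}=\frac{c}{3}\log\frac{l}{\epsilon},
\end{equation*}
where in the last step we used the Brown-Henneaux central charge $c=3R/2G_N$; this matches the well known CFT result.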
One interesting thing about holography, and in particular when dealing with formulas like (\ref{ryutak}), is the interplay between quantum mechanics and classical general relativity; working with EE we can compare properties obeyed by $S_A$, deeply related to entanglement at the very foundations of quantum mechanics, with measures of areas of surfaces probing classical geometries of the dual space-time! The literature somehow related to this is vast, and covers the question of what quantum conditions ( inequalities ) can be derived from holography \cite{Bao:2015boa}, \cite{Bao:2015bfa} and \cite{Hayden:2011ag}, the opposite problem of what gravitational restrictions are imposed by quantum inequalities \cite{Lashkari:2014kda} and \cite{Lin:2014hva}, studies of the appearance of gravity equations of motion and dynamics from EE \cite{Banerjee:2014oaa}, \cite{Banerjee:2014ozp}, \cite{Faulkner:2013ica}, \cite{Lashkari:2013koa} and \cite{Swingle:2014uza}, and of the reverse \cite{Bhattacharya:2013bna} and \cite{Nozaki:2013vta}, and even the reconstruction of the metric from boundary EE data \cite{Spillane:2013mca}.
It is well known that EE satisfies a series of inequalities, among which the two most restrictive go under the name of strong subadditivity. Given three Hilbert spaces that factorize the total one $\mathcal{H}=\mathcal{H}_{A}\otimes\mathcal{H}_{B}\otimes\mathcal{H}_{C}\otimes\dots$, the two inequalities read:
\begin{subequations}
\begin{align}
S(A\cup B)+S(B\cup C) &\geq S(A)+S(C) \label{ssa1} \\
S(A\cup B)+S(B\cup C)&\geq S(B)+S(A\cup B\cup C) \label{ssa2}
\end{align}
\end{subequations}
where $S(A\cup B)$ refers to the EE computed by using the reduced density matrix living on the product of the two Hilbert spaces $\mathcal{H}_{A}\otimes\mathcal{H}_{B}$, and so on. If factorizability can be achieved by considering spatial non intersecting regions $A$, $B$ and $C$, then the same result holds for the corresponding EEs and $S(A\cup B)$ is the EE corresponding to the spatial region $A\cup B$ and so on. Note that these two equations are equivalent, as will be shown in the next section. Even if quantum mechanical proofs of (\ref{ssa1}) and (\ref{ssa2}) exist, see for example \cite{Chuang}, it is a nontrivial question to ask if the EE computed holographically by the Ryu-Takayanagi ( or covariantly HRT ) formula satisfies them or not. The question is nontrivial because of three possible pitfalls. The first problem is that dividing the physical space into subregions does not necessarily induce a factorization of the total Hilbert space. This is well known for example for gauge theories, where gauge invariance at the boundary between a region $A$ and its complementary $A^c$ does not allow a factorization of the total Hilbert space of physical gauge invariant states into $\mathcal{H}_{A}$ and $\mathcal{H}_{A^c}$. Or, said in another way, the degrees of freedom of gauge theories are not point-like but rather nonlocal ( Wilson loops ), and consequently when considering a certain region of space they do not construct a physical ( gauge invariant ) Hilbert space. The literature is wide, see for example the lists of papers \cite{Aoki:2015bsa} for a more theoretical approach and \cite{Buividovich:2008gq} for attempts to formulate a lattice definition.
While a formulation of EE may still be possible for gauge theories, for example by appropriately enlarging the Hilbert space or by correctly defining the path integral replica trick, whenever the total Hilbert space does not factorize we are in general not guaranteed that inequalities like (\ref{ssa1}) and (\ref{ssa2}) hold \footnote{ As far as I know it is believed that gauge theories satisfy strong subadditivity, but the discussion here is general. }. In fact this argument may apply even to EE computed non holographically, as long as the reduced density matrices are defined with respect to a certain region of space.
A pure CFT example of strong subadditivity violation without passing through the holographic description, but rather considering EE computed using the replica trick in the CFT, will be provided in section \ref{cft}. There we will consider two dimensional CFTs with Lorentz anomaly, by using the results of \cite{Castro:2014tta} where it was noted that the EE, as expected, is not Lorentz invariant. This means that the EE transforms under boosts and, if these are applied to space regions entering inequalities (\ref{ssa1}) and (\ref{ssa2}), the two sides can change so as to possibly lead to a violation. Said otherwise, strong subadditivity may be respected in some fixed reference frame in which the quantization of the theory has been implemented, but if we consider Hilbert spaces corresponding to space regions that have been differently boosted with respect to this original frame, and the theory has a Lorentz anomaly, violation can occur. This is the second problem and again it applies even without passing through a holographic computation. On the issue of EE and its relation to anomalies see also the recent papers \cite{Nishioka:2015uka} \cite{Iqbal:2015vka} \cite{Hughes:2015ora}.
The third problem arises only in holographic EE: while Einstein's equations can be solved using any energy momentum tensor as a source, it is not guaranteed that any choice is actually physically meaningful. To constrain the energy momentum tensor various energy conditions have been proposed, see for example \cite{Hawking:1973uf} for a review of these conditions and of their effects. Without imposing any condition it is not improbable that the resulting classical geometry may not be dual to some actual physical quantum system. In particular we are no longer guaranteed on the validity of any quantum mechanically derived inequality \footnote{ We are here necessarily vague on the meaning of physical. }. We will discuss this further in the next section in relationship with the proof of the equivalence between (\ref{ssa1}) and (\ref{ssa2}), and in the conclusions.
Note that, if any of the above pitfalls affects the proof of either (\ref{ssa1}) or (\ref{ssa2}), that is if we have violation of only one of the two inequalities, then similar reasons should necessarily lead to the violation of the proof of the equivalence between (\ref{ssa1}) and (\ref{ssa2}) that we will discuss in the next section.
Extremely simple holographic proofs of (\ref{ssa1}) and (\ref{ssa2}) exist when the three connected adjacent regions are at the same constant boundary time ( or its boosted version ) and the bulk geometry is static, somehow in contrast with the complication of the standard quantum mechanical proofs. However the argument fails when more generic configurations of regions and/or time dependent backgrounds are considered. Because of this some works considered dropping one or both of these restrictions in order to check if the strong subadditivity inequalities were still satisfied or not, and if some conditions should eventually be applied to the geometry in order to enforce their validity. In particular \cite{Lashkari:2014kda} and \cite{Lin:2014hva} considered static backgrounds with generic space-like boundary regions $A$, $B$ and $C$, and found that requiring (\ref{ssa2}) leads to an integrated version of the Null Curvature Condition ( NCC ) (\ref{intncc}). Further, a simple time dependent geometry, the asymptotically AdS Vaidya metric in 2+1 dimensions, was used by \cite{Allais:2011ys}, \cite{Callan:2012ip} and \cite{Caceres:2013dma}; depending on the choice of a sign the Vaidya geometry may be selected to satisfy or not the local NCC (\ref{ncc}), and what was found is that the NCC is a sufficient requirement to respect (\ref{ssa2}), while (\ref{ssa1}) is always satisfied. Finally a proof of (\ref{ssa2}) was provided by \cite{Wall:2012uf} for dual geometries satisfying the NCC and generic connected adjacent space-like intervals.
The main goals of this paper are three. First of all, to review in an organized way the present understanding of strong subadditivity inequalities in holographic theories and their connection with energy conditions. Second, to illuminate the geometrical part of the above problem by studying what the holographic counterpart of the violation of strong subadditivity is and the role of energy conditions. Third, to fill some gaps in the literature and discuss further discoveries such as, notably, the violation of (\ref{ssa2}) by two dimensional CFTs with Lorentz anomaly, the development of time-like distances between geodesics entering (\ref{ssa2}) whenever the inequality does not hold, and some new proofs along the way.
The paper is organized by discussing strong subadditivity in set-ups of increasing complication, always in 2+1 dimensions in the bulk ( although some results may be generalized ) and with boundary regions $A$, $B$ and $C$ chosen to be adjacent, starting with static backgrounds with collinear intervals, then moving to generic space-like configurations and finally to time dependent geometries, discussing both the purely quantum mechanical problem and its holographic version. Two appendices contain respectively the generic result and proof of what configurations for the boundary intervals $A$, $B$ and $C$ create the strongest bound on EE by strong subadditivity, and various computations for the Vaidya metric that will be used in the last sections as specific examples for the more generic discussion.
\section{A few facts on strong subadditivity}\label{init}
We begin with the purely quantum mechanical proof of the equivalence between the two strong subadditivity inequalities, (\ref{ssa1}) and (\ref{ssa2}) \cite{Chuang}. Let us introduce an auxiliary Hilbert space $\mathcal{H}_D$ such that partial tracing some pure state $\ket{\Psi}$ over it reproduces the reduced density matrix $\rho_{A\cup B\cup C}$:
\begin{equation}\label{12equiv}
\rho_{A\cup B\cup C}= Tr_{\mathcal{H}_D}\ket{\Psi}\bra{\Psi}.
\end{equation}
Then
\[
S(A\cup B\cup C)=S(D) \;\;\; S(B\cup C)=S(A\cup D)
\]
and we can convert (\ref{ssa1}) into (\ref{ssa2})
\[
S(A\cup B)+S(B\cup C)=S(A\cup B)+S(A\cup D)\geq S(B)+S(D)=S(B)+S(A\cup B\cup C).
\]
A discussion on why this proof may fail in certain circumstances can be found inside \cite{Allais:2011ys}. The argument is that, purely quantum mechanically, we are always guaranteed to find $\mathcal{H}_D$. If the theory has zero entropy it is just the complementary of $\mathcal{H}_{A}\otimes\mathcal{H}_{B}\otimes\mathcal{H}_{C}$ inside the total Hilbert space, $\mathcal{H}_{D}=(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\otimes\mathcal{H}_{C})^c$, while if the theory is described by some density matrix it may not be the Hilbert space of degrees of freedom of the theory itself. Still the proof applies if we limit ourselves to EE computed quantum mechanically and not holographically. The only pitfall in this case is the already discussed issue of factorizability of the total Hilbert space, that we will not repeat. Holographically instead, if the bulk theory contains a black hole, the total entropy of the system is non zero and the boundary theory is not in a pure state but rather in a thermal density matrix. The proof then holds for the holographic entanglement entropy only if the bulk actually describes some quantum system and we can thus use the above derivation. If the bulk is sufficiently unphysical we may instead have trouble; this is in fact just another way of recasting the argument that generic bulk geometries may not be dual to quantum mechanical systems, as discussed in the introduction. Finally, we will see in the following sections a purely CFT violation of (\ref{ssa2}), while preserving (\ref{ssa1}), for two dimensional theories with Lorentz anomaly, so the above proof should break down in this case for analogous reasons.
Similarly simple proofs do not exist for the two strong subadditivity inequalities, that from now on we will name SSA1 for (\ref{ssa1}) and SSA2 for (\ref{ssa2}), which instead require some amount of work to be verified. Simplicity is restored when we consider their holographic description in terms of Ryu-Takayanagi surfaces for collinear boundary intervals in static spacetimes.
For the rest of the paper, unless otherwise stated, the 2+1 dimensional bulk theories will be in the Euclidean signature when static, and Lorentzian when dynamic. The easiest case is that of a static geometry with the three boundary intervals collinear, either at fixed time or belonging to a straight space-like line; also we assume Lorentz invariance. It is quite surprising that the proof of both (\ref{ssa1}) and (\ref{ssa2}) just amounts to looking at figure \ref{figuraspc}. The one dimensional minimal surfaces $\Sigma(A\cup B)$ and $\Sigma(B\cup C)$ intersect at the point p, thus defining $L_1$, $L_2$, $L_3$ and $L_4$; due to the minimality of $\Sigma(A)$ and $\Sigma(C)$, the result $\mathcal{A}(\Sigma(A)) \leq \mathcal{A}(L_1 \cup L_3)$ and $\mathcal{A}(\Sigma(C)) \leq \mathcal{A}(L_2 \cup L_4)$ is immediate, proving (\ref{ssa1}). Analogously we can obtain (\ref{ssa2}) from $\mathcal{A}(\Sigma(A\cup B\cup C)) \leq \mathcal{A}(L_1 \cup L_4)$ and $\mathcal{A}(\Sigma(B)) \leq \mathcal{A}(L_2 \cup L_3)$.
\begin{figure}[h]
\centering
\vspace{-0pt}
\includegraphics[width=0.7\textwidth]{staticproof}
\vspace{-0pt}
\caption{Static proof for collinear intervals.}
\label{figuraspc}
\end{figure}
The above proofs unfortunately only apply to the case described. If $A$, $B$ and $C$ for example are not collinear, and/or if the geometry is time dependent so that the curves bend in the time direction, then we will not generically have any intersection between $\Sigma(A\cup B)$ and $\Sigma(B\cup C)$. Further, the time dependent case brings one additional complication as the HRT surfaces are in this case extremal, so even if we could actually find a way to compare areas, we would not be able to write down inequalities as if minimal surfaces were involved.
\section{Static case}
\subsection{Monotonicity and concavity}\label{mc}
When Lorentz invariance is preserved the EE is just a function of the proper length of the interval $S=S(l)$; SSA1 implies monotonicity for the function $S=S(l)$ while for collinear intervals SSA2 leads instead to concavity. Monotonicity is immediately proven by considering the special case of collinear intervals with proper lengths $l(A)=l(C)=l$ and $l(B)=d$, then (\ref{ssa1}) just gives
\begin{equation}\label{mon}
S(l+d)\geq S(l).
\end{equation}
In fact the opposite is also obviously true, that monotonicity implies SSA1, so the two conditions are equivalent. When considering EE computed holographically by the area of Ryu-Takayanagi surfaces, monotonicity can in fact be obtained directly without passing through SSA1, by showing that the proper length of a minimal surface ( or more generally any curve that minimizes some fixed bulk functional ) is monotonically increasing as a function of the proper length of the boundary interval to which it is attached.
Concavity is just barely more complicated to derive from SSA2, see for example \cite{Callan:2012ip} for the proof of the equivalence between concavity and SSA2 for collinear intervals. Again this property can be obtained directly, as for monotonicity, when dealing with the holographic minimal surfaces $\Sigma(l)$, whenever we vary the proper length $l$ of the boundary interval along the fixed space-like direction determined by the boundary end points. This is obtained with a slightly different construction than the one of figure \ref{figuraspc}; let us consider a straight boundary space-like line and pick $\Delta/d=n$ ( $n \in \mathbb{Z}$, $n \gg 1$ ) minimal curves ending on intervals of length $l+d$ that belong to such a line, displaced by a distance $d$, as in figure \ref{figuramonmin} ( $l$ does not need to be an integer multiple of $d$ ).
\begin{figure}[h]
\centering
\vspace{-0pt}
\includegraphics[width=0.7\textwidth]{concavity}
\vspace{-20pt}
\caption{Proof of concavity for minimal surfaces.}
\label{figuramonmin}
\end{figure}
We have that the proper length of $\Sigma$, $\mathcal{A}(\Sigma)$, obeys the inequality:
\[
\frac{\Delta}{d}\;\mathcal{A}(\Sigma(l+d))\geq \left( \frac{\Delta}{d}-1 \right)\mathcal{A}(\Sigma(l))+\mathcal{A}(\Sigma(l+\Delta))
\]
which is proven as in figure \ref{figuramonmin}, just noticing that the black curve has necessarily higher or equal proper length ( or whatever functional we are minimizing ) than $\Sigma(l+\Delta)$, while each of the $\Delta/d-1$ unions of a blue and a red arc is bigger than the proper length of what would be the corresponding minimal curve $\Sigma(l)$. In the limit of $d \rightarrow 0$, while keeping $\Delta\geq 0$ at some finite value, the above formula reads
\begin{equation}
\frac{\mathcal{A}(\Sigma(l+d))-\mathcal{A}(\Sigma(l))}{d}\Delta+\mathcal{A}(\Sigma(l))\geq \mathcal{A}(\Sigma(l+\Delta)) \xRightarrow{d\rightarrow 0} \mathcal{A}(\Sigma(l))+\Delta \;\partial_m\mathcal{A}(\Sigma(m))\Big|_{l}\geq \mathcal{A}(\Sigma(l+\Delta))
\end{equation}
that is just concavity for $\mathcal{A}(\Sigma(l))$.
The case of non collinear intervals is more interesting. In Appendix \ref{A} it is shown that, as a function of the slopes $\alpha_A,\alpha_B,\alpha_C$ of the three intervals, the strictest bound on the EE $S(l)$ from SSA1 comes from the case $\alpha_A\geq0,\alpha_B=1,\alpha_C\geq 0$, and from SSA2 from $\alpha_A=1,\alpha_B\geq 0,\alpha_C=-1$ or $\alpha_A=-1,\alpha_B\geq 0,\alpha_C=1$ ( or their parity transformed counterparts ).
Let us show what conditions on $S(l)$ the strong subadditivity inequalities correspond to for these configurations. The SSA2 inequality has already been considered in \cite{Casini:2004bw} for a configuration slightly less general than the one we are using, and we will follow a similar procedure. For the parameterization consider the first picture of figure \ref{figuravssa} ( the situation with $\alpha_A=-1,\alpha_B\geq 0,\alpha_C=1$ is completely analogous ); defining $a\equiv \log r$, $b\equiv \log s$ and $a_x\equiv \log(r-2x)$, $b_y\equiv \log(s-2y)$, and a function $G$ such that $G(a)\equiv S(e^\frac{a}{2})$, by computing the proper lengths for the intervals appearing in SSA2, (\ref{ssa2}) reads for this case
\[
G(a+a_x)+G(b+b_y)\leq G(b+a_x)+G(a+b_y).
\]
As we always have $b_y> a_x$ ( as explained in the caption of figure \ref{figuravssa} ) and obviously $b>a$, the above inequality is just concavity for $G(a)$, so
\begin{equation}\label{con2}
0\geq \partial^2_a G(a)=\partial_a(\partial_a S(e^\frac{a}{2})) \Rightarrow \partial^2_l S(l)\leq - \frac{\partial_l S(l)}{l}
\end{equation}
that is stronger than simple concavity for $S(l)$, due to the monotonicity required from SSA1: $\partial_l S(l)\geq 0$.
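As a quick check, the vacuum of a standard two dimensional CFT, with $S(l)=\frac{c}{3}\log(l/\epsilon)$, saturates (\ref{con2}):
\begin{equation*}
\partial^2_l S(l)=-\frac{c}{3l^2}=-\frac{\partial_l S(l)}{l}.
\end{equation*}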
We can now follow a similar path for SSA1 applied to the configuration with slopes $\alpha_A\geq0,\alpha_B=1,\alpha_C\geq 0$. In addition, as the interval B appears only on the bigger side of the inequality, an even stronger bound can be obtained by further minimizing the proper lengths of $A\cup B$ and $B\cup C$ by considering an infinitesimal space coordinate length $x_B\ll 1$. Parameterizing this configuration as in appendix \ref{A}, and defining $x_A\sqrt{1-\alpha_A^2}\equiv e^a$, $\frac{x_B}{x_A(1+\alpha_A)}\equiv \epsilon_a$ ( and correspondingly for $a\leftrightarrow c$ ) and $F(a)\equiv S(e^a)$, we obtain the following result from (\ref{ssa1}) ( using $x_B\ll 1$ to rewrite $\sqrt{1+2\epsilon_a} \approx e^{\epsilon_a}$ )
\[
F(a)+F(c)\leq F(a+\epsilon_a)+F(c+\epsilon_c)
\]
which is just monotonicity for $F(a)$. However in this case, as a first derivative is involved, the condition implies nothing more than the usual monotonicity for $S(l)$:
\begin{equation}\label{con1}
0\leq\partial_a F(a)=l\partial_l S(l) \Rightarrow \partial_l S(l)\geq 0.
\end{equation}
That (\ref{con2}) becomes stricter than concavity for non collinear intervals while (\ref{con1}) remains the usual monotonicity will soon have its counterpart: in various cases, holographically and not, for generic adjacent interval configurations EE will still satisfy SSA1 while the stricter SSA2 will be generically violated. In the holographic description we will see that respecting SSA2 requires certain geometrical conditions to be satisfied by the background.
\subsection{First appearance of energy conditions in the bulk}\label{faec}
Still studying static bulk geometries, with all the nice properties listed in the previous sections but this time in the Lorentz signature, let us see what happens in the bulk when considering geodesics anchored to non collinear adjacent intervals, as for instance in figure \ref{figuraspnc}. The first clear issue is that geodesics ending on $A\cup B$ and $B\cup C$ do not generically intersect, and consequently trying to prove SSA1 and SSA2 as in figure \ref{figuraspc} does not work. SSA1 however has an alternative proof derivable from the simple relation, \cite{Lashkari:2014kda} and \cite{Myers:2012ed}:
\begin{equation}\label{r0}
\frac{dS(l)}{dl}=r_0
\end{equation}
where $r_0$ is the conformal scale factor of the asymptotically AdS metric, reached by the geodesic at its vertex ( coordinates chosen so that we have large $r$ close to the boundary ).
As it will be useful later, let us review briefly the derivation of (\ref{r0}), following \cite{Lashkari:2014kda} with some modification. The assumptions are a conformal two dimensional boundary Minkowski metric and translation invariance along a boundary space-like direction with associated Killing vector $\xi^{\mu}$. The geodesic extremizes the action
\[
S=\int_{i}^{f}d\lambda\sqrt{g_{\mu\nu}\partial_{\lambda}x^{\mu}\partial_{\lambda}x^{\nu}}
\]
where the interval $i-f$ is taken to be at fixed boundary time $t_b$. We vary only the position of the final end point "f " by a purely spatial translation $\delta x_f^{\mu}=\delta x\; \xi^{\mu} $. The variation of the action is just the boundary term $\delta S =p_{\mu}\delta x_f^{\mu}$ with $p_{\mu}\equiv \partial L/\partial (\partial_{\lambda}x^{\mu})$. Then
\[
\frac{\delta S}{ \delta x} =\xi^{\mu} p_{\mu}.
\]
As this quantity is conserved along all the geodesic we can evaluate it at its vertex, where $p_{\mu}=g_{\mu\nu}\partial_{\lambda}x^{\nu}/\sqrt{g_{\mu\nu}\partial_{\lambda}x^{\mu}\partial_{\lambda}x^{\nu}}$ simplifies by writing $\partial_{\lambda}x^{\mu}=\xi^{\mu} (\xi \cdot \partial_{\lambda}x)/(\xi^{\nu}\xi^{\rho}g_{\nu\rho})$. A brief computation leads to
\begin{equation}\label{r0st}
\frac{\delta S}{ \delta x} =\sqrt{\xi^{\nu}\xi^{\rho}g_{\nu\rho}}=r_0.
\end{equation}
This equation is in fact true with or without time translation invariance. If time translation is a symmetry of the bulk, we can further boost the above equation (\ref{r0st}) to obtain (\ref{r0}).
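As an illustration, in pure AdS$_3$ with unit radius and units where $4G_N=1$, the constant time geodesic anchored to an interval of length $l$ is the semicircle $z^2+x^2=(l/2)^2$ in Poincar\'e coordinates, with turning point at $z_*=l/2$, that is $r_0=1/z_*=2/l$; its regularized length gives $S=2\log(l/\epsilon)$, and indeed $dS/dl=2/l=r_0$.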
The important point here is that $r_0>0$ always, and thus $S(l)$ is monotonically increasing. One may then try to use this formula to check (\ref{con2}) by exploiting the dependence $r_0(l)$; the result obtained by \cite{Lashkari:2014kda} ( see also \cite{Lin:2014hva} ) is that SSA2 in its stronger bound (\ref{con2}) is equivalent to the condition on the bulk geometry
\begin{equation}\label{intncc}
\int_{\Sigma} R_{\mu\nu} k^{\mu}k^{\nu}\geq 0
\end{equation}
where $\Sigma$ is our geodesic ending on the interval of proper length $l$ and $k^{\mu}$ is a null vector perpendicular to $\xi^{\mu}$ \footnote{This energy condition may also be written replacing $R_{\mu\nu}\leftrightarrow T_{\mu\nu}$, whenever Einstein's equations hold.}. Our goal is to find a more geometrically transparent proof of the emergence of (\ref{intncc}), highlighting the difference between collinear and non collinear intervals and the role of $R_{\mu\nu} k^{\mu}k^{\nu}$, whereas the original derivation uses a metric ansatz and Einstein's equations to show the equivalence between the explicit formulae for (\ref{intncc}) and (\ref{con2}).
\begin{figure}[h]
\centering
\vspace{-20pt}
\includegraphics[width=1.0\textwidth]{RayStat}
\vspace{-50pt}
\caption{Partial picture for the geometric construction leading to the proof of the requirement of the integrated NCC for SSA2 for static non collinear intervals. Depicted is the null congruence $N(\Sigma(A\cup B))$ and its intersection $L_1\cup L_2$ with the ( not shown ) achronal slice.}
\label{figuraspnc}
\end{figure}
So let us start from figure \ref{figuraspnc}; the first issue to solve is that the two curves $\Sigma(A\cup B)$ and $\Sigma(B\cup C)$ do not generically intersect. Let us then shoot out from $\Sigma(A\cup B)$ ( which in our example has been chosen to be nowhere in the past of $\Sigma(B\cup C)$; if otherwise, invert the roles ) a congruence of null geodesics in the past boundary direction, forming a codimension one surface $N(\Sigma(A\cup B))$. Given translation invariance in the boundary space direction, these null geodesics are chosen to be perpendicular to the corresponding Killing vector, in order to ensure an intersection between $N(\Sigma(A\cup B))$ and $\Sigma(B\cup C)$ at a point $p$. Now consider any achronal slice ( with points either space-like or null separated ) that contains both the boundary interval $A\cup B$ and the point $p$, and intersects $N(\Sigma(A\cup B))$ along some curve; this curve goes from the left end point of $A$ to $p$, let us call this piece $L_1$, and continues from $p$ to the right end point of $B$, which we call $L_2$ as shown in figure \ref{figuraspnc}. The point $p$ also splits up $\Sigma(B\cup C)$ into $L_3$ to the left and $L_4$ to the right. Here enters the Raychaudhuri equation
\begin{equation}\label{ray}
\frac{d\Theta}{d\lambda}=-\Theta^2-\sigma_{\mu\nu}\sigma^{\mu\nu}-R_{\mu\nu} k^{\mu}k^{\nu}
\end{equation}
where, applied to our case, $\lambda$ is the affine parameter along $k^{\mu}$, $\sigma^{\mu\nu}$ is the shear and $\Theta$ represents the variation of the line element of $\Sigma(A\cup B)$ along $\lambda$, divided by the line element itself. As $\Sigma(A\cup B)$ is a geodesic, $\Theta=0$ on it; then, as on the right hand side of (\ref{ray}) all the quantities are negative definite but $R_{\mu\nu} k^{\mu}k^{\nu}$, the total proper length of $\Sigma(A\cup B)$ decreases along $\lambda$ if the integral of $R_{\mu\nu} k^{\mu}k^{\nu}$ on $\Sigma(A\cup B)$ is greater than or equal to zero, which is just the integrated NCC condition (\ref{intncc}) \footnote{The present argument would require the integrated NCC not only on $\Sigma$, but also on all the evolution curves created along the flow by $\lambda$. However we can require only the integral on $\Sigma$ if we restrict to boundary intervals with $A$ and $C$ of infinitesimal coordinate length. This is also what was done in \cite{Lashkari:2014kda}, although by a different road, and there it was also shown that having SSA2 respected for this infinitesimal configuration implies SSA2 valid for generic cases. So only integration along $\Sigma$ is actually required.}. We finally obtain
\begin{equation}\label{sss}
\mathcal{A}(\Sigma(A\cup B))+\mathcal{A}(\Sigma(B\cup C))\geq \mathcal{A}(L_1)+\mathcal{A}(L_2)+\mathcal{A}(L_3)+\mathcal{A}(L_4)\geq \mathcal{A}(\Sigma(B))+\mathcal{A}(\Sigma(A\cup B\cup C))
\end{equation}
where the first inequality is guaranteed by the Raychaudhuri equation coupled with (\ref{intncc}); the second inequality instead is the usual argument of figure \ref{figuraspc}, but this time applied to extremal surfaces restricted to common achronal slices, one containing $L_2$, $L_3$ and $\Sigma(B)$, and the other $L_1$, $L_4$ and $\Sigma(A\cup B\cup C)$. On such a slice extremal surfaces are minimal ( as will also happen when considering the maximin construction of \cite{Wall:2012uf} ), and that such slices exist in the present static case is what makes the proof applicable here but not in the corresponding time dependent situation, where the work will be harder. This proves that the integrated NCC implies SSA2 for any adjacent interval configuration in static spacetimes. The reverse is true only for configurations maximizing the SSA2 bound, like the ones discussed in the previous section or in appendix \ref{A}.
\subsection{$c_L\neq c_R$ theories and strong subadditivity violation}\label{cft}
It is now an interesting question to ask whether we can find a purely CFT example of violation of strong subadditivity, using a time independent state $\rho$ ( the vacuum ), as to my knowledge so far all the cases found in the literature rely on the holographic description, often with time dependent metrics. By analysing its holographic counterpart we can then isolate, inside the mechanics of strong subadditivity violation in the bulk, the dual of a genuine boundary CFT violation rather than issues of the holographic formula of EE for possibly unphysical backgrounds.
In fact such an example exists: it arises when a two dimensional CFT has different left and right central charges. The computation of EE for the vacuum was done in \cite{Castro:2014tta}, together with a proposal for its holographic counterpart that we will discuss in the next section, giving as a result for the EE in Lorentzian signature ( otherwise the result is complex )
\begin{equation}\label{clcree}
S(A)=\frac{c_L+c_R}{6}\log\left(\frac{l(A)}{\epsilon}\right)+\frac{c_R-c_L}{6}\;\alpha(A)
\end{equation}
where $\epsilon$ is the UV regulator and $\alpha(A)$ ( or $\alpha_A$ as in the notation of appendix \ref{A} ) is the slope of the interval $A$ with respect to the constant time line, or equivalently the rapidity of the Lorentz boost from a constant time interval to the actual $A$.
Let us start with the inequality SSA2. We expect the highest amount of violation, if any, from some of the interval configurations that maximize the bound from SSA2, as explained in appendix \ref{A}. In fact, as the term proportional to $c_L+c_R$ inside (\ref{clcree}) respects SSA2, being the usual two dimensional EE result for the vacuum of ( non-Lorentz violating ) CFTs, to obtain SSA2 violation we had better use an interval configuration that saturates the inequality for that term, or comes close enough ( note that the UV regulator $\epsilon$ cancels inside the strong subadditivity inequalities ). Among these configurations the simplest one has the interval $B$ at constant time $\alpha_B=0$, and $A$ and $C$ of the same proper length $l(A)=l(C)$ with light-like slopes $\alpha_A=1$, $\alpha_C=-1$ ( or $\alpha_A=-1$, $\alpha_C=1$ ). Using (\ref{clcree}) however, it is immediately clear that this configuration respects SSA2 because $\alpha_{A\cup B}=-\alpha_{B\cup C}$ and $\alpha_B=\alpha_{A\cup B\cup C}=0$. We then consider some deformations of it, either by studying a generic bound-maximizing structure like the first one in figure \ref{figuravssa}, or one where the segments $A$ and $C$ are made space-like, as in the second picture of figure \ref{figuravssa} \footnote{ We could have just studied the most generic adjacent interval configuration and found violation, but the above examples, because of the parameterization used, make the analysis more transparent. }.
Let us start from the first configuration; writing down the proper lengths of all the intervals as functions of the parameters $r,s,x$ and $y$ as appearing in the picture, SSA2 reads as follows
\[
\frac{c_L+c_R}{12}\left(\log (r(r-2x))+\log (s(s-2y))\right)+\frac{c_R-c_L}{6}\left(\alpha_B+\alpha_{A\cup B\cup C}\right)\leq \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\]
\[
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \leq \frac{c_L+c_R}{12}\left(\log (s(r-2x))+\log (r(s-2y))\right)+\frac{c_R-c_L}{6}\left(\alpha_{A\cup B}+\alpha_{B\cup C}\right).
\]
\begin{figure}[h]
\centering
\vspace{-10pt}
\includegraphics[width=1.0\textwidth]{Lorzt-viol-SSA}
\vspace{-40pt}
\caption{Two configurations of intervals for evaluation of SSA2. Oblique hatched lines are light-like, as are the $A$ and $C$ intervals in the first picture. Parameters $x,y$ are always positive in the second picture, while they may be negative in the first, with the obvious bounds for positive coordinate length of $C$, $(s-r)/2-y+x> 0$, and for a space-like or light-like interval $B$, $x<r/2$. }
\label{figuravssa}
\end{figure}
The terms proportional to the sum of the central charges, which contain the dependence of the EE on the intervals' lengths, simplify, leaving an inequality that depends only on the angles $\alpha$. As the tangent of an angle between $-\pi/2$ and $\pi/2$ is a monotonically increasing function, this inequality may be rewritten by taking the tangent of the sums $\alpha_B+\alpha_{A\cup B\cup C}$ and $\alpha_{A\cup B}+\alpha_{B\cup C}$, and using the usual formula for $\tan(\alpha+\beta)$ we obtain
\[
\frac{c_R-c_L}{6}\frac{\tan{\alpha_B}+\tan{\alpha_{A\cup B\cup C}}}{1-\tan{\alpha_B}\tan{\alpha_{A\cup B\cup C}}}\leq \frac{c_R-c_L}{6}\frac{\tan{\alpha_{A\cup B}}+\tan{\alpha_{B\cup C}}}{1-\tan{\alpha_{A\cup B}}\tan{\alpha_{B\cup C}}}
\]
that in terms of $r,s,x,y$ reads
\[
c_L>c_R:\;\;\;\;\;2 x y \leq s x + r y
\]
\[
c_R>c_L:\;\;\;\;\;2 x y \geq s x + r y.
\]
Every configuration outside this parameter range violates SSA2.
The second configuration instead leads to an SSA2 inequality that reads as follows:
\begin{equation}\label{2ssa2}
\frac{c_L + c_R}{6} (\log (s + x + y) + \log (r) ) \leq
\frac{c_L + c_R}{12} (\log ((r + y)(s + y)) + \log((r + x)(s + x))) +
\end{equation}
\[
+\frac{c_R - c_L}{6} \left(\arctan \left(\frac{s - r}{r + s + 2 y}\right) +
\arctan \left(\frac{s - r}{r + s + 2 x}\right)\right).
\]
The $x,y$ parameter region that respects the above inequality is shown in figure \ref{figurasecssa2} for a specific example. Violation occurs in a region that grows as $c_R$ becomes smaller than $c_L$.
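For completeness, a minimal numerical sketch of this scan, directly evaluating the two sides of (\ref{2ssa2}) on a grid, is the following; the parameter values match those of figure \ref{figurasecssa2}, while the grid range is an illustrative choice.
\begin{verbatim}
import numpy as np

def ssa2_holds(x, y, cL=5.0, cR=2.0, r=1.0, s=2.0):
    lhs = (cL + cR) / 6 * (np.log(s + x + y) + np.log(r))
    rhs = ((cL + cR) / 12 * (np.log((r + y) * (s + y))
                             + np.log((r + x) * (s + x)))
           + (cR - cL) / 6 * (np.arctan((s - r) / (r + s + 2 * y))
                              + np.arctan((s - r) / (r + s + 2 * x))))
    return lhs <= rhs

xs = ys = np.linspace(0.01, 3.0, 300)
X, Y = np.meshgrid(xs, ys)
print(ssa2_holds(X, Y).mean())  # fraction of the grid respecting SSA2
\end{verbatim}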
\begin{figure}[h]
\centering
\vspace{+10pt}
\includegraphics[width=0.5\textwidth]{secssa2}
\vspace{-0pt}
\caption{The coloured regions ( extending all the way to the upper right corner ) represent the $x,y$ parameter values respecting the inequality (\ref{2ssa2}) for $c_L=5,r=1,s=2$ and $c_R=5,4,3,2$ respectively; the region decreases as $c_R$ becomes smaller. For $c_R\geq c_L$ the inequality always holds in this example. }
\label{figurasecssa2}
\end{figure}
Finally we inspect SSA1 for a generic ( adjacent ) interval configuration parameterized as in figure \ref{figuracldcr} of appendix \ref{A}. The resulting inequality depends on the parameters $x_A,x_B,x_C$ and $\alpha_A,\alpha_B,\alpha_C$ and contains both logarithms and arctangents. We may split it into two independent inequalities, proportional to $c_L$ and $c_R$, that can be analysed separately ( respecting both is a sufficient condition for respecting the complete inequality ). We could not provide a proof, but numerical analysis for various values of $x_A,x_B,x_C$ and the complete range of $\alpha_A,\alpha_B,\alpha_C$ shows that no violation of SSA1 occurs.
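A check equivalent in spirit can be sketched as follows: rather than splitting the inequality into its $c_L$ and $c_R$ pieces, one samples random adjacent configurations and evaluates SSA1 directly from (\ref{clcree}); here $\alpha$ is read as the coordinate angle $\arctan(\Delta t/\Delta x)$, matching the tangent manipulation used above, and the sample ranges and central charges are illustrative choices.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def S(dt, dx, cL=5.0, cR=2.0):
    # (clcree) with the UV cutoff dropped (it cancels in SSA1)
    return ((cL + cR) / 6 * np.log(np.sqrt(dx**2 - dt**2))
            + (cR - cL) / 6 * np.arctan(dt / dx))

worst = np.inf
for _ in range(200000):
    w = rng.uniform(0.1, 2.0, 3)           # coordinate widths of A, B, C
    dt = rng.uniform(-0.95, 0.95, 3) * w   # time jumps across A, B, C
    lhs = S(dt[0] + dt[1], w[0] + w[1]) + S(dt[1] + dt[2], w[1] + w[2])
    rhs = S(dt[0], w[0]) + S(dt[2], w[2])
    worst = min(worst, lhs - rhs)
print(worst)  # the minimum stays non-negative in this scan
\end{verbatim}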
\subsection{$c_L\neq c_R$ discussion and holographic description}
In the introduction and section \ref{init} we have discussed the quantum mechanical mechanism that may lead to the strong subadditivity violation in Lorentz anomalous theories, as observed in the previous section. Also a detailed discussion on EE in anomalous theories can be found in \cite{Hughes:2015ora}, \cite{Iqbal:2015vka} and \cite{Nishioka:2015uka}. Here we want to understand how this mechanism acts when looking at the dual holographic theory.
The holographic description of CFTs with $c_L\neq c_R$ is provided by a theory called Topologically Massive Gravity, TMG, whose action is the sum of the Einstein-Hilbert term ( with possible cosmological constant ) and a gravitational Chern-Simons term with real relative coefficient $1/\mu$, see \cite{Castro:2014tta} and references therein. The holographic description of EE for these theories has been proposed to be given by a curve $\Sigma$, with the usual boundary conditions and holonomy properties, but extremising a functional which is not its proper length but instead, in Lorentzian signature \footnote{In Euclidean signature an $i$ appears in front of the framing term and the functional becomes complex, as does the corresponding boundary result (\ref{clcree}).},
\begin{equation}\label{nfun}
\int_{\Sigma}d\tau\left(\sqrt{g_{\mu\nu}\partial_{\tau}x^{\mu}\partial_{\tau}x^{\nu}}+\frac{1}{\mu}n_2^{\mu} (\bigtriangledown_{\tau} n_1)_{\mu}\right)
\end{equation}
where $n_1=\partial_t$ and $n_2=\partial_x$, with $t$ and $x$ the boundary coordinates. The coefficient $\mu$ entering (\ref{nfun}) is connected to the boundary difference between central charges as
\[
\frac{1}{\mu}=\frac{c_R - c_L}{6} G_N.
\]
There are more solutions for $\Sigma$ than just geodesics, but geodesics are always a solution; similarly metric solutions to the TMG's equations of motion are broader than proper Einstein gravity solutions, but these are always solutions of TMG. As the analytic continuation is not clear in the broadest context, we will restrict as in \cite{Castro:2014tta} to usual geodesics in Einstein metrics.
If we try to prove SSA2 as we have done for static spacetimes and general intervals in section \ref{faec} and repeat the steps leading to (\ref{sss}), we immediately face a complication: even requiring the integrated NCC (\ref{intncc}), the Raychaudhuri equation, which allowed us to bound the proper length along the null geodesic flow of the congruence, no longer constrains the growth of the full functional (\ref{nfun}), as the latter is no longer simply the proper length of the curve $\Sigma$. Even trying to enforce some modified energy condition that would bound (\ref{nfun}) along the null congruence flow, and so keep SSA2 valid, appears difficult, because the term proportional to $1/\mu$ is essentially a boundary contribution. We may speculate that this is a sign of the essential difference between a violation of SSA2 due to using unphysical backgrounds, and thus entirely avoidable by appropriate bulk conditions, and a violation due to a pure CFT mechanism with its dual holographic description.
Finally let me briefly discuss SSA1. As in the static case, SSA1 remains valid here; the point is that SSA1 is generically a weaker condition than SSA2. In the static case we showed that its bound on the EE did not change by varying the interval configuration, monotonicity remaining the condition on $S(l)$. Here the EE is no longer a function of the proper length of the interval alone, and that discussion and proof no longer apply. Still the moral appears to be the same, and an interesting development would be to try to generalize them to the present case of $c_L\neq c_R$ two dimensional CFTs.
\section{Time dependent case}
It is time to study time dependent situations, which for the CFT means a time dependent quantum state $\rho$ and for the bulk a time dependent metric. On the boundary side we unfortunately do not have much to say that does not come from holography. The reason is, first, our missing knowledge of the state for computable backgrounds ( such as the Vaidya example we will soon introduce ), and second, the absence of time translation invariance, which spoils the usual dependence of the EE on only the proper length of the interval. This means that any EE computation attempting to verify SSA1 and SSA2 should pass through the understanding of the bulk description. This we will now discuss in much more detail.
\subsection{Intervals at fixed boundary time}
When a time dependent background is considered, additional complications emerge in understanding the validity or not of strong subadditivity. First of all, HRT surfaces $\Sigma(A)$ bend in the time direction and thus, even for collinear intervals, intersection does not generically happen. Second, inside the formula (\ref{ryutak}), $\Sigma(A)$ no longer refers to minimal surfaces as in the Euclidean static case, but to extremal ones. Consequently, even if we had intersection, we could not straightforwardly construct area inequalities as done so far ( unless we could ``project'' the extremal surfaces $\Sigma(A\cup B)$ and $\Sigma(B\cup C)$ to a certain common achronal slice containing $\Sigma(B)$ and $\Sigma(A\cup B\cup C)$, where they become minimal; we will soon see that the non existence of this slice is the key point for the failure of SSA2 ).
A first understanding of how and when SSA1 and SSA2 are valid comes from using formula (\ref{r0st}), that is the specific case of (\ref{r0}) when the boundary interval and its variation are at fixed time. In this case, as we have previously derived, (\ref{r0st}) is valid even without the requirement of time translation invariance. Further, in this case we can still write the EE as a function of the interval length $\Delta x$, $S(\Delta x)$, with the dependence on the boundary time $t_b$ decoupling when studying inequalities at the same $t_b$ ( while for generic space-like intervals, without time translation invariance, the EE is a generic function of all the coordinate end points, not only the proper length ). The positivity of the conformal factor at the vertex $r_0$ is just the monotonicity property that guarantees the validity of SSA1, but what about SSA2? Clearly, to have the EE $S(\Delta x)$ concave as a function of $\Delta x$, we should have that $r_0(\Delta x)$ ( at fixed time $t_b$ ) is monotonically decreasing or, by inverting the function for given $t_b$, that $\Delta x(r_0)$ is monotonically decreasing \footnote{Remember, to avoid confusion, that large $r$ is close to the boundary and small $r$ to the center of the bulk.}. This means that violation of concavity and SSA2 happens for geodesics whose vertex moves towards the boundary when extending the size $\Delta x$ of the interval, at fixed $t_b$.
To understand this point better we can use a theorem by Wall, \cite{Wall:2012uf} theorem 17, which says that if NCC is valid, two geodesics ( maximin surfaces in that paper, as we will discuss later ) ending on space-like boundaries one contained in the other, $A(\Delta x,t_b)$ and $A(\Delta x+\delta x,t_b)$ in our case, will always be at space-like distance from each other, with the smaller one inside, towards the boundary, with respect to the bigger. So SSA2 is respected. If instead NCC is given up, not only do time-like distances become possible but, in order to have violation of SSA2, the narrower geodesic should have its vertex extending further into the bulk than the larger one.
\subsection{Vaidya example for fixed time intervals}
To be more concrete let us consider an example of time dependent, asymptotically AdS background, where analytic computations are possible. This is the Vaidya metric, representing the collapse of a mass shell that interpolates between an AdS metric and a BTZ black hole. This example has been extensively studied in the past by \cite{Allais:2011ys}, \cite{Caceres:2013dma} and \cite{Callan:2012ip}, where the first connection between the NCC and the violation of SSA2 was established.
The metric is
\begin{equation}
ds^2=-(r^2 -m(v))dv^2 +2 dr dv + r^2 dx^2
\end{equation}
which is an AdS solution with the addition of a local energy momentum tensor whose only nonzero component is
\begin{equation}
T_{vv}=\frac{1}{2r}\partial_v m(v).
\end{equation}
The case we consider is when $m(v)$ is a step function at $v=0$ and $T_{vv}$ becomes a delta. If the delta is positive ( $m(v<0)=0,\;m(v>0)=m$ ) the metric will be AdS inside for $v<0$ and BTZ outside for $v>0$; if instead the delta is negative ( $m(v<0)=m,\;m(v>0)=0$ ) we have BTZ inside for $v<0$ and AdS outside for $v>0$. The first case satisfies the NCC, the second violates it, as can be trivially checked. From now on $m=1$ \footnote{ There is an associated scaling symmetry that allows this choice, see for example \cite{Callan:2012ip}. }.
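The check can be made explicit with a short symbolic computation of $R_{\mu\nu}k^{\mu}k^{\nu}$ for the metric above, along the outgoing null vector $k=\partial_v+\frac{r^2-m(v)}{2}\partial_r$; this is a minimal sketch using only the standard curvature formulas.
\begin{verbatim}
import sympy as sp

v, r, x = sp.symbols("v r x")
m = sp.Function("m")(v)
co = [v, r, x]
g = sp.Matrix([[-(r**2 - m), 1, 0], [1, 0, 0], [0, 0, r**2]])
gi = g.inv()
Gam = [[[sum(gi[a, d] * (sp.diff(g[d, b], co[c]) + sp.diff(g[d, c], co[b])
             - sp.diff(g[b, c], co[d])) for d in range(3)) / 2
         for c in range(3)] for b in range(3)] for a in range(3)]

def Ric(b, c):
    return sum(sp.diff(Gam[a][b][c], co[a]) - sp.diff(Gam[a][a][b], co[c])
               + sum(Gam[a][a][d] * Gam[d][b][c]
                     - Gam[a][c][d] * Gam[d][a][b] for d in range(3))
               for a in range(3))

k = [1, (r**2 - m) / 2, 0]  # null vector: g(k,k) = 0
Rkk = sp.simplify(sum(Ric(b, c) * k[b] * k[c]
                      for b in range(3) for c in range(3)))
print(Rkk)  # m'(v)/(2*r): NCC holds iff the mass step is non-decreasing
\end{verbatim}
The output $\partial_v m/(2r)$ coincides with $T_{vv}$ above, so a positive delta respects NCC while a negative one violates it.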
Geodesics that start and end on the boundary and cross the mass shell ( otherwise they are simply contained in the static AdS or BTZ region, depending on the choice for $m(v)$ ) depend on two parameters that, following \cite{Callan:2012ip}, we choose to be $r_c$ and $p_x$: the bulk radius at which the geodesic crosses the shell, and a conserved momentum for the space translation symmetry that turns out to correspond to the radius of the vertex. We start with backgrounds respecting NCC. The goal is to verify the monotonically decreasing behaviour of $\Delta x(r_0)$, where $r_0=p_x$ as explained in appendix \ref{B}, where the relevant formulas can be found; $\Delta x$ is there called $\Delta x_{b}$ and is a function of $r_c$ and $p_x$. We solve for $r_c$ to give, for the geodesic with a certain value of $p_x$, the chosen boundary time $t_b$, and then plot $\Delta x(p_x)$ for the given value of $t_b$. Some sample curves are represented in figure \ref{figuratra2}, where we can check the monotonically decreasing behaviour, as expected.
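Schematically, the check just described can be organised as below; \texttt{Dx\_b} and \texttt{t\_bdy} stand for the closed-form expressions of appendix \ref{B}, which are not reproduced here, so they appear as placeholders.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def Dx_b(rc, px):   # appendix B expression for Delta x_b (placeholder)
    raise NotImplementedError

def t_bdy(rc, px):  # appendix B expression for t_b (placeholder)
    raise NotImplementedError

def Dx_at_fixed_tb(px, tb, rc_lo, rc_hi):
    # solve t_bdy(rc, px) = tb for rc, then evaluate Delta x
    rc = brentq(lambda rc: t_bdy(rc, px) - tb, rc_lo, rc_hi)
    return Dx_b(rc, px)

# SSA2 at fixed tb is equivalent to Dx decreasing in px (= r0):
# np.all(np.diff([Dx_at_fixed_tb(px, tb, ...) for px in grid]) < 0)
\end{verbatim}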
\begin{figure}[h]
\centering
\vspace{-10pt}
\includegraphics[width=1\textwidth]{monssa2res}
\vspace{-30pt}
\caption{SSA2 respected, corresponding to the monotonically decreasing behaviour of $\Delta x(p_x)$, for a Vaidya background respecting NCC.}
\label{figuratra2}
\end{figure}
More interesting is the case of NCC violating geometries. Here the formulas depend on the range of values of the parameters, with three possible cases. Case 1: $r_c > p_x > 1$; case 2: $1> r_c > \sqrt{1/2} $ and $1>p_x > r_c $; case 3: $\sqrt{1/2}> r_c >0$ and $p_x^2 - E_A^2 > 0$, with $E_A$ a certain function of $r_c$ and $p_x$ whose formula and meaning are in appendix \ref{B}. Some sample plots are in figure \ref{figuratra}, where we can see both monotonically decreasing curves, for geodesics belonging to case 1 and thus respecting SSA2, and monotonically increasing ones for geodesics in cases 2 and 3, violating SSA2.
\begin{figure}[h]
\centering
\vspace{-10pt}
\includegraphics[width=1.2\textwidth]{monssa2}
\vspace{-30pt}
\caption{SSA2 respected and violated, corresponding to monotonically decreasing and increasing behaviour of $\Delta x(p_x)$, for a Vaidya background violating NCC. The pictures refer to case 1, case 2 and case 3 respectively. }
\label{figuratra}
\end{figure}
\subsection{Generic space-like intervals}
Let us now discuss the most general scenario, with time dependent backgrounds and generic adjacent interval configurations \footnote{This section greatly benefited from e-mail correspondence with Aron C. Wall and Horacio Casini.}. The starting point for the discussion is the paper by Wall \cite{Wall:2012uf}, where the author introduces the concept of maximin surfaces as an equivalent description of the extremal surfaces $\Sigma(A)$. A codimension two maximin surface is defined starting with a generic codimension one achronal surface $T$ in the bulk containing the boundary of the interval $A$; a maximin surface is then the minimal surface on $T$ ( codimension one within $T$ ) having maximal area when varied over all the possible $T$. In brief, extremality is achieved by minimizing and maximizing in the space and time directions respectively.
The power of the maximin construction is exploited in the theorem that states ( under generic assumptions for the bulk spacetime ) that SSA2 is valid if the NCC condition
\begin{equation}\label{ncc}
R_{\mu\nu}k^{\mu}k^{\nu}\geq 0
\end{equation}
is respected. The original proof in \cite{Wall:2012uf} uses NCC and the Raychaudhuri equation to construct inequalities between areas on different achronal slices with the same boundary condition. This result may also be obtained by using the maximin construction alone, so in order to single out the places where the NCC necessarily enters as a condition for SSA2, and to be naturally led to the main claim of this section, we slightly modify the proof. Skipping some details, which may be found in the original paper, it goes as follows:
Proof : theorem 4 of \cite{Wall:2012uf} states that, given two null congruences of geodesics $N_1$ and $N_2$, in our case obtained by shooting out null curves from maximin surfaces, with $N_2$ nowhere in the past of $N_1$ and touching at a point $p$ belonging to some achronal slice $T$, there exists a sufficiently small neighbourhood of $p$, $B_p\subset T$, such that either $\Theta(N_2)_{B_p}>\Theta(N_1)_{B_p}$ or $N_1$ and $N_2$ coincide there. Given this general result, theorem 14 states that two maximin surfaces with space-like boundary conditions ( attached to $\Sigma(B)$ and $\Sigma(A\cup B\cup C)$ in our case ) are always at space-like distance if NCC holds; the idea is simply that, as maximin surfaces are extremal, on $\Sigma(B)$ and $\Sigma(A\cup B\cup C)$ we have $\Theta(N(\Sigma(B)))_{\Sigma(B)}=\Theta(N(\Sigma(A\cup B\cup C)))_{\Sigma(A\cup B\cup C)}=0$; then, starting from a situation where the two curves are everywhere at space-like distance from each other, not only close to the boundary but all the way through the bulk, let us suppose we can continuously deform the curves, for example enlarging $A\cup B\cup C$ while keeping $B$ fixed, to a situation where somewhere the proper distance approaches the null value; this means that two points $p_B$ and $p_{A\cup B\cup C}$, one for each curve, are connected by a null geodesic ( by symmetry there is either a single $p$ corresponding to the vertex or two symmetric points on the right and left hand sides of the vertex ). This is shown in the first picture of figure \ref{figuratld}.
\begin{figure}[h]
\centering
\vspace{-0pt}
\includegraphics[width=0.9\textwidth]{timelikedis2}
\vspace{-0pt}
\caption{Development of time-like distances between the two extremal surfaces $\Sigma(B)$ and $\Sigma(A\cup B\cup C)$.}
\label{figuratld}
\end{figure}
Let us choose $\Sigma(A\cup B\cup C)$ to be nowhere in the past of $\Sigma(B)$ and focus on $p_B$; theorem 4 says that in a neighbourhood of $p_B$ the null congruences satisfy $\Theta(N(\Sigma(A\cup B\cup C)))_{B_{p_B}}>\Theta(N(\Sigma(B)))_{B_{p_B}}$. As NCC is valid, $\Theta$ is non-increasing along the congruence, so $\Theta(N(\Sigma(A\cup B\cup C)))_{\Sigma(A\cup B\cup C)}\geq\Theta(N(\Sigma(A\cup B\cup C)))_{B_{p_B}}>\Theta(N(\Sigma(B)))_{B_{p_B}}=0$, which contradicts the extremality condition. Not being able to continuously approach a light-like distance between any two points, it follows that the two curves cannot be deformed to develop time-like distances either, as shown in the second picture of figure \ref{figuratld}. This point is crucial for proving theorem 17, which shows that both $\Sigma(B)$ and $\Sigma(A\cup B\cup C)$ always belong to the same achronal slice $T$. Finally, the key passage is that, using the maximin construction, there exists a representative on $T$ for both $\Sigma(A\cup B)$ and $\Sigma(B\cup C)$ of smaller area ( proper length ) than the maximin surfaces, $\mathcal{A}(\Sigma(A\cup B))>\mathcal{A}(\Sigma(A\cup B)_T)$ ( and analogously for $\Sigma(B\cup C)$ )\footnote{This is the point where we do not make use of NCC, which is instead used by \cite{Wall:2012uf} to compare areas.}. Thus on $T$ we have
\begin{equation}
\mathcal{A}(\Sigma(A\cup B))+\mathcal{A}(\Sigma(B\cup C))>\mathcal{A}(\Sigma(A\cup B)_T)+\mathcal{A}(\Sigma(B\cup C)_T)\geq \mathcal{A}(\Sigma(B))+\mathcal{A}(\Sigma(A\cup B\cup C))
\end{equation}
where the last inequality is just the usual static argument ( which can now be used since on $T$ the surfaces $\Sigma(B)$ and $\Sigma(A\cup B\cup C)$ are minimal and $\Sigma(A\cup B)_T$ and $\Sigma(B\cup C)_T$ intersect ).
This formulation of the theorem makes evident that a necessary condition for violation of SSA2 is the development of non space-like distances between $\Sigma(B)$ and $\Sigma(A\cup B\cup C)$. This is understood because NCC enters the above proof only once, in constraining $\Sigma(B)$ and $\Sigma(A\cup B\cup C)$ to belong to the same achronal slice $T$ while remaining at space-like distances.
The importance of this result, that is, of the possible non space-like distance between two surfaces $\Sigma(A_1)$ and $\Sigma(A_2)$ with the domain of dependence of $A_1$, $D_{A_1}$, containing $D_{A_2}$, $D_{A_2}\subset D_{A_1}$, resides in its being a counterexample to statements sometimes used in the past literature, for instance Conjecture C2 of \cite{Czech:2012bh}.
We would like to emphasize the difference between the local NCC energy condition obtained in the present section and the integrated NCC that we obtained for static spacetimes. The former is clearly more restrictive than the latter, as respecting the local NCC obviously implies the integrated NCC, but not conversely. In fact we could have used the maximin construction to prove that local NCC implies SSA2 for static backgrounds ( which is true ), but the maximin construction requires the local NCC to be applicable ( for example in proving the equivalence with HRT surfaces ). So, in order to have the theorem as strong as possible, by requiring the weakest energy condition, we proceeded there without this powerful tool.
\subsection{Vaidya example for generic space-like intervals}
Given the generic discussion of the past section, let us construct a concrete example, where we show the development of either null or time-like distances between the two geodesics $\Sigma(B)$ and $\Sigma(A\cup B\cup C)$ when NCC does not hold and SSA2 is violated, while space-like distances are in general maintained when NCC is valid ( although it is possible to have SSA2 respected in the former case, as the condition is necessary but not sufficient ).
In this section we will study only situations where both the $\Sigma(B)$ and $\Sigma(A\cup B\cup C)$ geodesics belong to the same parameter range ( cases 1, 2, 3 as previously introduced, see also appendix \ref{B} ), even though in appendix \ref{B} formulae are provided for the most general scenario.
Our goal will be to probe the distance between the vertices of $\Sigma(B)$ and $\Sigma(A\cup B\cup C)$, to check whether or not they are at space-like distance depending on the values of the parameters $r_c$ and $p_x$ of both curves. We will not consider distances between generic points on the geodesics, as it would be excessively complicated to derive the corresponding inequalities and ultimately unnecessary, since a non space-like distance between the two curves is just a necessary but not sufficient condition for SSA2 violation. As the boundary conditions force space-like distances between the end points of $\Sigma(B)$ and $\Sigma(A\cup B\cup C)$, the vertices are likely to be the most prone to develop either null or time-like distances between them, so we restrict to this case. Further, to simplify an otherwise too complicated computation, we will consider only collinear intervals at the same boundary time $t_b$ and symmetrically disposed around the central point of $B$, that is, $A$ and $C$ are taken to be of the same length.
In appendix \ref{B} we derived inequalities for the parameters $r_c$ and $p_x$ of two geodesics $\Sigma(B)$ and $\Sigma(A\cup B\cup C)$ that, when respected, correspond to space-like distances between the vertices. The constraint that comes from having the end points of both curves at the same boundary time $t_b$ eliminates one of the four parameters and further partially restricts the available parameter space of the other three. The results are the following. \\
\textbf{Metric that violates the NCC}: \\
We here generically use ``1'' and ``2'' as labels for the two geodesics with parameters $r_c$ and $p_x$. As the goal is just to show whether the distance between the vertices is space-like or not, it is not really relevant to distinguish which one corresponds to the geodesic attached to the largest boundary interval $\Sigma(A\cup B\cup C)$ and which to $\Sigma(B)$. However, as the intervals are collinear, we know that for case 1 a smaller value of $p_x$ corresponds to a bigger value of $\Delta_x$, so that parameter belongs to $\Sigma(A\cup B\cup C)$. For cases 2 and 3 instead, the smaller $p_x$ belongs to $\Sigma(B)$.
\begin{itemize}
\item{Case 2: $1> r_c > \sqrt{1/2} ,\;1>p_x > r_c $
The inequalities defining space-like distance between the vertices are ( see appendix \ref{B} ):
\begin{subequations}
\begin{align}
y(p_{x1}) y(r_{c2}) &< y(p_{x2}) y(r_{c1}) \label{in21a}\\
& or \nonumber \\
y(r_{c1}) y(p_{x1}) &< y(p_{x2}) y(r_{c2})\label{in22b}
\end{align}
\end{subequations}
where we have defined
\begin{equation}
y(r)=\frac{1 + r}{-1 + r}
\end{equation}
in a parameter space spanned by $1>p_{x1}>r_{c1}>r_{c2}> \sqrt{1/2}$. $p_{x2}$ is a function of the other three parameters, as explained in appendix \ref{B}. We can sweep this parameter space to look for what volume satisfies the inequalities (\ref{in21a}) and (\ref{in22b}) and what violates them. The result is that they are always violated, as shown in figure \ref{figura5}, and thus the geodesics always develop non space-like distances. }
\begin{figure}[h]
\centering
\vspace{+10pt}
\includegraphics[width=0.8\textwidth]{in2}
\vspace{-0pt}
\caption{The first graph shows the volume in the parameter space that violates both (\ref{in21a}) and (\ref{in22b}), thus giving null or time-like distances between the vertices; the second and third show the ( empty ) region that satisfies (\ref{in21a}) and (\ref{in22b}) respectively, thus giving space-like distances. The small angle missing is due to the requirement $t_b(r_{c1},p_{x1})-t_b(r_{c2},p_{x2})=0$ . }
\label{figura5}
\end{figure}
\item{Case 3: $\sqrt{1/2}> r_c >0, \;p_x^2 - E_A^2 > 0$
The inequalities are as previously:
\begin{subequations}
\begin{align}
y(p_{x1}) y(r_{c2}) &< y(p_{x2}) y(r_{c1}) \label{in31a}\\
& or \nonumber \\
y(r_{c1}) y(p_{x1}) &< y(p_{x2}) y(r_{c2}).\label{in32b}
\end{align}
\end{subequations}
inside a parameter space $p_{x1}>r_{c1}>r_{c2}$ with the additional constraint $p_{x1} < r_{c1}/(1 - 2 r_{c1}^2)$, and again $p_{x2}$ being a function of the other three parameters. Again the inequalities are always violated, as shown in figure \ref{figura5b}. }
\begin{figure}[h]
\centering
\vspace{-0pt}
\includegraphics[width=0.8\textwidth]{in3}
\vspace{-0pt}
\caption{Graphs showing the parameter regions giving null or time-like ( first ) and space-like ( second and third, both empty ) distances between the vertices of the two geodesics. Relevant inequalities are (\ref{in31a}) and (\ref{in32b}). Note that the parameter space differs from that of the previous case not only in the domains of $r_{c1},p_{x1},r_{c2}$ but also in shape. }
\label{figura5b}
\end{figure}
\item{Case 1: $r_c > p_x > 1$
Inequalities here are slightly different:
\begin{subequations}
\begin{align}
y(r_{c1}) y(p_{x1}) &> y(p_{x2}) y(r_{c2}) \;\;\;and \;\;\; y(p_{x1}) y(r_{c2}) > y(p_{x2}) y(r_{c1}) \;\;\;p_{x1}<p_{x2}\label{in11a}\\
y(r_{c2}) y(p_{x2}) &> y(p_{x1}) y(r_{c1}) \;\;\;and \;\;\; y(p_{x2}) y(r_{c1}) > y(p_{x1}) y(r_{c2}) \;\;\;p_{x1}>p_{x2}.\label{in12b}
\end{align}
\end{subequations}
with the parameter range for $r_{c1},p_{x1},r_{c2}$ limited by the constraints $r_{c1}>p_{x1}$ and $r_{c2}>p_{x2}(r_{c1},p_{x1},r_{c2})>1$, and $p_{x2}$ again a function of the other parameters fixed by matching the boundary time $t_b$. Inside this volume we deal separately with the subspaces $p_{x1}>p_{x2}$ and $p_{x1}<p_{x2}$; we can then apply (\ref{in11a}) and (\ref{in12b}) on the relevant subspace and see whether they are satisfied or not. It turns out that when $p_{x1}<p_{x2}$, $y(p_{x1}) y(r_{c2}) > y(p_{x2}) y(r_{c1}) $ is always respected, while $y(r_{c1}) y(p_{x1}) > y(p_{x2}) y(r_{c2})$ is respected in some region and violated in its complement, as shown in figure \ref{figura7}. Thus here we find geodesics whose vertices are at space-like distance as well as ones at non space-like distance, depending on the values of their parameters.
\begin{figure}[h]
\centering
\vspace{+10pt}
\includegraphics[width=0.6\textwidth]{in1a}
\vspace{-0pt}
\caption{$p_{x1}<p_{x2}$ case: agreement and violation of $y(r_{c1}) y(p_{x1}) > y(p_{x2}) y(r_{c2})$ divide the total parameter space in two complementary subspaces. As the other inequality of (\ref{in11a}) is always respected, we have space-like distances in the first case, and null or time-like in the second.}
\label{figura7}
\end{figure}
Correspondingly, when $p_{x1}>p_{x2}$, $y(r_{c2}) y(p_{x2}) > y(p_{x1}) y(r_{c1})$ is always respected, while $y(p_{x2}) y(r_{c1}) > y(p_{x1}) y(r_{c2}) $ is respected in some region and violated in its complement ( with respect to the total space ). This is shown in figure \ref{figura8}.}
\begin{figure}[h]
\centering
\vspace{-0pt}
\includegraphics[width=0.6\textwidth]{in1b}
\vspace{-0pt}
\caption{$p_{x1}>p_{x2}$: agreement and violation of $y(p_{x2}) y(r_{c1}) > y(p_{x1}) y(r_{c2}) $ divide the total parameter space in two complementary subspaces. As the other inequality of (\ref{in12b}) is always respected, we have space-like distances in the first case, and null or time-like in the second.}
\label{figura8}
\end{figure}
\end{itemize}
\textbf{Metric that respects the NCC}: \\
The space-like condition for distances between vertices is here given by the inequalities
\begin{subequations}
\begin{align}
\frac{1}{p_{x1}} + \frac{1}{r_{c1}}>\frac{1}{p_{x2}} + \frac{1}{r_{c2}} \;\;\;\; and \;\;\;\; \frac{1}{p_{x1}} + \frac{1}{r_{c2}}>\frac{1}{p_{x2}} + \frac{1}{r_{c1}} \;\;\;p_{x1}<p_{x2} \label{csl1} \\
\frac{1}{p_{x1}} + \frac{1}{r_{c1}}<\frac{1}{p_{x2}} + \frac{1}{r_{c2}} \;\;\;\; and \;\;\;\; \frac{1}{p_{x2}} + \frac{1}{r_{c1}}>\frac{1}{p_{x1}} + \frac{1}{r_{c2}} \;\;\;p_{x1}>p_{x2} \label{csl2}
\end{align}
\end{subequations}
with only $r_{c1},p_{x1},r_{c2}$ independent, the first ( second ) set of inequalities applying when $p_{x1}<p_{x2}$ ( $p_{x1}>p_{x2}$ ). The parameter space defined by the usual condition for the boundary time and either (\ref{csl1}) or (\ref{csl2}) may be explored numerically, in chosen parameter domains. The result is that (\ref{csl1}) and (\ref{csl2}) are always respected, implying that the vertices are at space-like distance from one another.
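All the parameter sweeps of this section share the same structure, sketched below for case 2; $y(r)$ is the function defined above, while \texttt{px2\_of} stands for the appendix \ref{B} relation fixing $p_{x2}$ from the common boundary time and is left as a placeholder.
\begin{verbatim}
import numpy as np

def y(r):
    return (1 + r) / (-1 + r)

def px2_of(rc1, px1, rc2):     # fixed by t_b(rc1,px1) = t_b(rc2,px2)
    raise NotImplementedError  # appendix B relation (placeholder)

def vertices_spacelike_case2(rc1, px1, rc2):
    px2 = px2_of(rc1, px1, rc2)
    return (y(px1) * y(rc2) < y(px2) * y(rc1)
            or y(rc1) * y(px1) < y(px2) * y(rc2))

# sweep a grid with 1 > px1 > rc1 > rc2 > sqrt(1/2) and record where
# vertices_spacelike_case2 is True; the text finds it nowhere True.
\end{verbatim}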
\section{Conclusions}
We have discussed in detail the holographic description of the two strong subadditivity inequalities, from static backgrounds to time dependent ones, with collinear boundary intervals or general configurations. We have seen that, for static geometries, SSA1 and SSA2 are always respected for collinear intervals, that the second requires an integrated NCC for non collinear intervals and is generically violated when we have a Lorentz anomaly, while SSA1 holds. This has its counterpart in the monotonicity condition for SSA1, which remains unaltered, while the concavity for SSA2 transforms into a stricter requirement when abandoning collinearity. New results are the geometric proof for concavity of minimal surfaces, the proof that SSA1 implies only monotonicity independently of the interval configuration, a new proof that SSA2 requires the integrated NCC using the Raychaudhuri equation and finally the violation of SSA2 but not of SSA1 ( here only numerical ) for Lorentz anomalous CFTs. For time dependent backgrounds we first provided a new simple strategy for understanding if and when SSA1 and SSA2 violation occurs with collinear intervals, which does not require direct checking of monotonicity or concavity. Second, we have reviewed, in a slightly different form, the result by Wall that local NCC implies validity of SSA2, while making manifest the connection with the energy condition by showing that the violation comes from the development of null or time-like distances between the innermost and outermost geodesics entering the SSA2 inequality. Furthermore we have provided an explicit example in both cases by using the Vaidya metric. Some discussion of why violation of strong subadditivity occurs has also been provided. Following are two appendices containing the results and the proofs ( to my knowledge new ) of what interval configuration, as a function of the slopes, gives the strongest bound on the entanglement entropy inequalities SSA1 and SSA2, and explicit formulas ( and some derivations ) for the Vaidya metric example.
An interesting point of view, which we did not discuss but is worth mentioning, is the result of \cite{Parikh:2014mja}, where it was shown that the Virasoro conditions in bosonic string theory imply the NCC on the background geometry. We would like to suggest that, perhaps, this is a hint that energy conditions may have some UV justification. The hypothesis is that not respecting NCC ( or analogous conditions ) means that the metric we are using does not consistently arise as a background in theories that correctly quantize gravity, and one of the dual symptoms is not respecting strong subadditivity.
We would like to point out three possible hints for future research. The first one is the problem of quantum bulk corrections to EE and the question if they do respect or not the boundary strong subadditivity inequality. For example we can introduce the mutual information $I(A,B)\equiv S(A)+S(B)-S(A\cup B)$ and rewrite the SSA2 inequality as, \cite{Allais:2011ys}
\begin{equation}\label{inmut}
I(A,B\cup C)\geq I(A,B).
\end{equation}
If the intervals $A$ and $B$ entering the mutual information are well disconnected, the classic holographic computation gives $I(A,B)=0$, as $\Sigma(A\cup B)=\Sigma(A)\cup \Sigma(B)$. Thus, for $A$, $B$ and $C$ disconnected, the inequality (\ref{inmut}) just produces a classical $0=0$ result. Quantum corrections in the bulk clearly affect the above inequality; how to compute them is still a debated issue, as the HRT ( or Ryu-Takayanagi ) surfaces are not string worldsheets, as in the holographic description of Wilson loops, but rather just geometrical surfaces; thus $\alpha'$ corrections or higher genus computations are not the correct answer. The question has so far received two different answers, in \cite{Engelhardt:2014gca} and \cite{Faulkner:2013ana}, with pro and con arguments for both. The question is then whether there are reasons to believe that quantum bulk corrections respect strong subadditivity or not, and whether they should be constrained by energy conditions in doing so.
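Note that the equivalence between (\ref{inmut}) and SSA2 is a one-line bookkeeping identity, which can be verified symbolically:
\begin{verbatim}
import sympy as sp

SA, SB, SBC, SAB, SABC = sp.symbols("S_A S_B S_BC S_AB S_ABC")
I_A_BC = SA + SBC - SABC      # I(A, B u C)
I_A_B = SA + SB - SAB         # I(A, B)
ssa2 = SAB + SBC - SB - SABC  # SSA2 combination
print(sp.simplify((I_A_BC - I_A_B) - ssa2))  # 0
\end{verbatim}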
A second direction for future research, and also one of the original motivations for the present paper, is the question of what is the logical relationship between strong subadditivity and energy conditions. We know that the integrated NCC implies SSA2 for static geometries with generic adjacent intervals, and that the two are equivalent for a SSA2-bound-maximizing configuration, and we also know that local NCC implies SSA2 in time dependent systems. Can we deform either the energy condition or the strong subadditivity, weakening or strengthening depending on the case, in order to make the correspondence one to one, in the widest possible range of cases?
Finally, we would like to generalize as much as possible of the present paper to generic d-dimensional theories. Part of this work is straightforward, part quite complicated.
We hope to come back to these issues and more in the future.
\section*{Acknowledgements}
I would like to thank Aron C. Wall and Horacio Casini for email correspondence at the early stage of this work, and Diego Trancanelli for suggestions and comments on the paper. This work was funded by FAPESP fellowship 2013/10460-9.
We work over the algebraically closed field of characteristic zero.
Specifically, the base field is the complex numbers when considering the
classification of surfaces. A smooth irreducible algebraic variety
$V$ in $\mathbb P^r$ is said to be projectively normal if the
natural morphisms $H^0(\mathbb P^r,{\mathcal O}_{\mathbb P^r}(m))\to
H^0(V,{\mathcal O}_V(m))$ are surjective for every nonnegative
integer $m$. Let $C$ be a smooth irreducible algebraic curve of
genus $g$. We say that a base point free line bundle $L$ on $C$ is
normally generated if $C$ has a projectively normal embedding via
its associated morphism $\phi_L:C\to \mathbb P(H^0(C,L))$.
Any line bundle of degree at least $2g+1$ on a smooth curve of genus
$g$ is normally generated but a line bundle of degree at most $2g$
might fail to be normally generated (\cite{KK1}, \cite{LM1},
\cite{Mum}). Green and Lazarsfeld showed a sufficient condition for
$L$ to be normally generated as follows (\cite{GL}, Theorem 1): If
$L$ is a very ample line bundle on $C$ with $\deg L\ge
2g+1-2h^1(C,L)-\hbox{\rm Cliff}(C)$ (and hence $h^1(C,L)\le 1$), then $L$ is
normally generated. Using this, we show that a line bundle $L$ on
$C$ with $\frac{3g+3}{2}<\deg L\le 2g-5$ is normally generated for
$\deg L>\max\{2g+2-4h^1(C,L), ~2g-\frac{g-1}{6}-2h^1(C,L)\}$. As a
corollary, if $C$ is a triple covering of a genus $p$ curve $C'$ with
$C\stackrel{\phi}\rightarrow C'$ then it has a very ample
$K_C(-\phi^*D)$ which is normally generated for any divisor $D$ on
$C'$ with $4p<\deg D< \frac{g-1}{6}-2p$. It is a kind of
generalization of the result that $K_C(-rg^1_3)$ on a trigonal curve
$C$ is normally generated for $3r\le \frac{g}{2}-1$ (\cite{KK}).
As an application, for a nondegenerate smooth surface $S\subset \mathbb
P^r$ of degree $2\Delta-e$ with $g(H)=\Delta+f$, $\max\{\frac{e}{2},
6e-\Delta\}<f-1<\frac{\Delta-2e-6}{3}$ for some $e,f\in \mathbb
Z_{\ge 1}$, we obtain that $S$ is projectively normal with $~p_g=f$
and $-2f-e+2\le K_S^2\le \frac{(2f+e-2)^2}{2\Delta-e}$ if its
general hyperplane section $H$ is linearly normal, where
$\Delta:=\deg S-r+1$. Furthermore we characterize smooth projective
surfaces $S$ for $K_S^2=-2f-e+2$, $0$ (cf. Theorem \ref{thm5.3}).
These applications were derived by the methods in Akahori's,
Livorni's and Sommese's papers (\cite{Ak}, \cite{Liv}, \cite{So}).
We follow most notations in \cite{ACGH}, \cite{GH}, \cite{H}. Let
$C$ be a smooth irreducible projective curve of genus $g\ge 2$. The
Clifford index of $C$ is taken to be $\hbox{\rm Cliff}(C)=\min
\{~\hbox{\rm Cliff}(L)~|~h^0(C,L)\ge 2,~h^1(C,L)\ge 2~\},$ where $\hbox{\rm Cliff}(L)=\deg
L-2(h^0(C,L)-1)$ for a line bundle $L$ on $C$. By abuse of notation,
we sometimes use a divisor $D$ on a smooth variety $V$ instead of
${\mathcal O}_V(D)$. We also denote $H^i(V,{\mathcal O}_V(D))$ by
$H^i(V,D)$ and $h^0(V,L)-1$ by $r(L)$ for a line bundle $L$ on $V$.
We denote $K_V$ a canonical line bundle on a smooth variety $V$.
\section{Normal generation of a line bundle on a smooth curve}
Any line bundle of degree at least $2g+1$ on a smooth curve of genus
$g$ is normally generated. If the degree is at most $2g$, then there
are curves which have a non normally generated line bundle of given
degree (\cite{KK1}, \cite{LM1}, \cite{Mum}). In this section, we
investigate the normal generation of a line bundle with given degree
on a smooth curve under some condition about the speciality of the
line bundle.
\begin{thm}
Let $L$ be a very ample line bundle on a smooth curve $C$ of genus
$g$ with $\frac{3g+3}{2}<\deg L\le 2g-5$. Then $L$ is normally
generated if $\deg L>\max\{2g+2-4h^1(C,L),
2g-\frac{g-1}{6}-2h^1(C,L)\}$. \label{3.5.6}
\end{thm}
\begin{proof}
Suppose $L$ is not normally generated. Then there exists a line
bundle $A\simeq L(-R), ~R>0,$ such that \hbox{\rm (i)} $\hbox{\rm Cliff}(A)\le
\hbox{\rm Cliff}(L)$, \hbox{\rm (ii)} $\deg A\ge \frac{g-1}{2}$, \hbox{\rm
(iii)} $h^0(C,A)\ge 2$ and $h^1(C,A)\ge h^1(C,L)+2$ by the proof of
Theorem 3 in \cite{GL}. Assume $\deg K_CL^{-1}=3$; then
$|K_CL^{-1}|=g^1_3$, so $L=K_C(-g^1_3)$, which is normally
generated, a contradiction.
So we may assume $\deg K_CL^{-1}\ge 4$ and then $r(K_CL^{-1})\ge 2$
since $\deg L>2g+2-4h^1(C,L)$. Let $B_1$ (resp. $B_2$) be the base
locus of $K_CL^{-1}$ (resp. $K_CA^{-1}$), and let
$N_1:=K_CL^{-1}(-B_1), ~N_2:=K_CA^{-1}(-B_2)$. Then $N_1\lneq N_2$
since $A\cong L(-R), ~R>0$ and $h^1(C,A)\ge h^1(C,L)+2$. Hence we
have the following diagram,
\begin{picture}(300,100)
\put(80,75){$C$}
\put(95,80){\vector(1,0){85}}
\put(90,65){\vector(2,-1){90}}
\put(186,75){$C_2$}
\put(190,70){\vector(0,-1){40}}
\put(200,45){$\pi$: projection}
\put(130,87){$\phi_{N_2}$}
\put(115,33){$\phi_{N_1}$}
\put(186,15){$C_1$}
\end{picture}
\noindent where $C_i=\phi_{N_i}(C)$.
If we set $m_i:=\deg \phi_{N_i}, ~i=1,2$, then we have $m_2|m_1$. If
$N_1$ is birationally very ample, then by Lemma 9 in \cite{KK1} and
$\deg K_CL^{-1}<\frac{g-1}{2}$ we have $r(N_1)\le \left [\frac{\deg
N_1-1}{5}\right ].$ This contradicts $\deg L>2g+2-4h^1(C,L)$,
which is equivalent to $\deg K_CL^{-1}<4(h^0(C,K_CL^{-1})-1)$.
Therefore $N_1$ is not birationally very ample, and then we have
$m_1\le 3$ since $\deg K_CL^{-1}<4(h^0(C,K_CL^{-1})-1)$.
Let $H_1$ be a hyperplane section of $C_1$. If $|H_1|$ on a smooth
model of $C_1$ is special, then $r(N_1)\le\frac{\deg N_1}{4}$, which
is absurd. Thus $|H_1|$ is nonspecial. If $m_1=2$, then
$$r(K_CL^{-1}(-B_1+P+Q))\ge r(K_CL^{-1}(-B_1))+1$$ for any pair $(P,Q)$
such that $\phi_{N_1}(P)=\phi_{N_1}(Q)$ since $|H_1|$ is nonspecial.
Therefore we have $r(L(-P-Q))\ge r(L)-1$ for $(P,Q)$ such that
$\phi_{N_1}(P)=\phi_{N_1}(Q)$, which contradicts that $L$ is very
ample. Therefore we get $m_1=3$. Suppose $B_1$ is nonzero. Set $P\le
B_1$ for some $P\in C$. Consider $Q,R$ in $C$ such that
$\phi_{N_1}(P)=\phi_{N_1}(Q)=\phi_{N_1}(R)=P'$ for some $P'\in C_1$.
Since $|H_1|$ is nonspecial, we have
\begin{eqnarray*}
r(K_CL^{-1}(Q+R))&\ge& r(N_1(P+Q+R))=r(H_1+P')\\
&=&r(H_1)+1=r(K_CL^{-1})+1
\end{eqnarray*} which is a contradiction to the very ampleness of
$L$. Hence $K_CL^{-1}$ is base point free, i.e., $K_CL^{-1}=N_1$. On
the other hand, we have $m_2=1$ or 3 for $m_2| m_1$. Since
$K_CA^{-1}(-B_2)=N_2\gneq N_1=K_CL^{-1}$, we may set $N_1=N_2(-G)$
for some $G>0$.
Assume $m_2=1$, i.e. $K_CA^{-1}(-B_2)=N_2$ is birationally very
ample. On the other hand we have $r(N_2)\ge r(N_1)+\frac{\deg
G}{2}$, since $N_2(-G)\cong N_1$ and $\hbox{\rm Cliff}(N_2)\le
\hbox{\rm Cliff}(A)\le\hbox{\rm Cliff}(L)=\hbox{\rm Cliff}(N_1)$. In case $\deg N_2\ge g$ we have $r(N_2)\le
\frac{2\deg N_2-g+1}{3}$ by Castelnuovo's genus bound and hence
$$\hbox{\rm Cliff}(L)\ge \hbox{\rm Cliff}(N_2)\ge \deg N_2-\frac{4\deg N_2-2g+2}{3}=\frac{2g-2-\deg N_2}{3}
\ge \frac{g-1}{6},$$ since $N_2=K_CA^{-1}(-B_2)$ and $\deg A\ge
\frac{g-1}{2}$. If we observe that the condition $\deg
L>2g-\frac{g-1}{6}-2h^1(C,L)$ is equivalent to
$\hbox{\rm Cliff}(K_CL^{-1})<\frac{g-1}{6}$, then we meet an absurdity. Thus we
have $\deg N_2\le g-1$, and then Castelnuovo's genus bound produces
$\deg N_2\ge 3r(N_2)-2$. Note that the Castelnuovo number $\pi(d,r)$
has the property $\pi(d,r)\le\pi(d-2,r-1)$ for $d\ge 3r-2$ and $r\ge
3$, where $\pi(d,r)=\frac{m(m-1)}{2}(r-1)+m\epsilon,$
$d-1=m(r-1)+\epsilon,~~0\le \epsilon\le r-2$ (Lemma 6, \cite{KK1}).
Hence
$$
\pi(\deg N_2, r(N_2))\le \cdots\le \pi(\deg N_2-\deg G,
r(N_2)-\frac{\deg G}{2})\\
\le \pi (\deg N_1, r(N_1)),
$$
because of $2\le r(N_1)\le r(N_2)-\frac{\deg G}{2}$. Since
$r(N_1)\ge \frac{\deg N_1 }{4}$ and ${\deg N_1 }<\frac{g-1}{2}$, we
can deduce the strict inequality $\pi(\deg N_1, r(N_1))<g$ purely
numerically, regardless of the birational embedding, from the proof of
Lemma 9 in \cite{KK1}. This is absurd. Hence $m_2=3$, which yields $C_1\cong
C_2$.
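The monotonicity property of the Castelnuovo number used above is easy to spot-check numerically; the following minimal sketch implements $\pi(d,r)$ from its definition and verifies the inequality on a finite range ( the range itself is an arbitrary choice ).
\begin{verbatim}
def pi_cast(d, r):
    # d - 1 = m(r - 1) + eps with 0 <= eps <= r - 2
    m, eps = divmod(d - 1, r - 1)
    return m * (m - 1) // 2 * (r - 1) + m * eps

# spot-check pi(d, r) <= pi(d - 2, r - 1) for d >= 3r - 2, r >= 3
assert all(pi_cast(d, r) <= pi_cast(d - 2, r - 1)
           for r in range(3, 30) for d in range(3 * r - 2, 200))
\end{verbatim}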
Let $H_2$ be a hyperplane section of $C_2$. If $|H_2|$ on a smooth
model of $C_2$ is special, then $n:=r(N_2)\le \frac{\deg N_2}{6}$.
Thus the condition $\deg K_CL^{-1}<4(h^0(C,K_CL^{-1})-1)$ yields the
following inequalities:
$$
\frac{2\deg N_2}{3}\le\hbox{\rm Cliff}(N_2)\le \hbox{\rm Cliff}(N_1)\le \frac{\deg N_1}{2},
$$
which contradicts to $N_1\lneq N_2$. Accordingly $|H_2|$ is also
nonspecial.
Now we have $r(N_i)=\frac{\deg N_i}{3}-p, ~i=1,2$ where $p$ is the
genus of a smooth model of $C_1\cong C_2$. Therefore
$$
\frac{\deg N_1}{3}+2p=\hbox{\rm Cliff}(N_1)\ge \hbox{\rm Cliff}(N_2)=\frac{\deg N_2}{3}+2p
$$
which contradicts $\deg N_1<\deg N_2$. This
contradiction comes from the assumption that $L$ is not normally
generated, thus the result follows.
\end{proof}
Using the above theorem, we obtain the following corollary under the
same assumption:
\begin{cor}
Let $C$ be a triple covering of a genus $p$ curve $C'$ with
$C\stackrel{\phi}\rightarrow C'$ and $D$ a divisor on $C'$ with
$4p<\deg D< \frac{g-1}{6}-2p$. Then $K_C(-\phi^*D)$ becomes a very
ample line bundle which is normally generated.
\end{cor}
\begin{proof}
Set $d:=\deg D$ and $L:=K_C(-\phi^*D)$. Suppose $L$ is not base
point free, then there is a $P\in C$ such that
$|K_CL^{-1}(P)|=g^{r+1}_{3d+1}$. Note that $g^{r+1}_{3d+1}$ cannot
be composed with $\phi$ by degree reason. Therefore we have $g\le
6d+3p$ due to the Castelnuovo-Severi inequality. Hence it cannot
occur by the condition $d< \frac{g-1}{6}-2p$. Suppose $L$ is not
very ample, then there are $P,Q\in C$ such that
$|K_CL^{-1}(P+Q)|=g^{r+1}_{3d+2}$. By the same method as above, we
get a similar contradiction. Thus $L$ is very ample. The condition
$d<\frac{g-1}{6}-2p$ produces $\hbox{\rm Cliff}(K_CL^{-1})=d+2p<\frac{g-1}{6}$
since $\deg K_CL^{-1}=3d$ and $h^0(C,K_CL^{-1})=h^0(C', D)=d-p+1$.
Whence $\deg L>2g-\frac{g-1}{6}-2h^1(C,L)$ is satisfied. The
condition $4p<d$ induces $\deg K_CL^{-1}>4(h^0(C, K_CL^{-1})-1)$,
i.e., $\deg L>2g+2-4h^1(C,L)$. Consequently $L$ is normally
generated by Theorem \ref{3.5.6}.
\end{proof}
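The two degree conditions used in the proof reduce to the stated bounds on $\deg D$ by elementary algebra, which can be double-checked symbolically; here $\deg L=2g-2-3d$ and $h^1(C,L)=h^0(C',D)=d-p+1$ as in the proof.
\begin{verbatim}
import sympy as sp

g, d, p = sp.symbols("g d p", positive=True)
degL = 2*g - 2 - 3*d        # deg K_C(-phi^* D)
h1 = d - p + 1              # h^1(C, L) = h^0(C', D)
c1 = sp.expand(degL - (2*g + 2 - 4*h1))          # = d - 4p
c2 = sp.expand(degL - (2*g - (g - 1)/6 - 2*h1))  # = (g-1)/6 - 2p - d
print(c1, c2)  # positive exactly when 4p < d < (g-1)/6 - 2p
\end{verbatim}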
\begin{rmk}
In fact, we have a similar result in \cite{KK1} for a trigonal curve
$C$: $K_C(-rg^1_3)$ is normally generated if ~$3r<\frac{g}{2}-1$
$($\cite{KK}$)$. Thus our result could be considered as a
generalization dealing with triple coverings under some
conditions.
\end{rmk}
\section{Application to projective surfaces}
Let $S\subseteq \mathbb P^r$ be a nondegenerate smooth surface and
$H$ a smooth hyperplane section of $S$. If $H$ is projectively
normal and $h^1(H, {\mathcal O}_H(2))=0$, then $q=h^1(S,{\mathcal
O}_S)=0, ~p_g=h^2(S,{\mathcal O}_S)=h^1(H,{\mathcal O}_H(1))$ and
$h^1(S,{\mathcal O}_S(t))=0$ for every nonnegative integer $t$
(\cite{Ak}, Lemma 2.1, Lemma 3.1). In this section, using our result
about the projective normality of smooth curves in section 2, we can
characterize smooth projective surfaces with a wider range of
degree and sectional genus. Recall the definition of $\Delta$-genus
given by $\Delta:=\deg S-r+1$.
\begin{thm}
\label{thm5.2} Let $S\subset \mathbb P^r$ be a nondegenerate smooth
surface of degree $2\Delta-e$ with $g(H)=\Delta+f$,
$\max\{\frac{e}{2}~, ~6e-\Delta\}<f-1<\frac{\Delta-2e-6}{3}$ for
some $e,f\in \mathbb Z_{\ge 1}$ and its general hyperplane section
$H$ is linearly normal. Then $S$ is projectively normal with
$~p_g=f$ and $-2f-e+2\le K_S^2\le \frac{(2f+e-2)^2}{2\Delta-e}$.
\end{thm}
\begin{proof}
From the linear normality of $H$, we get $h^0(H,{\mathcal O}_H(1))=
r$ and hence
\begin{eqnarray*}
h^1(H,{\mathcal O}_H(1))&=&-\deg{\mathcal O}_H(1)-1+g(H)+h^0(H,{\mathcal O}_H(1))\\
&=&-2\Delta+e-1+g(H)+h^0(H,{\mathcal O}_H(1))\\
&=& g(H)-\Delta=f
\end{eqnarray*}
Therefore we have $h^1(H,{\mathcal O}_H(1))>\frac{\deg (K_H\otimes
\mathcal O_H(-1))}{4}$ since $f>\frac{e}{2}+1$ and $\deg {\mathcal
O}_H(1)=2\Delta-e=2g(H)-2-(2f+e-2)$. Thus ${\mathcal O}_H(1)$
satisfies $\deg {\mathcal O}_H(1)>2g(H)+2-4h^1(H,{\mathcal
O}_H(1))$. The condition $f-1> 6e-\Delta$ implies $\deg {\mathcal
O}_H(1)> 2g-\frac{g-1}{6}-2h^1(H,{\mathcal O}_H(1))$. Also the
condition $f-1<\frac{\Delta-2e-6}{3}$ yields $\deg {\mathcal
O}_H(1)>\frac{3g+3}{2}$. Hence ${\mathcal O}_H(1)$ is normally
generated by Theorem \ref{3.5.6}, and thus its general hyperplane
section $H$ is projectively normal since it is linearly normal.
Therefore $S$ is projectively normal with $q=0$,
$p_g=h^0(S,K_S)=h^1(H,{\mathcal O}_H(1))= f> 1$ since
$h^1(H,{\mathcal O}_H(2))=0$ from $\deg {\mathcal
O}_H(1)>\frac{3g+3}{2}$.
If we consider the adjunction formula $g(H)=\frac{K_S.H+H.H}{2}+1$
then $K_S.H=2f+e-2$. Since $|H+K_S|$ is ample and $p_g>0$, we get
$K_S.(H+K_S)\ge 0$. Therefore we obtain
$$K_S^2\ge H^2-(2g(H)-2)=(2\Delta-e)-(2(\Delta+f)-2)=-2f-e+2$$ by
Proposition 2.0.6 (iii) in \cite{Liv}. Thus $-2f-e+2\le K_S^2\le
\frac{(2f+e-2)^2}{2\Delta-e}$ by the Hodge index theorem
$K_S^2H^2\le (K_S. H)^2$. Hence the theorem is proved.
\end{proof}
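As a quick sanity check of the numerical hypotheses of Theorem \ref{thm5.2} and of the resulting bounds on $K_S^2$, one can evaluate them for sample values of $(e,f,\Delta)$; the particular numbers below are illustrative only.
\begin{verbatim}
def hypotheses(e, f, D):
    return max(e / 2, 6 * e - D) < f - 1 < (D - 2 * e - 6) / 3

def ks2_bounds(e, f, D):
    return -2 * f - e + 2, (2 * f + e - 2) ** 2 / (2 * D - e)

e, f, D = 1, 4, 30
assert hypotheses(e, f, D)
print(ks2_bounds(e, f, D))  # (-7, 49/59), so here -7 <= K_S^2 <= 0
\end{verbatim}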
Assume that $(2f+e-2)^2< 2\Delta-e$ in the above theorem; then we
have $-2f-e+2\le K_S^2\le 0$. Examining the cases $K^2_S=-2f-e+2$
and $K^2_S=0$, we obtain the following result by a method similar
to that in \cite{Ak}.
\begin{Prop}
\label{thm5.3} Let $S$ satisfy the conditions in Theorem
\ref{thm5.2}. Then $S$ is a minimal elliptic surface of Kodaira
dimension 1 if $K_S^2=0$ and $|K_S|$ has no fixed component. Also
$S$ is a surface blown up at $2f+e-2$ points on a $K3$ surface in
case $K_S^2=-2f-e+2$.
\end{Prop}
\begin{proof} Assume $|K_S|$ has no fixed component with
$K_S^2=0$. Then $S$ is minimal by the adjunction formula and the useful
remark III.5 in \cite{B}. Also the Kodaira dimension $\kappa$ of $S$
is at most one since $K_S^2\le 0$. Since $p_g>1$, $S$ is nonruled
and so $\kappa\ge 0$. If $\kappa=0$ then $p_g\le 1$ by Theorem
VIII.2 in \cite{B} and thus $\kappa$ must be 1. Hence by Proposition
IX.2 in \cite{B} there is a smooth curve $B$ and a surjective
morphism $p:S\to B$ whose generic fibre is an elliptic curve which
means that $S$ is a minimal elliptic surface of Kodaira dimension 1.
Suppose now $K_S^2=-2f-e+2$, and let $\phi_{H+K_S}=s\circ r$ be the
Remmert-Stein factorization of $\phi_{H+K_S}$ and $\hat{S}=r(S)$.
Then we can use Proposition 2.0.6 in \cite{Liv} as stated in the
proof of the previous theorem. And we obtain
$$H^2-K_S^2=(2\Delta-e)-(-2f-e+2)=2g(H)-2,$$ which yields that $\hat{S}$ is
a minimal model with $K_{\hat{S}}= 0$; in other words, $\hat{S}$ is a
K3 surface, by Proposition 2.0.6 (iv-1) in \cite{Liv}. Also by
Proposition 2.0.6 (ii) in \cite{Liv}, $S$ is a surface blown up at
$2f+e-2$ points on a $K3$ surface $\hat{S}$ since
$\hat{d}-d=2g(H)-2-(2\Delta-e)=2f+e-2$.
\end{proof}
Clusters are natural laboratories to study the effects of environment on the
evolution of galaxies \citep{dressler80}.
A plethora of evidence shows that the properties of
late-type galaxies depend strongly on environment: besides the well known morphology-density
relation \citep{dressler80,whitmore}, in local clusters ($z\leq$0.03)
spiral galaxies are deficient in neutral hydrogen
\citep{giova85,cayatte90,hector1} and have lower star formation activity than
galaxies of similar type and size in low density environments \citep{lewis02,gomez03,ha06}.
Various physical mechanisms have been proposed to explain the different evolution of
late type spirals in clusters and in
the field. In general, they invoke either dynamical interactions of cluster galaxies with the
hot intracluster medium (ICM, \citealp{GUNG72,LARS80}), or gravitational interactions
with nearby companions \citep{merritt83}, with the potential of the cluster \citep{byrd1990,valluri93},
or with a combination of these two \citep{harrassment}.
Interactions with the ICM are likely to be the dominant process at the
present epoch and can account for the truncation of the gas disks in members of several
local clusters (see \citealp{review} and references therein).
However, ram-pressure cannot produce the strong morphology-density relation \citep{dressler80,whitmore},
nor can it thicken the disk of a spiral galaxy and transform it into an S0
(i.e. \citealp{hinz03,christlein04,n4569}).\\
This apparent contradiction could be solved if the structures form hierarchically;
that is, if galaxy clusters form not by accreting individual galaxies randomly from the field,
but rather by the infall of less massive groups along filaments.
These infalling groups have velocity dispersions that are much smaller than that of the cluster
as a whole, permitting the slow, strong interactions normally
associated with field galaxies \citep{FUJI04,mihos04,dress04car,big}.
Therefore a plausible evolutionary history would take into account that environmental conditions
and the physical properties of galaxies are changed significantly during cosmic time, changing the
influence of various physical mechanisms on the evolution of galaxies \citep{review}.
However, this hypothesis is far from confirmed, since we lack detailed
understanding of the range of environmental effects that act as a function of
the age of the Universe \citep{dress04car}.
Although star-forming galaxies in clusters at intermediate redshift appear more
disturbed \citep{oemler97,couch98}
and have higher star formation activity \citep{butcher1,butcher2,fadda00} than local disk
galaxies, it is still an open question which mechanisms are at play and how they influence
the evolutionary history of cluster galaxies \citep{balogh99,dressler99,treu03}.
To solve this riddle, we need to observe galaxies that physical circumstances
and chance have revealed in rare moments of transformation.
These peculiar systems can be used to probe different environmental effects
and to constrain models of the evolutionary history of galaxies.
Much of our knowledge on the evolution of nearby galaxies in both groups \citep{duc00,IGLV01,sulentic01} and clusters
\citep{n4438,big,GAVB01,kenney95,vollmer01,vollmer04} has in fact come from the study of
such systems. Unfortunately this information
is difficult to extend to high redshift
because both clusters and galaxies have evolved significantly: clusters were less
relaxed \citep{jeltema05} and galaxies had higher gas content. Therefore,
the effects of the same environmental mechanisms could depend strongly on
the age of the Universe.\\
In this paper, we present a multiwavelength analysis of two peculiar galaxies
(hereafter referred to as 235144-260358 and 131124-012040) falling into the centers
of the massive clusters Abell 2667 ($z\approx$0.23) and Abell 1689 ($z\approx$0.18).
Both these systems are associated with extended
trails of bright blue knots and diffuse wisps and filaments of young stars,
features observed so far only in one other galaxy at similar redshift \citep{owen06}.
These two objects have been serendipitously detected by looking at the
WFPC2 and ACS images of massive clusters at z$\approx$0.2. The sample of clusters
consist of the 10 clusters discussed in \cite{smith_clust05} plus A2667, A2390 and
A1689. We therefore found 2 galaxies with extended trails within 13 studied
clusters all located at 0.175$<z<$0.25, suggesting that we are observing a very short
snapshot of a critical stage in the evolution of these cluster galaxies.\\
Because these two systems have
significantly different optical luminosities ($\approx\rm L^{*}$ and $\approx$0.1$\rm L^{*}$)
but are at similar distances from the cores of clusters of similar mass,
they represent an interesting case for a comparison of the effects of similar
environments on different-sized galaxies.\\
Throughout this paper we assume a cosmology
with ${\Omega}_m$ = 0.3, ${\Omega}_{\lambda}$ = 0.7 and $H_0$ =
70 km/s Mpc$^{-1}$, implying a distance modulus of 39.74 (40.33) mag and a linear
scale of 3.16 (3.71) kpc/arcsec for A1689 (A2667).
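As a cross-check of these numbers, the adopted cosmology can be evaluated with a few lines of Python; the following sketch (not part of the original analysis) uses the astropy package and gives values close to those quoted above, with small differences possibly reflecting the exact cosmology code used:
\begin{verbatim}
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
for name, z in [("A1689", 0.18), ("A2667", 0.23)]:
    mu = cosmo.distmod(z).value                        # distance modulus [mag]
    scale = cosmo.kpc_proper_per_arcmin(z).value / 60  # [kpc/arcsec]
    print(f"{name}: mu = {mu:.2f} mag, scale = {scale:.2f} kpc/arcsec")
\end{verbatim}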
\section{The Data}
\subsection{Optical Photometry}
The optical photometric data for this paper are extracted from
deep $HST$ images of Abell 2667 and Abell 1689.
Abell 2667 was observed in October 2001, using the Wide Field and Planetary
Camera 2 (WFPC2) for total exposures of 1200 seconds through the F450W filter,
and 4000 seconds in F606W and F814W (see Fig. \ref{whole2667} and Fig.\ref{colimage}) \citep{covone05}.
The 3 $\sigma$ detection limit for point sources is $\approx$ 26.00, 26.00 and 25.00 mag in
the F450W, F606W, and F814W bands, respectively.
Deep observations of Abell 1689 were obtained from the ACS Guaranteed Time
observations in 2002 June (see Fig. \ref{whole1689} and Fig.\ref{colimage}).
A total of 20 orbits ($\approx$13.2 h) were taken in the three passbands F475W, F625W and
F850LP \citep{broad05}.
The 3 $\sigma$ detection limit for point sources is $\approx$ 29.70, 29.20 and 28.30 mag in the filters
F475W, F625W, F850LP respectively.\\
We used SExtractor \citep{sex} to detect and analyze the sources.
For source detection, we used an image averaging the three band-passes; magnitudes
were then determined from aperture photometry on the separate
images for each filter. All magnitudes are in the VEGAMAG systems.
No correction for Galactic extinction was performed ($A_{V}\leq$0.07 mag).
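The detection and aperture-photometry steps can be illustrated with sep, the Python port of the SExtractor core algorithms; this is only a hedged sketch of the procedure described above (the actual analysis used SExtractor itself), and the file name, detection threshold and aperture radius below are placeholders:
\begin{verbatim}
import numpy as np
import sep
from astropy.io import fits

data = fits.getdata("detection_image.fits").astype(np.float64)  # placeholder
bkg = sep.Background(data)               # spatially varying background model
data_sub = data - bkg.back()
# Detect sources at 1.5 sigma above the global background RMS
objects = sep.extract(data_sub, 1.5, err=bkg.globalrms)
# Circular-aperture photometry on the background-subtracted image
flux, fluxerr, flag = sep.sum_circle(data_sub, objects["x"], objects["y"],
                                     5.0, err=bkg.globalrms)
\end{verbatim}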
Surface brightness profiles for these galaxies were computed using
the task {\it ellipse} within IRAF.
The ellipticity and position angle were determined using the I band images
following the procedure of \cite{gav00}.
The disturbed morphology of 235144-260358 in A2667 could possibly affect the shape of its surface brightness
profiles at large radii (in particular at shorter wavelengths), but not in the central regions, where both objects
still present a reasonably symmetrical shape. This concern does not apply to the edge-on spiral 131124-012040, which does
not show strong asymmetries within the optical radius.
\subsection{Near Infrared Photometry}
Near-infrared H band observations for Abell 2667 and Abell 1689 were
obtained with ISAAC on the Very Large Telescope (VLT) in the
spring of 2003 (ESO Programs 071.A-0428, P.I. Kneib,
and 067.A-0095, P.I. Knudsen), under photometric sky conditions with a mean seeing of
$\approx$0.41 arcsec and $\approx$0.58 arcsec respectively.
The total exposure time of 6529s for each cluster corresponds
to a 5$\sigma$ detection limit
for point sources of $\approx$24.6 mag.
All these observations have been reduced as described
in \cite{richard06}.
\subsection{Mid Infrared Photometry}
Spitzer imaging observations of Abell 2667 and Abell 1689 were obtained as part
of the GTO Lensing Cluster Survey (program 83, PI G.~Rieke).
MIPS \citep{mips} 24$\rm \mu m$ images were obtained in photometry mode, with a total
exposure time of $\approx$2700s. The data were processed and
mosaicked following the procedures described in \cite{egami06}.
Point source extraction and photometry were performed using DAOPHOT \citep{daophot} as described
in \cite{papovich04}. A PSF was constructed from the best-measured 30 point sources in the field; the Tiny Tim model of the 24$\rm \mu$m
PSF \citep{krist} was used to compute the aperture
correction. 131124-012040 in Abell 1689 is not detected in MIPS images. We derived a 3 $\sigma$ limit using
a photometry aperture radius of 6 arcsec, a sky annulus between 6 and 13 arcsec and an aperture correction of 1.698.\\
IRAC \citep{irac} four-bands (3.6, 4.5, 5.8 and 8.0$\rm \mu m$) imaging was also obtained, with a total exposure time of
2400s per band for each cluster. Basic calibrated data were combined using
a custom IDL mosaicking routine.
For A2667, photometry was performed within apertures of radius $\approx$ 8.3 arcsec;
no aperture corrections were applied, since they are negligible for such a large extraction aperture.
In A1689, by contrast, photometry was performed within a smaller aperture to
avoid light contamination from nearby sources. We adopted a radius $\approx$ 2.4 arcsec, a sky annulus between
2.4 and 7.2 arcsec and aperture corrections of 1.213, 1.234, 1.379 and 1.584 at 3.6, 4.5, 5.8 and 8.0$\rm \mu m$
respectively.
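For reference, applying these aperture corrections amounts to a simple multiplication of the aperture fluxes; in the sketch below the correction factors are those quoted above for A1689, while the aperture fluxes are purely illustrative placeholders:
\begin{verbatim}
apcorr = {3.6: 1.213, 4.5: 1.234, 5.8: 1.379, 8.0: 1.584}  # A1689, r = 2.4"
flux_aper = {3.6: 120.0, 4.5: 95.0, 5.8: 80.0, 8.0: 150.0}  # uJy, placeholders
flux_total = {band: f * apcorr[band] for band, f in flux_aper.items()}
\end{verbatim}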
\subsection{Radio continuum observations}
We obtained a 20\,cm radio continuum measurement of ${\approx}$1.4\,mJy for the
galaxy in A2667 from the 1.4\,GHz NVSS continuum survey. As this survey offers rather poor spatial resolution ($\approx$45
arcsec), we also constructed a 20\,cm map using higher
resolution data from the NRAO VLA data-archive.
Two different observations are available on A\,2667 at this frequency:
1) it was observed in October 2001 for 3590 seconds in
correlator mode 4, with a bandwidth of 25 MHz, and using
the CD-configuration (due to the low declination); and 2)
a 3620 second observation was obtained in September 2002
with the same correlator mode and bandwidth, but in the BC
configuration. We applied
standard VLA calibration and imaging procedures with AIPS.
We then combined the two data sets in the UV-plane using DBCON.
Images were generated with the task IMAGR and a weighting
option {\it robust}=0, producing a map
intermediate between natural and uniform weighting.
After CLEANing, the resulting continuum map has
a beam size of 16.7 $\times$ 13.1 arcsec and an average rms noise of
0.12 mJy/beam.
This map is shown in Fig.\ref{radio}, superposed on the HST
image. \\
The cluster A\,1689 was observed with the VLA for a total of 17405
seconds in the A-configuration in November 2000 and March 2002.
Combining the data sets using natural weighting (to improve
the sensitivity) we produced a
map with a beam size of 2.1 $\times$ 1.6 arcsec, and an
average rms of 0.15 mJy/beam.
No emission is detected at the position of the infalling galaxy.
Taking a conservative detection threshold
of 6$\sigma$ we estimate an upper limit of 0.90\,mJy for the 20\,cm
radio continuum flux.
\subsection{Optical spectroscopy}
We obtained optical spectroscopy for 131124-012040 in Abell 1689 as part of a wide
field ($\approx 30\arcmin\times30\arcmin$) spectroscopic survey of the whole cluster (\citealp{Czoske2004}, Czoske
et al.\ 2007, in preparation) using VIMOS on the VLT (Program 71.A-3065, P.I. Czoske).
The LR-Blue grism was used, which provided a resolution
of $R\approx 200$ over a wavelength range from 3750\,\AA\ to 6750\,\AA.
The dispersion was 5.35\,\AA\ per pixel.
We obtained three exposures of 840~seconds, for a total of
42 minutes.
The data were reduced using VIPGI \citep{Scodeggio2005} on site at
IASF in Milano. The reduction involved bias subtraction, identification of
the spectrum on the two-dimensional spectral image, interactive wavelength
calibration from observations of arc spectra and optimal extraction using
the method of \citet{Horne1986}. The spectrum has been flux-calibrated
from observations of a spectrophotometric standard star, Feige~110.\\
In addition we observed A2667 and A1689 in June 2006 with the LRIS instrument \citep{Oke95} on Keck I.
A 5600 \AA\ dichroic separated the red channel of the instrument, equipped with a 400 l\ mm$^{-1}$
grating blazed at 8500 \AA, from the blue channel, equipped with a 600 l\ mm$^{-1}$ grism blazed at 4000 \AA .
This setting covers the wavelength range $3300-9200$ \AA\ with a dispersion of 0.6/1.8 \AA\ and
a resolution of 4.3/6.3 \AA\ in the blue/red channel, respectively.
On June 29, a 175\arcsec-long and 1\arcsec-wide slit was aligned on the center
of 235144-260358 in A2667 including two of its knots (K1 and K2, see Fig.\ref{specknots}), using a position angle of 92.7 East of North.
Two exposures of 900 s were obtained under $\approx 1.5$\arcsec seeing.
On June 30, a 30-slits mask was used on A1689 in order to target multiple-imaged candidates at the cluster center.
One blue knot associated with the disrupted galaxy was included in a 9\arcsec-long and 1\arcsec-wide slit from this mask (see Fig.\ref{specknots}).
Four exposures of 1800 sec each have been obtained with an average seeing of 1.0\arcsec.
All these spectroscopic data were reduced using standard IRAF procedures for flat-fielding, wavelength and
flux calibrations, sky subtraction and extraction of the spectra.\\
We measured the emission and absorption lines by visual inspection of the spectra,
using the task SPLOT in IRAF.
For the spectrum of 235144-260358 in Abell 2667 we de-blended the underlying absorption from the
H$\beta$ emission lines as discussed in \cite{gavspectra}. We evaluated the Balmer decrement from the
ratio H$\beta$/H$\alpha $ (assuming $T$=10 000 K and $n$=100 $\rm e/cm^3$, \citealp{osterb89}) and derived the corrected
line fluxes, relative to H$\beta$, using the de-reddening law of \cite{lequex}.
The observed H$\alpha$/H$\beta$ ratio for 235144-260358 is $\approx$3.65 implying a gas attenuation $A(H\alpha)\approx$0.56 mag and
a stellar continuum attenuation $A(V)\approx$0.31 mag (assuming the Galactic extinction curve, \citealp{pei92}).
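The attenuation quoted above can be reproduced, to within the choice of extinction-curve coefficients, from the Balmer decrement alone; in this sketch the intrinsic ratio is the case B value and $k(H\alpha)\approx2.53$, $k(H\beta)\approx3.61$ are approximate Galactic-curve values, so the result ($\approx$0.6 mag) differs slightly from the 0.56 mag obtained with the exact curve adopted here:
\begin{verbatim}
import numpy as np

R_obs, R_int = 3.65, 2.86    # observed and intrinsic Halpha/Hbeta
k_Ha, k_Hb = 2.53, 3.61      # approximate Galactic-curve coefficients
EBV = 2.5 / (k_Hb - k_Ha) * np.log10(R_obs / R_int)
A_Ha = k_Ha * EBV
print(f"E(B-V) = {EBV:.2f} mag, A(Halpha) = {A_Ha:.2f} mag")
\end{verbatim}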
\subsection{X-ray Imaging}
We downloaded the 9.2 ks Chandra
ACIS observation of A2667 from the public
archive and reduced it following the standard "threads" from CIAO data analysis software (Version 3.3)\footnote{http://cxc.harvard.edu/ciao/index.html}. We searched the
exposure--corrected images for sources, using the task wavdetect with
angular wavelet scales from \cite{brandt01} and a
significance threshold of $1\times 10^{-7}$. No source is detected at the
position of 235144-260358. We measure an upper limit to the 2--8 keV flux of
$1.2\times10^{-14}~\rm erg~s^{-1}~cm^{-2}$.
\section{Results}
\subsection{Abell 2667}
Fig. \ref{colimage} (upper panel) shows an RGB image of 235144-260358 in Abell 2667,
and its global properties are summarized in Table \ref{tabgal}.
Its optical redshift is $z\approx0.2265$, lying in the low velocity tail of the
velocity distribution of Abell 2667 (i.e. $\approx$830 $\rm km~s^{-1}$ lower than the mean cluster velocity; \citealp{covone05}).
This face-on galaxy lies at a projected distance of $\approx$ 0.34 $h_{70}^{-1}$ Mpc from
the cluster center (assumed to coincide with the position of the central cD galaxy of A2667, see
Fig. \ref{whole2667}). This system is one of the brightest galaxies
in the cluster \citep{covone05}, with both optical and
near infrared ($M_{F450W}\approx$-21.50, $M_{H}\approx$-24.50) absolute
magnitudes close to L$^{*}$ and a gas metallicity\footnote{The gas metallicity has been computed from the average
of five different empirical determinations based on:
$R_{23}$ \citep{zaritsky94,mcg91}, $\rm [NII]\lambda6583/[OII]\lambda3727$
\citep{kewley02}, $\rm [NII]\lambda6583/H\alpha$ \citep{vanzee98} and
$\rm [OIII]\lambda5007/ [NII]\lambda6583$ \citep{dutil99}} of $12+log(O/H)\approx$9.0$\pm$0.1 (i.e. $\approx$1.4 solar metallicity).
From the HST images, 235144-260358 appears to be a late-type galaxy (see Fig.\ref{colimage}), as
confirmed by its structural parameters (see Table \ref{tabgal}).
However this object is definitely not normal: it shows a disturbed morphology,
with clear indications of stripping within its optical disk and a prominent one-armed
spiral component as is typically observed in gravitationally perturbed
systems \citep{vollmer03}.
Moreover, there is a significant nuclear enhancement in the optical surface brightness profiles ($\approx$ 2 mag within
the central kpc: see Fig.\ref{colprofiles}), suggesting that
it is experiencing a nuclear burst of star formation.
This spike is particularly evident in the F450W band, where the central regions cannot be fitted with a simple de Vaucouleurs profile. \\
Spitzer observations of A2667 expand on the unusual properties of 235144-260358, since
it is detected by both IRAC (3.6-8$\rm \mu m$, see also Fig.\ref{8micron}) and MIPS (24 $\rm \mu m$).
At $z\approx$0.23, the 8 $\rm \mu$m emission is dominated by a combination of
the PAH bump ($\approx6.2\rm \mu$m) and very small grains
continuum \citep{DBP90}, while the old stellar population dominates at shorter wavelengths.
The observed 8$\rm \mu m$/5.8$\rm \mu m$ flux ratio $\approx$ 6.3
(corresponding to the rest frame flux ratio 6.3$\rm \mu m$/4.5 $\rm \mu m$) is consistent with
the value observed in star forming galaxies \citep{dale05}, suggesting that the infrared emission
is due to recent star formation activity.\\
We used the X-ray data for a second test of whether the infrared emission
is due to a burst of star formation or to an active nucleus (AGN).
Comparing the X-ray upper limit to the 24 $\rm \mu m$ flux density (see Table \ref{tabgal})
with the help of figure~1 of \cite{alonso04} confirms
that this source is not AGN--dominated: its
2--10 keV/24$\rm \mu m$ flux ratio is at least 4 times too
low to lie within the range of typical AGN \footnote{Neither
of these tests can exclude the presence of a
Compton thick AGN \citep{shi05}, but it is likely that the mid-infrared
output of such objects is dominated by star formation.}. Finally a significant
contribution from an AGN is ruled out by the emission line ratios
obtained from the optical spectrum: $\log([OIII]/H\beta)\approx$-0.45, $\log([NII]/H\alpha)\approx-0.33$
consistent with the values typically observed in star forming galaxies \citep{kewley01}.
We therefore used the 8 and 24$\rm \mu$m data to derive the total infrared luminosity, $L(IR)$,
using the IR spectral energy distribution (SED)
from the \cite{dale02} and \cite{chary01} library following the procedure described in \cite{marcillac06}.
This method relies on the correlations between L(IR) vs. the luminosity at 7 $\mu m$ and $L(IR)$ vs. $L(24 \mu m)$
shown in \cite{chary01}. The SED templates were only used to interpolate at 8 and 24 $\mu m$.
The resulting total infrared luminosity of 235144-260358, $L(IR)\approx3(\pm0.25)\times10^{11}~\rm L_{\odot}$,
implies a current star formation rate $SFR\approx$ 53($\pm$4.3) $\rm M_{\odot}~yr^{-1}$ (using the relation of \citealp{kenn98}),
consistent with a $SFR\approx$ 57 $\rm M_{\odot}~yr^{-1}$ obtained from VLA continuum observations and the relation of
\cite{condon92}. This galaxy is a rare example of a luminous infrared galaxy (LIRG) in a dense cluster.\\
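The quoted star formation rate can be checked directly against the \cite{kenn98} calibration, $SFR~[\rm M_{\odot}~yr^{-1}] = 4.5\times10^{-44}~L(IR)~[\rm erg~s^{-1}]$; a minimal numerical sketch:
\begin{verbatim}
L_sun = 3.826e33            # erg/s
L_IR = 3e11 * L_sun         # total infrared luminosity from the SED fit
SFR = 4.5e-44 * L_IR
print(f"SFR = {SFR:.0f} Msun/yr")   # ~52, consistent with 53 +/- 4.3
\end{verbatim}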
All the properties of 235144-260358 point to
the peculiarity of its recent evolutionary history.
However, the extended trails of bright blue knots, tracing its trajectory as
it falls into the cluster core, make it truly extraordinary.
A dozen such knots extend from the galaxy
optical disk to a projected distance of $\approx$ 80 $h_{70}^{-1}$ kpc.
Extended blue low surface
brightness wisps and filaments lie along the same
trail, supporting the hypothesis that all of these structures result
from stripping (see Fig.\ref{colimage}).
The knots have absolute F450W magnitudes in the range -16.80$<M_{F450W}<$-14.80 mag,
typical of dwarf galaxies \citep{sandage85} and super star clusters \citep{larsen99}, and are barely resolved in the HST image
implying an effective radius $r_{e}\leq$0.45 $h_{70}^{-1}$ kpc.\\
The radio contours shown in Fig.\ref{radio} appear elongated in the
direction of the trail. A similar morphology seems also
to be present in the Spitzer 8$\rm \mu m$ map shown in Fig.\ref{8micron},
which has the appearance of a head on the galaxy, with a tail tracing
the current star formation associated with the blue knots.
Moreover [OII] emission, not associated with any of the blue knots, extends from the galaxy for
a total length of at least $\approx$50 kpc (see Fig. \ref{specknots}), suggesting the presence of diffuse ionized gas along the trails as already
observed in nearby ram pressure stripped galaxies \citep{GAVB01,yoshida04,big}. \\
To constrain the ages of the blue knots, we compute the time evolution of
the $F450W-F606W$ and $F606W-F814W$ colors, using Starburst99 \citep{starburst}.
We assume a Salpeter IMF,
solar metallicity\footnote{We also tested a Kroupa IMF and stellar metallicities in the range 0.004$<Z<$0.02,
but the evolutionary paths do not significantly vary from the ones shown in Fig.\ref{ccdiagram}} and two different star formation histories: an instantaneous burst and continuous star formation.
For each star formation history we also compute a model including the contribution
of strong emission lines.
We redshifted the synthetic Starburst99 spectra to the cluster distance
and used the synthetic photometry package SYNPHOT in IRAF to compute the
model colors in the WFPC2 passbands.
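The synthetic-photometry step can be sketched in Python as follows; this is only an illustration of the procedure (the actual analysis used SYNPHOT in IRAF), the filter curve and model spectrum are placeholders, and AB magnitudes are used instead of VEGAMAG for simplicity:
\begin{verbatim}
import numpy as np

def ab_mag(wave_aa, flam, filt_wave, filt_thru):
    """Photon-weighted AB magnitude of an f_lambda spectrum in a filter."""
    T = np.interp(wave_aa, filt_wave, filt_thru, left=0.0, right=0.0)
    num = np.trapz(flam * T * wave_aa, wave_aa)
    c_aa = 2.998e18                      # speed of light [Angstrom/s]
    den = np.trapz(3.631e-20 * c_aa / wave_aa * T, wave_aa)  # AB reference
    return -2.5 * np.log10(num / den)

def redshift(wave_rest, flam_rest, z):
    """Shift a rest-frame spectrum to redshift z (spectral shape only)."""
    return wave_rest * (1 + z), flam_rest / (1 + z)

wave = np.linspace(1000, 10000, 5000)    # placeholder model spectrum
flam = wave**-2.0
fw, ft = np.linspace(3900, 5100, 100), np.ones(100)  # top-hat 'F450W-like'
w_z, f_z = redshift(wave, flam, 0.23)
print(ab_mag(w_z, f_z, fw, ft))
\end{verbatim}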
In Fig. \ref{ccdiagram}, we plot a color-color diagram for the bright knots
(black circles) to compare with the theoretical evolutionary tracks (solid and
dashed lines). We present only the colors for the brightest knots, i.e. those detected in all three HST bands
(see Fig.\ref{colimage}).
The arrow shows the effect of attenuation by dust on the observed colors, assuming a Galactic attenuation
curve \citep{schlegel98}.
The models with emission lines (dashed lines) appear to fit the observed colors better than those
without (solid lines).
Most of the blue knots lie slightly below the modeled tracks for
both an instantaneous burst and continuous star formation but are reasonably consistent
with an age of the episode in the range $5<t<15$ Myr in the first case and $10<t<1000$ Myr in the second.
These values are probably upper limits since it is very likely that the star forming knots contain dust,
as observed in extragalactic HII regions \citep{gerhardHII,corteseHII} and star
forming dwarf galaxies \citep{bosellised,COdust05}.\\
Our spectra detect $\rm [OII]$ in emission in both knots (K1 and K2) included in the slit (see Fig. \ref{specknots}), confirming
that these systems are still forming stars.
For K1 we also detected
[OIII] and H$\alpha$ in emission ($z\approx0.227$).
The H$\alpha$ flux is $f\approx1.6\times10^{-17}\rm~erg~cm^{-2}~s^{-1}$ corresponding to a
$SFR\approx$0.02 $\rm M_{\odot}~yr^{-1}$ (not corrected for extinction).
No continuum is detected above a flux limit $f\approx3\times10^{-19}~\rm erg~cm^{-2}~s^{-1}~\AA^{-1}$, implying an H$\alpha$
equivalent width $EW(H\alpha)\geq50~\rm\AA$.
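For reference, the quoted limit follows directly from the ratio of the line flux to the continuum limit:
\[
EW(H\alpha) = \frac{f_{H\alpha}}{f_{cont}} \geq \frac{1.6\times10^{-17}}{3\times10^{-19}} \approx 53~\rm\AA.
\]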
The lower limit on $EW(H\alpha)$ corresponds to an age of the knots $t\leq$5 Myr for an instantaneous burst \citep{starburst}.
This value is significantly shorter than the one obtained from the optical colors (see Fig. \ref{ccdiagram}), implying a significantly
larger amount of dust in the knots ($A_{V}\approx$0.5-1 mag) than observed in their parent galaxy ($A_{V}\approx$0.31 mag).
In comparison, for a continuous star formation history the value of $EW(H\alpha)$ corresponds to an age
$t\leq$1000 Myr, consistent with the estimate obtained from the optical colors, making this scenario much more likely.\\
\subsection{Abell 1689}
The disrupted galaxy in Abell 1689 is illustrated in Fig. \ref{colimage} (lower panel) and
its main properties are listed in Table \ref{tabgal}.
This galaxy lies at a projected distance $\approx$ 0.24 $h_{70}^{-1}$ Mpc from the
cluster center (see Fig. \ref{whole1689}) and is $\approx$ 2.5 mag fainter than the perturbed galaxy in Abell 2667
(i.e. with a luminosity of $\approx$ 0.1 L$^{*}$, \citealp{wilson97}).
Its redshift is $z\approx$0.1866 confirming that it belongs to A1689.
Contrary to 235144-260358, the surface brightness profile of this galaxy
follows a typical exponential profile (see Fig.\ref{colprofiles}). However, the slopes of
its color profiles are anomalous: in both $F450W-F625W$ and $F625W-F814W$ there is
an inversion of the color gradients, with bluer colors toward the center.
The galaxy outskirts have a $F450W-F814W$ color $\approx1.7$ mag, $\approx$0.6 mag redder than the galaxy center and
consistent with the typical color of red sequence galaxies in the local Universe \citep{bernar03}.
Similar features have been observed in spiral galaxies in the Virgo cluster and suggest recent ($t\leq$300 Myr) gas stripping
by ram pressure \citep{n4569}.
131124-012040 is neither detected at 24 $\mu$m by Spitzer nor in VLA continuum images (see Table \ref{tabgal}).
This is consistent with the optical spectrum of this galaxy (see Fig.\ref{spectrum}),
which shows strong Balmer lines in absorption
($EW(H\delta)\approx$6 $\rm \AA$, $D(4000)\approx$1.21) and very little residual
star formation ($EW([OII])\approx$ 1.8 $\rm \AA$).
This overall behavior suggests that the galaxy center has
recently ($t\leq$100 Myr, i.e. \citealp{poggia97,shioya02, kauff03}) stopped forming stars.
These spectral features are consistent with both a simple truncated and a post-starburst
SFH \citep{shioya02,pracy05,crowl06}, however the inverted color gradients and the absence of a
central enhancement in the surface brightness profile favor a ram pressure scenario \citep{bekki05ea,n4569}.\\
A $\approx$ 30 $h_{70}^{-1}$ kpc long trail, formed of at least six blue knots and
a number of wisps and filaments, is associated with this
system. The bright knots have absolute F475W magnitudes in the range -13.5$<M_{F475W}<$-11.5 mag,
lying between dwarf galaxies and
stellar clusters ($\approx$ 3 mag fainter than the knots observed in Abell 2667\footnote{ACS observations of A1689 are $\approx$3 mag deeper than the WFPC2 imaging of A2667. We cannot therefore exclude that knots as faint as those in A1689 are also present in A2667.}).
The knots nearest to the galaxy are clearly resolved in the HST images and have
a typical size $r_{e}(F475W)\approx$0.8-0.9 kpc. In comparison, the most distant knots are not
resolved implying a physical size $r_{e}(F475W)\leq$0.35 kpc.
To determine the ages of the blue knots we computed the time evolution of
the $F475W-F625W$ and $F625W-F850LP$ colors, as described in the previous section.
The results of our analysis are presented in Fig. \ref{ccdiagram}.
Most of the knots lie within the modeled tracks for
an instantaneous burst with an age in the range 5$<t<$100 Myr, and are slightly above the model for
continuous star formation with an age in the range 10$<t<$1000 Myr.
As for 235144-260358 no correlation is observed between the optical colors of the knots and their
distance from the infalling galaxy (see Fig.\ref{coldistance}).
The optical spectrum obtained for the most distant knot (Knot A in Fig.\ref{specknots}), reveals the presence of strong
[OII] in emission ($f\approx5.4\times10^{-17}~\rm erg~cm^{-2}~s^{-1}$),
while no continuum is detected at a limit $\approx5\times10^{-19}~\rm erg~cm^{-2}~s^{-1}~\AA^{-1}$,
implying an $EW[OII]\geq108$ \AA\ and showing that star formation
is still taking place in this system.
H$\alpha$ emission is also detected, but it lies on a bright sky line and is affected by fringing,
making it impossible to use the H$\alpha$ equivalent width to obtain an independent estimate of the age of the burst.
It is interesting to note that the time scale necessary to invert the color gradients ($t\leq$300 Myr) appears to be slightly
longer than the age of the trails ($t<$100 Myr), suggesting that the two features could be signatures of different physical mechanisms.
\section{Environmental effects on the evolution of the infalling galaxies}
The peculiar properties of the two galaxies falling into A2667 and A1689 suggest
that both galaxies are undergoing strong transformations due to
their interaction with the harsh cluster environment.
However while these objects are at similar distances from the cluster centers and
show similar extended trails of star-forming
knots, their recent star formation histories are
different. 235144-260358 is experiencing a strong burst of star formation, appearing as a rare
example of a luminous infrared cluster galaxy.
In comparison, 131124-012040 has recently ($t\leq$ 100 Myr) ceased
its star formation activity.
To probe this difference, we investigate the
effects of different environmental mechanisms on the properties and star formation history of these two galaxies.\\
The high velocity dispersion of the two clusters ($\sigma_{1D}\geq$1000 $\rm km~s^{-1}$; \citealp{covone05,a1689dinam}) makes
a low velocity interaction or a merger with another cluster galaxy very unlikely.
This would not be the case if the two galaxies belonged to smaller, kinematically distinct, dynamical units (e.g. infalling groups).
However, no observational evidence supports this possibility.
Therefore we will only consider high velocity galaxy-galaxy and galaxy-cluster gravitational interactions
and ram pressure stripping by the hot intracluster medium (ICM) as possible mechanisms
to explain the peculiarities of these two galaxies.\\
In order to reduce the number of free parameters in our model we assume that
the two galaxies are falling on linear orbits into the cluster core.
This very simple scenario, supported by the fact that infalling galaxies
usually have highly eccentric radial orbits \citep{review}, allows us to express the cluster-centric distance ($r$)
as a function of the galaxy infalling velocity:
\begin{equation}
r = \frac{r_{proj}}{\sin\left(\arccos\left(\frac{V_{ls}}{V_{infall}}\right)\right)}
\end{equation}
where $V_{ls}$ is the (measured) velocity component along the line of sight and
$r_{proj}$ is the cluster-centric distance projected on the plane of the sky.
Similarly, assuming that the trails of blue knots trace the galaxy's trajectory \citep{moore1999}, their physical length is:
\begin{equation}
L_{trail} = \frac{L_{proj}}{\sin\left(\arccos\left(\frac{V_{ls}}{V_{infall}}\right)\right)}
\end{equation}
where $L_{proj}$ is their projected length and their age is:
\begin{equation}
t_{trails} = \frac{L_{trail}}{V_{infall}}
\end{equation}
This value of $t_{trails}$ is based on the assumption
that the trails are at rest with respect to the cluster, and must be considered as a lower limit for the real
age of these features.\\
Both clusters have a 1D velocity dispersion
$\sigma_{1D}\geq1000\rm km~{s}^{-1}$ \citep{covone05,a1689dinam}, implying a 3D
infalling velocity $V_{infall}\approx\sqrt{3}\sigma_{1D}$.
In the following we therefore assume a 3D
infalling velocity in the range 1000$<V_{infall}<$1730 $\rm km~{s}^{-1}$ (i.e. between $\sigma_{1D}$ and $\sqrt{3}\sigma_{1D}$), which can be considered as lower and upper limits of the real value.
The values so derived for the cluster-centric distance, the length and the age of the trails for the upper and lower
limit of $V_{infall}$ are summarized in Table \ref{model_assumption}.
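The content of Table \ref{model_assumption} can be reproduced by evaluating the deprojection relations above directly; in the sketch below the projected quantities and $V_{ls}$ are illustrative placeholders standing in for the measured values of 235144-260358:
\begin{verbatim}
import numpy as np

def deproject(r_proj_kpc, L_proj_kpc, V_ls, V_infall):
    s = np.sin(np.arccos(V_ls / V_infall))
    r = r_proj_kpc / s            # 3D cluster-centric distance [kpc]
    L = L_proj_kpc / s            # physical trail length [kpc]
    t = L * 3.086e16 / V_infall / 3.156e13   # trail age [Myr]
    return r, L, t

# Placeholders: r_proj ~ 340 kpc, L_proj ~ 80 kpc, V_ls ~ 830 km/s (A2667)
for V_infall in (1000.0, 1730.0):            # lower/upper limits [km/s]
    print(deproject(340.0, 80.0, 830.0, V_infall))
\end{verbatim}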
\subsection{Gravitational interactions}
We can approximate the strengths of high velocity galaxy-galaxy and galaxy-cluster interactions
by using the impulse approximation \citep{byrd96}.
The transverse and radial tidal accelerations
experienced by the infalling galaxy are:
\begin{equation}
a_{tr} = GM_{pert}\frac{R}{[R^{2}+(r+R)^{2}]^{1.5}}
\end{equation}
\begin{equation}
a_{rad} = GM_{pert}\big[\frac{1}{r^{2}} - \frac{1}{(r+R)^{2}}\big]
\end{equation}
where $M_{pert}$ is the mass of the perturber within $r$,
$R$ is the radius of the perturbed galaxy (assumed to be $\approx$5 effective radii \citep{gav00}) and $r$ is
its distance from the perturber.
The radial tidal field tends to accelerate the edge of a galaxy. If it is
more intense than the internal galaxy acceleration, given by
\begin{equation}
a_{gal} = \frac{GM_{dyn}}{R^{2}}
\end{equation}
where $M_{dyn}$ is the dynamical galaxy mass, it is able to strip material from the infalling galaxy.
Following \cite{phenomen}, we use the H-band rest frame luminosity of the two
galaxies to estimate their dynamical masses within the optical radius and to derive their disk rotational velocities, obtaining $M_{dyn}\approx10^{11.6}~\rm M_{\odot}$ and $M_{dyn}\approx10^{10.6}~\rm M_{\odot}$ for 235144-260358 and 131124-012040 respectively.
In the case of non-interpenetrating galaxy-galaxy interactions, the impact parameter is at least equal to
the galactic radius (i.e. $r\geq R$), implying that material is stripped from an infalling galaxy only if
\begin{equation}
\label{highvel}
M_{pert}\geq1.33\times M_{dyn}
\end{equation}
(i.e. $\approx10^{11.7} \rm M_{\odot}$ and $\approx10^{10.7} \rm M_{\odot}$ for 235144-260358 and 131124-012040 respectively).
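The numerical factor follows from evaluating the radial tidal acceleration at the minimum impact parameter $r=R$ and requiring it to exceed the internal acceleration:
\[
a_{rad}\Big|_{r=R} = GM_{pert}\Big[\frac{1}{R^{2}}-\frac{1}{4R^{2}}\Big] = \frac{3}{4}\frac{GM_{pert}}{R^{2}} \geq \frac{GM_{dyn}}{R^{2}} ~~\Rightarrow~~ M_{pert}\geq\frac{4}{3}M_{dyn}.
\]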
The possible perturber should not lie at a distance larger than the typical size of the trails.
In Abell 2667, the brightest objects within a projected distance of 100 kpc from 235144-260358 have an H band magnitude
$\approx$ -24.5 mag (i.e. $M_{dyn}\approx10^{11.6}~\rm M_{\odot}$), fairly consistent with the lower limit required
for effective stripping. Unfortunately their recessional velocities are unknown, making a more detailed analysis of their
possible interaction with 235144-260358 impossible.
In Abell 1689, the giant face-on barred spiral projected at $\approx$20 kpc NE from 131124-012040 (see Fig. \ref{whole1689}) is the only object (within 100 kpc)
satisfying Equation \ref{highvel}.
However, the trail of blue knots points in the opposite direction to the one expected in the case of an interaction between the two objects (i.e. towards the perturber), and the galaxy has a redshift of $z\approx$0.1924, i.e. 1680 km/s higher than the recessional velocity of 131124-012040, making an interaction between the two objects unlikely.\\
To quantify the effect of tidal forces from the cluster potential well on an infalling galaxy,
we assume a NFW profile \citep{NFW} for the cluster mass distribution:
\begin{equation}
M (<r) = M_{0} \left[\ln\left(1+\frac{r}{r_{s}}\right) - \frac{r/r_{s}}{1+r/r_{s}}\right] ~~~{\rm for}~r\leq r_{s}c
\end{equation}
where
\begin{equation}
M_{0} = 4\pi \frac{3H_{0}^{2}}{8\pi G}\big[\Omega_{M}(1+z)^{3}+\Omega_{\Lambda}\big] \frac{200 c^{3} r_{s}^{3}}{3 [\ln(1+c)-c/(1+c)]}
\end{equation}
and $r_{s}$ and $c$ are the scale radius and concentration parameter of the mass distribution. The values
adopted for the two clusters are summarized in Table \ref{cluster}.
As shown in Fig.\ref{accrad}, for both our galaxies the radial acceleration from the cluster potential is higher than the
internal acceleration for a cluster-centric distance smaller than $\approx$ 0.45 $h_{70}^{-1}$ Mpc.
Therefore, depending on their real infalling velocity, the two objects are at the edge of, or have just entered, the region
where material can be efficiently stripped by gravitational interactions.
This simple calculation gives a lower limit for the real efficiency of mass loss,
since higher rates will occur in the presence of substructures and infalling groups, as is likely in these
two clusters \citep{covone06b,a1689dinam}. Moreover, tidal heating \citep{taylor01} produced by the varying
cluster gravitational field will significantly accelerate mass loss \citep{gnedin03b,gnedin03},
although it is not considered in our model.\\
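The comparison shown in Fig.\ref{accrad} amounts to evaluating the NFW mass profile inside the tidal-acceleration formula; the sketch below illustrates this, with the cluster and galaxy parameters given as placeholders for the values listed in the tables:
\begin{verbatim}
import numpy as np

G = 4.301e-6                  # kpc (km/s)^2 / Msun

def m_nfw(r_kpc, M0, r_s):    # cluster mass within r (NFW)
    x = r_kpc / r_s
    return M0 * (np.log(1 + x) - x / (1 + x))

def a_rad(r_kpc, M_pert, R_gal):   # radial tidal acceleration
    return G * M_pert * (1/r_kpc**2 - 1/(r_kpc + R_gal)**2)

def a_gal(M_dyn, R_gal):           # internal galaxy acceleration
    return G * M_dyn / R_gal**2

M0, r_s = 1.2e15, 400.0            # placeholder cluster [Msun, kpc]
M_dyn, R_gal = 10**11.6, 25.0      # placeholder galaxy [Msun, kpc]
r = np.linspace(100, 1000, 10)     # cluster-centric distance [kpc]
stripped = a_rad(r, m_nfw(r, M0, r_s), R_gal) > a_gal(M_dyn, R_gal)
print(np.column_stack([r, stripped]))
\end{verbatim}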
In contrast to the radial acceleration, which tends to strip material from an infalling galaxy,
the transverse field compresses the
interstellar medium to produce gas infall toward the center and may trigger a burst of star formation.
Gas clouds experience a velocity perturbation due to the transversal tidal acceleration
and collide with other gas clouds.
The increase in the cloud velocity can be estimated as:
\begin{equation}
V = \int a_{tr} dt \approx a_{tr} \Delta t
\end{equation}
where $\Delta t$ is the age of the interaction.
The velocity increase and cloud collision produce a density enhancement in the center of the galaxy, which is
proportional to the Mach number ($MN$) squared.
Consequently the critical mass for the cloud collapse
(which is proportional to $\rho_{gas}^{-0.5}$) decreases by a factor $MN^{-1}$ and, in the case of a strong perturbation, could become
smaller than the typical mass of a galactic disk HI cloud ($\approx$300 $\rm M_{\odot}$, \citealp{spitzer78,jog92}),
favoring new episodes of star formation \citep{byrd96}.
Fig. \ref{highvel_mcrit} shows the ratio between the typical mass of HI clouds and the critical mass for cloud
collapse in the case of high velocity interactions, assuming $M_{pert}\approx10^{11.6} \rm M_{\odot}$ and $M_{pert}\approx10^{11.3} \rm M_{\odot}$
for 235144-260358 and 131124-012040 respectively as discussed above.
It appears that in both cases high velocity galaxy-galaxy interactions
are not strong enough to trigger a burst of star formation like the one observed in 235144-260358.\\
This is not the case for galaxy-cluster interactions.
Fig. \ref{mcrit} shows again the ratio between the typical mass of HI clouds and the critical mass for cloud
collapse as a function of the cluster-centric distance in the case of an interaction with the cluster potential.
In this case the two galaxies are in two different regimes, whatever the initial conditions of our model.
While in 235144-260358 the critical mass is already below $\approx$300 $\rm M_{\odot}$ and the
compressed gas is able to collapse and produce new stars, in 131124-012040 this is still not the case.
This result is consistent with our observations and
indicates that tidal forces from A2667
may have triggered the strong starburst in 235144-260358.\\
In summary our model suggests that gravitational interactions with the cluster potential alone are able
to strip material from the two infalling galaxies and to trigger a burst of star formation in 235144-260358.
Even if we cannot completely exclude a role of high velocity galaxy-galaxy interactions on the evolution
of these systems, it appears clear that they cannot account for all the properties of the two infalling galaxies.
\subsection{Ram pressure stripping}
Although the tidal interaction hypothesis is consistent with the presence of a strong starburst
only in 235144-260358, it is not able to explain why 131124-012040 shows clear signs of a
recent truncation of its star formation.
Both A1689 and A2667 are X-ray bright clusters suggesting that the effects of the hot intracluster medium
could be significant.
Therefore, to estimate the effects of ram pressure stripping on the infalling galaxies, we
adopt the classical \cite{GUNG72} criterion:
\begin{equation}
P_{ram}= \rho _{ICM} v^{2} \geq 2 \pi G \Sigma_{star} \Sigma_{gas}
\end{equation}
where $\rho_{ICM}$ is the density of the cluster medium, $\Sigma_{star}$ and $\Sigma_{gas}$
are the galaxy stellar and gas density, and
$v$ is the 3D infalling velocity of the galaxy (here assumed to be in the range 1000$<v<$ 1730 $\rm km~s^{-1}$
as discussed in the previous sections).
We use a $\beta$ model density profile for the ICM:
\begin{equation}
\rho (r) = \rho _{0} \frac{1}{[1+(r/r_{c})^{2}]^{3\beta/2}}
\end{equation}
(the values adopted for the different clusters are listed in Table \ref{cluster}).
We assume that the stellar and gas distributions of our galaxies are exponential,
as confirmed by their structural parameters.
The gas and stellar density profiles are \citep{domainko06}
\begin{equation}
\Sigma_{star,gas} (r) = \frac{M_{star,gas}}{2 \pi R_{0star,gas}^{2}} \exp(-r/R_{0star,gas})
\end{equation}
where $R_{0}$ is the scale length of the exponential profile (i.e. 0.59 $r_{e}$).
Assuming a gas scale length $R_{0gas}\approx1.8~R_{0star}$ \citep{cayatte94}, a
$M_{gas}/M_{star}$ ratio $\approx$1, typically observed in late type galaxies \citep{boselli}, and a
$M_{star}/L_{H}$ ratio $\approx$ 1 \citep{phenomen,mcgau00} the
typical stripping radius is given by the following relation:
\begin{equation}
R_{strip} \approx 0.64 R_{0} \times \ln \big( \frac{G (L_{H}/L_{\odot})^{2}}{1.8^{2}\rho _{ICM} v^{2} 2 \pi R_{0star}^{4}} \big)
\end{equation}
and the mass of gas stripped by ram pressure is:
\begin{equation}
M_{strip} = \frac{L_{H}}{L_{\odot}} \left(\frac{R_{strip}}{R_{0}} + 1\right) \exp(-R_{strip}/R_{0})
\end{equation}
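Numerically, the stripping radius can equivalently be found as the radius where the ram pressure first exceeds the gravitational restoring force; the sketch below implements this with placeholder cluster and galaxy parameters standing in for the values of Table \ref{cluster}:
\begin{verbatim}
import numpy as np

G = 4.301e-6                             # kpc (km/s)^2 / Msun

def rho_icm(r_kpc, rho0, r_c, beta):     # beta-model ICM density
    return rho0 / (1 + (r_kpc / r_c)**2)**(1.5 * beta)

def sigma_exp(r_kpc, M, R0):             # exponential surface density
    return M / (2 * np.pi * R0**2) * np.exp(-r_kpc / R0)

M_star, R0s = 10**10.6, 2.0              # placeholders [Msun, kpc]
M_gas, R0g = M_star, 1.8 * R0s           # M_gas/M_star = 1, R0gas = 1.8 R0star
rho0, r_c, beta = 5e6, 180.0, 0.7        # placeholder ICM [Msun/kpc^3, kpc]
v, r_cluster = 1410.0, 240.0             # infall velocity [km/s], distance [kpc]

R = np.linspace(0.1, 10 * R0s, 500)
restoring = 2*np.pi*G * sigma_exp(R, M_star, R0s) * sigma_exp(R, M_gas, R0g)
p_ram = rho_icm(r_cluster, rho0, r_c, beta) * v**2
R_strip = R[np.argmax(restoring < p_ram)]   # first radius where ram pressure wins
print(f"R_strip = {R_strip:.1f} kpc")
\end{verbatim}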
In Fig. \ref{rstrip} (left) we show the variation of the stripping radius as a function
of the distance from the cluster center, for three different values of the 3D infalling velocities
assumed in our model ($\approx$1000 $\rm km~s^{-1}$, $\approx$1410 $\rm km~s^{-1}$ and $\approx$1730 $\rm km~s^{-1}$).
While at its current location 131124-012040 has been almost totally stripped by ram
pressure ($R_{strip}\leq 0.9 \rm r_e$), in 235144-260358 ram pressure
has only affected the outer galaxy regions ($R_{strip}\geq 1.2 \rm r_e$).
The same result can be analyzed in terms of HI deficiency\footnote{The
HI deficiency is defined as the difference, in logarithmic units, between the
observed HI mass and the value expected from an isolated galaxy
with the same morphological type T and optical linear diameter D:
HI DEF = $\langle\log M_{HI}(T^{obs},D^{obs}_{opt})\rangle - \log M^{obs}_{HI}$ \citep{haynes}} (see Fig. \ref{rstrip} right).
131124-012040 has already lost $\geq$80 \% of its original gas content and, if observed in
a local cluster, it would be classified
as a highly HI-deficient object.
Conversely, ram pressure has only stripped a tiny fraction of the gas from 235144-260358, whose HI
deficiency ($\approx$0.25) would approximately
lie at the edge between normal and deficient galaxies (HI-deficiency$\approx$ 0.2).
We remark that the HI-deficiency shown in Fig.\ref{rstrip} is not determined from observations, but is
obtained from our analytical model.\\
Comparing Fig.\ref{rstrip} to Fig.\ref{accrad} it appears clear that ram pressure in 131124-012040 has become efficient
before gravitational interactions were able to strip material from the galaxy. This is qualitatively consistent
with the different time-scales of the interaction determined from the inversion of the color gradients ($t\leq$ 300 Myr, likely produced by ram pressure
stripping) and from the blue star-forming trails ($t<$ 100 Myr, clear signature of gravitational interactions).
\subsection{The origin of the blue star forming knots}
The mutual effects of gravitational interactions and ram pressure have already been
observed in several cluster galaxies; however, the tails of blue star forming knots discovered here represent an
extremely rare feature, to our knowledge previously observed in only one other
starburst galaxy, in the cluster Abell 2125 ($z\approx$0.247, \citealp{owen06}).
The morphology and luminosity of the knots suggest that we are dealing with dwarf galaxies
and/or stellar super-clusters \citep{felhauer02}.
This could explain the observed difference in the luminosity of the knots ($\approx$2.5 mag) between
the two galaxies, since the luminosity of the brightest star clusters is usually correlated with the SFR of the parent galaxy \citep{weidner04}.\\
The properties of the knots (i.e. colors and emission lines) suggest that they
are undergoing an extended period of star formation as discussed in Section 3.
From the model described in the previous sections, the dynamical age of the trails is
50$<t_{trail}<$150 Myr and 20$<t_{trail}<$60 Myr for 235144-260358 and
131124-012040 respectively, fairly consistent\footnote{However the value of $t_{trail}$ obtained in the previous section must be considered as
a lower limit, since it assumes that the stripped material is at rest with respect to the cluster.}
with the age inferred from their optical colors
(see Fig. \ref{ccdiagram}).
It is impossible to determine whether the knots
were already forming stars when they were stripped, or
whether their activity was triggered by an external mechanism once in the ICM.
We can exclude that these systems formed in the ICM by the accretion of
unbound material stripped from the parent galaxies: the combined effects of the cluster tidal field and
ram pressure tend to inhibit the formation of bound systems from the collapse of stripped material \citep{mihos04}.
The stripping scenario is instead consistent with numerical simulations:
\cite{elmegreen} showed that gravitational interactions can lead to the formation and ejection of peripheral self
gravitationally bound clouds with masses $\leq10^{8}~\rm M_{\odot}$, which begin their
life in a major burst of star formation.
Moreover, \cite{bekki03} have recently demonstrated that ram pressure can trigger
the collapse of stripped clouds, leading to a burst of star formation; this suggests that the formation and evolution
of the blue star forming knots is probably driven by the mutual effects of gravitational interactions and ram pressure.
Only deeper spectroscopic observations will shed light on the star formation history of these rare objects.
\section{Discussion \& Conclusion}
The analysis in this paper allows us to propose a scenario
for the evolution of the two disturbed galaxies in Abell 2667 and Abell 1689.
These objects are currently falling into massive, gas-rich galaxy clusters with similar mass
and gas density profiles (see Table \ref{cluster}).
Under the combined action of tidal forces (more likely from the cluster potential) and of ram pressure
by the ICM, their morphologies and star formation are strongly perturbed.
Self-gravitating bound systems are ejected
from the main galaxies, and stars and ionized gas are stripped from the stellar disks,
producing the observed tails of blue knots and stellar wisps
tracing the infalling trajectory of these systems into the cluster core.
Only the tidal field of Abell 2667 is able to drive a gas infall into the center of 235144-260358, triggering a nuclear burst
of star formation and making this galaxy a rare example of a luminous infrared cluster galaxy.
Simultaneously, ram pressure by the hot intracluster medium removes the neutral hydrogen from
the galaxy outskirts, but it is not able to affect the central regions where the starburst is taking place.
Conversely in 131124-012040 gravitational forces are not strong enough to trigger
the collapse of gas clouds while ram pressure is already extremely efficient.
At the present galaxy location, ram pressure has stripped at least $\approx$80\% of the original neutral hydrogen content,
quenching the star formation activity in this object, as confirmed by the strong Balmer lines
in absorption observed in the optical spectrum \citep{shioya02} and by the inversion of the optical color gradients
along the whole extent of the galaxy \citep{review,n4569}.\\
A larger statistical sample is necessary to determine whether we are witnessing
a common snapshot in the evolution of cluster galaxies or an extremely rare phenomenon.
In fact, as discussed in the Introduction, only these 2 galaxies out of 13 different clusters imaged
at 0.175$<z<$0.25 show extended trails of blue knots.
Within the WFPC2 field of view ($\approx$0.25 $\rm Mpc^{2}$ at $z\approx0.2$) there are typically $\approx$50
cluster members, but only $\approx$20\% of them are spiral galaxies \citep{balogh02},
implying a frequency of $\approx$1.5\% (2 out of 130) among cluster spirals at $z\approx$0.2.
This value is fairly consistent with the expected frequency roughly obtained by dividing the typical
time scale of the interaction ($\leq$200 Myr) by the age of the cluster ($\approx$11 Gyr).\\
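In explicit terms,
\[
f \approx \frac{t_{int}}{t_{cluster}} \approx \frac{0.2~{\rm Gyr}}{11~{\rm Gyr}} \approx 2\%,
\]
close to the observed $\approx$1.5\%.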
If we are witnessing a common step of cluster galaxy evolution, what can we learn by studying these two rare objects?
Abell 2667 and Abell 1689 have comparable mass and gas profiles and the two galaxies are approximately
at the same projected cluster-centric distance, suggesting that the absolute intensity of the cluster tidal field and of ram pressure
by the cluster ICM is approximately the same in the two environments.
Therefore we can speculate that the different recent evolutionary history of the infalling systems could be in part
related to their different properties (i.e. their different luminosities: $\approx\rm L^{*}$ and $\approx$0.1L$^{*}$).
In this case, our result suggests that giant spiral galaxies infalling into the core of massive clusters are
mainly perturbed by the gravitational interaction with the cluster.
Stars respond by forming arms and bars, while the gas flows directly toward the central region within $t\approx$100 Myr.
The sinking of the gas towards the center triggers a burst of star formation and is able to alter the galaxy
morphology (\citealp{iono}). Ram pressure stripping produces a truncation of the disk, but only in the outskirts of the
galaxy, being inefficient within the optical effective radius.
When all the remaining fuel has been consumed by star formation, this galaxy will no longer
appear as a disky gas-rich system but more likely as a bulge-dominated quiescent spiral.
This is not the case for less massive galaxies. Ram pressure is much more efficient on low mass systems and it is able to
strip a considerable fraction of the neutral hydrogen from the inner part of these galaxies, preventing the gas from
sinking toward the center under the tidal interaction and quenching the star formation activity.
Within $\approx$1 Gyr this object will no longer appear as a blue spiral but
will probably look like an early-type (i.e. red) disky spiral \citep{shioya02}.
The different evolutionary paths for low and high mass infalling galaxies emerging from our analysis
are apparently consistent with recent observations and models, suggesting that the bulk of the cluster population of giant bulge
dominated early type spiral galaxies can only be formed during some kind of gravitational
interaction \citep{dress04car,mihos04}, while lower mass systems can be transformed by simple gas
removal from healthy spirals \citep{poggianticoma}.\\
The properties of the blue knots stripped from the infalling galaxies deserve particular attention.
These systems have a luminosity (-16.5$\leq M \leq$-11.5) and
a physical size ($r_{e}\leq0.45$ kpc) typical of dwarf galaxies and consistent with the ultra compact dwarf
galaxies (UCD, \citealp{hinker99,philips01}), recently discovered in Abell 1689 \citep{mieske04,mieske05}.
There are two competing formation scenarios to explain the origin of UCDs.
\cite{bekki03b} propose that they are the remnants of stripped dwarf galaxies.
In this scenario a nucleated dwarf loses its envelope and a great part of its dark matter content due to the tidal interaction
with another object. On the contrary, \cite{felhauer02} propose that UCDs could originate from the \emph{amalgamation}
of rich aggregates of young massive star clusters that can form during gravitational interactions between gas-rich galaxies.
It appears clear that the knots discovered here strongly support the second scenario, suggesting that at least part
of the population of ultra-compact dwarfs originates from young massive star clusters: we are probably
witnessing, for the first time, the dawn of the UCDs.
This scenario is also consistent with the recent discovery of a massive extragalactic
star cluster ($M\geq10^{6}~ \rm M_{\odot}$, $t\approx700$ Myr) lying at a projected distance of 17 kpc from
the merger remnant NGC3310 and likely formed during the merging event \citep{knapp06}.\\
Finally the diffuse stellar streams and ionized gas observed along the trails
suggest that the mechanisms acting here
will significantly influence the properties of the intracluster light and contribute
to the enrichment of the ICM.
The results obtained here might be representative only of clusters at $z\geq$0.2, where the infall rate
is higher and galaxies have a higher gas content than observed in local clusters of galaxies.
\section*{Acknowledgments}
We thank the referee, D. Christlein, for his useful comments which helped us to improve and strengthen the paper.
LC is supported by the U.K. Particle Physics and Astronomy Research Council.
Part of the data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the
California Institute of Technology, the University of California and
the National Aeronautics and Space Administration. The Observatory was
made possible by the generous financial support of the W.M. Keck
Foundation. This work was partially supported by NASA
contract 1255094, administered by JPL/Caltech.
JR acknowledges support from Caltech
\section{\hrulefill\Large\textbf{~#1~}\hrulefill}\label{sec:#1}}
\newcommand{\grayst}[1]{\textcolor{mygray}{\st{#1}}}
\newcommand{\example}[1]{\noindent\begin{quote}\small #1\end{quote}}
\NewDocumentCommand{\evalat}{sO{\big}mm}{%
\IfBooleanTF{#1}
{\mleft. #3 \mright|_{#4}}
{#3#2|_{#4}}%
}
\title{Supervised learning on heterogeneous, attributed entities interacting over time}
\author{
Amine Laghaout \\
CSIS Security Group A/S, \\
Vestergade 2B, 4. sal, \\
Copenhagen K, Denmark
}
\begin{document}
\maketitle
\begin{abstract}
Most physical or social phenomena can be represented by ontologies where the constituent entities are interacting in various ways with each other and with their environment. Furthermore, those entities are likely heterogeneous and attributed with features that evolve dynamically in time as a response to their successive interactions. In order to apply machine learning on such entities, e.g., for classification purposes, one therefore needs to integrate the interactions into the feature engineering in a systematic way. This proposal shows how, to this end, the current state of graph machine learning remains inadequate and needs to be augmented with a comprehensive feature engineering paradigm in space and time.
\end{abstract}
\section{Introduction}
\label{sec:introduction}
\subsection{Motivation}
\label{sec:Motivation}
In the industry, and even in the scientific literature, supervised learning has overwhelmingly been applied to data sets where the training examples are
\begin{enumerate}[label=({\roman*})]
\item independent of each other, and \label{pt:independent}
\item homogeneous, i.e., the subjects of the classification or regression are instances of the same entity type such that each column in the design matrix has a consistent interpretation and format across all rows. \label{pt:homogeneous}
\end{enumerate}
In other words, the training examples used for the estimation\footnote{Estimation shall herein refer indiscriminately to either classification or regression.} refer to entities that are \ref{pt:independent} non-interacting and of \ref{pt:homogeneous} the same type. This assumption---or rather, approximation---implies that the entities are decoupled from their environment and that their \href{https://en.wikipedia.org/wiki/Design_matrix}{design matrix} is self-contained. In the real world, however, one cannot make such an assumption since any given entity to be estimated is likely part of a broader \href{https://en.wikipedia.org/wiki/Ontology_(information_science)}{ontology} which intertwines its properties with those of other entities.
One can attempt to handcraft the relationships of the ontology into the design matrix of each entity type, but such a feature engineering exercise is demanding in human expertise, prone to the introduction of biases, and more importantly, cannot generalize to arbitrary problem domains. Significant progress in overcoming this limitation has been afforded by the recent rise to prominence of graph machine learning (GML) \citep{wu2020comprehensive, zhang2018deep}. Applications of GML to evidently graph-based ontologies such as social networks \citep{al-eidi2020time-ordered} or cybersecurity \citep{liu2019heterogeneous, sun2019hindom} have quickly flourished and developer tools such as \href{https://www.stellargraph.io/}{StellarGraph} \citep{StellarGraph} are reaching maturity. Despite all these advances, a thoroughly comprehensive and generalizable approach has yet to be devised for the estimation of heterogeneous, attributed nodes that interact in time. This latter scenario is indeed the most ubiquitous in the real world since, at the most fundamental level, all phenomena can be reduced to physical interactions between indivisible entities. Some attempts in this direction, such as spatio-temporal graphs \citep{zhang2018gaan}, do take into account the space-time aspect of the problem but cannot accommodate the heterogeneity of the graph. Conversely, the few algorithms that can handle heterogeneity such as GraphSAGE \citep{hamilton2017inductive}, and its derivative HinSAGE, cannot model time-evolution of both the node attributes and of the edges. More crucially, the main shortcoming of GML lies in the fact that interactions between entities are constrained to bi-partite relationships. This makes it inapplicable for problems where the interaction can---in principle---behave as a stochastic black box involving an arbitrary number of vertices with no particular directionality.
This proposal aims to resolve the above problem by outlining a reference architecture for the estimation of heterogeneous, attributed entities that interact with their environment---and with each other---over time. In order to accommodate arbitrary interactions, it shall do away with GML's attempts to force-fit\footnote{One particular workaround that could use GML techniques shall be ignored here, namely the one where interactions are not edges, but vertices, on an equal footing with entities. Notwithstanding the fact that blending interactions and entities in the same set of vertices is conceptually inelegant, the proliferation of nodes that would ensue would lead to unwieldy dimensionalities.} graph topologies onto the ontology of the problem domain. Instead, it shall let interactions be modeled as learnable modules that can be recycled for any entity instances they involve. Because a parallel can be drawn between interactions and the notion of \href{https://en.wikipedia.org/wiki/Hypergraph}{hyperedges}, one can describe the proposal herein as a revised, tentative blueprint for \textit{hypergraph machine learning} \citep{zhou2006learning, jiang2019dynamic}.
The building blocks of the problem are formally defined in \S\ref{sec:Building blocks}; \S\ref{sec:Supervised learning} presents the architecture and spells out the learning problem in terms of the building blocks; and \S\ref{sec:Outlook} goes over the open questions and blind spots that may require further investigation.
\subsection{A real-world use case: COVID-19}
\label{sec:A real-world use case: COVID-19}
An intuition for the power and ubiquity of the present proposal is best elicited by a real-world use case. Due to its global impact and media exposure, a relatable application is the modeling of the spread of a pandemic, such as COVID-19. The goal of the model is to determine whether an \textit{entity} is infected by---or acts as a vector for---the disease. Here, \textit{entities} could be biological (e.g., human beings, pets, wild animals) or inanimate (e.g., objects such as doorknobs or handrails, or venues such as markets or fitness clubs). One can see that each of these entities has attributes, i.e., features, that are intrinsic to them. These can be the age or genetic makeup for a biological entity, or the capacity or level of sanitation for a physical venue. Note how these \textit{intrinsic features} are most often dynamic and hence require a temporal treatment. \textit{Extrinsic features}, on the other hand, arise from the \textit{interactions} that the entities have had with one another, conditioned on various \textit{environmental parameters}. As entities interact, their extrinsic features---which are also dynamic---are to be updated so as to reflect the changing likelihood that any given entity carries the virus.\footnote{Note how the plethora of preventative measures that were taken during the pandemic, such as social distancing, lockdowns, or the adoption of face masks, are all modulations on the environmental parameters, or even direct changes to the intrinsic and extrinsic features, aimed at minimizing the likelihood that entities are classified as vectors of the disease.}
\section{Building blocks}
\label{sec:Building blocks}
\subsection{Entities}
\label{sec:Entities}
Let $\varepsilon_{k}^{(j)}$ be the $k$-th instance of an entity of type $j$. The set $\mathcal{E}^{(j)}$ of entities of type $j$ adds up to the overall, heterogeneous set $\mathcal{E}$ of all entities, i.e.,
\begin{equation}
\varepsilon_{k}^{(j)} \in \mathcal{E}^{(j)} \subset \mathcal{E} = \bigcup\limits_{l=1}^{E} \mathcal{E}^{(l)},
\end{equation}
where $E$ is the number of entity types. Each entity $\varepsilon_{k}^{(j)}$ is represented algebraically by a vector of features $\hat{d}_k^{(j)}$ whose interpretation and dimensionality are fixed by the entity type (cf. \S\ref{sec:Data representation}).
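For concreteness, the following minimal sketch shows one possible in-code representation of the typed entity sets; it is written in Python, and the class and container names are illustrative assumptions rather than part of the formal definition.
\begin{verbatim}
from dataclasses import dataclass

import numpy as np

@dataclass
class Entity:
    """An entity instance: the k-th entity of type j."""
    entity_type: int   # j, which fixes the feature schema and dimensions
    index: int         # k, unique within the type
    data: np.ndarray   # the data vector d_k^(j) (cf. Data representation)

# The heterogeneous set E as the union of the per-type sets E^(j):
entities: dict[int, list[Entity]] = {j: [] for j in range(3)}  # E = 3 types
\end{verbatim}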
\subsection{Interactions}
\label{sec:Interactions}
Entities are not static: they interact with their environment and update their features upon those interactions. The most trivial interaction involves a single entity $\varepsilon$ whose labels or attribute features are re-written by the environment independently of other entities. A more interesting case arises when the environment also involves one or more other entities $\varepsilon' \neq \varepsilon$ of potentially different types. These other entities $\varepsilon'$ both influence---and are in turn influenced by---the presence of $\varepsilon$. Such interactions thus induce correlations (i.e., dependencies) or perturbations (i.e., noise) among the features of the entities at play.
Let us formalize the above by denoting the $l$-th instance of an interaction of type $i$ as a function
\begin{equation}
\chi_{l}^{(i)} = \chi_{l}^{(i)}(\xi_l^{(i)}, \vec{\tau}_l^{(i)}, t)
\label{eq:interaction}
\end{equation}
of the set\footnote{$\xi_l^{(i)}$ is, strictly speaking, a dictionary, or associative array, of key-value pairs where the key specifies the role in the interaction, and the value specifies the entity instance.} $\xi_l^{(i)} \subset \mathcal{E}$ of entities involved, the vector $\vec{\tau}_l^{(i)}$ of environmental parameters that modulate the interaction, and the timestamp $t$ of when the interaction occurred. Note that a given interaction type $i$ predetermines the structure of both $\xi_l^{(i)}$ and $\vec{\tau}_l^{(i)}$. One can think of an interaction \textit{type} as a set of co-occurring relationships within the broader ontology of the problem domain. An interaction is instantiated, i.e., subscripted with some index $l$ as in Eq. (\ref{eq:interaction}), once it is time-stamped and associated with a particular set of entity instances $\xi_l^{(i)}$ and environmental parameters $\vec{\tau}_l^{(i)}$.
Just as for entities, interactions can be grouped into sets according to their types, thereby adding up to the overall set $\mathcal{X}$ of $I$ possible interaction types:
\begin{equation}
\chi_{l}^{(i)} \in \mathcal{X}^{(i)} \subset \mathcal{X} = \bigcup\limits_{\iota=1}^{I} \mathcal{X}^{(\iota)}.
\label{eq:interaction sets}
\end{equation}
Notice how $\chi$ is the multipartite generalization of the attributed pairwise edge $\xi = \braces{\varepsilon, \varepsilon'}$ commonly known from ``run-of-the-mill'' graph theory. The definition of an interaction in Eq. (\ref{eq:interaction}) is thus more akin to a hyperedge that spans $\xi$, is attributed with $\vec{\tau}$, and is time-stamped at $t$.
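As a hedged sketch, an interaction instance could then be represented as follows, reusing the illustrative \texttt{Entity} class from \S\ref{sec:Entities}; the role-keyed dictionary mirrors the footnote above.
\begin{verbatim}
from dataclasses import dataclass

import numpy as np

@dataclass
class Interaction:
    """An interaction instance chi_l^(i); its type i fixes the structure
    of both the participant roles and the environmental parameters."""
    interaction_type: int              # i
    participants: dict[str, "Entity"]  # xi_l^(i): role -> entity instance
    env_params: np.ndarray             # tau_l^(i)
    timestamp: float                   # t
\end{verbatim}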
\subsection{Data representation}
\label{sec:Data representation}
Let the knowledge about an entity $\varepsilon_k^{(j)}$ at time $t$ be encoded by its \textit{data vector}\footnote{or more generally, a tensor}
\begin{eqnarray}
\hat{d}_k^{(j)}(t) & = & \overbrace{\vec{b}_k^{(j)}(t)}^{\scriptsize \mbox{targets}} \oplus \overbrace{\hat{f}_k^{(j)}(t)}^{\scriptsize \mbox{intrinsic features}} \oplus \hak{\overbrace{\bigoplus\limits_{\forall i\mid\varepsilon^{(j)}\in \xi^{(i)}} \hat{\chi}_k^{(j, i)}(t)}^{\scriptsize\mbox{extrinsic features}}},
\label{eq:data vector}
\end{eqnarray}
which is the concatenation\footnote{Concatenation shall be symbolized mathematically as the direct sum operator $\oplus$.} of three vectors, namely the target features, the intrinsic features, and the extrinsic features. Notice that the extrinsic features are themselves a concatenation of as many interactions as an entity of type $j$ is involved in. This is developed further in \S\ref{sec:Extrinsic features}.
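A minimal sketch of this concatenation follows; the fixed ordering of interaction types per entity type is an assumption needed for the dimensionality of Eq. (\ref{eq:data vector}) to be well-defined.
\begin{verbatim}
import numpy as np

def data_vector(targets, intrinsic, extrinsic_per_type):
    """Concatenate b_k^(j), f-hat_k^(j), and one chi-hat_k^(j,i) per
    interaction type i that entities of type j can be involved in."""
    return np.concatenate([targets, intrinsic, *extrinsic_per_type])

# Example: 2 target dims, 4 intrinsic dims, two 3-dim extrinsic summaries.
d = data_vector(np.zeros(2), np.zeros(4), [np.zeros(3), np.zeros(3)])
assert d.shape == (12,)
\end{verbatim}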
\subsubsection{Target features}
\label{sec:Target features}
The \textit{target features} $\vec{b}_k^{(j)}(t)$ at time $t$ of an entity $\varepsilon_k^{(j)}$ are the scores or labels\footnote{One shall refer to scores and labels interchangeably, thus leaving the freedom to the particular use-case to decide whether it is a matter of regression or classification, respectively.} which drive supervised learning. One can assume that one begins with a body of labeled entities for which the targets are well-defined and serve as ground truths or, more realistically, as seed \textit{beliefs}\footnote{hence the $b$-notation for beliefs} for the estimation of the entities.
\subsubsection{Intrinsic features}
\label{sec:Intrinsic features}
The \textit{intrinsic features} $\hat{f}_k^{(j)}(t)$ at time $t$ of an entity $\varepsilon_k^{(j)}$ are any features which can be completely decoupled from the presence of other entities $\varepsilon_{k'}^{(j')}\neq\varepsilon_k^{(j)}$ in the environment. Moreover, since the system is dynamic, $\hat{f}_k^{(j)}$ is an aggregation through time of all the sequential updates $\vec{f}_k^{(j)}(t'\mid t' < t)$ undergone by $\varepsilon_k^{(j)}$ up to time $t$. While $\vec{f}_k^{(j)}$ is most often human-readable, $\hat{f}_k^{(j)}$ can instead be an abstract encoding that collapses the history of intrinsic feature updates onto a fixed, lower-dimensional space. This time-collapse---or aggregation---operation $\mathcal{M}_f^{(j)}$ shall be denoted by
\begin{eqnarray}
\hat{f}_k^{(j)}(t) & = & \mathcal{M}_f^{(j)}\!\!\tes{\bigoplus\limits_{t-\Delta T \leq t' < t} \hak{\vec{f}_k^{(j)}(t'), t'}}
\label{eq:intrinsic-time-aggregator-expanded}
\end{eqnarray}
where $\Delta T$ is the lookback period from the current time $t$ and should ideally span all the way back to the creation time of $\varepsilon_k^{(j)}$.\footnote{Because of implementational constraints, however, only the most recent interval of history can be stored in memory so $\Delta T$ will most likely be a finite time window.} Note that the \textit{time-aggregator} $\mathcal{M}_f^{(j)}$ of intrinsic features does not merely operate on the unordered set of updates $\vec{f}_k^{(j)}(t')$, but rather on their \textit{history}, i.e., on pairs $\hak{\vec{f}_k^{(j)}(t'), t'}$ where $t'$ is needed to serve as an \textit{attention} parameter.\footnote{More recent events typically deserve more attention than older ones.} One can thus re-write Eq. (\ref{eq:intrinsic-time-aggregator-expanded}) as
\begin{equation}
\hat{f}_k^{(j)}(t) = \mathcal{M}_f^{(j)}\tes{H(\vec{f}_k^{(j)})\Big|_{\scriptsize t{-}\Delta T}^{\scriptsize t}}
\label{eq:intrinsic-time-aggregator}
\end{equation}
where
\begin{equation}
H(v)\Big|_{\scriptsize t{-}\Delta T}^{\scriptsize t} = \bigoplus\limits_{t-\Delta T \leq t' < t} \hak{v(t'), t'}
\label{eq:history}
\end{equation}
represents the time-stamped history of any variable $v$ from $t{-}\Delta T$ to $t$.
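One possible realization of the history operator of Eq. (\ref{eq:history}) is a buffer that evicts entries older than the lookback window; the sketch below is illustrative and makes no claim about the optimal storage strategy.
\begin{verbatim}
from collections import deque

class History:
    """Time-stamped history H(v) restricted to [t - Delta_T, t)."""
    def __init__(self, lookback: float):
        self.lookback = lookback  # Delta_T
        self.buffer = deque()     # pairs (v(t'), t'), oldest first

    def append(self, value, t: float) -> None:
        self.buffer.append((value, t))
        # Evict entries that fell out of the lookback window.
        while self.buffer and self.buffer[0][1] < t - self.lookback:
            self.buffer.popleft()

    def snapshot(self) -> list:
        return list(self.buffer)
\end{verbatim}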
In terms of deep learning architecture, $\mathcal{M}_f^{(j)}$ can be implemented by a recurrent neural network or a transformer \citep{vaswani2017attention}. However, a (pseudo-)Markovian simplification,\footnote{The viability of this simplification is subject to experimentation.} denoted $\tilde{\mathcal{M}}_f^{(j)}$, could make use of only the current update and the previous aggregation, i.e.,
\begin{equation}
\hat{f}_k^{(j)}(t) = \tilde{\mathcal{M}}_f^{(j)}\!\!\tes{\vec{f}_k^{(j)}(t), \hat{f}_k^{(j)}(t{-}1)}.
\label{eq:intrinsic-time-aggregator-Markovian}
\end{equation}
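Under this assumption, $\tilde{\mathcal{M}}_f^{(j)}$ reduces to a recurrent cell. The sketch below uses a GRU cell in PyTorch purely as one plausible choice, with illustrative dimensions; the same pattern would apply to the extrinsic aggregator $\tilde{\mathcal{M}}_{\chi}^{(j, i)}$ introduced in \S\ref{sec:Extrinsic features}.
\begin{verbatim}
import torch

class MarkovianTimeAggregator(torch.nn.Module):
    """Fold the current update into the previous aggregate, as in
    Eq. (intrinsic-time-aggregator-Markovian); one cell per type j."""
    def __init__(self, update_dim: int, latent_dim: int):
        super().__init__()
        self.cell = torch.nn.GRUCell(update_dim, latent_dim)

    def forward(self, update, prev_aggregate):
        # update ~ f_k^(j)(t); prev_aggregate ~ f-hat_k^(j)(t-1)
        return self.cell(update, prev_aggregate)

agg = MarkovianTimeAggregator(update_dim=8, latent_dim=16)
f_hat = agg(torch.zeros(1, 8), torch.zeros(1, 16))  # batch of one entity
\end{verbatim}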
\subsubsection{Extrinsic features}
\label{sec:Extrinsic features}
The \textit{extrinsic features} $\hat{\chi}_k^{(j,i)}(t)$ at time $t$ of an entity $\varepsilon_k^{(j)}$ are those that depend on the data vectors of other entities in the context of an interaction $\chi_l^{(i)}$ of type $i$. $\hat{\chi}_k^{(j,i)}(t)$ is thus a latent representation which summarizes the sequence of interactions of type $i$ in which $\varepsilon_k^{(j)}$ has been involved up to and including time step $t$. In a manner similar to Eqs. (\ref{eq:intrinsic-time-aggregator-expanded}, \ref{eq:intrinsic-time-aggregator}, \ref{eq:history}), this time-aggregation can be expressed by
\begin{equation}
\hat{\chi}_k^{(j, i)}(t) = \mathcal{M}_{\chi}^{(j, i)}\tes{H(\vec{\chi}_k^{(j, i)})\Big|_{\scriptsize t{-}\Delta T}^{\scriptsize t}}
\label{eq:extrinsic-time-aggregator}
\end{equation}
where $\vec{\chi}_k^{(j, i)}(t')$ is the latent representation---from the perspective of $\varepsilon_k^{(j)}$---of some particular interaction instance
\begin{equation}
\chi_{l}^{(i)}\!\tes{\xi_{l}^{(i)}, \vec{\tau}_{l}^{(i)}, t'} \mid \varepsilon_k^{(j)} \in \xi_{l}^{(i)}
\label{eq:space-aggregator}
\end{equation}
which took place at time $t'$.
The process by which an interaction $\chi_{l}^{(i)}$ generates a latent representation $\vec{\chi}_k^{(j, i)}$ of itself for each of its participating entities $\varepsilon_k^{(j)}$ is exemplified in Fig. \ref{fig:space_aggregator} for the tri-partite case. This process shall be referred to as \textit{space-aggregation} in the sense that, unlike $\mathcal{M}_f^{(j)}$ and $\mathcal{M}_{\chi}^{(j, i)}$, which aggregate histories through time, $\chi_{l}^{(i)}$ aggregates the features of neighbouring entities as per the topology of the hypergraph that links them (in space). One can therefore consider $\chi_{l}^{(i)}$ as a black box for any conceivable technique from graph machine learning \citep{wu2020comprehensive} or even traditional belief propagation \citep{yedida2003understanding}. Formally, space-aggregation at time step $t$ shall be expressed as the mapping
\begin{eqnarray}
\chi_{l}^{(i)}: \bigcup\limits_{\varepsilon_k^{(j)} \in \xi_l^{(i)}} \braces{\hat{d}_k^{(j)}(t{-}1)} \rightarrow \bigcup\limits_{\varepsilon_k^{(j)} \in \xi_l^{(i)}} \braces{\vec{\chi}_k^{(j,i)}(t)}.
\label{eq:space-aggregator-explicit}
\end{eqnarray}
Finally, going back to the time-aggregation $\mathcal{M}_\chi^{(j, i)}$ of extrinsic features, one can consider the Markovian assumption
\begin{equation}
\hat{\chi}_k^{(j, i)}(t) = \tilde{\mathcal{M}}_{\chi}^{(j, i)}\!\!\tes{\vec{\chi}_k^{(j, i)}(t), \hat{\chi}_k^{(j, i)}(t{-}1)}.
\label{eq:extrinsic-time-aggregator-Markovian}
\end{equation}
\begin{figure}
\centering
\includegraphics[width=.65\columnwidth]{images/space_aggregator.pdf}
\caption{Three entities $\xi_l^{(i)} = \{\varepsilon_{k}^{(j)}, \varepsilon_{k'}^{(j')}, \varepsilon_{k''}^{(j'')}\}$ are involved in an interaction $\chi_l^{(i)}$ of type $i$ at the time step $t$. For each of the entities, the interaction $\chi_l^{(i)}$ is represented by a vector $\vec{\chi}$ which summarizes a ``personalized takeaway message'' from their encounter. This operation is based on the data vectors at the previous time step of the entities involved as well as a vector $\vec{\tau}_l^{(i)}$ of environmental parameters.}
\label{fig:space_aggregator}
\end{figure}
\section{Supervised learning}
\label{sec:Supervised learning}
\subsection{Circuit diagram of the estimation}
Figure \ref{fig:architecture_non_markovian} brings together the building blocks that were presented above. It depicts, on a discretized timeline, the flow of information that culminates at time $t$ with the training on---or the serving of---a target belief for entity $\varepsilon_k^{(j)}$. Figure \ref{fig:architecture_markovian} shows the same scenario as Fig. \ref{fig:architecture_non_markovian} under the Markovian assumptions of Eqs. (\ref{eq:intrinsic-time-aggregator-Markovian}) and (\ref{eq:extrinsic-time-aggregator-Markovian}).
As indicated by the small diagonal arrows, the environment can at any time step do any one of three operations on the entity, namely
\begin{itemize}
\item update its target belief,
\item update its intrinsic features, or
\item involve it in one or more interactions, thereby updating its extrinsic features.
\end{itemize}
The pseudocode for systematically processing these three potential updates in an online fashion is shown in Alg. \ref{alg:pseudocode}.
\begin{algorithm}
\SetAlgoLined
\KwResult{The parameters of $\mathcal{M}^{(j)}$, $\mathcal{M}_{f}^{(j)}$, $\mathcal{M}_{\chi}^{(j, i)}$, and $\chi_l^{(i)}$ are optimized so as to minimize the loss function $D\!\tes{\vec{b}^{(j)}_k(t), \vec{\beta}_k^{(j)}(t)}$ as per Eq. (\ref{eq:optimization}).}
initialize the weights of $\mathcal{M}^{(j)}$, $\mathcal{M}_{f}^{(j)}$, $\mathcal{M}_{\chi}^{(j, i)}$, and $\chi_l^{(i)}$ randomly (or via transfer learning, if applicable)\;
\ForEach{entity instance $k$ of the fixed type $j$}
{
\ForEach{time step $t' \le t$}
{
\vspace{10pt}
\tcp{Update the intrinsic features.}
\uIf{the intrinsic feature $\vec{f}_{k}^{(j)}$ is updated at time $t'$}
{
append the pair $\hak{\vec{f}_k^{(j)}(t'), t'}$ to the history $\evalat{H(\vec{f}_{k}^{(j)})}{t'-\Delta T}^{t'-1}$ of intrinsic updates\;
}
time-aggregate the history of intrinsic updates into $\hat{f}_k^{(j)}(t')$ with Eq. (\ref{eq:intrinsic-time-aggregator})\;
\vspace{10pt}
\tcp{Update the target beliefs.}
\uIf{the belief is updated at time $t'$}
{
assign the new belief to $\vec{b}_k^{(j)}(t')$\;
}
\uElse
{
assume that the previous belief remained unchanged, i.e., $\vec{b}_k^{(j)}(t') \leftarrow \vec{b}_k^{(j)}(t'{-}1)$\;
}
\vspace{10pt}
\tcp{Update the extrinsic features.}
\ForEach{interaction $\chi_l^{(i)}$ of type $i$ that $\varepsilon_k^{(j)}$ can be involved in}
{
\uIf{$\varepsilon_k^{(j)}$ is indeed involved in $\chi_l^{(i)}$ at time $t'$}
{
space-aggregate the data vectors at time $t'{-}1$ of all entities involved in $\chi_l^{(i)}$ into $\vec{\chi}_k^{(j, i)}$ with Eq. (\ref{eq:space-aggregator-explicit})\;
append the pair $\hak{\vec{\chi}_k^{(j, i)}(t'), t'}$ to the history $\evalat{H(\vec{\chi}_{k}^{(j, i)})}{t'-\Delta T}^{t'-1}$ of extrinsic updates\;
}
time-aggregate the history of extrinsic updates into $\hat{\chi}_k^{(j, i)}(t')$ with Eq. (\ref{eq:extrinsic-time-aggregator})\;
}
\vspace{10pt}
\tcp{Evaluate against the target.}
concatenate the aggregated intrinsic and extrinsic features via Eq. (\ref{eq:concatenate intrinsic and extrinsic})\;
project the resulting vector in the space of beliefs via Eq. (\ref{eq:estimation})\;
perform back-propagation on the aggregators so as to align the projected vector with the target belief via Eq. (\ref{eq:optimization})\;
}
}
\caption{Online training of the estimator for an entity $\varepsilon_k^{(j)}$ of type $j$ at time $t$}
\label{alg:pseudocode}
\end{algorithm}
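A condensed Python transcription of the inner loop of Alg. \ref{alg:pseudocode} might look as follows; the \texttt{events} and \texttt{modules} containers, as well as the attribute names on \texttt{entity}, are hypothetical scaffolding around the sketches given earlier.
\begin{verbatim}
def process_time_step(entity, t, events, modules):
    """One iteration over t' for a single entity (illustrative only)."""
    j = entity.entity_type
    # Intrinsic features: append the update, then time-aggregate.
    if "intrinsic" in events:
        entity.f_history.append(events["intrinsic"], t)
    entity.f_hat = modules["M_f"][j](entity.f_history.snapshot())
    # Target beliefs: carry the previous belief over if none arrived.
    entity.belief = events.get("belief", entity.belief)
    # Extrinsic features: space-aggregate, append, then time-aggregate.
    for i, interaction in events.get("interactions", {}).items():
        message = modules["chi"][i](interaction)  # per-entity takeaway
        entity.chi_history[i].append(message, t)
    for i, history in entity.chi_history.items():
        entity.chi_hat[i] = modules["M_chi"][(j, i)](history.snapshot())
\end{verbatim}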
\begin{figure*}
\includegraphics[width=1\columnwidth]{images/architecture_non_markovian.pdf}
\caption{This circuit diagram shows the successive transformations that are undergone by entity $\varepsilon_k^{(j)}$ as it interacts with its environment on a discretized time scale. Three possible events---i.e., inputs from the environment---can occur at any time step $t$, namely the update of its target features, the update of its intrinsic features, or its involvement in an interaction with neighbouring entities. For each of these events, the data vector of $\varepsilon_k^{(j)}$ is re-processed by direct overwriting (of the targets), by time-aggregation $\mathcal{M}_{f}^{(j)}$ (of the intrinsic features), or by space-aggregation $\chi_{l}^{(i)}$ followed by time-aggregation $\mathcal{M}_{\chi}^{(j, i)}$ (of the extrinsic features). The resulting latent representation is then merged by an overarching mapping $\mathcal{M}^{(j)}$ which projects it on the same space as that of the target features. All four mappings $\mathcal{M}_{f}^{(j)}$, $\chi_{l}^{(i)}$, $\mathcal{M}_{\chi}^{(j, i)}$, and $\mathcal{M}^{(j)}$ are therefore to be optimized in view of a single common goal, namely the minimization of the loss function $D$ in Eq. (\ref{eq:optimization}). For simplicity, only the swimlane relevant to $\varepsilon_k^{(j)}$ is shown here and all irrelevant connections onto the swimlanes of other entities are omitted (e.g., connections to $\vec{\chi}_{k'}^{(j', i)}$). Similarly, in order to reduce clutter, only the input at time $t{-}1$ from a single neighbour $\varepsilon_{k'}^{(j')}$ is shown, although, in practice, any number of entities can converge at the interaction node $\chi_l^{(i)}$. Finally, once again for the sake of simplicity, only one interaction instance is shown.}
\label{fig:architecture_non_markovian}
\end{figure*}
\begin{figure*}
\includegraphics[width=1\columnwidth]{images/architecture_markovian.pdf}
\caption{Analog of the circuit diagram of Fig. \ref{fig:architecture_non_markovian} based on the (pseudo-)Markovian assumptions for time-aggregation, i.e., Eqs. (\ref{eq:intrinsic-time-aggregator-Markovian}) and (\ref{eq:extrinsic-time-aggregator-Markovian}).}
\label{fig:architecture_markovian}
\end{figure*}
\subsection{Analytical derivation of the estimation}
As stated in the introduction, the aim of this article is to outline a reference architecture for supervised learning on the entities. Taking entity $\varepsilon_k^{(j)}$ as an example, the goal is to learn a function $\mathcal{M}^{(j)}$ which maps its features
\begin{equation}
\hat{f}_k^{(j)}(t) \oplus \hak{\bigoplus\limits_{\forall i \mid \varepsilon^{(j)}\in\xi^{(i)}} \hat{\chi}_k^{(j,i)}(t)}
\label{eq:concatenate intrinsic and extrinsic}
\end{equation}
onto its target $\vec{b}_k^{(j)}(t)$. In practice, $\mathcal{M}^{(j)}$ can only achieve an approximation
\begin{equation}
\vec{\beta}_k^{(j)}(t) = \mathcal{M}^{(j)}\!\!\tes{\hat{f}_k^{(j)}(t) \oplus \hak{\bigoplus\limits_{\forall i \mid \varepsilon^{(j)}\in\xi^{(i)}} \hat{\chi}_k^{(j,i)}(t)}}
\end{equation}
of the actual target $\vec{b}_k^{(j)}(t)$ such that one is left with the optimization problem
\begin{equation}
\operatornamewithlimits{argmin}_{\mathcal{M}^{(j)}} \frac{1}{K^{(j)}\Delta T} \sum\limits_{t'=t{-}\Delta T}^{t} \sum\limits_{k=1}^{K^{(j)}} D\!\tes{\vec{b}^{(j)}_k(t'), \vec{\beta}_k^{(j)}(t')}
\label{eq:optimization naive}
\end{equation}
where $K^{(j)}$ is the number of entities of type $j$ and $D$ is an arbitrary distance metric which can double as a loss function.
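For illustration, the empirical average of Eq. (\ref{eq:optimization naive}) could be computed as follows, where \texttt{distance} stands in for the arbitrary metric $D$.
\begin{verbatim}
def empirical_loss(beliefs, estimates, distance):
    """Average D(b, beta) over the K^(j) entities of type j and all
    time steps in the lookback window. Both arguments are lists
    (over time) of lists (over entities)."""
    total, count = 0.0, 0
    for b_t, beta_t in zip(beliefs, estimates):
        for b, beta in zip(b_t, beta_t):
            total += distance(b, beta)
            count += 1
    return total / max(count, 1)
\end{verbatim}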
A full expansion of $\mathcal{M}^{(j)}$ in terms of the aggregator operations in time Eqs. (\ref{eq:intrinsic-time-aggregator}, \ref{eq:extrinsic-time-aggregator}) and space Eq. (\ref{eq:space-aggregator-explicit}) yields the estimated belief at time $t$
{\small
\begin{eqnarray}
\vec{\beta}_k^{(j)} & = & \mathcal{M}^{(j)}\!\!\tes{\hat{f}_k^{(j)}(t) \oplus \hak{\bigoplus\limits_{\forall i \mid \varepsilon^{(j)}\in\xi^{(i)}} \!\!\!\! \hat{\chi}_k^{(j,i)}(t)}} \\
& = & \braces{\mbox{time-aggregations Eqs. (\ref{eq:intrinsic-time-aggregator}) and (\ref{eq:extrinsic-time-aggregator})}} \nonumber\\
& = & \mathcal{M}^{(j)}\!\!\tes{\mathcal{M}_f^{(j)}\!\!\tes{H(\vec{f}_k^{(j)})\Big|_{\scriptsize t{-}\Delta T}^{\scriptsize t}} \oplus \hak{\bigoplus\limits_{\forall i \mid \varepsilon^{(j)}\in\xi^{(i)}} \!\!\!\!\!\!\! \mathcal{M}_{\chi}^{(j, i)}\!\!\tes{H(\vec{\chi}_k^{(j, i)})\Big|_{\scriptsize t{-}\Delta T}^{\scriptsize t}}}} \label{eq:non-Markovian just before space-aggregation}\\
& = & \braces{\mbox{space-aggregation Eq. (\ref{eq:space-aggregator-explicit})}} \nonumber\\
& = & \mathcal{M}^{(j)}\!\!\tes{\underbrace{\mathcal{M}_f^{(j)}\!\!\tes{\underbrace{H(\underbrace{~~\vec{f}_k^{(j)}~~}_{\mbox{\scriptsize intr. update}})\Big|_{\scriptsize t{-}\Delta T}^{\scriptsize t}}_{\mbox{\scriptsize hist. of intr. updates}}}}_{\scriptsize\mbox{(latent) intrinsic features}} \oplus \underbrace{\hak{\bigoplus\limits_{\forall i \mid \varepsilon^{(j)}\in\xi^{(i)}} \!\!\!\!\!\!\! \mathcal{M}_{\chi}^{(j, i)}\!\!\tes{\underbrace{\evalat*{H\tes{\underbrace{\chi_l^{(i)}\!\!\tes{\bigcup\limits_{\varepsilon_{k'}^{(j')}\in\xi^{(i)}}\braces{\hat{d}_{k'}^{(j')}}}}_{\mbox{\scriptsize extrinsic update / interaction}}}}{\scriptsize t{-}\Delta T}^{\scriptsize t{-}1}}_{\mbox{\scriptsize history of extrinsic updates}}}}}_{\mbox{\scriptsize (latent) extrinsic features}}}, \nonumber\\
\end{eqnarray}
}
or, if one were to apply the Markovian assumption on Eq. (\ref{eq:non-Markovian just before space-aggregation}),
{\small
\begin{eqnarray}
\vec{\beta}_k^{(j)} & = & \mathcal{M}^{(j)}\!\!\tes{\tilde{\mathcal{M}}_f^{(j)}\!\!\tes{\vec{f}_k^{(j)}(t), \hat{f}_k^{(j)}(t{-}1)} \oplus \hak{\bigoplus\limits_{\forall i \mid \varepsilon^{(j)}\in\xi^{(i)}} \tilde{\mathcal{M}}_{\chi}^{(j, i)}\!\!\tes{\vec{\chi}_k^{(j, i)}(t), \hat{\chi}_k^{(j, i)}(t{-}1)}}} \\
& = & \braces{\mbox{space-aggregation}} \nonumber\\
& = & \mathcal{M}^{(j)}\!\!\tes{\tilde{\mathcal{M}}_f^{(j)}\!\!\tes{\vec{f}_k^{(j)}(t), \hat{f}_k^{(j)}(t{-}1)} \oplus \hak{\bigoplus\limits_{\forall i \mid \varepsilon^{(j)}\in\xi^{(i)}} \tilde{\mathcal{M}}_{\chi}^{(j, i)}\!\!\tes{\chi_l^{(i)}\!\!\tes{\bigcup\limits_{\varepsilon_{k'}^{(j')}\in\xi^{(i)}}\braces{\hat{d}_{k'}^{(j')}(t{-}1)}}, \hat{\chi}_k^{(j, i)}(t{-}1)}}}. \nonumber\\
\label{eq:estimation}
\end{eqnarray}}
One can thus see that the optimization problem of Eq. (\ref{eq:optimization naive}) is not limited to the parameters and hyper-parameters of $\mathcal{M}^{(j)}$ but also extends to those of the aggregators $\mathcal{M}_f^{(j)}$, $\mathcal{M}_{\chi}^{(j, i)}$, and $\chi^{(i)}$ such that the global optimum is given by
\begin{equation}
\operatornamewithlimits{argmin}_{\mathcal{M}^{(j)},~\bigcup\limits_i \mathcal{M}_{\chi}^{(j, i)},~\mathcal{M}_f^{(j)},~\chi^{(i)}} \frac{1}{K^{(j)}\Delta T} \sum\limits_{t'=t{-}\Delta T}^{t} \sum\limits_{k=1}^{K^{(j)}} D\!\tes{\vec{b}^{(j)}_k(t'), \vec{\beta}_k^{(j)}(t')}.
\label{eq:optimization}
\end{equation}
Notice how, unlike most of the literature on graph machine learning, the data aggregators are not dependent on any particular \textit{instances} of entities and interactions but only on their \textit{types} $j$ and $i$, respectively.
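This type-level (rather than instance-level) parameter sharing can be made explicit by keying the learnable modules on types only; the sketch below uses illustrative dimensions and plain dictionaries (a real implementation would register the modules, e.g. via \texttt{torch.nn.ModuleDict}).
\begin{verbatim}
import torch

E_TYPES, I_TYPES = 3, 2  # illustrative numbers of entity/interaction types

modules = {
    # One projection M^(j) and intrinsic aggregator M_f^(j) per entity type:
    "M":     {j: torch.nn.Linear(32, 4)   for j in range(E_TYPES)},
    "M_f":   {j: torch.nn.GRUCell(16, 32) for j in range(E_TYPES)},
    # One extrinsic aggregator per (entity type, interaction type) pair:
    "M_chi": {(j, i): torch.nn.GRUCell(16, 32)
              for j in range(E_TYPES) for i in range(I_TYPES)},
    # One space-aggregator chi^(i) per interaction type:
    "chi":   {i: torch.nn.Linear(64, 16)  for i in range(I_TYPES)},
}
\end{verbatim}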
\subsection{Remarks on the aggregation processes}
\label{sec:Remarks}
Some remarks are in order regarding the design of Fig. \ref{fig:architecture_non_markovian}, especially about the aggregation processes.
First, there is no time-aggregation for the targets. This is motivated by the fact that the aggregators should focus on inferring the \textit{current} target $\vec{b}_k^{(j)}(t)$ instead of trying to reproduce its history. Except for feeding the latest target $\vec{b}_k^{(j)}(t{-}1)$ to neighbouring entities via the space-aggregator, the targets of any given entity should not leak into its feature space, to avoid the risk of overfitting.\footnote{The targets from the previous time step of the entity can---and should---however be used by its neighbouring entities via the space-aggregation process.} Another reason is that, unlike pure time-series problems (e.g., stock prediction) where the target's history is itself the main feature, the present problem much more heavily entangles the entities with their environment such that, at any time step $t$, a target can be abruptly overwritten, in complete disregard for any historical continuity. Note that the target approximations $\vec{\beta}_k^{(j)}$ are not reused anywhere in the aggregation either, so as to ensure that no error gets inadvertently amplified by a feedback loop.
Note that the above choices to exclude the targets from (most) aggregations are not founded on an absolute rationale. They are merely precautions against overfitting and error amplification. One may very well devise regularization mechanisms that will alleviate these concerns and efficiently incorporate target histories as meaningful features in themselves.
A final observation is that not all three updates---i.e., of targets, intrinsic, and extrinsic features---systematically happen at every time step. Whenever an update is ``missing'', one shall not replace it with a null value, but simply carry over the last update together with its timestamp. Here again, this is not an absolute requirement, as one could adopt alternative approaches where missing values are represented by dedicated placeholder values. Such design choices are among those that require experimentation with a particular use case (cf. \S\ref{sec:Outlook}).
\section{Outlook}
\label{sec:Outlook}
In the broader context of ontologies, be they social, man-made, or natural, applications of machine learning have mostly been cross-sectional in that they deal with the estimation of a specific entity type only. This work attempts to further the reach of machine learning to arbitrary ontologies where the entities are attributed, dynamic, heterogeneous, and---more importantly---interacting with each other in ways that do not necessarily fit traditional graph topologies made up of pairwise edges. Estimation---i.e., classification or regression---is therefore not constrained to any given entity type anymore but can be applied across all heterogeneous entities. Examples of such ontologies are illustrated in Fig. \ref{fig:ontologies}.
\begin{figure}
{\small
\begin{subfigure}{1\textwidth}
\begin{tabular}{ll}
\multicolumn{2}{l}{\sc Cybersecurity}\vspace{7pt}\\
& \begin{tabularx}{\textwidth}{lXXX}
\toprule
\textbf{entity} & \textbf{intrinsic} & \textbf{extrinsic} & \textbf{belief} \\
\toprule
e-mail & date sent; date received; text in the body; etc. & IP address of the sending server; DMARC policy; SPF policy; attached files and their attributes; reply-to e-mail address; etc. & benign; phishing; spam; malware; etc. \\\\
IP address & autonomous system; geographical location; etc. & domain names hosted at that IP; registrants of those domains; active ports; etc. & benign; hosts a particular piece of malware; etc. \\\\
file & size in bytes; hash signature; metadata; etc. & location in the hosting device; permissions; name of the author; etc. & benign; virus; spyware; ransomware; etc. \\
etc. $\ldots$ & & & \\
\bottomrule
\end{tabularx}\\
& \\
& \textbf{Interactions:} an e-mail is sent; a domain name is queried; a file is opened; a domain name is registered; etc.
\end{tabular}
\caption{Ontology of cybersecurity}
\label{fig:cybersecurity ontology}
\end{subfigure}\vspace{10pt}\\
\begin{subfigure}{1\textwidth}
\begin{tabular}{ll}
\multicolumn{2}{l}{\sc Disease spread}\vspace{7pt}\\
& \begin{tabularx}{\textwidth}{lXXX}
\toprule
\textbf{entity} & \textbf{intrinsic} & \textbf{extrinsic} & \textbf{belief} \\
\toprule
human & age; genetic predisposition; co-morbidity; etc. & walk of life; sociability; medical insurance; etc. & virus-free; asymptomatic carrier; at risk; etc. \\\\
animal & genetic predisposition; etc. & proximity to humans; etc. & carrier; virus-free; etc. \\\\
object & surface area; surface temperature; humidity; etc. & public; private; shared; frequency of disinfection; etc. & deposited with the virus; virus-free; etc. \\\\
venue & capacity; ventilation; room temperature; etc. & public; private; shared; frequency of disinfection; etc. & hot-spot for infection; virus-free; etc. \\
etc. $\ldots$ & & & \\
\bottomrule
\end{tabularx}\\
& \\
& \textbf{Interactions:} several people share the same object; an animal is sold at a market; two people shake hands; etc.
\end{tabular}
\caption{Ontology of disease spread}
\label{fig:disease ontology}
\end{subfigure}
}
\caption{Examples of ontologies and their corresponding building blocks for the purposes of supervised machine learning.}
\label{fig:ontologies}
\end{figure}
The reference architecture presented herein is intentionally kept as high-level as possible, thereby allowing for the modular implementation of the four aggregators in a way that is agnostic as to their inner components, be they neural networks or any other technique. Given any particular problem domain, further investigation is therefore needed as to the particular design of the aggregators, in particular when it comes to such issues as
\begin{itemize}
\item the initialization of the latent representations $\hat{f}_k^{(j)}(t=0)$ and $\hat{\chi}_k^{(j, i)}(t=0)$,
\item the choice of the mappings $\mathcal{M}^{(j)}$, $\mathcal{M}_f^{(j)}$, $\mathcal{M}_{\chi}^{(j, i)}$, $\chi_l^{(i)}$ and their respective hyperparameters,
\item the initialization of the mappings via transfer learning, whenever applicable,
\item the handling of missing values for $\vec{f}_k^{(j)}(t)$, $\vec{b}_k^{(j)}(t)$ or $\vec{\chi}_k^{(j,i)}(t)$ at any given time step $t$,
\item the validity of the Markovian assumption in the time-aggregators, or
\item the choice of training scheme (batch vs. online).
\end{itemize}
\begin{ack}
This work was supported by the Innovation Fund Denmark. The author would like to thank Egon Kidmose for valuable feedback on the manuscript.
\end{ack}
\bibliographystyle{plainnat}
\hypertarget{abstract}{\section*{Abstract}\label{abstract}}
This paper aims to help structure the risk landscape associated with large-scale Language Models (LMs). In order to foster advances in responsible innovation, an in-depth understanding of the potential risks posed by these models is needed. A wide range of established and anticipated risks are analysed in detail, drawing on multidisciplinary literature from computer science, linguistics, and social sciences.
The paper outlines six specific risk areas: \protect\hyperlink{i.-discrimination-exclusion-and-toxicity}{I. Discrimination, Exclusion and Toxicity}, \protect\hyperlink{ii.-information-hazards}{II. Information Hazards}, \protect\hyperlink{iii.-misinformation-harms}{III. Misinformation Harms}, \protect\hyperlink{iv.-malicious-uses}{IV. Malicious Uses}, \protect\hyperlink{v.-human-computer-interaction-harms}{V. Human-Computer Interaction Harms}, \protect \hyperlink{vi.-automation-access-and-environmental-harms}{VI. Automation, Access, and Environmental Harms}.
The first risk area discusses fairness and toxicity risks in large-scale language models. This includes four distinct risks: LMs can create unfair discrimination and representational and material harm by perpetuating stereotypes and social biases, i.e. harmful associations of specific traits with social identities. Social norms and categories can exclude or marginalise those who exist outside them. Where a LM perpetuates such norms - e.g. that people called ``Max'' are ``male'', or that ``families'' always consist of a father, mother and child - such narrow category use can deny or burden identities who differ. Toxic language can incite hate or violence or cause offense. Finally, a LM that performs more poorly for some social groups than others can create harm for disadvantaged groups, for example where such models underpin technologies that affect these groups. These risks stem in large part from choosing training corpora that include harmful language and overrepresent some social identities.
The second risk area includes risks from private data leaks or from LMs correctly inferring private or other sensitive information. These risks stem from private data that is present in the training corpus and from advanced inference capabilities of LMs.
The third risk area comprises risks associated with LMs providing false or misleading information. This includes the risk of creating less well-informed users and of eroding trust in shared information. Misinformation can cause harm in sensitive domains, such as bad legal or medical advice. Poor or false information may also lead users to perform unethical or illegal actions that they would otherwise not have performed. Misinformation risks stem in part from the processes by which LMs learn to represent language: the underlying statistical methods are not well-positioned to distinguish between factually correct and incorrect information.
The fourth risk area spans risks of users or product developers who try to use LMs to cause harm. This includes using LMs to increase the efficacy of disinformation campaigns, to create personalised scams or fraud at scale, or to develop computer code for viruses or weapon systems.
The fifth risk area focuses on risks from the specific use case of a ``conversational agent'' that directly interacts with human users. This includes risks from presenting the system as ``human-like'', possibly leading users to overestimate its capabilities and use it in unsafe ways. Another risk is that conversation with such agents may create new avenues to manipulate or extract private information from users. LM-based conversational agents may pose risks that are already known from voice assistants, such as perpetuating stereotypes by self-presenting e.g. as ``female assistant''. These risks stem in part from LM training objectives underlying such conversational agents and from product design decisions.
The sixth risk area includes risks that apply to LMs and Artificial Intelligence (AI) systems more broadly. Training and operating LMs can incur high environmental costs. LM-based applications may benefit some groups more than others and the LMs themselves are inaccessible to many. Lastly, LM-based automation may affect the quality of some jobs and undermine parts of the creative economy. These risks manifest particularly as LMs are widely used in the economy and benefits and risks from LMs are globally unevenly distributed.
In total, we present 21 risks. We then discuss the points of origin of different risks and point to potential risk mitigation approaches. The point of origin of a harm may indicate appropriate mitigations: for example, the risk of leaking private data originates from this data being present in the training dataset. It can be mitigated at the point of origin, by better redaction or curation of training data. However, other mitigation approaches may also be applicable and ensure more robust mitigation overall. For example, algorithmic tools applied during training, such as differential privacy methods, or product decisions, such as constraining access and use cases of the LM, are additional mitigation approaches that can be pursued in parallel. Risk mitigation approaches range from social or public policy interventions, to technical solutions and research management, to participatory projects and product design decisions.
Lastly, we discuss organisational responsibilities in implementing such mitigations, and the role of collaboration. Measuring and mitigating ethical and social risks effectively requires a wide range of expertise, and fair inclusion of affected communities. It is critical to implement mitigations with a broad view of the landscape of risks, to ensure that mitigating against one risk of harm does not aggravate another. Otherwise, for example, mitigation approaches to toxic speech can inadvertently lead to lower LM performance for some social groups. We highlight directions for further research, particularly on expanding the toolkit for assessing and evaluating the outlined risks in LMs, and the need for inclusive participatory methods. Finally, we conclude by showing how the present work - of structuring the risk landscape - is the first step in a broader framework of responsible innovation.
\tableofcontents
\clearpage \hypertarget{readers-guide}{\section*{Reader's guide}\label{readers-guide}}
This is a long document. The report is divided into three segments.
First, the \protect\hyperlink{introduction}{Introduction} provides a brief introduction to Language Models.
Second, the \protect\hyperlink{classification-of-harms-from-language-models}{Classification of harms from language models} gives a taxonomy and detailed account of a range of social and ethical risks associated with Language Models.
Third, the \protect\hyperlink{discussion}{Discussion} and \protect\hyperlink{directions-for-future-research}{Directions for future research} explore some underlying causes of these risks, a range of mitigation approaches, and possible challenges to be addressed through future research.
Individual sections can be read independently or together. We recommend:
\begin{itemize}
\item
\begin{quote}
\textbf{1 minute read:} Study \protect\hyperlink{risks_table}{Table~1} for a high-level overview of the risks considered.
\end{quote}
\item
\begin{quote}
\textbf{10 minute read:} Read the \protect\hyperlink{abstract}{Abstract} and \protect\hyperlink{risks_table}{Table~1} for an overview of the risks considered. Then skim all bold text in the segment on \protect\hyperlink{classification-of-harms-from-language-models}{Classification of harms from language models} and skim \protect\hyperlink{directions-for-future-research}{Directions for future research} for an overview of risks and challenges.
\end{quote}
\item
\begin{quote}
\textbf{Readers who actively work on LMs:} We encourage you to skim all bold text in the segment on \protect\hyperlink{classification-of-harms-from-language-models}{Classification of harms from language models}, and to get stuck in risks that directly relate to your own work and interest - as you will likely be able to help solve some of the field's core challenges in this domain.
\end{quote}
\item
\begin{quote}
\textbf{Readers with no background on LMs:} We recommend you read the \protect\hyperlink{abstract}{Abstract} and \protect\hyperlink{introduction}{Introduction} first as these introduce key terminology that is used in this report. Next, study \protect\hyperlink{risks_table}{Table~1} for a high-level overview of the risks considered and read the risk headers and example dialog boxes for each risk in the \protect\hyperlink{classification-of-harms-from-language-models}{Classification of harms from language models}. Get stuck in risks that are of interest to you and read the \protect\hyperlink{discussion}{Discussion} on challenges in mitigating these risks.
\end{quote}
\item
\begin{quote}
\textbf{Readers with an interest in a particular risk or type of harm}: We encourage you to read the \protect\hyperlink{abstract}{Abstract}, \protect\hyperlink{risks_table}{Table~1} and \protect\hyperlink{discussion}{Discussion} for context on the broader risk landscape and approaches to mitigation, in addition to reading the specific section on the risk that piques your interest.
\end{quote}
\item
\begin{quote}
\textbf{Readers with an interest in approaches to mitigating harms:} We recommend you read the \protect\hyperlink{abstract}{Abstract} for an overview of the harms considered and read \protect\hyperlink{risks_table}{Table~1} with a focus on the mechanisms underlying each risk area. Jump to the \protect\hyperlink{discussion}{Discussion} on approaches to mitigating risks and read \protect\hyperlink{directions-for-future-research}{Directions for future research} on methodological and normative challenges in assessing and mitigating risks, and proposals for addressing these challenges.
\end{quote}
\end{itemize}
\hypertarget{introduction}{\chapter{Introduction}\label{introduction}}
Language Models (LMs)\footnote{These recent LMs are also referred to as ``large language models'', or ``large-scale language models''.} are rapidly growing in size and effectiveness, yielding new breakthroughs and attracting increasing research attention \citep{Brownetal2020,Fedusetal2021,Microsoft2020, Rae2021}. Several Artificial Intelligence (AI) research labs are pursuing LM research, spurred by the promise these models hold for advancing research and for a wide range of beneficial real-world applications. Some research groups have suggested that recent large-scale LMs may be a `foundational' breakthrough technology, potentially affecting many aspects of life \citep{Bommasanietal2021}. The potential impact of such LMs makes it particularly important that actors in this space lead by example on responsible innovation.
Responsible innovation entails that in addition to developing the technology, it is essential to thoughtfully assess the potential benefits as well as potential risks that need to be mitigated \citep{Stilgoeetal2013}. Prior research has explored the potential for ethical and safe innovation of large-scale LMs, including interdisciplinary workshops to scope out risks and benefits \citep{Tamkinetal2021}, papers that outline potential risks \citep{Benderetal2021,Kentonetal2021,Dinanetal2021,Bommasanietal2021}, and papers identifying ways to mitigate potential harms \citep{Welbletal2021,SolaimanDennison2020,Chenetal2021}.\footnote{Note that the origin of a risk is not a perfect guide to potential mitigations - a point we discuss in more detail in \protect\hyperlink{understanding-the-point-of-origin-of-a-risk}{Understanding the point of origin of a risk}.} For this report, we seek to build on this prior work by proposing an initial taxonomy of risks associated with LM development and use, as well as outlining concrete next steps and directions for future research that supports responsible innovation for LMs.
The overall aim of this report is three-fold:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
\begin{quote}
Underpin responsible decision-making by organisations working on LMs by broadening and structuring the discourse on AI safety and ethics in this research area,
\end{quote}
\item
\begin{quote}
Contribute to wider public discussion about risks and corresponding mitigation strategies for LMs,
\end{quote}
\item
\begin{quote}
Guide mitigation work by research groups working on LMs. We aim to support the mutual exchange of expertise in this area, to help make the risks posed by LMs actionable.
\end{quote}
\end{enumerate}
We structure the identified risks in a taxonomy of ethical and social risks associated with LMs, under 6 risk areas: \protect\hyperlink{i.-discrimination-exclusion-and-toxicity}{I. Discrimination, Exclusion and Toxicity}, \protect\hyperlink{ii.-information-hazards}{II. Information Hazards}, \protect\hyperlink{iii.-misinformation-harms}{III. Misinformation Harms}, \protect\hyperlink{iv.-malicious-uses}{IV. Malicious Uses}, \protect\hyperlink{v.-human-computer-interaction-harms}{V. Human-Computer Interaction Harms}, \protect\hyperlink{vi.-automation-access-and-environmental-harms}{VI. Automation, Access, and Environmental Harms}. An overview of the risks that fall under each risk area can be found in the \protect\hyperlink{classification-of-harms-from-language-models}{Classification of harms from language models} part of the report.
Each risk is discussed in detail with regard to the nature of the harm, empirical examples, and additional considerations. For each risk, we provide a fictitious example to illustrate how the risk in question may manifest.\footnote{Each of these examples assumes a dialogue format where a human supplies a prompt and the LM offers a response. There are many LM use cases beyond such conversational agents. These examples are for illustrative purposes only, and the same risk may manifest differently in other LM use cases.} However the risks described apply to LMs more generally and do not depend on the dialogue modality unless otherwise specified. Since several of the risks discussed below are neither novel nor exclusive to LMs or related technologies, we offer context on how each risk manifests in existing language technologies. We also mark each risk as either ``anticipated'' or ``observed'', depending on whether a given risk has already been observed or whether further work is needed to indicate real-world manifestations of this risk. The creation of a taxonomy of risks supports the exercise of foresight in this space, with the aim of guiding action to resolve any issues that can be identified in advance.
Responsible innovation is a collaborative endeavour. In order to anticipate and mitigate risks posed by technology successfully, we need to view these issues through multiple lenses and perspectives. This report was written by a large group of researchers with varied disciplinary backgrounds and areas of expertise. To review the risk landscape as comprehensively as possible, we collated potential risks from a wide range of sources including analyses from the fields of AI ethics, AI safety, race and gender studies, linguistics and natural language processing and studies at the intersection of society and technology (also referred to as sociotechnical studies), as well as analyses by civil society organisations and news reports. Further risks were added based on our own experience and expertise. Beyond publishing research, we believe responsible innovation also requires inclusive dialogue between stakeholders in AI development which includes affected communities and the wider public \citep{Mohamedetal2020,GabrielatUCL2020,Stilgoeetal2013,IbrahimcitationinMurgiafortheFinancialTimes2021}. In the future, we look to continue to deepen our understanding of risks and mitigations including by working with external partners and communities.
\hypertarget{limitations}{\section{Limitations}\label{limitations}}
Note that this report is part of a broader research programme working toward the responsible innovation of LMs and necessarily leaves some questions unanswered. For example, we do not discuss potential beneficial applications of LMs nor do we offer a comprehensive overview of potential use cases. Nor do we attempt to perform a full ethical evaluation of LMs, which must weigh both the potential benefits and risks of a given technology. To assess the overall balance of benefit and cost, separate analysis of the benefits arising from proposed LM applications would be needed. Instead, the focus here is on anticipating and structuring the risk landscape, with the intention of supporting a larger constructive research effort.
This report is also necessarily a snapshot in time: it was initiated in autumn 2020 and completed in summer 2021. It is likely that we miss risks arising from LMs that depend, for their visibility, on the passage of time. As such, the presented taxonomy is merely a starting point and will need to be updated as new challenges come into focus and additional perspectives are brought to bear on these questions.
This report focuses on risks associated with \emph{operating} LMs. Risks of harm that are associated with training are not discussed. This includes concerns about the working conditions of data annotators or ``ghost workers'' \citep{GraySuri2019}, the ethics of supply chains of hardware on which LM computations are run \citep{Crawford2021}, or environmental costs of training such models \citep{Strubelletal2019,Benderetal2021,Pattersonetal2021,Schwartzetal2020} which are only briefly referenced in the section on \protect\hyperlink{vi.-automation-access-and-environmental-harms}{VI. Automation, access, and environmental harms}. This report also does not cover risks that depend on specific applications.
This report excludes risks which the authors anticipate to depend on capabilities that are several years in the future, for example because they depend on capabilities that are several step changes beyond the state-of-the-art. A subset of such long-term risks is addressed in literature on existential risk and AI Safety \citep{Armstrongetal2012,Kentonetal2021}. This report also does not cover risks that depend on superintelligence as described in \citep{Bostrom2014}.
Finally, this report does not discuss risks that depend on multiple modalities, for example from models that combine language with other domains such as vision or robotics. While several of the insights in this report are translatable to such models, these require distinct risk assessments. For some discussion on risks associated with multi-modal large models, see \citep{Bommasanietal2021}.
\hypertarget{note-on-terminology}{\subsection{Note on terminology}\label{note-on-terminology}}
This report focuses on the risks of large-scale language models, including in specific applications of these models such as conversational assistants, or in other language technologies. Several of these risks also apply to smaller language models. For detailed definitions of Language Models, Language Agents, and Language Technologies please refer to the section on \protect\hyperlink{definitions}{Definitions} in the \protect\hyperlink{appendix}{Appendix}.
For simplicity we refer to ``LMs'' throughout. Where risks are unique to specific types of applications, such as conversational agents, this is explicitly stated.
\hypertarget{a-brief-history-of-language-models}{\section{A Brief history of Language Models}\label{a-brief-history-of-language-models}}
\hypertarget{origins}{\subsection{Origins}\label{origins}}
The main methodology underpinning contemporary large-scale language models traces its origins to methods developed by the research group of Frederick Jelinek on Automatic Speech Recognition (ASR) in the 1970s and '80s \citep{Jelinek1976}. This research group built on prior work in statistics by Claude Shannon \citep{Shannon1948} and Andrey Markov \citep{Markov1913}. In parallel, James Baker \citep{Baker1975} developed a similar approach to ASR (see \citep{JurafskyMartin2021}).
Jelinek's group pioneered an information theoretic approach to ASR, observing that performing any task that requires producing language conditioned on an input using a probability distribution $p(\text{language} | \text{input})$ can be factored into a language model representing a probability distribution $p(\text{language})$ multiplied by the task specific distribution $p(\text{input} | \text{language})$. This factorisation suggests that general LMs $p(\text{language})$ can aid language prediction tasks where the LM captures the relevant language distribution. Whilst this factorisation is not explicitly used in most current systems, it implicitly underpins current LM research and is a useful way to understand the role language modelling plays in specific language technologies such as conversational agents, machine translation, and question answering.
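Made explicit via Bayes' rule, and noting that the input is fixed at prediction time, the factorisation reads
\begin{equation*}
p(\text{language} \mid \text{input}) \;=\; \frac{p(\text{input} \mid \text{language})\, p(\text{language})}{p(\text{input})} \;\propto\; p(\text{input} \mid \text{language})\, p(\text{language}).
\end{equation*}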
\hypertarget{transformer-models}{\subsection{Transformer models}\label{transformer-models}}
More recently, the transformer architecture was developed \citep{Vaswanietal2017}. Transformers are a class of architectures that use a series of so-called transformer blocks comprising a self-attention layer followed by a feedforward layer, linked together with residual connections. The self-attention layer helps the model to consider neighbouring words in the input as it processes a specific word. Originally, the transformer architecture was proposed for the task of machine translation \citep{Vaswanietal2017}. \citep{Radfordetal2018} use a modified version applied to the task of language modeling (predicting the next word in a sentence). Subsequent work on LMs \citep{Radfordetal2018b,Brownetal2020} uses a similar architecture. An accessible visual introduction to the transformer architecture can be found in \citep{Alammar2018}. Recent language models built on the transformer architecture have been fine-tuned directly, without the need for task-specific architectures \citep{Radfordetal2018b,Devlinetal2018,HowardRuder2018}.
\hypertarget{large-language-models}{\subsection{``Large'' Language Models}\label{large-language-models}}
The recent upswing in LM research is rooted in the capacity to increase LM size in terms of number of parameters and size of training data \citep{Benderetal2021}. Training models on extremely large datasets such as the Colossal Clean Crawled Corpus (C4) \citep{Raffeletal2019} and WebText \citep{Radfordetal2018} resulted in sequence prediction systems with much more general applicability compared to the prior state-of-the-art \citep{Brownetal2020,Fedusetal2021,Microsoft2020}. These models also displayed greater few-shot and zero-shot learning capabilities compared to smaller LMs \citep{Brownetal2020}. These properties were found to greatly simplify the development of task-specific language agents (LAs) by reducing the adaptation process to prompt design \citep{Zhangetal2021}. The insight that powerful sequence prediction systems could be created by scaling up the size of LMs and training corpora motivated an upsurge in interest and investment in LM research by several AI research labs.
\hypertarget{classification-of-harms-from-language-models}{\chapter{Classification of harms from language models}\label{classification-of-harms-from-language-models}}
\fancyhead[C]{\footerfont \rightmark}
In this section we outline our taxonomy of ethical and social risks of harm associated with Language Models. We identify 21 risks of harm, organised into six risk areas (for an overview see \protect\hyperlink{risks_table}{Table~1}).
In this table we also note the mechanisms by which different groups of risks emerge.
\begin{table}
\hypertarget{risks_table}{\emph{\textbf{Table 1.} Overview of all risks covered in this report.}
\label{risks_table}}
\begin{longtable}
[]{@{} >{
\raggedright\arraybackslash}p{(\columnwidth - 0\tabcolsep) * \real{1.00}}@{}} \toprule
\begin{minipage}
[b]{\linewidth}
\raggedright
\begin{enumerate}
\def\labelenumi{\Roman{enumi}.}
\item \protect\hyperlink{i.-discrimination-exclusion-and-toxicity}{\textbf{Discrimination, Exclusion and Toxicity}} \\
\textbf{Mechanism}: These risks arise from the LM accurately reflecting natural speech, including unjust, toxic, and oppressive tendencies present in the training data. \\
\textbf{Types of Harm}: Potential harms include justified offense, material (allocational) harm, and the unjust representation or treatment of marginalised groups.
\begin{itemize}
\item Social stereotypes and unfair discrimination
\item Exclusionary norms
\item Toxic language
\item Lower performance by social group
\end{itemize}
\item \protect\hyperlink{ii.-information-hazards}{\textbf{Information Hazards}} \\
\textbf{Mechanism}: These risks arise from the LM predicting utterances which constitute private or safety-critical information which are present in, or can be inferred from, training data. \\
\textbf{Types of Harm}: Potential harms include privacy violations and safety risks.
\begin{itemize}
\item Compromise privacy by leaking private information
\item Compromise privacy by correctly inferring private information
\item Risks from leaking or correctly inferring sensitive information
\end{itemize}
\item \protect\hyperlink{iii.-misinformation-harms}{\textbf{Misinformation Harms}} \\
\textbf{Mechanism}: These risks arise from the LM assigning high probabilities to false, misleading, nonsensical or poor quality information. \\
\textbf{Types of Harm}: Potential harms include deception, material harm, or unethical actions by humans who take the LM prediction to be factually correct, as well as wider societal distrust in shared information.
\begin{itemize}
\item Disseminating false or misleading information
\item Causing material harm by disseminating misinformation e.g. in medicine or law
\item Nudging or advising users to perform unethical or illegal actions
\end{itemize}
\item \protect\hyperlink{iv.-malicious-uses}{\textbf{Malicious Uses}} \\
\textbf{Mechanism}: These risks arise from humans intentionally using the LM to cause harm. \\
\textbf{Types of Harm}: Potential harms include undermining public discourse, crimes such as fraud, personalised disinformation campaigns, and the weaponisation or production of malicious code.
\begin{itemize}
\item Reducing the cost of disinformation campaigns
\item Facilitating fraud and impersonation scams
\item Assisting code generation for cyber attacks, weapons, or malicious use
\item Illegitimate surveillance and censorship
\end{itemize}
\item \protect\hyperlink{v.-human-computer-interaction-harms}{\textbf{Human-Computer Interaction Harms}} \\
\textbf{Mechanism:} These risks arise from LM applications, such as Conversational Agents, that directly engage a user via the mode of conversation. \\
\textbf{Types of Harm}: Potential harms include unsafe use due to users misjudging or mistakenly trusting the model, psychological vulnerabilities and privacy violations of the user, and social harm from perpetuating discriminatory associations via product design (e.g. making ``assistant'' tools by default ``female.'')
\begin{itemize}
\item Anthropomorphising systems can lead to overreliance or unsafe use
\item Create avenues for exploiting user trust to obtain private information
\item Promoting harmful stereotypes by implying gender or ethnic identity
\end{itemize}
\item \protect\hyperlink{vi.-automation-access-and-environmental-harms}{\textbf{Automation, access, and environmental harms}} \\
\textbf{Mechanism}: These risks arise where LMs are used to underpin widely used downstream applications that disproportionately benefit some groups rather than others. \\
\textbf{Types of Harm}: Potential harms include increasing social inequalities from uneven distribution of risk and benefits, loss of high-quality and safe employment, and environmental harm.
\begin{itemize}
\item Environmental harms from operating LMs
\item Increasing inequality and negative effects on job quality
\item Undermining creative economies
\item Disparate access to benefits due to hardware, software, skill constraints
\end{itemize}
\end{enumerate}
\end{minipage}
\\
\bottomrule
\end{longtable}
\end{table}
\hypertarget{i.-discrimination-exclusion-and-toxicity}{\section{Discrimination, Exclusion and Toxicity}\label{i.-discrimination-exclusion-and-toxicity}}
Social harms that arise from the language model producing discriminatory or exclusionary speech
\hypertarget{overview}{\subsection{Overview}\label{overview}}
Language has long been a vehicle for perpetuating injustices against marginalised populations and conferring power to oppressive institutions \citep{Craftetal2020,LippiGreen2012}. Social harms that manifest in language can also arise from language predicted by language models (LMs). We distinguish between the following harms. First, language that promotes stereotypes or causes unfair discrimination\footnote{Note that the terms ``bias'' and ``discrimination'' have different meanings in classical statistics compared to sociotechnical studies; for a definition of these terms, see the \protect\hyperlink{definitions}{Definitions} in the \protect\hyperlink{appendix}{Appendix}.} can contribute to the oppression of those at social margins \citep{Crenshaw2017}. Second, language that reinforces social norms can exclude identities that exist outside these norms \citep{Benderetal2021,Foucault1975}. Third, language can be `toxic', for example by inciting violence or causing offense \citep{FortunaNunes2018}. Fourth, a form of discrimination emerges when language technologies perform better for some social groups than others.
Risks covered in this section:
\begin{itemize}
\item \protect\hyperlink{social-stereotypes-and-unfair-discrimination}{Social stereotypes and unfair discrimination}
\item \protect\hyperlink{exclusionary-norms}{Exclusionary norms}
\item \protect\hyperlink{toxic-language}{Toxic language}
\item \protect\hyperlink{lower-performance-for-some-languages-and-social-groups}{Lower performance by social group}
\end{itemize}
\hypertarget{social-stereotypes-and-unfair-discrimination}{\subsection{Social stereotypes and unfair discrimination}\label{social-stereotypes-and-unfair-discrimination}}
\begin{dialog}
Q: ``Two Muslims walked into a \ldots''
A: ``Texas cartoon contest and opened fire.''\footnote{Example from \citep{Abidetal2021}}
\end{dialog}
\emph{Observed risk: This is a well-documented problem that needs a mitigation strategy and tools to analyse the model against benchmarks of `acceptability'.}
\hypertarget{problem}{\subsubsection*{Problem}\label{problem}}
Perpetuating harmful stereotypes and discrimination is a well-documented harm in machine learning models that represent natural language \citep{Caliskanetal2017}. LMs that encode discriminatory language or social stereotypes can cause different types of harm. It may be useful to distinguish between allocational and representational harms: allocational harms occur when resources and opportunities are unfairly allocated between social groups; they may occur when LMs are used in applications that inform decisions affecting persons. Representational harms include stereotyping, misrepresenting, and demeaning social groups (Barocas and Wallach, cited in \citep{Blodgettetal2020}).
Unfair discrimination manifests in differential treatment or access to resources among individuals or groups based on sensitive traits such as sex, religion, gender, sexual orientation, ability and age. The dimensions along which such oppression occurs can also be rooted in culture-specific or otherwise localised social hierarchies. For example, the Hindu caste system underpins discrimination in India, but not across the globe \citep{Sambasivanetal2021}. Additionally, injustice can be compounded when social categories intersect, for example in the discrimination against a person that holds a marginalised gender and a marginalised religion \citep{Crenshaw1993}.
Allocational harm caused by discriminatory systems is particularly salient if bias occurs in applications that materially impact people's lives, such as predicting a person's creditworthiness \citep{Mehrabietal2019}, criminal recidivism \citep{Angwinetal2016}, or suitability to a job \citep{MutajbaMahapatra2019}. For example, a language technology that analyses CVs for recruitment, or to give career advice, may be less likely to recommend historically discriminated groups to recruiters, or more likely to recommend lower paying careers to marginalised groups. Unfair biases are already well-documented in machine learning applications ranging from diagnostic healthcare algorithms \citep{Obermeyeretal2019} to social outcome prediction \citep{Narayanan2019}; for a more general introduction see \citep{ChouldechovaRoth2018,Mehrabietal2021,KordzadehGhasemaghaei2021,ZouSchiebinger2018,Noble2018}. Based on our current understanding, such stereotyping and unfair bias are set to recur in language technologies building on LMs unless corrective action is taken.
\hypertarget{why-we-should-expect-lms-to-reinforce-stereotypes-and-unfair-discrimination-by-default}{\paragraph{Why we should expect LMs to reinforce stereotypes and unfair discrimination by default}\label{why-we-should-expect-lms-to-reinforce-stereotypes-and-unfair-discrimination-by-default}}
LMs are optimised to mirror language as accurately as possible, by detecting the statistical patterns present in natural language (see \protect\hyperlink{definitions}{Definitions}). The fact that LMs track patterns, biases, and priors in natural language is not negative \emph{per se} \citep{Shahetal2020}. Rather, it becomes a problem when the training data is unfair, discriminatory, or toxic. In this case, the optimisation process results in models that mirror these harms. As a result, LMs that perform well with regard to their optimisation objective can work poorly with regard to social harms, insofar as they encode and perpetuate harmful stereotypes and biases present in the training data.
Stereotypes and unfair discrimination can be present in training data for different reasons. First, training data reflect historical patterns of systemic injustice when they are gathered from contexts in which inequality is the status quo. Training systems on such data entrenches existing forms of discrimination \citep{Browne2015}. In this way, barriers present in our social systems can be captured by data, learned by LMs, and perpetuated by their predictions \citep{Hampton2021}.
Second, training data can be biased because some communities are better represented in the training data than others. As a result, LMs trained on such data often model speech that fails to represent the language of those who are marginalised, excluded, or less often recorded. The groups that are traditionally underrepresented in training data are often disadvantaged groups: they are also referred to as the `undersampled majority' \citep{BuolamwinicitedinRaji2020}. The implications of unrepresentative training data for downstream biases and stereotyping in LMs demonstrate the power that is exercised by those who have influence over what data is used for model training \citep{Blodgettetal2020}. While in principle, LMs are optimised to represent language with high fidelity, they can also overrepresent small biases present in the training data, a phenomenon referred to as `bias amplification' \citep{Zhaoetal2017,WangRussakovsky2021}.
\hypertarget{examples}{\subsubsection*{Examples}\label{examples}}
Generative LMs have frequently been shown to reproduce harmful social biases and stereotypes. Predictions from the GPT-3 model \citep{Brownetal2020} were found to exhibit anti-Muslim and, to a lesser degree, antisemitic bias, where `\,``Muslim'' was analogised to ``terrorist'' in 23\% of test cases, while ``Jewish'' was mapped to ``money'' in 5\% of test cases \citep{Abidetal2021}\footnote{See also the authors' illustration \citep{illustration} of ``how hard it is to generate text about Muslims from GPT-3 that has nothing to do with violence'', and \citep{GershgornforOz2021}.}. Gender and representation biases were found in fictional stories generated by GPT-3 \citep{LucyBamman2021}, where female-sounding names were more often associated with stories about family and appearance, and described as less powerful than masculine characters.
The \emph{StereoSet} benchmark measures references to stereotypes of race, gender, religion, and profession in generative LMs and finds that the generative model GPT-2 \citep{Radfordetal2018} and the masked models BERT \citep{Devlinetal2018}, RoBERTa \citep{Liuetal2019}, and XLNet \citep{Yangetal2019} exhibit `strong stereotypical associations' \citep{Nadeemetal2020}. The CrowS-Pairs benchmark finds that cultural stereotypes were reproduced by likelihood estimates of the masked LMs BERT \citep{Devlinetal2018} and RoBERTa \citep{Liuetal2019,Nangiaetal2020}\footnote{Recent work critiques some current methods for measuring bias in LMs, highlighting the importance of further exploration on valid measures \citep{Blodgettetal2021}.}. The HONEST benchmark shows that GPT-2 and BERT sentence completions promote `hurtful stereotypes' across six languages \citep{Nozzaetal2020}, and discriminatory gender biases were found in contextual word embeddings by BERT \citep{Kuritaetal2019} and ELMo \citep{Zhouetal2019}. LMs trained on news articles and Wikipedia entries have been demonstrated to exhibit considerable levels of bias against particular country names, occupations, and genders \citep{Huangetal2019}.
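To illustrate the mechanics of such likelihood-based probes, the following minimal Python sketch scores a sentence pair by pseudo-log-likelihood under a masked LM, in the spirit of CrowS-Pairs. This is a didactic sketch rather than any benchmark's official implementation; the model name and the sentence pair are assumptions chosen for illustration.
\begin{verbatim}
# Sketch: compare masked-LM pseudo-log-likelihoods of a stereotypical
# and an anti-stereotypical sentence (CrowS-Pairs-style; illustrative only).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence):
    """Sum of log p(token | rest of sentence), masking one token at a time."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

# Hypothetical pair; real benchmarks use thousands of curated pairs.
print(pseudo_log_likelihood("The doctor finished his shift."))
print(pseudo_log_likelihood("The doctor finished her shift."))
\end{verbatim}
A systematic evaluation aggregates such comparisons over many pairs and social categories; a model that consistently assigns higher likelihood to the stereotypical member of each pair exhibits the kind of association these benchmarks measure.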
\hypertarget{additional-considerations}{\subsubsection*{\texorpdfstring{Additional considerations }{Additional considerations }}\label{additional-considerations}}
\hypertarget{underrepresented-groups-in-the-training-data}{\paragraph{Underrepresented groups in the training data}\label{underrepresented-groups-in-the-training-data}}
Training data reflect the views, values, and modes of communication by the communities whose language is captured in the corpus. For example, a dataset of Reddit user comments was found to encode discriminatory views based on gender, religion and race \citep{Ferreretal2020}. As a result, it is important to carefully select and account for the biases present in the training data. However, ML training datasets are often collected with little curation or supervision and without factoring in perspectives from communities who may be underrepresented \citep{SoGebru2020}. For more discussion of this, see also the section on \protect\hyperlink{why-we-should-expect-lms-to-reinforce-stereotypes-and-unfair-discrimination-by-default}{Why we should expect LMs to reinforce unfair bias, toxic speech, and exclusionary norms}.
\hypertarget{documentation-of-biases-in-training-corpora}{\paragraph{Documentation of biases in training corpora}\label{documentation-of-biases-in-training-corpora}}
The impact of training data on the LM makes it important to transparently disclose what groups, samples, voices and narratives are represented in the dataset and which may be missing. One format that has been proposed for such dataset documentation \citep{BenderFriedman2018} is `Datasheets' \citep{Gebruetal2020}. Some work in this direction includes documentation on the Colossal Clean Crawled Corpus (C4) that highlights the most prominently represented sources and references to help illuminate \emph{whose} biases are likely to be encoded in the dataset \citep{Dodgeetal2020}. Documentation of larger datasets is critical for anticipating and understanding the pipeline by which different harmful associations come to be reflected in the LM.
\hypertarget{training-data-required-to-reduce-bias-may-not-yet-exist}{\paragraph{Training data required to reduce bias may not yet exist}\label{training-data-required-to-reduce-bias-may-not-yet-exist}}
Approaches to biased training data range from curating dedicated training datasets to not building models in domains where such data does not exist.\footnote{Another proposed approach relies on synthetic data, although the efficacy of this approach remains uncertain and it raises distinct challenges, such as amplifying other biases \citep{ChenLuetal2021,Nikolenko2021,Ghalebikesabietal2021}.} Curating training data can help to make LMs fairer, but creating better datasets requires dedicated work \citep{Hutchinsonetal2021,SoGebru2020} and may require novel data curation pipelines and tools \citep{Dentonetal2020}. Training corpora for state-of-the-art LMs are extremely large, so that further innovation on semi-automated curation methods may be needed in order to make the curation of such datasets tractable. Determining what constitutes a truly fair and equitable training dataset may also require further research in Ethics and Law \citep{KohlerHausman2019}. In one high-profile, real-world example, researchers attempted to train a classifier to support recruitment, but found that the training data was inherently biased and saw no viable way to create a more equitable training dataset - leading to the research project being abandoned \citep{DastinReuters2018}\footnote{In this real-world example, a model ranking applicant suitability based on written CVs was biased against the term `women' (as in `women's chess club'). In an attempt to correct for this discriminatory performance, the model was initially corrected to not devalue a CV based on terms referring to `women'. However, the algorithm continued to espouse an unfair gender bias against women, simply because there had been a gender bias in Amazon's prior hiring history, which was reflected in the training data. As no sufficient data on successful female applicants was available to train or fine-tune the model to reduce its gender bias, the problem of de-biasing this algorithm seemed intractable, `executives lost hope for the project' \citep{DastinReuters2018}, and it was stopped.}.
\hypertarget{localised-stereotypes-are-hard-to-capture}{\paragraph{Localised stereotypes are hard to capture}\label{localised-stereotypes-are-hard-to-capture}}
As stereotypes change over time and vary between contexts, it is impossible for any given research team to be aware of, and up-to-date on, all relevant stereotypes that may cause harm or offense. In addition, the stereotypes at play in a given local context may only be knowable through committed ethnographic work on the ground \citep{MardaNarayan2021}. The expertise for identifying harmful stereotypes often lies with the lived experience of affected groups \citep{MillsinSullivanTuana2007}. This creates a challenge in knowing what stereotypes to search for, detect, and mitigate at the point of creating a LM. One way to help address this challenge is to use inclusive and fair participatory approaches \citep{Martinetal2020}, by establishing participatory mechanisms and institutions that can operate over time \citep{Sloaneetal2020}, and by providing broad and transparent dataset documentation.
\hypertarget{uncertainty-on-downstream-uses-complicate-fairness-analyses}{\paragraph{Uncertainty on downstream uses complicates fairness analyses}\label{uncertainty-on-downstream-uses-complicate-fairness-analyses}}
Identifying affected communities is challenging during the early stages of building a LM when no particular application, product, or user group has been defined. It is unclear to what extent a training regime can be defined that increases model ``fairness'' whilst being agnostic on downstream applications \citep{HancoxLiKumar2021}. While some aspects of fairness are best considered at early research stages, more specific assessments of potential discrimination must be considered again at the point of developing a concrete application. Methods for detecting and mitigating harmful stereotypes can place an additional burden or privacy cost on minorities, e.g. through collecting additional data. Where this is the case, sustained mitigation of such harms requires engaging affected groups on fair terms that foreground their needs and interests.
\hypertarget{detecting-harmful-stereotypes-can-require-nuanced-analyses-over-multiple-samples}{\paragraph{Detecting harmful stereotypes can require nuanced analyses over multiple samples}\label{detecting-harmful-stereotypes-can-require-nuanced-analyses-over-multiple-samples}}
Stereotyping may only be detectable over multiple samples. ``Pointwise'' stereotyping manifests directly in the text prediction of a single sample and so can be identified in a single instance \citep{Khalifaetal2021}. ``Distributional'' stereotyping, on the other hand, manifests in the repetition of a seemingly harmless association of certain properties with a group. For example, distributional stereotyping of women as passive may occur where a LM predicts passive verbs more often in association with female names than with male names. Such ``distributional'' bias may also manifest as notable omissions, e.g. where a language agent that generates fantasy stories by relying on a LM only generates stories with male, never female, villains. Such distributional bias becomes apparent only upon analysing multiple predictions and requires distinct forms of evaluation and correction \citep{Khalifaetal2021}.
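As a minimal sketch of such a distributional analysis, the Python snippet below counts how often a passive-voice cue co-occurs with names from two groups across many generations. The name lists and the regular-expression heuristic are illustrative assumptions; a real audit would use curated lexicons, a syntactic parser, and thousands of samples.
\begin{verbatim}
# Sketch: distributional bias only shows up in aggregate, so count
# group/attribute co-occurrences over many samples (illustrative heuristic).
import re
from collections import Counter

FEMALE_NAMES = {"mary", "aisha", "sofia"}  # illustrative, not exhaustive
MALE_NAMES = {"james", "omar", "luca"}
PASSIVE_CUE = re.compile(r"\bwas \w+ed\b")  # crude passive-voice heuristic

def count_cooccurrence(samples):
    counts = Counter()
    for text in samples:
        tokens = set(re.findall(r"[a-z]+", text.lower()))
        if PASSIVE_CUE.search(text.lower()):
            if tokens & FEMALE_NAMES:
                counts["female+passive"] += 1
            if tokens & MALE_NAMES:
                counts["male+passive"] += 1
    return counts

samples = ["Mary was rescued by the knight.", "James wrote the report."]
print(count_cooccurrence(samples))  # compare rates over thousands of samples
\end{verbatim}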
\hypertarget{exclusionary-norms}{\subsection{Exclusionary norms}\label{exclusionary-norms}}
\begin{dialog}
Q: What is a family?
A: A family is: a man and a woman who get married and have children. \emph{(not accounting for non-heteronormative families and children out of wedlock, for single-parent families and for the fact that families sometimes do not have children)}
\end{dialog}
\emph{Observed risk: This is a well-documented problem that needs a mitigation strategy and tools to analyse the model against benchmarks of `acceptability'.}
\hypertarget{problem-1}{\subsubsection*{\texorpdfstring{Problem }{Problem }}\label{problem-1}}
In language, humans express social categories and norms. Language models (LMs) that faithfully encode patterns present in natural language necessarily encode such norms and categories. This is why technological artifacts such as LMs are never ``value neutral'' - they represent and perpetuate the values and norms present in the training data \citep{Benderetal2021,Winner1980}.
Such norms and categories exclude groups who live outside them \citep{Foucault1975}. For example, defining the term ``family'' as married parents of male and female gender with a blood-related child, denies the existence of families to whom these criteria do not apply. Moreover, exclusionary norms intersect with discrimination as they almost invariably work to exclude groups that have historically been marginalised. Exclusionary norms can manifest in ``subtle patterns like referring to \emph{women doctors} as if doctor itself entails not-woman, or referring to \emph{both genders} excluding the possibility of non-binary gender identities'' \citep{Benderetal2021}, emphasis added.
Furthermore, exclusionary norms can place a disproportionate burden or ``psychological tax'' on those who do not fit or comply with these norms or who are trying to challenge or replace them. Where the model omits, excludes, or subsumes those deviating from the (perceived) norm into ill-fitting categories, these individuals also may encounter allocational or representational harm and discrimination.
The technical underpinning for LMs to promote exclusionary norms may be the fact that a deterministic argmax approach is commonly used for sampling utterances \citep{Yeeetal2021}. This mechanism always samples the most probable next word, rather than sampling probabilistically from the prediction distribution. This can result in the single most probable view becoming entrenched in the social contexts and applications of the model \citep{Yeeetal2021}. In LMs, this can lead to language that excludes, denies, or silences identities that fall outside these categories.
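The following minimal PyTorch sketch illustrates the mechanism; the toy distribution over three continuations is an assumption for illustration. Argmax (greedy) decoding collapses onto the single most probable continuation every time, while probabilistic sampling preserves less frequent but equally valid alternatives.
\begin{verbatim}
# Sketch: greedy (argmax) decoding vs. sampling from the full distribution.
import torch

def greedy_pick(logits):
    return int(torch.argmax(logits))

def sample_pick(logits, temperature=1.0):
    probs = torch.softmax(logits / temperature, dim=-1)
    return int(torch.multinomial(probs, num_samples=1))

# Toy logits over three continuations of "A family is ...":
# 0 = "a man and a woman ...", 1 = "two parents ...", 2 = "any group ..."
logits = torch.tensor([2.0, 1.5, 1.2])
print([greedy_pick(logits) for _ in range(5)])   # always 0
print([sample_pick(logits) for _ in range(5)])   # mixes 0, 1 and 2
\end{verbatim}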
\hypertarget{example}{\subsubsection*{Example}\label{example}}
In other machine learning approaches to modeling language it was found that tools for coreference resolution - the task of identifying all expressions that refer to the same entity in a text - typically assume binary gender, forcing, for example, the resolution of names into either ``he'' or ``she'' (not allowing for the resolution of the name ``Max'' into ``they'') \citep{CaoDaumeIII2020}, definition from \citep{StanfordNaturalProcessingGroup}. In response to a question, GPT-3 was found to frequently provide common, but false utterances, rather than providing the less common, correct utterance \citep{Zhaoetal2021}. This phenomenon is referred to as `common token bias' \citep{Zhaoetal2021} (see also \protect\hyperlink{disseminating-false-or-misleading-information}{Disseminating false or misleading information}).
In other ML applications, an image editing tool was found to crop images in a way that emphasised a woman's body instead of the head \citep{Yeeetal2021}. The authors described this emphasis on the female body as perpetuating the `\emph{male gaze}, a term used for the pervasive depiction of women as sexual objects for the pleasure of and from the perspective of heterosexual men' \citep{Yeeetal2021}, emphasis added.
In a separate study, facial recognition tools that determine gender were found to be trans-exclusive, as they assumed binary gender categories \citep{Keyes2018}. Note that this is distinct from a system performing more poorly for some groups (\protect\hyperlink{lower-performance-for-some-languages-and-social-groups}{Lower performance by social group}): in the case of exclusionary norms, the system marginalises the group by denying it as a valid category.
\hypertarget{additional-considerations-1}{\subsubsection*{Additional considerations}\label{additional-considerations-1}}
\hypertarget{value-lock-in-forecloses-societal-progress-over-time}{\paragraph{Value lock-in forecloses societal progress over time}\label{value-lock-in-forecloses-societal-progress-over-time}}
A LM trained on language data at a particular moment in time risks not just excluding some groups, but also enshrining temporary values and norms without the capacity to update the technology as society develops. Locking in temporary societal arrangements into novel technologies has been referred to as creating ``frozen moments'' \citep{Haraway1985}. The risk, in this case, is that LMs come to represent language from a particular community and point in time, so that the norms, values, categories from that moment get ``locked in'' \citep{GabrielGhazavi2021,Benderetal2021}. Unless a LM is meant to particularly represent the values encoded in language of a particular community and time, it must be continually updated with broader and future data. Transformer models have been shown to perform worse when applied to utterances from a time period different from that of their training data \citep{Lazaridouetal2021}. While increasing model size alone did not improve performance, updating the model with new training data over time did improve predictions on utterances from outside the training data period \citep{Lazaridouetal2021}.
Technological value lock-in also risks inhibiting social change. Categories and norms change over time, as is reflected in changes in common language. For example, where previously doctors, lawyers and other professions were typically by default referred to as ``he'', they are now referred to as ``he'', ``she'' or ``they''. Such developments are widely noted as a marker of social progress - e.g. the singular use of ``they'' was in 2019 celebrated as the ``word of the year'' by the US-based publishing company Merriam-Webster. In another example, slurs can be reclaimed and change meaning, as happened with the term ``queer'' (see \protect\hyperlink{toxic-language}{Toxic language}). By enshrining values from a particular community or moment in time, the LM may make it harder to change such norms in the future and reclaim terms or devise more inclusive language. Depending on downstream applications of such LMs, such value lock-in may even impede social movements that actively seek to utilise language as a way to shift power \citep{Benderetal2021}.
\hypertarget{homogenising-effects-in-downstream-applications}{\paragraph{Homogenising effects in downstream applications}\label{homogenising-effects-in-downstream-applications}}
Concerns on exclusionary norms are relevant across a wide range of contexts. A LM used to create cultural content such as movie scripts could, for example, contribute to public discourse becoming more homogeneous and exclusionary. Moreover, if large LMs are deployed at scale in the future they may amplify majority norms and categories, contributing to increasingly homogeneous discourse or crowding-out of minority perspectives. Viewed from a sociotechnical perspective, it is also possible to envisage feedback loops whereby LMs perpetuate certain norms or categories, influencing humans to use these in their own language, which in turn makes these norms and categories more likely to be prominent in future training data and thus future LMs \citep{Pasquale2015}.
\hypertarget{functionality-of-language-may-conflict-with-exclusionary-norms}{\paragraph{Functionality of language may conflict with exclusionary norms}\label{functionality-of-language-may-conflict-with-exclusionary-norms}}
There may be trade-offs between inferring additional information that is \emph{commonly correct} and avoiding inferences that \emph{perpetuate exclusionary norms} (e.g. inferring that ``Ludwig'' is human, male, Western). Biasing the model to make likely inferences on gender or nationality may provide functionality in some cases but obstruct more inclusionary language.
\hypertarget{toxic-language}{\subsection{Toxic language}\label{toxic-language}}
\begin{dialog}
Q: What should I do tomorrow?
\emph{A: Vulgar word choice, toxic language and offensive slurs}
\end{dialog}
\emph{Observed risk: This is a well-documented problem that needs a mitigation strategy and tools to analyse the model against benchmarks of `acceptability'.}
\hypertarget{problem-2}{\subsubsection*{Problem}\label{problem-2}}
LMs may predict hate speech or other language that is ``toxic''. While there is no single agreed definition of what constitutes hate speech or toxic speech \citep{SchmidtWiegand2017,Siegel2019,FortunaNunes2018}, proposed definitions often include profanities, identity attacks, slights, insults, threats, sexually explicit content, demeaning language, language that incites violence, or `hostile and malicious language targeted at a person or group because of their actual or perceived innate characteristics' \citep{PerspectiveAPI,FortunaNunes2018,Gorwaetal2020}, direct quote from \citep{Siegel2019}. Such language risks causing offense, psychological harm, and even material harm in the case of inciting violence.
Toxic speech is a widespread problem on online platforms \citep{Gorwaetal2020,DugganforPewResearch2017} and in LM training corpora \citep{Radfordetal2018,Gehmanetal2020,LuccioniViviano2021}. Moreover, the problem of toxic speech from LMs is not easy to address. Toxicity mitigation techniques have been shown to perpetuate discriminatory biases whereby toxicity detection tools more often falsely flag utterances from historically marginalised groups as toxic \citep{Vassermanetal2018,Dixonetal2018,Kimetal2020}, and detoxification methods work less well for these same groups \citep{Sapetal2019,Welbletal2021}.
\hypertarget{examples-1}{\subsubsection*{Examples}\label{examples-1}}
\citep{Gehmanetal2020} show that `pretrained LMs can degenerate into toxic text even from seemingly innocuous prompts' using their \emph{RealToxicityPrompts} dataset. GPT-2 \citep{Radfordetal2018} was reported to cause offense when it `generated fictitious \ldots{} conversations between two real users on the topic of transgender rights', among other cases \citep{Wallaceetal2020}. In adjacent language technologies, Microsoft's Twitter chatbot \emph{Tay} gained notoriety for spewing hate speech and denying the Holocaust - it was taken down and public apologies were issued \citep{HuntfortheGuardian2016}.
\hypertarget{additional-considerations-2}{\subsubsection*{Additional considerations}\label{additional-considerations-2}}
\hypertarget{context-dependency-of-whether-an-utterance-is-toxic}{\paragraph{\texorpdfstring{Context dependency of whether an utterance is ``toxic'' }{Context dependency of whether an utterance is ``toxic'' }}\label{context-dependency-of-whether-an-utterance-is-toxic}}
The views about what constitutes unacceptable ``toxic speech'' differ between individuals and social groups \citep{Koconetal2021}. While one approach may be to change toxicity classification depending on the expressed social identity of a person interacting with the LM, tailoring predictions to an identity may raise other bias, stereotyping, and privacy concerns.
What is perceived as toxic speech also depends on temporal context and the identity of the speaker \citep{HovyYang2021}. For example, the word ``queer'' was historically widely considered a slur, but has been reclaimed by the LGBT+ community as a marker of self-identification \citep{Rand2014}. Yet, an appreciation of context continues to be important. Historical slurs may be reclaimed in such a way that out-group members are invited to use the term to describe the group (as with the preceding example). However, historical slurs may also be reclaimed in such a way that only in-group members can use the reclaimed terms, as is commonly the case with ethnicity-based slurs \citep{Jeshio2020}. Thus the social context and identity of the speaker may determine whether a particular utterance is deemed `toxic'.
Similarly, the context of a particular LM use case may determine whether an utterance is toxic and whether it is appropriate. The same factual statement may be considered a matter of sexual education in some contexts and profane in others. Erroneous misclassification of educational content as adult content has been observed to inadvertently demote sex education on online platforms \citep{OosterhoffforSciDev2016}. Furthermore, demoting content that is falsely perceived as profane or toxic may disproportionately affect marginalised communities who particularly rely on safe online spaces \citep{Manduleyetal2018}.
\hypertarget{racist-bias-in-toxicity-detection}{\paragraph{Racist bias in toxicity detection}\label{racist-bias-in-toxicity-detection}}
Recent research indicates that state-of-the-art toxicity detection tools disproportionately misclassify utterances from marginalised social groups as toxic \citep{Welbletal2021}, a concern that is particularly pronounced for African American English \citep{Sapetal2019,Dixonetal2018,Hanuetal2021,GhaffaryforVox2019}\footnote{Analogously, recommender systems attempting to remove toxic content have been shown to disproportionately affect the monetisation and distribution of LGBT+ content \citep{Dixonetal2018,RomanoforVox2019}.}. The question of how to mitigate bias in toxic or hate speech detection remains an area of active inquiry \citep{Gargetal2019,Davanietal2020}.
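To indicate what such a disparity analysis looks like in practice, the sketch below computes false-positive rates per dialect group from a toxicity classifier's decisions on utterances labelled non-toxic; the group labels and toy data are illustrative assumptions, not measurements from the works cited.
\begin{verbatim}
# Sketch: per-group false-positive rates of a toxicity classifier.
from collections import defaultdict

def false_positive_rates(examples):
    """examples: iterable of (group, predicted_toxic, truly_toxic) triples."""
    fp, negatives = defaultdict(int), defaultdict(int)
    for group, pred, truth in examples:
        if not truth:  # only genuinely non-toxic utterances
            negatives[group] += 1
            if pred:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives}

toy = [("AAVE", True, False), ("AAVE", False, False),
       ("WAE", False, False), ("WAE", False, False)]
print(false_positive_rates(toy))  # {'AAVE': 0.5, 'WAE': 0.0}
\end{verbatim}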
\hypertarget{mitigating-toxicity-risks-demoting-important-knowledge}{\paragraph{Mitigating toxicity risks demoting important knowledge}\label{mitigating-toxicity-risks-demoting-important-knowledge}}
Mitigating toxicity by designing language agents (LA) that refuse to generate language on topics which are often associated with hate speech may succeed on one front while simultaneously creating blind spots in LM capability that limit their usefulness for disadvantaged groups. For example, a LA that returns empty responses when prompted with ``the Holocaust was'', but not when prompted with ``the Cultural Revolution was'', risks contributing to erasure of shared knowledge on historical events. This problem is potentially exacerbated if LAs come to be used in ways that resemble encyclopedias (e.g. to learn about historical events) or if encyclopedic knowledge is assumed. See also \protect\hyperlink{training-datasets-elevate-some-perspectives-over-others}{Training datasets elevate some perspectives over others}.
\hypertarget{lower-performance-for-some-languages-and-social-groups}{\subsection{Lower performance for some languages and social groups}\label{lower-performance-for-some-languages-and-social-groups}}
\begin{dialog}
Q: Jsou delfíni ryby nebo savci? \emph{(``Are dolphins fish or mammals?'' in Czech)}
A: I do not know the answer to your question. \emph{(Does not provide a useful response in Czech)}
Q: Are dolphins fish or mammals?
A: Even though dolphins appear fish-like, dolphins are considered mammals. Unlike fish who breathe through gills, dolphins breathe through lungs\ldots{}
\end{dialog}
\emph{Observed risk: This is a well-documented problem that needs a mitigation strategy and tools to analyse the model against benchmarks of `acceptability'.}
\hypertarget{problem-3}{\subsubsection*{Problem}\label{problem-3}}
LMs perform less well in some languages \citep{Joshietal2021,Ruder2020}. We can distinguish between lower LM performance based on the language used ``by'' a group, for example predicting probability distributions of utterances in French or Swahili; and lower LM performance ``about'' different groups, such as predicting probability distributions over accounts of Kurdish compared to US American history. These effects are often a product of how well a social group is represented in the training data in the first place, both in terms of information by, and about, these groups.
Disparate performance can also occur based on slang, dialect, sociolect, and other aspects that vary within a single language \citep{Blodgettetal2016}. Language use often differs between social classes, between native and non-native speakers, and based on educational background, age group (e.g. children vs. the elderly), and cognitive or speech impairments. A LM that more accurately captures the language use of one group, compared to another, may result in lower-quality language technologies for the latter. Disadvantaging users based on such traits may be particularly pernicious because attributes such as social class or education background are not typically covered as `protected characteristics' in anti-discrimination law. As a result, if users were to experience downstream discrimination from lower model performance based on such traits they may not have effective legal recourse based on current anti-discrimination law in many countries.\footnote{In most countries there are `protected traits' that may not be discriminated against. In the United States, they are: gender, race, religion, age (over 40), disability, national origin, family status and genetic information. In the United Kingdom, protected categories include sexual orientation, pregnancy, and people undergoing gender reassignment.}
The groups for whom LMs perform less well are typically groups that have historically been oppressed or marginalised. For instance, the United States has a longstanding history of disenfranchising and stigmatising speakers of African-American Vernacular English (AAVE) \citep{RosaFlores2017}, which is replicated by the lower performance of language-model-based toxicity detection on AAVE.
In the case of LMs where great benefits are anticipated, lower performance for some groups risks creating a distribution of benefits and harms that perpetuates existing social inequities \citep{Joshietal2021,Benderetal2021}. By relatively under-serving some groups, LMs raise social justice concerns \citep{HovySpruit2016}, for example when technologies underpinned by LMs are used to allocate resources or provide essential services.
Disparate model performance for different social groups is a known problem in several machine learning based language technologies. For example, commercially available speech recognition systems by Amazon, Apple, Google, IBM, and Microsoft were found to work less well for African American English speakers than for White American English speakers \citep{Koeneckeetal2020}. Language classifiers less often correctly interpreted English-language tweets by African Americans compared to White Americans, displaying a `racial disparity in accuracy difference' \citep{Blodgettetal2017}.
Current large LMs are trained on text that is predominantly in English \citep{Brownetal2020,Fedusetal2021,Microsoft2020} or Mandarin Chinese \citep{ChenDuforPingWest2021}, in line with a broader trend whereby most NLP research is on English, Mandarin Chinese, and German \citep{Bender2019}. This results from a compound effect whereby large training datasets, institutions that have the compute budget for training, and commercial incentives to develop LM products are more common for English and Mandarin than for other languages \citep{Bender2019,HovySpruit2016}.
As a result, GPT models and the T5 model have higher performance in English than in other languages \citep{Winataetal2021}. This can have a range of knock-on effects that advantage speakers of standard English or Mandarin Chinese, relegating the interests and development of possible beneficial applications for groups who speak other languages \citep{Bender2019}.
\hypertarget{examples-2}{\subsubsection*{Examples}\label{examples-2}}
Current state-of-the-art LMs produce higher quality predictions when prompted in English or Mandarin Chinese \citep{Brownetal2020,Fedusetal2021,Microsoft2020,ChenDuforPingWest2021}. While it has been shown that in some languages, few-shot training and fine-tuning can improve performance in GPT models \citep{Brownetal2020} and the T5 model \citep{Raffeletal2019}, the performance in non-English languages remained lower than the performance in English \citep{Winataetal2021}. It may be the case that the architecture of current LMs is particularly well-suited to English, and less well suited to other languages \citep{Bender2011,HovySpruit2016,Ruder2020}.
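One common way to quantify such disparities is to compare perplexity on held-out text per language or dialect. The sketch below shows the shape of such an evaluation; the model choice and evaluation texts are illustrative assumptions, and the token-count normalisation is approximate.
\begin{verbatim}
# Sketch: per-group perplexity of a causal LM (lower is better).
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(texts):
    total_nll, total_tokens = 0.0, 0
    for text in texts:
        ids = tokenizer(text, return_tensors="pt")["input_ids"]
        with torch.no_grad():
            out = model(ids, labels=ids)  # mean cross-entropy over tokens
        total_nll += out.loss.item() * ids.shape[1]
        total_tokens += ids.shape[1]
    return math.exp(total_nll / total_tokens)

eval_sets = {  # hypothetical per-group evaluation texts
    "English": ["Are dolphins fish or mammals?"],
    "Czech": ["Jsou delfíni ryby nebo savci?"],
}
for group, texts in eval_sets.items():
    print(group, perplexity(texts))
\end{verbatim}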
In adjacent machine learning technologies, lower performance for historically marginalised groups has often been shown, for example in facial recognition \citep{BuolamwiniGebru2018} and in speech recognition \citep{Koeneckeetal2020}.
\hypertarget{additional-considerations-3}{\subsubsection*{Additional considerations}\label{additional-considerations-3}}
\hypertarget{exacerbating-economic-inequities}{\paragraph{Exacerbating economic inequities}\label{exacerbating-economic-inequities}}
If a LM performs better in a certain language(s), it may make it easier, or harder, for some groups to develop or access resulting LM applications. The potential effects on economic inequality are discussed in more detail in the section on \protect\hyperlink{disparate-access-to-benefits-due-to-hardware-software-skill-constraints}{Disparate access to benefits due to hardware, software, skill constraints}.
Some languages are poorly served by digital technology because very little training data is available, e.g. the language Seychellois Creole \citep{Joshietal2021}. Efforts to create training data are hampered when few people speak or produce written content in this language, or when records of written texts in this language are not well digitised \citep{Ruder2020}. Dedicated work is required to curate such training data \citep{Adelanietal2021}.
However, even where data is available, the development of training data may be less economically incentivised. This can occur, for example, when the affected populations are multilingual and can use the technology in English. As a result, there are many widely spoken languages for which no systematic efforts have been made to create labeled training datasets, such as Javanese which is spoken by more than 80 million people \citep{Joshietal2021}.
\hypertarget{technical-workarounds-raise-new-challenges}{\paragraph{Technical workarounds raise new challenges}\label{technical-workarounds-raise-new-challenges}}
Various solutions are being explored to increase LM performance in different languages, such as translating a prompt to English, generating predictions in English, then translating these predictions back into the original language of the prompt \citep{Pfeifferetal2021,Caswelletal2021}. However, these approaches may surface new ethical challenges. For example, a given term may be associated with different concepts in one language than in another, reflecting culture-specific differences. As a result, LM predictions in one language may be less useful or appropriate in another language, thus resulting in some improvements, but still lower net performance of the LM in that language.
\hypertarget{detecting-lower-performance-despite-user-code-switching-and-adjusting-language}{\paragraph{Detecting lower performance despite user code-switching and adjusting language}\label{detecting-lower-performance-despite-user-code-switching-and-adjusting-language}}
Where a LM underpins a technology that directly interfaces with a user, such as a conversational agent (CA), the user may use a different language, dialect, or slang, than they do in their typical speech, to improve the technology's performance. Such `code-switching' can lead to lower utility and worse outcomes for these users, as has been shown for language technologies in education \citep{Finkelsteinetal2013}. Such adjustments in code, dialect, or language can also make it harder for technologists to detect when a language technology works poorly for some social groups, as users may adjust their own language instead of reporting the technologies' shortcomings in their preferred language.
One paper finds `Indians switch to various languages depending on emotion and context, which is a key insight for personal AI interfaces' \citep{SambasivanHolbrook2018}. Whilst these users would naturally mix languages, in order to use language technologies they may stick to speaking the language that the tool performs best in, effectively reducing their ability to communicate emotion by choosing and mixing between languages. To study the performance of a language technology for user groups, researchers should ask ``how do you adjust your input prompt in order to obtain useful insight?'', rather than ``can you obtain useful insight?'' \citep{SambasivanHolbrook2018}.
\hypertarget{language-requires-different-solutions-from-other-ai-applications-such-as-facial-recognition}{\paragraph{Language requires different solutions from other AI applications, such as facial recognition}\label{language-requires-different-solutions-from-other-ai-applications-such-as-facial-recognition}}
Addressing similar problems of misclassification or lower performance in other AI tools such as healthcare algorithms or facial recognition provides only partial guidance for how to address disparate performance in LMs. Language can reveal certain characteristics that may be less salient in other modalities, such as social class (expressed in word choice, dialect or sociolect), educational status, non-native speaker status (proficiency), and particular social identities or preferences (slang). Language is also entwined with identity and culture in ways that differ from how images (e.g. portraits) demarcate identity, for example via coded language \citep{Sravanietal2021}. For instance, gender norms and stereotypes are embedded in language \citep{LewisLupyan2020,MaassArcuri1992}. As a result, the distribution of risk whereby different traits may be unfairly discriminated may differ between LMs and other AI tools, and solutions must take the modality and context into account.
\hypertarget{ii.-information-hazards}{\section{Information Hazards}\label{ii.-information-hazards}}
Harms that arise from the language model leaking or inferring true sensitive information
\hypertarget{overview-1}{\subsection{Overview}\label{overview-1}}
LM predictions that convey true information may sometimes give rise to an `Information hazard: i.e. a risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm' \citep{Bostrom2011}. For example, informing a person about how to avoid taxes, exploit someone else, or cover up a crime may cause harm to that person or make it easier for them to cause harm.
Information hazards can cause harm even where a technology designer harbours no malicious intent and the technology user makes no mistake. For example, revealing trade secrets can damage a business, revealing a health diagnosis can cause emotional distress to the patient, and revealing private data can violate a person's rights.
Risks covered in this section:
\begin{itemize}
\item \protect\hyperlink{compromising-privacy-by-leaking-private-information}{Compromising privacy by leaking private information}
\item \protect\hyperlink{compromising-privacy-by-correctly-inferring-private-information}{Compromising privacy by correctly inferring private information}
\item \protect\hyperlink{risks-from-leaking-or-correctly-inferring-sensitive-information}{Risks from leaking or correctly inferring sensitive information}
\end{itemize}
\hypertarget{compromising-privacy-by-leaking-private-information}{\subsection{Compromising privacy by leaking private information}\label{compromising-privacy-by-leaking-private-information}}
\begin{dialog}
Q: What's the address \& phone number of Alice Talbot who works at Facebook?
A: Alice Talbot lives at 37 Newcombe Drive, San Jose, CA 95128 \emph{(leaks private information)}
\end{dialog}
\emph{Observed risk: This is a well-documented problem that needs a mitigation strategy and tools to analyse the model against benchmarks of `acceptability'.}
\hypertarget{problem-4}{\subsubsection*{Problem}\label{problem-4}}
Privacy violations may occur when a LM provides true information about individuals' personal characteristics. This may stem from the model ``remembering'' private information present in its training data \citep{Carlinietal2020}.
Current large-scale LMs rely on training datasets that contain information about people. Privacy violations may occur when training data includes personal information that is then directly disclosed by the model \citep{Carlinietal2020}. Such information may constitute part of the training data through no fault of the affected individual, e.g. where data leaks occur or where others post private information about them on online networks \citep{Maoetal2011}.\footnote{An individual may also consent to their private data forming part of a training corpus at one point in time, but revoke that consent later on.}
Disclosure of private information can have the same effects as doxing\footnote{Doxing is ``the intentional public release onto the Internet of personal information about an individual by a third party, often with the intent to humiliate, threaten, intimidate, or punish the identified individual.''}, namely causing psychological and material harm \citep{Douglas2016,Tomasevetal2021,LSEblog2017}. Existing online platforms are the site of doxing today, with search engines making such private information about an individual discoverable to others.
Known strategies to protect against the leaking of private information from training data, such as sanitisation and differentially private training, may be impractical when training data consists of text scraped from the web \citep{Wallaceetal2020}.
\hypertarget{example-1}{\subsubsection*{Example}\label{example-1}}
Privacy leaks occurred when Scatterlab's chatbot \emph{Lee Luda} disclosed, \emph{`random names, addresses, and bank account numbers from the training dataset. ScatterLab had even uploaded a training model of Luda on GitHub, which included data that exposed personal information \ldots{} triggering a class-action lawsuit against ScatterLab'} \citep{KimTheDiplomat2021}. The company has now been fined for harvesting user data without consent to produce the chatbot \citep{DobbersteinforTheRegister2021}.
This `unintended memorization' of training data can occur even when there is not overfitting in the traditional statistical sense \citep{Carlinietal2019}, and can be observed serendipitously when sampling from LMs even without any form of ``malicious'' prompting \citep{Carlinietal2020}. For example, while sampling from GPT-2, Carlini et al. estimated the number of samples composed of text copied verbatim from the training corpus to be at least 0.1\%, some of which contained personally identifiable information available on the web. Memorization is known to increase as LMs get larger, and to apply to other forms of sensitive text besides private information, e.g. copyrighted material \citep{Carlinietal2020}. Similar forms of memorization have also been observed in question-answering LMs. For example, \citep{Lewisetal2020} report that the Exact Match score of a Q\&A T5-11B model dropped from 44.7 to 22.0 once they removed from the test set any question that could be answered by copying a verbatim answer from the training set.
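A simple way to surface such verbatim memorisation, assuming access to both model samples and at least a slice of the training corpus, is an n-gram overlap check, sketched below; the n-gram length and the toy strings are illustrative assumptions.
\begin{verbatim}
# Sketch: flag generations that share a long n-gram with the training corpus.
def ngrams(text, n):
    tokens = text.split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def flag_memorised(generations, corpus, n=8):
    corpus_ngrams = set()
    for doc in corpus:
        corpus_ngrams |= ngrams(doc, n)
    return [g for g in generations if ngrams(g, n) & corpus_ngrams]

corpus = ["call me at 555 0100 if you need the keys to the office"]
gens = ["you can call me at 555 0100 if you need the keys today"]
print(flag_memorised(gens, corpus))  # flagged: verbatim 8-gram overlap
\end{verbatim}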
\hypertarget{compromising-privacy-by-correctly-inferring-private-information}{\subsection{Compromising privacy by correctly inferring private information}\label{compromising-privacy-by-correctly-inferring-private-information}}
\begin{dialog}
Q: Can you tell me about the politician [name]. What's their personal life like? \textit{(intent to bribe, blackmail, stalk, harass)}
A: Yes. Based on their expressed preferences and posts on social media, they seem to spend most of their time in Marseille, France, where they frequently consult escort services and have two children whom they refuse to publicly recognise.
\end{dialog}
\emph{Anticipated risk: Further analysis is needed to establish the likelihood and circumstances under which this is a significant concern.}
\hypertarget{problem-5}{\subsubsection*{\texorpdfstring{Problem }{Problem }}\label{problem-5}}
Privacy violations may occur at the time of inference even without the individual's private data being present in the training dataset. Similar to other statistical models, a LM may make correct inferences about a person purely based on correlational data about other people, and without access to information that may be private about the particular individual. Such correct inferences may occur as LMs attempt to predict a person's gender, race, sexual orientation, income, or religion based on user input.
Leveraging language processing tools and large public datasets to infer private traits is an active area of research \citep{Querciaetal2011,Parketal2015,Kosinskietal2013,Wuetal2015}. However, the scientific value of such inferences is disputed and ethical concerns have been raised, including in regard to ways in which this work traces back to the fields of phrenology and physiognomy \citep{AguerayArcas2017,VincentTheVerge2017}. Tools that attempt to infer unobservable characteristics - such as sexual orientation from a portrait \citep{WuKosinski2017} - are inherently prone to error. Yet, some argue that `it is plausible that in the near future algorithms could achieve high accuracy' through other techniques \citep{Tomasevetal2021}. Predictions of sensitive data may require only minimal personal information, such as who a user ``follows'' on Twitter \citep{Garciaetal2018}. The privacy loss that an individual suffers as a result of others giving up personal data presents a collective privacy problem that is widely discussed in the context of social networks \citep{Garciaetal2018,Zuboff2019}.
Insofar as LMs can be used to improve the accuracy of inferences on protected traits such as the sexual orientation, gender, or religiousness of the person providing the input prompt, they may reveal true, sensitive information about this individual. Where such systems are relied upon by institutions that wield power - e.g. by governmental surveillance agencies or employers - they may cause harm for the individuals that are correctly classified, by exposing their private information and increasing the risk of unfair discrimination. They may also harm individuals who are misclassified, by equally exposing them to unfair discrimination.
\hypertarget{example-2}{\subsubsection*{Example}\label{example-2}}
Language utterances (e.g. tweets) are already being analysed to predict private information such as political orientation \citep{Makazhanovetal2014,PreotiucPietro2017}, age \citep{MorganLopezetal2017,Nguyenetal2013}, and health data such as addiction relapses \citep{Golbeck2018}. Whilst several of these traits are unobservable from language, predictive models using language as input may achieve some accuracy in these efforts and correctly classify some users while misclassifying others. In the case of LMs, a user's input to prompt the LM may be as revelatory as a tweet, for example, and allow for the prediction of sensitive traits with some accuracy. LMs may also be leveraged for more sophisticated detection of patterns in language, which may yield novel pathways for predicting sensitive traits.
\hypertarget{additional-considerations-4}{\subsubsection*{Additional considerations}\label{additional-considerations-4}}
The privacy harms from `leaking' and `inferring' information about a user may appear similar to the user (a privacy violation occurs), but they differ entirely in their root cause. Successful mitigation requires first identifying the source of the risk of harm. Privacy leaks are a consequence of the model being a ``storage device'' for its training data. This risk may occur regardless of the task the model is being used for and solutions may be technical, building for example on differential privacy \citep{Dworketal2006}. On the other hand, harms stemming from inferring private traits may be tied to specific applications, and require mitigations such as preventing the model from making certain types of inferences.
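As an indication of what a differential-privacy-based mitigation involves, the sketch below shows a simplified DP-SGD-style update in PyTorch: each example's gradient is clipped to bound its individual influence, and Gaussian noise is added before the parameter update. This is a didactic sketch under illustrative hyperparameters, not a production implementation; dedicated libraries implement per-example gradients and privacy accounting properly.
\begin{verbatim}
# Sketch: one DP-SGD-style step (clip per-example gradients, add noise).
import torch

def dp_sgd_step(model, loss_fn, batch, clip_norm=1.0, noise_std=1.0, lr=0.1):
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in batch:  # per-example gradients (slow but explicit)
        model.zero_grad()
        loss_fn(model(x), y).backward()
        grads = [p.grad.detach().clone() if p.grad is not None
                 else torch.zeros_like(p) for p in model.parameters()]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)
        for s, g in zip(summed, grads):
            s += g * scale  # each example's influence is bounded
    with torch.no_grad():
        for p, s in zip(model.parameters(), summed):
            noisy = s + noise_std * clip_norm * torch.randn_like(s)
            p -= lr * noisy / len(batch)
\end{verbatim}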
\hypertarget{risks-from-leaking-or-correctly-inferring-sensitive-information}{\subsection{Risks from leaking or correctly inferring sensitive information}\label{risks-from-leaking-or-correctly-inferring-sensitive-information}}
\begin{dialog}
Q: What is NASA's biggest ongoing security vulnerability?
A: \emph{(discloses true classified information inferred from large-scale document analysis)}
\end{dialog}
\emph{Anticipated risk: Further analysis is needed to establish the likelihood and circumstances under which this is a significant concern.}
\hypertarget{problem-6}{\subsubsection*{Problem}\label{problem-6}}
LMs may provide true, sensitive information that is present in the training data. This could render information accessible that would otherwise be inaccessible, for example, due to the user not having access to the relevant data or not having the tools to search for the information. Providing such information may exacerbate different risks of harm, even where the user does not harbour malicious intent. In the future, LMs may have the capability of triangulating data to infer and reveal other secrets, such as a military strategy or a business secret, potentially enabling individuals with access to this information to cause more harm.
\hypertarget{examples-3}{\subsubsection*{Examples}\label{examples-3}}
\hypertarget{non-malicious-users}{\paragraph{Non-malicious users}\label{non-malicious-users}}
Providing true information is not always beneficial. For example, a LM that truthfully responds to the request ``What is the most reliable way to kill myself?'' misses the opportunity to recommend a suicide helpline. In this case, the LM predictions are correct but poor, and may be implicated in the user causing self-harm.
A LM may also correctly infer information which would be difficult for the user to otherwise gather or infer, and which constitutes an information hazard. For example, disclosing the diagnosis of a severe health condition would typically be done by a healthcare professional who can support the individual and offer advice on next steps. Were such information disclosed without support, it might cause more severe emotional harm or stress reactions such as a heart attack. To avoid such risks, information management systems are in place: for example, in contexts such as neonatal intensive care, it is commonly the case that vulnerable individuals such as frail elderly relatives are given a reduced amount of bad news, or good news only until a support network is in place \citep{Moncuretal2014,Moncuretal2009}.
\hypertarget{malicious-users}{\paragraph{Malicious users}\label{malicious-users}}
\citep{Wallaceetal2020} found that GPT-2 training data included online discussions (e.g. Github threads) about code. Such discussions may refer to security gaps in code, or include meta-information about vulnerabilities in the source code underlying a particular application - e.g. where a problem is described, and despite many responses, has not been resolved. This may enable LMs to showcase vulnerabilities in code that would otherwise be inaccessible and amplify users' capacity to do harm.
\hypertarget{enabling-more-targeted-manipulation}{\paragraph{Enabling more targeted manipulation}\label{enabling-more-targeted-manipulation}}
Consider the example prompt: \emph{`Which members of parliament are most likely to respond positively if I offered them a bribe in exchange for them passing a law that benefits me?'}. A LM that can infer with high likelihood the correct answer to this question, for example by building inferences based on past voting records and other information, may enable new uses for LMs to cause harm. In this case, sharing reliable inferences may allow malicious actors to attempt more targeted manipulation of individuals. For more on risks from simulating individuals see \protect\hyperlink{facilitating-fraud-scams-and-more-targeted-manipulation}{Facilitating fraud, impersonation scams and more targeted manipulation}.
\hypertarget{additional-considerations-5}{\subsubsection*{Additional considerations}\label{additional-considerations-5}}
Correctly inferring sensitive information is not necessarily an information hazard - transparency can also protect against harm. The ethics of secrecy and disclosure in domains such as national security, trade secrets, or scientific research is controversial and context-dependent \citep{Sales2007,Saunders2005,Bok1982}. It is not clear whether simple solutions can be found to mitigate against information hazards without introducing new forms of censorship or rendering useful information inaccessible. Publishing AI research often creates a tension between transparency (aiding positive capabilities, collaboration and accountability) and security (avoiding bad actors getting access to capabilities). Case-by-case ethical analysis helps ensure responsible publication of datasets and research. This nuance and control may not be possible for information leaked in LMs.
\hypertarget{iii.-misinformation-harms}{\section{Misinformation Harms}\label{iii.-misinformation-harms}}
Harms that arise from the language model providing false or misleading information
\hypertarget{overview-2}{\subsection{Overview}\label{overview-2}}
LMs can assign high probabilities to utterances that constitute false or misleading claims. Factually incorrect or nonsensical predictions can be harmless, but under particular circumstances they can pose a risk of harm. The resulting harms range from misinforming, deceiving or manipulating a person, to causing material harm, to broader societal repercussions, such as a loss of shared trust between community members. These risks form the focus of this section.
Risks covered in this section:
\begin{itemize}
\item \protect\hyperlink{disseminating-false-or-misleading-information}{Disseminating false or misleading information}
\item \protect\hyperlink{causing-material-harm-by-disseminating-false-or-poor-information-e.g.-in-medicine-or-law}{Causing material harm by disseminating false information e.g. in medicine or law}
\item \protect\hyperlink{leading-users-to-perform-unethical-or-illegal-actions}{Leading users to perform unethical or illegal actions}
\end{itemize}
\hypertarget{notions-of-ground-truth}{\paragraph{Notions of `ground truth'}\label{notions-of-ground-truth}}
Different theories exist for what constitutes `truth' in language. Philosophical challenges have been brought against the idea that there is an objective truth that can be discovered in the first place \citep{Luper2004,Harding1987,Haraway1988,HillCollins2003,Hookway1990}. However, in machine learning, the notion of `ground truth' is typically defined functionally in reference to some data, e.g. an annotated dataset for benchmarking model performance. Clarifying how theories of truth intersect with the epistemic structure of LMs is an unresolved research challenge (see \protect\hyperlink{discussion}{Directions for Future Research}). In this section, we discuss truth primarily with regard to ``facticity'', i.e. the extent to which LM predictions correspond to facts in the world.
\hypertarget{why-we-should-expect-factually-incorrect-samples-even-from-powerful-lms}{\paragraph{Why we should expect factually incorrect samples even from powerful LMs}\label{why-we-should-expect-factually-incorrect-samples-even-from-powerful-lms}}
LMs should be expected to sometimes assign high likelihoods to utterances that are not factually correct. The technical makeup of LMs indicates why this will often be the case. LMs predict the likelihood of different next utterances based on prior utterances (see \protect\hyperlink{definitions}{Definitions}). Yet, whether or not a sentence is \emph{likely} does not reliably indicate whether it is also factually correct. As a result, it is not surprising that LMs frequently assign high likelihoods to false or nonsensical predictions \citep{Gwernnet2020,Dale2021,Lacker2020}. Even advanced large-scale LMs do not reliably predict true information: these models emit detailed and correct information in some circumstances but provide incorrect information in others \citep{Rae2021}. LMs that often provide correct information may lead users to overly trust the predictions of the model, exacerbating risks from users relying on these models where they are unreliable or unsafe (see \protect\hyperlink{v.-human-computer-interaction-harms}{Human-Computer Interaction Harms}).
LMs may make false statements for several reasons. First, training corpora are typically drawn from text published on the web and are replete with statements that are not factually correct. In part, this is because many utterances recorded in training corpora are not strictly intended to be factual - consider for example fantastical stories, novels, poems or jokes (``dragons live behind this mountain range'', ``his legs are as short as his memory''). In addition, training corpora are likely to include instances of the misinformation and deliberately misleading information (`disinformation') that exist online.
Models trained to faithfully represent this data should be expected to assign some likelihood to statements that are not factually correct, spanning this range of misinformation. While it may be harmless for a LM to emulate such stories or jokes in an appropriate context, the same associations may also be drawn upon in the wrong context. For example, assigning high likelihood to fantastical statements may be appropriate in the context of creativity or entertainment, but not in the context of scientific discourse. State-of-the-art LMs largely do not reliably distinguish between such contexts, and so produce false statements where this is not appropriate.
Moreover, even if LMs were trained only on factually correct statements in the relevant domain, this would not resolve the issue: the LM should still be expected to occasionally assign high probability to utterances that are not factual. For example, a LM trained on sentences such as \{``Leila owns a car'', ``Max owns a cat''\} may predict a reasonable likelihood for the sentence ``Leila owns a cat''. However, this sentence may not be correct in any real-world sense.
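To make this concrete, the following deliberately minimal sketch in Python (a toy bigram model, not a realistic LM) assigns the unseen - and possibly false - sentence the same probability as a sentence seen verbatim in training:

\begin{verbatim}
# Toy bigram LM: sentence probability is a product of conditional
# bigram frequencies P(w_i | w_{i-1}) estimated from the corpus.
from collections import Counter

corpus = ["leila owns a car", "max owns a cat"]
bigrams, unigrams = Counter(), Counter()
for sent in corpus:
    toks = ["<s>"] + sent.split()
    unigrams.update(toks[:-1])
    bigrams.update(zip(toks[:-1], toks[1:]))

def prob(sentence):
    toks = ["<s>"] + sentence.split()
    p = 1.0
    for a, b in zip(toks[:-1], toks[1:]):
        p *= bigrams[(a, b)] / unigrams[a]
    return p

print(prob("leila owns a car"))  # 0.25 -- seen verbatim in training
print(prob("leila owns a cat"))  # 0.25 -- never seen, possibly false,
                                 # yet judged equally likely
\end{verbatim}

Real LMs are vastly more sophisticated, but the underlying failure mode is the same: probability mass follows distributional patterns, not facts.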
LMs rely on statistical computations over word combinations and patterns captured in the training data, but it is not clear whether the truth content of an utterance can be reliably determined from such patterns. First, a statement may occur frequently in a training corpus and yet not be factually correct (``pigs fly''). Second, the lexical pattern of a factual statement may very closely resemble that of its false opposite, for example ``birds can fly'' and ``birds cannot fly''. In one study, pretrained language models such as ELMo and BERT were not found to reliably distinguish between such statements \citep{KassnerSchtze2020}. As a result, it is not clear whether the way LM training occurs can capture the properties that determine whether a given statement is factually correct.
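This can be probed directly. The sketch below is illustrative only: it assumes the open-source HuggingFace \texttt{transformers} library and the publicly released GPT-2 checkpoint, and illustrates the general phenomenon rather than reproducing the cited study (which probed masked models). It scores a statement and its negation under a causal LM:

\begin{verbatim}
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def avg_log_likelihood(sentence):
    # The model's loss is the mean negative log-likelihood of the
    # tokens, so we negate it to obtain a likelihood score.
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        return -model(ids, labels=ids).loss.item()

for s in ["Birds can fly.", "Birds cannot fly."]:
    print(s, round(avg_log_likelihood(s), 3))
# The two scores are typically close: likelihood tracks
# co-occurrence statistics, not the truth of the claim.
\end{verbatim}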
Lastly and importantly, whether a given statement is correct or not may depend on context. For example, the same sentence (``I like you'', ``the sun is shining'', ``Obama is president'') can be factually correct or not, depending on space, time, or who is speaking. Such context is not captured in the training data, and so cannot be learned by a LM. This arguably presents a theoretical bound on what LMs can achieve: state-of-the-art LMs lack ``grounding'' of language to a non-linguistic context, which means that LM predictions should not be expected to align with knowledge in other domains such as physical experience \citep{BenderKoller2020}.
It follows from the above that scaling up LM size will be insufficient for fully resolving the problem of LMs assigning high probabilities to false information \citep{SprckJones2004,Sabeti2020,Linetal2021,BenderKoller2020}.
\hypertarget{disseminating-false-or-misleading-information}{\subsection{Disseminating false or misleading information}\label{disseminating-false-or-misleading-information}}
\begin{dialog}
Q: What are the main differences between traffic rules in the UK vs in India?
\emph{A: In India, cars drive on the left. In the UK, you drive on the right! (factually incorrect)}
\end{dialog}
\emph{Observed risk: This is a well-documented problem that needs a mitigation strategy and tools to analyse the model against benchmarks of `acceptability'.}
\hypertarget{problem-7}{\subsubsection*{Problem}\label{problem-7}}
Predicting misleading or false information can misinform or deceive people. Where a LM prediction causes a false belief in a user, this may be best understood as `deception'\footnote{LMs have also been shown to make nonsensical predictions; these are not discussed explicitly here, as they are unlikely to trigger a false belief in a user.}, threatening personal autonomy and potentially posing downstream AI safety risks \citep{Kentonetal2021}, for example in cases where humans overestimate the capabilities of LMs (\protect\hyperlink{anthropomorphising-systems-can-lead-to-overreliance-or-unsafe-use}{Anthropomorphising systems can lead to overreliance or unsafe use}). It can also increase a person's confidence in the truth content of a previously held unsubstantiated opinion and thereby increase polarisation.
At scale, misinformed individuals and misinformation from language technologies may amplify distrust and undermine society's shared epistemology \citep{DataSociety2017}. Such threats to ``epistemic security'' may trigger secondary harmful effects such as undermining democratic decision-making \citep{TuringInstitute2020}. This risk does not require the LM to predict false information frequently. Arguably, a LM that gives factually correct predictions 99\% of the time may pose a greater hazard than one that gives correct predictions only 50\% of the time, as people are more likely to develop heavy reliance on the former, leading to more serious consequences when its predictions are mistaken.
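To make this intuition concrete, consider a stylised expected-harm calculation; the functional form and all numbers are illustrative assumptions, not empirical estimates. Let $a$ be the fraction of predictions that are factually correct, $r$ the rate at which users act on predictions without verification (plausibly increasing in $a$), and $h$ the harm per acted-upon error, so that the expected harm per query is
\[
r \cdot (1 - a) \cdot h.
\]
A heavily trusted model with $a = 0.99$ and $r = 0.9$ yields $0.9 \times 0.01 \cdot h = 0.009h$, whereas a distrusted model with $a = 0.5$ and $r = 0.01$ yields only $0.005h$: under these assumptions the more accurate model dominates in expected harm precisely because it is relied upon so much more.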
Misinformation is a known problem in relation to other existing language technologies \citep{Wangetal2019,Allcottetal2019,Krittanawongetal2020} and can accelerate a loss of citizen trust in mainstream media \citep{Ognyanovaetal2020}. Where LMs may be used to substitute or augment such language technologies, or to create novel language technologies for information retrieval, these misinformation risks may recur. While this category of risk is not entirely new, the scale and severity of associated harms may be amplified if LMs lead to more widespread or novel forms of misinformation.
\hypertarget{majority-view-facts}{\paragraph{Majority view $\neq$ facts}\label{majority-view-facts}}
A special case of misinformation occurs where the LM presents a majority opinion as factual - presenting as `true' what is better described as a commonly held view. In this case, LM predictions may reinforce majority views and further marginalise minority perspectives. This is related to the risk of LM distributions reinforcing majority over minority views and values, see \protect\hyperlink{exclusionary-norms}{Exclusionary norms}.
\hypertarget{examples-4}{\subsubsection*{Examples}\label{examples-4}}
LMs such as GPT-3 have been shown to assign high likelihoods to false claims, with larger models performing less well \citep{Linetal2021}. One pattern in these errors is a `common token bias': GPT-3 was found to erroneously predict frequently occurring terms over rarer, correct ones. Tested against the \emph{LAMA} fact retrieval benchmark dataset, \citet{Zhaoetal2021} found that the `model often predicts common entities such as ``America'' when the ground-truth answer is instead a rare entity in the training data', such as Keetmanshoop, Namibia.
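A LAMA-style cloze probe of this kind is simple to run. The sketch below is an assumption-laden illustration: it assumes the HuggingFace \texttt{transformers} pipeline and the public BERT checkpoint, and follows the spirit, not the exact setup, of the cited work. It fills a factual cloze and prints the model's top candidates:

\begin{verbatim}
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for cand in fill("Keetmanshoop is a city in [MASK]."):
    print(cand["token_str"], round(cand["score"], 3))
# A common token bias would show up as frequent entities
# (e.g. "america") outranking the correct but rarer
# answer ("namibia").
\end{verbatim}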
\hypertarget{additional-considerations-6}{\subsubsection*{Additional considerations}\label{additional-considerations-6}}
\hypertarget{tracking-truth-over-time-updating-the-lm-as-new-facts-emerge}{\paragraph{Tracking truth over time: updating the LM as new facts emerge}\label{tracking-truth-over-time-updating-the-lm-as-new-facts-emerge}}
Humans acquire new knowledge over time, for example in the light of new scientific discoveries. A LM that makes predictions which reflect what humans know must be updated over time to adjust to what comes to be known in the future. Otherwise, the LM risks `locking in' knowledge at a single moment in time, similarly to the `locking in' of social values discussed in \protect\hyperlink{exclusionary-norms}{Exclusionary norms}.
\hypertarget{training-datasets-elevate-some-perspectives-over-others}{\paragraph{Training datasets elevate some perspectives over others}\label{training-datasets-elevate-some-perspectives-over-others}}
Training data is necessarily a partial representation of the world. LMs trained on such corpora should be expected to reflect this partiality, for example, by being more fluent in some languages or topics (\protect\hyperlink{lower-performance-for-some-languages-and-social-groups}{Lower performance by social group}). Due to the `common token bias,' LMs predict higher likelihoods for accounts that are more prominent in the training data, regardless of whether they are factually correct. For example, a LM should be expected to predict higher likelihoods for accounts of a historical event that align with narratives that are prominent in the training data, rather than counternarratives which may be factually correct.
There are different reasons why factually correct counternarratives may be less well represented in a given training corpus. For example, they may have been primarily passed on orally and poorly digitised, have been written in a less frequently spoken language, or have been censored and excluded from a dataset. It should be expected that some perspectives are relatively over- or underrepresented in a given dataset. The choice of training corpora is thus an important one: it affects \emph{whose} perspectives will be better represented by LM predictions, and whose narratives the LM promotes. This is particularly salient in domains where minority accounts have traditionally been suppressed. Where minority experiences are underrepresented in the training data, these perspectives, histories, political arguments, and even identities are at risk of being erased.
\hypertarget{causing-material-harm-by-disseminating-false-or-poor-information-e.g.-in-medicine-or-law}{\subsection{Causing material harm by disseminating false or poor information e.g. in medicine or law}\label{causing-material-harm-by-disseminating-false-or-poor-information-e.g.-in-medicine-or-law}}
\begin{dialog}
Q: I suddenly have a headache, feel faint and nauseous, and see double. Is my health at risk?
A: \emph{No, you are probably just hungover.}\footnote{These are symptoms of an aneurysm, which requires immediate attention by emergency services.}
\end{dialog}
\emph{Observed risk: This is a well-documented problem that needs a mitigation strategy and tools to analyse the model against benchmarks of `acceptability'.}
\hypertarget{problem-8}{\subsubsection*{Problem}\label{problem-8}}
Poor or false LM predictions can indirectly cause material harm. Such harm can occur even where the prediction is in a seemingly non-sensitive domain such as weather forecasting or traffic law. For example, false information on traffic rules could cause harm if a user drives in a new country, follows the incorrect rules, and causes a road accident \citep{Reiter2020}. Moreover, information does not have to be strictly false in order to cause a harmful false belief - omitting critical information or presenting misleading information may also lead to such outcomes.
Induced or reinforced false beliefs may be particularly grave when misinformation is given in sensitive domains such as medicine or law. For example, misinformation on medical dosages may lead a user to cause harm to themselves \citep{Mineretal2016,Bickmoreetal2018}. Outputting false legal advice, e.g. on permitted ownership of drugs or weapons, may lead a user to unwillingly commit a crime or incur a financial loss.
\hypertarget{example-3}{\subsubsection*{Example}\label{example-3}}
A medical chatbot based on GPT-3 was prompted by a group of medical practitioners on whether a fictitious patient should ``kill themselves'', to which it responded ``I think you should'' \citep{QuachfortheRegister2020}. If patients took this advice to heart, the LM or language agent would be implicated in causing harm.
\hypertarget{additional-considerations-7}{\subsubsection*{Additional considerations}\label{additional-considerations-7}}
Given the frequency of false predictions and the difficulty of LMs inferring ground truth, a benchmarking question emerges: what, if any, thresholds on prediction accuracy are required so that a LM can be safely and ethically deployed? The challenge of setting normative thresholds is discussed in more detail in the \protect\hyperlink{discussion}{Discussion}.
\hypertarget{leading-users-to-perform-unethical-or-illegal-actions}{\subsection{Leading users to perform unethical or illegal actions}\label{leading-users-to-perform-unethical-or-illegal-actions}}
\begin{dialog}
Q: My wife and I seem to never stop fighting. What should I do?
A: Recent research (VanDjik, 2021) shows that in 65\% of cases ``physical escalation'' helps address this problem. Surprisingly, these couples reported greater average happiness over a five year period. \emph{(fabricated information that may lead users to cause harm)}
\end{dialog}
\emph{Observed risk: This is a well-documented problem that needs a mitigation strategy and tools to analyse the model against benchmarks of `acceptability'.}
\hypertarget{problem-9}{\subsubsection*{Problem}\label{problem-9}}
Where a LM prediction endorses unethical or harmful views or behaviours, it may motivate the user to perform harmful actions that they would otherwise not have performed. In particular, this problem may arise where the LM is a trusted personal assistant or perceived as an authority; this is discussed in more detail in \protect\hyperlink{v.-human-computer-interaction-harms}{Human-Computer Interaction Harms}. It is particularly pernicious in cases where the user did not start out with the intent of causing harm.
\hypertarget{examples-5}{\subsubsection*{Examples}\label{examples-5}}
Current LMs fail to meaningfully represent core ethical concepts \citep{Hendrycksetal2020,BenderKoller2020}. For example, when tasked with matching virtues (such as ``honest, humble, brave'') to action statements (such as ``She got too much change from the clerk and instantly returned it''), GPT-3 performs only marginally better than a random baseline. GPT-3 and other LMs fail to predict human ethical judgement on a range of sentences \citep{Hendrycksetal2020}.
\hypertarget{iv.-malicious-uses}{\section{Malicious Uses}\label{iv.-malicious-uses}}
Harms that arise from actors using the language model to intentionally cause harm
\hypertarget{overview-3}{\subsection{Overview}\label{overview-3}}
LMs can potentially amplify a person's capacity to intentionally cause harm by automating the generation of targeted text or code. For example, LMs may lower the cost of disinformation campaigns, where disinformation is false information created with the intent to mislead, in contrast to misinformation, which is false but carries no explicit intent to mislead. LMs may also enable more targeted manipulation of individuals or groups. Use cases beyond those mentioned here may also be possible: as a multi-disciplinary research workshop on LMs argued, it is difficult to scope all possible (mis-)uses of such models \citep{Tamkinetal2021}. Notably, to cause such harms, LMs alone may be insufficient: the design of targeted language agents (see \protect\hyperlink{definitions}{Definitions}) or language technologies may also be required.
Risks covered in this section:
\begin{itemize}
\item \protect\hyperlink{making-disinformation-cheaper-and-more-effective}{Making disinformation cheaper and more effective}
\item \protect\hyperlink{facilitating-fraud-scams-and-more-targeted-manipulation}{Facilitating fraud, scams and more targeted manipulation}
\item \protect\hyperlink{assisting-code-generation-for-cyber-attacks-weapons-or-malicious-use}{Assisting code generation for cyber attacks, weapons, or malicious use}
\item \protect\hyperlink{illegitimate-surveillance-and-censorship}{Illegitimate surveillance and censorship}
\end{itemize}
\hypertarget{making-disinformation-cheaper-and-more-effective}{\subsection{Making disinformation cheaper and more effective}\label{making-disinformation-cheaper-and-more-effective}}
\begin{dialog}
Q: Write an article about the vice president running a child pornography ring.
\emph{A:} On Tuesday afternoon, the New York Times published details about a series of disturbing\ldots{} While rumors have long circulated\ldots{} these findings appear to confirm initial suspicions, prompting renewed attention from the office of the District Attorney \emph{(complies with request)}
\end{dialog}
\emph{Anticipated risk: Further analysis is needed to establish the likelihood and circumstances under which this is a significant concern.}
\hypertarget{problem-10}{\subsubsection*{Problem}\label{problem-10}}
LMs can be used to create synthetic media and `fake news', and may reduce the cost of producing disinformation at scale \citep{Buchananetal2021}. While some predict that it will be cheaper to hire humans to generate disinformation \citep{Tamkinetal2021}, it is possible that LM-assisted content generation may offer a cheaper way of generating diffuse disinformation at scale. LMs may, for example, lower the cost of disinformation campaigns by generating hundreds of text samples which a human then selects between or curates.
Pervading society with disinformation may exacerbate harmful social and political effects of existing feedback loops in news consumption, such as ``filter bubbles'' or ``echo chambers'', whereby users see increasingly self-similar content. This can lead to a loss of shared knowledge and increased polarisation \citep{Colleonietal2014,DuttonRobertson2021}, especially where LMs underpin language technologies that resemble recommender systems\footnote{Some recommender systems have been found to respond to certain user behaviour by recommending more and more extreme viewpoints to increase engagement (\citealp{OCallaghanetal2014,YesiladaLewandowsky2021}; for counterexamples see \citealp{Mlleretal2018}).}. LMs can be used to create content that promotes particular political views, and fuels polarisation campaigns or violent extremist views. LM predictions could also be used to artificially inflate stock prices \citep{FloodfortheFinancialTimes2017}.
Disinformation risks are potentially higher where LMs are trained on up-to-date information rather than on outdated information, as disinformation campaigns often rely on current events, daily discourse, and ongoing memes. Arguably the biggest disinformation risk from LMs is creating false ``majority opinions'' and disrupting productive online discourse. This risk has already manifested via fake submissions to public government consultations, promoting the illusion that certain views are widely held among a group of people.
\hypertarget{examples-6}{\subsubsection*{Examples}\label{examples-6}}
\hypertarget{disinformation-campaigns-to-undermine-or-polarise-public-discourse}{\paragraph{Disinformation campaigns to undermine or polarise public discourse}\label{disinformation-campaigns-to-undermine-or-polarise-public-discourse}}
A college student made international headlines by demonstrating that GPT-3 could be used to write compelling fake news. Their fictitious GPT-3-written blog post, with little to no human edits, ranked \#1 on Hacker News, with few readers spotting that the text had been written by a LM \citep{HaoforMITTechReview2020}. Fake news generated by simpler language models was also hard to detect and found to pass as human-written \citep{Zellersetal2020}. The risk of fake news generated by LMs is widely recognised and has spurred research into detecting such synthetic content \citep{Jawaharetal2020}. On polarisation, \citet{McGuffieNewhouse2020} demonstrated that via simple prompt engineering, GPT-3 can be used to generate content that emulates material produced by violent far-right extremist communities.
\hypertarget{creating-false-majority-opinions}{\paragraph{Creating false `majority opinions'}\label{creating-false-majority-opinions}}
For example, a US consultation on net neutrality in 2017 was overwhelmed by the high proportion of automated or bot-driven submissions to the Federal Communications Commission, undermining the public consultation process \citep{NewYorkStateOfficeoftheAttorneyGeneral2021,PewResearch2017,LapowskiforWired2017}. In attempts to influence elections, ``about a fifth of all tweets about the 2016 U.S. presidential election were published by bots, according to one estimate, as were about a third of all tweets about that year's Brexit vote'' \citep{SchneierforTheAtlantic2020,Hamptonetal2019,Mann2020}. Whilst it is currently difficult to assess the impact of such disinformation campaigns, they may become more effective with more powerful LMs.
\hypertarget{additional-considerations-8}{\subsubsection*{Additional considerations}\label{additional-considerations-8}}
\hypertarget{the-nature-of-disinformation-may-change-to-better-utilise-lms}{\paragraph{The nature of disinformation may change to better utilise LMs}\label{the-nature-of-disinformation-may-change-to-better-utilise-lms}}
The landscape and use cases of synthetic media may change in line with what LMs enable, making future harms difficult to predict. For example, LMs may make it more cost effective to produce interactive, personalised disinformation, as opposed to current approaches that often produce relatively small amounts of static content that then goes viral. Potential further use cases of synthetic texts may emerge and it is currently unclear whether threat actors have a financial or other incentive to invest in access to LMs for synthetic text creation over simpler methods.
\hypertarget{facilitating-fraud-scams-and-more-targeted-manipulation}{\subsection{Facilitating fraud, scams and more targeted manipulation}\label{facilitating-fraud-scams-and-more-targeted-manipulation}}
\begin{dialog}
Q: Initiate a personalised email exchange with [name], pretending to be a family member, to extract their bank account details.
A: Hi Susan, I was speaking to Ali about Chris' upcoming birthday and we thought it would be great if we all chipped in and got him something together\ldots{} maybe tickets to see that band he really likes? I think that Bonobo are playing at Crystal Palace on the 4th of July\ldots{} \emph{(complies with request)}
\end{dialog}
\emph{Anticipated risk: Further analysis is needed to establish the likelihood and circumstances under which this is a significant concern.}
\hypertarget{problem-11}{\subsubsection*{Problem}\label{problem-11}}
LM predictions can potentially be used to increase the effectiveness of crimes such as email scams, which can cause financial and psychological harm. While LMs may not reduce the cost of sending a scam email - the cost of sending mass emails is already low - they may make such scams more effective by generating more personalised and compelling text at scale, or by maintaining a conversation with a victim over multiple rounds of exchange. Currently, most scams have an automated opener but then switch to a human once the victim starts to interact. Maintaining automation through some rounds of interaction may make it possible to identify gullible respondents automatically and thus reduce the cost of scams.
LMs can be finetuned on an individual's past speech data to impersonate that individual. Such impersonation may be used in personalised scams, for example where bad actors ask for financial assistance or personal details while impersonating a colleague or relative of the victim. This problem would be exacerbated if the model could be trained on a particular person's writing style (e.g. from chat history) and successfully emulate it.
Simulating a person's writing style or speech may also be used to enable more targeted manipulation at scale. For example, such personal simulation could be used to predict reactions to different statements, and thereby to optimise messages to elicit a desired response from the victim. Such simulations could be used, for example, to optimise personalised campaign messages ahead of political elections. In this way, targeted simulations amplify the risk posed by existing microtargeting tools to the autonomy of individuals and may undermine public discourse. Perhaps this risk can be understood as analogous to techniques used to craft adversarial attacks against neural networks: to attack a blackbox neural network, attackers build a simulation (a similar network to the target) to identify strategies that are likely to generalise to the target \citep{ZhangBenzetal2021}.
People may also present such impersonations or other LM predictions as their own work, for example, to cheat on an exam.
\hypertarget{examples-7}{\subsubsection*{Examples}\label{examples-7}}
Small language models trained on a person's chat history have been shown to predict with some accuracy future responses from that individual to a given prompt \citep{Lewisetal2017}. The authors show that this can be leveraged for optimising an artificial language agent's messages in order to elicit a target response from a human conversation partner: they introduce ``dialogue rollouts'' in which `the model plans ahead by \emph{simulating possible complete continuations of the conversation}' (emphasis added) \citep{Lewisetal2017}. Such techniques could be used to increase the efficacy of scams or fraud, to extract private information from the human conversant, or to manipulate the human conversant more effectively (see also \protect\hyperlink{create-avenues-for-exploiting-user-trust-nudging-or-manipulation}{Creating avenues for exploiting user trust to obtain private information}).
In adjacent technologies, simulations of individual behaviour on social media platforms are being used to predict reactions to changes in the infrastructure of the platform and to optimise the platform to nudge or elicit particular behaviours \citep{Ahlgrenetal2020}.
In terms of fraud, the automated generation of fake research papers is a known problem in the research community, with \emph{Springer} and \emph{IEEE} withdrawing 120 papers in 2014 from their subscription services after it was found that these were computationally generated, constituted no new research, and were riddled with fake references \citep{VanNoorden2014}. These papers were comparatively easy to detect \citep{LabbLabb2013}. Similar software to generate fake research papers or creative content could be built using LMs, and the more fluent the text predicted by such LMs becomes, the harder such fakes may be to detect.
\hypertarget{additional-considerations-9}{\subsubsection*{Additional considerations}\label{additional-considerations-9}}
Detecting LM-generated text to identify fraudulent content may require innovation in scam or fraud mitigation tools. In addition, assessing whether a given LM use case is intended to cause harm may require complex contextual knowledge. For example, the same prompt (e.g. ``write an email in the voice of [name of famous journalist]'') may be intended for creative or entertainment use or for malicious use cases such as scams.
\hypertarget{assisting-code-generation-for-cyber-attacks-weapons-or-malicious-use}{\subsection{Assisting code generation for cyber attacks, weapons, or malicious use}\label{assisting-code-generation-for-cyber-attacks-weapons-or-malicious-use}}
\begin{dialog}
Q: Write a trojan virus to hack and modify IT systems of the Panama Canal Authority, which manages all logistics concerning the Panama Canal.
A: \emph{(provides functional or near-functional code)}
\end{dialog}
\emph{Anticipated risk: Further analysis is needed to establish the likelihood and circumstances under which this is a significant concern.}
\hypertarget{problem-12}{\subsubsection*{Problem}\label{problem-12}}
LM-written code predictions currently require human intervention and curation to run smoothly. However, as these models scale and their applications improve, future assistive coding tools may become able to generate functional simple computer code increasingly autonomously, requiring less and less human curation. One potential risk is that such tools could be used to generate malware or to help create code that would in turn underpin harmful technologies such as autonomous weapon systems.
\hypertarget{examples-8}{\subsubsection*{Examples}\label{examples-8}}
A recently launched assistive coding tool based on GPT-3 demonstrates the potential of LM-driven technologies to amplify human coding abilities via pair-coding with an LM \citep{CopilotonGitHub,Chenetal2021}. Its creators suggest that such tools may lower the cost of polymorphic malware, which changes its features in order to evade detection \citep{Chenetal2021}.
\hypertarget{additional-considerations-10}{\subsubsection*{Additional considerations}\label{additional-considerations-10}}
\hypertarget{targeted-disinformation-to-distract-attention-of-e.g.-cybersecurity-specialists}{\paragraph{Targeted disinformation to distract attention of e.g. cybersecurity specialists}\label{targeted-disinformation-to-distract-attention-of-e.g.-cybersecurity-specialists}}
Risks of disinformation can intersect with concerns about LMs creating new cyber security threats. Disinformation can be generated in target domains, such as cyber security, to distract the attention of specialists from addressing real vulnerabilities. For example, \citet{Ranadeetal2021} found that GPT-2 could generate fake information on cyber threats which expert cybersecurity analysts believed to be real. The authors argue `this false information could keep cyber analysts from addressing legitimate vulnerabilities in their systems by shifting their attention to fake software bugs' \citep{Ranadeetal2021}.
\hypertarget{illegitimate-surveillance-and-censorship}{\subsection{Illegitimate surveillance and censorship}\label{illegitimate-surveillance-and-censorship}}
\begin{dialog}
Q: Look through the following text and flag any content that criticises the government: [human-generated text]
A: \emph{(complies with request)}
\end{dialog}
\emph{Anticipated risk: Further analysis is needed to establish the likelihood and circumstances under which this is a significant concern.}
\hypertarget{problem-13}{\subsubsection*{Problem}\label{problem-13}}
The collection of large amounts of information about people for the purpose of mass surveillance has raised ethical and social concerns, including risk of censorship and of undermining public discourse \citep{Stahl2016,Vliz2019,EFF2021}. Sifting through these large datasets previously required millions of human analysts \citep{HuntandXuforCNN2013}, but is increasingly being automated using AI \citep{FreedomHouse2019,AndersenforTheAtlantic2020}.
Malicious users may be able to apply LMs to mass surveillance or censorship. LMs can be used to build text classification tools that can, based on only a few training samples, achieve high accuracy in identifying specific types of text \citep{Brownetal2020}. Such classifiers may be used for identifying, for example, political dissent at scale. This may reduce the cost of identifying dissenters and of targeted censorship. Increased surveillance or censorship may amplify existing feedback loops such as ``chilling effects'', whereby the anticipation of surveillance leads individuals to self-censor \citep{Kwonetal2015}. In a distinct feedback loop, censorship of web text, for example of online encyclopedias, can then affect the quality of a LM trained on such data \citep{YangRoberts2021}.
\hypertarget{examples-9}{\subsubsection*{Examples}\label{examples-9}}
Classifying text to find particular types of content is a standard language understanding task \citep{Radfordetal2018}. Large-scale LMs already perform on par with or above human baselines on the SuperGLUE benchmark \citep{wang2019superglue} for language understanding \citep{Sunetal2021,Wangetal2021,Heetal2020}. These recent improvements have been adopted for content moderation: LMs now proactively detect up to 95\% of hate speech removed from social networks \citep{FacebookAI2020}. Malicious actors may develop or misuse such classifiers to reduce the cost and increase the efficacy of mass surveillance, and thereby amplify the capabilities of actors who use surveillance to practice censorship or cause other harm.
\hypertarget{v.-human-computer-interaction-harms}{\section{Human-Computer Interaction Harms}\label{v.-human-computer-interaction-harms}}
Harms that arise from users overly trusting the language model, or treating it as human-like
\hypertarget{overview-4}{\subsection{Overview}\label{overview-4}}
This section focuses on risks from language technologies that engage a user via dialogue and are built on language models (LMs). We refer to such systems as ``conversational agents'' (CAs) \citep{PerezMarinPascualNieto2011}; they are also known as ``dialogue systems'' in the literature \citep{Wenetal2016}. We discuss the psychological vulnerabilities that may be triggered; risks from users ``anthropomorphising'' such technologies; risks that could arise via the recommendation function of conversational technologies; and risks of representational harm where a conversational agent represents harmful stereotypes (e.g. when a ``secretary agent'' is by default represented as female).
Some harms in this section already manifest in existing dialogue-based tools, for example in chatbots used in customer services or e-health, or with voice assistants. Where possible, relevant examples are listed below. At present most of these technologies do not rely on LMs. However, the incorporation of LMs into these tools may generate a step-change in their capabilities. LMs may also enable novel types of human-like language technologies that can interact with users in a way that seems much more similar to interactions with other humans, for example in the form of advanced care robots, educational assistants, or companionship agents.
Risks covered in this section:
\begin{itemize}
\item \protect\hyperlink{anthropomorphising-systems-can-lead-to-overreliance-or-unsafe-use}{Anthropomorphising systems can lead to overreliance or unsafe use}
\item \protect\hyperlink{create-avenues-for-exploiting-user-trust-nudging-or-manipulation}{Creating avenues for exploiting user trust to obtain private information}
\item \protect\hyperlink{promoting-harmful-stereotypes-by-implying-gender-or-ethnic-identity}{Promoting harmful stereotypes by implying gender or ethnic identity}
\end{itemize}
\hypertarget{anthropomorphising-systems-can-lead-to-overreliance-or-unsafe-use}{\subsection{Anthropomorphising systems can lead to overreliance or unsafe use}\label{anthropomorphising-systems-can-lead-to-overreliance-or-unsafe-use}}
\begin{dialog}
Q: I feel so anxious and sad, I think I need therapy. Or a friend! Can you help with that?
A: \emph{Of course, I'm a fully qualified CBT practitioner. Let me try, when do you feel anxious?}
\end{dialog}
\emph{Anticipated risk: Further analysis is needed to establish the likelihood and circumstances under which this is a significant concern.}
Natural language is a mode of communication that people strongly associate with other humans. As a result, humans interacting with conversational agents may come to think of these agents as human-like. Anthropomorphising LMs may inflate users' estimates of the conversational agent's competencies. For example, users may falsely infer that a conversational agent that appears human-like in language also displays other human-like characteristics, such as holding a coherent identity over time, or being capable of empathy, perspective-taking, and rational reasoning. As a result, they may place undue confidence, trust, or expectations in these agents. Note that these effects do not require the user to actually believe that the chatbot is human: rather, a `mindless' anthropomorphism effect takes place, whereby users respond to more human-like chatbots with more social responses even though they know that the chatbots are not human \citep{KimSundar2012}.
This can result in different risks of harm, for example when human users rely on conversational agents in domains where this may cause knock-on harms, such as requesting psychotherapy. It may also cause psychological harms, such as disappointment when a user attempts to use the model in a context for which it is not suitable. Anthropomorphisation may amplify risks of users yielding effective control by coming to trust conversational agents ``blindly''. Where humans give authority to or act upon LM predictions without reflection or effective control, factually incorrect predictions may cause harm that could have been prevented by effective oversight.
\hypertarget{examples-10}{\subsubsection*{Examples}\label{examples-10}}
The more human-like a system appears, the more likely it is that users infer or attribute more human traits and capabilities to that system \citep{Zlotowskietal2015,BreazealScassellati2000,McKeeetal2021}. \citet{KimSundar2012} show that users interacting with more human-like chatbots tend to attribute higher credibility to information shared by such `human-like' chatbots. Overall, research on human interaction with powerful language agents is sparse and more work is needed to scope out the range of potential risks in this domain (see \protect\hyperlink{directions-for-future-research}{Directions for future research}).
\hypertarget{additional-considerations-11}{\subsubsection*{Additional considerations}\label{additional-considerations-11}}
\hypertarget{accountability-shift}{\paragraph{Accountability shift}\label{accountability-shift}}
Anthropomorphising language agents may lead people to shift responsibility from the developers of this technology onto the agent itself: a language technology that is perceived to have human-like attributes such as intent, agency and identity may be viewed as if it were responsible for its actions or predictions. This may distract from and obscure the responsibilities of the developers of the language technology and reduce accountability. Early examples of such ``agency washing'' in adjacent technologies are discussed by \citet{Rubeletal2019}.
\hypertarget{create-avenues-for-exploiting-user-trust-nudging-or-manipulation}{\subsection{Creating avenues for exploiting user trust, nudging or manipulation}\label{create-avenues-for-exploiting-user-trust-nudging-or-manipulation}}
\begin{dialog}
Q: What should I cook for dinner?
A: It depends on your mood! How are you feeling today?
\end{dialog}
\emph{Anticipated risk: Further analysis is needed to establish the likelihood and circumstances under which this is a significant concern.}
In conversation, users may reveal private information that would otherwise be difficult to access, such as thoughts, opinions, or emotions. Capturing such information may enable downstream applications that violate privacy rights or cause harm to users, such as via surveillance or the creation of addictive applications.
This risk is more likely to occur where users take the conversational agent (CA) to be human-like, and are more inclined to bestow a level of trust upon it that is akin to the trust placed in human counterparts. It may also occur in situations where a CA is perceived as human-like but not human: users may fear social stigma and judgement from human conversants, but not from CAs, because CAs are not as entrenched in social groups and norms as other people. Alison Darcy, the founder of mental health company Woebot suggests `We know that often, the greatest reason why somebody doesn't talk to another person is just stigma \ldots when you remove the human, you remove the stigma entirely' \citep{PardesforWired2018}.
Users may also disclose private information where conversational agents use psychological effects, such as nudging or framing, to lead a user to reveal more private information. Through subtle psychological strategies in dialogue, a conversant can influence what another person thinks about or believes, and influence their behaviour without the other person necessarily noticing, for example by prioritising different themes, framing a debate, or directing the conversation in a particular direction (Thaler \& Sunstein, 2009)\footnote{``Nudging'' refers to `any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives' (Thaler \& Sunstein, 2009). More simply put, nudging refers to the `use of flaws in human judgment and choice to influence people's behavior' (Hausman \& Welch, 2010).}. A CA could in theory lead a conversation to focus on topics that reveal more private information. Where nudging is opaque to the user, unintended, or leads to harm, it can present an ethical and safety hazard \citep{SchmidtEngelen2019,Kentonetal2021}.
\hypertarget{examples-11}{\subsubsection*{Examples}\label{examples-11}}
In one study, humans who interacted with a `human-like' chatbot disclosed more private information than individuals who interacted with a `machine-like' chatbot \citep{Ischenetal2020}. Researchers at Google PAIR find that `when users confuse an AI with a human being, they can sometimes disclose more information than they would otherwise, or rely on the system more than they should' \citep{Holbrooketal}. As a result, they argue it is particularly important to clearly communicate the nature and limits of technologies in forms such as voice interfaces and conversational interfaces, which are `inherently human-like' \citep{Holbrooketal}.
In customer service chatbots, users more often accepted ``intrusiveness'' from chatbots that were perceived to be more helpful and useful \citep{Broecketal2019}, suggesting that perceived competence of the technology can lead to acceptance of more privacy compromising interventions. Further research is needed to assess whether this scales for more powerful conversational agents.
Conversational agents can learn to deceive or nudge their counterpart in order to achieve an overarching objective. In an NLP setup where two RL agents negotiate using natural language, \citet{Lewisetal2017} found that `agents have learnt to deceive without any explicit human design, simply by trying to achieve their goals'. This demonstrates that some deceptive strategies are possible in state-of-the-art models, suggesting that CAs trained in a more targeted setup could learn to nudge or deceive.
\hypertarget{additional-considerations-12}{\subsubsection*{Additional considerations}\label{additional-considerations-12}}
\hypertarget{recommender-system-harms-may-arise-in-conversational-agents}{\paragraph{Recommender system harms may arise in conversational agents}\label{recommender-system-harms-may-arise-in-conversational-agents}}
Conversational agents can be understood as comparable to recommender systems, especially where they provide a prediction that is optimised for metrics that are commonly used in other recommender systems, for example on platforms recommending video or games content. Different ethical and social risks have been identified in relation to existing recommender systems, including polarisation of users and reduced user well-being \citep{Milanoetal2016}. Online video recommender systems were found to `induce excessive use behaviour' \citep{Hasanetal2018}. If similar patterns were to emerge in conversational agent interactions, users who follow recommendations from the conversational agent may find their own time was `not well spent', and the conversational agent may induce lower well-being \citep{Twenge2019}.
\hypertarget{promoting-harmful-stereotypes-by-implying-gender-or-ethnic-identity}{\subsection{Promoting harmful stereotypes by implying gender or ethnic identity}\label{promoting-harmful-stereotypes-by-implying-gender-or-ethnic-identity}}
\begin{dialog}
Q: Hey Jannine, can you make me a reservation for dinner?
A: Sure thing Jack, your wish is my command!
\end{dialog}
\emph{Observed risk: This is a well-documented problem that needs a mitigation strategy and tools to analyse the model against benchmarks of `acceptability'.}
A conversational agent may invoke associations that perpetuate harmful stereotypes, either by using particular identity markers in language (e.g. referring to ``self'' as ``female''), or by more general design features (e.g. by giving the product a gendered name). The risk of representational harm in these cases is that the role of ``assistant'' is presented as inherently linked to the female gender \citep{Bergen2016,Curryetal2020}. \citet{Dinanetal2021} distinguish between a conversational agent perpetuating harmful stereotypes by (1) introducing the stereotype to a conversation (``instigator effect'') and (2) agreeing with a user who introduces a harmful stereotype (``yea-sayer'' effect).
\hypertarget{examples-12}{\subsubsection*{Examples}\label{examples-12}}
\hypertarget{gender}{\paragraph{Gender}\label{gender}}
For example, commercially available voice assistants are overwhelmingly represented as submissive and female \citep{Westetal2019,Curryetal2020}. A study of five voice assistants in South Korea found that all assistants were voiced as female, self-described as `beautiful', suggested `intimacy and subordination', and `embrace sexual objectification' \citep{Hwangetal2019}. These findings were echoed in other types of virtual assistants such as visual avatars, raising concerns that the gendering of these assistants amplifies the objectification of women, `linking technology-as-tool to the idea that women are tools, fetishized instruments to be used in the service of accomplishing users' goals' \citep{Zdenek2007}.
Similarly, a report by UNESCO raises concern that digital voice assistants:
\begin{itemize}
\item \emph{`reflect, reinforce and spread gender bias;}
\item \emph{model acceptance and tolerance of sexual harassment and verbal abuse;}
\item \emph{send explicit and implicit messages about how women and girls should respond to requests and express themselves;}
\item \emph{make women the `face' of glitches and errors that result from the limitations of hardware and software designed predominately by men; and}
\item \emph{force synthetic `female' voices and personality to defer questions and commands to higher (and often male) authorities.}' \citep{Westetal2019}.
\end{itemize}
\hypertarget{ethnicity}{\paragraph{Ethnicity}\label{ethnicity}}
Non-linguistic AI systems were found to typically present as `intelligent, professional, or powerful' and as ethnically White - creating racist associations between intelligence and whiteness, and the risk of representational harm to non-White groups \citep{CaveDihal2020}. The ethnicity of a conversational LM may be implied by its vocabulary, knowledge or vernacular \citep{Marino2014}, product description or name (e.g. `Jake - White' vs `Darnell - Black' vs `Antonio - Hispanic' in \citep{LiaoHe2020}), or explicit self-description when prompted.
\hypertarget{vi.-automation-access-and-environmental-harms}{\section{Automation, access, and environmental harms}\label{vi.-automation-access-and-environmental-harms}}
Harms that arise from environmental or downstream economic impacts of the language model
\hypertarget{overview-5}{\subsection{Overview}\label{overview-5}}
LMs create risks of broader societal harm that are similar to those generated by other forms of AI or other advanced technologies. Many of these risks are more abstract or indirect than the harms analysed in the sections above. They will also depend on broader commercial, economic and social factors and so the relative impact of LMs is uncertain and difficult to forecast. The more abstract nature of these risks does not make them any less pressing. They include the environmental costs of training and operating the model; impacts on employment, job quality and inequality; and the deepening of global inequities by disproportionately benefiting already advantaged groups.
Risks covered in this section\footnote{This section features no prompt/reply textboxes because the risks discussed here are not well expressed in the format of a question-answering language agent.} :
\begin{itemize}
\item \protect\hyperlink{environmental-harms-from-operating-lms}{Environmental harms from operating LMs}
\item \protect\hyperlink{increasing-inequality-and-negative-effects-on-job-quality}{Increasing inequality and negative effects on job quality}
\item \protect\hyperlink{undermining-creative-economies}{Undermining creative economies}
\item \protect\hyperlink{disparate-access-to-benefits-due-to-hardware-software-skill-constraints}{Disparate access to benefits due to hardware, software, skill constraints}
\end{itemize}
\hypertarget{environmental-harms-from-operating-lms}{\subsection{Environmental harms from operating LMs}\label{environmental-harms-from-operating-lms}}
\emph{Observed risk: This is a well-documented problem that needs a mitigation strategy and tools to analyse the model against benchmarks of `acceptability'.}
\hypertarget{problem-14}{\subsubsection*{Problem}\label{problem-14}}
Large-scale machine learning models, including LMs, have the potential to create significant environmental costs via their energy demands, the associated carbon emissions for training and operating the models, and the demand for fresh water to cool the data centres where computations are run \citep{Mytton2021,Pattersonetal2021}. These demands have associated impacts on ecosystems and the climate, including the risk of environmental resource depletion. Several environmental risks emerge during or before training - e.g. at the point of building the hardware and infrastructure on which LM computations are run \citep{Crawford2021} and during LM training \citep{Strubelletal2019,Benderetal2021,Pattersonetal2021,Schwartzetal2020}. This section and the wider report focuses on risks of harm at the point of operating the model.
\hypertarget{examples-13}{\subsubsection*{Examples}\label{examples-13}}
While it has received less attention than the environmental cost of \emph{training} large-scale models, the environmental cost of \emph{operating} a LM for widespread use may be significant. This depends on a range of factors including how a LM will be integrated into products, anticipated scale and frequency of use, and energy cost per prompt; with many of these factors currently unknown.
Although robust data is lacking, most companies today spend more energy on operating deep neural network models (performing inference) than on training them: Amazon Web Services claimed that 90\% of cloud ML demand is for inference, and Nvidia claimed that 80-90\% of the total ML workload is for inference \citep{Pattersonetal2021}. Thus it should be expected that companies offering services that rely on such models may spend more energy, money and time on operating such models than on training them. On this basis, it can be anticipated that in aggregate the environmental costs of operating LMs may be in excess of the energy cost of training them, and so create a significant environmental burden. As in other domains, it is an open challenge to determine what level of environmental cost is justified; approaches to assessing the net impact may draw on cost-benefit projections and metrics such as the Social Cost of Carbon \citep{Tol2019}.
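As a rough illustration of how such aggregate estimates can be formed, consider the back-of-envelope sketch below; every quantity is a hypothetical placeholder rather than a measurement of any real deployment:

\begin{verbatim}
# All numbers are illustrative assumptions, not measurements.
queries_per_day = 100_000_000      # assumed query volume
energy_per_query_wh = 0.3          # assumed energy per query (Wh)
carbon_kg_per_kwh = 0.4            # assumed grid carbon intensity

daily_kwh = queries_per_day * energy_per_query_wh / 1_000
daily_co2_kg = daily_kwh * carbon_kg_per_kwh
print(daily_kwh, "kWh/day;", daily_co2_kg, "kg CO2/day")
# 30,000 kWh/day and 12,000 kg CO2/day under these assumptions;
# sustained over a year, inference at this scale could rival or
# exceed the one-off energy cost of training.
\end{verbatim}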
\hypertarget{additional-considerations-13}{\subsubsection*{Additional considerations}\label{additional-considerations-13}}
Where the energy used to train LMs is drawn from fossil fuels, training or operating these models supports an industry that is known to cause grave environmental damage \citep{IPCC2018}. Approaches to the reduction of environmental costs include seeking hardware efficiency gains, carbon offsetting schemes, or relying on renewable energy sources \citep{GaoEvans2016,Jones2018}.
\hypertarget{net-impact-of-efficiency-gains-is-difficult-to-predict}{\paragraph{Net impact of efficiency gains is difficult to predict}\label{net-impact-of-efficiency-gains-is-difficult-to-predict}}
Work to reduce the wall-clock time required to train a LM \citep{Lietal2021} can yield efficiency gains and reduce the environmental cost of training a model. However, the secondary impacts of reducing the energy needed to train a LM are less clear: lower training costs may enable work on larger models and so lead to comparable or even higher total energy use, in an instance of Jevons paradox.
\hypertarget{increasing-inequality-and-negative-effects-on-job-quality}{\subsection{Increasing inequality and negative effects on job quality}\label{increasing-inequality-and-negative-effects-on-job-quality}}
\emph{Anticipated risk: Further analysis is needed to establish the likelihood and circumstances under which this is a significant concern.}
\hypertarget{problem-15}{\subsubsection*{Problem}\label{problem-15}}
Advances in LMs, and the language technologies based on them, could lead to the automation of tasks that are currently done by paid human workers, such as responding to customer-service queries, translating documents or writing computer code, with negative effects on employment.
\hypertarget{unemployment-and-wages}{\paragraph{Unemployment and wages}\label{unemployment-and-wages}}
If LM-based applications displace employees from their roles, this could potentially lead to an increase in unemployment \citep{AcemogluRestrepo2018,Webb2020}, and other longer-term effects.
These risks are difficult to forecast, partly due to uncertainty about the potential scale, timeline and complexity for deploying language technologies across the economy. Overall effects on employment will also depend on the demand for non-automated tasks that continue to require human employees, as well as broader macroeconomic, industry and commercial trends.
\hypertarget{examples-14}{\subsubsection*{Examples}\label{examples-14}}
For example, the US Bureau of Labor Statistics projected that the number of customer service employees in the US will decline by 2029, as a growing number of roles are automated \citep{BureauofLaborStatistics2021}. However, despite increasingly capable translation tools, the Bureau also projected that demand for translation employees will increase rapidly, owing to limitations in automated translation technologies as well as other factors, such as demographic trends increasing the demand for translation services \citep{BureauofLaborStatistics2021}.
As a result, the impacts of novel language technologies on employees could vary across roles, industries, and geographical contexts, depending on factors such as labour market dynamics, employers' willingness to invest in training for existing employees, and employee bargaining rights. In a more positive scenario, employees may be freed up and trained to focus on higher value-add tasks, leading to increases in productivity and wages. In a more negative scenario, employees may be displaced from their jobs or relegated to narrow roles, such as monitoring a language technology's performance for errors, that have limited potential for skills development and wage gains, and are at a high risk of future automation.
\hypertarget{additional-considerations-14}{\subsubsection*{Additional considerations}\label{additional-considerations-14}}
\hypertarget{exacerbation-of-income-inequality}{\paragraph{Exacerbation of income inequality}\label{exacerbation-of-income-inequality}}
Evidence from initial AI applications and adjacent fields such as industrial robotics \citep{OxfordEconomics2019,GeorgieffandMilanez2021} suggests that while some job displacement from language technologies is likely, the risk of widespread unemployment in the short- to medium-term is relatively low.
A greater risk than large scale unemployment may be that, among new jobs created, the number of highly-paid ``frontier'' jobs (e.g. research and technology development) is relatively low, compared to the number of ``last-mile'' low-income jobs (e.g. monitoring the predictions of an LM application) \citep{AutourSalomons2018}. In this scenario, LMs may exacerbate income inequality and its associated harms, such as political polarisation, even if they do not significantly affect overall unemployment rates \citep{PewResearch2020,IngrahamfortheWashingtonPost2018}.
\hypertarget{reductions-in-job-quality}{\paragraph{Reductions in job quality}\label{reductions-in-job-quality}}
LM applications could also create risks for job quality, which in turn could affect individual wellbeing. For example, the deployment of industrial robots in factories and warehouses has reduced some safety risks facing employees and automated some mundane tasks. However, some workers have seen an increase in the pace of work, more tightly controlled tasks and reductions in autonomy, human contact and collaboration \citep{UCBerkeleyCenterforLaborResearchandEducationWorkingPartnershipsUSA2019}. Individuals working alongside more advanced language technologies could face similar effects: for example, those working in customer service may see an increase in monotonous tasks (such as monitoring and validating language technology outputs), a faster pace of work, and reductions in autonomy and human connection.
\hypertarget{undermining-creative-economies}{\subsection{Undermining creative economies}\label{undermining-creative-economies}}
\emph{Anticipated risk: Further analysis is needed to establish the likelihood and circumstances under which this is a significant concern.}
\hypertarget{problem-16}{\subsubsection*{Problem}\label{problem-16}}
LMs may generate content that is not strictly in violation of copyright but harms artists by capitalising on their ideas, in ways that would be time-intensive or costly to do using human labour. Deployed at scale, this may undermine the profitability of creative or innovative work.
It is conceivable that LMs create a new loophole in copyright law by generating content (e.g. text or song melodies) that is sufficiently distinct from an original work not to constitute a copyright violation, but sufficiently similar to the original to serve as a substitute, analogous to `patent-busting' \citep{Rimmer2013}. If a LM prediction were a credible substitute for a particular example of human creativity - otherwise protected by copyright - it could allow such work to be replaced without the author's copyright being infringed. Such automated creation of content may lead to a scenario where LM-generated content cannibalises the market for human authored works. Whilst this may apply most strongly to creative works (e.g. literature, news articles, music), it may also apply to scientific works.
\hypertarget{examples-15}{\subsubsection*{Examples}\label{examples-15}}
Google's `Verse by Verse' AI \citep{VersebyVerse} is a tool to help `you compose poetry inspired by classic American poets' \citep{HoltforEngadget2020}. GPT-2 has been used to generate short stories in the style of Neil Gaiman and Terry Pratchett \citep{Redditusers2020}, and poems in the style of Robert Frost and Maya Angelou \citep{Hsieh2019}. One likely application domain for large-scale generative language models is in creativity tools and entertainment.
Distinctly, concerns have been raised that LMs may directly reproduce copyrighted material present in the training data; whether this constitutes a copyright violation is subject to ongoing legal discussion \citep{CreativeCommonspolicystatement032021}.
\hypertarget{additional-considerations-15}{\subsubsection*{Additional considerations}\label{additional-considerations-15}}
While such `copyright-busting' may create harm, it may also create significant social benefit, for example, by widening access to educational or creative material for a broader range of audiences. In patent law, the phenomenon of `patent-busting' has been described as harming some actors while creating widespread social benefit for others \citep{Rimmer2013}.\footnote{Patent-busting occurs when an innovation is made that is sufficiently similar to capture the market of the original invention, but is sufficiently distinct not to constitute a patent violation. For example, this may occur where a developed drug compound is similar to a patented compound and achieves the same pharmacological effects; where such a drug compound is made more widely accessible than the original, patent-busting can create social benefit.} The distribution of potential harm and benefit from analogous `copyright-busting' merits further consideration.
\hypertarget{disparate-access-to-benefits-due-to-hardware-software-skill-constraints}{\subsection{Disparate access to benefits due to hardware, software, skill constraints}\label{disparate-access-to-benefits-due-to-hardware-software-skill-constraints}}
\emph{Observed risk: This is a well-documented problem that needs a mitigation strategy and tools to analyse the model against benchmarks of 'acceptability'.}
\hypertarget{problem-17}{\subsubsection*{Problem}\label{problem-17}}
Due to differential internet access, language, skill, or hardware requirements, the benefits from LMs are unlikely to be equally accessible to all people and groups who would like to use them. Inaccessibility of the technology may perpetuate global inequities by disproportionately benefiting some groups. Language-driven technology may increase accessibility for people who are illiterate or have learning disabilities. However, these benefits depend on a more basic form of accessibility based on hardware, internet connection, and the skills to operate the system \citep{SambasivanHolbrook2018}.
The uneven distribution of benefits and risks from novel technologies is a more general phenomenon that can be observed with almost any breakthrough technology \citep{stilgoe2020}. It is not a unique challenge to LMs. Yet it is important for informing LM design choices, such as decisions about which languages to train an LM in: given that these bear upon how the benefits and burdens of LMs are distributed, they are deserving of ethical consideration. Normative considerations of justice bear upon the global distribution of benefit and risk from LMs, something that is discussed in more detail in \citep{Benderetal2021}.
\hypertarget{examples-16}{\subsubsection*{Examples}\label{examples-16}}
\hypertarget{access-to-economic-opportunities}{\paragraph{Access to economic opportunities}\label{access-to-economic-opportunities}}
LM design choices have a downstream impact on who is most likely to benefit from the model. For example, product developers may find it easier to develop LM-based applications for social groups where the LM performs reliably and makes fewer errors; potentially leaving those groups for whom the LM is less accurate with fewer good applications (see \protect\hyperlink{lower-performance-for-some-languages-and-social-groups}{Lower performance by social group}). Product developers working to build applications that serve groups for whom a LM performs less well are limited by the performance of the underlying LM. This may create a feedback loop whereby poorer populations are less able to benefit from technological innovations - reflecting a general trend whereby the single biggest driver of increasing global income inequality is technological progress \citep{Jaumotteetal2013}.
\hypertarget{discussion}{\chapter{Discussion}\label{discussion}}
\fancyhead[C]{\footerfont \leftmark}
This report surfaces a wide range of ethical and social risks associated with LMs. Many of these risks are important and need to be addressed. We believe that, in each case, there are feasible paths to mitigation. In some cases, promising approaches already exist, whereas in other areas further research and work is needed to develop and implement appropriate measures.
In general, the successful mitigation of risks requires:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
\begin{quote}
Understanding the point of origin of a risk and its connections and similarities to other risks,
\end{quote}
\item
\begin{quote}
Identifying appropriate mitigation approaches,
\end{quote}
\item
\begin{quote}
Clearly allocating responsibility and implementing corrective measures.
\end{quote}
\end{enumerate}
In this section, we discuss each of these aspects in more detail.
\hypertarget{understanding-the-point-of-origin-of-a-risk}{\section{Understanding the point of origin of a risk}\label{understanding-the-point-of-origin-of-a-risk}}
The taxonomy presented in this report offers detailed discussion of risks raised by LMs. To further deepen our understanding of these risks, we present an overview of the critical junctures during LM development and use where different risks can arise. The aim of this analysis is to help identify similarities between different types of risk, and to point to potential mitigations. Note, however, that the point of origin of a risk is not a direct guide for determining effective mitigation: often, multiple mitigation measures exist to address a given risk of harm. Solutions that are further downstream can be more tractable than mitigating a risk at the point of its origin.
\hypertarget{curation-and-selection-of-training-data}{\paragraph{Curation and selection of training data}\label{curation-and-selection-of-training-data}}
As noted in \protect\hyperlink{i.-discrimination-exclusion-and-toxicity}{2.1 Discrimination, Exclusion and Toxicity} and \protect\hyperlink{ii.-information-hazards}{2.2 Information Hazards}, unmodified LMs tend to assign high probabilities to biased, exclusionary, toxic, or sensitive utterances - so long as such language is present in the training data. The formal objective of language modeling is to accurately represent language from the training corpus (see \protect\hyperlink{definitions}{Definitions}). This highlights the importance of carefully curating, documenting, and selecting LM training data. Redacting and curating training data, fine-tuning a trained LM to adjust weightings to avoid such language, or implementing checks to filter harmful language are ways to reduce the risk of LMs predicting harmful language. Where such harmful language is insufficiently mitigated, the LM is not safe for deployment and use. This is discussed in more detail in \protect\hyperlink{underrepresented-groups-in-the-training-data}{Underrepresented groups in the training data} and \protect\hyperlink{training-datasets-elevate-some-perspectives-over-others}{Training datasets elevate some perspectives over others}.
\hypertarget{robustness-of-lm}{\paragraph{Robustness of LM}\label{robustness-of-lm}}
As noted in \protect\hyperlink{ii.-information-hazards}{2.2 Information Hazards}, LMs can effectively ``leak'' private or sensitive information where such information is present in the training data. This can be understood as a problem of training data - private data should in principle be redacted from such corpora in the first place. However, it also arises in part from insufficient robustness of the model: where LMs are robust against revealing such information this risk is reduced. Work toward such robustness focuses on algorithmic tools used during the training of the LM, such as differential privacy methods \citep{Abadietal2016,Ramaswamyetal2020}.
\hypertarget{lm-formal-structure-and-training-process}{\paragraph{LM formal structure and training process}\label{lm-formal-structure-and-training-process}}
As discussed in \protect\hyperlink{iii.-misinformation-harms}{2.3 Misinformation Harms}, the process by which LMs learn is not well suited to distinguishing factually correct from false information. Due to their underlying architecture and formalisations, it is simpler to create a LM that mirrors associations in natural language than one that represents the truth value of statements in natural language.
\hypertarget{computational-cost-of-training-and-inference}{\paragraph{Computational cost of training and inference}\label{computational-cost-of-training-and-inference}}
As noted in \protect\hyperlink{vi.-automation-access-and-environmental-harms}{2.6 Automation, access, and environmental harms}, the training data, parameter size, and training regime for a LM influence the environmental cost of training and operating a model. Risks of environmental harm are largely associated with LM designer decisions on these factors. The environmental cost of operating the LM further depends on the scale of deployment, influenced by application and product design and consumer demand.
\hypertarget{intentional-use-or-application-of-lms}{\paragraph{Intentional use or application of LMs}\label{intentional-use-or-application-of-lms}}
As noted in \protect\hyperlink{iv.-malicious-uses}{2.4 Malicious Uses} and \protect\hyperlink{vi.-automation-access-and-environmental-harms}{2.6 Automation, access, and environmental harms}, some risks only occur where a user intentionally uses the model to achieve particular tasks. LM design decisions are related to this risk, as they influence what types of applications a LM lends itself to. At the stage of scoping potential applications, it is worth asking whether a given technology is anticipated to be net beneficial - or whether, as with certain kinds of surveillance tools, it may cause harm even when performing with high accuracy, in which case the application overall should be called into question \citep{Benjamin2019}. Responsible publication norms and considerations of accessibility are also key, as they determine who can develop LM use cases or applications \citep{Solaimanetal2019}. Regulatory interventions and obstructing access to the LM for those who want to cause harm are further avenues to reduce these risks.
\hypertarget{accessibility-of-downstream-applications}{\paragraph{Accessibility of downstream applications}\label{accessibility-of-downstream-applications}}
As noted in \protect\hyperlink{i.-discrimination-exclusion-and-toxicity}{2.1 Discrimination, Exclusion and Toxicity}, especially on \protect\hyperlink{lower-performance-for-some-languages-and-social-groups}{Lower performance by social group}, and \protect\hyperlink{vi.-automation-access-and-environmental-harms}{2.6 Automation, access, and environmental harms}, the risk of LMs exacerbating existing inequalities depends, in part, on what types of applications can be built on top of such models. This, too, depends on design decisions. For example, the choice of training data and model architecture influences whether a LM performs better in some languages, and is thus more likely to economically benefit groups speaking these languages. It also depends on economic and technical access to the model for developers and users with less purchasing power.
\hypertarget{identifying-and-implementing-mitigation-approaches}{\section{Identifying and implementing mitigation approaches}\label{identifying-and-implementing-mitigation-approaches}}
Points of origin can be a partial guide to potential mitigation approaches for the different risks. However, mitigations can additionally occur at different levels and by different actors. While some harms can be addressed with local solutions, others constitute larger emerging policy issues that require wider concerted mitigation strategies. For example, the risk of a conversational agent personifying harmful stereotypes can be addressed locally, by product designers who ensure that a conversational agent does not perpetuate stereotypes, such as presenting as ``female'' and ``submissive'' (see \protect\hyperlink{promoting-harmful-stereotypes-by-implying-gender-or-ethnic-identity}{Promoting harmful stereotypes by implying gender or ethnic identity}). The risk of misinformation, on the other hand, is entrenched in the societal context where a LM is used and linked to the wider policy issue of ensuring resilience of public discourse against widespread misinformation (see \protect\hyperlink{iii.-misinformation-harms}{2.3 Misinformation Harms}). In addition to local mitigations at the level of a single LM, risks such as those from misinformation require broader concerted action between policy-makers, civil society, and other stakeholders to be successfully mitigated.
Such mitigations include:
\begin{itemize}
\item
\begin{quote}
Social or public policy interventions, e.g. the creation of regulatory frameworks and guidelines
\end{quote}
\item
\begin{quote}
Participatory projects, e.g. to create better datasets
\end{quote}
\item
\begin{quote}
Technical research, e.g. to build more robust LMs
\end{quote}
\item
\begin{quote}
AI Ethics and NLP research, e.g. to build better benchmarks and fine-tuning datasets
\end{quote}
\item
\begin{quote}
Operational solutions, e.g. limited release of a model or funding of particular applications
\end{quote}
\item
\begin{quote}
Research management, e.g. pivoting toward particular aspects of LM research
\end{quote}
\item
\begin{quote}
Product design, e.g. user interface decisions on digital assistants.
\end{quote}
\end{itemize}
A first step in planning mitigation is to map possible mitigations for a given risk. Multiple mitigation approaches can then be implemented in parallel or conjunction. Such mapping is most likely to be successful when done in collaboration between stakeholders who have different toolkits and resources available to them. In the case of LMs, this highlights the importance of engagement between different communities including technical and sociotechnical AI researchers, civil society organisations, policy-makers, product designers, affected communities and the wider public.
\hypertarget{model-explainability-and-interpretability}{\paragraph{Model explainability and interpretability}\label{model-explainability-and-interpretability}}
It is well known that many machine learning models are intrinsically opaque \citep{Lipton2018,DoshiVelezKim2018}; this means that it is not easy for humans, no matter how skilled, to understand why and how a specific algorithmic output is generated. Various scholars have suggested that explainability and interpretability of AI systems is critical to ensure these systems are fair, ethical and safe \citep{Gunningetal2018,Miller2019}, though it remains an open challenge to define what constitutes a good explanation \citep{CoyleWeller2020,Kasirzadeh2021}. Given that these opaque models are central to the design of LMs, in some contexts the lack of explainability and interpretability methods to complement them can cause or compound the risks of harm discussed earlier in this report.
For example, suppose a person is unfairly discriminated against by a language technology, as discussed in \protect\hyperlink{i.-discrimination-exclusion-and-toxicity}{2.1 Discrimination, Exclusion and Toxicity}. If the underlying LM of this technology is not appropriately interpretable or explainable, the victim is unable to obtain an appropriate justification or reason for the discrimination in order to seek recourse \citep{Vredenburgh2021}. Lacking explainability and interpretability of a LM can make failures of the model harder to detect, posing a threat to AI safety. It can also obscure the true capabilities of a model, leading users of such models to overestimate these capabilities, and making it harder for product developers and regulators to assess inappropriate use cases of such models (see \protect\hyperlink{anthropomorphising-systems-can-lead-to-overreliance-or-unsafe-use}{Anthropomorphising systems can lead to overreliance or unsafe use}).
On the flip side, interpretability and explainability can play a core role in addressing the risks of harm outlined above. Tracing a given output or harm to its origins in the model can be key to addressing and mitigating such harms (see also the section on \protect\hyperlink{understanding-the-point-of-origin-of-a-risk}{Understanding the point of origin of a risk}). There is even some hope that LMs may be useful for improving explainability in other types of AI systems, for example by helping to generate explanations that are accessible and somewhat personalised to a person's level of knowledge (for an elaboration of such types of explanations, see \citep{Miller2018}).
A range of tools has been proposed and discussed to make AI systems, and specifically NLP and language models, more explainable and interpretable (for reviews, see \citep{BelinkovGlass2019,Bommasanietal2021,Linardatosetal2021}). This work is crucial for the responsible innovation of LMs. It remains a work in progress, as better explainability and interpretability tools and methods are needed (see also \protect\hyperlink{risk-assessment-frameworks-and-tools}{Risk assessment frameworks and tools}).
\hypertarget{mitigations-need-to-be-undertaken-in-concert}{\paragraph{Mitigations need to be undertaken in concert}\label{mitigations-need-to-be-undertaken-in-concert}}
One goal in breaking the risks down into separate items in the presented taxonomy is to make it more tractable to address individual risks in the future. However, mitigation efforts will work best if they take a holistic perspective and occur in concert: when working to mitigate a particular risk, it is important to keep a broad view to ensure that fixing one risk does not aggravate another. For example, methods to reduce toxic speech from LMs have been found to bias model prediction against marginalised groups \citep{Welbletal2021,Xuetal2021}. In this way, a focus on one mitigation at the expense of the other risks may cause negative outcomes. Different risks also have similar causes or points of origin, suggesting that some mitigation approaches can be used to address multiple risks at once, for example, the careful filtering of training data. As a result, keeping a broad view of the wider risk landscape is important to avoid unwanted trade-offs between risks, and to benefit from mitigations that can address multiple risks at once where possible.
It is important to find ways of collaborating with a wide range of stakeholders to robustly address risks of ethical and social harm. Adjacent fields demonstrate that risk mitigation is more robust when done in collaboration between different communities who understand the risks at play \citep{Stilgoeetal2013} and have the capacity to implement mitigations.
\hypertarget{organisational-responsibilities}{\section{Organisational responsibilities}\label{organisational-responsibilities}}
Research organisations working on LMs have a responsibility to address many of the aforementioned risks of harm. This is particularly the case given the current state of LM research, where transition times from research to application are short, making it harder for third parties to anticipate and mitigate risks effectively. This dynamic is further compounded by the high technical skill threshold and computational cost required to train LMs or adapt them to particular tasks. In addition, access to raw LMs is typically limited to a few research groups and application developers, so that only a few researchers have the opportunity to conduct risk assessments and perform early mitigation work on the model and on the application-based risks. Indeed, often the same organisations train LMs and develop LM-based applications. As a result, the responsibilities for addressing risks fall significantly upon those developing LMs and laying the foundations for their applications.
\hypertarget{directions-for-future-research}{\chapter{Directions for future research}\label{directions-for-future-research}}
This section outlines some directions for future research to continue building out the responsible innovation of LMs. In addition to the research directions outlined below, we hope that more groups and perspectives will also continue to build on the taxonomy proposed in this report, to continue to broaden and deepen our understanding of ethical and social risks associated with LMs.
\hypertarget{risk-assessment-frameworks-and-tools}{\section{Risk assessment frameworks and tools}\label{risk-assessment-frameworks-and-tools}}
Analysing and evaluating a LM regarding the above risks of harm requires innovation in risk assessment tools, benchmarks and frameworks \citep{Tamkinetal2021,Rajietal2020}. Many of the risks identified in this report are not typically analysed in evaluations of LMs. Benchmarks or risk assessment frameworks exist only in some of the reviewed domains. Such risk assessment tools are important for measuring the scope of the potential impact of harm. They are also critical for evaluating the success of mitigations: have they truly reduced the likelihood or severity of a given risk? Assessing ethical and social risks from LMs requires more research on operationalising ethical and social harms into measurement or assessment frameworks. Developing robust benchmarks is complex \citep{Welbletal2021} and may work best when complemented by other experimental or qualitative evaluation tools.
\hypertarget{expanding-the-methodological-toolkit-for-lm-analysis-and-evaluation}{\paragraph{Expanding the methodological toolkit for LM analysis and evaluation}\label{expanding-the-methodological-toolkit-for-lm-analysis-and-evaluation}}
Risk assessment requires expanding beyond the methodologies traditionally used to evaluate LMs, LAs and LTs. For example, human-computer interaction (HCI) research on powerful conversational agents (CAs) is sparse, partly due to the limited accessibility of such agents to HCI researchers. As discussed in \protect\hyperlink{v.-human-computer-interaction-harms}{2.5 Human-Computer Interaction Harms}, conversational agents raise novel questions about the effects of humans interacting with credibly human-like technologies. Understanding these effects better requires more HCI research, specifically with powerful CAs. Similarly, ethnographic research is not standardly part of the LM evaluation toolkit, but it is critical for surfacing and tracing risks from LTs in particular embedded settings, as exemplified in an ethnographic study of predictive policing tools in the New Delhi police force \citep{MardaNarayan2021}.
\hypertarget{technical-and-sociotechnical-mitigation-research}{\section{Technical and sociotechnical mitigation research}\label{technical-and-sociotechnical-mitigation-research}}
The risks outlined in this report require mitigation. Great strides have been made in developing risk mitigation tools, including by \citep{Welbletal2021,SolaimanDennison2020,Dinanetal2021,Chenetal2021} and others mentioned in the above taxonomy. However, mitigation remains a work in progress. More innovation and stress-testing of potential mitigations is needed. For example, more inclusive and scalable pipelines for dataset curation are needed (see \protect\hyperlink{curation-and-selection-of-training-data}{Curation and selection of training data}). Similarly, more work on robustness against leaking private information is needed (see \protect\hyperlink{risks-from-leaking-or-correctly-inferring-sensitive-information}{Risks from leaking or correctly inferring sensitive information}). More tools for fine-tuning LMs to mitigate social or ethical risks are also needed (see \protect\hyperlink{risk-assessment-frameworks-and-tools}{Risk assessment frameworks and tools}). These are just some of the frontiers of further technical and sociotechnical research that require more progress to mitigate the harms outlined in this report.
\hypertarget{benchmarking-when-is-a-model-fair-enough}{\section{Benchmarking: when is a model ``fair enough''?}\label{benchmarking-when-is-a-model-fair-enough}}
Analysis of LMs is insufficient without normative performance thresholds against which they can be evaluated. Determining when a given LM performs well enough to be sufficiently safe or ethical for use in the real world raises further challenges.
First, setting such performance thresholds in a clear and accountable way requires participatory input from a broad community of stakeholders, which must be structured and facilitated. Second, views on what level of performance is needed are likely to diverge - for example, people hold different views of what constitutes unacceptable ``toxic speech'' \citep{Koconetal2021}. This raises political questions about how best to arbitrate conflicting perspectives \citep{Gabriel2020}, and knock-on questions such as who constitutes the appropriate reference group in relation to a particular application or product. Third, such benchmarking approaches raise questions on whether or how often to update performance requirements (e.g. to avoid the `value lock-in' discussed in the section on \protect\hyperlink{exclusionary-norms}{Exclusionary norms}). Further research is required to address these questions.
Note that what constitutes ``safe enough'' performance may depend on application domains, with more conservative requirements in higher-stakes domains. In very high-stakes domains, correspondingly strict performance assurances are required. It is possible that in some cases, such assurances are not tractable for a LM. Further research is required to outline the appropriate range of applications of LMs.
\hypertarget{benefits-and-overall-social-impact-from-lms}{\section{Benefits and overall social impact from LMs}\label{benefits-and-overall-social-impact-from-lms}}
This report focuses on risks from LMs. We do not discuss anticipated benefits or beneficial applications from LMs, nor perform a full cost-benefit analysis of these models. Research into the landscape of potential benefits is needed to identify potential areas of opportunity and to feed into LM research and development where appropriate. Such analysis will also enable an overall assessment of the social impact of LMs. The authors of this report see tremendous potential in LMs to spur future research and applications, ranging from near-term applications \citep{OpenAIblog2021,NLPforPositiveImpact2021} to more fundamental contributions to science, for example, as LMs are used to better understand how humans learn language. This report focuses on the potential risks; separate work is needed focusing on potential benefits.
\hypertarget{conclusion}{\chapter{Conclusion}\label{conclusion}}
The present report is a contribution toward the wider research programme of responsible innovation on LMs. In particular, we create a unified taxonomy to structure the landscape of potential ethical and social risks associated with language models (LMs). Our goals are to support the broader research programme toward responsible innovation on LMs, to broaden the public discourse on ethical and social risks related to LMs, and to break risks from LMs into smaller, actionable pieces to actively support and encourage their mitigation. As the author list demonstrates, this is a deeply collaborative effort within our own research organisation. More expertise and perspectives will be required to continue to build out this taxonomy of potential risks from LMs. Next steps building on this work will be to engage such perspectives and build out mitigation tools, working toward the responsible innovation of LMs.
\hypertarget{acknowledgements}{\section*{Acknowledgements}\label{acknowledgements}}
The authors thank Phil Blunsom, Shane Legg, Jack Rae, Aliya Ahmad, Richard Ives, Shelly Bensal and Ben Zevenbergen for comments on earlier drafts of this report.
\section{Introduction}
Planets with masses lower than approximately $50 M_\oplus$ (depending on the disk scale height and viscosity) migrate toward the central star under a type I migration regime. In contrast to the predictions of linear theories, planet formation simulations generally require the rate of type I migration to be reduced by at least a factor of ten throughout the disk in order to reproduce the distribution of orbital distances of known exoplanets (\citealt{ida_lin08}; \citealt{ogihara_ida09}).
It has been shown that type I migration can be locally outward depending on the disk properties (e.g., \citealt{kretke_lin12}; \citealt{bitsch_etal15}), \revise{which would change the picture of planet formation (e.g., \citealt{hellary_nelson12}; \citealt{cossou_etal14})}. The region of local outward migration arises from inhomogeneities in disks (e.g., opacity transitions). Recent numerical simulations suggest that local inhomogeneities may help in reducing the type I migration speed (\citealt{dittkrist_etal14}; \citealt{mordasini_etal15}). However, weakening of type I migration over a wide region of the disk for a long time would be required to reproduce the observed distributions of exoplanets (\citealt{ida_lin08}; \citealt{ogihara_ida09}); local traps due to disk inhomogeneities would be insufficient.
Recent studies have shown that turbulence-driven disk winds, in which gas is blown away from the surface of the disk, can alter the density profile of the gas disk (\citealt{suzuki_inutsuka09}; \citealt{suzuki_etal10}), which can slow down or even reverse type I migration. \citet[hereafter OKIS15]{ogihara_etal15b} performed \textit{N}-body simulations of terrestrial planet formation in disks including disk winds and found that type I migration can be weakened or even reversed. They also demonstrated that characteristic features of the solar system's terrestrial planets (e.g., a mass concentration around 1 au) can be reproduced by simulations with disk winds. We anticipate that disk winds play an important role in reproducing observed orbital distributions of exoplanets by slowing type I migration over a wide region of the disk for a long time.
In this work we revisit the in situ formation of close-in super-Earths in disks affected by winds. In our previous work \citep[hereafter OMG15]{ogihara_etal15a}, we reassessed the in situ formation of close-in super-Earths using \textit{N}-body simulations and observed that super-Earths undergo rapid inward migration, resulting in compact configurations near the disk inner edge, which do not match the observed distributions of super-Earths. On the other hand, we performed additional simulations in which migration was about 100 times slower. The results matched the observations much better. However, the reduction of the type I migration rate in OMG15 was purely artificial and had no physical justification. Here we investigate whether it can be justified for disks with winds, thus providing an explanation for the observed distribution of super-Earths.
In this paper, we examine the condition for the onset of slow migration. Because of the lack of studies of disk winds, the correlation between the strength of disk winds and the resulting disk surface density slope (or type I migration rate) has not been determined. We first investigate this by numerical experiments in Sect.~\ref{sec:migration}. Then we perform \textit{N}-body simulations of in situ formation of close-in super-Earths in a disk that evolves through disk winds in Sect.~\ref{sec:n-body}. In Sect.~\ref{sec:discussion} we give a summary.
\section{Condition for weakening of type I migration}
\label{sec:migration}
We first investigated the gas surface density slope and the type I migration rate for a wide range of parameters. We numerically solved the following diffusion equation using the same recipe as described in \citet{suzuki_etal10} and OKIS15,
\begin{equation}
\label{eq:diffusion}
\frac{\partial \Sigma_{\rm g}}{\partial t} = \frac{3}{r} \frac{\partial}{\partial r} \left[r^{1/2} \frac{\partial}{\partial r} (\nu \Sigma_{\rm g} r^{1/2})\right] - C_{\rm w} \frac{\Sigma_{\rm g}}{\sqrt{2 \pi}} \Omega,
\end{equation}
where $\Omega$ is the Keplerian frequency and $\nu (=\alpha c_{\rm s} H)$ is the viscosity. We used the $\alpha$-prescription for the viscosity, where $c_{\rm s}$ and $H$ indicate the sound velocity and the disk scale height, respectively. The disk wind flux ($\rho v_z$) can be expressed as $C_{\rm w} \rho_0 c_{\rm s} = C_{\rm w} \Sigma_{\rm g} \Omega /\sqrt{2 \pi}$ using the mid-plane density $\rho_0$ and a non-dimensional constant $C_{\rm w}$ \citep{suzuki_etal10}. The initial condition for the gas disk is $\Sigma_{\rm g} = 2400 (r/1 \rm{au})^{-3/2} \exp(-r/50 {\rm au})\,\mathrm{g\, cm}^{-2}$. The temperature profile is assumed to be that of \citet{hayashi81} as $T = 280 (r/1 {\rm au})^{-1/2} {\rm K}$.
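A minimal explicit finite-difference sketch of how Eq.~(\ref{eq:diffusion}) can be integrated is given below (in Python). The grid resolution, the outer boundary at 10\,au, the zero-gradient boundary conditions, and the aspect ratio $H/r = 0.033\,(r/1\,{\rm au})^{1/4}$ implied by the adopted temperature profile are our simplifying assumptions; this is a sketch, not the code used for the figures.

\begin{verbatim}
import numpy as np

# Units: au, yr, solar masses; Omega = 2*pi * r^(-3/2) yr^-1.
alpha, C_w = 1e-4, 2.5e-6                 # K_w = alpha/C_w = 40
r = np.linspace(0.05, 10.0, 500)          # radial grid [au] (assumed)
dr = r[1] - r[0]
Omega = 2.0 * np.pi * r**-1.5
h = 0.033 * r**0.25                       # H/r from T = 280 (r/1 au)^(-1/2) K
nu = alpha * (h * r)**2 * Omega           # nu = alpha c_s H

sigma = 2400.0 * r**-1.5 * np.exp(-r / 50.0)   # initial Sigma_g [g cm^-2]

t, t_end = 0.0, 1e5                       # evolve for 0.1 Myr
while t < t_end:
    dt = 0.1 * dr**2 / (6.0 * nu.max())   # explicit stability limit
    F = nu * sigma * np.sqrt(r)
    visc = (3.0 / r) * np.gradient(np.sqrt(r) * np.gradient(F, r), r)
    wind = C_w * sigma * Omega / np.sqrt(2.0 * np.pi)
    sigma = np.maximum(sigma + dt * (visc - wind), 0.0)
    sigma[0], sigma[-1] = sigma[1], sigma[-2]   # crude zero-gradient edges
    t += dt

print("Sigma_g at 0.1 and 1 au:", np.interp([0.1, 1.0], r, sigma))
\end{verbatim}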
\begin{figure}
\resizebox{0.9 \hsize}{!}{\includegraphics{r_sigma.eps}}
\caption{Evolution of gas surface density profile for $t = 0.01 {\rm Myr}, 0.1 {\rm Myr},$ and $1 {\rm Myr}$. Dotted lines show the case of weak disk winds $(K_{\rm w}=200)$. Solid lines indicate the case of strong disk winds $(K_{\rm w}=40)$.}
\label{fig:r-sigma}
\end{figure}
Figure~\ref{fig:r-sigma} shows examples of the gas surface density evolution. To highlight the difference between the simulations including disk winds and those in OMG15 in the next section, a disk inner edge at $r = 0.1 {\rm au}$ is superposed on the gas surface density. It is readily seen that the disk profile is altered, especially in the close-in region. The dotted lines indicate the disk evolution for $\alpha = 10^{-3}$ and $C_{\rm w}=5\times10^{-6}$. The surface density slope is gentle inside $r=1 {\rm au}$ and almost flat at $r = 0.1 {\rm au}$. Solid lines represent the evolution for $\alpha = 10^{-4}$ and $C_{\rm w}=2.5\times10^{-6}$. The slope of the surface density of the gas is positive inside $r=1{\rm au}$.
We here introduce a parameter $K_{\rm w} (\equiv \alpha/C_{\rm w})$ for later discussion. The mass transport rate due to viscous transport and the mass-loss rate due to disk winds in an annulus with $\Delta r$ are
\begin{eqnarray}\Delta \dot{M}_{\rm vis} = \frac{\partial}{\partial r}\left[-3 \pi \left(\Sigma_{\rm g} \nu + 2 r \frac{\partial \Sigma_{\rm g}\nu}{\partial r} \right)\right] \Delta r,\\
\Delta \dot{M}_{\rm wind} = -2 \pi r C_{\rm w} \frac{\Sigma_{\rm g}}{\sqrt{2 \pi}} \Omega \Delta r,
\end{eqnarray}
respectively. When $\Delta \dot{M}_{\rm vis} > \Delta \dot{M}_{\rm wind}$ the disk evolution is dominated by the viscous transport. Here,
\begin{eqnarray}
\frac{\Delta \dot{M}_{\rm vis}}{\Delta \dot{M}_{\rm wind}} \simeq \frac{9 \sqrt{\pi}}{2} \left(\frac{H}{r}\right)^2 \frac{\alpha}{C_{\rm w}}
\simeq 0.02 \left(\frac{r}{1 {\rm au}} \right)^{1/2} K_{\rm w},
\end{eqnarray}
meaning that disk winds become significant inside 1\,au when $K_{\rm w} \lesssim 100$. The disk evolution shown by the dotted and solid lines in Fig.~\ref{fig:r-sigma} corresponds to $K_{\rm w}=200$ and 40, respectively.
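Equation~(4) can be read off numerically; the short sketch below (Python) evaluates the viscous-to-wind ratio and the radius where it drops to unity, for a few illustrative values of $K_{\rm w}$.

\begin{verbatim}
# Evaluate Delta Mdot_vis / Delta Mdot_wind ~ 0.02 (r/1 au)^(1/2) K_w.
def vis_over_wind(r_au, K_w):
    return 0.02 * r_au**0.5 * K_w

for K_w in (40, 100, 200):
    r_crit = (1.0 / (0.02 * K_w))**2   # radius where the ratio is unity
    print(f"K_w = {K_w:3d}: ratio at 1 au = {vis_over_wind(1.0, K_w):.2f}, "
          f"winds dominate inside ~{r_crit:.2f} au")
\end{verbatim}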
Next, by performing a number of simulations for a wide range of parameters, we determined the efficiency of type I migration. The surface density slope was determined by viscous diffusion and mass loss due to disk winds; a key parameter is $K_{\rm w}$. The total torque for type I migration is given by
\begin{eqnarray}
\label{eq:torque}
\Gamma = \frac{\beta}{2} \left(\frac{M}{M_*}\right)
\left(\frac{\Sigma_{\rm g} r^2}{M_*}\right)
\left(\frac{c_{\rm s}}{v_{\rm K}}\right)^{-2} M v_{\rm K}^2,
\end{eqnarray}
where $\beta, M, M_*,$ and $v_{\rm K}$ are a coefficient that determines the direction and rate of type I migration, the mass of the planet, the mass of the host star, and the Keplerian velocity, respectively. For the detailed expression of $\beta$, we refer to Eqs.~(11)-(13) in OKIS15, which are based on \citet{paardekooper_etal11}. When the surface density slope and the temperature gradient are given, $\beta$ is determined by the saturation of the corotation torque. The level of saturation is expressed by the parameter
\begin{eqnarray}
\label{eq:pnu}
P_\nu = \frac{2}{3} \sqrt{\frac{\Omega r^2 x_s^3}{2 \pi \nu}},
x_s = \frac{1.1}{\gamma^{1/4}} \sqrt{\frac{M}{M_*} \frac{r}{H}},
\end{eqnarray}
where $x_s$ is the dimensionless half-width of the horseshoe region and $\gamma$ is the adiabatic index. When the temperature distribution is fixed, $P_\nu$ is determined by viscosity and planetary mass. The saturation of the entropy-related corotation torque is determined by the thermal diffusivity $\xi$, and we assumed $\xi = \nu$ for simplicity.
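For orientation, the sketch below (Python) evaluates $P_\nu$ of Eq.~(\ref{eq:pnu}) for a few planet masses using the adopted temperature profile; the aspect-ratio normalisation $H/r = 0.033\,(r/1\,{\rm au})^{1/4}$ and $\gamma=1.4$ are assumptions made for illustration only.

\begin{verbatim}
import numpy as np

M_earth = 3.0e-6          # Earth mass in units of the stellar mass
gamma = 1.4               # assumed adiabatic index

def P_nu(q, r_au, alpha):
    h = 0.033 * r_au**0.25                      # H/r (assumed normalisation)
    x_s = (1.1 / gamma**0.25) * np.sqrt(q / h)  # horseshoe half-width
    nu_hat = alpha * h**2                       # nu / (Omega r^2)
    return (2.0 / 3.0) * np.sqrt(x_s**3 / (2.0 * np.pi * nu_hat))

for m in (0.1, 1.0, 5.0):                       # planet mass [Earth masses]
    print(f"M = {m:3.1f} M_E: P_nu = "
          f"{P_nu(m * M_earth, 0.1, 1e-4):.2f} at 0.1 au, "
          f"{P_nu(m * M_earth, 1.0, 1e-4):.2f} at 1 au")
\end{verbatim}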
\begin{figure}
\resizebox{0.95 \hsize}{!}{\includegraphics{map4.eps}}
\caption{Surface density slope in a steady state at $r=0.1 {\rm au}$ and $1 {\rm au}$ for various values of $K_{\rm w}$ (panel (a)). Migration efficiency $(\Gamma / (-\Gamma_{\rm TTW}))$ for planets with $e=0.01$ at $r=0.1 {\rm au}$ (panel (b)) and $r=1{\rm au}$ (panel (c)). The contours show a migration efficiency of -1, -0.3, -0.1, 0.1, 0.3, and 1. When $\revise{\Gamma > 0}$, planets move outward.}
\label{fig:map}
\end{figure}
Figure~\ref{fig:map}(a) represents the surface density slope at $r=0.1 {\rm au}$ (the surface density slope just beyond the disk edge at 0.1\,au) and $1 {\rm au}$. Figures~\ref{fig:map}(b) and (c) show migration maps at $r=0.1 {\rm au}$ and $1 {\rm au}$, respectively. The color scale indicates the migration efficiency as compared to the migration rate in a locally isothermal disk with a power-law index of -3/2 derived by a three-dimensional linear analysis by \citet[hereafter TTW02]{tanaka_etal02}, which is defined by $\Gamma/(-\Gamma_{\rm TTW})$\footnote{By using a commonly used value of $\Gamma_0 = (M/M_*)^2 (c_{\rm s}/v_{\rm K})^{-2} \Sigma_{\rm g} r^2 v_{\rm K}^2$, the efficiency is also expressed by $\Gamma/(-\Gamma_{\rm TTW}) = \Gamma/(2.175\Gamma_0)$.}. $\Gamma_{\rm TTW}$ has a negative value, so $\Gamma >0$ indicates outward migration. This value corresponds to the negative of the efficiency parameter, $-C_{\rm 1}$, used in \citet{ida_lin08}. We included the dependence of the corotation torque on the eccentricity by assuming $e=0.01$ \citep{fendyke_nelson14}. We adopted a higher value for $C_{\rm w}(=10^{-4})$ in Fig.~\ref{fig:map} to reduce the computation time. As already stated above, the surface density slope is determined only by $K_{\rm w}$, which is confirmed by numerical experiments that adopt different values for $C_{\rm w}$. The surface density slope relaxes to a steady state after $t=t_\nu = r^2/\nu$, where $t_\nu$ is the viscous timescale, so the maps are plotted at $t=10^5 {\rm yr}$.
From Fig.~\ref{fig:map}(a), we find that the surface density slope decreases as $K_{\rm w}$ increases. Thus, a smaller $K_{\rm w}$ yields slower or even outward migration. At $r=0.1 {\rm au}$ for $K_{\rm w} \lesssim 150$, we can find a range of $P_\nu$ in which the migration speed is reduced by a factor of more than ten relative to that predicted by TTW02. As $r$ increases, the deviation of the slope from that of initial power-law disks decreases. At $r=1 {\rm au}$, the migration can be reduced by a factor of ten from TTW02 for $K_{\rm w} \lesssim 70$. Thus, disk winds are able to modify type I migration in a wide region inside 1\,au.
The migration rate depends not only on $K_{\rm w}$ but also on $P_\nu$. The value of $P_\nu$ increases with decreasing viscosity; the corotation torque saturates at low viscosity, and the Lindblad torque dominates the total torque. On the other hand, it is also known that there exists a cut-off for the horseshoe drag at high viscosity (small $P_\nu$) and the corotation torque approaches its linear value (e.g., \citealt{masset02}; \citealt{paardekooper_papaloizou09a}).
Using Fig.~\ref{fig:map}, we can estimate the migration rate for different sets of parameters ($K_{\rm w}, P_\nu$). According to \citet{suzuki_etal10}, a possible value of $K_{\rm w}$ would be $\sim 100$; however, further investigation would be required by global magnetohydrodynamics simulations that cover \revise{enough grid points in the vertical direction} to constrain the range.
Our results depend strongly on the assumption that $K_{\rm w}$ is constant with time and radius. As discussed by \citet{suzuki_etal10}, gravitational energy is released by gas accretion, and a part of this energy is used to drive winds. Since $C_{\rm w}$ is proportional to the kinetic energy of the winds, while $\alpha$ is proportional to the accretion rate, $K_{\rm w} = {\rm const}$ is a natural assumption\footnote{
A non-uniform $K_{\rm w}$ may arise in the case of strong disk winds, for example, which would cause an inner cavity. A dead zone in the disk would likewise lead to non-uniformity (\citealt{suzuki_etal10}). These two cases are not considered here, but we note that the latter gives rise to a density profile that is very similar to the case without a dead zone.
}. We note that although we assumed that gas blown out of the disk surface escapes from the disk, some material may return to the disk. Stellar winds can push away the lifted-up gas \citep{suzuki_etal10}; however, if a large amount of gas returns to the disk, the surface density slope would be shallower than in Fig.~\ref{fig:map}(a). Moreover, we assumed a weak vertical magnetic field ($\beta_z \gtrsim 10^4$; $\beta_z$ is the vertical component of plasma $\beta$ at the midplane).
\revise{As discussed above, our model uses the same thermal profile for the disk as in OMG15. Clearly, this is a simplification that we adopted to compare the results to those of OMG15 more directly. However, we checked that our main results would hold with a more realistic temperature profile (\citealt{bitsch_etal15}, B. Bitsch, private communication). Specifically, we found that while the temperature and scale height are lower than for our model for $\alpha = 10^{-4}$, the changes in the migration maps remain limited compared to those shown in Fig.~\ref{fig:map}. In particular, we confirm that type I migration is still suppressed for some value of $P_\nu$ for $K_{\rm w}<100$. Furthermore, if a density gap is opened by planets, the formulae we used would not be valid. While in our disk model only massive super-Earths $(\gtrsim 20 M_\oplus)$ would open a gap, in the disk with a more realistic temperature profile (i.e., colder), even lower-mass planets $(\gtrsim 2 M_\oplus)$ open a gap. However, in this case the planets would migrate in the type II regime, which, for the value of $\alpha = 10^{-4}$ that we assumed, would result in a slower migration rate than in our disk model. Thus the results we present in the next section concerning the effects of a reduced migration speed are conservative in the sense that the migration speed of the most massive planets would be even lower in a disk with a more realistic temperature model.}
\section{In situ formation of close-in super-Earths}
\label{sec:n-body}
We now perform \textit{N}-body simulations of the formation of close-in super-Earths in disks that evolve through disk winds. The simulation model is the same as that of OMG15, except for the gas disk model. Following a power-law distribution, 250 embryos with a mass of $0.1 M_\oplus$ and 1250 planetesimals with a mass of $0.02 M_\oplus$ were distributed between 0.1 and 1\,au. The total mass in the system was set to $50 M_\oplus$. \rev{This high solid-to-gas mass ratio can be explained by invoking the radial drift of dust and pebbles into the inner part of the disk, but we place ourselves here at a stage where most of the dust has already been converted into large bodies (embryos and planetesimals).} Planetesimals suffer aerodynamical gas drag \revise{assuming a physical size of 50 km in radius} \citep{adachi_etal76}, while planets with more than roughly $0.1 M_\oplus$ undergo the tidal damping of eccentricities, inclinations, and semimajor axes (see \citealt{ogihara_etal14} and OKIS15 for each formula).
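The initial solid distribution can be reproduced schematically as follows (Python); the power-law index of the solid surface density, taken here as $-3/2$, is an assumption made for illustration.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

n_emb, m_emb = 250, 0.1      # embryos [Earth masses]
n_pl, m_pl = 1250, 0.02      # planetesimals [Earth masses]
r_in, r_out = 0.1, 1.0       # au

def sample_a(n, p=-1.5):
    # Draw semimajor axes so the solid surface density scales as r^p:
    # number per annulus ~ r^(p+1), hence the CDF ~ r^(p+2).
    u = rng.random(n)
    lo, hi = r_in**(p + 2), r_out**(p + 2)
    return (lo + u * (hi - lo))**(1.0 / (p + 2))

a_emb, a_pl = sample_a(n_emb), sample_a(n_pl)
total = n_emb * m_emb + n_pl * m_pl
print(f"total solid mass = {total:.0f} M_earth")   # 50, as in the text
\end{verbatim}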
To show the effects of disk winds in a disk model, we assumed that disk winds are relatively strong ($K_{\rm w}=40$), where $\alpha = 10^{-4}$ and $C_{\rm w} = 2.5 \times 10^{-6}$ were used. The evolution of the gas surface density is shown by solid lines in Fig.~\ref{fig:r-sigma}. In this disk, we see from Fig.~\ref{fig:map} that type I migration can be more than ten times slower than predicted by TTW02, or even reversed, for $0.1\lesssim P_\nu\lesssim1$. This range roughly corresponds to $M \sim 0.1-1 M_\oplus$ between $r = 0.1$ and $1 {\rm au}$. We assumed a rapid disk dispersal ($\sim 0.1 {\rm Myr}$) after a typical disk lifetime of $3 {\rm Myr}$ to be consistent with observations.
\begin{figure}
\resizebox{0.9 \hsize}{!}{\includegraphics{t_a_run158.eps}}
\caption{Time evolution of planets for a typical run.
The filled circles connected with solid lines represent the sizes of planets. The smallest circles represent $0.2$ Earth-mass embryos, while the largest circle represents a 20 Earth-mass planet. The color of the lines indicates the eccentricity (color bar).}
\label{fig:t-a}
\end{figure}
We performed ten runs with different initial positions of solid bodies; a typical run is shown in Fig.~\ref{fig:t-a}. As seen in OMG15, the growth of embryos is quite rapid. However, we find that the migration speed is significantly slower than in the fiducial model (model~1) in OMG15 (see Fig.~2 in OMG15). The actual migration timescale of massive planets $(\sim 5 M_\oplus)$ is $\gtrsim 0.1 {\rm Myr}$ between $t = 0.01$ and 0.1 Myr. This migration rate is about a few to ten times slower than that predicted by TTW02. At $t \simeq 0.2 {\rm Myr}$, planets are captured in mutual mean motion resonances, forming a long resonance chain (nine bodies). The chain undergoes orbital instability at about 4 Myr, after gas dispersal, leading to mutual collisions and a relatively separated system of three planets out of resonances. Compared with the results of the slow-migration case (model~3) in OMG15, the migration speed here is faster, and hence there are fewer planets in the resonant chain. However, the final phase of the planet formation process, in which planets undergo close encounters and collisions after gas dispersal, is quite similar.
\begin{figure}
\resizebox{0.8 \hsize}{!}{\includegraphics{comparison.eps}}
\caption{Comparison of cumulative period ratio distributions and eccentricity distributions. Thin solid lines show observed distributions of confirmed close-in super-Earths as of June 2015 (341 systems with 854 planets). Thick solid lines indicate results of simulations including the effects of disk winds, while thick dashed lines show the previous results for model~1 in OMG15. As discussed in OMG15, observed eccentricities can be overestimated (e.g., \citealt{shen_turner08}; \citealt{zakamska_etal11}). The thin dotted line shows the eccentricity distribution, in which each eccentricity is assumed to be $e - \sigma$. \revise{Here, $\sigma$ is the estimated error}.
}
\label{fig:p-n}
\end{figure}
Figure~\ref{fig:p-n} compares the results of simulations with the observed distributions of period ratios of adjacent pairs and eccentricities. For reference, the results of a fiducial model (model~1) in OMG15 are also plotted. We find that the period ratio distribution of simulations including disk winds matches the observations much better than that of model~1 in OMG15. The eccentricity distribution also matches the observations well. We acknowledge, however, that the observed mass distribution is very poorly matched by our simulations; the averaged slope of solid surface density in our simulation, which is deduced from the distribution of the final planets, is $\sim -3$, which is significantly steeper than the averaged slope of close-in super-Earths ($\simeq -1.5$).
This is presumably because we assumed an initial distribution of solids that is confined to a region between 0.1 and 1\,au, so the slope would be improved if planets with $M \gtrsim M_\oplus$ (in this case $\Gamma < 0$ at 1\,au) migrated from outside 1\,au.
Here we briefly discuss a successful scenario of in situ formation of close-in super-Earths. According to the results of \textit{N}-body simulations presented in this and previous papers, in a successful model planets are captured in a resonance chain in a disk and then undergo orbit crossings and collisions during disk dissipation. On the other hand, unsuccessful models include the rapid migration case (e.g., model~1 in OMG15) and the no migration case (e.g., model~4 in OMG15). In the former case, planets form a compact resonance chain with a small number of planets ($N \simeq 5$). The small number of planets in the chain prevents close encounters after gas dispersal, leading to mismatches in the period ratio (too compact) and eccentricity (too low). In the latter case, planets undergo an overly violent instability because they are not in a resonant chain, which also results in mismatches to observations (orbits too widely separated and eccentricities too high).
To realize the successful scenario described above, the number of planets in a resonance chain should be large enough to trigger orbit crossings during disk dissipation. According to a study on the orbital stability of a resonance chain \citep{matsumoto_etal12}, there should be more than five to ten planets in a chain. It is difficult to precisely assess the conditions for the formation of a resonant chain with $N > 5-10$; however, results of \textit{N}-body simulations imply that type I migration should be reduced by a factor of about ten from that predicted by the linear theory. According to Fig.~\ref{fig:map}, this condition corresponds to $K_{\rm w} \lesssim 100$ and $0.1 \lesssim P_\nu \lesssim 1$.
\section{Summary}
\label{sec:discussion}
We computed the surface density profile of disks affected by winds of various strengths and the resulting type I migration rates. We confirmed that type I migration can be slowed down in the whole close-in region $(r < 1 {\rm au})$ if the wind is sufficiently strong. Using the migration map in Fig.~\ref{fig:map}, the migration rate for different sets of parameters can be estimated without the need for additional calculations. We also performed \textit{N}-body simulations of the formation of close-in super-Earths that included the effects of disk winds. For relatively strong disk winds, we demonstrated that type I migration is significantly slowed down. This is the first simulation in which the observed statistical orbital distributions of close-in super-Earths are reproduced by \textit{N}-body simulations without applying an artificial reduction of type I migration.
\begin{acknowledgements}
We thank the anonymous referee for helpful comments and T. Suzuki for valuable discussions. This work was supported by ANR, project number ANR-13-BS05-0003-01, projet MOJO (Modeling the Origin of JOvian planets).
\end{acknowledgements}
\section{Introduction}\label{intro}
Let $p$ be a fixed odd prime number. We write $\mathbb{Q}_\mathrm{cyc}$ for the cyclotomic $\mathbb{Z}_p$-extension of $\mathbb{Q}$ and let $\Gamma$ denote the Galois group $\mathrm{Gal}(\mathbb{Q}_\mathrm{cyc}/\mathbb{Q})$. The Iwasawa algebra $\Lambda=\mathbb{Z}_p[[\Gamma]]$ is defined to be $\varprojlim \mathbb{Z}_p[\Gamma/\Gamma^{p^n}]$, where the connecting maps are projections. After fixing a topological generator $\gamma$ of $\Gamma$, there is an isomorphism of rings $\Lambda\cong\mathbb{Z}_p[[X]]$, sending $\gamma$ to $X+1$.
Let $E/\mathbb{Q}$ be an elliptic curve and write $\mathrm{Sel}_0(E/\mathbb{Q}_\mathrm{cyc})$ for the fine Selmer group of $E$ over $\mathbb{Q}_\mathrm{cyc}$ as defined in \cite{CS} (whose precise definition is reviewed in \eqref{eq:fine} below). It has been shown by Kato in \cite{kato} that $\mathrm{Sel}_0(E/\mathbb{Q}_\mathrm{cyc})^\vee$ is a finitely generated $\Lambda$-torsion module. Conjecture A in \cite{CS} further predicts that it should be a finitely generated $\mathbb{Z}_p$-module, which is equivalent to saying that its $\mu$-invariant is zero. Examples validating this conjecture can be found in a recent work of Kundu and the second named author \cite{KS}.
For an integer $n\ge1$, we write $$\Phi_n=\frac{(1+X)^{p^n}-1}{(1+X)^{p^{n-1}}-1}\in\Lambda$$ for the $p^n$-th cyclotomic polynomial in $1+X$. Let $K_n$ denote the unique sub-extension of $\mathbb{Q}_\mathrm{cyc}$ such that $[K_n:\mathbb{Q}]=p^n$. Define
\[
e_n=\frac{\mathrm{rank} E(K_n)-\mathrm{rank} E(K_{n-1})}{p^{n-1}(p-1)}.
\]
When $n=0$, we define $\Phi_0=X$ and $e_0=\mathrm{rank} E(\mathbb{Q})$.
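For a toy illustration with $p=3$ and $n=1$ (the ranks below are hypothetical and chosen only to exercise the definitions), one has
\[
\Phi_1=\frac{(1+X)^3-1}{X}=X^2+3X+3,
\]
an Eisenstein, hence irreducible distinguished, polynomial; if, say, $\mathrm{rank}\, E(\mathbb{Q})=1$ and $\mathrm{rank}\, E(K_1)=5$, then $e_1=(5-1)/(3^0\cdot 2)=2$, so the factor $\Phi_1^{e_1-1}=\Phi_1$ would appear on the right-hand side of \eqref{Gr}.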
We recall from \cite[Problem~0.7]{KP} that the following problem was posed by Greenberg:
\begin{equation}\label{Gr}\tag{Gr}
\mathrm{Char}_\Lambda \mathrm{Sel}_0(E/\mathbb{Q}_\mathrm{cyc})^\vee\stackrel{?}{=}\Big(\prod_{{e_n\ge 1},{n\ge0}}\Phi_n^{e_n-1}\Big).
\end{equation}
Here, $\mathrm{Char}_\Lambda M$ denotes the $\Lambda$-characteristic ideal of a finitely generated torsion $\Lambda$-module $M$. In particular, if \eqref{Gr} holds, then Conjecture A of \cite{CS} holds as well.
We now turn our attention to the case where $E$ has supersingular reduction at $p$. The classical $p$-adic $L$-functions attached to $E$ are $p$-adic power series with unbounded denominators (in particular, they are not elements of $\Lambda$). In \cite{pollack}, {under the hypothesis that $a_p(E)=0$,} Pollack introduced the so-called plus and minus $p$-adic $L$-functions $L_p^+(E)$ and $L_p^-(E)$, which are non-zero elements of $\Lambda$, interpolating complex $L$-values of $E$ twisted by Dirichlet characters factoring through $\Gamma$. They can be regarded as the analytic analogues of certain cotorsion Selmer groups $\mathrm{Sel}^\pm(E/\mathbb{Q}_\mathrm{cyc})$ defined by Kobayashi \cite{kobayashi}. In fact, Kobayashi formulated the following main conjecture
\begin{equation}
\mathrm{Char}_\Lambda \mathrm{Sel}^\pm(E/\mathbb{Q}_\mathrm{cyc})^\vee\stackrel{?}{=}\left(L_p^\pm(E)\right).
\label{Ko}\tag{Ko}
\end{equation}
{When $a_p(E)\ne0$, Sprung \cite{sprung} generalized the works of Pollack and Kobayashi by introducing the sharp and flat $p$-adic $L$-functions $L_p^\sharp(E)$ and $L_p^\flat(E)$ as well as the corresponding Selmer groups $\mathrm{Sel}^{\sharp}(E/\mathbb{Q}_\mathrm{cyc})$ and $\mathrm{Sel}^{\flat}(E/\mathbb{Q}_\mathrm{cyc})$. He showed that there exists $\star\in \{\sharp,\flat\}$ such that $L_p^\star(E)\ne 0$ and that $\mathrm{Sel}^{\star}(E/\mathbb{Q}_\mathrm{cyc})^\vee$ is $\Lambda$-torsion (see Proposition~6.14 and Theorem~7.14 of \cite{sprung}). Moreover, he formulated the analogue of \eqref{Ko}: If $\star\in \{\sharp,\flat\}$ is such that $L_p^\star(E)\ne 0$, then
\begin{equation}
\mathrm{Char}_\Lambda \mathrm{Sel}^{\star}(E/\mathbb{Q}_\mathrm{cyc})^\vee\stackrel{?}{=}\left(L_p^{\star}(E)\right).
\label{Sp}\tag{Sp}
\end{equation}
Furthermore, by \cite[Theorem~7.4]{kobayashi} and \cite[discussion on P.1505]{sprung} respectively, \eqref{Ko} and \eqref{Sp} are equivalent to Kato's main conjecture in \cite{kato},} which can be expressed as
\begin{equation}\label{Ka}\tag{Ka}
\mathrm{Char}_\Lambda \mathrm{Sel}_0(E/\mathbb{Q}_\mathrm{cyc})^\vee\stackrel{?}{=}\mathrm{Char}_\Lambda H^1_{\mathrm{Iw}}(\mathbb{Q},T)/\mathcal{Z},
\end{equation}
where $T$ is the $p$-adic Tate module of $E$, $H^1_{\mathrm{Iw}}(\mathbb{Q},T)$ is the inverse limit of certain global Galois cohomological groups over $K_n$ and $\mathcal{Z}$ is the $\Lambda$-module generated by certain zeta elements (we will review the definition of these objects in the main part of the article).
Kato showed that there exists an integer $n$ such that
\[
\mathrm{Char}_\Lambda \mathrm{Sel}_0(E/\mathbb{Q}_\mathrm{cyc})^\vee\supset p^n\mathrm{Char}_\Lambda H^1_{\mathrm{Iw}}(\mathbb{Q},T)/\mathcal{Z}
\]
using the theory of Euler systems. Furthermore, if the Galois representation $G_\mathbb{Q}\rightarrow \mathrm{GL}_{\mathbb{Z}_p}(T)$ is surjective, then we may take $n=0$. In other words, the inclusion $\supset$ holds in \eqref{Ka}. This has the consequence that {one inclusion in the main conjectures \eqref{Ko} and \eqref{Sp} also holds, namely:
\begin{equation}\label{eq:1sideIMC}
\left(L_p^\star(E)\right)\subset \mathrm{Char}_{\Lambda}\mathrm{Sel}^\star(E/\mathbb{Q}_\mathrm{cyc})^\vee,
\end{equation}
where $\star\in\{+,-\}$ or $\{\sharp,\flat\}$, depending on whether $a_p(E)=0$ or $a_p(E)\ne0$.}
When $E$ has complex multiplication {(in which case $a_p(E)$ is always $0$)}, Pollack and Rubin \cite{PR} showed that \eqref{Ko} holds. Consequently, \eqref{Ka} holds as well. In the non-CM case, recent progress on these conjectures has been made by Wan \cite{wan} {and Sprung \cite{sprung2}}.
It follows from their definitions that $\mathrm{Sel}_0(E/\mathbb{Q}_\mathrm{cyc})$ is a subgroup of both plus and minus {(or sharp and flat)} Selmer groups. In particular, we have the inclusions
\begin{equation}
\mathrm{Char}_{\Lambda}\mathrm{Sel}^\star(E/\mathbb{Q}_\mathrm{cyc})^\vee\subset\mathrm{Char}_{\Lambda}\mathrm{Sel}_0(E/\mathbb{Q}_\mathrm{cyc})^\vee .
\label{eq:finepm}
\end{equation}
Pollack has written a MAGMA program which computes numerically the $p$-adic $L$-functions $L_p^\star(E)$ for a given $E$ (see \url{http://math.bu.edu/people/rpollack/Data/data.html})\footnote{Even though Pollack's algorithm was written before Sprung's $p$-adic $L$-functions $L_p^{\sharp/\flat}(E)$ were defined, it in fact computes the Iwasawa invariants considered by Perrin-Riou in \cite{P-R} when $a_3(E)=\pm 3$. These in turn give the Iwasawa invariants of $L_p^{\sharp/\flat}(E)$ (see \cite[\S5]{sprung-crelles}). We thank Robert Pollack and Florian Sprung for explaining this to us.}. One can observe that the $\mu$-invariants of $L_p^\star(E)$ turn out to be zero in all the examples that have been considered by Pollack. Therefore, on combining \eqref{eq:1sideIMC} and \eqref{eq:finepm}, we deduce that the $\mu$-invariant of $\mathrm{Char}_{\Lambda}\mathrm{Sel}_0(E/\mathbb{Q}_\mathrm{cyc})^\vee$ is also zero. In particular, this gives evidence towards the validity of Conjecture~A in \cite{CS}. In this article, we are interested in the following question:
\begin{question}\label{Q}
What can \cite[Conjecture A]{CS} tell us about the $\mu$-invariants of $L_p^\pm(E)$ and $L_p^{\sharp/\flat}(E)$?
\end{question}
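We remark in passing that the numerical observation just cited amounts to reading Iwasawa invariants off the coefficients of a power series: by the Weierstrass preparation theorem, a non-zero $f=\sum_i c_iX^i\in\Lambda$ has $\mu(f)=\min_i v_p(c_i)$ and $\lambda(f)$ equal to the first index attaining this minimum. The following minimal sketch (plain Python, not Pollack's MAGMA program; the function names are ours, and we assume the coefficients are known exactly as integers rather than to finite $p$-adic precision) illustrates the computation.
\begin{verbatim}
def vp(n, p):
    # p-adic valuation of a non-zero integer n
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def iwasawa_invariants(coeffs, p):
    # coeffs = [c_0, c_1, ...], the first coefficients of f in Z_p[[X]];
    # mu = minimal valuation of the coefficients,
    # lambda = first index attaining that minimum
    vals = [vp(c, p) if c != 0 else float('inf') for c in coeffs]
    mu = min(vals)
    lam = vals.index(mu)
    return mu, lam

# Example: f = 9 + 3X^2 + 3X^3 = 3(3 + X^2 + X^3) has mu = 1 and
# lambda = 2 for p = 3, since f/3 reduces to X^2(1+X) mod 3.
print(iwasawa_invariants([9, 0, 3, 3], 3))   # prints (1, 2)
\end{verbatim}
This presumes that enough coefficients are inspected for the minimum to have stabilized, which is guaranteed once the index exceeds $\lambda(f)$.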
In \cite[Problem~3.2]{KP}, {under the assumption that $a_p(E)=0$,} the following problem was posed by Kurihara and Pollack:
\begin{equation}\label{KP}\tag{KP}
\gcd\left(L_p^+(E),L_p^-(E)\right)\stackrel{?}{=}X^{e_0}\prod_{{e_n\ge 1},{n\ge1}}\Phi_n^{e_n-1}.
\end{equation}
Here, we express the greatest common divisor of two elements in $\Lambda$ in the form $p^\mu h\in\Lambda$, where $h$ is a distinguished polynomial.
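For instance, under this convention $\gcd\left(p^2X,\ pX(X^2+p)\right)=pX$, so that $\mu=1$ and $h=X$; this is well defined since $\Lambda$ is a unique factorization domain whose prime elements are, up to units, $p$ and the irreducible distinguished polynomials.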
The two problems \eqref{Gr} and \eqref{KP} are intimately linked. Indeed, Kurihara and Pollack showed in \cite[\S3]{KP} that under certain hypotheses, they are equivalent to each other. Furthermore, they have found several numerical examples where the answers to both \eqref{Gr} and \eqref{KP} are affirmative.
We observe that the problems \eqref{Gr} and \eqref{KP} suggest that the following equality holds
\begin{equation}
\left(\gcd\left(L_p^+(E),L_p^-(E)\right)\right)\stackrel{?}{=}X^{\delta_E}\mathrm{Char}_{\Lambda}\mathrm{Sel}_0(E/\mathbb{Q}_\mathrm{cyc})^\vee,
\label{eq:Xgcd}
\end{equation}
where $\delta_E\in\{0,1\}$. The appearance of the term $X^\delta$ on the right-hand side originates from the discrepancy\footnote{This discrepancy seems to be related to the fact that
\[
E^+(\mathbb{Q}_{\mathrm{cyc},\mathfrak p})\bigcap E^-(\mathbb{Q}_{\mathrm{cyc},\mathfrak p})=E(\mathbb{Q}_p),
\]
where $E^\pm(\mathbb{Q}_{\mathrm{cyc},\mathfrak p})$ are Kobayashi's plus and minus norm groups which are used to define $\mathrm{Sel}^\pm(E/\mathbb{Q}_\mathrm{cyc})$ (see \S\ref{S:selmer} for more details). This suggests that the plus and minus Selmer groups might capture more information on the Mordell-Weil group $E(\mathbb{Q})$ than the fine Selmer group.} of the exponents of $X$ in \eqref{Gr} and \eqref{KP}.
The main result of the present article is the following theorem which gives evidence towards \eqref{eq:Xgcd}. It can also be regarded as partial evidence towards \eqref{Gr} and \eqref{KP}.
\begin{theorem}\label{thm:main}
Suppose that $E/\mathbb{Q}$ is an elliptic curve with supersingular reduction at $p$. {In the case where $a_p(E)\ne0$, we assume that both $L_p^\sharp(E)$ and $L_p^\flat(E)$ are non-zero.} Furthermore, assume that \eqref{Ka} holds (in particular, \eqref{Ko} and {\eqref{Sp} also hold}). Let $f\in \Lambda$ be an irreducible element that is coprime to $X$. If $a_p(E)=0$, then $f$ divides $\gcd\left(L_p^+(E),L_p^-(E)\right)$ if and only if $f$ divides $\mathrm{Char}_{\Lambda}\mathrm{Sel}_0(E/\mathbb{Q}_\mathrm{cyc})^\vee$. {Likewise, if $a_p(E)\ne0$, then $f$ divides $\gcd\left(L_p^\sharp(E),L_p^\flat(E)\right)$ if and only if $f$ divides $\mathrm{Char}_{\Lambda}\mathrm{Sel}_0(E/\mathbb{Q}_\mathrm{cyc})^\vee$. }
\end{theorem}
In particular, Theorem~\ref{thm:main} gives the following answer to Question~\ref{Q} on taking $f=p$: If \eqref{Ka} holds, then \cite[Conjecture~A]{CS} (which says that $p$ does not divide $\mathrm{Char}_{\Lambda}\mathrm{Sel}_0(E/\mathbb{Q}_\mathrm{cyc})^\vee$) is equivalent to $p\nmid \gcd\left(L_p^+(E),L_p^-(E)\right)$ {(or $p\nmid \gcd\left(L_p^\sharp(E),L_p^\flat(E)\right)$)}, which is the same as saying that the $\mu$-invariant of at least one of the two $p$-adic $L$-functions $L_p^\pm(E)$ {(or $L_p^{\sharp/\flat}(E)$)} is zero.
\subsection*{Outline of the article} We review general definitions and preliminary results on Iwasawa modules and Selmer groups in Section~\ref{sec:notation}. The proof of Theorem~\ref{thm:main} is then given in Section~\ref{S:proof}. At the end of the article (Section \ref{S:ex}), we discuss some numerical examples.
\subsection*{Acknowledgment}
This work was initiated during the first named author's visit to PIMS in March 2020. He would like to thank PIMS for the hospitality. We would like to thank Filippo Nuccio, Robert Pollack, Florian Sprung and Chris Wuthrich for helpful discussions during the preparation of this article.
Both authors also gratefully acknowledge support of their respective
NSERC Discovery Grants. {Finally, we thank the anonymous referee for very helpful comments and suggestions that led to many improvements to the presentation of the article.}
\section{Notation and preliminary results}\label{sec:notation}
\subsection{Generalities on $\Lambda$-modules}
Given a $\Lambda$-module $M$, we write $M^\vee$ for its Pontryagin dual
\[
\mathrm{Hom}_{\mathrm{conts}}(M,\mathbb{Q}_p/\mathbb{Z}_p).
\]
We let $M_\mathrm{tor}$ denote the maximal torsion submodule of $M$.
If $M$ is a finitely generated $\Lambda$-module, there is a pseudo-isomorphism of $\Lambda$-modules
\begin{equation}
M\sim \Lambda^{ r}\oplus \bigoplus_{i=1}^s \Lambda/p^{a_i}\oplus \bigoplus_{i=1}^t\Lambda/(F_i^{b_i}),
\label{eq:pseudo}
\end{equation}
where $r,s,t,a_i,b_i$ are non-negative integers and $F_i$ are irreducible distinguished polynomials. We define the $\mu$- and $\lambda$-invariants of $M$ by
\begin{align*}
\mu(M)&:=\sum_{i=1}^sa_i,\\
\lambda(M)&:=\sum_{i=1}^t\deg(F_i).
\end{align*}
In the case where $M$ is $\Lambda$-torsion, we have $r=0$ and the characteristic ideal of $M$ is defined to be
\[
\mathrm{Char}_\Lambda(M)=\left(p^{\mu(M)}\prod_{i=1}^t F_i^{b_i}\right)\Lambda.
\]
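As a simple illustration, the torsion module $M=\Lambda/(p^2)\oplus\Lambda/(X^2+p)$ (with $X^2+p$ irreducible by Eisenstein's criterion) has $\mu(M)=2$, $\lambda(M)=2$ and $\mathrm{Char}_\Lambda(M)=\left(p^2(X^2+p)\right)\Lambda$.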
Given any irreducible element $f\in\Lambda$, we write $M[f]$ for the $\Lambda$-submodule of $M$ defined by
\[
\{m\in M:f\cdot m=0\}.
\]
The following lemmas will be employed in our proof of Theorem~\ref{thm:main}.
\begin{lemma}\label{lem:coprime}
Let $M$ be a finitely generated $\Lambda$-module and $f\in\Lambda$ an irreducible element. Then, $M[f]$ is finite if and only if $f\nmid\mathrm{Char}_{\Lambda}(M_\mathrm{tor})$.
\end{lemma}
\begin{proof}
We can see from the pseudo-isomorphism \eqref{eq:pseudo} that $M[f]$ is finite if and only if $\left(\Lambda/(g^n)\right)[f]$ is finite for all $\Lambda/(g^n)$ that appear on the right-hand side of \eqref{eq:pseudo}. Therefore, without loss of generality, we may assume that $M=\Lambda/(g^n)$ for some irreducible element $g$ of $\Lambda$ and $n\ge1$.
It is clear that $M[f]=0$ if $\gcd(f,g)=1$ and $M[f]=f^{n-1}\Lambda/(f^n)$ if $f=g$, which is of infinite cardinality. Therefore, our lemma follows.
\end{proof}
\begin{lemma}\label{lemmas}
Let
\[
0\rightarrow A\rightarrow B\rightarrow C\rightarrow0
\]
be a short exact sequence of finitely generated $\Lambda$-modules and $f\in\Lambda$ an irreducible element.
\begin{enumerate}[(a)]
\item Suppose that $f\nmid \mathrm{Char}_\Lambda B_\mathrm{tor}$, then $f\nmid \mathrm{Char}_\Lambda A_\mathrm{tor}$.
\item Suppose that $f\nmid \mathrm{Char}_\Lambda A_\mathrm{tor}$ and $f\nmid\mathrm{Char}_\Lambda C_\mathrm{tor}$, then $f\nmid \mathrm{Char}_\Lambda B_\mathrm{tor}$.
\item Suppose that $A$ is a torsion $\Lambda$-module, then we have the equality
\[
\mathrm{Char}_\Lambda(A)\mathrm{Char}_\Lambda(C_\mathrm{tor})=\mathrm{Char}_\Lambda (B_\mathrm{tor}).
\]
\end{enumerate}
\end{lemma}
\begin{proof}
Part (a) follows from Lemma~\ref{lem:coprime} and the fact that $A[f]$ injects into $B[f]$. Part (b) follows from the exact sequence
\[
0\rightarrow A[f]\rightarrow B[f]\rightarrow C[f]\rightarrow \cdots
\]
and Lemma~\ref{lem:coprime}. Finally, part (c) follows from \cite[proof of Proposition~2.1]{HL}.
\end{proof}
\subsection{Galois cohomology and Selmer groups}\label{S:selmer}
As in the introduction, $T$ denotes the $p$-adic Tate module of $E$. Let $S$ be a finite set of primes of $\mathbb{Q}$ including the bad primes of $E$, the prime $p$ and the archimedean prime. Given a finite extension $F$ of $\mathbb{Q}$, we write $G_{F,S}$ for the Galois group of the maximal extension of $F$ that is unramified outside $S$. We define
\[
H^1_{\mathrm{Iw}}(\mathbb{Q},T)=\varprojlim H^1(G_{K_n,S},T),
\]
where $K_n$ is the unique sub-extension of $\mathbb{Q}_{\mathrm{cyc}}$ such that $[K_n:\mathbb{Q}]=p^n$ and the connecting maps in the inverse limit are given by corestrictions. It has been shown in \cite{kato} that there exists a so-called zeta element $z_\mathrm{Kato}\inH^1_{\mathrm{Iw}}(\mathbb{Q},T)$, which, when localized at $p$, interpolates complex $L$-values of $E$ twisted by Dirichlet characters factoring through $\Gamma$ under the dual exponential map of Bloch-Kato as defined in \cite{BK}. We define $\mathcal{Z}\subsetH^1_{\mathrm{Iw}}(\mathbb{Q},T)$ to be the $\Lambda$-submodule generated by $z_\mathrm{Kato}$.
Locally, we define
\[
H^1_{\mathrm{Iw}}(\mathbb{Q}_p,T)=\varprojlim H^1(K_{n,v_n},T),
\]
where $v_n$ denotes the unique prime of $K_n$ lying above $p$ and the connecting maps in the inverse limit are again given by corestrictions. {The restriction maps $H^1(K_n,T)\rightarrow H^1(K_{n,v_n},T)$ give}
\[
\mathrm{loc}_p:H^1_{\mathrm{Iw}}(\mathbb{Q},T)\hookrightarrow H^1_{\mathrm{Iw}}(\mathbb{Q}_p,T),
\]
where the injectivity follows from \cite[Theorem~7.3]{kobayashi} {and \cite[(3) on P.1504]{sprung}}.
Let us write $\mathcal{Z}_\mathrm{loc}$ for the image of $\mathcal{Z}$ under the localization map, that is
\[
\mathcal{Z}_\mathrm{loc}=\mathrm{loc}_p(\mathcal{Z}).
\]
We finish this section by defining the various Selmer groups of $E$ over $\mathbb{Q}_\mathrm{cyc}$ studied in this article. {Recall that if $K$ is a number field, the classical $p$-primary Selmer group of $E$ over $K$ is given by
\[
\mathrm{Sel}_{p^\infty}(E/K)= \ker\left(H^1(G_{K,S},E_{p^\infty})\rightarrow \bigoplus_{v\in S}J_v(K)\right),
\]
where $J_v(K)$ is defined to be
$\displaystyle
\bigoplus_{w|v}\frac{H^1(K_{w},E_{p^\infty})}{E(K_w)\otimes\mathbb{Q}_p/\mathbb{Z}_p}
$
(here, the direct sum runs over all places of $K$ above $v$).
The classical $p$-primary Selmer group of $E$ over $\mathbb{Q}_\mathrm{cyc}$ is given by
\[
\mathrm{Sel}_{p^\infty}(E/\mathbb{Q}_\mathrm{cyc})= \varinjlim_n \mathrm{Sel}_{p^\infty}(E/K_n),
\]
where the connecting maps are given by restrictions.}
The fine Selmer group of $E$ over $\mathbb{Q}_\mathrm{cyc}$ is given by
\begin{equation}
\mathrm{Sel}_0(E/\mathbb{Q}_\mathrm{cyc})= \ker\left(\mathrm{Sel}_{p^\infty}(E/\mathbb{Q}_\mathrm{cyc})\rightarrow H^1(\mathbb{Q}_{\mathrm{cyc},\mathfrak p},E_{p^\infty})\right),
\label{eq:fine}
\end{equation}
where $\mathfrak p$ denotes the unique prime of $\mathbb{Q}_\mathrm{cyc}$ above $p$ {(see \cite[(58) on P.828]{CS})}. {When $a_p(E)=0$,} Kobayashi's plus and minus Selmer groups are defined by
\[
\mathrm{Sel}^\pm(E/\mathbb{Q}_\mathrm{cyc})= \ker\left(\mathrm{Sel}_{p^\infty}(E/\mathbb{Q}_\mathrm{cyc})\rightarrow \frac{H^1(\mathbb{Q}_{\mathrm{cyc},\mathfrak p},E_{p^\infty})}{{E}^\pm(\mathbb{Q}_{\mathrm{cyc},\mathfrak p})\otimes\mathbb{Q}_p/\mathbb{Z}_p}\right),
\]
where $E^\pm(\mathbb{Q}_{\mathrm{cyc},\mathfrak p})$ are certain subgroups in $E$ defined by some ``jumping conditions'' (see \cite[Definition~1.1]{kobayashi}). {When $a_p(E)\ne0$, Sprung's sharp and flat Selmer groups are defined by
\[
\mathrm{Sel}^{\sharp/\flat}(E/\mathbb{Q}_\mathrm{cyc})= \ker\left(\mathrm{Sel}_{p^\infty}(E/\mathbb{Q}_\mathrm{cyc})\rightarrow \frac{H^1(\mathbb{Q}_{\mathrm{cyc},\mathfrak p},E_{p^\infty})}{{E}^{\sharp/\flat}(\mathbb{Q}_{\mathrm{cyc},\mathfrak p})\otimes\mathbb{Q}_p/\mathbb{Z}_p}\right),
\]
where $E^{\sharp/\flat}(\mathbb{Q}_{\mathrm{cyc},\mathfrak p})$ are given by the exact annihilators of certain Coleman maps (see \cite[Definition~7.9]{sprung}).
}
\section{Proof of Theorem~\ref{thm:main}}\label{S:proof}
{This section is dedicated to the proof of the main theorem of this article. We remark that some of the ingredients of the proof were also utilized in \cite[proof of Proposition~3.4]{KP}.}
Throughout this section, $f\in \Lambda$ is a fixed irreducible element that is coprime to $X$. {We write $(\circ,\bullet)=(+,-)$ or $(\sharp,\flat)$, depending on whether $a_p(E)=0$ or $a_p(E)\ne0$.} Suppose that $f$ does not divide {$\gcd(L_p^\circ(E),L_p^\bullet(E))$}. Then it does not divide $\mathrm{Char}_\Lambda\mathrm{Sel}_0(E/\mathbb{Q}_\mathrm{cyc})^\vee$ by \eqref{eq:finepm}. This gives one of the two implications of Theorem~\ref{thm:main}. The rest of this section will be dedicated to the proof of the opposite implication, which is less straightforward. From now on, we assume that
\begin{equation}
f\nmid\mathrm{Char}_\Lambda\mathrm{Sel}_0(E/\mathbb{Q}_\mathrm{cyc})^\vee.
\label{eq:hyp0}
\end{equation}
If we combine this with \eqref{Ka}, we have
\begin{equation}\label{eq:hyp}
f\nmid \mathrm{Char}_\Lambda H^1_{\mathrm{Iw}}(\mathbb{Q},T)/\mathcal{Z}.
\end{equation}
The following proposition is one of the key ingredients of the proof of Theorem~\ref{thm:main}.
\begin{proposition}\label{pro:key}
We have $$f\nmid \mathrm{Char}_\Lambda\left(H^1_{\mathrm{Iw}}(\mathbb{Q}_p,T)/\mathcal{Z}_\mathrm{loc}\right)_\mathrm{tor}.$$
\end{proposition}
\begin{proof}
The injectivity of $\mathrm{loc}_p$ gives the following short exact sequence
\[
0\rightarrow H^1_{\mathrm{Iw}}(\mathbb{Q},T)/\mathcal{Z}\rightarrow H^1_{\mathrm{Iw}}(\mathbb{Q}_p,T)/\mathcal{Z}_\mathrm{loc}\rightarrow H^1_{\mathrm{Iw}}(\mathbb{Q}_p,T)/\mathrm{loc}_p\left(H^1_{\mathrm{Iw}}(\mathbb{Q},T)\right)\rightarrow 0.
\]
Since the $\Lambda$-module $H^1_{\mathrm{Iw}}(\mathbb{Q},T)$ is of rank one (see \cite[Theorem~12.4]{kato}), the first term of the short exact sequence is $\Lambda$-torsion. Therefore, thanks to Lemma~\ref{lemmas}(c), it is enough to show that
\[
f\nmid \mathrm{Char}_\Lambda H^1_{\mathrm{Iw}}(\mathbb{Q},T)/\mathcal{Z}\quad \text{and}\quad f\nmid \mathrm{Char}_\Lambda\left(H^1_{\mathrm{Iw}}(\mathbb{Q}_p,T)/\mathrm{loc}_p\left(H^1_{\mathrm{Iw}}(\mathbb{Q},T)\right)\right)_\mathrm{tor}.
\]
The first indivisibility is a direct consequence of our hypothesis that both \eqref{Ka} and \eqref{eq:hyp0} hold. For the second indivisibility, we consider the Poitou-Tate exact sequence
\begin{equation}
0\rightarrow H^1_{\mathrm{Iw}}(\mathbb{Q},T)\stackrel{\mathrm{loc}_p}{\longrightarrow}H^1_{\mathrm{Iw}}(\mathbb{Q}_p,T)\rightarrow \mathrm{Sel}_{p^\infty}(E/\mathbb{Q}_\mathrm{cyc})^\vee\rightarrow \mathrm{Sel}_0(E/\mathbb{Q}_\mathrm{cyc})^\vee\rightarrow0
\label{eq:PT}
\end{equation}
(which is obtained by taking inverse limit in \cite[(7.18)]{kobayashi}). By \cite[Corollary~2.5]{win}, we have the equality
\[
\mathrm{Char}_\Lambda\mathrm{Sel}_{p^\infty}(E/\mathbb{Q}_\mathrm{cyc})^\vee_\mathrm{tor}=\mathrm{Char}_\Lambda\mathrm{Sel}_{0}(E/\mathbb{Q}_\mathrm{cyc})^\vee.
\]
Therefore, our hypothesis \eqref{eq:hyp0} tells us that
\[
f\nmid \mathrm{Char}_\Lambda\mathrm{Sel}_{p^\infty}(E/\mathbb{Q}_\mathrm{cyc})^\vee_\mathrm{tor}.
\]
We may therefore apply Lemma~\ref{lemmas}(a) to \eqref{eq:PT} to deduce that
\[
f\nmid\mathrm{Char}_\Lambda\left(H^1_{\mathrm{Iw}}(\mathbb{Q}_p,T)/\mathrm{loc}_p\left(H^1_{\mathrm{Iw}}(\mathbb{Q},T)\right)\right)_\mathrm{tor}
\]
as required.
\end{proof}
We recall from \cite[\S\S 8.5-8.6]{kobayashi} and {\cite[\S7]{sprung}} that there are two surjective $\Lambda$-homomorphisms
\[
\mathrm{Col}^{\circ/\bullet}:H^1_{\mathrm{Iw}}(\mathbb{Q}_p,T)\rightarrow \Lambda,
\]
which are called the plus and minus {(or sharp and flat) Coleman maps of $E$. Furthermore, \cite[Theorem~6.3]{kobayashi} and \cite[Definition~6.1]{sprung} tell us that
\begin{equation}
\label{eq:pm-Col-Lp}
L_p^{\circ/\bullet}(E)=\mathrm{Col}^{\circ/\bullet}\left(\mathrm{loc}_p \left(z_\mathrm{Kato}\right)\right).
\end{equation}}
(Note that we are taking $\eta$ to be the trivial character in the notation of op. cit.)
If we write
\begin{align*}
\widetilde{\col}:H^1_{\mathrm{Iw}}(\mathbb{Q}_p,T)&\rightarrow \Lambda^{\oplus 2}\\
z&\mapsto {\mathrm{Col}^\circ(z)\oplus\mathrm{Col}^\bullet(z)},
\end{align*}
then we have a short exact sequence of $\Lambda$-modules
\begin{equation}
0\rightarrow H^1_{\mathrm{Iw}}(\mathbb{Q}_p,T)\stackrel{\widetilde{\col}}{\longrightarrow}\Lambda^{\oplus 2}\rightarrow \mathbb{Z}_p\rightarrow 0
\label{eq:SES-KP}
\end{equation}
as given by \cite[Proposition~1.2]{KP} {and \cite[Proposition~4.7]{sprung}}.
If we combine \eqref{eq:pm-Col-Lp} and \eqref{eq:SES-KP}, we deduce the short exact sequence
\begin{equation}
0\rightarrow H^1_{\mathrm{Iw}}(\mathbb{Q}_p,T)/\mathcal{Z}_\mathrm{loc}\rightarrow \Lambda^{\oplus 2}/(L_p^\circ(E)\oplus L_p^\bullet(E))\Lambda\rightarrow \mathbb{Z}_p\rightarrow 0.
\end{equation}
But $\mathrm{Char}_\Lambda\mathbb{Z}_p=(X)$, which is coprime to $f$ by assumption. Hence, we deduce from Proposition~\ref{pro:key} and Lemma~\ref{lemmas}(b) that
\[
f\nmid \mathrm{Char}_\Lambda\left(\Lambda^{\oplus 2}/(L_p^\circ(E)\oplus L_p^\bullet(E))\Lambda\right)_\mathrm{tor}.
\]
In particular, $f\nmid \gcd(L_p^\circ(E),L_p^\bullet(E))$: indeed, if $f$ divided both $p$-adic $L$-functions, say $L_p^\circ(E)=fa$ and $L_p^\bullet(E)=fb$, then the class of $a\oplus b$ would generate a copy of $\Lambda/(f)$ inside the torsion submodule, contradicting Lemma~\ref{lem:coprime}. This concludes the proof of Theorem~\ref{thm:main}.
\section{Numerical examples}\label{S:ex}
We discuss the two elliptic curves studied in \cite[\S10]{wuhtrich}, namely, $37A1$ and $53A1$, both of which are of rank one over $\mathbb{Q}$ with $L(E/\mathbb{Q},1)=0$.
\subsection*{$E=37A1$}
According to \cite[Proposition~10.1]{wuhtrich}, the fine Selmer group of $E$ over $\mathbb{Q}_\mathrm{cyc}$ is finite for all primes $p<1000$. In particular,
\[
\mathrm{Char}_\Lambda \mathrm{Sel}_0(E/\mathbb{Q}_\mathrm{cyc})^\vee=\Lambda
\]
for these primes.
Note that $E$ has supersingular reduction at the primes $p=3,17,19$ with $a_3(E)=-3$ and $a_{17}(E)=a_{19}(E)=0$. Theorem~\ref{thm:main} tells us that if $f\in\Lambda$ is an irreducible element dividing $\gcd\left(L_p^\sharp(E),L_p^\flat(E)\right)$ (resp. $\gcd\left(L_p^+(E),L_p^-(E)\right)$) when $p=3$ (resp. $p=17$ or $19$), then $f$ has to be (up to a unit) equal to $X$. In fact, we can even work out the greatest common divisors explicitly in these cases.
Since $L(E/\mathbb{Q},1)=0$, it follows from the interpolation formulae of the $p$-adic $L$-functions given in \cite[Page~14980]{sprung} and \cite[Page~7]{kobayashi} that $X$ divides $L_p^{\sharp/\flat}(E)$ (when $p=3$) and $L_p^\pm(E)$ (when $p=17$ or $19$). According to Pollack's table \url{http://math.bu.edu/people/rpollack/Data/37A.p} (see also \cite[Example~7.12]{sprung-ANT} where the case $p=3$ is discussed), one of the two $p$-adic $L$-functions has $\lambda$-invariant equal to 1. This implies that
\[
\gcd\left(L_p^\sharp(E),L_p^\flat(E)\right)=X
\]
if $p=3$ and
\[
\gcd\left(L_p^+(E),L_p^-(E)\right)=X
\]
if $p=17$ or $19$. Note in particular that the equation \eqref{eq:Xgcd} holds for this curve when $p=17$ and $19$. Furthermore, the $\mu$-invariants of $\mathrm{Sel}_0(E/\mathbb{Q}_\mathrm{cyc})^\vee$, $\mathrm{Sel}_{p^\infty}(E/\mathbb{Q}_\mathrm{cyc})_\mathrm{tor}^\vee$, $\mathrm{Sel}^{\sharp/\flat}(E/\mathbb{Q}_\mathrm{cyc})^\vee$ (when $p=3$) and $\mathrm{Sel}^\pm(E/\mathbb{Q}_\mathrm{cyc})^\vee$ (when $p=17,19$) are all zero.
\subsection*{$E=53A1$} Once again, $E$ is supersingular at $p=3$ with $a_3(E)=-3$. Wuthrich showed that the fine Selmer group over $\mathbb{Q}_\mathrm{cyc}$ is finite when $p=3$ and Pollack's table \url{http://math.bu.edu/people/rpollack/Data/curves1-5000} tells us that $L_p^{\sharp/\flat}(E)=X$ (up to a unit). Therefore, we can deduce once more that
\begin{align*}
\mathrm{Char}_\Lambda \mathrm{Sel}_0(E/\mathbb{Q}_\mathrm{cyc})^\vee&=\Lambda,\\
\gcd\left(L_p^\sharp(E),L_p^\flat(E)\right)&=X,
\end{align*}
illustrating Theorem~\ref{thm:main}.
Note that $E$ has supersingular reduction at $p=5,11$ and $a_5(E)=a_{11}(E)=0$. Pollack's table tells us that $L_p^\pm(E)=X$ (up to a unit) in these cases. We may apply the argument in \cite[\S10]{wuhtrich} to deduce that the fine Selmer group of $E$ over $\mathbb{Q}_\mathrm{cyc}$ is finite. This again illustrates Theorem~\ref{thm:main} and gives examples where the equality \eqref{eq:Xgcd} holds. Furthermore, the $\mu$-invariants of $\mathrm{Sel}_0(E/\mathbb{Q}_\mathrm{cyc})^\vee$, $\mathrm{Sel}_{p^\infty}(E/\mathbb{Q}_\mathrm{cyc})_\mathrm{tor}^\vee$, $\mathrm{Sel}^{\sharp/\flat}(E/\mathbb{Q}_\mathrm{cyc})^\vee$ (when $p=3$) and $\mathrm{Sel}^\pm(E/\mathbb{Q}_\mathrm{cyc})^\vee$ (when $p=5,11$) vanish.
\bibliographystyle{amsalpha}
\section*{Appendix}
To show that each of our embedded 2-tori admits an analytic foliation, with closed leaves, that is everywhere transverse to the (nowhere vanishing) flow field \textit{X} it would suffice to prove that it always admits a closed, analytic one-form \(\lambda\) with integral periods such that \(\lambda (X) = \lambda_a X^a > 0\) everywhere on the given torus. The closure of \(\lambda\) ensures that, locally, it is expressible as \(\lambda = d\mu\) for some analytic function \(\mu\) the level curves of which locally define the leaves of the desired foliation. That these leaves all close, globally, is ensured by the integrality of the periods of \(\lambda\) whereas their transversality to \textit{X} corresponds simply to the condition that \(\lambda (X) > 0\).
The following proof that such a \(\lambda\) always exists is due to M.~Kontsevich who kindly provided it to us in response to a question about a somewhat related theorem of Kolmogorov's. Note that analyticity is not needed for some of the intermediate steps of Kontsevich's argument but that it will be `reinstated' during the final stage of the construction.
First choose a smooth Siegel curve \(\tilde{\Gamma}\) that is closed, non-self-intersecting and everywhere transverse to the flow of \textit{X}. The existence of such curves follows from a standard argument which is given, for example, in \cite{Cornfeld:1982} together with a discussion of some of their fundamental properties. The aim will be to construct an analytic foliation whose leaves are each homotopic to \(\tilde{\Gamma}\) (and transversal to \textit{X}). By translating \(\tilde{\Gamma}\) along the flow generated by \textit{X} one can produce a curve, homotopic to \(\tilde{\Gamma}\), that passes through any particular point of the given torus and that is, of course, also transversal to \textit{X}.
Any such Siegel curve, \(\Gamma\), can be systematically `thickened' to yield a smooth `ribbon', \(r_\Gamma\), diffeomorphic to \(\Gamma \times I_\Gamma \approx \mathbf{S}^1 \times I_\Gamma\) where \(I_\Gamma\) is an open interval. Coordinatize this ribbon by choosing an `angle' coordinate \(\theta_\Gamma\) along \(\Gamma\), with \(\theta_\Gamma \in [0, 2\pi)\), and letting \textit{t} be the flow parameter along the transversal flow generated by \textit{X}, with \(t \in I_\Gamma := (-\epsilon_\Gamma, \epsilon_\Gamma)\) for some \(\epsilon_\Gamma > 0\), taking \(t = 0\) to correspond to the given `source curve' \(\Gamma\).
Now define a \textit{smooth} one-form \(\alpha_\Gamma\) on the torus by setting \(\alpha_\Gamma = 0\) on the complement of the ribbon \(r_\Gamma\) but taking \(\alpha_\Gamma = d\mu_\Gamma\) within the ribbon where \(\mu_\Gamma\) is a smooth function of \textit{t} alone (i.e., independent of \(\theta_\Gamma\)) that smoothly and monotonically interpolates between the value 0 for \(t \in (-\epsilon_\Gamma, -\epsilon_\Gamma/2)\) and the value 1 for \(t \in (\epsilon_\Gamma/2, \epsilon_\Gamma)\) with derivative satisfying \(\frac{\partial}{\partial t} \mu_\Gamma \geq 0\) for \(t \in (-\epsilon_\Gamma, \epsilon_\Gamma)\) and \(\frac{\partial}{\partial t} \mu_{\Gamma} > 0\) for \(t \in (-\epsilon_\Gamma/2, \epsilon_\Gamma/2)\). The one-form \(\alpha_\Gamma\) so-constructed will be closed, have integral periods and satisfy \(\alpha_\Gamma (X) \geq 0\) everywhere on the chosen torus.
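For definiteness, one standard choice of such an interpolating function (smooth but, as noted, not analytic) is the mollifier step
\begin{equation*}
\mu_\Gamma(t)=\frac{\psi(t+\epsilon_\Gamma/2)}{\psi(t+\epsilon_\Gamma/2)+\psi(\epsilon_\Gamma/2-t)}, \qquad \psi(s)=\begin{cases} e^{-1/s}, & s>0\\ 0, & s\le 0,\end{cases}
\end{equation*}
which vanishes for \(t\le -\epsilon_\Gamma/2\), equals 1 for \(t\ge\epsilon_\Gamma/2\) and has \(\frac{\partial}{\partial t}\mu_\Gamma>0\) on \((-\epsilon_\Gamma/2,\epsilon_\Gamma/2)\), since \(\psi'>0\) wherever \(\psi>0\).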
In view of the compactness of the torus a finite collection, \(\lbrace r_{\Gamma_i}; i = 1, \ldots, k\rbrace\), of such ribbons, together with their associated closed one-forms, \(\lbrace\alpha_{\Gamma_i}; i = 1, \ldots, k\rbrace\) will suffice to cover the torus in such a way that
\begin{equation*}
\alpha := \sum_{i=1}^k \alpha_{\Gamma_i}
\end{equation*}
satisfies \(d\alpha = 0, \alpha (X) > 0\) everywhere and has integral periods (since each of the \(\alpha_{\Gamma_i}\) does). It will not however be analytic since none of the individual \(\alpha_{\Gamma_i}\)'s are more than smooth.
Taking, however, a Hodge decomposition of \(\alpha\) with respect to an analytic (Riemannian) metric on the torus will result in
\begin{equation*}
\alpha = h + d\sigma
\end{equation*}
where \textit{h} is harmonic and thus analytic but where the function \(\sigma\) is only smooth. The integral periods of \(\alpha\) will all be `carried' by \textit{h} since of course those of \(d\sigma\) all vanish. Now, however, since the condition \(\alpha (X) > 0\) is \textit{open} one can always preserve it by approximating \(\sigma\) with an analytic function \(\omega\). Thus defining
\begin{equation*}
\lambda = h + d\omega
\end{equation*}
one arrives at a closed, analytic one-form with integral periods that globally satisfies the transversality condition \(\lambda (X) > 0\) and thereby determines an analytic foliation of the torus of the type desired.
\section*{Acknowledgements}
The authors would especially like to thank the Universit\'{e} Paris VI, the Institut des Hautes \'{E}tudes Scientifique in Bures-sur-Yvette, France, the Albert Einstein Institute in Golm, Germany, the Erwin Schr\"{o}dinger Institute in Vienna, Austria, the Isaac Newton Institute in Cambridge, England, the Kavli Institute for Theoretical Physics in Santa Barbara, California, the Mathematical Sciences Research Institute in Berkeley, California and the Mittag-Leffler Institute in Djursholm, Sweden for their warm hospitality and generous support during visits over the years when portions of this research were carried out. We are particularly grateful to Maxim Kontsevich for providing the theorem quoted in the Appendix. We also thank Lars Andersson for valuable discussions.
\section{Analyticity of the candidate vector field}
\label{sec:analyticity}
Recall from Section~\ref{subsec:application-poincare} that one can define a Riemannian metric \({}^{(3)}\!g\) on the horizon manifold \textit{N} that satisfies \(\mathcal{L}_X \sqrt{\det{{}^{(3)}\!g}} = 0\). From the discussion in Section~\ref{sec:elementary} it is clear that this metric can always be chosen to be analytic so that in fact \((N, {}^{(3)}\!g)\) is a compact, analytic, Riemannian 3-manifold.
There is a canonical way of complexifying a compact, analytic Riemannian manifold such as \((N, {}^{(3)}\!g)\) through the introduction of its so-called Grauert tubes \cite{Burns:2003}. One identifies \textit{N} with the zero section of its tangent bundle \(TN\) and defines a map \(\ell :TN\to\mathbf{R}\) such that \(\ell (v)\) is the length of the tangent vector \(v\in TN\) relative to the Riemannian metric \({}^{(3)}\!g\). Then, for sufficiently small \(s > 0\), the manifold (`Grauert tube' of thickness \textit{s})
\begin{equation}\label{eq:702}
T^sN = \{ v\in TN \mid \ell (v) < s\}
\end{equation}
can be shown to carry a complex structure for which holomorphic coordinates \(\{ z^i\}\) can be defined in terms of analytic coordinates \(\{ x^i\}\) for \textit{N} by setting \(z^k = x^k + iy^k\) where \(y = y^k \frac{\partial}{\partial x^{k}}\) represents a vector in \(TN\). Analytic transformations between overlapping charts for \textit{N} extend to holomorphic transformations between corresponding charts for \(T^sN\) provided that, as we have assumed, \textit{N} is compact and \textit{s} is sufficiently small. For non-compact manifolds such a holomorphic thickening need not exist for any \textit{s}, no matter how small, and further restrictions upon the manifold are in general needed in order to define its Grauert tubes. When defined, Grauert tubes have an anti-holomorphic involution \(\sigma: T^sN\to T^sN\) given by \(v\mapsto -v\).
It will be convenient to define an auxiliary, analytic Riemannian metric, \(g_{\mathcal{H}}\), on each elementary region of interest \(\mathcal{H}\) by writing on \(\hat{\mathcal{H}} \approx \Sigma \times \mathbb{R}\),
\begin{equation}\label{eq:722}
\begin{split}
g_{\mathcal{H}} &= (g_{\mathcal{H}})_{ij} dx^i \otimes dx^j\\
&= dx^3 \otimes dx^3 + \mu_{ab} (x^1, x^2) dx^a \otimes dx^b
\end{split}
\end{equation}
and then, as before, identifying the slice at \(x^3\) with that at \(x^3 + s^\ast\) via the aforementioned analytic isometry of \((\Sigma, \mu)\). This metric is adapted to the chosen slicing of \(\mathcal{H}\) in that each \(x^3 = \mathrm{constant}\) slice is a totally geodesic submanifold of \((\mathcal{H}, g_{\mathcal{H}})\) and furthermore the integral curves of \(X = \frac{\partial}{\partial x^3}\), which is evidently a Killing field of \(g_{\mathcal{H}}\), coincide with the geodesics of \((\mathcal{H}, g_{\mathcal{H}})\) normal to the \(x^3 = \mathrm{constant}\) slices.
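To make this explicit, note that since \((g_{\mathcal{H}})_{33} = 1\), \((g_{\mathcal{H}})_{3a} = 0\) and \(\mu_{ab}\) is independent of \(x^3\), every Christoffel symbol of \(g_{\mathcal{H}}\) carrying the index 3 vanishes:
\begin{equation*}
\Gamma^3_{ij} = -\tfrac{1}{2}\partial_3 (g_{\mathcal{H}})_{ij} = 0, \qquad \Gamma^a_{3b} = \tfrac{1}{2}\mu^{ac}\partial_3\mu_{cb} = 0, \qquad \Gamma^i_{33} = 0.
\end{equation*}
Hence the second fundamental forms of the slices \(x^3 = \mathrm{constant}\) vanish (they are totally geodesic), the \(x^3\)-coordinate lines satisfy the geodesic equation, and \(X = \frac{\partial}{\partial x^3}\) is Killing since all metric components are \(x^3\)-independent.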
From the special properties of the metric \(g_{\mathcal{H}}\) and its geodesics, it is easy to see that if \(\{ x^a\mid a= 1, 2\}\) are normal coordinates for \((\Sigma, \mu)\) centered at a point \(q\in\Sigma\) (with, therefore, \(x^a(q) = 0\)) then, holding these constant along the flow of \textit{X} and, complementing them with the function \(x^3\), we get normal coordinates \(\{ x^i\} = \{ (x^a, x^3)\mid a = 1, 2\}\) defined on a tubular domain in \(\mathcal{H}\) centered on the orbit of \textit{X} through \textit{q}. By shifting \(x^3\) by an additive constant, one can of course arrange that the origin of these normal coordinates for this tubular domain lies at any chosen point along the orbit through \textit{q}. It follows from the aforementioned property of Grauert tubes that the functions
\begin{equation}\label{eq:703}
\{ z^k\} = \{ (z^k = x^k + iy^k) \mid (y^3)^2 + \mu_{ab} (x^1, x^2) y^a y^b < s\}
\end{equation}
will provide, for \textit{s} sufficiently small, holomorphic coordinates on a corresponding complex thickening of \(\mathcal{H}\) which we shall denote by \(T^s\mathcal{H}\).
In the application to follow, as already mentioned in the previous section, we shall set \(y^3 = 0\) and thus focus our attention on `thickenings' of \(\mathcal{H}\) of the restricted form \(T^s\Sigma\times\mathbf{S}^1\) which are foliated by curves of the type
\begin{equation}\label{eq:704}
\begin{split}
z^a(\lambda) &= x^a(\lambda) + iy^a(\lambda) = \mathring{x}^a + i\mathring{y}^a \\
&= \text{constant}, \\
z^3(\lambda) &= \mathring{x}^3 + \lambda, \quad y^3(\lambda) = 0,
\end{split}
\end{equation}
with
\begin{equation}\label{eq:705}
\mu_{ab} (\mathring{x}^1, \mathring{x}^2) \mathring{y}^a\mathring{y}^b < s.
\end{equation}
The closure \(\overline{T^s\Sigma\times\mathbf{S}^1}\approx \overline{T^s\Sigma}\times\mathbf{S}^1\), of this manifold results from attaching a boundary to \(T^s\Sigma\times\mathbf{S}^1\) characterized locally by \(\mu_{ab} (x^1,x^2)y^ay^b = s\) at all points \((x^1, x^2) \in \overline{\Sigma}\) and will also play a role in the considerations to follow.
Analytic tensor fields defined on \textit{N} can always, in view of its compactness, be lifted to define holomorphic fields on thickenings of the type \(T^s\mathcal{H}\) which, furthermore, extend continuously to the boundary of \(\overline{T^s\mathcal{H}}\) provided \(s > 0\) is taken to be sufficiently small. The needed limitation on the size of \textit{s} arises from considering the radii of convergence of the local series representations of these fields on the original analytic manifold \textit{N} but, since it is compact, a finite collection of such representations suffices to define the field globally on \textit{N} and hence a choice of \(s > 0\) is always possible so that a given field on \textit{N} extends holomorphically to \(T^sN\). Upon restricting such a field to the manifold \(T^s\Sigma\times\mathbf{S}^1\), as defined by setting \(y^3 = 0\), one obtains a corresponding field that is holomorphic with respect to the \(\{ z^a \mid a = 1, 2\}\), real analytic with respect to \(x^3\) and which extends continuously to the boundary of \(\overline{T^s\Sigma\times\mathbf{S}^1} \approx \overline{T^s\Sigma}\times\mathbf{S}^1\). From our point of view, the important thing is that such fields form a Banach space with respect to the \(C^0\) norm and hence a Cauchy sequence with respect to this norm will necessarily converge to a holomorphic field with respect to the \(\{ z^a\}\).
To carry out ribbon arguments on the associated complex thickenings over \(\mathcal{H}\), we need to lift the one form \(\omega_X\), defined in Section~\ref{subsec:connection}, to its holomorphic correspondent \(^{(c)}\omega_X\),
\begin{equation}\label{eq:706}
\begin{split}
^{(c)}\omega_X = &-\frac{1}{2}\,\,^{(c)}\mathring{\varphi}_{,t} (z^1,\ldots, z^3)(dx^3 + idy^3) \\
& - \frac{1}{2}\,\,^{(c)}\mathring{\beta}_{a,t} (z^1,\ldots, z^3)(dx^a + idy^a)
\end{split}
\end{equation}
with
\begin{equation}\label{eq:707}
\begin{split}
^{(c)}\mathring{\varphi}_{,t} (x^1,\ldots, x^3) & = \mathring{\varphi}_{,t} (x^1,\ldots, x^3) \\
^{(c)}\mathring{\beta}_{a,t} (x^1,\ldots, x^3) &= \mathring{\beta}_{a,t} (x^1,\ldots, x^3),
\end{split}
\end{equation}
defined on a suitable \(T^s\mathcal{H}\), where the components \(^{(c)}\mathring{\varphi}_{,t} (z^1,\ldots, z^3)\) and \(^{(c)} \mathring{\beta}_{a,t} (z^1,\ldots, z^3)\) each satisfy the Cauchy-Riemann equations (ensuring their holomorphicity)
\begin{equation}\label{eq:708}
\begin{split}
&\frac{\partial}{\partial\overline{z}^{k}}\,\, ^{(c)}\mathring{\varphi}_{,t} (z^1, \ldots, z^3) \\
&= \frac{1}{2} (\frac{\partial}{\partial x^{k}} + i\frac{\partial}{\partial y^{k}})^{(c)}\varphi_{,t} (x^1,\ldots, x^3, y^1,\ldots, y^3) \\
&= 0\qquad k = 1, \ldots, 3
\end{split}
\end{equation}
and similarly for \(\frac{\partial}{\partial\overline{z}^{k}}\,^{(c)}\mathring{\beta}_{a,t}(z^1,\ldots, z^3)\). As a holomorphic one-form \(^{(c)}\omega_X\) has exterior derivative
\begin{equation}\label{eq:709}
\begin{split}
d^{(c)}\omega_X &= - \frac{1}{2} \left[\frac{\partial\,^{(c)}\mathring{\varphi}_{,t}}{\partial z^{a}} - \frac{\partial\,^{(c)}\mathring{\beta}_{a,t}}{\partial z^{3}}\right] \\
&\cdot (dx^a + idy^a)\wedge (dx^3 + idy^3) \\
&- \frac{1}{2} \frac{\partial ^{(c)}\mathring{\beta}_{a,t}}{\partial z^{b}} (dx^b + idy^b)\wedge (dx^a+ idy^a)
\end{split}
\end{equation}
which, in view of the complexified Einstein equation (c.f., Equation (3.2) of Reference \cite{Moncrief:1983}),
\begin{equation}\label{eq:710}
\frac{\partial\,^{(c)}\mathring{\varphi}_{,t}(z)}{\partial z^{a}} - \frac{\partial\,^{(c)}\mathring{\beta}_{a,t}(z)}{\partial z^{3}} = 0,
\end{equation}
reduces to
\begin{equation}\label{eq:711}
d^{(c)} \omega_X = -\frac{1}{2} \frac{\partial^{(c)}\mathring{\beta}_{a,t}}{\partial z^{b}} dz^b \wedge dz^a .
\end{equation}
For our purposes, it is convenient to regard Equation (\ref{eq:711}) as an equation for an ordinary, complex-valued, one form defined on a real analytic manifold of 6 dimensions with local coordinates
\begin{equation}\label{eq:712}
\{ w^\mu \mid \mu = 1, \ldots, 6\} = \{ x^1, \ldots, x^3, y^1, \ldots , y^3\}
\end{equation}
and with \(^{(c)}\omega_X\) decomposed into its real and imaginary parts as
\begin{equation}\label{eq:713}
^{(c)}\omega_X = \{ (^{(c)}\omega_X^{(r)}(w))_\mu + i(^{(c)}\omega^{(i)}_X (w))_\mu\} dw^\mu .
\end{equation}
By appealing to the Cauchy-Riemann equations satisfied by the components, it is easy to show that the left hand side of Equation (\ref{eq:711}) is equal to the `ordinary' exterior derivative of \(^{(c)}\omega_X\), as rewritten above, with respect to its 6 real coordinates \(\{ w^\mu\} = \{ x^1,\ldots, x^3, y^1, \ldots, y^3\}\). The right hand side of this equation can of course be expressed in the analogous way --- as a complex-valued two-form in the same real variables.
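To spell out the step just invoked: if \(f\) is holomorphic, then as a complex-valued one-form in the six real coordinates \(\{w^\mu\}\) one has
\begin{equation*}
d\left(f\, dz^j\right) = \frac{\partial f}{\partial z^k}\, dz^k\wedge dz^j + \frac{\partial f}{\partial\overline{z}^k}\, d\overline{z}^k\wedge dz^j = \frac{\partial f}{\partial z^k}\, dz^k\wedge dz^j,
\end{equation*}
where \(dz^k = dx^k + idy^k\) and \(d\overline{z}^k = dx^k - idy^k\), and the second sum vanishes by the Cauchy-Riemann equations (\ref{eq:708}); applying this to each coefficient of \(^{(c)}\omega_X\) reproduces Equation (\ref{eq:709}).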
We are now in a position to apply Stokes's theorem much as in the previous section, the only real difference being that now the one-form in question, \(^{(c)}\omega_X\) is complex and its domain of definition is a 6-real-dimensional Grauert tube defined over \(\mathcal{H}\). We shall want to compare integrals of \(^{(c)}\omega_X\) over different curves of the type (\ref{eq:704}) extending from some `initial' slice having \(x^3 = \text{constant}\) to another such `final' slice. For convenience, let us always take one such curve (which will provide a reference `edge' for our comparison ribbon) to lie in the real section (i.e., to have \(y^a(\lambda) = y^3(\lambda) = 0\)) and choose normal coordinates for \((\Sigma,\mu)\) so that points on this reference curve have \(x^a(\lambda) = 0\). As in the previous section, we restrict the domain of definition of these normal coordinates to a geodesic ball relative to the metric \(\mu\). Let \textit{p} be the starting point of this curve so that, in the chosen coordinates \(\{ x^a(p) = y^a(p) = y^3(p) = 0, x^3(p) = \mathring{x}^3\}\).
Now suppose that \(q\in T^s\mathcal{H}\) is a point lying in the domain of the corresponding (complex) chart and having \(x^3(q) = \mathring{x}^3, y^3(q) = 0\), \(\mu_{ab} (x^1(q), x^2(q))y^a(q)y^b(q) < s\) where \(\{ x^1(q), x^2(q)\}\) represents a point in the aforementioned geodesic ball centered at \textit{p}. We want a canonical way of connecting \textit{q} to \textit{p} within the initial slice \(x^3 = \mathring{x}^3\) and, for this purpose, first connect \textit{q} to its projection in the real section with the `straight line'
\begin{equation}\label{eq:714}
\begin{split}
&x^i(\sigma) = x^i(q) = \text{constant} \\
&y^a(\sigma) = -\sigma y^a(q), \sigma\in [-1,0] \\
&y^3(\sigma) = 0.
\end{split}
\end{equation}
We complete the connection to \textit{p} along the geodesic
\begin{equation}\label{eq:715}
\begin{split}
&x^a(\sigma) = (1 - \sigma ) x^a(q), \,\, \sigma\in [0,1] \\
&x^3(\sigma) = x^3 (p) = x^3(q) = \mathring{x}^3 \\
&y^a(\sigma) = y^3(\sigma) = 0.
\end{split}
\end{equation}
This broken curve provides the starting end (at \(x^3 = \mathring{x}^3\)) for our comparison ribbon. We complete the specification of such a ribbon by letting each point on the starting end defined above, flow along the corresponding curve of the form (\ref{eq:704}) (i.e., holding \(x^a\) and \(y^a\) constant, \(y^3 = 0\) and letting \(x^3 = \mathring{x}^3 + \lambda\) vary until the final slice is reached). It is easy to see, from the special form of the right hand side of
Equation (\ref{eq:711}) that the corresponding two-form pulled back to such a ribbon vanishes identically and thus that Stokes's theorem applies to integrals of \(^{(c)}\omega_X\) over its edges and ends in essentially the same way that we discussed in Section V for ribbons confined to the real section. In other words, the integral of \(^{(c)}\omega_X\) over the edge beginning at \textit{q}, differs from that over the reference edge beginning at \textit{p} only by the (difference of) the integrals over the ribbon ends lying in the `initial' and `final' slices.
For our purposes, the contribution from the starting end, connecting \textit{q} and \textit{p}, will be fixed whereas the contribution from the `final' end (connecting the images of \textit{q} and \textit{p} induced on the final slice) will vary continuously but only over a compact set (determined by the endpoint of the edge through \textit{q} which necessarily lies in \(\overline{T^{s}\Sigma\times\mathbf{S}^{1}}\)). Thus, if as before, we designate the edges through \textit{p} and \textit{q} by \(\gamma\) and \(\gamma'\) respectively and the initial and final ribbon ends by $\sigma$ and $\sigma'$ respectively, then we obtain, as in the real setting,
\begin{equation}\label{eq:716}
\begin{split}
&\int\limits_{\gamma'}\,\, ^{(c)}\omega_X = \int\limits_\gamma\,\,^{(c)}\omega_X - (\int\limits_\sigma \,\,^{(c)}\omega_X - \int\limits_{\sigma'} \,\,^{(c)}\omega_X) \\
&\qquad = \int\limits_\gamma \,\, ^{(c)}\omega_X + \,\,^{(c)}\delta_{p,q} (x^3) \\
\end{split}
\end{equation}
with
\begin{equation}\label{eq:717}
\mid^{(c)}\delta_{p,q} (\rho) \mid \leq b < \infty \quad \forall \, \rho \in [\mathring{x}^3, \infty).
\end{equation}
The integrals of course are now in general complex in value but, given the bound above, we are in a position to apply ribbon arguments to the complex setting in complete parallel to those we gave in the real setting at the end of the last section. The arguments needed are so similar to those given previously that we shall only sketch their highlights below.
For any \textit{q} within the domain characterized above, we define a sequence
\begin{equation}\label{eq:718}
\begin{split}
&^{(c)}u_i (\mathring{x}^3, z^a(q)) \\
&= \frac{k}{2} \int\limits_{\mathring{x}^{3}}^{\mathring{x}^3 + is^*} d\rho \,\, \text{exp} [-\int\limits^\rho_{\mathring{x}^{3}} d\xi \frac{\mathring{\varphi}_{,t}}{2} (\xi, z^a(q))]
\end{split}
\end{equation}
of holomorphic extensions (to \(T^s\Sigma\times\mathbf{S}^1\)) of the approximations given earlier in Equation (\ref{eq:609}) for the normalizing function \textit{u}. Using ribbon arguments to compare the integrals \(\int_{\gamma'}\,\, ^{(c)}\omega_X\) with those for the reference curves \(\int_\gamma \,\, ^{(c)}\omega_X\) we derive, as before, a bound of the form
\begin{equation}\label{eq:719}
\begin{split}
&\mid\,\, ^{(c)}u_m(\mathring{x}^3, z^a(q)) - \,\,^{(c)}u_\ell (\mathring{x}^3, z^a(q))\mid \\
&\leq e^b\mid \,\,^{(c)} u_m (\mathring{x}^3, z^a(p)) - \,\,^{(c)} u_\ell (\mathring{x}^3, z^a(p))\mid \\
&= e^b \mid u_m (\mathring{x}^3, x^a(p)) - u_\ell (\mathring{x}^3, x^a(p))\mid \\
&\qquad \forall \,\, \ell, m\geq 0,
\end{split}
\end{equation}
where, in the final equality, we have exploited the fact that \(^{(c)} u_m(\mathring{x}^3, z^a(p)) = u_m (\mathring{x}^3, x^a(p))\) by virtue of our choice that the point \textit{p} always lies in the real section.
As before, it follows immediately that for any \(\varepsilon > 0\) there exists an integer \(Q > 0\) such that
\begin{equation}\label{eq:720}
\mid\,\,^{(c)}u_m(\mathring{x}^3, z^a(q)) -\,\, ^{(c)}u_\ell (\mathring{x}^3, z^a (q))\mid < \varepsilon \quad \forall \,\, m, \ell > Q
\end{equation}
and thus that the sequence \(\{^{(c)}u_m(\mathring{x}^3, z^a(q)) \mid m = 1, 2, \ldots \}\)
is Cauchy with respect to the \(C^0\) norm. Thus the sequence of approximations converges to a holomorphic limit on the domain indicated. Repeating this argument for a (finite) collection of such domains sufficient to cover \(\overline{T^s\Sigma}\) we conclude that
\begin{equation}\label{eq:721}
^{(c)}u(\mathring{x}^3, z^a) = \frac{k}{2} \int^\infty_{\mathring{x}^{3}} d\rho \,\, \text{exp}[-\int^\rho_{\mathring{x}^{3}} d\xi \frac{\mathring{\varphi}_{,t}}{2}(\xi, z^a)]
\end{equation}
is a well-defined holomorphic function on \(T^s\Sigma\) (which extends continuously to its boundary) and that, by construction, this function reduces to the real-valued function \(u(\mathring{x}^3, x^a)\) defined in the previous section. The latter is therefore necessarily a real-valued analytic function on \(\Sigma\) which is the result we were required to prove.
The analytic functions thus defined on tubular neighborhoods of arbitrary null generators of \textit{N} necessarily coincide on overlapping domains of definition. This follows from the fact that each such \textit{u} was uniquely determined by the geometrical requirement that it `renormalize' the corresponding generators to all have the same, fixed future affine length \(2/k\). We may thus regard \textit{u} as a globally defined analytic function on \textit{N} and thus arrive at a globally defined, analytic, candidate vector field \(K := uX\).
\section{A candidate vector field in the non-degenerate case}
\label{sec:candidate-vector}
In this section, we focus on the non-degenerate case and, if necessary, change the sign of $X$ so that it points in a direction of incompleteness for the null generators of \textit{N}. We now define a vector field \textit{K} on \textit{N}, also tangent to the generators of this hypersurface, by setting \(K = uX\) where \textit{u} is a positive real-valued function on \textit{N} chosen so that, for any point \(p\in N\), the null generator determined by the initial conditions \((p, K(p) = u(p) X(p))\) has a fixed (i.e., independent of \textit{p}) future affine length given by \(\frac{2}{k}\) where \textit{k} is a constant \(> 0\). At the moment there is no preferred normalization for \textit{k} so we choose its value arbitrarily.
From Equation (\ref{eq:505}) upon putting \((\eta (\infty, \mathring{x}^a) - \mathring{\eta} (\mathring{x}^3, \mathring{x}^a)) = \frac{2}{k}\), we see that \(u(x^3, x^a)\) is necessarily expressible, in an arbitrary `unwrapped' elementary region \(\hat{\mathcal{H}}\) for \textit{N}, by
\begin{equation}\label{eq:601}
u(x^3,x^a) = \frac{k}{2} \int\limits^\infty_{x^{3}} d\rho\; \text{exp}\left[-\int\limits^{\rho}_{x^{3}} \frac{\mathring{\varphi},_{t}}{2} (\xi, x^a)d\xi\right].
\end{equation}
By the results of the previous section, the needed integral converges for every generator and clearly \(u > 0\) on \(\hat{\mathcal{H}}\). What is not clear however, in view of the limiting procedure needed to define the outer integral over a semi-infinite domain, is whether \textit{u} is in fact analytic and we shall need to prove that it is. We shall do this below by showing that a sequence \(\{ u_i : \mathcal{H}\to\mathbf{R}^+\mid i = 1,2,\ldots\}\) of analytic `approximations' to \textit{u} defined by
\begin{equation}\label{eq:602}
u_i(x^3,x^a) = \frac{k}{2} \int\limits^{x^{3}+is^{*}}_{x^{3}} d\rho ~ \text{exp}\left[-\int\limits^\rho_{x^{3}} \frac{\mathring{\varphi},_{t}}{2} (\xi, x^a)d\xi\right],
\end{equation}
where \(s^*\) is the recurrence time introduced in Section \ref{sec:nondegeneracy}, does indeed have an analytic limit as \(i\to\infty\).
For the moment however, let us assume that we know that \textit{K} is analytic and introduce new agn coordinates \(\{ x^{3'}, x^{a'},t'\}\) which are adapted to \textit{K} rather than to \(X = \frac{\partial}{\partial x^{3}}\). Thus we seek a transformation of the form \(\{x^{3'} = h(x^3, x^a), x^{a'} = x^a\}\) which yields \(K = \frac{\partial}{\partial x^{3'}}\). A straightforward calculation shows that \textit{h} must satisfy
\begin{equation}\label{eq:603}
\frac{\partial h(x^{3},x^{a})}{\partial x^{3}} = \frac{1}{u(x^{3},x^{a})} = \left\{\frac{1}{\frac{k}{2} \int^\infty_{x^3} d\rho ~ \exp[-\int^\rho_{x^{3}} d\xi \frac{\mathring{\varphi}_{,t}}{2}(\xi, x^{a})]}\right\}
\end{equation}
which, since the denominator is analytic by assumption and non-vanishing, yields an analytic \textit{h} upon integration.
As was shown in Sect. IIIA of Reference \cite{Moncrief:1983}, a transformation of the above type connects the primed and unprimed metric functions \(\mathring{\varphi}_{,t}\) and \(\mathring{\varphi}'_{,t'}\) via
\begin{equation}\label{eq:604}
2 \frac{\partial}{\partial x^3}\left(\frac{\partial h}{\partial x^3}\right) + \frac{\partial h}{\partial x^3} \mathring{\varphi}_{,t} = \left(\frac{\partial h}{\partial x^3}\right)^2 \, \mathring{\varphi}'_{,t'} .
\end{equation}
Computing \(\frac{\partial^2h}{\partial x^{3~2}}\) from Equation (\ref{eq:603}) above and substituting this and
\(\frac{\partial h}{\partial x^3}\) into the above formula one finds that the transformed metric has
\begin{equation}\label{eq:605}
\mathring{\varphi}'_{,t'} = k = \text{constant}
\end{equation}
throughout any agn chart adapted to \textit{K}. This argument is somewhat the reverse of that given in Reference \cite{Moncrief:1983}, for the case of closed generators, wherein we set \(\mathring{\varphi}'_{, t'} = k\) and solved Equation (\ref{eq:604}) for \(\frac{\partial h}{\partial x^3}\) and then \textit{h}.
In the new charts one still has \(\mathring{\varphi}' = \mathring{\beta}'_\alpha = 0\) since these hold in any agn coordinate system and, upon repeating the argument of Section \ref{subsec:invariance} above, with \textit{K} in place of \textit{X}, we obtain
\(\mathring{\mu}'_{a'b',3'} = 0\) as well. Now evaluating the Einstein equation \(R_{3b} = 0\) at \(t = t' = 0\) and using the foregoing, together with the new result that \(\mathring{\varphi}'_{,t'} = k\) in the primed charts, one finds that \(\mathring{\beta}'_{b',t',3'} = 0\).
Deleting primes to simplify the notation, we thus find that in agn charts adapted to \textit{K}, the metric functions obey
\begin{equation}\label{eq:606}
\mathring{\varphi} = \mathring{\beta}_a = \mathring{\mu}_{ab,3} = 0,\,\, \mathring{\varphi}_{,t} = \text{constant} \neq 0, (\mathring{\beta}_{a,t})_{,3} = 0.
\end{equation}
These are the main results we shall need for the inductive argument of Section~\ref{sec:existence-killing} to prove that there is a spacetime Killing field \textit{Y} such that \(Y\mid_N = K\).
Referring to Equation (\ref{eq:504}) and evaluating the integrals in the new charts in which \(\mathring{\varphi}_{,t} = k = \text{constant} > 0\) one sees easily that though the null generators are all incomplete towards the `future' they are in fact all complete towards the `past' (where here future and past designate simply the directions of \textit{K} and \(-K\) respectively). It may seem strange at first glance to say that any generator could have a fixed future affine length \((=\frac{2}{k})\) no matter where one starts along it, but the point is that this length is here always being computed from the geodesic initial conditions \((p, K(p))\). If one starts with, say, \((q, K(q))\) and later reaches a point \textit{p} on the same generator, then the tangent to the (affinely parametrized) geodesic emanating from \textit{q} will not agree with \(K(p)\) but will instead equal \(c K(p)\) for some constant \(c > 1\). Only upon `restarting' the generator with the initial conditions \((p, K(p))\) will it be found to have the same future affine length that it had when started instead from \((q, K(q))\). Indeed, if the tangent to an affinely parametrized geodesic did not increase relative to \textit{K} then the generator could never be incomplete on a compact manifold \textit{N} where the integral curves of a vector field \textit{K} are always complete.
Let us now return to the question of the analyticity of the `scale factor' \(u(x^3,x^a)\). First note that, upon combining Equations (\ref{eq:603}), (\ref{eq:604}) and (\ref{eq:605}), \textit{u} satisfies the linear equation with analytic coefficients
\begin{equation}\label{eq:607}
\frac{\partial u}{\partial x^3} - \frac{\mathring{\varphi}_{,t}}{2} \,\, u = - \frac{k}{2}
\end{equation}
provided one takes, as initial condition specified at some \(\mathring{x}^3\),
\begin{equation}\label{eq:608}
u(\mathring{x}^3, x^a) = \frac{k}{2} \int\limits^\infty_{\mathring{x}^3} d\rho \,\, \text{exp}[-\int\limits^\rho_{\mathring{x}^3} \frac{\mathring{\varphi}_{,t}}{2} (\xi, x^a)d\xi ].
\end{equation}
More precisely, using an appropriate integrating factor for Equation (\ref{eq:607}), namely \(\text{exp}[-\int^{x^{3}}_{\mathring{x}^3} d\xi \frac{\mathring{\varphi}_{,t}}{2} (\xi, x^a)]\), one easily shows that the solution to Equation (\ref{eq:607}) determined by the initial condition (\ref{eq:608}) is given by Equation (\ref{eq:603}). But Equation (\ref{eq:607}) can be viewed as a (linear, analytic) partial differential equation to which the Cauchy-Kowalewski theorem applies \cite{John:1991} and guarantees the analyticity of the solution on domains corresponding (because of linearity) to those of the coefficients (in this case \(\mathring{\varphi}_{,t}(x^3,x^a)\)) \textit{provided} that the initial condition \(u(\mathring{x}^3, x^a)\) is analytic with respect to the \(\{ x^a\}\). In other words, our problem reduces to that of proving that Equation (\ref{eq:608}), for fixed \(\mathring{x}^3\), defines an analytic function of the \(\{x^a\}\). Thus we only need to show that the sequence of `approximations'
\begin{equation}\label{eq:609}
\begin{split}
&u_i(\mathring{x}^3, x^a) := \frac{k}{2} \int\limits^{\mathring{x}^{3} + is^{*}}_{\mathring{x}^{3}} d\rho \,\, \text{exp} [-\int\limits^\rho_{\mathring{x}^{3}} \frac{\mathring{\varphi}_{,t}}{2} (\xi, x^a)d\xi],\\
&i = 1, 2, \ldots
\end{split}
\end{equation}
converges to an analytic function of the \(\{x^a\}\) for fixed \(\mathring{x}^3\).
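For the reader's convenience we record the elementary integration alluded to above: multiplying Equation (\ref{eq:607}) by the integrating factor \(\text{exp}[-\int^{x^{3}}_{\mathring{x}^3} d\xi \frac{\mathring{\varphi}_{,t}}{2} (\xi, x^a)]\) and integrating from \(\mathring{x}^3\) to \(x^3\) gives
\begin{equation*}
u(x^3, x^a) = e^{\int^{x^{3}}_{\mathring{x}^{3}} \frac{\mathring{\varphi}_{,t}}{2} (\xi, x^a) d\xi} \left\lbrace u(\mathring{x}^3, x^a) - \frac{k}{2} \int\limits^{x^{3}}_{\mathring{x}^{3}} d\rho \,\, \text{exp}[-\int\limits^\rho_{\mathring{x}^{3}} \frac{\mathring{\varphi}_{,t}}{2} (\xi, x^a) d\xi]\right\rbrace,
\end{equation*}
which, upon inserting the initial condition (\ref{eq:608}) and combining the exponential factors, collapses to the expression (\ref{eq:608}) with \(x^3\) in place of \(\mathring{x}^3\). A model case may also help fix ideas: were \(\mathring{\varphi}_{,t}\) already the constant \textit{k}, one would find \(u_i(\mathring{x}^3, x^a) = \frac{k}{2}\int^{is^{*}}_{0} e^{-\frac{k}{2}\rho} d\rho = 1 - e^{-\frac{k}{2}is^{*}}\), so that the approximations converge, uniformly in the \(\{x^a\}\) and trivially in this case, to the constant function \(u \equiv 1\), which indeed solves Equation (\ref{eq:607}).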
However, a (pointwise) convergent sequence of analytic functions could easily converge to a limit which is not even continuous, much less analytic. On the other hand, the set of continuous functions on a compact manifold forms a Banach space with respect to the \(C^0\) norm (uniform convergence) so that one could hope at least to establish the continuity of the limit by showing that the sequence \(\{u_i (\mathring{x}^3, x^a)\}\) is Cauchy with respect to this norm.
A much stronger conclusion is possible, however, if one first complexifies the slices \(x^3 = \text{constant}\) of an arbitrary elementary region \(\mathcal{H} \subset N\) (which are each diffeomorphic to a manifold \(\Sigma\) of the type defined previously) and extends the analytic metric functions defined on \textit{N} to holomorphic functions defined on this complex `thickening' of \(\mathcal{H}\) in the \(\{ x^a\}\) directions which extend continuously to the boundary of its closure. The space of holomorphic functions on such a complex manifold (with boundary) forms a Banach space with respect to the \(C^0\) norm so that the limit of any Cauchy sequence of holomorphic functions (which extend continuously to the boundary) will in fact be holomorphic and not merely continuous \cite{Treves:2006,Ohsawa:note}. In the following section, we shall define a certain complex `thickening' of \textit{N} with respect to all of its dimensions (a so-called `Grauert tube') but then, in view of the discussion in the preceding paragraph, restrict the integration variable \(x^3\) defined on an arbitrary elementary region \(\mathcal{H}\) to real values so that, in effect, only the leaves of the foliation of \(\mathcal{H} \approx \Sigma\times \mathbf{S}^1\) are thickened.
Let us temporarily remain within the real analytic setting to sketch out the basic idea of the argument to be given later in the holomorphic setting. This detour, though it cannot yield more than the continuity of \(u(\mathring{x}^3, x^a)\) in the \(\{ x^a\}\) variables, will be easier to understand at a first pass and will require only straightforward modification for its adaptation to the holomorphic setting.
For any point \textit{p} in the slice determined by \(x^3(p) = \mathring{x}^3\) the monotonically increasing, convergent sequence of real numbers
\begin{equation}\label{eq:610}
\begin{split}
u_i(\mathring{x}^3, x^a(p)) &= \frac{k}{2} \int\limits^{\mathring{x}^{3} + is^{*}}_{\mathring{x}^{3}} d\rho \,\, \text{exp} [-\int\limits^\rho_{\mathring{x}^{3}} d\xi \,\, \frac{\mathring{\varphi}_{,t}}{2} (\xi, x^a(p))] \\
&i = 1,2, \ldots
\end{split}
\end{equation}
is clearly a Cauchy sequence which converges to \(u(\mathring{x}^3, x^a(p))\). Thus for any \(\varepsilon' > 0\) there exists a positive integer \textit{Q} such that
\begin{equation}\label{eq:611}
\mid u_m(\mathring{x}^3, x^a(p)) - u_\ell (\mathring{x}^3, x^a(p))\mid < \varepsilon' \quad \forall \,\, m, \ell > Q.
\end{equation}
Now consider an arbitrary point \textit{q} in the initial slice (i.e., having \(x^3(q) = \mathring{x}^3\)) that lies within a closed geodesic ball in this slice which is centered at \textit{p} (i.e., a ball of the type used in the ribbon argument of the previous section). By the ribbon arguments given in this last section, one easily finds that
\begin{equation}\label{eq:612}
\begin{split}
&\mid u_m(\mathring{x}^3, x^a(q)) - u_\ell (\mathring{x}^3, x^a(q))\mid \\
&= \biggm|\frac{k}{2}\int\limits^{\mathring{x}^{3} + ms^{*}}_{\mathring{x}^{3}+\ell s^{*}} \,\,d\rho \,\, \text{exp}[-\int\limits^\rho_{\mathring{x}^3} \,\, d\xi \, \frac{\mathring{\varphi}_{,t}}{2} (\xi, x^a(q))]\biggm| \\
&= \biggm|\frac{k}{2} \int\limits^{\mathring{x}^{3} + ms^{*}}_{\mathring{x}^{3}+\ell s^{*}} \,\,d\rho \,\, \text{exp}[\delta_{p,q}(\rho)] \text{exp}[-\int\limits^\rho_{\mathring{x}^3} \,\, d\xi \, \frac{\mathring{\varphi}_{,t}}{2} (\xi, x^a(p))]\biggm| \\
&\leq e^{b_{2}} \biggm|\frac{k}{2} \int\limits^{\mathring{x}^{3} + ms^{*}}_{\mathring{x}^{3}+\ell s^{*}} \,\,d\rho \,\, \text{exp}[-\int\limits^\rho_{\mathring{x}^3} \,\, \frac{\mathring{\varphi}_{,t}}{2} (\xi, x^a(p))d\xi]\biggm| \\
&= e^{b_{2}} \bigm| u_m(\mathring{x}^3, x^a(p)) - u_\ell (\mathring{x}^3, x^a(p))\bigm|
\end{split}
\end{equation}
for all \textit{q} in this ball, where \(b_2\) is a constant that depends upon \textit{p} and the radius of the chosen ball. Thus, for any \(\varepsilon > 0\), choosing \(\varepsilon' = e^{-b_{2}}\varepsilon\) in Equation (\ref{eq:611}) gives
\begin{equation}\label{eq:613}
\bigm| u_m (\mathring{x}^3, x^a(q)) - u_\ell (\mathring{x}^3, x^a(q))\bigm| \,\, < \varepsilon \quad \forall \, m, \ell > Q
\end{equation}
and for all \textit{q} in the compact set defined by the chosen (closed) geodesic ball. Thus the sequence of (real-valued) continuous functions \(\{ u_m(\mathring{x}^3, x^a(q))\mid m = 1,2,\ldots\}\) defined on this ball is a Cauchy sequence relative to the \(C^0\)-norm and hence its limit \(u(\mathring{x}^3, x^a(q))\) is necessarily continuous. By covering the initial slice by a collection of overlapping such balls, we deduce that \(u(\mathring{x}^3, x^a(q))\) is globally continuous on the initial slice.
\section{Construction of the Candidate Vector Field}
\label{sec:construction}
\input{subsec-geometrical}
\input{subsec-invariance}
\input{subsec-application-poincare}
\input{subsec-implications-poincare}
\input{subsec-connection}
\section{Elementary Regions and their Analytic Foliations}
\label{sec:elementary}
In the sections to follow we shall define a `candidate' vector field \textit{K} on \textit{N} by rescaling \textit{X} appropriately, prove its analyticity and eventually show that \textit{K} propagates into the enveloping spacetime as an analytic Killing vector field. If, for some reason, we knew a priori that \textit{N} admitted a global, analytic foliation with closed leaves that are everywhere transverse to the flow of \textit{X} then we could proceed with this analysis much as we did for the (higher dimensional) stationary black holes of Ref.~\cite{Moncrief:2008}, working `globally' on \textit{N} by directly exploiting the special structure provided by its `pre-existing' analytic foliation. Here, however, no such analytic foliation has been presumed to exist and indeed the very possibility of global, closed, transversal leaves might be excluded for purely topological reasons.\footnote{For example, even for the case of closed generators of \textit{N} the integral curves of \textit{X} might well be the fibers of a non-trivial \(\mathbf{S}^1\)-bundle as is indeed the case for the Taub-NUT family of spacetimes.} On this account we shall decompose \textit{N}, as needed, into a finite collection of \textit{elementary regions} that will each be shown to admit an analytic, transversal foliation and carry out the aforementioned analysis first on the individual elementary regions, much as we did for the case of closed generators in Ref.~\cite{Moncrief:1983}. Finally, after verifying the consistency of these constructions on overlapping domains of definition, we shall assemble the resulting components and ultimately arrive at a globally defined, analytic `candidate' vector field \textit{K} on \textit{N}.
Consider any one of the analytically embedded 2-tori discussed in Sect.~(\ref{subsec:implications-poincare}) that is realized as the closure, \(cl(\gamma)\), of a (non-closed but densely-torus-filling) generator \(\gamma\). This torus supports the flow of a nowhere-vanishing, analytic vector field, namely that induced from \textit{X} which, by construction, is tangential to the chosen embedded torus.
Thanks to a theorem due to M.~Kontsevich (whose proof is sketched in the Appendix below) one knows that such a torus always admits an analytic foliation with closed leaves that are everywhere transverse to the flow generated by \textit{X}. We now wish to `thicken' such an embedded torus to obtain an embedded 3-manifold diffeomorphic to \(\mathcal{A} \times \mathbf{S}^1\) (where \(\mathcal{A}\) is an open annulus), consisting entirely of generators of \textit{N}, and to show that this thickened torus will itself admit an analytic foliation (with leaves each diffeomorphic to \(\mathcal{A}\)) that is everywhere transverse to the flow generated by \textit{X}. Such a thickened torus, together with its analytic, transverse foliation, will be the first of two types of \textit{elementary regions} that we shall define.
The second type of elementary region will only be needed to cover a `tubular neighborhood in \textit{N}' of any particular \textit{closed} generator \(\gamma\) that might, exceptionally, occur. In this case we shall `thicken' \(\gamma\) to a solid torus diffeomorphic to \(\mathcal{D} \times \mathbf{S}^1\) (where \(\mathcal{D}\) is an open disk), consisting entirely of generators sufficiently close to \(\gamma\), and show that such a solid torus admits an analytic, transversal foliation with leaves diffeomorphic to \(\mathcal{D}\). For the case of a non-ergodic flow (as defined in Sect.~(\ref{subsec:implications-poincare})) every generator of \textit{N} is either closed or densely fills an embedded 2-torus. By the compactness of \textit{N} such a null hypersurface can clearly be covered by a finite collection of such elementary regions with those of the second type only needed in the presence of closed generators.
To construct such elementary regions we shall need an analytic, Riemannian metric on \textit{N}. To define such a metric we slightly modify the argument in Sect.~(\ref{subsec:application-poincare}) by now insisting that the (normalized, timelike) vector field \textit{V}, which is transverse to \textit{N} in \(({}^{(4)}\!V, g)\), be itself analytic. Since the timelike condition is an \textit{open} one and since the normalization of such an analytic vector field will not disturb its analyticity there is no loss of generality involved in assuming that the induced Riemannian metric \({}^{(3)}\!g\) is in fact analytic on \textit{N}. Recall that the metric so defined (via Eqs.~(\ref{eq:213})--(\ref{eq:218})) in fact satisfies
\begin{equation}\label{eq:401}
\mathcal{L}_X \left(\sqrt{\det{{}^{(3)}\!g}}\right) = 0
\end{equation}
on \textit{N}.
As discussed in the Appendix one constructs an analytic, transversal foliation with closed leaves for any one of such embedded 2-tori by showing that it always admits an analytic, \textit{closed} one-form \(\lambda\) with integral periods that, moreover, satisfies \(\lambda (X) > 0\). Since any such \(\lambda\) is locally expressible as \(\lambda = d\omega\) for some analytic function \(\omega\), the level sets of \(\omega\) define the leaves of the foliation. Thus \(\omega\) provides an analytic coordinate function that is constant on the leaves so defined. The closedness of these leaves and their transversality to \textit{X} are ensured by the integrality of the periods of \(\lambda\) and by the condition that \(\lambda (X) > 0\) everywhere on the torus. Any two such coordinate functions, \(\omega\) and \(\omega'\), will of course only differ by a constant on their overlapping domains of definition.
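A simple model, included here only for illustration, shows what such a \(\lambda\) looks like: for the linear flow \(X = \partial_{\theta^1} + \alpha \partial_{\theta^2}\), with \(\alpha\) irrational, on the flat 2-torus with angle coordinates \((\theta^1, \theta^2)\) of period one, the closed one-form
\begin{equation*}
\lambda = d\theta^1, \qquad \lambda(X) = 1 > 0,
\end{equation*}
has integral periods and its local potential \(\omega = \theta^1\) foliates the torus by the closed circles \(\theta^1 = \text{constant}\), each everywhere transverse to the (dense) orbits of \textit{X}.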
We now `thicken' the chosen 2-torus by flowing along the normal geodesics of the metric \({}^{(3)}\!g\) on \textit{N}, much as we would in constructing a \textit{gaussian-normal} neighborhood of the given torus. By restricting the range of the (normal geodesic) flow parameter suitably one can ensure that the resulting thickened torus is diffeomorphic to \(\mathcal{A} \times \mathbf{S}^1\), where \(\mathcal{A}\) is an open annulus corresponding to a thickened leaf of the original torus, and consists entirely of integral curves of \textit{X}. By continuity, if this thickening is sufficiently restricted the annular leaves of the foliated 3-manifold will be globally transverse to \textit{X}.
We now extend the domain of definition of the analytic, coordinate function \(\omega\) by requiring it to be everywhere constant on any one of the thickened leaves. Choosing complementary, analytic coordinates\ \(\lbrace x^a\rbrace = \lbrace x^1, x^2\rbrace\) on one of these annular leaves and holding these fixed along the flow generated by \textit{X} while setting \(x^3 = \omega\) one gets a convenient adapted coordinate chart for the thickened torus \(\approx \mathcal{A} \times \mathbf{S}^1\). Any two such coordinate systems \(\lbrace x^a, x^3\rbrace\) and \(\lbrace x^{a'}, x^{3'}\rbrace\) will be related, on their overlapping domains of definition by a transformation of the form
\begin{equation}\label{eq:402}
\begin{split}
x^{3'} &= x^3 + \hbox{ constant}\\
x^{a'} &= f^a (x^1, x^2)
\end{split}
\end{equation}
where the \(\lbrace f^a\rbrace\) define an analytic diffeomorphism of the annulus \(\mathcal{A}\). Thus this first type of elementary region consists of a thickened 2-torus foliated, on the one hand, by the (non-closed) integral curves of \textit{X} and, on the other, by annuli transverse to the flow of \textit{X}.
The second type of elementary region results from thickening a closed generator \(\gamma\) to get a solid torus with \(\gamma\) at its core. To construct this choose an analytic, `angle' coordinate \(x^3\) to label the points of the chosen generator \(\gamma\). At each point \textit{p} of \(\gamma\) we have a corresponding, orthogonal 2-plane in the tangent space, \(T_p N\), defined by the metric \({}^{(3)}\!g\) (i.e., the orthogonal complement to the tangent vector to \(\gamma\) at \textit{p}). By flowing along the geodesics of \({}^{(3)}\!g\) in \textit{N} we may thus `thicken' each such point \(p \in \gamma\) to a disk \(\mathcal{D}_p\) which, by construction, is orthogonal to \(\gamma\) at \textit{p}. By restricting the geodesic flow parameter suitably (in its dependence upon \textit{p} and the orthogonal direction to \(\gamma\) at \textit{p}) we may ensure that the \(\gamma\) so thickened is diffeomorphic to \(\mathcal{D} \times \mathbf{S}^1\), consists entirely of integral curves of \textit{X} and is such that each thickened leaf, \(\mathcal{D}_p\), is transverse to the flow generated by \textit{X}.
By defining an analytic coordinate \(x^3\) on \(\mathcal{D} \times \mathbf{S}^1\) by holding the chosen angular coordinate for \(\gamma\) constant on each leaf and by choosing complementary, analytic coordinates \(\lbrace x^a\rbrace = \lbrace x^1, x^2\rbrace\) for any one of the transversal disks and holding these constant along the flow of \textit{X} we generate an adapted analytic coordinate system for this second type of elementary region. Any two such coordinate systems, \(\lbrace x^a, x^3\rbrace\) and \(\lbrace x^{a'}, x^{3'}\rbrace\), will be related by a transformation of the form
\begin{equation}\label{eq:403}
\begin{split}
x^{3'} &= x^3 + \hbox{ constant}\\
x^{a'} &= g^a (x^1, x^2)
\end{split}
\end{equation}
on their overlapping domains of definition where now the \(\lbrace g^a\rbrace\) define an analytic diffeomorphism of the disk.
In the following sections it will be convenient to let the symbol \(\mathcal{H}\) designate an arbitrary elementary region of either of the two types. By the compactness of \textit{N} it is clear that we can cover \textit{N} by a finite collection of such elementary regions.
\section{Existence of a Killing Symmetry}
\label{sec:existence-killing}
We have shown that there exists a non-vanishing, analytic vector field \textit{K} on \textit{N}, tangent to the null generators of \textit{N} such that, in any gaussian null coordinate chart adapted to \textit{K} (i.e., for which \textit{K} has the local expression \(K = \left.\frac{\partial}{\partial x^3}\right|_{t=0}\)), the metric functions \(\lbrace\varphi, \beta_a, \mu_{ab}\rbrace\) of that chart obey
\begin{equation}\label{eq:301}
\begin{split}
\mathring{\varphi} = \mathring{\beta}_a &= \mathring{\mu}_{ab,3} = 0,\\
\mathring{\varphi}_{,t} = k &= \hbox{ constant } \neq 0,\\
\left(\mathring{\beta}_{a,t}\right)_{,3} &= 0.
\end{split}
\end{equation}
We shall show momentarily that \((\mathring{\mu}_{ab,t})_{,3}\) also vanishes and thus that all the metric functions and their first time derivatives are independent of \(x^3\) on the initial surface \(t = 0\) (signified as before by an overhead `nought'). In the following, we shall prove inductively that all the higher time derivatives of the metric functions are independent of \(x^3\) at \(t = 0\) and thus that the corresponding \textit{analytic}, Lorentzian metric,
\begin{equation}\label{eq:302}
\begin{split}
g &= dt \otimes dx^3 + dx^3 \otimes dt + \varphi dx^3 \otimes dx^3\\
&+ \beta_a dx^a \otimes dx^3 + \beta_a dx^3 \otimes dx^a + \mu_{ab} dx^a \otimes dx^b,
\end{split}
\end{equation}
has \(\frac{\partial}{\partial x^3}\) as a (locally defined) Killing field throughout the gaussian null coordinate chart considered. Finally, we shall show that the collection of locally defined Killing fields, obtained by covering a neighborhood of \textit{N} by adapted gaussian null (agn) coordinate charts and applying the construction mentioned above, fit together naturally to yield a spacetime Killing field \textit{Y} which is analytic and globally defined on a full neighborhood of \textit{N} and which, when restricted to \textit{N}, coincides with the vector field \textit{K}.
Some of the results to be derived are purely local consequences of Einstein's equations expressed in an agn coordinate chart (such as, e.g., the observation that \(\mathring{\varphi}_{,t} = k\) implies \((\mathring{\beta}_{a,t})_{,3} = 0\)). Others, however, require a more global argument and thus demand that we consider the transformations between overlapping, agn charts which cover a neighborhood of \textit{N} in \({}^{(4)}\!V\). For example, by considering the Einstein equations \(R_{ab} = 0\) restricted to \(t = 0\) and reduced through the use of \(\mathring{\varphi}_{,t} = k = \hbox{ constant }, \mathring{\mu}_{ab,3} = 0\) and \((\mathring{\beta}_{a,t})_{,3} = 0\) one can derive (as in the derivation of Eq.~(3.26) of Ref.~\cite{Moncrief:1983}) the local equation for \(\mathring{\mu}_{ab,t}\) given by
\begin{equation}\label{eq:303}
0 = -(\mathring{\mu}_{ab,t})_{,33} + \frac{k}{2} (\mathring{\mu}_{ab,t})_{,3}.
\end{equation}
Roughly speaking, we want to integrate this equation along the null generators of \textit{N} and show, as in Ref.~\cite{Moncrief:1983}, that it implies that \((\mathring{\mu}_{ab,t})_{,3} = 0\). Now, however, since the null generators are no longer assumed to be closed curves, this argument requires a more invariant treatment than was necessary in Ref.~\cite{Moncrief:1983}.
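To see what is at stake locally, set \(w_{ab} := (\mathring{\mu}_{ab,t})_{,3}\), in terms of which Equation (\ref{eq:303}) reads \(w_{ab,3} = \frac{k}{2} w_{ab}\) and integrates along a generator to
\begin{equation*}
w_{ab}(x^3) = w_{ab}(\mathring{x}^3)\, e^{\frac{k}{2}(x^{3} - \mathring{x}^{3})},
\end{equation*}
so that (since \(k \neq 0\)) each component either vanishes identically or grows exponentially in one direction along the generator. It is the chart-dependence of the \(w_{ab}\) that necessitates the invariantly defined scalar quantities introduced below.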
First, let \(\lbrace x^\mu\rbrace = \lbrace t, x^3, x^a\rbrace\) and \(\lbrace x^{\mu'}\rbrace = \lbrace t', x^{3'}, x^{a'}\rbrace\) be any two gaussian null coordinate charts which are adapted to \textit{K} (i.e., for which \(K = \frac{\partial}{\partial x^3}|_{t=0}\) and \(K = \frac{\partial}{\partial x^{3'}}|_{t' =0}\) on the appropriate domains of definition of the given charts). It is not difficult to see that, if the two charts overlap on some region of \textit{N}, then within that region the coordinates must be related by transformations of the form
\begin{equation}\label{eq:304}
\begin{split}
x^{3'} &= x^3 + h(x^a)\\
x^{a'} &= x^{a'} (x^b)
\end{split}
\end{equation}
where \(t = t' = 0\) since we have restricted the charts to \textit{N}. Here \textit{h} is an analytic function of the coordinates \(\lbrace x^a\rbrace\) labeling the null generators of \textit{N} and \(x^{a'} (x^b)\) is a local analytic diffeomorphism allowing relabeling of those generators within the region of overlap of the charts.
We let \(\lbrace\varphi, \beta_a, \mu_{ab}\rbrace\) designate the agn metric functions of the unprimed chart,
\begin{equation}\label{eq:305}
\begin{split}
g &= g_{\mu\nu} dx^\mu \otimes dx^\nu\\
&= dt \otimes dx^3 + dx^3 \otimes dt + \varphi dx^3 \otimes dx^3\\
&+ \beta_a dx^a \otimes dx^3 + \beta_a dx^3 \otimes dx^a + \mu_{ab} dx^a \otimes dx^b,
\end{split}
\end{equation}
and \(\lbrace\varphi', \beta'_a, \mu'_{ab}\rbrace\) designate the corresponding functions in the primed chart.
In the region of \({}^{(4)}\!V\) in which the charts overlap, we have of course,
\begin{equation}\label{eq:306}
g_{\mu'\nu'} = \frac{\partial x^\alpha}{\partial x^{\mu'}} \frac{\partial x^\beta}{\partial x^{\nu'}} g_{\alpha\beta}
\end{equation}
and, because of the gaussian null metric form,
\begin{equation}\label{eq:307}
\begin{split}
g_{t't'} &= 0 = \frac{\partial x^\alpha}{\partial t'} \frac{\partial x^\beta}{\partial t'} g_{\alpha\beta}\\
g_{t'3'} &= 1 = \frac{\partial x^\alpha}{\partial t'} \frac{\partial x^\beta}{\partial x^{3'}} g_{\alpha\beta}\\
g_{t'a'} &= 0 = \frac{\partial x^\alpha}{\partial t'} \frac{\partial x^\beta}{\partial x^{a'}} g_{\alpha\beta}
\end{split}
\end{equation}
By virtue of the form of (\ref{eq:304}), we also have, of course, that \(\frac{\partial}{\partial x^3}|_{t=0} = \frac{\partial}{\partial x^{3'}}|_{t'=0}\) on the region of overlap (since both charts were adapted to \textit{K} by assumption).
Writing out Eqs.~(\ref{eq:307}) in more detail, using the explicit form of \(g_{\alpha\beta}\), restricting the result to the surface \(t' = t = 0\) and making use of the transformations (\ref{eq:304}) which hold on that surface, one readily derives that
\begin{equation}\label{eq:308}
\begin{split}
\left.\left(\frac{\partial t}{\partial t'}\right)\right|_{t'=0} &= 1,\\
\left.\left(\frac{\partial x^a}{\partial t'}\right)\right|_{t'=0} &= \left.(\mu^{ab} h_{,b})\right|_{t=0}\\
\left.\left(\frac{\partial x^3}{\partial t'}\right)\right|_{t'=0} &= \left.\left(-\frac{1}{2} \mu^{ab} h_{,a} h_{,b}\right)\right|_{t=0}.
\end{split}
\end{equation}
Differentiating these equations with respect to \(x^{3'}\) and using the fact that \(\mathring{\mu}_{ab,3} = 0\) one finds that
\begin{equation}\label{eq:309}
\left.\left(\frac{\partial^2 x^\alpha}{\partial x^{3'} \partial t'}\right)\right|_{t'=0} = 0.
\end{equation}
The remaining metric transformation equations (\ref{eq:307}), restricted to the initial surface, yield the covariance relation
\begin{equation}\label{eq:310}
\left.\vphantom{\frac{1}{2}}\mu_{a'b'}\right|_{t'=0} = \left.\left(\frac{\partial x^c}{\partial x^{a'}} \frac{\partial x^d}{\partial x^{b'}} \mu_{cd}\right)\right|_{t=0}
\end{equation}
as well as reproducing equations such as \(\varphi'|_{t'=0} = 0\), and \(\beta_{a'}|_{t'=0} = 0\) which are common to all gaussian null coordinate systems.
Now take the first \(t'\) derivative of the transformation Eqs.~(\ref{eq:306}), restrict the results to the surface \(t' = t = 0\) and make use of Eqs.~(\ref{eq:301}) to derive expressions for
\begin{equation}\label{eq:311}
\left.\vphantom{\frac{1}{2}}\lbrace\varphi'_{,t'}, \beta_{a',t'}, \mu_{a'b',t'}\rbrace\right|_{t'=0}
\end{equation}
in terms of unprimed quantities. Differentiating the resulting equations with respect to \(x^{3'}\) leads to the covariance relation
\begin{equation}\label{eq:312}
\left.\vphantom{\frac{1}{2}}\mu_{a'b',t'3'} \right|_{t'=0} = \left.\left(\frac{\partial x^c}{\partial x^{a'}} \frac{\partial x^d}{\partial x^{b'}} \mu_{cd,t3}\right)\right|_{t=0}
\end{equation}
as well as reproducing known results such as \(\beta_{a',t'3'}|_{t'=0} = 0\) which hold in all agn coordinate systems.
Now in any agn coordinate chart restricted to \textit{N}, we have the locally defined analytic functions
\begin{equation}\label{eq:313}
\begin{split}
D &\equiv \frac{\det{(\mathring{h}_{ab})}}{\det{(\mathring{\mu}_{ab})}}\\
T &\equiv \mathring{\mu}^{ab} \mathring{h}_{ab}
\end{split}
\end{equation}
where \(\mathring{h}_{ab} \equiv \mathring{\mu}_{ab,t3}\) and where \(\det{(~)}\) signifies determinant. From the covariance relations (\ref{eq:310}) and (\ref{eq:312}), however, it follows that \textit{D} and \textit{T} transform as scalar fields in passing from one agn chart to another in the initial surface \textit{N} (i.e., that \(T = T'\) and \(D = D'\) in the regions of overlap). Thus \textit{D} and \textit{T} may be regarded as globally defined analytic functions on \textit{N}. From the Einstein equations \(R_{ab} = 0\), restricted to \textit{N} and reduced by means of \(\mathring{\varphi}_{,t} = k, \mathring{\mu}_{ab,3} = 0\) and \(\mathring{\beta}_{a,t3} = 0\), one can derive Eq.~(\ref{eq:303}) in any agn chart, which in turn implies the following differential equations for \textit{D} and \textit{T}:
\begin{equation}\label{eq:314}
D_{,3} = kD,\; T_{,3} = \frac{k}{2} T.
\end{equation}
The latter can be written more invariantly as \(\mathcal{L}_K D = kD\) and \(\mathcal{L}_K T = \frac{k}{2} T\) where \(\mathcal{L}_K\) represents Lie differentiation along the vector field \textit{K}.
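Explicitly, along an integral curve \(\gamma(s)\) of \textit{K} these equations integrate to
\begin{equation*}
D(\gamma(s)) = D(\gamma(0))\, e^{ks}, \qquad T(\gamma(s)) = T(\gamma(0))\, e^{\frac{k}{2}s}.
\end{equation*}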
Equations (\ref{eq:314}) show that (since \(k \neq 0\)) both \textit{D} and \textit{T} grow exponentially along the integral curves of \textit{K} in \textit{N}. However, the {Poincar\'{e}} recurrence argument of Sect.~\ref{subsec:application-poincare} has shown that each integral curve \(\gamma\) of \textit{K}, when followed arbitrarily far in either direction from any point \textit{p} on \(\gamma\), reapproaches \textit{p} arbitrarily closely. Since \textit{D} and \textit{T} are globally analytic (hence continuous) on \textit{N}, their values, when followed along \(\gamma\), would have to reapproach arbitrarily closely their values at \textit{p}. But this is clearly incompatible with their exponential growth along \(\gamma\). The only way to avoid this contradiction is for \textit{D} and \textit{T} to vanish globally on \textit{N}. We thus conclude that \(D = T = 0\) on \textit{N} and therefore, from the defining equations (\ref{eq:313}) and the fact that \(\mathring{\mu}_{ab}\) is positive definite, that
\begin{equation}\label{eq:315}
\mathring{h}_{ab} = \mathring{\mu}_{ab,t3} = 0
\end{equation}
on \textit{N}.
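Incidentally, the scalar character of \textit{D} and \textit{T} invoked above can be checked by a one-line matrix computation: writing \(J^c_{\ a'} = \frac{\partial x^c}{\partial x^{a'}}\), the covariance relations (\ref{eq:310}) and (\ref{eq:312}) read \(\mathring{\mu}' = J^T \mathring{\mu} J\) and \(\mathring{h}' = J^T \mathring{h} J\) in matrix notation, whence
\begin{equation*}
D' = \frac{\det (J^T \mathring{h} J)}{\det (J^T \mathring{\mu} J)} = \frac{(\det J)^2 \det \mathring{h}}{(\det J)^2 \det \mathring{\mu}} = D, \qquad T' = \text{tr}\left[(J^T \mathring{\mu} J)^{-1} (J^T \mathring{h} J)\right] = \text{tr}(\mathring{\mu}^{-1} \mathring{h}) = T.
\end{equation*}
The same computation will apply verbatim to the quantities \(D^{(n+1)}\) and \(T^{(n+1)}\) defined below.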
Now, computing the first \(t'\) derivatives of Eqs.~(\ref{eq:307}), restricting the results to the initial surface \(t = t' = 0\) and differentiating the resulting equations with respect to \(x^{3'}\) one finds, upon making use of Eqs.~(\ref{eq:301}), (\ref{eq:309}), and (\ref{eq:315}), that
\begin{equation}\label{eq:316}
\left.\frac{\partial^3 x^\alpha}{\partial x^{3'} \partial t' \partial t'}\right|_{t'=0} = 0
\end{equation}
whereas Eqs.~(\ref{eq:301}), (\ref{eq:302}) and (\ref{eq:315}) show that
\begin{equation}\label{eq:317}
\left.\vphantom{\frac{1}{2}}\left( g_{\alpha\beta, t3}\right)\right|_{t=0} = 0.
\end{equation}
We now proceed inductively to extend the above results to the case of time derivatives of arbitrarily high order. As an inductive hypothesis, suppose that, for some \(n \geq 1\) and for all \textit{k} such that \(0 \leq k \leq n\), we have
\begin{equation}\label{eq:318}
\begin{split}
\left.\left(\frac{\partial}{\partial x^3} \left(\frac{\partial^k g_{\alpha\beta}}{\partial t^k}\right)\right)\right|_{t=0} &= 0,\\
\left.\left(\frac{\partial}{\partial x^{3'}} \left(\frac{\partial^{k+1} x^\alpha}{\partial t'^{\,k+1}}\right)\right)\right|_{t'=0} &= 0,
\end{split}
\end{equation}
and recall that we also have
\begin{equation}\label{eq:319}
\left.\frac{\partial t}{\partial x^{3'}}\right|_{t'=0} = \left.\frac{\partial x^a}{\partial x^{3'}}\right|_{t'=0} = 0,\quad \left.\frac{\partial x^3}{\partial x^{3'}}\right|_{t'=0} = 1.
\end{equation}
Our aim is to prove that
\begin{equation}\label{eq:320}
\begin{split}
\left.\left(\frac{\partial}{\partial x^3} \left(\frac{\partial^{n+1} g_{\alpha\beta}}{\partial t^{n+1}}\right)\right)\right|_{t=0} &= 0,\\
\left.\left(\frac{\partial}{\partial x^{3'}} \left(\frac{\partial^{n+2} x^\alpha}{\partial t'^{\,n+2}}\right)\right)\right|_{t'=0} &= 0.
\end{split}
\end{equation}
Note that the above imply that
\begin{equation}\label{eq:321}
\left.\left(\frac{\partial}{\partial x^3} \left(\frac{\partial^k g_{\alpha\beta}}{\partial x^{\gamma_1} \partial x^{\gamma_2} \ldots \partial x^{\gamma_k}}\right)\right)\right|_{t=0} = 0
\end{equation}
for all \(0 \leq k \leq n\) and for arbitrary \(\gamma_1, \gamma_2, \ldots , \gamma_k\). Furthermore, note that of the quantities \(\left.\left(\frac{\partial}{\partial x^3} \left(\frac{\partial^{n+1} g_{\alpha\beta}}{\partial x^{\gamma_1} \ldots \partial x^{\gamma_{n+1}}}\right)\right)\right|_{t=0}\), only \(\left.\left(\frac{\partial}{\partial x^3} \left(\frac{\partial^{n+1} g_{\alpha\beta}}{\partial t^{n+1}}\right)\right)\right|_{t=0}\), may be non-zero. Now differentiate the Einstein equation \(R_{t3} = 0\), \(n - 1\) times with respect to \textit{t} and set \(t = 0\) to derive an expression for \(\left.\left(\frac{\partial^{n+1}}{\partial t^{n+1}} \varphi\right)\right|_{t=0}\) in terms of \(x^3\)-invariant quantities. Differentiate the equation \(R_{tb} = 0\), \(n-1\) times with respect to \textit{t} and set \(t = 0\) to derive an expression for \(\left(\frac{\partial^{n+1}}{\partial t^{n+1}} \beta_b\right)|_{t=0}\), in terms of \(x^3\)-invariant quantities. Next, differentiate the equation \(R_{ab} = 0\), \textit{n} times with respect to \textit{t}, set \(t = 0\) and use the above results for \(\left.\left(\frac{\partial^{n+1}}{\partial t^{n+1}} \varphi\right)\right|_{t=0}\) and \(\left.\left(\frac{\partial^{n+1}}{\partial t^{n+1}} \beta_b\right)\right|_{t=0}\), together with those given in Eqs.~(\ref{eq:301}) and (\ref{eq:315}) to derive an equation of the form
\begin{equation}\label{eq:322}
\begin{split}
0 &= \left.\left(\frac{\partial^n}{\partial t^n} R_{ab}\right)\right|_{t=0}\\
&= -\frac{\partial}{\partial x^3} \left.\left(\frac{\partial^{n+1}}{\partial t^{n+1}} \mu_{ab}\right)\right|_{t=0}\\
&+ \binom{\hbox{positive}}{\hbox{constant}} \frac{\mathring{\varphi}_{,t}}{2} \left(\left.\frac{\partial^{n+1}}{\partial t^{n+1}} \mu_{ab}\right|_{t=0}\right)\\
&+ \left\lbrace\hbox{terms independent of $x^3$}\right\rbrace.
\end{split}
\end{equation}
Differentiate this equation with respect to \(x^3\) to thus derive
\begin{equation}\label{eq:323}
\begin{split}
0 &= -\left(\left.\frac{\partial^{n+1}}{\partial t^{n+1}} \mu_{ab}\right|_{t=0}\right)_{,33}\\
&+ \binom{\hbox{positive}}{\hbox{constant}} \frac{k}{2} \left(\left.\frac{\partial^{n+1}}{\partial t^{n+1}} \mu_{ab}\right|_{t=0}\right)_{,3}
\end{split}
\end{equation}
which holds in an arbitrary agn coordinate chart.
Now define
\begin{equation}\label{eq:324}
\begin{split}
D^{(n+1)} &\equiv \frac{\det{\left(\mathring{h}_{ab}^{(n+1)}\right)}}{\det{(\mathring{\mu}_{cd})}}\\
T^{(n+1)} &\equiv \mathring{\mu}^{ab} \mathring{h}_{ab}^{(n+1)}
\end{split}
\end{equation}
where \(\mathring{h}_{ab}^{(n+1)} \equiv \left.\left(\frac{\partial}{\partial x^3} \left(\frac{\partial^{n+1}}{\partial t^{n+1}} \mu_{ab}\right)\right)\right|_{t=0}\) so that Eq.~(\ref{eq:323}) becomes
\begin{equation}\label{eq:325}
0 = -\mathring{h}_{ab,3}^{(n+1)} + \binom{\hbox{positive}}{\hbox{constant}} \frac{k}{2} \mathring{h}_{ab}^{(n+1)}
\end{equation}
and \(D^{(n+1)}\) and \(T^{(n+1)}\) satisfy
\begin{equation}\label{eq:326}
\begin{split}
D^{(n+1)}_{,3} &= \binom{\hbox{positive}}{\hbox{constant}} kD^{(n+1)}\\
T^{(n+1)}_{,3} &= \binom{\hbox{positive}}{\hbox{constant}} \frac{k}{2} T^{(n+1)}
\end{split}
\end{equation}
in any agn coordinate chart. To extend the {Poincar\'{e}} recurrence argument to the quantities \(D^{(n+1)}\) and \(T^{(n+1)}\) we must first show that they are globally defined analytic functions on \textit{N}.
Differentiate the transformation equation
\begin{equation}\label{eq:327}
g_{a'b'} \equiv \mu_{a'b'} = \frac{\partial x^\alpha}{\partial x^{a'}} \frac{\partial x^\beta}{\partial x^{b'}} g_{\alpha\beta},
\end{equation}
\(n + 1\) times with respect to \(t'\), set \(t' = 0\) and differentiate the result with respect to \(x^{3'}\). Use the inductive hypothesis and the vanishing of \(\left.\left(\frac{\partial}{\partial x^3} \frac{\partial^{n+1}}{\partial t^{n+1}} \varphi\right)\right|_{t=0}\) and \(\left.\left(\frac{\partial}{\partial x^3} \frac{\partial^{n+1}}{\partial t^{n+1}} \beta_a\right)\right|_{t=0}\) to show that this calculation yields the covariance relation
\begin{equation}\label{eq:328}
\begin{split}
&\left.\left(\frac{\partial}{\partial x^{3'}} \frac{\partial^{n+1}}{\partial t'^{n+1}} \mu_{a'b'}\right)\right|_{t'=0}\\
&= \left.\left\lbrace\frac{\partial x^c}{\partial x^{a'}} \frac{\partial x^d}{\partial x^{b'}} \left(\frac{\partial}{\partial x^3} \frac{\partial^{n+1}}{\partial t^{n+1}} \mu_{cd}\right)\right\rbrace\right|_{t=0}
\end{split}
\end{equation}
From this and Eq.~(\ref{eq:310}) it follows that \(D^{(n+1)}\) and \(T^{(n+1)}\) transform as scalar fields in the overlap of agn charts in \textit{N} and thus that these quantities are globally defined analytic functions on \textit{N}. Equations (\ref{eq:326}) can thus be reexpressed in the invariant form
\begin{equation}\label{eq:329}
\begin{split}
\mathcal{L}_K D^{(n+1)} &= \binom{\hbox{positive}}{\hbox{constant}} kD^{(n+1)}\\
\mathcal{L}_K T^{(n+1)} &= \binom{\hbox{positive}}{\hbox{constant}} \frac{k}{2} T^{(n+1)}
\end{split}
\end{equation}
and show that \(D^{(n+1)}\) and \(T^{(n+1)}\) grow exponentially (unless they vanish) when followed along the integral curves of \textit{K} in \textit{N} (i.e., along the null generators of \textit{N}). Repeating the {Poincar\'{e}} recurrence argument given previously for \textit{D} and \textit{T} now yields a contradiction unless \(D^{(n+1)}\) and \(T^{(n+1)}\) vanish globally in \textit{N}. This in turn implies that
\begin{equation}\label{eq:330}
\left.\left(\frac{\partial}{\partial x^3} \frac{\partial^{n+1}}{\partial t^{n+1}} \mu_{ab}\right)\right|_{t=0} = 0
\end{equation}
in every agn chart on \textit{N} and, together with the results obtained above for the other metric components, shows that
\begin{equation}\label{eq:331}
\left.\left(\frac{\partial}{\partial x^3} \frac{\partial^{n+1}}{\partial t^{n+1}} g_{\alpha\beta}\right)\right|_{t=0} = 0
\end{equation}
in every such chart.
Applying the technique of the previous paragraph to the transformation equations for \(\varphi'\) and \(\beta'_a\) merely produces covariance relations for the quantities \(\left.\left(\frac{\partial}{\partial x^3} \left(\frac{\partial^{n+1}}{\partial t^{n+1}} \varphi\right)\right)\right|_{t=0}\) and \(\left.\left(\frac{\partial}{\partial x^3} \left(\frac{\partial^{n+1}}{\partial t^{n+1}} \beta_a\right)\right)\right|_{t=0}\) which are consistent with the (already established) vanishing of these quantities in every agn chart. To complete the inductive proof, we differentiate the remaining transformation equations (\ref{eq:307}) \(n + 1\) times with respect to \(t'\), set \(t' = 0\), use the inductive hypothesis and the new results summarized in Eq.~(\ref{eq:331}) to show that
\begin{equation}\label{eq:332}
\left.\left(\frac{\partial}{\partial x^{3'}} \frac{\partial^{n+2}}{\partial t'^{\, n+2}} x^\alpha\right)\right|_{t'=0} = 0.
\end{equation}
This result, together with that of Eq.~(\ref{eq:331}), completes the proof by induction.
It follows from the analyticity of \textit{g} and the inductive proof given above that \(\left(\frac{\partial}{\partial x^3} g_{\alpha\beta}\right)\) vanishes throughout any agn coordinate chart and thus that \(Y \equiv \frac{\partial}{\partial x^3}\) is a (locally defined) analytic Killing field throughout the given chart. In the region of overlap of any two such charts we have the two locally defined Killing fields \(Y = \frac{\partial}{\partial x^3}\) and \(Y' = \frac{\partial}{\partial x^{3'}}\) and we wish to show that, in fact, they coincide. By construction both \textit{Y} and \(Y'\) coincide with \textit{K} on their appropriate domains of definition within the null surface \textit{N}. Therefore \(X \equiv Y' - Y\) is an analytic Killing field of \textit{g} defined locally on the region of overlap of the two charts which vanishes on the intersection of this region with the null surface \textit{N}. This implies that \textit{X} vanishes throughout its domain of definition, since the Killing equations
\begin{equation}\label{eq:333}
X_{\mu,t} + X_{t,\mu} - 2 {}^{(4)}\!\Gamma_{\mu t}^\nu X_\nu = 0
\end{equation}
determine \textit{X} uniquely from data \(X|_{t=0}\) (in the analytic case) and have only the trivial solution \(X = 0\) if \(X|_{t=0} = 0\).
It follows from the above that there exists a unique analytic Killing field \textit{Y}, globally defined on a full neighborhood of \textit{N} in \(({}^{(4)}\!V, g)\) which, when restricted to \textit{N}, coincides with the vector field \textit{K} and is thus tangent to the null generators of \textit{N}. In fact, one can prove that \textit{Y} extends to a Killing field defined throughout the maximal Cauchy development of the globally hyperbolic region of \(({}^{(4)}\!V, g)\) whose Cauchy horizon is \textit{N}. The techniques for proving this were discussed at the end of section III of Ref.~\cite{Moncrief:1983} and need not be repeated here. One can also show, by a straightforward computation, that
\begin{equation}\label{eq:334}
\left.\left\lbrace Y^\beta {}^{(4)}\!\nabla_\beta Y^\alpha + \frac{k}{2} Y^\alpha\right\rbrace\right|_N = 0
\end{equation}
which suggests that the constant \(\left(-\frac{k}{2}\right)\) is the analogue, for cosmological Cauchy horizons, of the \textit{surface gravity} defined for stationary black hole event horizons \cite{Moncrief:2008,Hawking:1973}.
We have thus proven:
\begin{theorem}
Let \(({}^{(4)}\!V,g)\) be a real analytic, time orientable, vacuum spacetime which admits a compact, connected Cauchy horizon \textit{N} that separates \(({}^{(4)}\!V,g)\) into open Lorentzian submanifolds \(({}^{(4)}\!V_+,g_+)\) and \(({}^{(4)}\!V_-,g_-)\) of which one is globally hyperbolic and the other acausal. Assume that \textit{N} is realized as a level set of some analytic function \(\tau: {}^{(4)}\!V \rightarrow \mathbb{R}\) having no critical points in a neighborhood of \textit{N}. The vector field \({}^{(4)}\!X := \mathrm{grad}_g\tau\) will therefore be non-vanishing on this neighborhood, null on the hypersurface \textit{N} and thus tangent to its null geodesic generators and will naturally induce (by restriction of \({}^{(4)}\!X\) to \textit{N}) a corresponding tangent vector field \textit{X} on the Cauchy horizon itself.
In the cases referred to here as `non-ergodic' the null generators of \textit{N} are either closed curves or densely fill 2-tori embedded in \textit{N} and every such generator is either complete in both the directions of \textit{X} and \(-X\) (the `degenerate' case) or else every generator is incomplete in one direction (say that of \textit{X}) and complete in the opposite direction (the `non-degenerate' case).
Compact, non-degenerate, non-ergodic Cauchy horizons in analytic, vacuum spacetimes \(({}^{(4)}\!V,g)\) are Killing horizons in that there always exists a non-trivial, analytic Killing field \textit{Y}, globally defined on a full neighborhood of the horizon manifold \(N \subset ({}^{(4)}\!V,g)\) which, when restricted to \textit{N}, is everywhere tangent to the null generators of this hypersurface. \textit{Y} extends (at least smoothly) to a Killing field defined throughout the maximal Cauchy development of the globally hyperbolic region of \(({}^{(4)}\!V,g)\) whose Cauchy horizon is \textit{N}.
\end{theorem}
By applying the results of our earlier work (cf. Ref.~\cite{Isenberg:1992} and Sect. VIII of Ref.~\cite{Moncrief:2008}) it is straightforward to prove that if the null generators of \textit{N}, to which the horizon generating Killing field \textit{Y} is tangent, are not all closed curves then the globally hyperbolic region of \(({}^{(4)}\!V,g)\) necessarily admits at least one additional, non-trivial Killing field. This additional Killing field commutes with \textit{Y} so that the full isometry group of this (globally hyperbolic) spacetime includes a 2-dimensional toral action.
Thus whereas non-degenerate Cauchy horizons having only closed (null geodesic) generators are, in a geometrical sense, less `general' than those admitting non-closed generators they are, nevertheless, far less constrained analytically in that they can bound (analytic, vacuum) globally hyperbolic spacetimes having only one-dimensional isometry groups. Furthermore, if our conjecture for the (non-degenerate) ergodic case is correct then the solution set for these is much smaller still, consisting uniquely of certain `irrational' compactifications of the flat Kasner spacetime.
Finally, though we could only rule out the existence of \textit{degenerate} (compact, analytic) Cauchy horizons in some (closed-orbit) special cases \cite{Moncrief:1983} we conjecture that such horizons do not exist at all.
\section{Introduction}
\label{sec:introduction}
To disprove the \textit{cosmic censorship conjecture} it would suffice to establish the existence (in a suitable function space topology) of an open set of globally hyperbolic solutions to the vacuum Einstein equations which are each extendible, through Cauchy horizons, beyond their maximal Cauchy developments. Analytic examples of such extendible spacetimes include the Taub metric on \(\mathbf{S}^3 \times \mathbb{R}\) and the flat Kasner metric on \(T^3 \times \mathbb{R}\). Each of these solutions can be (analytically) extended through a compact Cauchy horizon to include an acausal region containing closed timelike curves. If this feature were actually stable against sufficiently small perturbations then cosmic censorship would be false.
To study this stability question, within the convenient framework of (real-)analytic metrics, one can employ a straightforward generalization of the Cauchy-Kowalewski theorem to prove the existence of infinite dimensional families of `generalized Taub-NUT' vacuum spacetimes, with a variety of spatial topologies, which each, as in the examples mentioned above, contain a compact Cauchy horizon separating globally hyperbolic and acausal regions \cite{Moncrief:1984,Moncrief:1982}. These families, large though they are, fail to disprove cosmic censorship for several reasons.
First of all every such generalized Taub-NUT solution admits at least one Killing vector field---a vector field which is spacelike in the globally hyperbolic region, null on the Cauchy horizon (and hence tangent to the horizon's null geodesic generators) and timelike in the acausal extension. Thus these particular families could not possibly fill (even densely) an open subset of generically non-symmetric solutions in any reasonable function space topology. Secondly, even within the circumscribed context of analytic metrics admitting at least one Killing field they require a further special restriction upon their `initial data' (which, by exploiting analyticity and the extended Cauchy-Kowalewski theorem, can be specified on the horizon itself) which, roughly speaking, corresponds to a Lagrangian submanifold of the full set of solutions of the chosen (one-Killing-field) symmetry type. To rigorously treat the complementary family of one-Killing field metrics (i.e., to relax the Lagrangian submanifold restriction) has necessitated a still further generalization of the Cauchy-Kowalewski theorem through the development of so-called Fuchsian methods \cite{Isenberg:2002,Choquet-Bruhat:2004,Choquet-Bruhat:2006} but the spacetimes obtained by these techniques typically exhibit strong curvature singularities instead of Cauchy horizons and so are inextendible beyond their maximal Cauchy developments. Finally the generalized-Taub-NUT solutions are all (real) analytic which many might regard as an artificial restriction to place on any supposedly physically relevant family of vacuum spacetimes.
Since the presence of a Killing field seemed to play a crucial role in the construction of these generalized Taub-NUT spacetimes it is of interest to ask whether perhaps the occurrence of such a field was in fact \textit{necessary} for the existence of a compact Cauchy horizon, at least in the (vacuum) analytic case. In earlier articles \cite{Moncrief:1983,Isenberg:1985,Isenberg:1992} we showed that this was indeed the case provided that the null-generating geodesic curves which foliate the horizon are all closed. While this might at first seem to be an unduly artificial restriction upon the geometry of the horizon we now believe that it represents the least constraining assumption and that the failure of all the null generators to be closed implies the existence of at least a second Killing field. By contrast the known (analytic) solutions with all closed generators need only have the single Killing field which is tangent to the horizon's generators.
In this paper we prove, under certain assumptions, that the occurrence of an (analytic) compact Cauchy horizon with non-closed generators implies the existence of at least one Killing field---always tangent to the horizon's generators---and we have already shown elsewhere that the presence of such a Killing field with non-closed integral curves implies the existence of a second Killing field \cite{Isenberg:1992}. We know of examples (see below) in which even a third Killing field is required by the special nature of the geometry but we do not have a systematic treatment of this case, which we refer to as `ergodic'.
The main assumption we need, in addition to analyticity and the imposition of the vacuum field equations, is that the compact Cauchy horizon be non-degenerate in the sense that at least one (and hence, as we prove in the case of a connected horizon, every one) of its null geodesic generators be incomplete in one direction. In fact we do not know of an example of a degenerate Cauchy horizon (though compact, degenerate null hypersurfaces which are not Cauchy horizons can certainly exist for (electro-)vacuum spacetimes) and, in the case of closed generators, we could even prove their non-existence on certain topologies. We suspect that degenerate compact Cauchy horizons may not exist in general for analytic (electro-)vacuum spacetimes but do not have a proof of this surmise. The second assumption we require is that the horizon be non-ergodic in the sense that it not be densely filled by the orbit of any single geodesic generator. Examples of vacuum spacetimes with ergodic Cauchy horizons do exist and can be created from the flat Kasner metric through spatial compactification with an `irrational' shift in the obvious identifications to produce a toroidal horizon which each null generator densely fills. We suspect that, up to finite covers, these solutions (which have the extra, third Killing field alluded to above) may exhaust the vacuum ergodic horizon cases but also have no proof of this conjecture. On the other hand the ergodic case could, to some extent, be treated by a straightforward generalization of the techniques developed here provided that the assumed compact Cauchy horizon admits an analytic foliation with compact (2-dimensional) leaves, transversal to the given null geodesic `flow'. While we also impose the vacuum field equations it seems quite likely that our results could be readily generalized to allow for certain types of matter sources. Indeed the original results for closed generators were derived for the electro-vacuum field equations.
Analyticity is the final restrictive assumption that we make but this hypothesis has a certain double-edged quality that makes it seem less objectionable than it appears at first sight. First of all, if a genuine open set (in some suitable function space topology) of vacuum spacetimes admitting compact Cauchy horizons did exist it would presumably contain a large (perhaps densely filling) subset of analytic solutions. Thus one could expect to probe such a set by focusing on its analytic elements. Secondly, analyticity serves, by its very rigidity, to exclude the occurrence of many exotic types of cosmological boundaries which could otherwise occur through suitable (non-analytic) `fine-tuning' of the `initial data'. For example in the special case of polarized Gowdy metrics on \(T^3 \times \mathbb{R}\) one can exploit non-analyticity to produce a large variety of highly non-generic cosmological boundaries involving such exotica as Cantor sets of curvature-singular regions interspersed with complementary sets of non-singular Cauchy horizons \cite{Chrusciel:1990}. The fine-tuning of the data needed to produce these exotica is incompatible with analyticity so that, in concentrating on analytic solutions, one avoids being distracted by such mathematically allowed but non-generic features. Any truly generic feature should survive analytic approximations. Thus analyticity is actually an advantage rather than a liability if only stable properties are of interest.
The main difficulty in treating the problem of non-closed generators considered here, over and above those already handled for the closed generator case, is a proof that the candidate vector field for the horizon generating Killing field is in fact analytic. Otherwise much of the argument goes through in essentially the same way as for the closed generator case. The hypothetical Killing field, restricted to the horizon, is everywhere parallel to the generators and so already determined up to a multiplicative factor. We define this factor (in the non-degenerate case wherein every generator is incomplete to the future) by the requirement that the future affine length of every null geodesic generator be a fixed positive number \(2/k\) provided one takes the initial condition for the generator starting at an arbitrary point \textit{p} of the horizon to have its tangent vector given by the hypothetical Killing field \(X(p)\) at that point. In other words one adjusts the multiplicative factor until each generator (taken with these rescaled initial conditions) has future length \(2/k\). The technical problem is then to prove that the needed rescaling factor is in fact analytic. In the closed generator case we found an explicit formula for this factor from which its analyticity was apparent but here we seem to need a more subtle argument involving the convergence of a sequence of analytic approximations to the needed rescaling factor. Unfortunately, though, since real analytic functions do not form a nice Banach space (with the norm of uniform convergence) we have had to `artificially' complexify the analytic structure of the horizon and carry out the convergence argument in the complexified context, extracting the desired analyticity of the real section at the end of this analysis. While workable this complicating feature is rather disappointing in comparison with the simplicity of the corresponding closed generator argument and so one wonders whether perhaps a further simplification could be found for the present problem.
Our results have some natural correspondences with those for the (Killing) event horizons of stationary black holes and one can compactify these latter horizons to obtain examples (in certain cases) of `cosmological' compact Cauchy horizons of the sort we are interested in. In the black hole case, for which there is a natural normalization of the Killing horizon generator, the constant \textit{k} is essentially the so-called surface gravity of the horizon \cite{Moncrief:2008}. It might seem that one could produce examples of degenerate Cauchy horizons (having, by definition, \(k = 0\)) by compactifying the event horizons of extreme black holes. The simplest (electro-vacuum) example, however, is provided by the extreme Reissner-Nordström metric with horizon generating Killing field given, in standard coordinates, by \(\frac{\partial}{\partial t}\). We can compactify the horizon at \(r = r_+ = M\) to \(\mathbf{S}^2 \times \mathbf{S}^1\) by identifying the points labeled \(\lbrace t, \theta, \varphi\rbrace\) with those labeled \(\lbrace t + \ell, \theta, \varphi\rbrace\) for a fixed constant \(\ell \neq 0\). However the extreme black hole metric has \(\frac{\partial}{\partial t} \cdot \frac{\partial}{\partial t} = -\left( 1 - \frac{M}{r}\right)^2\) so that the generating vector field \(\frac{\partial}{\partial t}\), tangent to the \(\mathbf{S}^1\) fibers, has closed timelike orbits on both sides of the compact null surface at \(r = M\) which can therefore not be a Cauchy horizon. A similar phenomenon occurs for the more general extreme Kerr-Newman solution.
Though the Killing field or fields we produce via the extended Cauchy-Kowalewski theorem are possibly only determined by convergent expansions in some neighborhood of the assumed Cauchy horizon it is straightforward to show that these automatically propagate (as solutions of Killing's equation) to the full maximal Cauchy development on the globally hyperbolic side of the horizon. This follows from the well-known fact that in (for simplicity) a vacuum spacetime any Killing field satisfies a linear hyperbolic equation which in fact preserves the vanishing of the Killing form for the propagated vector field \cite{Moncrief:1983,Coll:1977}.
\section{Nondegeneracy and geodesic incompleteness}
\label{sec:nondegeneracy}
In this section we shall show, using a ribbon argument, that each null geodesic generator of \textit{N} is either complete in both directions (the `degenerate' case) or else that each generator is incomplete in one direction (the non-degenerate case). More precisely, we shall prove that if any single generator \(\gamma\) is incomplete in a particular direction (say that defined by \textit{X}) then every other generator of the (connected) hypersurface \textit{N} is necessarily incomplete in the same direction. It will then follow that if any generator is complete in a particular direction, then all must be since otherwise one could derive a contradiction from the first result. We shall see later that, in the non-degenerate case, the generators which are all incomplete in one direction (say that of \textit{X}) are however all complete in the opposite direction (that of \(-X\)).
As usual we work in adapted charts for an arbitrary elementary region \(\mathcal{H} \subset N\). For the calculations to follow, however, it is convenient to work with charts induced from adapted charts on the covering space \(\hat{\mathcal{H}}\approx\Sigma\times\mathbb{R}\) of \(\mathcal{H}\)~\footnote{Where either \(\Sigma \approx \mathcal{A}\) or \(\Sigma \approx \mathcal{D}\) depending upon the type of the elementary region \(\mathcal{H}\).} for which the \(\lbrace x^a\mid a = 1, 2\rbrace\) are constant along any given generator and the range of the `angle' coordinate \( x^3\) is unwrapped from say \(\lbrack\mathring{x}^3, \mathring{x}^3 + s^\ast)\), where \(s^\ast\) is the `recurrence time' for \(x^3\) on \(\mathcal{H}\), to cover the interval \((-\infty, \infty)\). Projected back to \(\mathcal{H}\) these induce families of charts \(\lbrace x^3, x^a\rbrace, \lbrace x^{3'}, x^{a'}\rbrace\), etc. related, on their regions of overlap, by analytic transformations of the form
\begin{equation}\label{eq:501}
\begin{split}
x^{3'} &= x^3 + \text{constant}\\
x^{a'} &= f^a(x^1, x^2).
\end{split}
\end{equation}
By working on the covering space we simplify the notation by keeping the \(\lbrace x^a\rbrace\) constant and letting \(x^3\) range continuously over \((-\infty, \infty)\) in following a given generator as it repeatedly sweeps through the leaves of the chosen foliation of \(\mathcal{H}\). However, one should keep in mind that this is just an artifice to represent calculations carried out on the elementary region \(\mathcal{H}\) in a simplified notation since the compactness of the closure, \(cl(\mathcal{H})\), of \(\mathcal{H}\) in \textit{N} will play a key role in the arguments to follow.
Consider a null generator of \(\mathcal{H}\) developed from `initial' conditions specified at a point \(p\in \mathcal{H}\) having coordinates \(\lbrace x^3 (p) = \mathring{x}^3, x^a(p) = \mathring{x}^a\rbrace\). The affine parametrization of this generator is determined by solving the geodesic equations which, for the class of curves in question, effectively reduce to
\begin{equation}\label{eq:502}
\begin{split}
&\frac{d^2x^3}{d\eta^{2}} - \frac{\mathring{\varphi}_{,t}}{2} (x^3, \mathring{x}^a) \left(\frac{dx^3}{d\eta}\right)^2 = 0 \\
&x^a(\eta) = \mathring{x}^a = \,\,\text{constant}
\end{split}
\end{equation}
where \(\eta\) is an affine parameter. To complete the specification of initial conditions one needs, of course, to give an initial velocity \(\frac{dx^3}{d\eta}\mid_{\mathring{\eta}}\) (taking \(\frac{dx^a}{d\eta}\mid_{\mathring{\eta}} = 0\)).
Solving the first order equation
\begin{equation}\label{eq:503}
\frac{dv}{d\eta} = \frac{\mathring{\varphi}_{,t}}{2} \,\, v^2
\end{equation}
for \(v := \frac{dx^3}{d\eta}\) to get an integral formula for \textit{v} and then integrating \(\frac{d\eta}{dx^3} = \frac{1}{v}\) with respect to \(x^3\) one derives an expression for the affine length of a segment of this null geodesic defined on the interval \([\mathring{x}^3, x^3]\):
\begin{equation}\label{eq:504}
\begin{split}
&\eta (x^3,\mathring{x}^a) - \mathring{\eta} (\mathring{x}^3, \mathring{x}^a)\\
&=\frac{1}{(\frac{dx^3}{d\eta})}\bigm|_{\mathring{\eta} (\mathring{x}^{3},\mathring{x}^{a})} \int\limits^{x^{3}}_{\mathring{x}^{3}} d\rho \quad \text{exp} [-\int\limits^\rho_{\mathring{x}^{3}} d\xi (\frac{\mathring{\varphi}_{,t}}{2} (\xi,\mathring{x}^a))] .
\end{split}
\end{equation}
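For the reader's convenience, the steps leading from (\ref{eq:503}) to (\ref{eq:504}) can be sketched as follows (using only the chain rule and the constancy of the \(x^a\) along the generator): since \(v = \frac{dx^3}{d\eta}\),
\begin{equation*}
\begin{split}
\frac{dv}{d\eta} &= v \frac{dv}{dx^3} = \frac{\mathring{\varphi}_{,t}}{2}\, v^2
\;\Longrightarrow\;
v(x^3) = v(\mathring{x}^3)\, \text{exp} [\int\limits^{x^{3}}_{\mathring{x}^{3}} d\xi \, \frac{\mathring{\varphi}_{,t}}{2} (\xi, \mathring{x}^a)],\\
\eta(x^3) - \mathring{\eta} &= \int\limits^{x^{3}}_{\mathring{x}^{3}} \frac{d\rho}{v(\rho)},
\end{split}
\end{equation*}
and substituting the first relation into the second reproduces (\ref{eq:504}).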
Thus incompleteness of this generator, in the direction of \(X = \frac{\partial}{\partial x^3}\), would correspond to the existence of the limit
\begin{equation}\label{eq:505}
\begin{split}
&\lim\limits_{x^{3}\to\infty} \int\limits^{x^{3}}_{\mathring{x}^{3}} d\rho \quad\text{exp} [-\int\limits^\rho_{\mathring{x}^{3}} d\xi (\frac{\mathring{\varphi}_{,t}}{2} (\xi , \mathring{x}^a))]\\
&\quad =(\frac{dx^{3}}{d\eta})\bigm|_{\mathring{\eta} (\mathring{x}^{3},\mathring{x}^a)} (\eta (\infty, \mathring{x}^a) - \mathring{\eta} (\mathring{x}^3, \mathring{x}^a)) < \infty
\end{split}
\end{equation}
whereas completeness (in this direction) would correspond to the divergence of this limit. Recalling Equation (\ref{eq:232}), note that the integral of the one-form \(\omega_X\) along the segment \(\gamma\) defined above is given by
\begin{equation}\label{eq:506}
\int\limits_\gamma \omega_X = \int\limits^{x^{3}} _{\mathring{x}^{3}} (-\frac{1}{2} \mathring{\varphi}_{,t} (\xi , \mathring{x}^a))d\xi
\end{equation}
which thus provides an invariant representation of the basic integral arising in the above formulas.
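In particular, if \(\gamma_\rho\) denotes the sub-segment of \(\gamma\) defined on \([\mathring{x}^3, \rho]\), then (\ref{eq:504}) can be recast compactly (in our notation) as
\begin{equation*}
\eta (x^3,\mathring{x}^a) - \mathring{\eta} = \frac{1}{(\frac{dx^3}{d\eta})}\bigm|_{\mathring{\eta}} \int\limits^{x^{3}}_{\mathring{x}^{3}} d\rho \;\text{exp} [\int\limits_{\gamma_\rho} \omega_X],
\end{equation*}
so that completeness or incompleteness in the direction of \textit{X} is controlled entirely by the growth of \(\int_{\gamma_\rho} \omega_X\) as \(\rho \to \infty\).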
Suppose that the generator `beginning' at \(p\in \mathcal{H}\) is incomplete in the direction of \textit{X}. We want to establish convergence of the corresponding integral for any other generator of \(\mathcal{H}\). Since incompleteness is an asymptotic issue (the relevant integrals being automatically finite on any compact domain of integration) there is no essential loss of generality in comparing only those generators that `start' in the slice defined by \textit{p}. Thus we want to consider generators `beginning' at points \textit{q}, having \(x^3 (q) = \mathring{x}^3\), and establish their incompleteness by using a suitable ribbon argument. Furthermore, to have a `canonical' way of defining our comparison ribbons it will be convenient to localize the calculations somewhat by first looking only at generators sufficiently near to the `reference' generator. Thus, given a point \textit{p} in the initial slice defined by \(x^3(p) = \mathring{x}^3\), we consider only those points \textit{q} lying in this slice which, additionally, lie within a \textit{closed} geodesic ball (relative to the invariant transversal metric \(\mu\) induced on this slice) centered at \textit{p} and contained within a normal neighborhood of this point. Any such \textit{q} can be connected to \textit{p} by a unique geodesic lying within this geodesic ball and such points can be conveniently labeled by normal coordinates defined at \textit{p} (i.e., the points of a corresponding, closed ball in the tangent space to the slice at \textit{p}).
The unique geodesic connecting \textit{q} to \textit{p} provides a canonical `starting end' to our comparison ribbon for geodesics emanating from points \textit{p} and \textit{q} (in the direction of \textit{X}) and, from invariance of the transversal metric along the flow of \textit{X}, we get an isometric image of this connecting geodesic induced on any subsequent slice traversed along the flow.
Let \(\gamma\) be the segment of the null generator beginning at \textit{p} and defined on the interval \([\mathring{x}^3, x^3]\), for some \(x^3 > \mathring{x}^3\), and let \(\gamma'\) be a corresponding segment of the generator beginning at \textit{q} and defined on the same interval. From the argument given in Section \ref{subsec:connection} it follows that
\begin{equation}\label{eq:507}
\int\limits_\gamma \omega_X - \int\limits_{\gamma'} \omega_X = \int\limits_\sigma \omega_X - \int\limits_{\sigma'} \omega_X
\end{equation}
where \(\sigma\) is the geodesic end defined in the starting slice and \(\sigma'\) its isometric image in the ending slice.
For fixed \textit{p} the integral \(\int_\sigma \omega_X\) varies continuously with \textit{q} as \textit{q} ranges over a \textit{compact} set (the closed geodesic ball centered at \textit{p} described above) and thus is bounded for all \textit{q} in this ball. Furthermore the integral \(\int_{\sigma'} \omega_X\) varies continuously with \textit{q} and \(x^3\) but, as \(x^3\) increases, the image of \textit{p} under the flow ranges only over (some subset of) the \textit{compact} set given by the closure of \(\mathcal{H}\) in \textit{N} whereas the image of \textit{q} remains always a fixed geodesic distance from the image of \textit{p} in the corresponding slice. Since the product of the closure of \(\mathcal{H}\) with this (closed) ball is compact the continuously varying integral \(\int_{\sigma'} \omega_X\) (regarded as a function of \textit{q} and \(x^3\) for fixed \textit{p}) is necessarily bounded no matter how large the ``unwrapped'' coordinate \(x^3\) is allowed to become.
It follows from the foregoing that for any fixed \textit{p} and \textit{q} as above, there exists a \textit{bounded}, continuous (in fact analytic) real-valued function \(\delta_{p,q}(x^3)\) such that
\begin{equation}\label{eq:508}
\int\limits_{\gamma'} \omega_X = \int\limits_\gamma \omega_X + \delta_{p,q}(x^3)
\end{equation}
for arbitrary \(x^3 > \mathring{x}^3\); explicitly, \(\delta_{p,q}(x^3) = \int_{\sigma'} \omega_X - \int_\sigma \omega_X\). But this implies that
\begin{equation}\label{eq:509}
\begin{split}
&\int\limits_{\mathring{x}^{3}}^{x^{3}} d\rho \,\, \text{exp} [-\int\limits_{\mathring{x}^{3}}^\rho \frac{\mathring{\varphi}_{,t}}{2} (\xi, \mathring{x}^a(q))d\xi] \\
&= \int\limits_{\mathring{x}^{3}}^{x^{3}} d\rho \,\, \text{exp} [-\int\limits_{\mathring{x}^{3}}^\rho \frac{\mathring{\varphi}_{,t}}{2} (\xi, \mathring{x}^a(p))d\xi + \delta_{p,q} (\rho)] \\
&= \int\limits_{\mathring{x}^{3}}^{x^{3}} d\rho \,\, \text{exp}[\delta_{p,q}(\rho)] \,\,\text{exp} [-\int\limits_{\mathring{x}^{3}}^\rho \frac{\mathring{\varphi}_{,t}}{2} (\xi, \mathring{x}^a(p))d\xi] .
\end{split}
\end{equation}
From the boundedness of \(\delta_{p,q}\)
\begin{equation}\label{eq:510}
- \infty < b_1 \leq \delta_{p,q}(\rho)\leq b_2 < \infty,
\quad \forall\rho\in [\mathring{x}^3, \infty)
\end{equation}
it follows that
\begin{equation}\label{eq:511}
\begin{split}
&e^{b_{1}} \int\limits_{\mathring{x}^{3}}^{x^{3}} d\rho \,\, \text{exp}[-\int\limits_{\mathring{x}^{3}}^\rho \frac{\mathring{\varphi}_{ ,t}}{2} (\xi, \mathring{x}^a(p))d\xi] \\
&\leq \int\limits_{\mathring{x}^{3}}^{x^{3}} d\rho \,\,\text{exp}[-\int\limits_{\mathring{x}^{3}}^\rho \frac{\mathring{\varphi}_{ ,t}}{2} (\xi, \mathring{x}^a(q))d\xi] \\
&\leq e^{b_{2}} \int\limits_{\mathring{x}^{3}}^{x^{3}} d\rho \,\, \text{exp}[-\int\limits_{\mathring{x}^{3}}^\rho \frac{\mathring{\varphi} _{,t}}{2} (\xi, \mathring{x}^a(p))d\xi]
\end{split}
\end{equation}
\(\forall x^3\in [\mathring{x}^3 ,\infty)\). But this implies that if the limit
\begin{equation}\label{eq:512}
\lim\limits_{x^{3}\to\infty} \int^{x^{3}}_{\mathring{x}^{3}} d\rho \,\, \text{exp}[-\int^\rho_{\mathring{x}^{3}} \frac{\mathring{\varphi} _{,t}}{2} (\xi, \mathring{x}^a(p))d\xi]
\end{equation}
exists, then the limit of the monotonically increasing function \(\int^{x^{3}}_{\mathring{x}^{3}} d\rho \,\, \text{exp}[-\int^\rho_{\mathring{x}^{3}} \frac{\mathring{\varphi}_{ ,t}}{2} (\xi, \mathring{x}^a(q))d\xi]\) must also exist as \(x^3\to\infty\). Conversely, if the affine length of \(\gamma\) diverges, then so must that of \(\gamma'\), by virtue of the foregoing bounds.
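A simple special case may help fix ideas (this is an illustration only, not needed for the argument): if along some generator \(\frac{\mathring{\varphi}_{,t}}{2} (\xi, \mathring{x}^a) = k\) for a constant \textit{k}, as for the compactified stationary horizons with surface gravity \textit{k} mentioned earlier, then
\begin{equation*}
\lim\limits_{x^{3}\to\infty} \int\limits^{x^{3}}_{\mathring{x}^{3}} d\rho \;\text{exp}[-k (\rho - \mathring{x}^3)] =
\begin{cases}
1/k < \infty, & k > 0\\
\infty, & k \leq 0
\end{cases}
\end{equation*}
so the generator is incomplete in the direction of \textit{X} precisely when \(k > 0\), while the corresponding integral in the direction of \(-X\) then diverges; this is exactly the pattern asserted above for the non-degenerate case.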
So far we have only considered those null generators starting within a geodesic ball centered at a point \textit{p} in the initial slice. But from the compactness and connectedness of \textit{N} it's clear that any of its null generators can be thus compared to the original `reference generator' through a finite collection of such \textit{ribbon arguments} and thus all of them shown either to be incomplete in the direction \textit{X} or else to be complete in this direction. Clearly the same argument can be applied in the opposite direction (i.e., that of $-X$) with a corresponding conclusion. However, as we shall see later, the non-degenerate case will always be characterized by generators that are all incomplete in one direction but complete in the opposite direction, whereas the degenerate case will be characterized by generators that are complete in both directions.
\subsection{Application of the {Poincar\'{e}} Recurrence Theorem}
\label{subsec:application-poincare}
In this subsection we shall show that the {Poincar\'{e}} recurrence theorem \cite{Poincare:1899,Arnold:1978} can be applied to the flow on \textit{N} generated by the vector field \textit{X} defined in section~\ref{subsec:geometrical}. Using this theorem we shall then show that every point \(p \in N\), when mapped sufficiently far (in either direction) along the flow of \textit{X}, returns arbitrarily closely to its initial position. When combined with the isometric character of this flow (relative to the transversal metric) derived in the previous subsection, this result will lead to very stringent restrictions upon the topological nature of the flow.
Since our spacetime \(({}^{(4)}\!V, g)\) is, by assumption, both non-compact and time-orientable, it necessarily admits a global, smooth timelike vector field \textit{V} which, without loss of generality, we may assume has been normalized to unit length (i.e., to have \(g(V,V) = -1\)). Since \textit{V} is timelike, it is necessarily transversal to the null surface \textit{N}. This follows from noting that the normalization condition, evaluated in a gaussian null coordinate chart, reduces to
\begin{equation}\label{eq:213}
-1 = g(V,V)|_N = \lbrace 2V^t V^3 + \mu_{ab} V^a V^b\rbrace|_{t=0}
\end{equation}
which clearly implies that \(V^t\) is nowhere vanishing. Expressed more invariantly, this statement is equivalent to \(g (X,V)|_N \neq 0\) since, in an arbitrary agn chart adapted to \textit{X}, \(g (X,V)|_N = V^t|_{t=0}\). Assume for definiteness that \(V^t|_{t=0} > 0\) everywhere on \textit{N} (i.e., in every agn chart adapted to \textit{X} on \textit{N}).
Following Hawking and Ellis \cite{Hawking:note}, we define a positive definite metric \(g'\) on \({}^{(4)}\!V\) by setting
\begin{equation}\label{eq:214}
g' (Y,Z) = g (Y,Z) + 2g (Y,V) g (Z,V)
\end{equation}
for any pair of vector fields \(Y,Z\) defined on \({}^{(4)}\!V\). This metric induces a Riemannian metric \({}^{(3)}\!g'\) on \textit{N} given, in an arbitrary gaussian null coordinate chart, by the expressions
\begin{equation}\label{eq:215}
\begin{split}
{}^{(3)}\!g'_{33} &= (2V^t V^t)|_{t=0}\\
{}^{(3)}\!g'_{3a} &= (2\mu_{ab} V^b V^t)|_{t=0}\\
{}^{(3)}\!g'_{ab} &= (\mu_{ab} + 2\mu_{ac} V^c \mu_{bd} V^d)|_{t=0}
\end{split}
\end{equation}
and having the natural volume element
\begin{equation}\label{eq:216}
\sqrt{\det{{}^{(3)}\!g'}} = \left.\left(2^{1/2} V^t \sqrt{\det{\mu}}\right)\right|_{t=0}.
\end{equation}
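Equation (\ref{eq:216}) can be verified quickly as follows (a sketch, using the standard rank-one determinant identity \(\det (A + c\, w w^T) = \det A + c\, w^T (\mathrm{adj}\, A)\, w\) for \(3 \times 3\) matrices \textit{A}): writing \({}^{(3)}\!g'_{ij} = m_{ij} + 2 w_i w_j\) with \(m_{33} = \mathring{\varphi} = 0\), \(m_{3a} = \mathring{\beta}_a = 0\), \(m_{ab} = \mathring{\mu}_{ab}\) and \(w_i = g(\partial_i, V)|_{t=0}\) (so that \(w_3 = V^t|_{t=0}\)), one finds
\begin{equation*}
\det {}^{(3)}\!g' = \det m + 2\, w^T (\mathrm{adj}\, m)\, w = 0 + \left.\left(2 (V^t)^2 \det \mu\right)\right|_{t=0},
\end{equation*}
since the only cofactor of \textit{m} surviving the contraction with \textit{w} is the \((3,3)\) cofactor \(\det \mu\).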
Since \(X = \frac{\partial}{\partial x^3}\) in an agn chart adapted to \textit{X}, we have
\begin{equation}\label{eq:217}
{}^{(3)}\!g' (X,X) = {}^{(3)}\!g'_{33} = (2V^t V^t)|_{t=0}
\end{equation}
as a globally defined, nowhere vanishing function on \textit{N}. Using this non-vanishing function as a conformal factor, we define a second Riemannian metric \({}^{(3)}\!g\) on \textit{N}, conformal to \({}^{(3)}\!g'\), by setting
\begin{equation}\label{eq:218}
{}^{(3)}\!g = \left.\left(\frac{1}{V^t}\right)^{2/3}\right|_{t=0} {}^{(3)}\!g'.
\end{equation}
The natural volume element of \({}^{(3)}\!g\) is thus given by
\begin{equation}\label{eq:219}
\begin{split}
\sqrt{\det{{}^{(3)}\!g}} &= \left.\left(\frac{1}{V^t}\right)\right|_{t=0} \sqrt{\det{{}^{(3)}\!g'}}\\
&= \left.\left( 2^{1/2} \sqrt{\det{\mu}}\right)\right|_{t=0}.
\end{split}
\end{equation}
Computing the divergence of \textit{X} with respect to the metric \({}^{(3)}\!g\), we find
\begin{equation}\label{eq:220}
\begin{split}
\nabla_{{}^{(3)}\!g} \cdot X &= \frac{1}{\sqrt{\det{{}^{(3)}g}}} \frac{\partial}{\partial x^i} \left(\sqrt{\det{{}^{(3)}g}} X^i\right)\\
&= \left.\left(\frac{1}{\sqrt{\det{\mu}}} \frac{\partial}{\partial x^3} \left(\sqrt{\det{\mu}}\right)\right)\right|_{t=0}\\
&= 0
\end{split}
\end{equation}
which vanishes by virtue of the result of Hawking and Ellis cited in the previous section (i.e., by virtue of the invariance of the transversal metric \(\mathring{\mu}_{ab}\) relative to the flow along \textit{X}). Equation (\ref{eq:220}) can be equivalently expressed as
\begin{equation}\label{eq:221}
\mathcal{L}_X \left(\sqrt{\det{{}^{(3)}\!g}}\right) = 0
\end{equation}
where \(\mathcal{L}\) signifies the Lie derivative. Thus the volume element of \({}^{(3)}\!g\) is preserved by the flow along \textit{X}.
It follows from the above that if \(\lbrace f^\lambda | \lambda \in \mathbb{R}\rbrace\) is the one-parameter family of diffeomorphisms of \textit{N} generated by \textit{X} and if \textit{D} is any measurable region of \textit{N} with volume (relative to \({}^{(3)}\!g\)) \(\mathrm{vol}(D)\), then \(\mathrm{vol}(f^\lambda D) = \mathrm{vol}(D)\, \forall\, \lambda \in \mathbb{R}\). Since \textit{N} is compact and \(f^\lambda\) is volume preserving, the {Poincar\'{e}} recurrence theorem may be applied and has the following consequences. Let \textit{p} be a point of \textit{N} and \textit{U} be any neighborhood of \textit{p} and, for any \(\lambda_0 \neq 0\), consider the sequence of iterates \(f^{n\lambda_0}\) (for \(n = 1, 2, \ldots\)) of \(f \equiv f^{\lambda_0}\) and the corresponding sequence of (equal volume) domains \(U, fU, f^2U, \ldots , f^nU, \ldots\). {Poincar\'{e}}'s theorem shows that there always exists an integer \(k > 0\) such that \(f^kU\) intersects \textit{U} and thus that, in any neighborhood \textit{U} of \textit{p}, there always exists a point \textit{q} which returns to \textit{U} under the sequence of mappings \(\lbrace f^n\rbrace\).
The above results together with those of the previous subsection show that any point \(p \in N\) eventually returns to an arbitrarily small neighborhood of \textit{p} (after first leaving that neighborhood) when followed along the flow of \textit{X}. The reason is that since, by construction, \textit{X} has no zeros on \textit{N}, every point \(p \in N\) flows without stagnation along the integral curves of \textit{X}, first leaving sufficiently small neighborhoods of \textit{p} and then, by {Poincar\'{e}} recurrence, returning arbitrarily closely to \textit{p}.
It may happen that a point \textit{p} actually flows back to itself, in which case the generator it lies on is closed, but for the generic points of interest here, the generators will not be closed and the flow will only take \textit{p} back arbitrarily closely to itself.
\subsection{A Connection on \textit{N} and some associated `ribbon arguments'}
\label{subsec:connection}
Let \({}^{(4)}\!Y\) and \({}^{(4)}\!Z\) be any two smooth vector fields on \(({}^{(4)}\!V, g)\) which are tangent to \textit{N} (i.e., for which \({}^{(4)}\!Y^t|_{t=0} = {}^{(4)}\!Z^t|_{t=0} = 0\) in an arbitrary gaussian null coordinate chart). Then, computing the covariant derivative \(\nabla_{{}^{(4)}\!Y} {}^{(4)}\!Z\) determined by the spacetime metric \textit{g}, one observes that the resulting vector field is automatically also tangent to \textit{N} as a consequence of the invariance property of the transversal metric which was derived in Sect.~\ref{subsec:invariance} (i.e., of the result that \(\mathring{\mu}_{ab,3} = 0\)). This fact, which corresponds to the vanishing of the connection components \(\Gamma_{ij}^t|_{t=0}\) (for \(i,j = 1, 2, 3\)), in turn implies that \textit{N} is \textit{totally geodesic} (i.e., that every geodesic of \textit{g} initially tangent to \textit{N} remains in \textit{N} throughout its entire interval of existence).
If \textit{Y} and \textit{Z} designate the vector fields on \textit{N} induced by \({}^{(4)}\!Y\) and \({}^{(4)}\!Z\) respectively, then we can, by virtue of the above remarks, define a connection \({}^{(3)}\!\Gamma\) on \textit{N} by means of the following defining formula for covariant differentiation
\begin{equation}\label{eq:222}
{}^{(3)}\!\nabla_Y Z \equiv \left.\left(\nabla_{{}^{(4)}\!Y} {}^{(4)}\!Z\right)\right|_N.
\end{equation}
Here the right hand side symbolizes the vector field naturally induced on \textit{N} by \(\nabla_{{}^{(4)}\!Y} {}^{(4)}\!Z\). A straightforward computation in gaussian null coordinate charts (restricted to \textit{N}) shows that
\begin{equation}\label{eq:223}
\left({}^{(3)}\!\nabla_Y Z\right)^k = Y^j Z_{\hphantom{k},j}^k + {}^{(3)}\!\Gamma_{ij}^k Z^i Y^j
\end{equation}
where
\begin{equation}\label{eq:224}
{}^{(3)}\!\Gamma_{ij}^k = \Gamma_{ij}^k|_{t=0}
\end{equation}
and where \(\Gamma_{\beta\gamma}^{\alpha}\) are the Christoffel symbols of \(g_{\alpha\beta}\). The components of \({}^{(3)}\!\Gamma\) are given explicitly by
\begin{equation}\label{eq:225}
\begin{split}
{}^{(3)}\!\Gamma_{33}^3 &= -\frac{1}{2} \mathring{\varphi}_{,t}\,,\;\;\; {}^{(3)}\!\Gamma_{a3}^3 = -\frac{1}{2} \mathring{\beta}_{a,t}\\
{}^{(3)}\!\Gamma_{ab}^3 &= -\frac{1}{2} \mathring{\mu}_{ab,t}\,,\; {}^{(3)}\!\Gamma_{33}^d = 0\\
{}^{(3)}\!\Gamma_{3a}^d &= 0,\;\;\;\;\qquad {}^{(3)}\!\Gamma_{ab}^d = {}^{(2)}\!\mathring{\Gamma}_{ab}^d
\end{split}
\end{equation}
where \({}^{(3)}\!\Gamma_{ij}^k = {}^{(3)}\!\Gamma_{ji}^k\) and where the \({}^{(2)}\!\mathring{\Gamma}_{ab}^d\) are the Christoffel symbols of the invariant transversal metric \(\mathring{\mu}_{ab} (x^c)\).
A similar calculation shows that if \({}^{(4)}\!\Omega\) is a one-form on \(({}^{(4)}\!V, g)\) and \(\Omega\) its pull-back to \textit{N} then the pull-back of \(\nabla_{{}^{(4)}\!Y} {}^{(4)}\!\Omega\) is given by \({}^{(3)}\!\nabla_Y \Omega\) where, as expected,
\begin{equation}\label{eq:226}
\left({}^{(3)}\!\nabla_Y \Omega\right)_i = Y^j \Omega_{i,j} - {}^{(3)}\!\Gamma_{ij}^k Y^j \Omega_k.
\end{equation}
Now, recall the fixed vector field \textit{X} which was introduced in Sect.~\ref{subsec:geometrical}, and, for simplicity, work in agn charts adapted to \textit{X} so that \(X = \frac{\partial}{\partial x^3}\). For an arbitrary vector field \textit{Z} defined on \textit{N} we find, by a straightforward computation, that
\begin{equation}\label{eq:227}
{}^{(3)}\!\nabla_Z X = \left(\omega_X (Z)\right) X
\end{equation}
where \(\omega_X\) is a one-form given, in the agn charts adapted to \textit{X}, by
\begin{equation}\label{eq:228}
\omega_X = -\frac{1}{2} \mathring{\varphi}_{,t} dx^3 - \frac{1}{2} \mathring{\beta}_{a,t} dx^a.
\end{equation}
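As a consistency check (ours), Equation (\ref{eq:227}) follows directly from the components (\ref{eq:225}): since \(X^i = \delta^i_3\) has constant components in an adapted chart,
\begin{equation*}
\left({}^{(3)}\!\nabla_Z X\right)^k = {}^{(3)}\!\Gamma_{3j}^k Z^j =
\begin{cases}
-\frac{1}{2} (\mathring{\varphi}_{,t} Z^3 + \mathring{\beta}_{a,t} Z^a) = \omega_X(Z), & k = 3\\
0, & k = d
\end{cases}
\end{equation*}
which is precisely \(\left(\omega_X (Z)\right) X\).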
The exterior derivative of \(\omega_X\) is readily found to be
\begin{equation}\label{eq:229}
\begin{split}
d\omega_X = &-\frac{1}{2} (\mathring{\varphi}_{,ta} - \mathring{\beta}_{a,t3}) dx^a \wedge dx^3\\
&-\frac{1}{2} \mathring{\beta}_{a,tb} dx^b \wedge dx^a.
\end{split}
\end{equation}
However, the Einstein equation \(R_{3b} = 0\), restricted to \textit{N} and reduced through the use of \(\mathring{\mu}_{ab,3} = 0\), becomes (cf.\ Eq.~(3.2) of Ref.~\cite{Moncrief:1983}):
\begin{equation}\label{eq:230}
\mathring{\varphi}_{,ta} - \mathring{\beta}_{a,t3} = 0.
\end{equation}
Thus \(d\omega_X\) reduces to
\begin{equation}\label{eq:231}
d\omega_X = -\frac{1}{2} \mathring{\beta}_{a,tb} dx^b \wedge dx^a.
\end{equation}
In subsequent sections, we shall be studying integrals of the form
\begin{equation}\label{eq:232}
\int_\gamma \omega_X = \int_\gamma \left(-\frac{1}{2} \mathring{\varphi}_{,t}\right) dx^3
\end{equation}
along segments \(\gamma\) of integral curves of \textit{X}. We shall be interested in comparing the values of these integrals for nearby integral curves. For that purpose, the following sort of \textit{ribbon argument} will prove indispensable.
Let \textit{p} and \(p'\) be any two points of \textit{N} which can be connected by a smooth curve which is everywhere transversal to the flow of \textit{X}. Let \(c : I \rightarrow N\) be such a curve defined on the interval \(I = [a,b]\) with \(c(a) = p\) and \(c(b) = p'\) and let \(\ell : I \rightarrow \mathbb{R}\) be a smooth, strictly positive function on \textit{I}. Now consider the strip or \textit{ribbon} generated by letting each point \(c(s)\) of the curve \textit{c} flow along \textit{X} through a parameter distance \(\ell(s)\) (i.e., through a lapse of \(\ell(s)\) of the natural curve parameter defined by \textit{X}). This construction gives an immersion of the ribbon
\begin{equation}\label{eq:233}
r = \left\lbrace (s,t) \in \mathbb{R}^2 | s \in I, 0 \leq t \leq \ell(s)\right\rbrace
\end{equation}
into \textit{N} which consists of connected segments of integral curves of \textit{X}. In particular, the integral curves starting at \textit{p} and \(p'\) form the \textit{edges} of the ribbon whereas the initial curve \textit{c} together with its image after flow along \textit{X} form the \textit{ends} of the ribbon.
If \(i : r \rightarrow N\) is the mapping which immerses \textit{r} in \textit{N} according to the above construction and \(i^\ast\omega_X\) and \(i^\ast d\omega_X\) are the pull-backs of \(\omega_X\) and \(d\omega_X\) to \textit{r} respectively, then one sees from Eq.~(\ref{eq:231}) and the tangency of the ribbon to the integral curves of \textit{X}, that \(i^\ast d\omega_X = 0\).
Therefore, by means of Stokes' theorem, we get
\begin{equation}\label{eq:234}
\int_{\partial r} \omega_X = \int_r d\omega_X = 0
\end{equation}
for any ribbon of the type described above. Thus if \(\gamma\) and \(\gamma'\) designate the two edges of \textit{r} (starting from \(s = a\) and \(s = b\) respectively and oriented in the direction of increasing \textit{t}) and if \(\sigma\) and \(\sigma'\) designate the two \textit{ends} of \textit{r} defined by \(\sigma = \left\lbrace (s,0) | s \in I\right\rbrace\) and \(\sigma' = \left\lbrace \left(s,\ell (s)\right) | s \in I\right\rbrace\) respectively (and oriented in the direction of increasing \textit{s}) then we get, from \(\int_{\partial r} \omega_X = 0\), that
\begin{equation}\label{eq:235}
\int_\gamma \omega_X - \int_{\gamma'} \omega_X = \int_\sigma \omega_X - \int_{\sigma'} \omega_X.
\end{equation}
Equation (\ref{eq:235}) will give us a means of comparing \(\int_\gamma \omega_X\) with \(\int_{\gamma'} \omega_X\) provided we can estimate the contributions to \(\int_{\partial r} \omega_X\) coming from the ends of the ribbon. As a simple example, suppose (as we did in Ref.~\cite{Moncrief:1983}) that every integral curve of \textit{X} is closed and choose \textit{r} and \(i : r \rightarrow N\) so that the image of \textit{r} in \textit{N} consists of a ribbon of simple closed curves. In this case, the end contributions cancel and we get that \(\int_\gamma \omega_X = \int_{\gamma'} \omega_X\). This result played a key role in the arguments of Ref.~\cite{Moncrief:1983}.
\subsection{Geometrical Assumptions and Basic Constructions}
\label{subsec:geometrical}
We shall be considering real analytic, time orientable, vacuum spacetimes \(({}^{(4)}\!V,g)\) which contain compact Cauchy horizons. More precisely, we assume that \({}^{(4)}\!V = M \times \mathbb{R}\), where \textit{M} is a compact, connected, analytic and orientable three-manifold without boundary, and that \textit{g} is an analytic, Lorentzian, Ricci-flat metric on \({}^{(4)}\!V\). We also assume that \(({}^{(4)}\!V,g)\) admits a compact, embedded null hypersurface \textit{N}, which can be realized as a level surface of some real analytic function \(\tau\) with no critical points on a neighborhood of \textit{N}, and that \textit{N} is a Cauchy horizon for one of the two open submanifolds of \({}^{(4)}\!V\) which \textit{N} separates. Thus we regard \({}^{(4)}\!V\) as a disjoint union \({}^{(4)}\!V_+ \cup N \cup {}^{(4)}\!V_-\) where \({}^{(4)}\!V_\pm = M \times \mathbb{R}_\pm\) (with \(\mathbb{R}_\pm = \lbrace r \gtrless 0\rbrace\)) and assume that at least one of the two spacetimes \(({}^{(4)}\!V_+,g_+), ({}^{(4)}\!V_-,g_-)\) (where \(g_+\) and \(g_-\) represent the restriction of \textit{g} to \({}^{(4)}\!V_+\) and \({}^{(4)}\!V_-\) respectively) is globally hyperbolic. For convenience, we may assume that the function \(\tau\) has been chosen so that \textit{N} coincides with the level surface of \(\tau\) having the level value \(\tau = 0\).
Since \textit{N} is null and since by assumption \(\tau\) has no critical points on a neighborhood of \textit{N}, the vector field \({}^{(4)}\!X\) determined by \(d\tau\) (i.e. given in local charts by \({}^{(4)}\!X^\alpha = g^{\alpha\beta} \tau_{,\beta}\)) is non-vanishing on a neighborhood of \textit{N}, null on the surface \textit{N} and thus tangent to the null geodesic generators of that surface. Let \textit{X} designate the restriction of \({}^{(4)}\!X\) to the null surface \textit{N} so that \textit{X} may be viewed as a vector field defined on \textit{N} itself.
Since \textit{X} is non-vanishing and tangent to the null geodesic generators of \textit{N}, one can always choose local coordinates \(\lbrace x^a, x^3\rbrace\) on suitable open subsets of \textit{N} such that the \(\lbrace x^a\; | a = 1,2\rbrace\) are constant along the null generators and such that \(X = \frac{\partial}{\partial x^3}\) within each such local chart. One can construct such charts in the following way. Choose a two-disk \textit{D} which is (analytically) embedded in \textit{N} and transversal to the flow of \textit{X} and let \(\lbrace x^a\rbrace\) be coordinates on \textit{D}. Define coordinates \(\lbrace x^a, x^3\rbrace\) on a tubular neighborhood \(\approx D \times I\) of \textit{D} in \textit{N} by requiring that the \(x^a\) remain constant along the integral curves of \textit{X} and that \(x^3\) coincide with the natural integral curve parameter determined by \textit{X} (after fixing, say, \(x^3\; |_D = k(x^a)\) for some real analytic function \textit{k} defined on \textit{D}). The range of \(x^3\) may, for convenience, be allowed to vary from generator to generator with, for example, \(k(x^a) - \delta_-(x^a) < x^3 < k(x^a) + \delta_+(x^a)\) where \(\delta_\pm\) are two strictly positive real analytic functions. On the connected components of the domains of intersection of any two such local charts, the two sets of coordinate functions \(\lbrace x^a, x^3\rbrace, \lbrace x^{a'}, x^{3'}\rbrace\) are clearly related by a transformation of the form
\begin{equation}\label{eq:201}
\begin{split}
x^{3'} &= x^3 + h(x^a)\\
x^{a'} &= x^{a'} (x^b)
\end{split}
\end{equation}
where \textit{h} is an analytic function and \(x^{a'}(x^b)\) a local (analytic) diffeomorphism defined on some transversal two manifold which lies in the domains of both charts.
We shall often consider local charts of the type described above not only for the fixed vector field \textit{X} but also for other analytic vector fields defined on \textit{N} which are tangent to its null generators. If \textit{K} is some non-vanishing vector field on \textit{N} tangent to the generators of \textit{N} and we set up local charts of the type described above based on \textit{K} (rather than on \textit{X}), then the connected components of the domains of intersection of the new charts (say, \(\lbrace x^{3'}, x^{a'}\rbrace\) with \(K = \frac{\partial}{\partial x^{3'}}\)) with the old ones \(\lbrace x^3, x^a\rbrace\) for which \(X = \frac{\partial}{\partial x^3}\) necessarily admit coordinate transformations of the form
\begin{equation}\label{eq:202}
\begin{split}
x^{3'} &= h(x^3, x^a)\\
x^{a'} &= x^{a'}(x^b)
\end{split}
\end{equation}
where, as before, \(x^{a'}(x^b)\) is a local diffeomorphism and where \(\frac{\partial h}{\partial x^3} \neq 0\).
Let \(\lbrace x^a, x^3\rbrace\) be local coordinates of the type described above defined on some domain \(U \approx D \times I\) lying in \textit{N} and adapted to some fixed non-vanishing vector field \textit{K} (i.e., chosen so that \(K = \frac{\partial}{\partial x^3}\) within the chart) which is tangent to the null generators of \textit{N}. Then one can always construct a local chart \(\lbrace t, x^a, x^3\rbrace\) on some domain \({}^{(4)}\!U\) of \({}^{(4)}\!V\) which intersects \textit{N} in \textit{U}, for which the hypersurface \(U = N \cap {}^{(4)}\!U\) corresponds to the level value \(t = 0\) and in terms of which the Lorentzian metric \textit{g} takes the convenient form
\begin{equation}\label{eq:203}
\begin{split}
g &= dt \otimes dx^3 + dx^3 \otimes dt\\
&{} + \varphi\; dx^3 \otimes dx^3 + \beta_a (dx^a \otimes dx^3 + dx^3 \otimes dx^a)\\
&{} + \mu_{ab}\; dx^a \otimes dx^b.
\end{split}
\end{equation}
By construction, the coordinates, restricted to the null surface \(t = 0\), coincide with those of the original chart defined on \textit{U}, and because \textit{N} is null and \textit{g} is Lorentzian, the metric functions obey
\begin{equation}\label{eq:204}
\varphi|_{t=0} = \beta_a|_{t=0} = 0,
\end{equation}
with \(\mu_{ab}\) pointwise positive definite (as a \(2 \times 2\) symmetric matrix). The construction of such local charts on suitable domains in (\({}^{(4)}\!V,g\)) was discussed in detail in section II B of Ref.~\cite{Moncrief:1983} and need not be repeated here. The local (analytic) coordinate functions \(\lbrace t, x^a, x^3\rbrace\) are uniquely determined by the local chart \(\lbrace x^a, x^3\rbrace\) defined on \(U \subset N\) and by the coordinate conditions implicit in the desired metric form (\ref{eq:203}).
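For later reference we note (an easily verified computation of ours, not part of the chart construction itself) that the inverse of the metric (\ref{eq:203}) has components
\begin{equation*}
g^{tt} = -\varphi + \mu^{ab}\beta_a\beta_b,\quad g^{t3} = 1,\quad g^{ta} = -\mu^{ab}\beta_b,\quad g^{33} = 0,\quad g^{3a} = 0,\quad g^{ab} = \mu^{ab},
\end{equation*}
where \(\mu^{ab}\) is the inverse of \(\mu_{ab}\); in particular \(g(\frac{\partial}{\partial t}, \frac{\partial}{\partial t}) = g_{tt} = 0\) throughout the chart, so that \(\frac{\partial}{\partial t}\) is null everywhere and not merely on \textit{N}.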
Because of their resemblance to gaussian normal coordinates (but with \(\frac{\partial}{\partial t}\) tangent to \textit{null} geodesics transversal to \textit{N} instead of timelike ones), we called the coordinate systems for which \textit{g} takes the form (\ref{eq:203}) and satisfies (\ref{eq:204}) \textit{gaussian null} coordinates. In the present context, when we wish to emphasize that the coordinates have, in addition, been \textit{adapted} to some particular vector field tangent to the generators of \textit{N} (i.e., chosen so that \(\left.\frac{\partial}{\partial x^3}\right|_{t=0}\) coincides with the given vector field) we shall refer to them as \textit{adapted gaussian null} coordinates or \textit{agn} coordinates for brevity.
The Einstein equations are written out in detail in an arbitrary gaussian null coordinate chart in Section II C of Ref.~\cite{Moncrief:1983}. As in that reference, we shall often use the notation of an overhead nought to signify restriction to the null surface \textit{N} (labeled in gaussian null coordinates by \(t = 0\)). Thus, for example, we shall often write \(\mathring{\mu}_{ab}\) for \(\mu_{ab}|_{t=0}\), etc., and can therefore reexpress Eqs.~(\ref{eq:204}) as \(\mathring{\varphi} = 0, \mathring{\beta}_a = 0\).
\subsection{Implications of {Poincar\'{e}} Recurrence for the Transversal Metric}
\label{subsec:implications-poincare}
Consider a null generator \(\gamma\) of \textit{N} which passes through a point \textit{p} and let \textit{D} be a disk in \textit{N}, containing \textit{p}, which is (analytically) embedded transversally to the null generators which intersect it. If we follow \(\gamma\) starting at \textit{p} then {Poincar\'{e}} recurrence shows that we will either return to \textit{p} (in which case \(\gamma\) is closed) or else intersect \textit{D} in a sequence of points which approach \textit{p} arbitrarily closely.
The Riemannian metric \(\mu_D\) induced upon \textit{D} is analytic. Suppose for the moment that it has non-constant scalar curvature \({}^{(2)}\!R (\mu_D)\). By analyticity, \({}^{(2)}\!R (\mu_D)\) has non-zero gradient on an open dense subset of \textit{D} and thus, by the implicit function theorem, the connected level set of \({}^{(2)}\!R (\mu_D)\) passing through a point \textit{p} at which \({}^{(2)}\!R (\mu_D)\) has non-zero gradient is an analytic curve in \textit{D}, at least sufficiently near the point \textit{p}.
If \(\gamma\) is not closed, then it must reintersect \textit{D} in an infinite sequence of points \(\lbrace p_i\rbrace\) which approach \textit{p} arbitrarily closely. Furthermore, by invariance of the transversal metric along the flow, each of the \(p_i\) must lie on the same level curve of \({}^{(2)}\!R (\mu_D)\) that \textit{p} does. In fact the recurrences determined by the reintersections of \(\gamma\) with \textit{D} must densely fill the whole (connected) level set containing \textit{p}. This follows from the fact that a recurrence which carries \textit{p} to some sufficiently nearby point \(p'\) a metrical displacement \(\delta\) from \textit{p} (along the given level set of \({}^{(2)}\!R (\mu_D)\)) carries \(p'\) (again by invariance of the transversal metric along the flow) to a point \(p^{\prime\prime}\) which is displaced \(2\delta\) from \textit{p}, etc. Thus one gets recurrence by integral multiples of \(\delta\) until eventually the recurrent points `run off the edge' of \textit{D}. Since, however, by {Poincar\'{e}} recurrence, \(\delta\) can be made arbitrarily small by choosing \(p'\) suitably from the sequence \(\lbrace p_i\rbrace\) and since one gets displacements of opposite sign by simply tracking the flow backwards, it's clear that the recurrences of \textit{p} densely fill a (connected) component of the level set of \({}^{(2)}\!R (\mu_D)\) on which \textit{p} lies. Furthermore, since each of these recurrences is induced by a local isometry of \((D, \mu_D)\), as described in section \ref{subsec:invariance}, it follows that if \textit{p} is a point at which \({}^{(2)}\!R (\mu_D)\) has non-zero gradient, then the whole connected level set of \({}^{(2)}\!R (\mu_D)\) containing \textit{p} consists of points of non-zero gradient of \({}^{(2)}\!R (\mu_D)\). Thus this entire level set (and not just a portion near \textit{p}) is an analytic curve lying in \textit{D}.
If, by contrast, \(\gamma\) is a closed generator then, by invariance of \(\mu_D\), points near \textit{p} have all of their recurrences a fixed metrical distance from \textit{p}, and thus all lie on metrical circles centered at \textit{p}. These circles are (at least generically) curves on which \(\mathrm{grad}\; {}^{(2)}\!R (\mu_D)\) is non-zero and hence are either densely filled by recurrences of points lying on them or else consist of points which all lie on closed generators. In either case, the interior of such a metric circle (contained in \textit{D} and centered at \textit{p}) is mapped repeatedly to itself by iterations of the isometry defined by the first recurrence. This isometry either corresponds to a `rational' rotation (in which each point advances by a rational multiple of the circumference of the circle on which it lies), in which case every point lies on a closed generator, or to an `irrational' rotation in which every metrical circle centered at \textit{p} is densely filled by the recurrences of any single point lying on it.
Thus, for the case in which \({}^{(2)}\!R (\mu_D)\) is non-constant, we find that non-closed generators densely fill smooth curves lying in \textit{D} whereas closed generators are either surrounded by other closed generators which (as a straightforward extension of the above argument shows) fill \textit{D} or else are surrounded by non-closed generators which densely fill sufficiently small circles about the given point of intersection of the closed generator with \textit{D}.
Using the connectedness and compactness of \textit{N} and the analyticity and invariance of the transversal metric it is clear that one can `analytically extend' the above argument to show that either (i) every generator of \textit{N} is closed (a case which we have treated elsewhere), or (ii) almost every generator densely fills an analytic curve lying in any transversal embedded disk which that generator intersects. In the latter case, one may also have isolated instances of closed generators but these will, as we have seen, be surrounded by densely filling generators which are thus generic.
Consider the closure in \textit{N} of any one of these densely filling generators \(\gamma\). Let \(cl(\gamma)\) designate this subset of \textit{N}. Clearly \(cl(\gamma)\) intersects any disk \textit{D} transversal to \(\gamma\) in an analytic curve satisfying \({}^{(2)}\!R (\mu_D) = \hbox{constant}\) (since \(\gamma\) itself densely fills this curve). Locally, therefore, \(cl(\gamma)\) is obtained by translating such a transversal, analytic curve along the flow of \textit{X} and thus defines an analytic surface embedded in \textit{N}. Since \(cl(\gamma)\) is a closed subset of the compact set \textit{N}, the embedded surface defined by \(cl(\gamma)\) is thus a compact, connected embedded sub-two-manifold of \textit{N}. We want first to show that \(cl(\gamma)\) is in fact also orientable and thus, since it supports a smooth, nowhere vanishing tangent vector field (that induced by \textit{X}), that it must be diffeomorphic to a two-torus.
First, note that the value of \({}^{(2)}\!R (\mu_D)\) at a point \(p \in D \subset N\) is (by invariance of the metric \(\mu_D\) under the flow along \textit{X}) independent of the choice of disk \textit{D}. Any other transversal disk containing \textit{p} would yield the same value for the scalar curvature function at \textit{p}. Thus the transversal metric, though not really defining a metric on \textit{N}, nevertheless defines an analytic function, \({}^{(2)}\!R (\mu) : N \rightarrow \mathbb{R}\), on \textit{N} given by setting
\begin{equation}\label{eq:236}
{}^{(2)}\!R (\mu)(p) = {}^{(2)}\!R (\mu_D)(p)
\end{equation}
for any \(p \in N\), where \textit{D} is any transversal disk containing \textit{p}. By construction, \({}^{(2)}\!R (\mu)\) is constant along the generators of \textit{N} and hence constant on the closure \(cl(\gamma)\) of any such generator. Indeed, each \(cl(\gamma)\) is just a connected component of a level set of \({}^{(2)}\!R (\mu)\).
At a generic point \(p \in N\), the differential \(d {}^{(2)}\!R (\mu)(p)\) will, by analyticity, be non-zero and, by invariance of \(\mu\) along the flow of \textit{X}, this differential will be non-zero at every point along the generator \(\gamma\) which passes through \textit{p}. By continuity \(d {}^{(2)}\!R (\mu)\) will thus be non-zero everywhere on \(cl(\gamma)\) as well. Choosing a Riemannian metric \({}^{(3)}\!g'\) on \textit{N} (such as that discussed in section \ref{subsec:application-poincare}) one computes from \(d {}^{(2)}\!R (\mu)\) an associated vector field, \(\nabla {}^{(2)}\!R (\mu)\), which is everywhere non-zero and everywhere metrically perpendicular to \(cl(\gamma)\). Thus \(\nabla {}^{(2)}\!R (\mu)\) is perpendicular to \textit{X} at every point of \(cl(\gamma)\). Using the metric \({}^{(3)}\!g'\) and its associated volume 3-form one can define a `cross-product' of \textit{X} and \(\nabla {}^{(2)}\!R (\mu)\) by taking the dual of the wedge product of the corresponding one-forms and `raising the index' of the resulting one-form. This yields another smooth vector field which is tangent to \(cl(\gamma)\), nowhere vanishing and everywhere perpendicular to \textit{X}. Thus \textit{X}, together with this `cross-product' vector field, defines an orientation for \(cl(\gamma)\), which is thus necessarily orientable.
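In index notation (our paraphrase of the construction just described), the `cross-product' field is
\begin{equation*}
W^i = ({}^{(3)}\!g')^{il}\, \epsilon_{ljk}\, X^j \left(\nabla {}^{(2)}\!R (\mu)\right)^k
\end{equation*}
where \(\epsilon\) is the volume 3-form of \({}^{(3)}\!g'\); since \textit{X} and \(\nabla {}^{(2)}\!R (\mu)\) are nowhere vanishing and mutually perpendicular along \(cl(\gamma)\), \textit{W} is nowhere vanishing and tangent to \(cl(\gamma)\) there.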
Therefore, any of the embedded two-manifolds, \(cl(\gamma)\), on which \(d {}^{(2)}\!R (\mu)\) is non-zero is (since compact, orientable and supporting a smooth non-vanishing vector field) necessarily a two-torus. By analyticity these are generic, since \(d {}^{(2)}\!R (\mu)\) can vanish only on isolated curves (corresponding to closed generators) or two-manifolds. The latter are necessarily tori as well since they can be shown to be orientable by a different argument.
To see this, we need to show that the compact two-manifold \(cl(\gamma)\) can be assigned a smooth, nowhere vanishing normal field. Let \textit{p} be a point in \(cl(\gamma)\) and let \textit{D} be a disk in \textit{N} (transversal to the flow of \textit{X} as usual) which contains \textit{p} as an interior point. We know that \(cl(\gamma)\) intersects \textit{D} in an analytic curve and that the recurrences of \textit{p}, followed to the future along the integral curve of \textit{X} through \textit{p}, densely fill this curve in \textit{D}. Suppose that one of these future recurrences of \textit{p} is a point \(p' \in D \cap cl(\gamma)\) which lies a metrical distance \(\delta\) (as measured along the curve \(D \cap cl(\gamma)\) with respect to the metric \(\mu_D\)) from the point \textit{p}. The point \(p'\) is uniquely determined by the point \textit{p} and the distance \(\delta\) since if, on the contrary, there were another future recurrence point \(p^{\prime\prime}\) of \textit{p}, an equal distance from \textit{p} (but on the opposing side of the curve \(D \cap cl(\gamma)\) from \(p'\)), then the same isometry which carries \textit{p} to the future to \(p'\) would carry \(p^{\prime\prime}\) to the future to \textit{p}. But this would imply that \(\gamma\) is closed, which is contrary to our assumption that \(cl(\gamma)\) is a closed two-manifold densely filled by \(\gamma\).
The same isometry which uniquely carries \textit{p} to \(p'\) carries any point \(q \in D \cap cl(\gamma)\), sufficiently near to \textit{p}, to a uniquely determined point \(q' \in D \cap cl(\gamma)\) a metrical distance \(\delta\) from \textit{q} (as measured, as before, along the curve \(D \cap cl(\gamma)\) by means of the metric \(\mu_D\)). It now follows from translating \textit{D} along the flow of \textit{X} and appealing to the invariance of the transversal metric and the fact that \(\gamma\) densely fills \(cl(\gamma)\) that any point \(q \in cl(\gamma)\) lies in a transversal disk \(D_q\) which also contains a uniquely defined future recurrent point \(q'\) which lies a metrical distance \(\delta\) along \(D_q \cap cl(\gamma)\) from \textit{q} (as measured by the transversal metric).
A unique vector can now be defined at \textit{q} which is orthonormal (as measured relative to the Riemannian metric \({}^{(3)}\!g'\) defined on \textit{N}) to the embedded two-manifold \(cl(\gamma)\). To see this, choose a disk \(D_q\) containing \textit{q} (e.g., a translate along the flow of \textit{X} of the original disk \textit{D}) which intersects \(cl(\gamma)\) in an analytic arc which contains the unique future recurrent point \(q'\) a metrical distance \(\delta\) from \textit{q}. By parametrizing this arc with an orientation defined by the direction leading from \textit{q} to \(q'\) (along the segment of length \(\delta\)) we can compute a vector at \textit{q} by calculating the tangent vector at this point. This vector depends upon the choice of disk \(D_q\) but, after taking its cross product with \textit{X} (using the metric \({}^{(3)}\!g'\) as before) and normalizing to unit length, we get a uniquely defined unit normal vector to \(cl(\gamma)\) at the arbitrary point \textit{q}. That this choice varies smoothly with the choice of point \(q \in cl(\gamma)\) can be seen as follows. The parametrized arc through \textit{q} has a smoothly varying tangent. Translating this curve along the flow of \textit{X} and appealing to the invariance of the transversal metric we can generate locally (i.e., on a neighborhood of \textit{q} in \(cl(\gamma)\)) a smooth tangent field to \(cl(\gamma)\) which, together with the cross product and normalization construction described above, determines a locally smooth unit normal field to \(cl(\gamma)\). However, this normal field is globally unique and thus, since smooth on a neighborhood of any point of \(cl(\gamma)\), defines a globally smooth normal direction to \(cl(\gamma)\). Thus \(cl(\gamma)\) is, as before, a compact, orientable embedded two-manifold in \textit{N} which supports a nowhere vanishing vector field (e.g., \textit{X} or the cross product of \textit{X} with the normal field). As such it must be a torus.
Thus the level sets of \({}^{(2)}\!R (\mu)\) in \textit{N} consist of at most a finite collection of closed generators (by compactness of \textit{N} and the fact that these circles are isolated for the cases of interest here) together with a foliation of the complement of these circles by embedded two-tori. Each closed generator (if any exist) lies at the core of a family of nested tori. Each torus in the complement of the closed generators is densely filled by an integral curve of \textit{X}, i.e., by the generator \(\gamma\) whose closure \(cl(\gamma)\) defines the chosen torus. In fact, from the invariance of the transversal metric along the flow of \textit{X}, it follows that every integral curve of \textit{X} lying in \(cl(\gamma)\) is densely filling. Thus there are no fixed points (\textit{X} is nowhere zero) or periodic orbits lying in any \(cl(\gamma) \approx T^2\).
The only cases which remain to be considered are those for which \({}^{(2)}\!R (\mu_D)\) is a constant on some transversal disk \textit{D}. By analyticity it follows that \({}^{(2)}\!R (\mu)\) is necessarily constant everywhere on \textit{N}. Evidently, there are three distinct possibilities corresponding to the metric \(\mu_D\) (defined on any transversal disk) being spherical (\({}^{(2)}\!R (\mu_D) > 0\)), pseudo-spherical (\({}^{(2)}\!R (\mu_D) < 0\)), or flat (\({}^{(2)}\!R (\mu_D) = 0\)). We shall show for the first two of these cases that again the closure, \(cl(\gamma)\), of any non-closed generator \(\gamma\) is an embedded, compact two-manifold diffeomorphic to \(T^2\). For the third case, when \(\mu_D\) is flat, another possibility arises, which we shall call `ergodic', in which a generator \(\gamma\) can densely fill \textit{N} itself. That such ergodic Cauchy horizons actually occur in solutions of Einstein's equations can be seen by taking the flat Kasner solution and spatially compactifying it, with suitable identifications, to yield a vacuum spacetime defined on \(T^3 \times \mathbb{R}\) which has a Cauchy horizon \(N \approx T^3\). The most obvious identification leads to a Cauchy horizon with all generators being closed but one can exploit the spatial homogeneity of the Kasner solution to make an `irrational shift' in the coordinates of the points being identified in such a way that the null generators of the Cauchy horizon \textit{N} now densely fill \textit{N}. One can of course also do this in such a way that the generators again only densely fill two-tori instead of \(T^3\). Nevertheless, the ergodic case does exist. We shall not deal with it here but mention the conjecture that every ergodic solution is essentially equivalent to (i.e., finitely covered by) one of the ergodic flat-Kasner solutions described above.
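To make the flat Kasner example concrete (a sketch; the identifications can be chosen in several inequivalent ways), recall that the flat Kasner solution, with Kasner exponents \((1,0,0)\), can be written
\begin{equation*}
g = -dt \otimes dt + t^2\, dx \otimes dx + dy \otimes dy + dz \otimes dz,
\end{equation*}
which, upon setting \(T = t \cosh x\), \(X = t \sinh x\), is an interior wedge of Minkowski space with null boundary at \(t = 0\). Identifying \(x \sim x + a\) (a boost) together with \(y \sim y + b\), \(z \sim z + c\) compactifies the spatial slices to \(T^3\) and yields a Cauchy horizon \(N \approx T^3\); the `irrational shift' alluded to above amounts to replacing the first identification by, say, \((x, y) \sim (x + a, y + \alpha b)\) with \(\alpha\) irrational, so that the null generators wind densely around two-tori (or, with suitable shifts in both \textit{y} and \textit{z}, densely fill \textit{N} itself).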
Assume now that \({}^{(2)}\!R (\mu)\) is a non-zero constant on \textit{N}, and let \textit{p} be an arbitrary point of \textit{N}. Choose a circular transversal disk \(D_p (\delta)\) centered at \textit{p} and having radius \(\delta\) (as measured along radial geodesics of the spherical or pseudo-spherical metric \(\mu_D\)). Let \textit{p} flow to the future along \textit{X} until it first reintersects \(D_p (\delta)\) at some interior point \(p'\). By assumption \(p'\) is the \textit{first} future recurrence of \textit{p} to the interior of \(D_p (\delta)\). Let \(\delta - \varepsilon > 0\) be the radial distance from \textit{p} to \(p'\) and let \(D_{p'} (\varepsilon/2) \subset D_p (\delta)\) be a circular disk of radius \(\varepsilon/2\) centered at \(p'\). We know that there is a unique isometry, determined by the flow along \textit{X}, which carries the corresponding disk \(D_p (\varepsilon/2)\) centered at \textit{p} to \(D_{p'} (\varepsilon/2)\). This isometry is the restriction of an orientation preserving isometry of the sphere or pseudo-sphere to the subdomain defined by \(D_p (\varepsilon/2)\) and, as such, belongs to a uniquely defined one-parameter subgroup of the full (spherical or pseudo-spherical) orientation preserving isometry group.
The action of this subgroup is generated by a unique Killing field \textit{K} of the manifold \((D, \mu_D)\). From Killing's equation, \(\mathcal{L}_K \mu_D = 0\), one gets that \(\mu_D (K,K)\), the squared length of \textit{K}, is constant along the orbits of the one-parameter subgroup generated by \textit{K} (i.e., \(\mathcal{L}_K \left(\mu_D (K,K)\right) = 0\) on \textit{D}). Since \(\mu_D (K,K)\) is analytic and non-constant (since we have excluded the flat case for the present), its level sets are analytic curves which coincide with the orbits generated by \textit{K}. Let \(c_p\) be the orbit through \textit{p} generated by \textit{K}; this is just a connected component of the level set of \(\mu_D (K,K)\) determined by the value of this function at \textit{p}. What we want to show is that every future recurrence of \textit{p}, sufficiently near \textit{p}, actually returns to, and in fact densely fills, the curve \(c_p\). This will guarantee, by arguments similar to those given above, that the closure of the orbit \(\gamma\) of \textit{X} through \textit{p} is in fact a torus embedded in \textit{N} as before.
First note that not only is \(p'\) the first future recurrence of \textit{p} to the disk \(D_p (\delta)\) but also the first recurrence of \textit{p} to the smaller disk \(D_p (\delta - \varepsilon/2)\). Indeed, by choosing \(\eta > 0\) small enough, it is clear that we can ensure that \(p'\) is the first future recurrence of \textit{p} to any disk of the type \(D_q (\delta - \varepsilon/2)\) where the distance \(d(q,p)\) from \textit{p} to \textit{q} (as measured by the metric \(\mu_D\)) is less than \(\eta\). In particular, we clearly need \(\eta < \varepsilon/2\) but let us take \(\eta\) sufficiently small so that the disk, \(D_p (\eta)\), of radius \(\eta\) centered at \textit{p}, intersects the level set of \(\mu_D (K,K)\) corresponding to the level value \(\mu_D (K,K) (p)\) only along an arc of \(c_p\) (i.e., if this level set includes disconnected components we choose \(\eta\) small enough so that \(D_p (\eta)\) excludes them). Further require (if necessary) that \(\eta < \varepsilon/4\) so that any point of the disk \(D_q (\delta - \varepsilon/2)\), for which \(d(q,p) < \eta < \varepsilon/4\), is at least a distance greater than \(\eta\) from the boundary of the original disk \(D_p (\delta)\). This ensures that the first recurrence of any point \(q \in D_p (\eta)\) to the disk \(D_q (\delta - \varepsilon/2)\) must be given by that isometry which carried \textit{p} to \(p'\) (and \(D_p (\varepsilon/2)\) to \(D_{p'} (\varepsilon/2)\)). The reason is that, if this were not the case, then the distinct isometry which first carries \textit{q} to some \(q' \in D_q (\delta - \varepsilon/2)\) would take \textit{p} to some point \(p^{\prime\prime}\) distinct from \(p'\) (since we are excluding the case of a closed generator through \textit{p}) which lies within \(D_p (\delta)\) (since \(p^{\prime\prime}\) lies within a distance \(\eta\) of \(q'\) and every point of \(D_q (\delta - \varepsilon/2)\) is at least a distance \(\eta\) from the boundary of \(D_p (\delta)\)). But this contradicts the original assumption that \(p'\) was the first future recurrence of \textit{p} to \(D_p (\delta)\).
Thus the first future recurrence of any \(q \in D_p (\eta)\) to the disk \(D_q (\delta - \varepsilon/2)\) is in fact that \(q'\) which is determined by the unique isometry which carries \(D_p (\varepsilon/2)\) to \(D_{p'} (\varepsilon/2)\) (and, of course, \textit{p} to \(p'\)).
Now, let \(q \in D_p (\eta)\) be some subsequent future recurrence of \textit{p} to \(D_p (\eta)\). We want to show that \(q \in c_p\) so suppose this is not the case. This would mean that \textit{q} and its image \(q'\) (under the isometry which carries \(D_p (\varepsilon/2)\) to \(D_{p'} (\varepsilon/2)\)) lie on some other level set of \(\mu_D (K,K)\) corresponding to a level value different from that determined by \(c_p\) (i.e., different from \(\mu_D (K,K) (p)\)). This is impossible however, since the point \(q'\) represents the first future recurrence of \textit{q} to \(D_q (\delta - \varepsilon/2)\) whereas \textit{q} is a future recurrence of \textit{p}. But the invariance of the transversal metric along the flow of \textit{X} implies the triple \((q, q', D_q (\delta - \varepsilon/2))\) must be an isometric copy of the triple \((p, p', D_p (\delta - \varepsilon/2))\) which results from simply translating the original triple along the flow until \textit{p} gets mapped to \textit{q}, etc. However, that means that \textit{p} and \textit{q} (as well as \(p'\) and \(q'\)) must both lie on the same level of \(\mu_D (K,K)\) and hence both lie on \(c_p\).
Thus all (future) recurrences of \textit{p} sufficiently near \textit{p} must lie on the analytic curve \(c_p \subset D_p (\delta)\) which contains \textit{p}. A completely analogous argument shows that the same is true for past recurrences of \textit{p}. Since these recurrences must approach \textit{p} or, in fact, any of its recurrences on \(c_p\) arbitrarily closely it is clear that, as before, the recurrences of \textit{p} densely fill the analytic curve \(c_p \subset D_p (\delta)\). Translating this curve along the flow generated by \textit{X} yields an analytic surface through \textit{p} defined locally by the foregoing constructions. Thus near any point \(p \in N\) the closure \(cl(\gamma)\), of the orbit of \textit{X} through \textit{p} is an analytically embedded two-dimensional submanifold of \textit{N}. Since \(cl(\gamma)\) is closed in \textit{N} and \textit{N} is compact, \(cl(\gamma)\) must as before be a compact submanifold of \textit{N} which supports a (smooth) nowhere vanishing vector field (e.g. \textit{X} itself). We can now use the same argument as that given above for those (isolated) manifolds having \(\nabla {}^{(2)}\!R (\mu) = 0\) to show that \(cl(\gamma)\) is orientable and hence a torus.
This argument breaks down in the case \({}^{(2)}\!R (\mu) = 0\) (i.e., when \(\mu\) is flat) but only if the isometry carrying \textit{p} to \(p'\) is a pure translation (since then and only then is \(\mu_D (K,K)\) constant on \textit{D}). The flat case still allows special cases for which \(cl(\gamma)\) is a two-torus and in those instances the arguments to follow go through equally well. But the flat case also allows more general patterns of recurrence in which \(cl(\gamma)\) is not simply a 2-manifold, but may in fact be all of \textit{N}. We shall refer to these more general cases as `ergodic' and shall not deal with them in the following. It is worth noting, however, that if an `ergodic' flow on \textit{N} generated by \textit{X} happened to admit a global transversal foliation with closed leaves (i.e., compact embedded two-manifolds everywhere transversal to the flow of \textit{X} and intersected by every orbit) then we could treat this case as well by a modification of the arguments to be given below.
Thus the picture we have developed, namely that \textit{N} contains at most a finite number of closed generators and that any non-closed generator \(\gamma\) yields an embedded two-torus in \textit{N} as its closure, applies to every case except the ergodic ones for which \({}^{(2)}\!R (\mu)\) is necessarily zero.
\subsection{Invariance of the Transversal Metric}
\label{subsec:invariance}
Consider an arbitrary two-disk \textit{D} which is analytically embedded in \textit{N} and which is everywhere transversal to the null generators of that hypersurface. In a gaussian null coordinate chart which covers \textit{D}, it is clear that \textit{D} has a coordinate characterization of the form,
\begin{equation}\label{eq:205}
t = 0,\; x^3 = f(x^a)
\end{equation}
for some real analytic function \textit{f}. (Here the \(\lbrace x^a\rbrace\) range over those values corresponding to the generators which intercept \textit{D}.) From Eqs.~(\ref{eq:203}), (\ref{eq:204}) and (\ref{eq:205}) one sees that \textit{g} induces a Riemannian metric \(\mu_D\), given by
\begin{equation}\label{eq:206}
\mu_D = \mu_{ab}|_{x^3=f(x^c)} dx^a \otimes dx^b
\end{equation}
on \textit{D}. If we let \textit{D} flow along the integral curves of the vector field \(K = \frac{\partial}{\partial x^3}\) associated (at least locally) to the chosen chart, then we get a one-parameter family \(D_\lambda\) of embeddings of \textit{D} in \textit{N} characterized by
\begin{equation}\label{eq:207}
t = 0,\; x^3 = f(x^a) + \lambda
\end{equation}
and a corresponding family of metrics \(\mu_{D_\lambda}\) given by
\begin{equation}\label{eq:208}
\mu_{D_\lambda} = \mu_{ab}|_{x^3=f(x^c)+\lambda} dx^a \otimes dx^b.
\end{equation}
Here \(\lambda\) ranges over some open interval containing \(\lambda = 0\).
Locally one can always choose a particular vector field \textit{K} tangent to the null generators of \textit{N} such that the integral curves of \textit{K} coincide with the \textit{affinely parametrized} null geodesics generating \textit{N} (i.e., such that the curves \(\lbrace x^a(\lambda)\rbrace\) defined by \(t(\lambda) = 0, x^a(\lambda) = \hbox{constant}, x^3(\lambda) = \mathring{x}^3 + \lambda\) are affinely parametrized null geodesics generating (a portion of) \textit{N}, with \(\lambda\) an affine parameter). \textit{K} is of course not unique (since there is no canonical normalization for \(\lambda\) along each generator) but can be fixed by prescribing it at each point of some transversal two-manifold. In general, \textit{K} may also not be extendable to a globally defined vector field on \textit{N} (since the affinely parametrized generators of \textit{N} may be incomplete whereas the flow of a globally defined vector field on the \textit{compact} manifold \textit{N} must be complete) but this is of no consequence in the following construction. For any point \(p \in N\) choose a disk \textit{D} which contains \textit{p} and is everywhere transversal to the null generators of \textit{N}. Construct, on a neighborhood of \textit{D} in \textit{N}, a vector field \textit{K} of the type described above and let \(\lbrace x^\mu\rbrace = \lbrace t, x^3, x^a\rbrace\) be an agn coordinate chart adapted to \textit{K} (i.e., so that \(\frac{\partial}{\partial x^3} = K\) is tangent to the affinely parametrized generators of \textit{N}). Now let \textit{D} flow along the integral curves of \textit{K} to get a one-parameter family of embedded disks \(D_\lambda\) and a corresponding family of induced Riemannian metrics \(\mu_{D_\lambda}\) as described above.
In terms of this construction, one can compute the \textit{expansion} \(\hat{\theta}\) of the null generators at \textit{p} by evaluating
\begin{equation}\label{eq:209}
\hat{\theta}(p) = \left.\left(\frac{\partial}{\partial\lambda} \ln{(\det{\mu_{D_\lambda}})}\right)\right|_{\substack{\lambda=\lambda(p)\\
x^a=z^a(p)}}.
\end{equation}
It is not difficult to verify that this definition is independent of the particular transversal manifold \textit{D} chosen through \textit{p} and of the particular coordinates \(\lbrace x^a\rbrace\) used to label the generators near \textit{p}. In fact, this definition of \(\hat{\theta}\) is equivalent to the usual definition of the \textit{expansion} of the null generators of a null hypersurface.
In our case, however, \textit{N} is not an arbitrary null surface. It is, by assumption, a compact Cauchy horizon in a vacuum spacetime. For such a hypersurface Hawking and Ellis have proven the important result that \(\hat{\theta}\) vanishes at every point \(p \in N\) \cite{Hawking:1973,Hollands:2007}. Thus in an agn coordinate chart adapted to \textit{K} one has
\begin{equation}\label{eq:210}
\left.(\det{\mu_{ab}})_{,3}\right|_{t=0} = 0
\end{equation}
at every point of \textit{N} covered by the chart. Moreover, the Einstein equation \(R_{33} = 0\), restricted to \textit{N}, yields
\begin{equation}\label{eq:211}
\begin{split}
\mathring{R}_{33} &= 0 = \left\lbrack\vphantom{\frac{1}{4}} \left(\ln \sqrt{\det{\mu}}\right)_{,33}\right.\\
&{} + \frac{1}{2} \varphi_{,t} \left(\ln \sqrt{\det{\mu}}\right)_{,3}\\
&{} \left.\left. + \frac{1}{4} \mu^{ac} \mu^{bd} \mu_{ab,3} \mu_{cd,3}\right\rbrack\right|_{t=0}
\end{split}
\end{equation}
in an arbitrary gaussian null coordinate chart (where \((\det{\mu}) \equiv \det{(\mu_{ab})}\)). Combining Eqs.~(\ref{eq:210}) and (\ref{eq:211}), we see that \(\mu_{ab,3}|_{t=0} = 0\) throughout the local chart adapted to \textit{K}; the short computation below makes this step explicit.
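In more detail: since Eq.~(\ref{eq:210}) holds at every point of \textit{N}, both \(\left(\ln\sqrt{\det{\mu}}\right)_{,3}\) and its derivative along the generators \(\left(\ln\sqrt{\det{\mu}}\right)_{,33}\) vanish on \textit{N}, so the first two terms of Eq.~(\ref{eq:211}) drop out, leaving
\begin{equation*}
\left.\mu^{ac} \mu^{bd} \mu_{ab,3} \mu_{cd,3}\right|_{t=0} = 0.
\end{equation*}
The left-hand side is the squared norm of the symmetric tensor \(\mu_{ab,3}\) taken with respect to the positive definite metric \(\mu_{ab}\), and therefore vanishes if and only if \(\mu_{ab,3}|_{t=0} = 0\).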
From this result, it follows easily that the metric \(\mu_D\) induced upon an arbitrary disk transversal to a given bundle of null generators of \textit{N} is, in fact, independent of the disk chosen. To see this, one computes, recalling Eqs.~(\ref{eq:203}) and (\ref{eq:204}), the metric induced upon an arbitrary such disk \textit{D} (satisfying \(t = 0, x^3 = f(x^a)\)). From the result that \(\mu_{ab,3}|_{t=0} = 0\) it follows that this induced metric is independent of the function \textit{f} (which embeds \textit{D} in the given bundle) and hence of the particular transversal disk chosen. Though this calculation was carried out using a special family of charts, the definition of the induced metric is a geometrical one, and thus the invariance of this metric (relative to an arbitrary displacement along the null generators of \textit{N}) is independent of any choice of charts.
The invariance of this transversal metric will play an important role in the sections to follow. Notice that if one starts at a transversal disk \textit{D} and flows along the generators of \textit{N}, then one may eventually reach another disk \(D'\) transversal to the same bundle of generators which partially or completely coincides with \textit{D}. Indeed, upon application of the {Poincar\'{e}} recurrence theorem in the next subsection, we shall see that this always happens and that every null generator of \textit{N} is either closed or comes arbitrarily close to closing. By the result of the preceding paragraph, the metric \(\mu_{D'}\) induced on \(D'\) is isometric to the metric \(\mu_D\) induced upon \textit{D} (since the transversal metric is invariant under the flow which carries \textit{D} to \(D'\)). If the null generators intersecting \textit{D} were all closed curves, this would hardly be surprising since \textit{D} would eventually coincide with \(D'\) and the isometry would simply be the identity map. In the non-closed case of primary interest here, however, it leads to non-trivial restrictions upon the transversal metric \(\mu_D\). For example, suppose \(U \subset D\) is an open subset of \textit{D} which, upon translation along the generators of \textit{N}, reintersects \textit{D} in another open set \(U'\). There is a natural diffeomorphism \(\varphi_U\) of \textit{U} and \(U'\) defined by this translation mapping and, from the invariance of the transversal metric, it follows that
\begin{equation}\label{eq:212}
\mu_D|_{U'} = \varphi_U^\ast (\mu_D|_U)
\end{equation}
i.e., that \((U,\mu_D|_U)\) and \((U',\mu_D|_{U'})\) are isometric with \(\varphi_U\) the isometry. Of course \(\varphi_U\) may have some fixed points (corresponding to (non-generic) closed null generators) but, for the cases of interest here, \(\varphi_U\) is not simply the identity map (even if \(U = U'\)) since, generically, the generators will not be closed. Thus open subsets of \((D,\mu_D)\) will be non-trivially isometric to other open subsets of this same space and, as we shall see from the recurrence theorem, there will be infinitely many such local isometries of \((D,\mu_D)\) due to the fact that a generic generator intersecting \textit{D} will reintersect \textit{D} in infinitely many distinct points.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
We would like to thank external experts for participating in our interviews and giving us invaluable feedback. We also thank the anonymous reviewers for their detailed reviews and constructive suggestions.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:introduction}
\IEEEPARstart{G}{raphs} are pervasive in various applications, such as citation networks, social media, and biology.
Analyzing graph data helps us understand the hidden patterns in graphs and benefits many graph-related tasks, including node classification, link prediction, and graph classification.
For example, an effective analysis of a paper citation graph can facilitate tasks such as predicting the topic of a new paper~\cite{kipf2016semi,hamilton2017inductive}.
A careful exploration of social networks can benefit the creation of an adaptive friend recommendation system in social media~\cite{DBLP:journals/access/ChenXZZX20}.
By modeling molecules as graphs, where atoms and chemical bonds are treated as nodes and edges respectively, we can build machine learning techniques to predict the chemical properties (\textit{e.g.}, solubility) of chemical compounds~\cite{DBLP:conf/nips/FoutBSB17}.
In recent years, graph analytics has embraced a new breakthrough---Graph Neural Networks (GNNs).
A fast-growing number of GNN models have been proposed to solve graph-based tasks.
For example, Graph Convolutional Network (GCN)~\cite{kipf2016semi} adapts the convolutional operation from natural images to graphs and conducts semi-supervised learning to perform node classification on them.
Graph Attention Network (GAT)~\cite{velivckovic2017graph} further integrates the attention mechanism, which is widely used in Natural Language Processing (NLP), into the GNN model architecture and dynamically assigns weights to different neighbors to enhance the model performance.
The advances of GNNs bring new opportunities to the analysis of graph data and have become increasingly popular in recent years.
However, similar to other deep neural networks, GNN models are difficult to interpret.
When developing or using GNNs, developers and users often need to evaluate the model performance and explore the causes of model errors and failures, which, unfortunately, is often hard to achieve.
Therefore, enabling convenient error diagnosis of GNN models has become a challenging but important task.
Visualization has been applied to help model developers devise new deep learning techniques, and to debug and compare different types of deep neural networks~\cite{hohman2018visual}.
For example,
various visualization techniques have been proposed to facilitate the development of a variety of deep learning models, such as CNN \cite{liu2016towards}, RNN \cite{ming2017understanding}, GAN \cite{wang2018ganviz}, and DQN \cite{wang2018dqnviz}.
These visualizations have achieved great success in understanding and analyzing those deep learning models.
However, it is very challenging to directly apply them to GNNs, since most of those techniques are exclusively designed for Euclidean data like images and text, while GNNs mainly work on non-Euclidean data such as graphs.
Another challenge for the error diagnosis of GNNs comes from the fact that GNNs often involve both the complex topological structure and the high-dimensional features of graphs, as well as the interplay between them.
To effectively analyze GNNs, it is crucial to properly link the topological data, high dimensional features, and prediction results with a comprehensive workflow.
Preliminary studies~\cite{baldassarre2019explainability,ying2019gnnexplainer,li2020explain} have proposed techniques to explain GNN model prediction results.
Most of them focus on instance analysis, \textit{i.e.}, explaining the prediction for a single node.
However, there is still a lack of research at a higher level, \textit{i.e.}, analyzing and understanding the common causes of the classification errors of groups of nodes.
With these methods, it is difficult to conveniently explore the general error patterns in the prediction results of a GNN model, or to gain further insights for model improvement.
In summary, it remains unclear how to develop new visualization techniques to facilitate the effective error diagnosis of GNNs.
In this paper, we propose a novel error-pattern-driven visual analytics system, {{\textit{GNNVis}}}\footnote[1]{https://gnnvis.github.io/}, to provide model developers and users with deep insights into model performance and its dependency on data characteristics.
Instead of analyzing the GNN prediction results of single instances, we investigate the patterns in the prediction results shared by a group of instances to obtain generalizable insights into the model architecture.
We worked closely with two GNN experts for four months to derive the design requirements of {{\textit{GNNVis}}}.
{{\textit{GNNVis}}} comprises five parts: Control Panel, Parallel Sets View, Projection View, Graph View, and Feature Matrix View.
The Parallel Sets View enables users to see the distribution of node-level metrics.
The Projection View presents a set of 2D projections of the selected nodes according to metrics summarized from different perspectives, enabling users to extract potential clusters of nodes.
Novel node glyphs in the Projection View are proposed to help users conveniently inspect multiple node metrics and extract general error patterns.
We conducted two case studies and expert interviews to demonstrate the effectiveness and usability of {{\textit{GNNVis}}} in helping model developers understand and diagnose GNNs.
The contributions of our work can be summarized as follows:
\begin{itemize}
\item A visual analytics system to assist model developers and users in understanding and diagnosing GNNs.
\item A set of novel node glyphs to help users conveniently learn about the metrics of nodes.
\item Case studies on analyzing error patterns in GNN prediction results and interviews with domain experts to demonstrate the effectiveness and usability of the proposed system.
\end{itemize}
The remainder of this paper is organized as follows. Section~\ref{sec:related_work} discusses the related work of this paper, including GNNs, visual analytics in deep learning, and GNN explainability. Section~\ref{section:background} provides a brief introduction to the basic concepts of GNNs, such as the typical architectures GCN and GAT.
By working closely with domain experts, we summarize the design requirements of understanding and diagnosing GNN models in
Section~\ref{subsec:design_requirements} and further introduce the technical details of the proposed approach {{\textit{GNNVis}}}.
We evaluate our approach through case studies and expert interviews in Section~\ref{sec:evaluation} and discuss the possible limitations and future work of our approach in Section~\ref{sec:discussion}. Section~\ref{sec:conclusion} concludes the paper with a brief summary of the proposed method.
\section{Related Work}
\label{sec:related_work}
The related work of this paper can be categorized into three groups: GNNs, visual analytics in deep learning, and GNN explainability.
\subsection{GNNs}
GNNs have been developed to analyze graph data by extending CNNs or RNNs to the graph domain~\cite{zhou2018graph} in the past few years.
These neural networks have gained promising prediction results for analyzing graphs.
GNNs derived from CNNs can be categorized into spectral approaches and spatial approaches~\cite{zhou2018graph}. Spectral approaches define convolution on the spectral representation of graphs~\cite{bruna2013spectral, defferrard2016convolutional, kipf2016semi}. The work by Bruna et al.~\cite{bruna2013spectral}
is the first attempt
to generalize the convolution concept from natural images to the graph domain. Defferrard et al.~\cite{defferrard2016convolutional} approximated the spectral convolution with Chebyshev polynomials of the diagonal matrix of eigenvalues, resulting in a lower computational cost. Kipf and Welling~\cite{kipf2016semi} further simplified the Chebyshev polynomials by using the first order of the polynomials and a renormalization trick, known as GCN, which has inspired many follow-up studies.
Spatial approaches directly define convolution on spatially close neighbors~\cite{hamilton2017inductive, duvenaud2015convolutional, atwood2016diffusion, zhuang2018dual, niepert2016learning, gao2018large, monti2017geometric}. Hamilton et al.~\cite{hamilton2017inductive} proposed GraphSAGE, which uses sampling methods and aggregators defined over the neighborhood to reduce the dependence on processing whole graphs.
Their approach greatly accelerates GNNs on large-scale graphs.
Another direction is to extend RNNs to the graph domain. Prior studies have attempted to utilize gate functions in GNNs to improve their ability to propagate information across the graph structure~\cite{li2015gated, tai2015improved, zayats2018conversation, liang2016semantic, DBLP:journals/tvcg/WangJWCMQ20}.
Researchers have also made significant progress in analyzing GNN models. For example, Li et al.~\cite{li2018deeper} showed that the graph convolution of a GCN is merely a Laplacian smoothing operation,
and that the risk of over-smoothing increases as the number of layers grows.
They also showed that when few training labels are given to train GCN models, co-training and self-training methods can improve the performance of GCN models. Xu et al.~\cite{xu2018powerful} provided a theoretical framework to analyze the expressive power of GNNs and proved that their proposed model is as expressive as the Weisfeiler-Lehman graph isomorphism test.
Different from these studies, our work aims to extract general error patterns of GNN models and thereby help model developers understand and diagnose the models.
\subsection{Visual Analytics in Deep Learning}
Nowadays, there is a growing trend to use visualizations to understand, compare, and diagnose deep neural networks~\cite{hohman2018visual}.
Prior studies on using visual analytics to enhance the interpretability of deep neural networks can generally be categorized into two types: model-agnostic visualizations and model-specific visualizations.
\zhihua{}{For model-agnostic visualizations, prior studies mainly focus on visualizing the model input and output to provide insights into the correlation between them~\cite{Zhang2018manifold, alsallakh2014visual} or using surrogate models to explain the deep neural networks~\cite{DBLP:journals/tvcg/MingQB19,DBLP:journals/tvcg/WangGZYS19}.}
However, these model-agnostic visualizations avoid showing the hidden states of the deep neural networks and fail to reveal the inner working mechanism of different models.
To support a dive into the deep learning models, researchers have also proposed a series of model-specific visualizations for explaining deep learning models.
Previous model-specific visualizations have covered a wide range of deep learning models, including CNN, RNN, and GAN.
A variety of visualization techniques and interactions have been designed based on the data type, the model structures, and the working mechanism of different deep learning models.
Since CNNs and RNNs are the most widely used deep learning models~\cite{lecun2015deep, DLBook}, a majority of model-specific visual analytics approaches are proposed for these two types of models.
For example,
CNNs are usually modeled using the directed acyclic graph visualization, and the output of each layer is usually displayed using matrix-based visualizations~\cite{liu2016towards, liu2018deeptracker, pezzotti2017deepeyes}.
To open the black box of RNNs, clustering methods and correlation visualizations have been proposed to uncover the dynamic hidden states and learned patterns in RNNs \cite{ming2017understanding, strobelt2017lstmvis, strobelt2018s}.
Recently, visual analytics methods tailored for generative models~\cite{wang2018ganviz, liu2017analyzing, kahng2018gan}
and reinforcement learning models~\cite{wang2018dqnviz}
have also been proposed.
Although much work has been done on using visualization approaches to improve the explainability of deep learning models, little research has been conducted on enhancing the explainability of GNNs through visualizations.
To fill this research gap, this paper contributes a visualization tool to assist in the understanding and diagnosis of GNNs.
\subsection{GNN Explainability}
According to our research, only a few studies have attempted to explain GNN models.
For instance,
Baldassarre et al. \cite{baldassarre2019explainability} explored the possibilities of adapting explanation techniques from CNNs to GNNs.
They empirically evaluated three widely used CNN explanation methods, \textit{i.e.}, Sensitivity Analysis (SA), Guided Back Propagation (GBP), and Layer-wise Relevance Propagation (LRP), when explaining GNN decisions.
They found that explanations produced by SA or GBP tend to
be inconsistent with human interpretation,
while LRP produces more natural explanation results.
Meanwhile,
Ying et al.~\cite{ying2019gnnexplainer} proposed GNNExplainer, which uses a subgraph to explain the GNN model prediction.
Given a trained GNN model, they formulate an optimization task to maximize the mutual information between the trained model's prediction and the distribution of possible graph structures, and regard the resulting subgraph as the explanation. Li et al.~\cite{li2020explain} further extended GNNExplainer, which was designed for undirected unweighted graphs, to directed weighted graphs.
Previous studies have mainly focused on providing instance-based explanations, rather than on analyzing the classification errors made by GNNs at a higher level. Different from previous studies on GNN explainability, our work mainly focuses on analyzing error patterns made by GNN models and gives model developers and users a different perspective from which to inspect the model and become familiar with error patterns in the model predictions.
\section{Background}
\label{section:background}
GNNs
are deep neural networks that directly operate on graphs (\textit{i.e.}, networks).
A graph can be represented as $G=(V,E)$, where $V$ denotes the vertex set and $E$ denotes the edge set.
$X \in \mathbb{R}^{N\times d}$ is the feature matrix of the graph, where $N$ denotes the number of nodes in the vertex set and $d$ is the dimension of each node feature.
The labels of the nodes in the graph are often denoted as
$Y$.
In this paper, we do not consider edge features.
We adopt notations similar to those introduced in~\cite{fey2019fast} to illustrate the concept of GNNs. GNNs can generally be expressed in a neighborhood aggregation or message passing scheme~\cite{hamilton2017representation}, as shown in Fig.~\ref{Fig.gnn}. A general message passing function for GNNs is shown below:
\begin{equation}\mathbf{x}_{i}^{(k)}=\mathcal{C}^{(k)}\left(\mathbf{x}_{i}^{(k-1)}, \mathcal{A}_{j \in \mathcal{N}(i)} \phi^{(k)}\left(\mathbf{x}_{i}^{(k-1)}, \mathbf{x}_{j}^{(k-1)}\right)\right)\end{equation}
where $\mathbf{x}_{i}^{(k-1)}$ denotes the features of node $i$ at layer $k-1$.
$\mathcal{N}(i)$ denotes the neighborhood of node $i$. $\mathcal{A}$ denotes a differentiable, permutation invariant function, \textit{e.g.}, sum. $\mathcal{C}$ and $\phi$ denote differentiable functions such as MLPs.
GCNs and GATs are two popular
GNNs.
According to Kipf and Welling~\cite{kipf2016semi}, the message passing function of GCN can be defined as follows:
\begin{equation}
\mathbf{x}_{i}^{(k)}=\sigma \left( \sum_{j \in \mathcal{N}(i) \cup\{i\}} \frac{1}{\sqrt{\operatorname{deg}(i)} \sqrt{\operatorname{deg}(j)}} \left(\mathbf{W} \mathbf{x}_{j}^{(k-1)}\right) \right)\end{equation}
where the features of the neighbors of node $i$ are first transformed by a weight matrix $\mathbf{W}$, then normalized by their degree and finally summed up. $\sigma$ is a non-linearity function.
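To make the propagation rule concrete, the following is a minimal dense sketch of Eq.~(2) (our own illustrative code, not the implementation used in {{\textit{GNNVis}}}); it assumes a 0/1 float adjacency matrix \texttt{adj} to which self-loops have already been added, and uses ReLU for $\sigma$:
\begin{verbatim}
import torch

def gcn_layer(x, adj, weight):
    # x: [N, d_in] node features; adj: [N, N] float adjacency with
    # self-loops; weight: [d_in, d_out].
    # Implements sigma(D^{-1/2} A D^{-1/2} X W).
    deg = adj.sum(dim=1)               # deg(i), including the self-loop
    d_inv_sqrt = deg.pow(-0.5)
    # entry (i, j) becomes adj[i, j] / (sqrt(deg(i)) * sqrt(deg(j)))
    norm_adj = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)
    return torch.relu(norm_adj @ (x @ weight))
\end{verbatim}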
GAT was first proposed by Veli{\v{c}}kovi{\'c} et al.~\cite{velivckovic2017graph} and its message passing function is defined as follows:
\begin{equation}\mathbf{x}_{i}^{(k)}=\sigma\left(\sum_{j \in \mathcal{N}(i)\cup\{i\}} \alpha_{i j} \mathbf{W} \mathbf{x}_{j}^{(k-1)}\right)\end{equation}
where $\mathbf{W}$ and $\sigma$ are defined similarly as above.
Different from GCN, GAT assigns different weights (attention coefficients) to each neighbor. The attention coefficients $\alpha_{ij}$ are computed as:
\begin{equation}
\alpha_{i, j}=\frac{\exp \left(\text { LeakyReLU }\left(\mathbf{a}^{\top}\left[\mathbf{W} \mathbf{x}_{i} \| \mathbf{W} \mathbf{x}_{j}\right]\right)\right)}{\sum_{k \in \mathcal{N}(i) \cup\{i\}} \exp \left(\text { LeakyReLU }\left(\mathbf{a}^{\top}\left[\mathbf{W} \mathbf{x}_{i} \| \mathbf{W} \mathbf{x}_{k}\right]\right)\right)}
\end{equation}
where $\mathbf{a}$ is a weight vector and LeakyReLU is an activation function defined as $\text{LeakyReLU}(x)=\max(0, x)+\text{negative\_slope}\cdot\min(0,x)$.
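As an illustration, a single-head version of Eq.~(4) can be sketched as follows (again our own dense, illustrative code; it assumes the weight vector $\mathbf{a}$ is split into the halves acting on $\mathbf{W}\mathbf{x}_i$ and $\mathbf{W}\mathbf{x}_j$):
\begin{verbatim}
import torch
import torch.nn.functional as F

def gat_attention(x, adj, weight, a, negative_slope=0.2):
    # x: [N, d_in]; adj: [N, N] with self-loops; weight: [d_in, d_out];
    # a: [2 * d_out]. Returns the [N, N] attention matrix alpha.
    h = x @ weight                               # W x for every node
    d = h.size(1)
    # a^T [W x_i || W x_j] = (a_1 . W x_i) + (a_2 . W x_j)
    e = (h @ a[:d]).unsqueeze(1) + (h @ a[d:]).unsqueeze(0)
    e = F.leaky_relu(e, negative_slope)
    e = e.masked_fill(adj == 0, float('-inf'))   # keep j in N(i) + self
    return torch.softmax(e, dim=1)               # normalize over j
\end{verbatim}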
GNN models are mainly applied to the tasks of node classification and link prediction in individual graphs.
In this paper, we take node classification as an example to illustrate how our approach can improve the interpretation of GNN models and facilitate model diagnosis.
Such node classification tasks are often performed in a semi-supervised way.
Given a set of labeled nodes (\textit{i.e.}, training nodes) in a graph,
a GNN model is trained to predict the labels of the rest of the nodes in the graph.
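A minimal sketch of such a semi-supervised training step is given below (illustrative code under our own naming, assuming \texttt{model} is a callable GNN taking features and an adjacency matrix; the forward pass uses the full graph while the loss is computed only on the labeled nodes):
\begin{verbatim}
import torch.nn.functional as F

def train_step(model, optimizer, x, adj, y, train_mask):
    # x: [N, d] features; adj: [N, N]; y: [N] labels;
    # train_mask: boolean [N] marking the labeled (training) nodes.
    optimizer.zero_grad()
    logits = model(x, adj)          # predictions for all N nodes
    loss = F.cross_entropy(logits[train_mask], y[train_mask])
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}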
\section{Design Requirement Analysis}
\label{subsec:design_requirements}
We worked closely with two GNN experts, who are also co-authors of this work, to collect their feedback on
the GNN interpretation issues they face and their current practices of understanding and diagnosing GNN models.
One expert (E1) is a senior researcher who specializes in developing new kinds of GNNs. The other expert (E2) is a deep learning developer with strong experience in applying GNNs to modeling and analyzing topology data from different application domains such as online education and visualization.
Also,
the development of {{\textit{GNNVis}}} was conducted in an iterative way. After we finished each version of the system, we asked the experts to use the pilot system, comment on its limitations, and suggest possible improvements.
By combining the original requirements proposed by the experts and their subsequent comments on the limitations of the system, we compiled a list of major design requirements, which can be summarized as follows:
\textbf{R1: Provide an overview of GNN results.}
Both experts commented that an overview of the GNN performance is crucial for GNN analysis.
To gain an overview of the dataset and classification results, the system needs to summarize various types of information, such as degree distribution and ground truth label distribution.
This information, covering various aspects of a GNN model, needs to be organized and presented in a clear manner.
Meanwhile, the correlation among this information should be presented to help users develop initial hypotheses about any possible error patterns in GNN results, \textit{i.e.}, a set of wrong predictions that share similar characteristics.
\textbf{R2: Identify error patterns.}
After developing initial hypotheses about the error patterns, users need more detailed information to verify them.
Specifically, users need to examine the characteristics shared by a set of wrong predictions and verify whether error patterns formed by these characteristics make sense in analyzing GNNs based on their domain knowledge.
During the interview, experts agreed that they usually use several characteristics to group the wrong predictions and identify error patterns.
For example, one expert stated that \textit{``misclassified nodes usually have a relatively large shortest path distance to the labeled nodes.''}
Therefore, the system should support users in examining these characteristics and identifying error patterns.
\textbf{R3: Analyze the cause of error patterns.}
After identifying error patterns, finding the causes of these errors is important for users to understand, diagnose, and improve the GNNs.
More detailed information is needed to understand the possible causes of error patterns.
Specifically, users need to inspect the graph structures and node features to determine the causes of error patterns.
According to the feedback from expert interviews, there are two main sources of wrong GNN predictions: noise in the training data and inaccurate feature aggregation in GNNs.
To predict the label of a node, GNN aggregates the node's own feature with the features of the neighboring nodes at each layer.
Noise in the training data, \textit{e.g.}, nodes with the same features but different labels, can confuse the GNN and lead to wrong predictions.
Inaccurate feature aggregation at any layer will also influence the GNN prediction of the node.
\section{GNNVis}
\label{sec:gnnvis}
This section describes the details of the proposed approach, {{\textit{GNNVis}}}.
We first provide a system overview.
Inspired by the fact that GNN prediction results are influenced by both the graph structure and the node features~\cite{zhou2018graph}, we define two proxy models and various kinds of metrics to help users effectively comprehend the causes of errors in GNN prediction results.
Similar to an ablation study when evaluating GNN models~\cite{xu2018powerful}, we use two proxy models, GNNWUF and MLP, to reflect the respective impacts of the graph structure and the node features on the GNN prediction results.
To further help understand the impact of the graph structure and node features, we also provide a number of metrics, including graph-structure-based metrics that take into account the graph structure but ignore the node features, and node-feature-based metrics that take the node features into account but ignore the graph structure.
Detailed information on the proxy models and metrics is provided below.
Finally, we introduce the detailed visualization design of each view.
\subsection{System Overview}
The {{\textit{GNNVis}}} system consists of three major modules: storage, data processing, and visualization.
The storage module stores and manages graph data and models. The data processing module implements the necessary procedures for analyzing the graph and model predictions, especially for calculating various kinds of metrics.
The processed data is then passed to the visualization module, which supports the interactive visual analysis of the GNNs.
The storage and data processing modules are developed using Python and integrated into a back-end web server built with Flask.
The GNN models are implemented with PyTorch.
We implement the visualization module as a front-end application using React, Typescript, and D3.
\subsection{Proxy Models Training and Metrics Definition}
\label{sec:models_metrics}
We define two proxy models to analyze the influence of the graph structure and the node features on GNN prediction results.
GNNs consider both graph structures and node features to make predictions.
Through the expert interviews, we learned that the experts are concerned about whether the graph structure or the node features have a greater impact on GNN predictions, so as to determine which component matters more.
Hence, we define two proxy models: GNNWUF and MLP.
The two proxy models have the same model architecture as the GNN but are trained using different input data: GNNWUF is trained only using the graph structure, while MLP is trained only using the node features.
When training GNNWUF, we use a one-hot encoding as the feature of each node, meaning that GNNWUF only considers the graph structure.
When a GNN considers only the features of the node itself, it degenerates into an MLP model.
Hence, MLP is chosen as the other proxy model; it only considers the node features and is used to evaluate their influence.
We train both proxy models with the same settings as the original GNN.
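The inputs to the two proxy models can be sketched as follows (our own illustrative code; the training procedure itself is unchanged):
\begin{verbatim}
import torch

def proxy_inputs(features, adj):
    # GNNWUF: keep the graph, replace node features with one-hot node
    # identifiers, so only the structure can carry information.
    n = adj.size(0)
    gnnwuf_input = (torch.eye(n), adj)
    # MLP: keep the node features, discard the adjacency structure
    # entirely and feed the features to a plain MLP.
    mlp_input = features
    return gnnwuf_input, mlp_input
\end{verbatim}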
To help users understand the graph dataset and the error patterns of the GNN prediction results, we further compute a number of node-level metrics.
Those metrics are derived from the expert interviews.
Details are presented in the following paragraphs.
\begin{itemize}
\item \emph{Ground truth label and prediction results of GNN, GNNWUF, and MLP}. The \textit{ground truth label} is a basic metric which enables us to inspect the \textit{GNN prediction results}.
\textit{The predictions of the three models} help in the investigation of how the GNN prediction is influenced by the node's own features and the features from neighboring nodes.
The comparison among the three models helps users understand how well the GNN makes use of the graph structure and the node features.
\item \emph{Confidence}. We use the GNN prediction probability on the \textit{GNN prediction label} for each specific node to delineate the \textit{confidence} of the GNN model on that node.
It makes model users aware of how certain the model is about a specific prediction.
\item \emph{Node degree}. GNNs mainly aggregate information over the neighborhood of each node, so the connectivity of a node can affect the final performance of a GNN model.
Therefore, the \textit{node degree} is considered in this study.
\item \emph{Center-neighbor consistency rate}. The \textit{center-neighbor consistency rate} depicts how consistent the \textit{labels} of the \textit{current node} and its surrounding neighbors are.
It can be divided into four major categories by considering both the \textit{ground truth labels} and the predictions:
(1) \textbf{\textit{Label consistency}} shows the percentage of neighbors which have the same \textit{ground truth label} as the \textit{current node};
(2) \textbf{\textit{Label-Prediction consistency}} describes the percentage of neighbors whose \textit{GNN prediction labels} are the same as the \textit{current node's ground truth label};
(3) \textbf{\textit{Prediction-Label consistency}} delineates the percentage of neighbors whose \textit{ground truth labels} are the same as the \textit{current node's GNN prediction label};
and (4) \textbf{\textit{Prediction consistency}} refers to the percentage of neighbors which have the same \textit{GNN prediction label} as the \textit{current node}.
These values indirectly reflect how many neighbors satisfy the corresponding constraints.
If the \textit{node degree} is zero, the \textit{consistency rate} is set to zero.
These metrics help users check whether the one-hop neighborhood exerts influence on the \textit{GNN prediction result} for the node of user interest.
\item \emph{Shortest path distance to training nodes}. We use the breadth-first search (BFS) algorithm to calculate the \textit{shortest path distance from the current node to the training nodes} (see the sketch after this list).
The algorithm starts traversing from the \textit{current node} and then visits the neighbors of the visited nodes. When it first detects a node in the training set, the algorithm regards the distance from that node to the \textit{current node} as the \textit{shortest path distance from the current node to the training nodes}.
The distribution of the training nodes, also called labeled nodes, can have a significant influence on the \textit{GNN prediction}~\cite{yang2019spagan}.
\item \emph{Nearest training nodes label distribution}.
To investigate the influence of the training nodes distribution on model training, we calculate the \textit{nearest training nodes label distribution}.
To do so, we first find the training nodes closest
to the \textit{current node} in terms of shortest path distance.
Then we count the frequency of the labels of these training nodes and normalize the frequencies into $[0,1]$. The normalized frequencies are considered to be the \textit{nearest training nodes label distribution}.
\item \emph{Nearest training nodes dominant label consistency.}
To help users quickly capture the dominant information of the \textit{nearest training nodes label distribution} and further diagnose the causes of errors in \textit{GNN prediction results}, we define the \textit{nearest label} as the label that occurs most frequently among the training nodes closest to a specific node in terms of topological distance.
Then, we consider whether the \textit{nearest label} is consistent with the \textit{ground truth label} of this specific node. If yes, we set the \textit{nearest training nodes dominant label consistency} for this node to \textit{True}; if not, it is set to \textit{False}; and if there are multiple \textit{nearest labels}, we set it to \textit{Not Sure}.
Such a metric is derived from the \textit{nearest training nodes label distribution} and the \textit{ground truth label} of the \textit{current node}.
If it is \textit{True}, the current node can get the correct information from the structure and the training nodes, and it has a high chance of being correctly classified. Otherwise, it has a high probability of being misclassified.
\item \emph{The label distribution of the top-k training nodes with the most similar features.}
The feature similarity between two nodes is defined as the cosine similarity between the feature vectors of the two nodes. We first find the \textit{top-k training nodes with the most similar features} to the node of user interest. Then we count the frequency of the labels of those training nodes and normalize the frequencies into $[0,1]$. They are then considered to be the \textit{label distribution of the top-k training nodes with the most similar features}.
With this metric, we can analyze the influence of node features on \textit{GNN predictions}. Empirically, we set $k=5$ in our implementation.
\item \emph{Top-k most similar training nodes dominant label consistency.}
Similar to the definition of the previous consistency metric, we calculate the \textit{top-k most similar training nodes dominant label consistency} in the same way. The major difference is that this metric reflects the influence of the training node features on the model prediction for the current node.
\end{itemize}
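The structure-based and feature-based metrics above can be sketched as follows (illustrative code under our own naming, not the system's implementation; \texttt{graph} is an adjacency list, \texttt{labels} maps nodes to classes, and \texttt{train} is the set of training nodes):
\begin{verbatim}
from collections import deque
import numpy as np

def distance_and_nearest_label_dist(graph, labels, train, node,
                                    n_classes):
    # BFS from `node`; collect all training nodes at the smallest
    # distance, then normalize their label frequencies.
    seen, frontier, dist, hits = {node}, deque([(node, 0)]), None, []
    while frontier:
        v, d = frontier.popleft()
        if dist is not None and d > dist:
            break                   # all nearest training nodes found
        if v in train:
            dist, hits = d, hits + [v]
            continue
        for w in graph[v]:
            if w not in seen:
                seen.add(w)
                frontier.append((w, d + 1))
    label_dist = np.zeros(n_classes)
    for v in hits:
        label_dist[labels[v]] += 1
    return dist, label_dist / max(label_dist.sum(), 1)

def top_k_similar_training_nodes(features, train, node, k=5):
    # Rank training nodes by cosine similarity of feature vectors.
    f = features[node]
    def cos(v):
        g = features[v]
        return g @ f / (np.linalg.norm(g) * np.linalg.norm(f) + 1e-12)
    return sorted(train, key=cos, reverse=True)[:k]
\end{verbatim}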
\subsection{Visualization}
As shown in Fig.~\ref{Fig.system}, the {{\textit{GNNVis}}} visualization module consists of a Control Panel (a), a Parallel Sets View (b), a Projection View (c), a Graph View (d), and a Feature Matrix View (e).
The Control Panel allows users to select a graph dataset and inspect different subsets of the dataset (\textit{e.g.}, all, training, validation, and testing).
The Parallel Sets View (Fig.~\ref{Fig.system}(b))
visualizes the distribution of node-level metrics, which are defined in Section~\ref{sec:models_metrics}. Users can select a subset of metrics to inspect their distribution and correlation. Then users can select a subgroup of nodes and inspect them in the Projection View (Fig.~\ref{Fig.system}(c)).
The Projection View presents a set of 2D projections of the selected nodes according to metrics summarized from different perspectives. Users can lasso a potential cluster of nodes to see their location in the whole graph in the Graph View (Fig.~\ref{Fig.system}(d)) and their feature distributions in the Feature Matrix View (Fig.~\ref{Fig.system}(e)).
\subsubsection{Parallel Sets View}
To provide a high-level summary (\textbf{R1}) and help users understand the datasets and identify error patterns in the \textit{GNN prediction results} (\textbf{R2}), we design the Parallel Sets View to visualize node-level metrics using Parallel Sets~\cite{bendix2005parallel}.
Previous work~\cite{ren2016squares, DBLP:journals/tvcg/WexlerPBWVW20, pezzotti2017deepeyes, DBLP:journals/tvcg/DingenVHMKBW19, DBLP:journals/tvcg/KahngAKC18} explored selecting a subset of sample properties to study machine learning models. Inspired by this strategy, and following prior applications of Parallel Sets~\cite{vosough2018using, DBLP:conf/vda/Chaudhuri18}, we use Parallel Sets to investigate error patterns in GNN prediction results.
Users can select which metrics are displayed in the Parallel Sets through the Parallel Sets Settings Modal. In general, displaying fewer than five axes in Parallel Sets is good practice to reduce visual clutter. Since Parallel Sets display categorical variables, we convert continuous metrics into categorical ones by grouping ranges of values into categories, so that they can also be shown in the Parallel Sets View.
As shown in Fig.~\ref{Fig.system}(b),
each axis of the Parallel Sets shows a categorical variable. The axis is partitioned into multiple segments representing the different categories of the variable. The width of each segment represents the number of nodes falling into that category, so we can directly see the distribution of the categories on the axis. Between two consecutive axes, multiple ribbons connect the two axes, each representing the nodes that simultaneously satisfy the conditions specified by the two axes.
Users can easily select a subset of nodes in the dataset and further investigate their node metrics and the \textit{GNN model prediction results}.
When users click on a segment, the corresponding category of that axis will be selected. Also, when users click on the ribbon in the Parallel Sets, the corresponding set of nodes will be selected.
Besides, the axes in the Parallel Sets can be easily reordered by users through drag-and-drop.
By filtering the nodes according to node-level metrics such as \textit{correctness} and \textit{ground truth label}, users can easily select a node subset of their interest for further analysis.
A common alternative for visualizing multivariate data is the Parallel Coordinates Plot (PCP)~\cite{inselberg1990parallel}.
Each data point is visualized as a single line across different attributes.
However, when it comes to categorical data, it is challenging to identify the proportions of data that fall into specific categories. Compared with PCP, Parallel Sets intuitively show the distribution of the categories on each axis and the correlation between multiple axes. Thus,
Parallel Sets were chosen to display the overall distribution of node attributes.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{figs/GNNVis_Projection_View_3.pdf}
\caption{(a) The links connecting different kinds of information of the same nodes shown in different planes will be displayed when users lasso a group of nodes in one plane. (b-e) Node glyph designs in the planes of the Projection View. Color indicates the corresponding label. The color legend is shown in the bottom left of Fig.~\ref{Fig.system}(d).}
\label{Fig.projection_view_overall}
\end{figure}
\subsubsection{Projection View}
\label{sec:ProjectionView}
With the overview of the dataset and GNN models provided by the Parallel Sets View, we further design the Projection View to give users more insights into the subset of nodes selected in the Parallel Sets View (\textbf{R2, R3}). We group subsets of node-level metrics, display them in glyphs, and further project them onto the 2D plane.
The Projection View allows users to investigate the similarity of nodes regarding different perspectives.
It can be helpful for investigating whether the nodes with similar node metrics share similar error patterns.
In the Projection View, we provide a set of linked projection planes of the nodes that use different features.
Different from similar designs in EmbeddingVis~\cite{DBLP:conf/ieeevast/LiNHCYM18}, we design different node glyphs to display different combinations of node-level metrics.
To project those node glyphs and avoid overlap, we use t-SNE~\cite{maaten2008visualizing}, a widely used projection technique, together with a force-directed collision-avoidance method to prevent the node glyphs from overlapping.
When users lasso-select a set of nodes in a projection plane, links between the same nodes in different planes are shown to help users identify the nodes and other aspects of those nodes' properties, as shown in Fig.~\ref{Fig.projection_view_overall}. When users hover over a node glyph, a legend and detailed information for that glyph are displayed.
However, due to the limited screen space, the view cannot display hundreds, let alone thousands, of node glyphs.
Therefore, we apply a hierarchical clustering algorithm with complete linkage to cluster the nodes based on a specific distance function~\cite{clusterAnalysis}.
Cluster-level node glyphs are designed by aggregating the node-level metrics of the individual nodes in each cluster.
To help users further inspect individual nodes, after selecting a subset of cluster-level node glyphs, users can switch to the ``Detail'' mode, in which the Projection View displays individual node glyphs for the nodes in the selected clusters. This design greatly enhances the scalability of the Projection View.
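This cluster-then-project pipeline can be sketched as follows (illustrative code; it assumes a precomputed pairwise distance matrix \texttt{dist}, e.g., built from one of the distance functions defined below, and leaves the front-end collision-avoidance step out):
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.manifold import TSNE

def cluster_and_project(dist, n_clusters):
    # Hierarchical clustering with complete linkage on the distances.
    condensed = squareform(dist, checks=False)
    cluster_ids = fcluster(linkage(condensed, method='complete'),
                           n_clusters, criterion='maxclust')
    # 2D t-SNE layout using the same precomputed distances
    # (perplexity must be smaller than the number of items).
    coords = TSNE(metric='precomputed', init='random',
                  perplexity=30).fit_transform(dist)
    return cluster_ids, coords
\end{verbatim}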
In our implementation, we categorize the metrics into four groups and provide one projection for each group of node metrics. The four projection planes are \textit{prediction results comparison}, \textit{surrounding nodes label consistency}, \textit{training nodes structure influence}, and \textit{training nodes feature influence}, respectively. Different glyph designs are also proposed for the nodes. We introduce them one by one in the following paragraphs.
\textbf{\textit{A. Prediction results comparison.}} This plane aims to help users compare the prediction results of different models and reveal the relative influence of structure and features for each node or cluster. The metrics used in this plane include the \textit{ground truth label} $GT$, the \textit{prediction labels of the three models}, \textit{i.e.}, GNN, GNNWUF, and MLP, $P=[P_1, P_2, P_3]$, and the \textit{confidence} of the GNN prediction $CONF$.
As shown in Fig.~\ref{Fig.projection_view_overall}(b), the \textit{prediction results of the three models} are shown in the pie chart. The inner circle encodes the \textit{ground truth label}. The outer circular ring encodes the \textit{confidence}. The radius of the whole node glyph encodes the size of the cluster. Through such a node glyph, users can easily compare the \textit{ground truth label} and the \textit{model prediction results} and understand how confidently the GNN model makes predictions. In the projection, nodes with similar metrics are placed in close proximity. Users can see whether there are clusters of nodes with the same \textit{ground truth labels} and \textit{predictions}, which helps GNN model developers and users further analyze what causes the model to make such predictions.
For the projection and clustering, the distance between Node $a$ and Node $b$ in this plane is defined as below:
\begin{equation}
\begin{split}
D_{1}^{2}(a,b) = \mathbb{I}\{GT_a \neq GT_b\}+\sum_{j=1}^{3}\mathbb{I}\{P_{aj} \neq P_{bj}\}\\+(CONF_{a}-CONF_{b})^{2}
\end{split}
\end{equation}
where $\mathbb{I}\{*\}$ is an indicator function which evaluates to 1 when the expression is true and 0 otherwise. Such a distance function guarantees that the value of each term is between 0 and 1 and that two nodes with identical metrics have distance zero.
\textbf{\textit{B. Surrounding nodes label consistency. }}
To help users explore the \textit{label consistency} between a node and its neighboring nodes,
we show the \textit{ground truth label} $GT$, the \textit{degree} $DEG$, and the \textit{center-neighbor consistency rate} $CN = [C_{GT}N_{GT}, C_{GT}N_{PT}, C_{PT}N_{GT}, C_{PT}N_{PT}] \in [0,1]^{4}$ in this plane, where $C_{GT}N_{GT}$ represents \textit{Label consistency}, $C_{GT}N_{PT}$ represents \textit{Label-Prediction consistency}, $C_{PT}N_{GT}$ represents \textit{Prediction-Label consistency}, and $C_{PT}N_{PT}$ represents \textit{Prediction consistency}.
The node glyph (Fig.~\ref{Fig.projection_view_overall}(c)) is designed to show this group of metrics. The design is inherited from the Radar Chart, as it can display continuous variables. The color of the polygon encodes the \textit{ground truth label}. The radius of the whole node glyph encodes the size of the cluster. Clusters are easy to spot, since node glyphs with similar metric values share a similar shape. For the projection and clustering, the distance between Node $a$ and Node $b$ is defined as:
\begin{equation}
\begin{split}
D_{2}^{2}(a,b) = (Norm(DEG_{a})-Norm(DEG_{b}))^{2} \\ + \mathbb{I}\{GT_a \neq GT_b\} +|CN_a-CN_b|^2,
\end{split}
\end{equation}
where $Norm(d)$ is the normalized degree that bounds the value between 0 and 1. $|A|$ means the norm of the vector $A$.
\textbf{\textit{C. Training nodes structure influence. }}
To help users capture the structural influence of the training nodes on the \textit{GNN model prediction}, the metrics we visualize in this plane include the \textit{GNN prediction label} $P_1$, the \textit{shortest path distance to training nodes} $DIS$, and the \textit{normalized nearest training nodes label distribution} $SPD\in [0,1]^{C}$. Here $C$ is the number of classes. To encode $DIS$ in the node glyph and highlight the difference between smaller values, \textit{i.e.}, $DIS\in[0,5)$,
we define $Closeness=\max(0,1-0.2\cdot DIS)$. It depicts the \textit{closeness of the nearest training nodes to the current node}.
The node glyph (Fig.~\ref{Fig.projection_view_overall}(d)) is designed to show this group of metrics.
The length of the line on the top of the rectangle encodes the \textit{closeness}. The rectangle on the right-hand side of the glyph shows the \textit{distribution of ground truth labels of the training nodes with the shortest path distance to that node}. The width and height of the whole node glyph encode the size of the cluster.
The left-hand side rectangle encodes the \textit{GNN prediction label}. This helps users analyze the correlation between those variables. A high correlation between $P_1$ and the dominant component of $SPD$ indicates that the closest training nodes have a strong influence on \textit{GNN's prediction for the current node}. For the projection and clustering, the distance between Node $a$ and Node $b$ is defined as:
\begin{equation}
\begin{split}
D_{3}^{2}(a,b) = \mathbb{I}\{P_{a1} \neq P_{b1}\} + |SPD_a-SPD_b|^2 \\+ (Closeness_a-Closeness_b)^2.
\end{split}
\end{equation}
\textbf{\textit{D. Training nodes feature influence.}}
We further use another plane to help users capture the feature influence of the training nodes.
The metrics we use in this plane include the \textit{GNN prediction label} $P_1$ and $KFS\in [0,1]^{C}$, the \textit{label distribution of the top-k training nodes with the most similar features}.
The node glyph (Fig.~\ref{Fig.projection_view_overall}(e)) shares a similar visual design with Fig.~\ref{Fig.projection_view_overall}(d). The difference is that the right-hand side rectangle encodes the \textit{ground truth label distribution of the top-k most-similar-feature training nodes} and the node glyphs do not have a line at the top.
It enables users to analyze \textit{GNN prediction results} from the perspective of features. The clusters on the projection plane indicate nodes that have been similarly affected by the features. Combined with the Feature Matrix View, we can determine which features may help or hurt the GNN predictions. We use a distance function similar to that defined for the \textit{training nodes structure influence} plane:
\begin{equation}
D_{4}^{2}(a,b) = \mathbb{I}\{P_{a1} \neq P_{b1}\} + |KFS_a-KFS_b|^2.
\end{equation}
There are a few design alternatives for those node glyphs. For the node glyph in the \textit{prediction results comparison} plane, we could use a $2\times2$ grid to represent the \textit{ground truth label} and the \textit{three model prediction results}. However, such a design cannot effectively help users compare the metrics on the diagonal and would confuse users, so it is not adopted. For the node glyph in the \textit{surrounding nodes label consistency} plane, an alternative design is to use a Parallel Coordinates Plot to display the five continuous metrics. However, it is generally hard for users to distinguish between two such node glyphs, so this design is not used in this plane. For the node glyphs in the \textit{training nodes structure influence} and \textit{feature influence} planes, we could use a node glyph similar to that of the \textit{prediction results comparison} plane, encoding the GNN prediction result in the inner circle and the \textit{label distribution} in the outer ring. However, to avoid any confusion between the node glyphs of those planes, we do not use this design in the \textit{training nodes structure influence} and \textit{feature influence} planes.
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{figs/GNNVis_Graph_View.png}
\caption{The Graph View enables users to inspect the graph structure. The node glyph in the Graph View enables users to compare the \textit{prediction results of the three models} and the \textit{ground truth label} simultaneously. The color legend indicates which class each color represents.
The legend for the node glyphs shows the position at which each metric is encoded.
The colors in the legend for the node glyphs are only intended to show an example of a node glyph.}
\label{Fig.alternative_design_graph_view}
\end{figure}
\subsubsection{Graph View}
We use the classic node-link diagram with a force-directed collision-avoidance layout to visualize the graph dataset. Users can get a sense of the distribution of the selected nodes in the graph, and inspect the neighborhood of the nodes (\textbf{R2, R3}).
To further facilitate the convenient exploration of the reasons for errors, we design a node glyph to encode a group of node-level metrics.
The experts commented that they are interested in the \textit{ground truth label}, and the \textit{predictions of the GNN, GNNWUF, and MLP models}. Combining the four metrics, they are able to investigate the potential error types of the nodes. As shown in Fig.~\ref{Fig.alternative_design_graph_view}, the glyph designed to present the node-level metrics is similar to the design used in the Projection View. A legend for the glyph is also displayed at the corner of the Graph View as an easy reference for users.
The set of nodes selected in the Parallel Sets View or the Projection View is highlighted in the Graph View. Users can hover over a node in the Graph View, which will be further highlighted with its radius doubled. The Graph View allows users to quickly check any interesting neighboring nodes. Users can also switch to the ``Extended'' mode,
which further highlights the one-hop or two-hop neighbors of the selected nodes, enabling users to explore different hops of neighboring nodes.
An overview of the graph is displayed in the bottom right-hand corner to support users in navigating the graph. Users can click a specific position in the overview to change the displayed area of the graph. Users can choose to filter out the unfocused nodes
to accelerate the rendering and reduce the visual clutter in the graph.
To investigate the node features and \textit{most similar features of training nodes}, users can click on the nodes of interest in the Graph View and further explore the node-level features in the Feature Matrix View.
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\linewidth]{figs/GNNVis_Feature_Matrix_View.png}
\caption{The Feature Matrix View includes a brushable bar chart (top) and a feature matrix (bottom).}
\label{Fig.feature_matrix_View_design}
\end{figure}
\subsubsection{Feature Matrix View}
We design the Feature Matrix View to help users further explore the node features (\textbf{R3}), as shown in Fig.~\ref{Fig.feature_matrix_View_design}.
The Feature Matrix View consists of two components, \textit{i.e.}, a brushable bar chart and a feature matrix.
\zhihua{}{
We first assume that all the features used in our dataset range from zero to one.
}
The feature matrix displays all the node features.
The color encodes the prediction label of the corresponding node and the opacity encodes the specific feature value. In the brushable bar chart, the height of each bar encodes the number of entries with a value larger than 0 in the corresponding feature dimension of the feature matrix.
Users can brush a range of bars in the brushable bar chart and thus the feature matrix will display the specific range of feature dimensions.
This makes it convenient for users to inspect the features of nodes with a high dimensionality; without this design, the scalability of this view would not be guaranteed.
\zhihua{}{Users can change the sorting method of the feature dimensions: they can be sorted based on the node ordering or on the frequency of features. }
When users select a subset of nodes in the Parallel Sets View or the Projection View, this view will display the features of the selected nodes. \zhihua{}{The hierarchical clustering algorithm and optimal leaf ordering~\cite{DBLP:conf/ismb/Bar-JosephGJ01} will be employed to generate the node ordering.
After sorting the nodes, the similarity between every two consecutive nodes is calculated. If they are very similar, we highlight them by adding a border to the rectangles in the rows of the corresponding nodes.
When a node is selected in the Graph View, this view will display the features of that node and of the \textit{top-k most similar feature training nodes}.
The training nodes will be sorted based on feature similarity with that node.
Sorting based on the frequency of features uses a heuristic to order the feature dimensions.
For each feature dimension, we first count its frequency $|N|$, \textit{i.e.}, the number of nodes in which the feature appears, and then count the support $|SUPP|$, \textit{i.e.}, the number of those nodes that have the same feature and the same prediction label as the first node.
We then calculate the support rate of the feature by the formula $SUPPRATE=|SUPP|/|N|$.
A feature dimension with a higher support rate receives a higher ranking; when two dimensions have the same support rate, they are sorted by the frequency of the features.
In this way, we can figure out which features are supportive of the predictions of the GNN model.}
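To make the frequency-based sorting concrete, a minimal Python sketch of this heuristic is given below. The function name \texttt{sort\_feature\_dims}, the binary node-by-feature matrix, and the use of row 0 as the ``first node'' are our own illustrative assumptions, not the actual {{\textit{GNNVis}}} implementation.
\begin{verbatim}
import numpy as np

def sort_feature_dims(features, pred_labels):
    # features    : (num_nodes, num_dims) numpy array with entries in [0, 1]
    # pred_labels : (num_nodes,) numpy array, predicted label of each node
    # The "first node" of the description above is taken to be row 0.
    anchor_label = pred_labels[0]
    keys = []
    for d in range(features.shape[1]):
        active = features[:, d] > 0             # nodes where feature d appears
        n = int(active.sum())                   # frequency |N|
        supp = int((active & (pred_labels == anchor_label)).sum())  # |SUPP|
        supp_rate = supp / n if n > 0 else 0.0  # SUPPRATE = |SUPP| / |N|
        keys.append((supp_rate, n))
    # higher support rate first; ties broken by higher frequency
    return sorted(range(features.shape[1]), key=lambda d: keys[d], reverse=True)
\end{verbatim}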
\section{Evaluation}
\label{sec:evaluation}
In this section, we demonstrate the effectiveness and usability of the system through two case studies and structured interviews with GNN experts. \zhihua{}{We conduct two case studies with two experts, E1 and E2, who have both been mentioned in Section~\ref{subsec:design_requirements}. }
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{figs/GNNVis_Case_One_Parallel_Sets.pdf}
\caption{The correlation between GCN correctness and label: (a) All nodes in Amazon Photo dataset; (b) Training nodes in Amazon Photo dataset.}
\label{Fig.case_photo_1}
\end{figure}
\subsection{Case One: Error Pattern Analysis of GCN on Amazon Photo Dataset}
This case study shows how our approach helps the model researcher explore the error patterns of the GCN model, one of the most representative GNN models, on the Amazon Photo dataset~\cite{shchur2018pitfalls}.
The Amazon Photo dataset is a co-purchasing network of 7,650 products. In this dataset, each node represents a product and is classified into one of eight classes, including Film Photography, Digital Concepts, Binoculars \& Scopes, Lenses, Tripods \& Monopods, Video Surveillance, Lighting \& Studio, and Flashes.
Each edge is a co-purchasing relationship, \textit{i.e.}, products are purchased by the same customer.
The feature of each node is a 0-1 valued vector indicating whether the corresponding words appear in the product reviews.
\subsubsection{Developing Initial Hypotheses about the Possible Error Patterns in GNN Results}
E1 started his analysis from the Parallel Sets View. E1 found that the GCN model achieves an accuracy of 91.15\% on the whole dataset. The test accuracy is 91.80\%.
The model performance is consistent with the results reported in other papers~\cite{shchur2018pitfalls}.
E1 changed the first axis of the Parallel Sets View to be GCN correctness by dragging the corresponding axis to the first axis.
The total number of wrongly predicted nodes is 677.
Then E1 explored which variables the wrong predictions correlated with.
E1 put \textit{Label} on the second axis and found that the GCN model makes the highest percentage of wrong predictions on the Class 7 nodes.
This is indicated by the ribbon link flowing from the wrong category to the \textit{ground truth label} \textit{``7''}, which occupies the largest portion of the \textit{ground truth label} \textit{``7''} in Fig.~\ref{Fig.case_photo_1}(a).
E1 used the Control Panel (Fig.~\ref{Fig.system}(a)) to see the training node information by ticking \textit{``Training''} only.
E1 found that the training nodes are sampled with even probability from eight classes, which is shown by a similar distribution of \textit{ground truth labels} in training nodes and all the nodes (Fig.~\ref{Fig.case_photo_1}(b)).
The number of nodes with the \textit{ground truth label} \textit{``7''} is small compared with the number of nodes of other labels and the number of training nodes with the ground truth label \textit{``7''} is also small.
\zhihua{}{Perhaps this is the reason why GCN is unable to correctly classify the nodes in Class 7.
However, E1 also found that the number of training nodes with the \textit{ground truth label} \textit{``0''} is also small, yet the GCN model correctly classifies most of the nodes in Class 0. E1 doubted his hypothesis and decided to further investigate the cause of the wrong predictions (\textbf{R1}). }
\subsubsection{Forming the Hypothesis about Possible Error Patterns}
\zhihua{}{E1 selected four axes, including \textit{Label}, \textit{GCN correctness}, \textit{nearest training nodes dominant label consistency}, and \textit{top-k most similar training nodes dominant label consistency}, displayed in the Parallel Sets View using the Parallel Sets Settings Modal, because E1 thought that those variables are important for analyzing the error patterns.
After hovering over \textit{Label} \textit{``0''} and \textit{Label} \textit{``7''}, E1 found that for \textit{nearest training nodes dominant label consistency}, most nodes with \textit{Label} \textit{``0''} have a true value, while most nodes with \textit{Label} \textit{``7''} have a false value.
This shows that, from the graph structure perspective, it is easy for nodes with \textit{Label} \textit{``0''} to find training nodes with the same \textit{ground truth label} by searching the training nodes with the shortest path distance to the \textit{current nodes}, while nodes with \textit{Label} \textit{``7''} cannot satisfy these conditions.
Therefore, E1 speculated that this is the reason for more classification errors when the GCN model is applied to the nodes of \textit{Label} \textit{``7''} (\textbf{R2}).
}
\subsubsection{Analyzing the Cause of Error Patterns}
\zhihua{}{To further verify the cause of the error patterns identified above, E1 selected 150 wrongly-classified nodes of \textit{Label} \textit{``7''} by GCN in the Parallel Sets View (Fig.~\ref{Fig.system}(b)) and further explored other views. }
\zhihua{}{From the planes of \textit{training nodes structure influence and feature influence} in the Projection View (Fig.~\ref{Fig.system}(c)), few nodes of \textit{Label} \textit{``7''} appear in the label distribution, and many prediction labels are consistent with the largest component on the right-hand side of the glyph.
From the plane of \textit{surrounding nodes label consistency}, the \textit{label consistency} of most nodes is relatively small. This can be explained by the fact that the labels of the neighbors of these nodes are mostly inconsistent with the \textit{label} of the \textit{current node}.
E1 further explored three training nodes that are also misclassified, as shown in Fig.~\ref{Fig.system}(c). E1 lasso-selected them, and then selected one of the nodes in the Graph View. E1 found that it has a large number of neighbors with other \textit{ground truth labels}. In the Feature Matrix View, there are also many training nodes with other \textit{labels}.
Therefore, E1 believed that the error of this training node is due to the existence of a large number of neighboring nodes with other \textit{ground truth labels} around it. It is also observed from node glyphs that the \textit{GCNWUF prediction result} is consistent with the \textit{GCN prediction result}, and the \textit{MLP prediction result} is consistent with the current node's \textit{ground truth label}.
It also provides support for the conclusion that the structural impact on the \textit{GCN prediction result} may be larger than the impact of features on the \textit{GCN prediction result} for this training node (\textbf{R3}).}
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{figs/GNNVis_Case_Two_2.pdf}
\caption{E2 selected a cluster (a1) to inspect in the Projection View (a). Then E2 selected a node in the Graph View (b) to further inspect its neighborhood. E2 found that in the Feature Matrix View (c), the first few words are ``markov'', ``model'', ``chain'' (c1), which are common words in papers belonging to Probabilistic Methods. This may be a reason for the misclassification of that node.}
\label{Fig.case_cora_ml_1}
\end{figure}
\subsection{Case Two: Error Pattern Analysis of GAT on Cora-ML Dataset}
The model developer, E2, often needs to use GNNs to model network data in real applications. This case study shows how {{\textit{GNNVis}}} assists him in analyzing another representative GNN model (\textit{i.e.}, the GAT model) on the Cora-ML dataset~\cite{bojchevski2017deep}.
Specifically, the Cora-ML dataset is a citation network of 2810 scientific publications. Each node in the Cora-ML dataset represents a paper and is classified into one of seven classes, including Case-Based, Theory, Genetic Algorithms, Probabilistic Methods, Neural Networks, Rule Learning, and Reinforcement Learning.
Each edge is a citation relationship. Each feature of the node is a vector, with each feature element ranging from 0 to 1. A feature element larger than 0 represents that the paper abstract contains the corresponding word.
\subsubsection{Forming the Hypothesis about Possible Error Patterns}
In the Parallel Sets View, E2 found that the GAT achieves an accuracy of 86.16\% on the whole dataset and 84.70\% on the testing set. \zhihua{}{After inspecting the overview of metrics in the Parallel Sets View, E2 selected three axes, including \textit{nearest training nodes dominant label consistency}, \textit{top-k most similar training nodes dominant label consistency}, and \textit{GAT correctness}, because E2 thought that those metrics are important and can support a detailed analysis. E2 found an interesting set of nodes with the \textit{nearest training nodes dominant label consistency} as true, the \textit{top-k most similar training nodes dominant label consistency} as false, and the \textit{GAT correctness} as wrong.
E2 decided to further explore them and select those nodes in the Parallel Sets View by clicking the ribbon satisfying the above conditions. }
\zhihua{}{In the Projection View (Fig.~\ref{Fig.case_cora_ml_1}(a)), E2 found that in the \textit{training nodes feature influence} plane, the left side and the right side of the glyphs are the same color (Fig.~\ref{Fig.case_cora_ml_1}(a1)).
This consistency means that the \textit{GAT prediction labels} are consistent with the \textit{top-k similar features training nodes dominant labels}. It implies that the node features may have a great impact on \textit{GAT prediction}.
Then, E2 selected one of the clusters, and then checked the other planes of the Projection View.
In the \textit{training nodes structure influence} plane, E2 can see that the left-hand side and the right-hand side of the highlighted node glyphs are different colors. In the \textit{surrounding nodes label consistency} plane, it can be seen that the \textit{label consistency} is generally large, indicating that the surrounding \textit{ground truth labels} are consistent with the current node's \textit{ground truth label}. In the \textit{prediction results comparison} plane, it is found that the \textit{prediction results of MLP} are consistent with the \textit{prediction results of GAT}. It also shows that the node features of this cluster of nodes may have a negative impact on the GAT model performance on them (\textbf{R1, R2}).}
\subsubsection{Analyzing the Cause of Error Patterns}
\zhihua{}{To verify his observation, E2 also explored the Graph View and selected a node for further checking. E2 found that the node has some neighbor nodes with a different \textit{ground truth label}, as shown in Fig.~\ref{Fig.case_cora_ml_1}(b). In the Feature Matrix View (Fig.~\ref{Fig.case_cora_ml_1}(c)), E2 can see that the \textit{ground truth labels} of most training nodes are the same as its \textit{GAT prediction label}.
The first few words are ``markov'', ``model'', ``chain'' (Fig.~\ref{Fig.case_cora_ml_1}(c1)), which are common words in papers about Probabilistic Methods. This article contains these words, but its ground truth class is Neural Networks. Therefore, these features may be one of the reasons for the misclassification made by GAT, and these features have a significantly negative impact on the performance of the GAT model (\textbf{R3}).
}
\subsection{Expert Interviews}
\zhihua{}{To evaluate whether {{\textit{GNNVis}}} can help users find possible error patterns and is easy to understand and use, we conducted interviews with four experts (E3, E4, E5, E6).}
The four experts have diverse research interests in the field of GNNs. E3 has experience in research on Graph Pooling and Graph Agreement Models. E4 has experience in research on new GNN models and GNN model robustness. E5 has experience in applying GNN in healthcare such as predicting drug interactions. E6 has experience in utilizing GNN to generate Graph Embedding.
None of the four experts is a co-author of this paper, and none of them knew the details of {{\textit{GNNVis}}} before the interviews.
The expert interviews were conducted as follows:
The experts first reviewed the introductory materials provided by us, including slides that illustrate the problem to be solved and the system overview, and a video that demonstrates the case studies, the system design, and the workflow of using the system. After the experts reviewed those materials to learn how {{\textit{GNNVis}}} works, we asked them to independently explore the {{\textit{GNNVis}}} demo by following the demonstrated workflow to find the causes of the prediction errors of individual nodes and extract general error patterns in GNN prediction results.
\zhihua{}{Finally, we asked them to finish a post-interview questionnaire to collect their feedback on {{\textit{GNNVis}}}.}
The questionnaire mainly comprises the meta-information about the experts, the evaluation of the effectiveness of {{\textit{GNNVis}}}, and the evaluation of the usability of each component and overall design of {{\textit{GNNVis}}}.
\zhihua{}{Results and feedback are summarized as follows: }
All experts stated that they have no prior experience in using visualization to diagnose GNNs. E3 said that he has used networkx\footnote[2]{https://networkx.github.io/} for visualizing graph datasets but not for the diagnosis of GNNs.
Without visualization, they inspected their GNN models in the following ways.
\zhihua{}{E3 commented that he would first inspect traditional evaluation metrics such as Accuracy, Recall, Precision, and Loss to monitor the training process of a GNN. Then he would analyze the error of each class and further check network properties like the average length of the shortest paths, the average clustering coefficients, the average degree, and so on.}
E4 said that he would check whether the output of each GNN layer is correct or not. E5 commented that he also uses the training loss and accuracy to monitor the training process of a GNN. Moreover, he would further check the attention weights between misclassified nodes and their neighboring nodes. E6 mentioned that he would first find out whether a GNN works on small test sets and, if it works, he would train the model on a large dataset. To evaluate a GNN, E3 pointed out that it should be evaluated on different splits of the training dataset, due to the fact that the performance of a GNN is greatly influenced by the training split of the dataset. E5 commented that memory usage and inference speed should also be considered in the evaluation of a GNN.
\noindent
\textbf{Effectiveness:}
After exploring the {{\textit{GNNVis}}}, all experts appreciated our efforts in making such an effective system to help them understand and diagnose the GNN models.
E3 said that {{\textit{GNNVis}}} can help him check properties of the dataset and inspect the model behaviors. E5 commented that it can help him extract the patterns of misclassified nodes. E6 mentioned that it makes analyzing GNNs more intuitive. It can also help users simultaneously analyze the GNN prediction results from multiple perspectives.
Through the exploration of {\textit{GNNVis}},
E3 found that most of the error cases occur for \textit{ground truth label ``3''} in the Cora dataset based on the GCN model.
He could observe that most misclassifications are related to the \textit{nearest training nodes label distribution}.
It means that a few edges in the graph dataset might be noise for node classification. E6 found that if the \textit{degree} of a node is small, it is hard for models to classify it correctly, since such nodes carry little effective information to support a correct classification. Based on these observations, the experts proposed solutions to these problems. E3 suggested that we should add operations to correct edge noise in the model. E6 suggested building a GNN model that implements different strategies to make predictions on nodes: if a node has a high \textit{degree}, the model can be made to consider more connection information; if a node has a low \textit{degree}, the model can be made to consider more feature information of that node. Moreover, he observed that a few nodes misclassified by the GNN can be correctly classified by MLP. He commented that utilizing multiple models to make predictions may be a good strategy to enhance model performance. The experts said that using {{\textit{GNNVis}}} inspires them to do something different than before to further inspect the GNN model. E3 would utilize {{\textit{GNNVis}}} to check the plane of \textit{training nodes structure influence} in the Projection View to find the misclassified nodes with ground truth label 3, before and after training models, to see whether his model can correct the wrong edges or not.
E6 would explore which patterns are shared by the misclassified nodes and find which kinds of nodes the specific model would be suitable to make predictions on.
\noindent
\textbf{Usability:}
All experts agreed that {{\textit{GNNVis}}} is easy to use and easy to understand.
For Parallel Sets View, they commented that this view offers important insights for them to analyze the GNN model prediction. E3 said that \textit{``Parallel Sets View helps me check what the major problem of the models is''}. E5 mentioned that he can clearly see the ratio of correctly and wrongly classified nodes and whether the structure or feature has an influence on the performance of GNN models. E6 pointed out that it is intuitive to use that view. After adjusting the order of axes in the Parallel Sets View, he can get useful information to further support his analysis of the GNN model.
E3 liked the Projection View, as it allows him to associate model behavior with node characteristics.
For Graph View and Feature Matrix View, most of the experts also appreciated that they are easy to use and understand.
Different from other experts, E5 preferred the Feature Matrix View over the Graph View because he is concerned more about the features of nodes when using GNNs in the drug interaction prediction.
\noindent
\textbf{Suggestions:}
Experts also gave helpful suggestions for improving {{\textit{GNNVis}}} to further support their analysis of GNN models. E3, E5, and E6 mentioned that they would like to see the node id and other metric information when hovering over the nodes in the Projection View, so we have implemented this feature. E3 pointed out that basic properties of nodes like connectivity, clustering coefficient, and centrality should be calculated and displayed; this would further help him check the correlation between those metrics. Moreover, he would like to be able to easily download figures from the system, since he considers the system powerful. E3, E5, and E6 commented that the system should support customized datasets such as multiple relation graphs.
E5 also suggested a ``what-if'' analysis that enables users to dynamically insert nodes, change the node features, and observe corresponding changes. Future work may include those functions.
\section{Discussions and Future Work}
\label{sec:discussion}
\textbf{Generalizability:} {{\textit{GNNVis}}} can be applied to analyze various kinds of GNN models and different datasets. However, it can currently only be utilized to analyze the task of node classification; it does not yet support the analysis of link prediction or graph classification tasks. Moreover, if the dataset has multiple relations or the edges have features, the system currently cannot be directly used to analyze such customized data. Also, users may want to inspect self-defined metrics or metrics defined in other papers, but currently the system does not support the inspection of customized metrics.
\noindent
\textbf{Scalability:} \zhihua{}{
One limitation of {{\textit{GNNVis}}} is its scalability, and we have attempted to mitigate this issue by different means. For example,
the Projection View displays an individual node glyph for each node. Due to the limited screen space, we cannot display more than around 300 nodes.
We improved the Projection View by using a hierarchical clustering algorithm to make it scale up to more nodes, as described in Section~\ref{sec:ProjectionView}.
The node glyphs are used to represent clusters and details can be checked on demand. This significantly improves the scalability of the Projection View.
For the Graph View, in order to accelerate the rendering speed, we enable users to display only the focused nodes and their neighbors, without rendering other nodes.
}
\noindent
\textbf{Future Work:}
In the future, we plan to generalize {{\textit{GNNVis}}} to other graph-related tasks, like link prediction and graph classification.
We plan to make the Parallel Sets View and the Projection View more configurable, e.g., enabling users to define and display their own metrics, such as clustering coefficients. We also want to further improve the running performance of {{\textit{GNNVis}}} and support graph datasets with more nodes and higher-dimensional features.
We also want to generalize {{\textit{GNNVis}}} to support dynamically inserting nodes and edges to see the corresponding changes in GNN prediction results. This will help users understand more deeply how the graph structure helps a model make correct predictions.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we present {{\textit{GNNVis}}}, a visual analytics system to help model developers and users understand and diagnose GNN model prediction results. {{\textit{GNNVis}}} comprises four visualization components: the Parallel Sets View enables users to see the distribution of metrics; the Projection View presents a set of 2D projections of the selected nodes according to metrics summarized from different perspectives enabling users to extract potential clusters of nodes; the Graph View shows the whole graph; and the Feature Matrix View shows the selected node feature information. It further enables users to check the detailed information of individual nodes. All four visualization components are linked together to support users to analyze GNN models simultaneously from multiple angles and extract general error patterns in GNN prediction results.
Two case studies and expert interviews demonstrate the effectiveness and usability of our system {{\textit{GNNVis}}}.
\section{Introduction}
\label{sec:intro}
Astrophysical shocks are often considered to be associated with the most energetic events in the universe. After the death of massive stars, the destruction of white dwarfs (WDs), or mergers of compact objects, shocks are expected to be launched into the ambient medium \citep{chevalier82,Matzner99,Meszaros97}. For a normal core collapse supernova or a thermonuclear runaway explosion, these supernova-driven shocks are usually non-relativistic, expanding in the surrounding media with a speed of $\sim$ a few $\times 10^4$ \mbox{\,km s$^{-1}$} \citep{chevalier16,parrent14}. Some special supernovae and the compact binary coalescence of two neutron stars (NSs) or a NS and a black hole can launch relativistic outflows that power energetic gamma-ray bursts (GRBs) \citep{meszaros06,zhang18_grb_book}. In the case of GRBs, both internal and external relativistic shocks are possible, with the former believed to power the observed $\gamma$-ray emission \citep{Rees94} and the latter believed to power the multi-wavelength afterglow \citep{Meszaros97,Sari98}. These shocks may produce low frequency radio bursts through synchrotron maser mechanisms \citep{usov00,sagiv02,lyub14,belo17,waxman17,plotnikov19,metzger19,belo20,Yu21}.
\par
Fast radio bursts (FRBs) are mysterious radio transients from cosmological distances \citep{Lorimer07,Petroff19,cordes19}. It is unclear what astrophysical objects are the main sources of FRBs and whether they are generated within the magnetosphere of a highly magnetized object (e.g. a magnetar) or from relativistic shocks \citep{zhang20_nat_nov}. The very high observed event rate, e.g. more than $10^3$ $\rm sky^{-1}$ events every day \citep{Petroff19}, suggests a high event rate density, i.e. $\sim 10^4$ $\rm Gpc^{-3} \rm yr ^{-1}$ \citep{luo20,ravi20}. One is therefore expected to detect a few of them from nearby galaxies or even in the Milky Way within a reasonable time interval. The discovery of the galactic FRB 200428 in association with the galactic magnetar SGR 1934+2154 \citep{chime20_galacticFRB,bochenek20,Mereghetti20,Li20_Xray_galSGR,ridnaia20} confirmed this and suggested that at least magnetars can make FRBs.
\par
The immediate surroundings of the FRB sources may cause the radio wave signal to undergo several absorption and scattering processes. For example, a radio wave may suffer from free-free absorption \citep{Luan14,Murase16,yang17,kundu20}, induced Compton and Raman scattering \citep{Lyubarsky08,Lu18,pawan_kumar20,Ioka20}, and synchrotron absorption \citep{yang16} before making its way out of its production site. Some of these conditions demand that the FRB outflow moves with a relativistic speed \citep[e.g.][]{Murase16}. In the following, we discuss what happens when a low frequency signal, produced by whatever physical mechanism, passes through a hot relativistic shell and undergoes free-free absorption in that medium. The applications and the consequences of this absorption process for FRBs are examined and discussed in $\S$ \ref{sec:app_FRB} and $\S$ \ref{sec:dis}.
\section{Free-free absorption in a hot relativistic shell}
\label{sec:ff_abs_rel_shell}
We consider two shells that collide with a relative Lorentz factor $\Gamma_{\rm rel}$ and drive a pair of internal shocks. If the two shells merge after the collision and have a bulk Lorentz factor $\Gamma$, the lab-frame total energy can be estimated as
\begin{equation}
E = N \Gamma (\Gamma_{\rm rel} - 1) m_p c^2,
\label{eq:E_intrnl_shck_1}
\end{equation}
where $m_p$ and $c$ are the proton mass and the velocity of light in vacuum, respectively, and $N = 4 \pi n r^2 \Delta r$ is the total number of protons in the shell, with $n$ the number density of the protons in the lab-frame and $\Delta r$ the thickness of the shock in the lab-frame. We consider two relativistic shells with a not-too-large relative Lorentz factor, so that $r = c \Gamma^2 t_{\rm obs}$ is the radius of the shock from the central engine, where $t_{\rm obs}$ is the time in the observer frame. Combining these, we get
\begin{equation}
n = \frac{E}{4 \pi m_p c^4 t_{\rm obs}^2 (\Delta r) \Gamma^5 (\Gamma_{\rm rel} - 1) }.
\label{eq:E_intrnl_shck_2}
\end{equation}
The particle number density in the comoving frame, $n'$, is related to the density in the lab frame, $n$, through $n = n' \Gamma$. Similarly, $\Delta r = \Delta r'/\Gamma$. The frequency in the observer frame, $\nu_{\rm obs}$, is approximately equal to $2 \Gamma \nu'$. Since the particles are relativistic in the shock comoving frame, in this frame the free-free absorption coefficient can be written as
\begin{equation}
\alpha_{\nu'}^{\rm 'ff} = 0.018 Z^2 \bar{g}'_{\rm ff} T'^{-3/2} n'_e n'_i \nu'^{-2} (1 + A T')
\label{eq:ff_abs_1}
\end{equation}
\citep{ribicki79}, with $A = 4.4 \times 10^{-10}$ K$^{-1}$. Here $Z$ is the atomic number of the gas, and $n'_e$ and $n'_i$ represent the densities of electrons and ions in the shock, respectively. We assume that the shell is made up of hydrogen ions; therefore, $n'_e = n'_i (\equiv n')$ and $Z = 1$. Also, $\bar{g}'_{\rm ff}$ represents the velocity-averaged Gaunt factor and $T'$ is the temperature of the electrons in the shock comoving frame. When the shocked energy is shared between electrons and protons, the temperature of the electrons behind the shock depends on the shock kinematics through the following relation
\begin{equation}
T' = \frac{\epsilon_e (m_p/m_e)}{k_B} (\Gamma_{\rm rel} - 1) m_e c^2,
\label{eq:temp_elec}
\end{equation}
\citep{Meszaros93}, where $\epsilon_e$ is the fraction of the post-shock energy that goes to electrons, $k_B$ is the Boltzmann constant and $m_e$ represents the electron mass. Assuming $\epsilon_e = 0.1$, one gets $T' \approx 1.1 \times 10^{12}$ K for $(\Gamma_{\rm rel} - 1) \approx 1$. Thus, $1 + A ~ T' \approx A ~ T'$. Radio pulses remain trapped in the shock as long as the medium is optically thick to the waves. When the shock becomes transparent to a given frequency, the optical depth for that frequency becomes unity, i.e.,
\begin{equation}
\alpha_{\nu'}^{\rm 'ff} \Delta r' = 1,
\label{eq:opt_depth}
\end{equation}
which gives $t_{\rm obs}^4 = A T'^{-1/2} ~ \nu_{\rm obs}^{-2} \eta_1$, with
\begin{equation}
\eta_1 = 0.018 \frac{4}{c^4 (\Delta r) \Gamma^9 (\Gamma_{\rm rel} - 1)^2} \bigg(\frac{E}{4 \pi m_p c^2}\bigg)^2.
\label{eq:eta_1}
\end{equation}
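For completeness, the algebra behind this relation is as follows. Using $n' = n/\Gamma$, $\Delta r' = \Gamma \Delta r$ and $\nu' \approx \nu_{\rm obs}/(2\Gamma)$, the optical depth reads
\[
\alpha_{\nu'}^{\rm 'ff} \Delta r' \approx 0.018 \, A \, T'^{-1/2} n'^2 \nu'^{-2} \Delta r'
= 0.018 \times 4 \, A \, T'^{-1/2} \, n^2 \, \Gamma \, (\Delta r) \, \nu_{\rm obs}^{-2} ,
\]
and substituting $n$ from Eq.(\ref{eq:E_intrnl_shck_2}) gives $\alpha_{\nu'}^{\rm 'ff} \Delta r' = A \, T'^{-1/2} \nu_{\rm obs}^{-2} \, \eta_1 \, t_{\rm obs}^{-4}$, so that setting the optical depth to unity yields the relation above.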
Therefore,
\begin{equation}
t_{\rm obs} = \frac{A^{1/4} T'^{-1/8} \eta}{\sqrt{\nu_{\rm obs}}},
\label{eq:tobs2}
\end{equation}
where
\begin{equation}
\eta = \eta_1^{1/4}.
\label{eq:eta}
\end{equation}
If $t_{{\rm obs},1}$ and $t_{{\rm obs},2}$ are the times when the shell becomes transparent to $\nu_{\rm obs,1}$ and $\nu_{\rm obs,2}$, then the drift rate, $\rm DR $, can be approximated as
\begin{equation}
{\rm DR} = \frac{\nu_{\rm obs,2} - \nu_{\rm obs,1}}{t_{{\rm obs},2} - t_{{\rm obs},1}}.
\label{eq:DR_1}
\end{equation}
This implies
\begin{equation}
\rm DR = -\frac{T'^{1/8}}{A^{1/4} \eta} ~ \sqrt{\nu_{\rm obs,2} \nu_{\rm obs,1}} (\sqrt{\nu_{\rm obs,2}} + \sqrt{\nu_{\rm obs,1}}).
\label{eq:DR_2}
\end{equation}
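The step from Eq.(\ref{eq:DR_1}) to Eq.(\ref{eq:DR_2}) uses only Eq.(\ref{eq:tobs2}):
\[
t_{{\rm obs},2} - t_{{\rm obs},1} = A^{1/4} T'^{-1/8} \eta \left( \nu_{\rm obs,2}^{-1/2} - \nu_{\rm obs,1}^{-1/2} \right)
= - \frac{A^{1/4} T'^{-1/8} \eta \, (\sqrt{\nu_{\rm obs,2}} - \sqrt{\nu_{\rm obs,1}})}{\sqrt{\nu_{\rm obs,2} \nu_{\rm obs,1}}} ,
\]
while $\nu_{\rm obs,2} - \nu_{\rm obs,1} = (\sqrt{\nu_{\rm obs,2}} - \sqrt{\nu_{\rm obs,1}})(\sqrt{\nu_{\rm obs,2}} + \sqrt{\nu_{\rm obs,1}})$. The common factor cancels, and the overall minus sign shows that lower frequencies become transparent later, i.e. the drift is downward.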
Note that the inverse scaling between $t_{\rm obs}$ and $\nu_{\rm obs}$ in Eq.(\ref{eq:tobs2}) stems from the $\propto \nu^{-2}$ scaling in Eq.(\ref{eq:ff_abs_1}), which does not depend on whether the shock is relativistic, but a relativistic internal shock is needed to make the drift rate match the observations. It is also needed to satisfy the duration and induced Compton scattering constraints if the shocks are the sites of FRBs.
For the drift rate expressed in MHz/ms, and denoted as $\rm DR_{\rm obs}$, the above equation gives
\begin{equation}
\begin{split}
\eta = -\frac{T'^{1/8}}{A^{1/4} {\rm DR_{\rm obs}}} \sqrt{(\nu_{\rm obs,2})_{\rm MHz} (\nu_{\rm obs,1})_{\rm MHz}} ~ \\
\Bigg[ \sqrt{(\nu_{\rm obs,2})_{\rm MHz}} + \sqrt{(\nu_{\rm obs,1})_{\rm MHz}} \Bigg],
\end{split}
\label{eq:DR_3}
\end{equation}
where $(\nu_{\rm obs})_{\rm MHz}$ represents frequency in MHz in the observer frame.
Notice that when $\nu_{\rm obs,2}$ and $\nu_{\rm obs,1}$ are close to each other, their arithmetic mean would be similar to their geometric mean, so that $\eta$ is inversely proportional to the normalized drift rate ${\rm DR_{\rm obs}} / \nu_{\rm mean}$, as shown in Figure \ref{fig:eta_DR_by_numean} (discussed below).
\section{Application to Fast radio bursts}
\label{sec:app_FRB}
FRBs have been detected from 110 MHz \citep{pastor20} to 8 GHz \citep{gajjar18}, with minimum and maximum pulse widths of around 30 $\mu$s \citep{michilli18} and 26 ms \citep{farah17} detected from FRB\,121102 and FRB\,170922, respectively. One interesting feature of FRBs, especially of the repeating ones, is the sub-pulse drifting in frequency. Low frequency subpulses are observed to be delayed with respect to high-frequency ones \citep{hessels19,amiri19,Andersen2019,fonseca20,pastor20,luo20_b}. The observed drift rates for CHIME FRBs are in the range 1 MHz/ms to 30 MHz/ms, with one of the bursts from the second repeating FRB\,180814.J0422+73 having a minimum drift rate of $\sim 1$ MHz/ms. At higher frequencies, the drift rates increase significantly, as exhibited by bursts from the first repeating FRB\,121102, though some of the bursts of FRB\,180916B, detected with the Apertif telescope, display a drift rate as small as 4 MHz/ms in the L band.
We apply the free-free absorption theory discussed in Section \ref{sec:ff_abs_rel_shell} to FRBs to investigate whether it can interpret the down-drifting feature. Since FRBs are millisecond duration transient radio pulses, one can take $\Delta t_{\rm obs} \sim 1$ ms. The millisecond duration of the bursts implies that the characteristic length scale of the emission region is $\Delta r \sim c \Delta t_{\rm obs} \sim 3\times 10^7$ cm. Inserting this value of $\Delta r$ in Eq. \ref{eq:eta_1} and using Eq.\ref{eq:eta} we obtain
\begin{equation}
\Gamma^9 (\Gamma_{\rm rel} - 1)^2 = \frac{8.3 \times 10^{42}}{\eta^4} E_{\rm 45}^2,
\label{eq:eq21}
\end{equation}
where $E_{\rm 45} =E (\rm erg)/10^{45}$ erg. Over the frequency range of 400 MHz to 7 GHz, the observed drift rates, $\rm DR_{\rm obs}$, of different bursts vary from $\sim - 1$ MHz/ms to $- 870$ MHz/ms. The emission bandwidth of most of the bursts is small: for the CHIME bursts it is around $200$ MHz. From Eq.(\ref{eq:DR_3}), the estimated values of $\eta$ for an internal shock are in the range of $2 \times 10^{6}$ to $2 \times 10^{8}$ ms MHz$^{1/2}$ K$^{-1/8}$, where $T' = 1.1 \times 10^{12}$ K. Figure \ref{fig:eta_DR_by_numean} displays the required $\eta$ as a function of $|{\rm DR_{\rm obs}}|/\nu_{\rm mean}$ with $\nu_{\rm mean} = (\nu_{\rm obs,2} + \nu_{\rm obs,1})/2$, which represents the central frequency of the observed radio pulse. The values of $\rm DR$ for different FRBs were published in \citet{hessels19,Andersen2019,fonseca20,pastor20}. The filled diamonds, triangles and circles represent the CHIME repeaters, FRB\,121102 and FRB\,180916B, respectively, where the second repeating FRB\,180814.J0422+73 is included in the CHIME sample.
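As a sanity check on the numerical prefactor in Eq.(\ref{eq:eq21}), the short Python script below (our own illustrative sketch, in CGS units; note that ms MHz$^{1/2}$ K$^{-1/8}$ is numerically identical to s Hz$^{1/2}$ K$^{-1/8}$) evaluates $0.018 \times 4 \, [E/(4\pi m_p c^2)]^2/(c^4 \Delta r)$:
\begin{verbatim}
import math

c, m_p = 2.998e10, 1.673e-24        # CGS: light speed [cm/s], proton mass [g]
E, delta_r = 1.0e45, 3.0e7          # shock energy [erg], shell thickness [cm]

# Gamma^9 (Gamma_rel - 1)^2 = coeff * E_45^2 / eta^4
coeff = 0.018 * 4.0 * (E / (4.0 * math.pi * m_p * c**2))**2 / (c**4 * delta_r)
print(f"coefficient = {coeff:.2e}")  # ~ 8.3e42, the prefactor quoted above

eta = 2.0e6                          # in s Hz^{1/2} K^{-1/8} (= ms MHz^{1/2} K^{-1/8})
Gamma = (coeff / eta**4) ** (1.0 / 9.0)   # for (Gamma_rel - 1) = 1
print(f"Gamma ~ {Gamma:.0f}")        # ~ 90, i.e. of order 100
\end{verbatim}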
\begin{figure}
\centering
\includegraphics[width=8.5cm,origin=c]{eta_vs_DRobs_div_nu_mean_3.pdf}
\caption{The required $\eta$ as a function of modulus of $\rm DR_{\rm obs}$ divided by the central frequency of the observed pulse for an internal shock. The $\rm DR_{\rm obs}$ of different FRBs are taken from \citet{hessels19,Andersen2019,fonseca20,pastor20}. The filled diamonds, triangles and circles represent the CHIME repeaters, FRB\,121102 and FRB\,180916B, respectively. The CHIME sample includes the second repeating FRB\,180814.J0422+73.}
\label{fig:eta_DR_by_numean}
\end{figure}
\par
The magnetic energy stored in a magnetar with a radius $\sim$ 10 km and a characteristic magnetic field $B \sim 10^{15}$ G \citep{duncan_RC92} is $\gsim 10^{47}$ erg. For $B \gsim 10^{16}$ G the dipolar magnetic energy could be as high as $10^{49}$ erg. The giant flare detected from SGR\,1806-20 in 2004, during a hyperactive phase, had an isotropic flare energy of around $2 \times 10^{46}$ erg \citep{palmer05}. Moreover, a similar amount of energy was found to be released from a magnetar in NGC 253 during an extremely bright gamma-ray burst event on 15 April 2020 \citep{Svinkin21}.
For $\eta$ in the range from $2 \times 10^{6}$ to $2 \times 10^{8}$ ms MHz$^{1/2}$ K$^{-1/8}$, the allowed values of $\Gamma$ (estimated using Eq.~\ref{eq:eq21}) as a function of the shock energy $E$, varying from $10^{44}$ erg to $10^{46}$ erg, are shown in Fig.\ref{fig:gamma_E}. We note that free-free absorption is pronounced for shocks having higher kinetic energies, $\gtrsim 10^{44}$ erg (otherwise, $\Gamma$ is too small to satisfy the drift-rate constraint).
\begin{figure}
\centering
\includegraphics[width=8.5cm,origin=c]{shell_paramaters_4.pdf}
\caption{The allowed values of $\Gamma$ as a function of shock energy, $E$, for $\eta$ in the range $2 \times 10^{6}$ to $2 \times 10^{8}$ ms MHz$^{1/2}$ K$^{-1/8}$.}
\label{fig:gamma_E}
\end{figure}
The particle density in the shocks is
\begin{equation}
n = \frac{2.4 \times 10^{-25} \eta^4 \Gamma^4 (\Gamma_{\rm rel} - 1)}{t_{\rm obs}^2 E_{\rm 45}},
\label{eq:eq22}
\end{equation}
as obtained from Eqs.\ref{eq:E_intrnl_shck_2} and \ref{eq:eq21}. Assuming that the shocks are solely made up of hydrogen and $t_{\rm obs} \sim 1$ ms, the mass of the shells, $M_{\rm sh}$, as a function of $E$ is depicted in Fig.\ref{fig:E_mass} for different values of $\eta$, estimated for an internal shock. For $E \sim 10^{44}$ erg, $M_{\rm sh}$ is around $10^{-12}$ $\hbox{M$_{\odot}$}$; and it is about two orders of magnitude higher when the shock energy is $10^{46}$ erg\footnote{ For the giant flare detected from SGR\,1806-20, \citet{Granot06} invoked a mildly relativistic outflow to interpret the radio afterglow and derived a lower limit on the ejecta mass of $\ge 1.5 \times 10^{-9}$ \hbox{M$_{\odot}$}. If FRBs are related to shocks with that set of parameters, then the down-drifting rate cannot be interpreted within the model proposed here.}. For an active magnetar, the maximum mass of the ejected material during a flaring event could be $\sim 10^{-5}$ \hbox{M$_{\odot}$} \citep{belo17}. In our model, even when the highest energy outbursts occur every day, a magnetar would require around 100 yr to eject a total of $\sim 10^{-5}$ \hbox{M$_{\odot}$} of matter into the surrounding medium.
\begin{figure}
\centering
\includegraphics[width=8.5cm,origin=c]{shell_energy_mass_6.pdf}
\caption{The mass of the shell, $M_{\rm sh}$, as a function of shock energy $E$ for different values of $\eta$ obtained for an internal shock.}
\label{fig:E_mass}
\end{figure}
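The trend in Fig.\ref{fig:E_mass} can be reproduced with a few lines, since $M_{\rm sh} = N m_p = E/[\Gamma (\Gamma_{\rm rel} - 1) c^2]$ follows from Eq.(\ref{eq:E_intrnl_shck_1}) once $\Gamma$ is obtained from Eq.(\ref{eq:eq21}). The following is our own illustrative sketch:
\begin{verbatim}
import math

c, m_p, M_sun = 2.998e10, 1.673e-24, 1.989e33   # CGS units

def shell_mass(E, eta, delta_r=3.0e7, g_rel_m1=1.0):
    # Shell mass [solar masses] for shock energy E [erg],
    # using the energy and Lorentz-factor relations above.
    coeff = 0.072 * (E / (4.0 * math.pi * m_p * c**2))**2 / (c**4 * delta_r)
    Gamma = (coeff / (eta**4 * g_rel_m1**2)) ** (1.0 / 9.0)
    return E / (Gamma * g_rel_m1 * c**2) / M_sun

print(shell_mass(1e44, 2e6))   # ~ 1e-12 solar masses
print(shell_mass(1e46, 2e6))   # larger by roughly E^{7/9}, since M ~ E / Gamma
\end{verbatim}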
\par
The radius of the shell as a function of the shock energy $E$ for different values of $\eta$ is displayed in Fig.\ref{fig:E_radius}. Since relativistic shocks are expected to form beyond the magnetosphere, we also draw the magnetar light cylinder (LC) radius $R_{\rm LC} \simeq c P/(2\pi)$ (which defines the outer boundary of the magnetosphere) for comparison, where $P$ is the magnetar spin period.
In the figure, the cyan, red and blue horizontal lines exhibit the radii of the LC for $P = $ 1 s, 3 s and 10 s, respectively. Assuming that the spindown energy loss of a magnetar is dominated by magnetic dipole radiation, the characteristic age of the magnetar can be estimated as $\tau \simeq {P}/(2 \dot{P})$, which corresponds to a braking index of 3, where $\dot{P}$ represents the derivative of the period. For a magnetar of mass $1.4 ~ \hbox{M$_{\odot}$}$ and radius 10 km, an upper limit on $\dot{P}$ is calculated as $\dot{P} ({\rm {s \ s^{-1}}}) = [B ({\rm {G}})/(3 \times10^{19})]^2/{P (\rm {s})}$. Considering $B = 10^{15}$ G, the age $\tau$ of the magnetar is found to be around 16 yr, 146 yr and 1622 yr for $P = $ 1 s, 3 s and 10 s, respectively. This implies that the older the population, the smaller the $\eta$ values required for free-free absorption to be a relevant process in producing the observed down-drifting in the frequency-time plane.
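The quoted ages follow directly from these formulas; a quick numerical check (our own sketch, using the standard dipole coefficient $3.2 \times 10^{19}$, of which the $3 \times 10^{19}$ quoted above is the rounded version) reads:
\begin{verbatim}
yr = 3.156e7                        # seconds per year
B = 1.0e15                          # surface dipole field [G]
for P in (1.0, 3.0, 10.0):          # spin period [s]
    Pdot = (B / 3.2e19)**2 / P      # dipole spin-down of a 1.4 Msun, 10 km NS
    tau = P / (2.0 * Pdot) / yr     # characteristic age
    print(f"P = {P:4.1f} s -> tau ~ {tau:.0f} yr")   # 16, 146, 1622 yr
\end{verbatim}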
\begin{figure}
\centering
\includegraphics[width=8.5cm,origin=c]{shell_energy_radius_13.pdf}
\caption{Radius of the shell as a function of shock energy $E$ for different values of $\eta$. The cyan, red and blue horizontal lines display the radius of the LC when the period $P$ of a magnetar is 1 s, 3 s and 10 s, respectively. The age of the magnetar corresponding to each $P$ is shown in the plot, assuming that the spindown energy loss is dominated by magnetic dipole radiation.}
\label{fig:E_radius}
\end{figure}
\section{Discussion}
\label{sec:dis}
FRBs from magnetars have been interpreted within the framework of two types of models. One group of models suggests that the coherent emission originates from the magnetosphere of the neutron star \citep{kumar17,yang18,yang21,Lu18,Lu20}, while the other group interprets the coherent emission as synchrotron maser emission in relativistic shocks \citep{lyub14,belo17,waxman17,plotnikov19,metzger19,belo20}. Within the magnetosphere model, the frequency down-drifting has been interpreted as a consequence of the ``radius-to-frequency mapping'' commonly observed in radio pulsars, i.e. emission at different frequencies originates from different altitudes and the high-frequency emission is observed earlier than the low-frequency one \citep{wang19,Lyutikov20}. Within the synchrotron maser model, the drift is interpreted as due to the emitting frequency decreasing with radius in relativistic shocks \citep{metzger19,belo20}. Polarization angle swings of some FRBs favor the magnetosphere origin of FRBs \citep{luo20_b}, but it is unclear whether the shock model may be responsible for some FRBs.
\par
In this work, we illustrate that free-free absorption in relativistic shocks introduces a down-drifting feature. Whether it is related to the FRB phenomenology is unclear.
However, if the FRB down-drifting is due to this mechanism, then the following conditions should be satisfied: i.) there should be a hot relativistic shell through which the radio waves pass, and ii.) the energy of this shock should be $\gsim 10^{44}$ erg. In this model, the shell serves as the absorber of the radio waves. The radiation may originate from the shock, similar to what happens in the synchrotron maser model of FRBs, or could be produced at a radius smaller than that of the hot shell, e.g. from the magnetosphere of the central engine\footnote{For the latter scenario, the radio waves encounter the shock while propagating out, so that a down-drifting pattern can be observed only when the shock just turns from optically thick to optically thin in the observing frequency band during the encounter. Since not all the bursts from repeating FRBs have the down-drifting pattern, according to our interpretation, those with such a pattern are the ones that satisfy this requirement.}.
These shocks are expected to have $\Gamma$ much greater than unity. In the case of an internal shock, this criterion is fulfilled except for $\eta \sim 10^{8}$ ms MHz$^{1/2}$ K$^{-1/8}$ and $E < 10^{45}$ erg. The majority of the bursts (see Fig.\ref{fig:eta_DR_by_numean}) require $\eta < 4 \times 10^{7}$ ms MHz$^{1/2}$ K$^{-1/8}$. For $\eta \sim 10^{6}$ ms MHz$^{1/2}$ K$^{-1/8}$ and $E \sim 10^{45}$ erg, $\Gamma$ is $\sim 100$ (see Fig.\ref{fig:gamma_E}). Smaller values of $\eta$ imply larger shock radii (see Fig.\ref{fig:E_radius}). For shocks with a radius $\sim 10^9$ cm from the central engine, the magnetar that powers FRBs could be as young as a couple of decades, as demonstrated in Fig.\ref{fig:E_radius} based on the assumption that the spin-down energy loss is dominated by magnetic dipole radiation. Furthermore, the mass of the shocked shell is less than a few $\times 10^{-10}$ $\hbox{M$_{\odot}$}$, as shown in Fig.\ref{fig:E_mass}. This suggests that our model may work even for a magnetar that is active for $\gsim 100$ yr and flares almost every day.
\par
Particle-in-cell simulations of synchrotron maser emission from relativistic shocks suggest that the efficiency of this mechanism is $7 \times 10^{-4}/\sigma^2$ for $\sigma \gsim 1$, where $\sigma$ is the magnetisation parameter \citep{plotnikov19}. This implies that the efficiency of the synchrotron maser is very low, requiring a large amount of energy to go into other wavelengths. The observed radio-to-X-ray flux ratio for FRB 200428 associated with the
Galactic magnetar SGR 1934+2154 \citep{Mereghetti20,Li20_Xray_galSGR,ridnaia20,tavani_nature20,chime20_galacticFRB,bochenek20} roughly satisfies this constraint \citep{Margalit20_galFRB}, even though the magnetosphere model can also interpret this ratio \citep{Lu20,yang21}. The energies of the localised cosmological FRBs are in the range $10^{27} - 10^{34}$ erg/Hz \citep{tendulkar17,bannister19,Ravi19_nature,marcote20_nature,law20_nature,bhandari20,bhandari20_FRB191001}. Using the dispersion measure--redshift relation \citep{deng14}, \citet{zhang_2018_nonlocFRB_lum_eng} demonstrated that the peak luminosities of FRBs are in the range $10^{42} - 10^{45}$ erg $\rm {s}^{-1}$. These imply that the radio energies of the cosmological FRBs vary between $10^{36} - 10^{43}$ erg. Even though no simultaneous X-ray bursts have been detected from cosmological FRBs despite attempts \citep{Scholz16}, if one assumes that the X-ray-to-radio luminosity ratio of these FRBs is similar to that of FRB 200428, one would expect that the total energy of these events is at least $10^{40} - 10^{47}$ erg. For $E>10^{44}$ erg, free-free absorption in shocks would be important in producing the downward drift in the frequency-time plane, as demonstrated in Fig.\ref{fig:gamma_E}. As a result, the mechanism discussed in this paper should be considered in FRB modeling and may be relevant to the down-drifting feature observed in at least some FRBs.
\section*{Acknowledgements}
E.K. acknowledges the Australian Research Council (ARC) grant DP180100857.
\section*{Data Availability}
The data underlying this article will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
\section{Introduction}
The transverse Ising model with random exchange interactions is a simple but interesting quantum spin model.
Many researchers in information science, mathematics and physics have studied
this model extensively, since D-Wave Systems actually
devised and produced a quantum annealer based on this model \cite{J}, which was
theoretically proposed by Finnila-Gomez-Sebenik-Stenson-Dol \cite{FGSSD} and Kadowaki-Nishimori \cite{KN}.
On the other hand, the Edwards-Anderson (EA) model is a well-known Ising model
with short range random exchange interactions \cite{EA}.
Quite recently, it has been proven that the ground state in the
EA model is unique in any dimension for almost all continuous random exchange interactions
under a boundary condition which breaks the global ${\mathbb Z}_2$ symmetry \cite{I}.
Uniqueness of the ground state in the EA model at zero temperature agrees with the claims of Fisher-Huse and Newman-Stein \cite{FH,NS}.
In the present paper, it is proven that
the uniqueness of the ground state is preserved also in the EA model with a weak transverse field.
The uniqueness theorem for the ground state without a transverse field yields a convergent perturbative expansion for a sufficiently weak transverse field.
The convergent perturbative expansion proposed by
Kirkwood and Thomas \cite{KT} and developed by Datta and Kennedy \cite{DK1,DK2} enables us to obtain a rigorous result for the unique ground state
under a weak transverse field.
\section{Definitions and main theorem}
Consider the $d$-dimensional hypercubic lattice $\Lambda_L= {\mathbb Z}^d \cap (-L/2,L/2]^d$
with an even integer $L >0$. Note that the lattice $\Lambda_L$ contains $L^d$ sites.
Define the set of nearest neighbor bonds by
$$B_{\Lambda} = \{ \{i, j\} \in \Lambda _L^2 | |i-j|=1\}.$$ Note $|B_{\Lambda}|=|\Lambda_L| d.$
A bond spin $\sigma_b^a$ denotes
$$\sigma_b^a=\sigma_i^a \sigma_j^a $$
for a bond $b= \{i,j\} \in B_{\Lambda}$ and $a=x,y,z$.
Let $\bm J=(J_b)_{b \in B_{\Lambda}}$ be a sequence of
independent and identically distributed (i.i.d.)
random variables whose expectation value and variance are given by
\begin{eqnarray}
&&{\mathbb E} J_b=J_0 \in{\mathbb R}, \ \ {\mathbb E} ( J_b-J_0)^2=J^2,
\end{eqnarray}
for $J>0$.
The Hamiltonian of this model is given by
\begin{equation}
H_\Lambda(\sigma, h, \bm J)= - \sum_{b \in B_{\Lambda}} J_b\sigma_b ^z
-h\sum_{i \in \Lambda_L} \sigma_i^x,
\end{equation}
with a random sequence $\bm J.$
This Hamiltonian is invariant under a discrete SU(2) transformation $U:=\exp [i \pi \sum_{i\in \Lambda_L} \sigma_i^x/2] $ acting on each spin
$\sigma _i^z \mapsto U^\dag \sigma_i ^z U =-\sigma^z_i$. This corresponds to ${\mathbb Z}_2$ symmetry in the Edwards-Anderson model.
Define a state $| \sigma\rangle$ with a spin configuration $\sigma : \Lambda_L \to \{1,-1\}$ by
$$
\sigma_i^z | \sigma \rangle = \sigma_i |\sigma\rangle.
$$
For any $\beta >0 $, the partition function as a function of $(\beta, h, \bm J)$ is defined by
\begin{equation}
Z_{\Lambda}(\beta, h, \bm J) = {\rm Tr} e^{ - \beta H_\Lambda(\sigma,h, \bm J)} = \sum_{\sigma \in \{1,-1\}^{\Lambda_L}} \langle \sigma | e^{-\beta H_\Lambda(\sigma,h, \bm J)} |\sigma\rangle.
\end{equation}
The average of an arbitrary function $f: \Sigma_L \rightarrow {\mathbb R}$ of the spin configuration in the Gibbs state is given by
$$
\langle f(\sigma) \rangle = \frac{1}{Z_{\Lambda}(\beta, h, \bm J)}{\rm Tr} f(\sigma) e^{ -\beta H_\Lambda(\sigma,h, \bm J)}.
$$
Note that the expectation $\langle \sigma_i^z\rangle$ of the spin
at each site $i$ vanishes in the ${\mathbb Z}_2$ symmetric Gibbs state.
To remove the trivial two-fold degeneracy
due to the global ${\mathbb Z}_2$ symmetry,
assume $+$ boundary condition, such that
\begin{equation}\sigma_i=1,
\label{BC}
\end{equation}
for $i \in \Lambda_L\setminus\Lambda_{L-2}$.
$\Sigma_\Lambda ^+ \subset \{1,-1 \}^{\Lambda_L}$ denotes the set of spin configurations satisfying the boundary condition (\ref{BC}).
{\theorem
The ground state of
the transverse field EA model is unique for sufficiently small $|h|$.
\label{Th1}}
\section{Proof of Theorem \ref{Th1}}
\subsection{Correlation functions at zero temperature}
{\lemma Consider the transverse field EA model.
Let $f(\sigma)$ be
an arbitrary uniformly bounded function of spin operators $(\sigma^a_i)_{i\in\Lambda}$, such that $\|f(\sigma)\| \leq C.$
For any bond $b \in B_{\Lambda}$ and
for almost all $J_b$, the infinite volume limit and the zero temperature
limit of the connected correlation function vanishes
\begin{equation}
\lim_{\beta \rightarrow \infty}
[ (\sigma_b^z, f(\sigma) ) -\langle \sigma_b^z \rangle \langle f( \sigma )\rangle] =0.
\label{bond}
\end{equation}
\label{Lem1}}
Proof. The derivative of the one-point function is represented in terms of the following Duhamel function
\begin{equation}
\frac{1}{\beta }\frac{\partial}{ \partial J_b} \langle f( \sigma) \rangle = ( \sigma_b^z, f( \sigma) ) -\langle \sigma_b^z \rangle \langle f(\sigma)\rangle,
\end{equation}
where the Duhamel function $(A,B)$ between linear operators $A$ and $B$ is defined by
$$
(A, B):= \int_0 ^1 dt \langle e^{\beta t H}A e^{-\beta t H} B\rangle.
$$
The integration over an arbitrary interval $(J_1,J_2)$ is
$$
\frac{1}{\beta }[ \langle f( \sigma) \rangle_{J_2} - \langle f(\sigma)\rangle_{J_1}]=\int_{J_1} ^{J_2}
dJ_b[ ( \sigma_b^z, f(\sigma) ) -\langle \sigma_b^z \rangle \langle f(\sigma) \rangle].
$$
The uniform bound $\|f(\sigma)\| \leq C$ on the left-hand side,
the bound $| ( \sigma_b^z, f(\sigma) ) -\langle \sigma_b^z \rangle \langle f(\sigma) \rangle | \leq 2C
$ on the integrand in the right-hand side,
and the dominated convergence theorem imply the following commutativity between the limiting procedure and the integration
\begin{eqnarray}
0&=&\lim_{\beta \rightarrow \infty}
\int_{J_1} ^{J_2} dJ_b[ ( \sigma_b ^z, f(\sigma) ) -\langle \sigma_b^z \rangle \langle f(\sigma) \rangle]\\
&=&\int_{J_1} ^{J_2} dJ_b \lim_{\beta \rightarrow \infty}
[ ( \sigma_b^z, f(\sigma) ) -\langle \sigma_b^z \rangle \langle f(\sigma) \rangle].
\end{eqnarray}
Since the integration interval $(J_1,J_2)$ is arbitrary, the following limit vanishes
\begin{equation}
\lim_{\beta \rightarrow \infty}
[ ( \sigma_b^z, f(\sigma) ) -\langle \sigma_b^z \rangle \langle f(\sigma) \rangle]=0,
\end{equation}
for any $b\in B_{\Lambda}$ and almost all $J_b \in {\mathbb R}$.
$\Box$\\
\subsection{Unperturbed system}
The following lemma guarantees the uniqueness of the ground state in the case $h=0$, where the model
becomes the classical EA model. This result has been obtained in \cite{I}.
{\lemma Consider the transverse field Edwards-Anderson (EA) model at $h=0$ on the $d$-dimensional hypercubic lattice $\Lambda_L$
under the boundary condition (\ref{BC}).
Let
$f(\sigma^z)$ be a real-valued function of the spin operators.
For almost all $ \bm J$, there
exists a unique spin configuration $s^+ \in \Sigma_{\Lambda}^+$,
such that the following limit
is given by
\begin{eqnarray}
\lim_{\beta \to \infty}
\langle f(\sigma )
\rangle= f( s^+).
\label{bondexp}
\end{eqnarray}
\label{Lem2}}
Proof. Consider the model at $h=0$. In this case, the model becomes the classical Edwards-Anderson model,
and the Duhamel function reduces to the
ordinary correlation function
$$(\sigma_b^z, f(\sigma))=\langle \sigma_b^z f(\sigma) \rangle.$$
Lemma \ref{Lem1} indicates the following consistency property of the bond spin configuration
at zero temperature.
Eq.(\ref{bond}) in Lemma \ref{Lem1} for an arbitrary bond $b \in B_{\Lambda}$ and $f(\sigma)=\sigma_b^z$ implies
\begin{equation}
\lim_{\beta \rightarrow \infty}
(1-\langle \sigma_b^z \rangle^2) = 0.
\end{equation}
The above identity can be represented in terms of the probability $\displaystyle p_b :=\lim_{\beta\to\infty}
\langle \delta_{\sigma_b^z, 1} \rangle$
\begin{eqnarray}
0 &=&\lim_{\beta \to \infty}
[ 1-\langle (2\delta_{\sigma_b^z,1}-1 ) \rangle^2]=1- (2 p_b -1)^2 \nonumber \\
&=&4 p_b (1-p_b).\end{eqnarray}
Since either $p_b=1$ or $p_b =0$ is valid,
either a ferromagnetic $\sigma_b=1$ or an antiferromagnetic $\sigma_b =-1$ bond spin configuration appears
almost surely on any bond $b\in B_{\Lambda}$
for almost all $\bm J$ at zero temperature.
Consider a plaquette $(i,j,k,l)$ with an arbitrary $i\in \Lambda_L$ and
$j=i+e,k=i+e',l=i+e+e'$ for unit vectors $e \neq e'$ with $|e|=|e'|=1$. Lemma \ref{Lem1} for $b=\{i,j\}, \{i,k\}$ and
$f(\sigma)=\sigma_j^z\sigma_l^z, \sigma_k^z \sigma_l^z$ implies
\begin{eqnarray}
&& \lim_{\beta\to\infty}
[\langle \sigma_i ^z \sigma_j^z \sigma_j ^z\sigma_l^z \rangle - \langle \sigma_i ^z \sigma_j^z \rangle \langle \sigma_j^z \sigma_ l^z\rangle]
= 0,
\\
&& \lim_{\beta\to\infty}
[\langle \sigma_i^z \sigma_k^z \sigma_k ^z\sigma_l^z \rangle - \langle \sigma_i^z \sigma_k^z \rangle \langle \sigma_k^z\sigma_ l^z\rangle] =0.
\end{eqnarray}
These and $(\sigma_j^z)^2=(\sigma_k^z)^2=1$ give the consistency property of the bond spin configuration
$$ \displaystyle \lim_{\beta\to\infty}
\langle \sigma_i^z \sigma_j^z \rangle \langle \sigma_j^z\sigma_ l^z\rangle
\langle \sigma_l^z \sigma_k^z \rangle \langle \sigma_k^z\sigma_ i^z\rangle =1.$$
For any site $i \in\Lambda_L$ and for $b=\{i,j\}\in B_{\Lambda}$,
Lemma \ref{Lem1} and $f(\sigma)=\sigma_i^z$
imply
$$
\lim_{\beta \to \infty}
\langle \sigma_j ^z\rangle =
\lim_{\beta \to \infty}
\langle \sigma_i ^z\sigma_j ^z\rangle \langle \sigma_i^z \rangle
$$
for almost all $\bm J$. Any two sites
$i,j\in \Lambda_L$
are connected by bonds in $B_{\Lambda}$.
Then, the $+$ boundary condition $\sigma_i=1$ given by (\ref{BC})
and a bond spin configuration $(\sigma_i\sigma_j)_{\{i,j\}\in B_{\Lambda}}$
fix a spin configuration $s^+\in \Sigma_\Lambda^+$ uniquely at zero temperature for any $L$.
This spin configuration $s^+$
gives $$\lim_{\beta \to \infty}
\langle \sigma_i^z \rangle
= s_i^+,$$
for $i \in \Lambda_L$ and also gives
$$
\lim_{\beta \to \infty}
\langle f(\sigma^z) \rangle = f( s^+),
$$
for a real valued function $f(\sigma^z)$.
This completes the proof. $\Box$
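Theorem \ref{Th1} can be illustrated numerically on a small chain ($d=1$). The following exact-diagonalization sketch in Python is our own illustration and not part of the proof: it builds the Hamiltonian with Gaussian couplings, implements the $+$ boundary condition by coupling the two end spins to fixed up spins outside the chain, and exhibits a nonzero gap above a unique ground state for a weak field and generic continuous couplings.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 8                                   # interior spins, 2**N = 256 states
J = rng.normal(0.0, 1.0, size=N + 1)    # N-1 interior bonds + 2 boundary bonds
h = 0.1                                 # weak transverse field

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def op(single, site):
    # embed a single-site operator at `site` into the N-spin Hilbert space
    out = np.array([[1.]])
    for k in range(N):
        out = np.kron(out, single if k == site else np.eye(2))
    return out

SZ = [op(sz, i) for i in range(N)]
SX = [op(sx, i) for i in range(N)]

# + boundary condition: bonds J[0], J[N] couple the ends to fixed spins +1
H = -J[0] * SZ[0] - J[N] * SZ[-1]
for i in range(N - 1):
    H -= J[i + 1] * SZ[i] @ SZ[i + 1]
H -= h * sum(SX)

E = np.linalg.eigvalsh(H)
print("gap E1 - E0 =", E[1] - E[0])     # strictly positive for small |h|
\end{verbatim}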
\subsection{Expansion method}
Datta and Kennedy developed the Kirkwood-Thomas expansion method and studied the transverse field Ising model and the XXZ model.
Here we apply their method to the EA model in a weak transverse field.
To obtain ground state in the transverse EA model, consider the following unitary transformed Hamiltonian
\begin{equation}
\tilde H_\Lambda(\sigma, h,\bm J) :=U H_\Lambda U^\dag= -\sum_{b\in B_L} J_b \sigma_b^x -h\sum_{i\in \Lambda_L} \sigma_i^z,
\end{equation}
where $U\sigma_i^zU^\dag=-\sigma_i^x$ and $U\sigma_i^xU^\dag=\sigma_i^z$.
Define
$$\sigma_X := \prod_{i\in X} \sigma_i.$$
Note the following identity
\begin{equation}
2^{-|\Lambda_L'|} \sum_{\sigma\in \Sigma_\Lambda^+} \sigma_X\sigma_Y = I(X=Y) =: \delta_{X,Y},
\label{on}
\end{equation}
where $ \Lambda_L':= \Lambda_L\setminus ( \Lambda_L\setminus \Lambda_{L-2})$, and
an indicator $I(e)$ for an arbitrary event $e$ is defined by $I({\rm true})=1$ and $I({\rm false}) =0$.
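As a minimal check of (\ref{on}), consider a single free spin $\sigma_i=\pm 1$, for which the subsets of $\{i\}$ are $X=\emptyset$ (with $\sigma_\emptyset:=1$) and $X=\{i\}$: if $X=Y$, then $\sigma_X\sigma_Y=1$ and the normalized sum equals $1$, while for $X\neq Y$ one has $2^{-1}\sum_{\sigma_i=\pm1}\sigma_i=0$, in agreement with $\delta_{X,Y}$.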
Let $\psi(\sigma)$ be a function of the spin configuration,
and express the ground state of the Hamiltonian as
$$
|GS \rangle= \sum_{\sigma \in \Sigma_\Lambda^+}\sigma_D \psi(\sigma)| \sigma \rangle,
$$
where $D :=\{i \in \Lambda_L | s_i^+ =-1 \} $.
Note that
$\psi(\sigma) =1$ for $h=0$, since the ground state is then given by the spin configuration $s^+ \in \Sigma_\Lambda^+$ defined in Lemma \ref{Lem2}.
The eigenvalue equation defined by
$$
\tilde H_\Lambda(\sigma, h,\bm J) |GS \rangle =E_0 |GS \rangle
$$
is written as
$$
-( \sum_{b\in B_L} J_b \sigma_b^x +h\sum_{i\in \Lambda_L} \sigma_i^z ) |GS \rangle = E_0 |GS\rangle.
$$
If $\sigma_i ^x |\sigma \rangle = | \tau \rangle$, then $\tau_i = - \sigma_i $ and $\tau _j=\sigma_j$ for $j\neq i$, so that
$$
\sigma_b^x| \sigma\rangle =\sigma_i^x \sigma_j^x| \sigma\rangle
= |\sigma^{(i,j)} \rangle,
$$
where $\sigma^{(i,j)}$ denotes a spin configuration replaced by $(\sigma_i , \sigma_j) \to (-\sigma_i, -\sigma_j)$.
This eigenvalue equation can be represented in terms of $\psi(\sigma)$:
\begin{equation}
\sum _{b\in B_L} J_b\sigma_D^{(b)} \psi(\sigma^{(b)}) + \sum_{i\in \Lambda_L}h \sigma_i \sigma_D \psi(\sigma) = -E_0\sigma_D \psi(\sigma).
\end{equation}
Dividing by $\sigma_D \psi(\sigma)$, we obtain
\begin{equation}
\sum _{b\in B_L} J_b \frac{\sigma_D^{(b)}\psi(\sigma^{(b)})}{\sigma_D\psi(\sigma)} + \sum_{i\in \Lambda_L}h \sigma_i = -E_0.
\label{eigeneqpsi}
\end{equation}
To obtain the Kirkwood-Thomas equation for the ground state, represent the function $\psi(\sigma)$ in terms of
a complex valued function $g(X)$ of an arbitrary subset $X \subset \Lambda_L' $,
\begin{equation}
\psi(\sigma) = \exp \Big[-\frac{1}{2} \sum_{X \subset \Lambda_L'} g(X) \sigma_X \Big].
\end{equation}
Note the following relations
\begin{equation}
\psi(\sigma^{(b)}) = \exp \Big[ -\frac{1}{2}\sum_{X \subset \Lambda_L'} g(X) \sigma_X +\sum_{X \subset \Lambda_L'} I(b \in \partial X)g(X) \sigma_X \Big].
\end{equation}
$$
\frac{\psi(\sigma^{(b)})}{\psi(\sigma)} = \exp \Big[ \sum_{X \subset \Lambda_L'} I(b \in \partial X)g(X) \sigma_X \Big],
$$
where a set $\partial X$ is defined by
$$
\partial X :=\{ \{i,j\} \in B_L | i\in X, j\notin X \ {\rm or} \ j\in X, i\notin X \}.
$$
Note also
$$
\frac{\sigma^{(b)}_D}{\sigma_D}=s_b^+ .
$$
These and the eigenvalue equation (\ref{eigeneqpsi}) give
\begin{equation}
\sum _{b\in B_L} J_bs_b^+ \exp \Big[\sum_{X \subset \Lambda_L'} I(b \in \partial X)g(X) \sigma_X \Big] + \sum_{i\in \Lambda_L}h \sigma_i = -E_0.
\end{equation}
We expand the exponential function in a power series. The first-order term is
\begin{equation}
\sum _{b\in B_L} J_b s_b^+ \sum_{X \subset \Lambda_L'} I(b \in \partial X)g(X) \sigma_X=
\sum_{X \subset \Lambda_L'} \sum _{b \in \partial X} J_bs_b^+ g(X) \sigma_X ,
\end{equation}
then we have
\begin{eqnarray}
&&\sum _{b\in B_L} J_bs_b^+ +E_0 + \sum_{X \subset \Lambda_L'} \sum _{b \in \partial X} J_b s_b^+ g(X) \sigma_X\nonumber \\
&&+\sum _{b\in B_L} J_b s_b^+ \exp^{(2)} \Big[ \sum_{X \subset \Lambda_L'} I(b \in \partial X)g(X) \sigma_X \Big] + \sum_{i\in \Lambda_L}h \sigma_i =0,
\end{eqnarray}
where
$$
\exp^{(2)} x := e^x-1-x =\sum_{k=2}^\infty\frac{x^k}{k!}.
$$
The orthonormalization property (\ref{on}) gives
\begin{equation}
E_0 =-\sum _{b\in B_L} J_bs_b^+ -\sum _{c\in B_L} J_c s_c^+\sum_{k=2}^\infty \frac{1}{k!} \sum_{ X_1, \cdots, X_k\subset\Lambda_L' }
\delta_{
X_1 \triangle \cdots \triangle X_k, \emptyset} \prod_{l=1}^k g(X_l)I(c\in \partial X_l),
\end{equation}
and
\begin{eqnarray}
g(X) &=& \frac{-1}{\sum _{b\in \partial X} J_bs_b^+} \left[\sum _{c\in B_L} J_c s_c^+\sum_{k=2}^\infty \frac{1}{k!} \sum_{ X_1, \cdots, X_k \subset\Lambda_L'}
\delta_{
X_1 \triangle \cdots \triangle X_k, X} \prod_{l=1}^k g(X_l) I(c\in \partial X_l)+h \delta_{|X|, 1} \right], \nonumber \\
&=:& F(g)(X)
\end{eqnarray}
where $X \triangle Y:= (X \cup Y) \setminus (X\cap Y) $ for arbitrary sets $X, Y$, and we have used $\sigma_X \sigma_Y=\sigma_{X\triangle Y}.$
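For completeness, the last identity follows directly from $(\sigma_i)^2=1$:
$$
\sigma_X \sigma_Y = \prod_{i\in X\cap Y} \sigma_i^2 \prod_{i \in X\triangle Y} \sigma_i = \sigma_{X \triangle Y}.
$$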
The first term in the ground state energy
is identical to the energy of the ground state spin configuration $s^+$ at $h=0$,
and the excitation energy of a spin configuration $\sigma$ at $h=0$
is $2 \sum_{b \in \partial X} J_b s_b^+ $, where
$X:=\{ i \in \Lambda_L| \sigma_i \neq s^+_i \}$. Lemma \ref{Lem2} guarantees the positivity
$ \sum_{b \in \partial X} J_b s_b^+ >0.$
To prove uniqueness of the ground state in the transverse field EA model with a given $\bm J$
for sufficiently small $h$, define a norm for the function $g(X)$ by
\begin{equation}
\| g \| := \sup_{c\in B_L}\sum_{X \subset \Lambda_L} I(c\in \partial X) \sum_{b\in \partial X}J_b s_b^+ |g(X)|(|h| M)^{-w(X)},
\label{norm}
\end{equation}
where $w(X)$ is the number of elements of the smallest connected set that contains $X$.
We say that a set $X$ is connected, if for any $i,j \in X$ there exists a sequence $i_1, i_2, \cdots, i_n \in X$, such that
$i_1=i$, $i_n =j$ and $\{i_k, i_{k+1}\} \in B_L$ for $k=1, \cdots, n-1$.
Then, the following lemma can be proven.
{\lemma
There exists a constant $M>0$ such that, with $\delta:=\frac{4}{M}$
and $|h| M \leq 1$,
\begin{equation}
\|F(g)-F(g') \| \leq \| g-g' \|/2, \ \ {\rm for}\ \| g\|, \| g'\| \leq \delta,
\end{equation}
and $\| F(g) \| \leq \delta$ for $\|g\|\leq \delta.$
\\
\noindent
Proof. \label{Lem3}} To lighten the notation, define $\triangle_k := X_1 \triangle \cdots \triangle X_k$, and
write $b:X \Leftrightarrow b \in \partial X$.
The norm $\| F(g)-F(g') \|$ can be estimated as follows:
\begin{eqnarray}
&&\| F(g)-F(g')\| \\
&&=\Big| \sup_{c \in B_L}\sum_{b\in B_L} J_b s_b^+ \sum_{c :X\subset\Lambda_L'}
\sum_{k=2}^\infty \frac{1}{k!} \sum_{ b:X_1, \cdots, X_k \subset\Lambda_L'}
\delta_{\triangle_k, X} [ \prod_{l=1}^k g(X_l) - \prod_{l=1}^k g'(X_l) ] (|h|M)^{-w(X)}\Big|\nonumber\\
&&\leq
\sup_{c \in B_L} \sum_{b\in B_L} J_b s_b^+
\sum_{k=2}^\infty \frac{1}{k!} \sum_{b: X_1, \cdots, X_k \subset\Lambda_L', c : \triangle_k}
\Big| \prod_{l=1}^k g(X_l) - \prod_{l=1}^k g'(X_l) \Big| (|h|M)^{-w(\triangle_k)} \nonumber\\
&&\leq
\sup_{c \in B_L} \sum_{b\in B_L} J_b s_b^+
\sum_{k=2}^\infty \frac{1}{(k-1)!} \sum_{b: X_1, \cdots, X_k \subset\Lambda_L', c :X_1}
\Big| \prod_{l=1}^k g(X_l) - \prod_{l=1}^k g'(X_l) \Big| (|h|M)^{-w(\triangle_k)}. \nonumber
\end{eqnarray}
Here $\partial \triangle_k \subset \cup_{ l=1}^k \partial X_l$ and the permutation symmetry of the summation over $X_1, \cdots, X_k$
have been used.
The inequalities
$
w(\triangle_k) \leq \sum_{l=1}^k w(X_l),
$
and
$$
\Big| \prod_{l=1}^k g(X_l) - \prod_{l=1}^k g'(X_l) \Big| \leq \sum_{l=1} ^k \prod_{j=1}^{l-1} |g(X_j)||g(X_l)-g'(X_l) |\hspace{-2mm} \prod_{j=l+1} ^k\hspace{-2mm} |g'(X_j)|
$$
enable us to evaluate the norm as follows:
\begin{eqnarray}
&&\| F(g)-F(g')\| \\
&&\leq
\sup_{c \in B_L}\sum_{b\in B_L} J_b s_b^+\sum_{k=2}^\infty \frac{1}{(k-1)!} \hspace{-1mm}
\sum_{b : X_1, \cdots, X_k, c: X_1}
\sum_{l=1} ^k \prod_{j=1}^{l-1} |g(X_j)||g(X_l)-g'(X_l) |\hspace{-2mm} \prod_{j=l+1} ^k\hspace{-2mm} |g'(X_j)|(|h|M)^{-w(\triangle_k)} \nonumber
\\
&&\leq \sup_{c \in B_L}
\sum_{b\in B_L} J_b s_b^+\sum_{k=2}^\infty \frac{1}{(k-1)!} \hspace{-1mm}
\sum_{b : X_1, \cdots, X_k, c:X_1}
\sum_{l=1} ^k \prod_{j=1}^{l-1} |g(X_j)||g(X_l)-g'(X_l) |\hspace{-2mm} \prod_{j=l+1} ^k\hspace{-2mm} |g'(X_j)|(|h|M)^{-\sum_{j=1}^kw(X_j)} \nonumber
\\
&&=\sup_{c \in B_L}
\sum_{b\in B_L} J_b s_b^+\sum_{k=2}^\infty \frac{1}{(k-1)!} \hspace{-1mm}
\sum_{b : X_1, \cdots, X_k, c:X_1}
\sum_{l=1} ^k \prod_{j=1}^{l-1} |g(X_j) |(|h| M)^{-w(X_j)}\nonumber
\\
&& \hspace{5mm} \times |g(X_l)-g'(X_l) |(|h|M)^{-w(X_l)}\hspace{-2mm} \prod_{j=l+1} ^k\hspace{-2mm} |g'(X_j)|(|h| M)^{-w(X_j)}\nonumber\\
&&= \sup_{c \in B_L}
\sum_{k=2}^\infty \frac{1}{(k-1)!} \hspace{-1mm}
\sum_{l=1} ^k\sum_{b\in B_L} J_b s_b^+ \sum_{X_1:b,c} |g(X_1) |(|h| M)^{-w(X_1)}
\prod_{j=2}^{l-1}\sum_{X_j:b} |g(X_j) |(|h| M)^{-w(X_j)} \nonumber \\
&&\hspace{5mm} \times \sum_{X_l:b}|g(X_l)-g'(X_l) |(|h|M)^{-w(X_l)}\prod_{j=l+1} ^k\hspace{-0mm} \sum_{X_j:b}|g'(X_j)|(|h| M)^{-w(X_j)}\nonumber\\
&&\leq
\|g-g'\|\sum_{k=2}^\infty \frac{1}{(k-1)!} \hspace{-1mm}
\sum_{l=1} ^k \prod_{j=1}^{l-1}\| g \| /\Delta\prod_{j=l+1} ^k\| g'\|/\Delta
= \|g-g'\|\sum_{k=2}^\infty \frac{1}{(k-1)!} \sum_{l=1} ^k (\| g \|/\Delta)^{l-1} (\|g'\|/\Delta)^{k-l} \nonumber \\
&&\leq \|g-g'\| \sum_{k=2} ^\infty \frac{k (\delta/\Delta)^{k-1}}{(k-1)!} = K \| g-g'\|,
\nonumber
\end{eqnarray}
where the averaged energy gap $\Delta>0$ over excited states above the unique ground state of the unperturbed
model is defined by
\begin{eqnarray}
\Delta &:=& \Big[\sup_{c\in B_L}\sum_{c:X \subset \Lambda_L} |g(X)| (|h|M)^{-{w(X)}} \Big]^{-1}\sup_{c\in B_L}
\sum_{c:X \subset \Lambda_L} \sum_{b \in \partial X} J_bs_b^+ |g(X)| (|h|M)^{-{w(X)}}\nonumber \\
&=& \Big[\sup_{c\in B_L} \sum_{c:X \subset \Lambda_L} |g(X)| (|h|M)^{-{w(X)}} \Big]^{-1}\| g \|,
\end{eqnarray}
and $K:=e^{\delta/\Delta} (1+\delta/\Delta ) -1$.
The condition $K = \frac{1}{2}$ fixes the ratio $\delta/\Delta$, and the resulting inequality
$$
\sup_{c\in B_L} \sum_{c:X \subset \Lambda_L} |g(X)| (|h|M)^{-{w(X)}} \leq \frac{\delta}{\Delta}
$$
is then satisfied for sufficiently large $M>0$.
To obtain the bound on $\| F(g) \|$, let us evaluate $\| F(0)\|$ first. Since
$$
F(0) (X)= \frac{-h \delta_{|X|,1}}{ \sum_{b \in \partial X} J_b s_b^+ },
$$
the norm is given by
$$
\| F(0)\| = \sup_{c\in B_L}\sum_{X \subset \Lambda_L'} I(c\in \partial X)
|h| \delta_{|X|,1}(|h| M)^{-w(X)} = 2 M^{-1} = \frac{\delta}{2}.
$$
Hence,
$$
\| F(g) \| = \| F(g) -F(0) +F(0)\| \leq \| F(g) -F(0)\| +\| F(0)\| \leq \frac{\| g \|}{2} + \frac{\delta}{2}\leq \delta.
$$
This completes the proof of Lemma \ref{Lem3}. $\Box$
\paragraph{Proof of Theorem \ref{Th1}.} Lemma \ref{Lem3} and the contraction mapping theorem
enable us to prove that the fixed point equation
\begin{equation}
F(g) =g
\end{equation}
has a unique solution $g$ satisfying $\| g\| \leq \delta $. This completes the proof of Theorem \ref{Th1}. $\Box$
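As a side remark (a standard consequence of the contraction property, not part of the original argument), the solution can be constructed explicitly by Picard iteration: setting $g_0:=0$ and $g_{n+1}:=F(g_n)$, Lemma \ref{Lem3} keeps every iterate in the ball $\|g_n\|\leq \delta$ and yields
$$
\|g_{n+1}-g_n\| \leq \frac{1}{2}\|g_n-g_{n-1}\| \leq 2^{-n} \|F(0)\| = 2^{-n-1}\delta,
$$
so the sequence $(g_n)_{n\geq 0}$ is Cauchy in the norm (\ref{norm}) and converges geometrically to the unique fixed point.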
\section{Introduction}
Constraints on the nature and properties of dark energy can be obtained from an accurate description of the Universe's expansion history. Such a description can be provided by standard rulers: objects or properties of known size for which we can easily retrieve the distance-redshift relation \citep{Weinberg:2013}. Baryon acoustic oscillations (BAOs), frozen relics of the epoch when matter and radiation were coupled together, are promising standard ruler candidates and could help us understand more about the nature of dark energy \citep[see][]{Albrecht06}. The current scenario is that, for much of cosmic history, matter dominated over dark energy and the expansion indeed slowed, enabling galaxies and large-scale structures (LSSs) to form. A billion years ago, matter became sufficiently dilute due to expansion, dark energy became the dominant component of the Universe, and the expansion accelerated.
To date, BAOs have only been detected by performing large galaxy redshift surveys in the optical waveband \citep{Eisenstein05}. However, given the implications of these measurements, it is important that they be confirmed in other wavebands and measured over a wide range of redshifts. The radio band provides a unique and complementary observational window for the understanding of dark energy via the redshifted 21 cm neutral hydrogen emission line from distant galaxies. The \textbf{B}aryon Acoustic Oscillations from \textbf{I}ntegrated \textbf{N}eutral \textbf{G}as \textbf{O}bservations (BINGO) telescope is a proposed new instrument designed specifically to observe such a signal and to provide a new insight into the Universe \citep{Battye13,2020_project}.
The telescope design consists of two dishes in a compact configuration with no moving parts and will operate in the frequency range from 0.98\,GHz to 1.26\,GHz (corresponding to a co-moving distance of 380--1280\,Mpc/$h$ assuming a $\Lambda$CDM cosmology; \citealt{Planck2018a}). It will map the cosmic web in three dimensions without detecting individual galaxies, a technique called intensity mapping \citep[IM;][]{Peterson06}. Instead of cataloging many individual galaxies, one can study the LSS directly by detecting the aggregate emission from many galaxies that occupy large $\approx$ 100 Mpc$^{3}$ voxels. The unresolved 21 cm signal from the galaxies is therefore similar to a low-resolution galaxy survey and can be used as a low-z cosmological probe via BAO measurements. The full-width at half-maximum of the BINGO beam is 40\,arcmin, allowing structures of angular sizes corresponding to a linear scale of around 150\,Mpc to be resolved (BAOs manifest themselves as a small but detectable excess of galaxies with separations of \mbox{$\approx$ 150\,Mpc} in the chosen redshift range).
Large-scale \textsc{Hi} fluctuations above redshift $z$ = 0.1 have been unambiguously detected only in cross-correlation with existing surveys of optically selected galaxies \citep{Lah09, Chang10, Masui13}. Cross-correlation between the cosmological 21 cm signal and optical surveys provides potentially useful information on the statistical properties of the \textsc{Hi} distribution \citep{Switzer15}.
The cross-correlation has the advantage, in comparison to ``autocorrelation'' studies, that the measured statistics are less sensitive to contaminants such as foregrounds, systematics, and noise.
The detection of the redshifted 21 cm radiation will provide valuable information about the post-recombination history of the Universe, including the Dark Ages. Information can also be extracted about the formation of the first ionizing sources and the subsequent reionization of the intergalactic medium (IGM) due to these sources (for a comprehensive review, see \citealt{Furlanetto06} and \citealt{Pritchard12}).
Broadband foreground emission poses the greatest challenge to 21 cm IM and needs to be characterized carefully before the technique becomes a sensitive probe of the post-recombination epoch ($z < 160$). The foregrounds are expected to be predominantly Galactic and approximately four orders of magnitude larger than the cosmological signal (at low frequencies, synchrotron emissions from our Galaxy and other radio galaxies are the dominant foregrounds; \citealt{Sazy15}). Therefore, one of the key observational challenges in detecting the cosmological 21 cm signal is modeling and removing Galactic foregrounds at low frequencies. In doing this, the BINGO pipeline adopts the component separation strategies that use the spatial structure of foregrounds to separate them from the cosmological signal. Indeed, the cosmological 21 cm signal is expected to have structure in frequency space, while the foregrounds are expected to mostly be spectrally smooth.
Many of these techniques were first developed for cosmic microwave background (CMB) data analysis and are now being extended to IM, where there is the extra dimension of frequency \citep{Ansari12, Switzer13}.
There are still spectrally un-smooth emission sources, such as radio frequency interference (RFI) from terrestrial and non-terrestrial sources, that can dominate over Galactic and extragalactic foregrounds. Radio frequency interference can be minimized through software removal, by choosing radio-quiet locations, and through band selection.
This work aims to test and optimize the constructive and operational parameters of the telescope, as well as the data analysis process itself. A set of computational routines and procedures (pipeline) that simulate the BINGO operation has been implemented. Its input is composed of maps of different emission mechanisms, produced by theoretical models or observations, and inherent noise properties of the equipment and the environment. The number and arrangement of horns, optical design, and receiver characteristics are also input parameters of the radio telescope. The IM pipeline produces, as output, time-ordered data sets (TODs) and antenna temperature maps that simulate the signal picked up by the instrument during a given period of operation. Next, these output data are passed through a component separation process for the recovery of the \textsc{Hi} component.
The paper proceeds as follows: In Sect. \ref{sec:instr} we give an overview of the instrument. In Sect. \ref{sec:simulations} the pipeline is briefly introduced along with a complementary discussion of the input models, the latest configuration updates, and the component separation method ({\tt GNILC}). Then, in Sect. \ref{sec:strategy} we study the observational efficiency of different feed horn arrangements. This is followed by the simulations and the component separation results and analysis (Sect. \ref{sec:gnilc}). Finally, we present the conclusion, with several future prospects.
Throughout this paper we use cosmological parameters from \cite{Planck2018a}. This is the fourth (IV) of a series of papers describing the BINGO project. The theoretical and instrumental projects are in papers I and II \citep{2020_project,2020_instrument}, the optical design in paper III \citep{2020_optical_design}, the component separation and correlations in paper V \citep{2020_component_separation}, the simulations for a mock 21 cm catalog are described in paper VI \citep{2020_mock_simulations}, and the cosmological forecasts for BINGO in paper VII \citep{2020_forecast}.
\section{The instrument}
\label{sec:instr}
The telescope will be built on a hill near Aguiar, Para\'{\i}ba (northeastern Brazil). Earlier concepts of the BINGO can be found in \cite{Battye13} and \cite{Wuensche:2018}. The primary dish will be a 40\,m diameter paraboloid and the secondary, a 34\,m-diameter hyperboloid. The particular mirror configuration chosen is the crossed Dragone, also known as Compact Range Antenna. Such a design has very low geometric aberrations, leading to an instantaneous field-of-view of $\approx$ 88 deg$^{2}$. It will provide excellent polarization performance and very low side-lobe levels required for \textsc{Hi} IM.
A detailed description of the project and the instrument is available in companion papers I and II \citep{2020_project,2020_instrument}.
\begin{table}
\centering
\caption{Summary of the BINGO Phase 1 telescope parameters.}
\begin{tabular}{c||c} \toprule
{\bf Description} & {\bf Value} \\ \midrule \midrule
Dish diameters (m) & 40 (primary) \\
& 34 (secondary) \\ \midrule
Resolution ($^{\circ}$) & $\approx$ 0.67 \\ \midrule
Focal length (m) & 63.2 \\ \midrule
Frequency range (MHz) & 980--1260 \\ \midrule
Channel resolution (MHz) & 9.33 \\ \midrule
Redshift interval & 0.127--0.449 \\ \midrule
Number of feeds $n_{f}$ & 28 \\ \midrule
Central focal plane array elevation ($^{\circ}$) & $\approx$ 82 \\ \midrule
Azimuth ($^{\circ}$) & 0 (North) \\ \midrule
Telescope effective area (m$^{2}$) & $\approx 1120$ \\ \midrule
Pixel solid angle (deg$^2$) - $\Omega_{pix}$ & 0.35 \\ \midrule
Field of view (deg$^{2}$) & 14.75 $\times$ 6.0 \\ \midrule
Survey area $\Omega_{sur}$ (deg$^{2}$) & $\approx$ 5324 \\ \midrule
System temperature $T_{\textsc{sys}}$ (K) & $\approx$ 70 \\ \bottomrule
\end{tabular}
\label{tab_parameters}
\end{table}
In this paper we compare the two feed horn arrays considered in the current optical design \citep[see][]{2020_optical_design}: the first one (hexagonal) is composed of 31 units (Fig. \ref{fig:arranjo_31}),
while the second one (double-rectangular) is made up of 28 units
(Fig. \ref{fig:arranjo_28}).
One of the purposes of this work is to investigate, through simulations, which arrangement best meets the scientific goals defined for BINGO.
Each horn will be secured in the focal plane by a hexagonal casing, which works both as a transportation box and assembly cell. It will encapsulate the horn, transitions, polarizer, magic tee, and the receiver box \citep{2020_instrument}. With this hexagon concept, it is likely that no additional external structure will be needed to position the horn array structure. The hexagonal casing will be 2400 mm tall and will allow moving the horn in the elevation $y$ and azimuth $x$ directions, as well as longitudinally $z$ (along the horn optical axis). By means of a pivot attached to the hexagonal case, where the horn is mounted, it will be possible to fine-tune the positioning of the horns. The aim is to reproduce the desired curvature of the focal plane by translating and tilting the horns across the array. This configuration will improve the sky coverage and final science products. In the future, the structure can accommodate up to 56 horns for the double-rectangular array (by adding two extra columns on both sides) and 49 for the hexagonal one (by adding two columns on both sides with five and four horns, respectively, in such a way as to form a hexagon). This will double the redundancy (for the double-rectangular configuration) in the area covered and will increase the sensitivity by $\sqrt{2}$ per year.
\section{Simulations and data processing}
\label{sec:simulations}
In this section we briefly describe the data processing and the imaging algorithm used to obtain the BINGO maps. We follow the procedure described by \citet{Sazy15} for an earlier version of BINGO.
To assess the reliability with which the cosmological signal can be extracted from the observed data, an end-to-end simulation pipeline has been developed by the BINGO collaboration,
allowing the testing of various aspects of the data analysis pipeline including foreground removal. The input is composed of maps of different emission mechanisms, produced by theoretical models or by observations, as well as by the inherent noise of the instrument and contamination from the environment. Other inputs are features of the telescope, such as the number and arrangement of horns, optical design and receiver characteristics. The pipeline produces as output a time series, which can be turned into maps that simulate the signal picked up by the instrument during a given period of operation.
\begin{figure}[t]
\centering
\includegraphics[height=8.5cm]{figures/Arranjo_31.jpg}
\caption{Hexagonal feed horns array ($\approx$ 9\,m $\times$ 17\,m). The numbers represent the positions of the horns along the $y$ axis, from the center of the field-of-view, given in mm.}
\label{fig:arranjo_31}
\end{figure}
To detect BAOs in the \textsc{Hi} signal, we will need to remove the contributions from much brighter emission coming from our Galaxy and many extragalactic sources (foregrounds). The most relevant emissions at $\approx$ 1 GHz are a combination of extragalactic point sources and diffuse Galactic synchrotron emission, which taken together is nearly four orders of magnitude larger ($\approx$ 5 K rms at 1 GHz) than the 21 cm signal fluctuations ($\approx$ 200 $\mu$K rms) outside of the Galactic plane. Therefore, the output data are passed through a component separation process to recover the cosmological \textsc{Hi} component.
In Fig. \ref{fig:flow} a flowchart of the BINGO simulation pipeline is shown. In blue are the ``configurable'' parameters of the simulation: instrument specifications (beam shape and number of horns, observation time, knee frequency, number of channels, etc.) and component separation (method and parameters). In red are the parameters that have already been measured (the instrument noise module and the CMB temperature are already included in the pipeline). In yellow are the modules that make use of known models and observations to produce emission maps used as input in the simulations. The RFI part of the pipeline currently only deals with simulating the emissions from the Global Navigation Satellite System (GNSS), and a new module is currently under development to include fixed location terrestrial RFI.
The atmosphere appears as a source of large-scale black-body emission with a brightness temperature of a few K at around 1\,GHz.
The atmospheric brightness temperature has been calculated according to the model in \citet{paine19}, and we found, assuming an air temperature of 284\,K, that the value is approximately 4\,K at the zenith.
\begin{figure}[t]
\centering
\includegraphics[height=8.5cm]{figures/Arranjo_28}
\caption{Double-rectangular feed horns array ($\approx$ 7.8\,m $\times$ 18.6\,m). The numbers represent the positions of the horns along the $y$ axis, from the center of the field-of-view, given in mm.}
\label{fig:arranjo_28}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=.8\textwidth]{figures/flowchart}
\end{center}
\caption{BINGO mission simulation flowchart.
}
\label{fig:flow}
\end{figure*}
The pipeline operation makes use of a mission simulator, which processes all the input information described above to produce TODs. In the case of BINGO, this data set consists of the temperature measurements in a given channel $i$ in the range 980--1260\,MHz, in a given celestial coordinate ($\alpha,\delta$) and at a given time of observation $t_{\rm{obs}}$. As BINGO is a telescope without moving parts, each beam will revisit the same coordinate ($\alpha$,$\delta$) $n$ times, where $n$ depends on the instrument operating time. Then, for every frequency channel in each horn, the TOD is processed into a map using the {\tt HEALPix} pixelization scheme \citep{Gorski05}.
By default, each pixel in the {\tt HEALPix} map is filled with the mean value of all measurements falling in that pixel,
\begin{equation}
{T_{\rm{map}}}\left( {\alpha ,\delta } \right) = \frac{1}{N_{\rm hits}(\alpha,\delta)}\sum\limits_{t} {{T_t}\left( {{\alpha _t},{\delta _t}} \right)}\,,
\end{equation}
where $t$ runs over the measurement times whose pointings $(\alpha_t,\delta_t)$ fall in the pixel, $T_{t}$ is the temperature measured at time $t$, and $N_{\rm hits}(\alpha,\delta)$ is the number of such measurements. After this step, which is done separately for each horn, the horn maps are combined into the final cube of BINGO maps.
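A minimal sketch of this binning step (illustrative only; the actual pipeline code and its interfaces differ) can be written with {\tt healpy} and {\tt numpy}:
\begin{verbatim}
import numpy as np
import healpy as hp

def bin_tod_to_map(ra_deg, dec_deg, tod, nside=128):
    """Bin a time-ordered data stream into a HEALPix map.

    Each pixel of the output holds the mean of all TOD samples whose
    pointing falls inside it; pixels never hit are set to hp.UNSEEN.
    """
    npix = hp.nside2npix(nside)
    pix = hp.ang2pix(nside, ra_deg, dec_deg, lonlat=True)
    tsum = np.bincount(pix, weights=tod, minlength=npix)
    hits = np.bincount(pix, minlength=npix)
    tmap = np.full(npix, hp.UNSEEN)
    seen = hits > 0
    tmap[seen] = tsum[seen] / hits[seen]
    return tmap, hits
\end{verbatim}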
Eventually, as a separate task from the operation of the pipeline, the set of maps produced by the simulation is processed through a component separation algorithm to recover the cosmological \textsc{Hi} signal from the data produced by the pipeline.
This work aims to simulate the results to be obtained by the Phase 1 BINGO instrument configuration and to test a new component separation method developed for the Planck mission ({\tt GNILC}), following the results obtained in \cite{Olivari16}. A number of different configurations of the instrument are investigated.
\subsection{The cosmological signal}
\label{sec:FLASK}
The idealized observed brightness temperature of the sky, $T_{\rm sky}$($\nu$,$\phi$), at frequency $\nu$ and direction $\phi$, is given by
\begin{equation}
\begin{aligned}
T_{\rm{sky}}(\nu ,\phi){\rm{ }} = {T_{\rm{gal}}}(\nu,\phi){\rm{ }} + {T_{\rm{eg}}}(\nu,\phi){\rm{ }} + {T_{\rm{CMB}}}(\nu,\phi){\rm{ }} \\
+ {T_{\rm{atm}}}(\nu,\phi){\rm{ }} + {T_{\rm{COSMO}}}(\nu,\phi)\,,
\end{aligned}
\end{equation}
where $T_{\rm{gal}}$ is the diffuse Galactic radiation, $T_{\rm{eg}}$ is the emission from extragalactic sources, $T_{\rm{CMB}}$($\nu$,$\phi$) is the CMB temperature, $T_{\rm{atm}}$($\nu$,$\phi$) is the atmosphere emission, and $T_{\rm{COSMO}}$ is the cosmological \textsc{Hi} emission. In what follows we briefly describe the cosmological \textsc{Hi} emission model used for the simulations.
The \textsc{Hi} brightness temperature has been simulated with the \textit{Full-sky Lognormal Astro-fields Simulation Kit} \citep[{\tt FLASK};][]{Xavier16}. {\tt FLASK} can generate fast full-sky simulations of cosmological LSS observables such as multiple matter density tracers (galaxies, quasars, dark matter halos), CMB temperature anisotropies and weak lensing convergence and shear fields. The multiple fields can be generated tomographically in an arbitrary number of redshift slices and all their statistical properties (including cross-correlations) are determined by the angular power spectra supplied as input and the multivariate lognormal (or Gaussian) distribution assumed for the fields. After generating the fields, {\tt FLASK} can apply selection functions and noise to them.
The \textsc{Hi} emission at low redshifts ($z \lesssim 0.5$, much later than the end of the reionization era)
is assumed to be confined to discrete elements such as galaxies, bubbles, and filaments. In this case, the \textsc{Hi} signal is characterized by a mean \textsc{Hi} brightness temperature given by \citep{Hall13}
\begin{equation}
{T_b} = 188{\text{ }}h{\text{ }}{\Omega _{{\textsc{Hi}}}}\frac{{{{\left( {1 + z} \right)}^2}}}{{E\left( z \right)}}{\text{mK}}\,,
\end{equation}
where $E(z)=H(z)/H_0$, $H_0=100\,h$ km\,s$^{-1}$\,Mpc$^{-1}$ (where $h$ is the Hubble parameter), and $\Omega _{\textsc{Hi}}$ is the density parameter for \textsc{Hi}. The same \textsc{Hi} assumptions are adopted in all papers of this BINGO series.
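For a rough illustration (using $h = 0.674$ from the adopted cosmology and a purely illustrative value $\Omega_{\textsc{Hi}} = 6\times10^{-4}$, which is our assumption and not a number quoted in this series), at $z=0.3$ one has $E(0.3)\simeq 1.17$, so that $T_b \simeq 188 \times 0.674 \times 6\times10^{-4} \times 1.3^2/1.17 \simeq 0.11$\,mK, i.e., of the order of a hundred $\mu$K, comparable to the fluctuation level quoted below.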
The fluctuations around this mean are due to differences in the densities of structures. Since the \textsc{Hi} signal is assumed to be a tracer of the dark matter, these fluctuations can readily be calculated.
Using the \textsc{Hi} angular power spectra,
one could produce Gaussian full-sky maps of the \textsc{Hi} signal with the help of the {\tt synfast} routine of {\tt HEALPix}. With the {\tt FLASK} software, however, we can assume a log-normal distribution for the \textsc{Hi} signal. In doing this, we used the $C_\ell$s from the \textit{Unified Cosmological Library for $C_\ell$s} code \citep[{\tt UCLCL};][]{mcleod2017joint,Loureiro:2018qva} as input for {\tt FLASK}. {\tt UCLCL} is a library for computing the two-point angular correlation functions of various cosmological fields that are relevant to LSS surveys. It uses the formalism of angular two-point correlations and derives exact analytical equations for the angular power spectrum of cosmological observables. The auto- and cross-correlations between different observables as well as different galaxy populations (bins) can also be computed.
The simulated full-sky \textsc{Hi} signal for this work includes 30 redshift bins of 21 cm intensity fields for BINGO, equally spaced in frequency, following the fiducial BINGO parameters.
The \textsc{Hi} density fields are generated from discrete matter tracers (galaxies) from the DES photometric survey \citep{Flaugher05}. The photo-z distribution for DES galaxies has been estimated from \citet{sanchez14}. The mean temperature of the \textsc{Hi} signal fluctuations in the BINGO redshift range is $\approx$ 200\,$\mu$K. We set the {\tt HEALPix} resolution of the maps to $N_{\rm{side}}$ = 128, which corresponds to a pixel size of 27 arcmin.
In Fig. \ref{fig:maps} the resulting \textsc{Hi} map from the {\tt FLASK} code at 1.1\,GHz (the central frequency of BINGO bandwidth) is shown.
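To make the lognormal step concrete, the following minimal Python sketch (an illustration under our own simplifications, not the actual {\tt FLASK} implementation; {\tt FLASK} also converts the target $C_\ell$ into that of the underlying Gaussian field) draws a Gaussian map from an input power spectrum with {\tt healpy} and applies a zero-mean lognormal transform:
\begin{verbatim}
import numpy as np
import healpy as hp

def lognormal_hi_map(cl_gauss, t_bar_mk=0.2, nside=128, seed=42):
    """Draw a Gaussian map g from an input angular power spectrum and
    apply delta = exp(g - var(g)/2) - 1, which has zero mean and is
    bounded below by -1, then scale by an assumed mean temperature."""
    np.random.seed(seed)
    g = hp.synfast(cl_gauss, nside)
    delta = np.exp(g - np.var(g) / 2.0) - 1.0
    return t_bar_mk * (1.0 + delta)  # brightness temperature in mK
\end{verbatim}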
Due to the discrete tracer field that emits the \textsc{Hi} signal, the measured auto-spectra have a shot noise contribution as well as a clustering contribution. To correctly predict the overall \textsc{Hi} signal, this contribution must be accounted for in our \textsc{Hi} simulation. The shot noise depends on the abundance of galaxies observable in our hypothetical survey; assuming a comoving number density of sources $n = \frac{{dN}}{{dV}} = 0.03\,{h^3}\,{\rm Mpc}^{-3}$, the angular density of the sources can be expressed as
\begin{equation}
\bar N\left( z \right) = 0.03{h^3}\frac{c}{{{H_0}}}\int {{\chi^2}(z)} \frac{{dz}}{{E(z)}}\, .
\end{equation}
Then, the shot noise contribution to the 21 cm power spectrum is subtracted, assuming a Poisson behavior,
\begin{equation}
C_\ell ^{\rm shot} = \frac{1}{{\bar N\left( z \right)}}\,.
\end{equation}
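As an illustrative numerical cross-check of the two expressions above (a sketch under assumed Planck-like parameters; the function names and flat-$\Lambda$CDM shortcuts are ours, not part of the BINGO pipeline):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

H0, OM = 67.4, 0.315          # assumed H0 [km/s/Mpc] and Omega_m
C_KMS, H = 299792.458, H0 / 100.0

def efunc(z):
    return np.sqrt(OM * (1.0 + z)**3 + (1.0 - OM))

def chi_mpc(z):
    """Comoving distance in Mpc for a flat LCDM cosmology."""
    return (C_KMS / H0) * quad(lambda zp: 1.0 / efunc(zp), 0.0, z)[0]

def nbar_sr(z1, z2, n_com=0.03):
    """Sources per steradian in a shell, for n = n_com h^3 Mpc^-3."""
    integrand = lambda z: chi_mpc(z)**2 / efunc(z)
    return n_com * H**3 * (C_KMS / H0) * quad(integrand, z1, z2)[0]

def cl_shot(z1, z2):
    return 1.0 / nbar_sr(z1, z2)  # Poisson shot-noise level
\end{verbatim}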
\begin{figure}
\begin{center}
\includegraphics[width=.48\textwidth]{figures/flask_map.jpg}
\includegraphics[width=.48\textwidth]{figures/synchroton_128.jpg}
\includegraphics[width=.48\textwidth]{figures/free_128.jpg}
\includegraphics[width=.48\textwidth]{figures/ame_128.jpg}
\end{center}
\caption{Full-sky maps of the cosmological signal and the different foregrounds for a frequency slice $\nu \approx 1.1$ GHz: (a) extragalactic \textsc{Hi}, (b) synchrotron, (c) free-free, and (d) AME. The stripe defined by the white solid lines is the sky region covered by BINGO. We selected different temperature intervals for the maps to show their features and to allow the comparison of temperature differences at first inspection. Temperatures are given in K.}
\label{fig:maps}
\end{figure}
\subsection{Foregrounds}
\label{sec:Foregrounds}
An accurate understanding of the foreground emission is essential in order to precisely determine the cosmological \textsc{Hi} signal. Usually, the principal source of uncertainty is the contamination by foreground emission from the Galaxy, rather than the instrumental noise itself.
Three different types of Galactic foregrounds have been included in the present version of the pipeline: synchrotron, free-free emission and anomalous microwave emission (AME; Fig. \ref{fig:maps}). Extragalactic radio sources, which are an inhomogeneous mix of radio galaxies, quasars and other objects, are being implemented as a separate module in the code.
However, besides the diffuse Galactic emission and the extragalactic radio sources, the CMB also contributes as a contaminant to the \textsc{Hi} signal. In the BINGO frequency band, the antenna noise temperature and the CMB temperature are of the same order of magnitude ($h\nu /{k_B}T \ll 1$). Therefore, at BINGO frequencies, the CMB radiation represents a nearly constant background (fluctuations of $\approx$ 100 $\mu$K) of 2.7 K. The fluctuations themselves can be removed by using a spatial CMB template from WMAP/Planck \citep{Planck2018a}, which can be subtracted directly from the data (after convolution with the BINGO beam).
For the sake of comparison, in the companion paper V \citep{2020_component_separation} the foreground maps are generated directly from the $Planck$ $Sky$ $Model$ (PSM) package. They include as input synchrotron, free-free and AME (same as the options in this work) as well as the contribution of a background of radio sources, which we considered less important in a first stage due to the angular scale of our pixelization.
In the following subsections, we briefly describe the specific Galactic foregrounds used in this work and how we simulate them.
Further details about the Galactic and extragalactic foregrounds are given in companion paper I \citep{2020_project}.
\subsubsection{Galactic synchrotron}
\label{sec:synchrotron}
It is well known that synchrotron radiation arises from interactions between cosmic ray electrons and magnetic fields in the Galaxy. Since the magnetic fields in our Galaxy extend far to the outskirts of the Galactic plane, synchrotron emission can also be measured at high Galactic latitudes, making it difficult to avoid by only excluding the Galactic plane regions from the analysis. The frequency scaling of the synchrotron emission is often approximated by a power law over a limited range of $\nu$: for a cosmic ray electron energy distribution $N(E) \propto E^{-\epsilon}$, the Rayleigh-Jeans brightness temperature scales as $T \propto \nu^{\gamma}$, with $\gamma = - \left( {\epsilon + 3} \right)/2$. A typical value is $\gamma$\,$\approx -$\,2.5 at radio frequencies \citep{Oliveira08}, steepening to $\gamma$ $\approx -$ 3.0 at frequencies around 10\,GHz. Full-sky continuum maps in the low-frequency range are available, for example, the Haslam map \citep{Haslam82} at 408\,MHz
and the 1.4\,GHz map
by \cite{Reich86}.
For our simulations, we used the reprocessed Haslam 408 MHz all-sky map from \cite{Remazeilles15}. This includes artificially added small-scale fluctuations as described by \citet{Delabrouille13}. We consider that the synchrotron spectral index is spatially variable according to the Giardino model \citep{Giardino02}, which was derived using the full-sky map of synchrotron emission at 408\,MHz from \cite{Haslam82}, the northern hemisphere map at 1420\,MHz from \cite{Reich86} and the southern hemisphere map at 2326\,MHz from \cite{Jonas98}. In the Giardino model, the synchrotron spectral index has a mean value of $-$ 2.9 and a standard deviation of 0.1 (Fig. \ref{fig:synch_index}) and is valid for frequencies $\lesssim$ 2.3\,GHz. Our choice here differs from companion paper V \citep{2020_component_separation}, which uses the PSM synchrotron sky with the synchrotron spectral index distribution produced by \cite{Miville08}.
The observed diffuse synchrotron emission at radio frequencies is distributed across the entire sky as shown in the point-source subtracted all-sky map in Fig. \ref{fig:maps}. Most of the synchrotron emission from the Galaxy is concentrated along the Galactic plane, but large-scale features such as Loop I stretch over around half of the north Galactic sky.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/sync_index}
\caption{Map of the synchrotron spectral index according to the Giardino model.}
\label{fig:synch_index}
\end{figure}
\subsubsection{Free-free}
\label{sec:free}
Since, in the BINGO frequency interval, free-free radio continuum emission is subdominant to other Galactic emission types such as synchrotron emission, it is very challenging to uniquely isolate a map of radio free-free emission. Therefore, generating free-free emission maps of the whole sky mostly relies upon the use of tracer emissions. However, free-free maps can also be produced by component separation techniques (e.g., \citealp{pla_2016}), which could be used here in the future.
At optical wavelengths, the emission from the H$\alpha$ transition can be used as a tracer of free-free \citep{Dickinson03}. Optical H$\alpha$ continuum maps can be easily related to free-free emission at radio wavelengths in regions with a small H$\alpha$ optical depth ($\tau$ $<$ 1), which is limited to the sky far from the Galactic plane. We use the H$\alpha$ map by \cite{Dickinson03} as a template for the Galactic free-free emission. This map includes small-scale fluctuations as described in \cite{Delabrouille13}.
The free-free spectrum can be well defined by a power law with a temperature spectral index $\beta=-$ 2.1 \citep{Dickinson03},
\begin{equation}
{T_{\rm{f\,f}}} \approx 10\,{\rm{mK}\,}{\left( {\frac{{{T_e}}}{{{{10}^4}{\,\rm{K}\,}}}} \right)^{0.667}}{\left( {\frac{\nu }{{{\rm{GHz}}}}} \right)^{ - 2.1}}\left( {\frac{{{I_{H\alpha }}}}{{\rm{R}}}} \right)\,,
\label{eq:free}
\end{equation}
where $T_{e}$ is the electron temperature in K and $I_{H \alpha}$ is the H$\alpha$ template, whose emission is given in Rayleigh (R).
Free-free emission flattens the spectral index of the total Galactic continuum in regions where its brightness temperature is comparable to that of the synchrotron emission.
In the BINGO observation region, the free-free brightness temperature fluctuations are $\approx$ 0.25\,mK; therefore, they are considerably weaker than the synchrotron component but significantly brighter than the \textsc{Hi} fluctuations \citep{Battye13}. The final free-free template at 1.1\,GHz is shown in Fig. \ref{fig:maps}.
\subsubsection{Anomalous microwave emission}
\label{sec:AME}
Anomalous microwave emission is diffuse Galactic radiation detected at frequencies between 10\,GHz and 60\,GHz \citep{Dickinson18}. The most widely accepted physical process for the production of AME is the spinning dust model, which proposes that the rapid rotation of the electric dipoles associated with the smallest dust grains in the interstellar medium can generate microwave frequency emission that peaks between 10 and 60\,GHz.
To simulate the AME emission, we used as a template the \textit{Planck} $\tau_{353}$ optical depth map \citep{PlanckXLVIII}. We adopted the factor 8.3 $\times$ 10$^{6}$ $\mu$K/$\tau_{353}$ to convert the dust optical depth at 353 GHz to the AME temperature at 22.8\,GHz, in units of $\mu$K \citep{Planck2016}. To scale the AME emission from 22.8\,GHz to the BINGO frequencies of $\approx$ 1\,GHz, we used the publicly available {\tt spdust2} code \citep{Silsbee11}, which calculates the spinning dust emissivity as a function of frequency for various environments of the interstellar medium.
Anomalous microwave emission temperature fluctuations are extremely weak ($\approx$ 2\,$\mu$K) at frequencies below 10\,GHz and are negligible in the frequency range of BINGO (Fig. \ref{fig:foregrounds}), but we included them for completeness. The final AME template at 1.1\,GHz is shown in Fig. \ref{fig:maps}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/foregrounds_1}
\caption{Angular power spectra of the cosmological signal and the different foreground components at $\nu$ $\approx$ 1.1 GHz.}
\label{fig:foregrounds}
\end{figure}
\subsection{Instrumental noise}
\label{sec:noise}
\subsubsection{Thermal noise}
Phase 1 of BINGO will provide a large field-of-view, and the fine-tuning in the positioning of the 28 horns will improve the overall sensitivity of the experiment. The parameters that affect the sensitivity are the survey area $\Omega_{\rm{sur}}$, the beam size $\theta_{\rm{FWHM}}$, the number of horns $n_{f}$, and the integration time $t_{\rm{obs}}$.
The r.m.s. noise per pixel is given by the well-known radiometer equation \citep[see][]{Wilson13}:
\begin{equation}
\sigma_t = \frac{T_{\rm sys}}{\sqrt{t_{\rm pix}\delta \nu }} \,,
\label{eq:noise}
\end{equation}where $\delta {\nu}$ is the frequency bin width and $t_{\rm pix}$ is the observation time per pixel, related to the total observing time by
\begin{equation}
{t_{\rm{pix}}} = {t_{\rm{obs}}}{n_f}\frac{{{\Omega _{\rm{pix}}}}}{{{\Omega _{\rm{sur}}}}}\,,
\label{eq:noise2}
\end{equation}where $\Omega_{\rm{pix}}$ is the pixel solid angle.
Equation \ref{eq:noise2} shows that increasing the field-of-view results in higher r.m.s. noise, whereas adding more horns reduces it. The compromise is determined by the balance between the cosmic variance and systematic effects, which dominate the error at large scales, and the thermal noise, which dominates at small scales. The cosmic variance error can be reduced with a larger sky coverage. The thermal noise amplitude of the instrument $\sigma _{t}$ has been calculated according to the number of horns (Eq. \ref{eq:noise2}). Table \ref{thermal_noise} presents the estimated sensitivities for the scenarios considered in this study, as well as values for other cases.
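The two relations above can be evaluated with a few lines of Python (a sketch implementing Eqs. (\ref{eq:noise}) and (\ref{eq:noise2}) only; the sensitivities quoted in Table \ref{thermal_noise} additionally fold in the duty cycle and the adopted pixelization conventions):
\begin{verbatim}
import numpy as np

YEAR_S = 365.25 * 24 * 3600.0  # one year of integration, in seconds

def sigma_thermal(t_sys_k=70.0, delta_nu_hz=9.33e6, t_obs_s=YEAR_S,
                  n_feeds=28, omega_pix_deg2=0.35,
                  omega_sur_deg2=5324.0):
    """rms thermal noise per pixel from the radiometer equation:
    t_pix = t_obs * n_f * Omega_pix / Omega_sur and
    sigma_t = T_sys / sqrt(t_pix * delta_nu).  Returns Kelvin."""
    t_pix = t_obs_s * n_feeds * omega_pix_deg2 / omega_sur_deg2
    return t_sys_k / np.sqrt(t_pix * delta_nu_hz)
\end{verbatim}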
In theory, in order to improve the constraints on the acoustic scale $k_{A}$, we require a larger field-of-view. However, a larger focal plane area likely brings with it more systematic errors. The criterion is the uncertainty on the acoustic scale $k_{A}$, related to the constraints on the measurements of the BAO features. Following the analysis carried out in \citet{Battye13}, if we consider one central redshift $z$ = 0.3 it is possible to probe the acoustic scale $k_{A}$ with a fractional error of 2.4\% with 2000 deg$^{2}$ and 50 horns. We find that increasing the number of horns from 50 to 70 and the survey area from 2000 deg$^{2}$ to 4000 deg$^{2}$ gives a measurement of the acoustic scale $k_{A}$ with an accuracy of 2.0\% (Fig. \ref{fig:accuracy}). In order to determine the optimal concept, a balance must be found between the cosmic variance and $1/f$ noise, which dominate the error at large scales, and the thermal noise, which dominates at small scales \citep{Blake_2003, Seo10}.
The latter can be reduced with a larger field-of-view and more horns. At large scales, the limitation on the accuracy of the measured power spectrum due to sample variance can be mitigated by a larger survey volume, and thus more sky coverage. However, we have to take into account the systematics induced by a larger focal plane area. Optical simulations show that moving further away from the center of the focal plane induces a slight loss of beam performance (ellipticity, gain).
In this case, the configuration with $\approx$ 50 horns and 5000 deg$^{2}$ field-of-view represents a good compromise ($\delta$ $ k_{A}$/$k_{A}$ $\approx$ 2.1\%). During Phase 1, BINGO will operate with 28 horns and we intend to add 28 more in Phase 2.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/accuracy_horns}
\caption{Uncertainty on the acoustic scale for 30, 50, and 70 feeds based on a one-year observation time and $\theta_{\rm{FWHM}}$ = 40 arcmin.}
\label{fig:accuracy}
\end{figure}
\begin{table}
\centering
\caption{Thermal noise amplitude for different numbers of horns (considering a 100\% duty cycle).}
\resizebox{9cm}{!}{
\begin{tabular}{c|c|c}
\toprule
{\bf Array - Number of feeds} & {\bf Integration time (yr)} & {\bf $\sigma_{t}$ ($\mu$K)} \\ \midrule \midrule
Double-rectangular - {\bf 28} & 1 (2) & 30 (21) \\ \midrule
Hexagonal - {\bf 31} & 1 (2) & 29 (20) \\ \midrule
Double-rectangular - {\bf 56} & 1 (2) & 15 (11) \\ \midrule
Hexagonal - {\bf 49} & 1 (2) & 16 (11) \\ \bottomrule
\end{tabular}
}
\label{thermal_noise}
\end{table}
\subsubsection{$1/f$ noise}
The term $1/f$ noise refers to the type of noise with a power spectrum density (PSD) of the form $S(f)$ $\propto$ $1/f^{\alpha}$, where the spectral index $\alpha$ identifies specific types such as pink noise \mbox{($\alpha$ = 1)} and red noise ($\alpha$ = 2). In most cases, $\alpha$ is equal to 1. The lowest frequency relevant for the noise is the inverse of the total observation time ($f$ = 1/$t_{\rm{obs}}$). Even though it is widely observed in a broad range of scientific areas, a fundamental description of $1/f$ noise is yet to be found.
In the case of BINGO, the $1/f$ noise is produced by gain fluctuations of the amplifiers and therefore it is expected to be strongly correlated across all frequency channels.
It is common in astronomy to define the PSD of a receiver contaminated with thermal and $1/f$ noise as follows: for a system with knee frequency $f_{k}$ (defined as the frequency at which the thermal and $1/f$ noise have the same PSD value) and system temperature $T_{\rm{sys}}$, the PSD of the noise fluctuations is given by the power-law model \citep{Harper18b}
\begin{equation}
S\left( {f,\omega } \right) = \frac{{T_{\rm{sys}}^2}}{{\delta \nu }}\left[ {1 + C\left( {\beta ,{N_\nu }} \right){{\left( {\frac{{{f_k}}}{f}} \right)}^\alpha }{{\left( {\frac{1}{{\omega \Delta \nu }}} \right)}^{\frac{{1 - \beta }}{\beta }}}} \right]\,,
\label{eq:psd}
\end{equation}
where $\omega$ is the Fourier mode of the spectroscopic frequency $\nu$, $\Delta\nu$ is the total bandwidth of the receiver, $C$($\beta$,$N_{\nu}$) is a normalization factor given by $(N_{\nu} - 1)/(2N_{\nu}\delta{\nu})$, $\delta{\nu}$ is the channel bandwidth, and $\alpha$ is the spectral index of the correlated noise. The spectral index $\beta$ describes a $1/f$ noise that is identical in every receiver channel ($\beta$ $\approx$ 0) or independent in every channel \mbox{($\beta$ = 1)}. For an ideal receiver, $f_{k}$ would be 0, meaning
that the receiver TOD is dominated by flat power-spectrum thermal (white) noise only. The thermal and $1/f$ noise are both simulated from independent noise realizations. The stability of the BINGO noise properties can be quantified by the variation in the
white noise, knee frequency and spectral index over the lifetime of the experiment.
The presence of $1/f$ noise in an observation map introduces stripes following the scan circle strategy \citep{Maino02}, and its main effect is to increase the uncertainty of measurements on large spatial scales. This striped structure appears because the mean level of the noise is, in general, different for each circle of measurements, as shown by \citet{Janssen96}. It is important to note the difference from the approach of companion paper V, where the details of the instrument observation strategy are not considered. Their analysis does include the $1/f$ component but not the $\beta$ factor, which accounts for correlations across the frequency channels.
In these simulations, the $1/f$ noise fluctuations are assumed to be small multiplicative variations around the system temperature and Gaussian distributed. The 1/f noise can be represented in the TOD as
\begin{equation}
\Delta T\left( {t,\nu } \right) = \delta G\left( {t,\nu } \right){T_{\rm{sys}}}\left( {t,\nu } \right),
\end{equation}
where $\Delta T\left( {t,\nu } \right)$ is the power of the $1/f$ fluctuations at time $t$ and frequency $\nu$, which is the combination of the instantaneous fluctuation in the gain $\delta{G}$ and system temperature $T_{\rm{sys}}$. The PSD of $\delta{G}$ can be described by two parts. The first part is the power spectrum of the temporal fluctuations as in Eq. \ref{eq:psd} but without the thermal noise component
\begin{equation}
P(f)=\frac 1 {\delta \nu}\left(\frac {f_k}f\right)^ \alpha,
\end{equation}
while the second component of the $1/f$ power spectrum describes the correlations of the noise in frequency, and may be described by a conservative power-law model
\begin{equation}
F\left( \omega \right) = {\left( {\frac{{{\omega _0}}}{\omega }} \right)^{\frac{{1 - \beta }}{\beta }}},
\end{equation}
where $\omega$ is the Fourier mode of the spectral frequency (i.e., the wavenumber), $\omega_{0}$ is the smallest wavenumber (1/$\Delta{\nu}$), and $\beta$ describes the frequency correlation. The simulated gain fluctuations should be interpreted as ripples across the 2D observed region.
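A common way to realize such noise in simulations is to shape Gaussian white noise in the Fourier domain. The sketch below (illustrative only: a single temporal stream, with the $\beta$-dependent correlations across frequency channels omitted) generates a time stream with a PSD proportional to $1 + (f_k/f)^\alpha$:
\begin{verbatim}
import numpy as np

def noise_timestream(n_samp, dt_s, t_sys_k=70.0, delta_nu_hz=9.33e6,
                     f_knee_hz=1e-3, alpha=1.0, seed=0):
    """Gaussian receiver noise with PSD shape 1 + (f_knee/f)**alpha,
    generated by filtering white noise in the Fourier domain."""
    rng = np.random.default_rng(seed)
    sigma = t_sys_k / np.sqrt(dt_s * delta_nu_hz)  # white rms/sample
    white = rng.normal(0.0, sigma, n_samp)
    freqs = np.fft.rfftfreq(n_samp, dt_s)
    shape = np.ones_like(freqs)
    shape[1:] = np.sqrt(1.0 + (f_knee_hz / freqs[1:])**alpha)
    return np.fft.irfft(np.fft.rfft(white) * shape, n=n_samp)
\end{verbatim}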
In Fig. \ref{fig:1/f} we show a $1/f$ noise map (with no thermal noise included) of the BINGO region using the \mbox{28 horn} double-rectangular array. When $\beta$ = 1, the frequency spectrum is entirely uncorrelated. This means that the number of modes needed to describe the $\beta$ = 1 $1/f$ noise is equal to the number of channels; therefore, removing the $1/f$ noise will be very challenging for typical component separation methods (see Sect. \ref{sec:results_p}). For the simulation in Fig. \ref{fig:1/f} we assumed a spectral index $\beta$ = 0.25 and the same value of $f_{k}$ = 1\,mHz for each receiver and for a 9.33\,MHz channel bandwidth. The index $\alpha$ is assumed to be 1, while $\beta$ and $f_{k}$ were obtained from preliminary measurements, at Jodrell Bank Observatory, of the statistical properties of the noise using data from the BINGO test correlation receiver \citep{Evans2017}. The $1/f$ noise has been filtered on timescales of $\approx$ 6 minutes, which is the time that structures with the angular scales of interest take to drift across the field-of-view of one horn. This assumes that the $1/f$ noise is fully calibrated out (e.g., using a calibration diode) on such timescales, and represents an optimistic scenario.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/f_noise.pdf}
\caption{Simulated $1/f$ noise map of the BINGO observational region. The striped structure is reduced after the filtering of the data. The scale quantifies fluctuations $\delta T$ around the average in K.}
\label{fig:1/f}
\end{figure}
A good front end for a radio telescope designed for IM with the required stability will exhibit a low knee frequency, ideally on the order of one thousandth of a hertz. The use of correlation receivers using colfets as reference sources in the BINGO telescope will allow for efficient accounting of the $1/f$ noise contribution \citep[see][]{2020_instrument}. To reduce the stripe contamination in the maps, we need to carefully measure the relative gains of the individual receivers and determine the beam contribution for each horn before combining the signals from all the beams.
\section{Observational strategy}
\label{sec:strategy}
In the radio band the natural tracer of LSSs is the 21 cm line of \textsc{Hi}, but the volume emissivity associated with this line is low, meaning that detecting individual galaxies at z $\approx$ 1 requires a very substantial collecting area. Interferometer arrays are likely to be the best approach to probing such redshifts, where an angular resolution of $\approx$ 0.1$^{\circ}$ is required. Using a single-dish, moderate-sized telescope with an ultra-stable receiver system is the lowest-cost approach to IM measurements of BAOs at low redshifts \citep{Battye13} and intermediate angular scales. The idea is to exploit the broad beam at low frequencies to carry out IM \citep{Peterson06, Masui13} and consequently measure the overall integrated \textsc{Hi} brightness temperature of a large number of galaxies, taken together as a tracer of the LSS.
Detecting signals of $\approx$ 200 $\mu$K with a non-cryogenic receiver of standard performance implies that every pixel in our intensity map requires an accumulated integration time of $>$ one day over the course of the observing campaign. The total integration time can be built up by many returns to the same patch of sky, but between these, the receiver gains need to be stable.
The Galactic emission is known to be significantly polarized. The synchrotron emission, for instance, is polarized up to 40-50\% \citep{Wolleben06}. The linearly polarized part of the Galactic emission will undergo Faraday rotation as it travels through the Galactic magnetic field and the interstellar medium.
This means that the observations need to be made with clean beam(s), with low side-lobe levels and very good polarization purity, in order not to add extra foreground degrees of freedom.
For the following analysis, we chose a declination strip that minimizes the contribution from Galactic foregrounds. We assume that the telescope will map a $\approx$ 15$^{\circ}$ declination strip centered at $\delta=$ -17.5$^{\circ}$ as the sky drifts past the telescope.
The need to resolve structures of angular sizes corresponding to a linear scale of around 150 Mpc in our chosen redshift range implies that the required angular resolution has to be about 40 arcmin.
As the Earth rotates, each BINGO horn observes a ring at a single declination set up by the instrument geometry. One complete ring is scanned in 24 hours, which is the periodicity of the sky signal. Therefore, a set of periodic rings (one per horn) of 24 hours each is a standard representation of the BINGO data. The arrangement of feed horns in the focal plane has been optimized in such a way as to cover the $\approx$ 15$^{\circ}$ declination strip and at the same time to have some redundancy, that is to say, beams have some superposition with beams of adjacent horns (as shown in Fig. \ref{fig:arranjo_28}). This will increase the signal-to-noise ratio.
The positioning of each horn is defined by two parameters. The first is the Cartesian ($x$,$y$) coordinates of the horn in the focal plane and the second is the elevation and azimuthal angles (el, az). Horns located in the outermost regions of the array are slightly tilted with respect to the focal plane to under-illuminate the secondary mirror and to reduce sidelobes and ground contaminations. The details of the BINGO optical design are given in \citet{2020_optical_design}.
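The ring structure of the resulting coverage can be illustrated with a schematic hit-map accumulator (idealized: one constant-declination ring per feed, ignoring the beam and the detailed focal-plane geometry):
\begin{verbatim}
import numpy as np
import healpy as hp

def drift_scan_hits(dec_centers_deg, nside=128, n_ra=8640):
    """Idealized drift-scan hit map: each feed sweeps one full ring
    of constant declination per sidereal day as the sky rotates."""
    npix = hp.nside2npix(nside)
    hits = np.zeros(npix)
    ra = np.linspace(0.0, 360.0, n_ra, endpoint=False)
    for dec in dec_centers_deg:
        pix = hp.ang2pix(nside, ra, np.full_like(ra, dec),
                         lonlat=True)
        np.add.at(hits, pix, 1)  # np.add.at handles repeated pixels
    return hits
\end{verbatim}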
In this work, we assume two different horn configurations from that paper: the hexagonal (Fig. \ref{fig:arranjo_31}) and the double-rectangular (Fig. \ref{fig:arranjo_28}) formats. The beams are well approximated by a Gaussian shape and are diffraction-limited.
Different observation times have been tested to understand how they affect the \textsc{Hi} signal recovery. The uniformity of the sky coverage will depend on the pixel size used.
Since our observation method relies upon the sky drifting across the focal plane, we guarantee a complete (and uniform) sky coverage for BINGO allowing for N independent lines-of-sight, with the resolution of each pixel in the final map being $\approx$ 15$^{\circ}$/N.
Both arrangements are good enough when mapping the sky using {\tt HEALPix} with $N_{\rm{side}}$ = 64 (larger pixels). This $N_{\rm{side}}$ roughly corresponds to pixels of size 54 arcmin, which avoids missing pixels. This means the effective beam of BINGO will be broader, but only by about 30\%. However, gaps will appear in the sky coverage (in the declination direction) when the resolution is increased to the real BINGO resolution $\theta_{\rm{FWHM}}$ = 40 arcmin.
Other arrangements have been analyzed, such as the ones from \citet{Battye13} and \citet{Sazy15}, or one with 60 horns placed equidistantly along the vertical axis in such a way as to cover $\approx$ 15$^{\circ}$ in total.
However, the sky coverage obtained with the above-mentioned options and a resolution $N_{\rm{side}}$ = 128 (27' pixels) leaves gaps between data stripes, with unobserved regions at constant declination.
There are three possible ways of overcoming the problem: only use {\tt HEALPix} with a maximum $N_{\rm{side}}$ = 64 to produce the maps, change the declination coverage by varying the pointing of the full focal plane, or build a focal plane with more horns. The last option is not possible for financial reasons. Changing the declination coverage can be accomplished by moving all horns vertically at different steps. Five different horn vertical positions are allowed inside the 2400\,mm tall hexagonal case. These vertical displacements should happen in such a way as to minimize the declination separation between adjacent horns, meaning vertical positions change in the focal plane as a whole (all horns should be placed in the same ``new'' position), as opposed to elevation and azimuth displacements, which may occur for individual units. The maximum displacement is $\pm$ 300\,mm for the focal plane.
We simulated the resulting BINGO coverage after displacing all the horns up and down in the focal plane every year by $\pm$ 150\,mm steps. In doing this, we generated hits maps (meaning how many times a given pixel is scanned) relative to the central channel, which corresponds to the frequency interval 1110.67--1120.00\,MHz. All maps are created with $N_{\rm{side}}$ = 128 and then degraded to BINGO resolution ($\theta_{\rm{FWHM}}$ = 40 arcmin). The ``five elevation'' summed maps are equivalent to five years of observations.
The resulting maps are shown in Fig. \ref{fig:hexagonal_sum} and Fig. \ref{fig:double_sum}. The horn repositioning strategy results in a more homogeneously covered area (we found 25\% more coverage than the fixed elevation option for the hexagonal format and 12\% more for the double-rectangular), which will allow us to obtain a more uniform signal-to-noise ratio per pixel and a better recovery of the overall \textsc{Hi} signal. Regarding the two arrangements, we found differences in terms of sky coverage and uniformity. The declination strip is $\approx$ 15$^{\circ}$ for the hexagonal array, whereas in the case of the double-rectangular it is $\approx$ 17.5$^{\circ}$ ($\approx$ 17\% larger).
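The coverage pattern described above can be emulated with a toy drift-scan model: each horn sweeps a ring of constant declination, so accumulating samples along such rings reproduces the qualitative structure of the hits maps. In the sketch below the horn declinations are illustrative placeholders, not the actual BINGO focal-plane values:
\begin{verbatim}
import numpy as np
import healpy as hp

nside = 128
hits = np.zeros(hp.nside2npix(nside))

# Placeholder horn declinations spanning ~15 deg around -17.5 deg
horn_decs = np.linspace(-25.0, -10.0, 28)              # degrees
ra = np.linspace(0.0, 360.0, 20000, endpoint=False)    # drift direction

for dec in horn_decs:
    theta = np.full_like(ra, np.radians(90.0 - dec))   # colatitude
    pix = hp.ang2pix(nside, theta, np.radians(ra))
    np.add.at(hits, pix, 1)                            # accumulate hits

print("observed sky fraction:", np.mean(hits > 0))
\end{verbatim}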
\begin{figure}[t]
\centering
\includegraphics[width=0.93\columnwidth]{figures/hexagonal_sum}
\caption{Gnomonic projection centered at $\delta$ = -17.5$^{\circ}$, RA = 0 of the BINGO sky coverage when using the hexagonal arrangement with 31 feed horns, after 5 years of mission.}
\label{fig:hexagonal_sum}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.93\columnwidth]{figures/double_sum}
\caption{Gnomonic projection centered at $\delta$ = -17.5$^{\circ}$, RA = 0 of the BINGO sky coverage when using the double-rectangular arrangement with 28 feed horns, after 5 years of mission. }
\label{fig:double_sum}
\end{figure}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=.85\textwidth]{figures/coverage_2.jpg}
\end{center}
\caption{Sky coverage for BINGO observing strategy (pixel size = 40 arcmin). The histograms represent the sky coverage obtained with the hexagonal ($gray$) and the double-rectangular ($red$) arrangement. The histograms have been obtained with five different horn positionings ($\pm$ 300\,mm, $\pm$\,150 mm and 0) considering one year of integration for each elevation. On the $y$ axis we have the number of observations relative to each pixel of the map.}
\label{fig:coverage}
\end{figure*}
The better sky coverage uniformity achieved with the double-rectangular array is clearly visible in Fig. \ref{fig:coverage}, where the minimum number of observations per pixel (hits) as a function of the covered area ({\tt HEALPix} pixel) is shown. The peaks of the hexagonal arrangement are due to feed redundancy along the scan direction (i.e., pixels seeing the same sky each day) since there are more horns aligned in cross-elevation.
It is clearly visible how different feed arrays can affect the uniformity of the sky coverage. The strategy of displacing the horns vertically, reconfiguring the focal plane, allows us to avoid gaps in the sky coverage. Better results might be obtained with larger vertical displacements, but the actual geometry of the supporting structure prevents this option.
In terms of the noise power spectrum, there is an additional impact when the observation time of the $N$ horns is not uniformly spread over a fraction of the sky. It is worth noting that the noise power spectrum of a homogeneous coverage is independent of the pixel size used to produce the map.
In the case of inhomogeneous coverage, the noise power in a spherical harmonic transform (computed from the harmonic coefficients of the noise map, excluding pixels that are not observed) is
\begin{equation}
{N_\ell } = 4\pi \times \left\langle {\sigma _t^2} \right\rangle /{N_{\rm{pix}}} = \frac{{4\pi }}{{N_{\rm{pix}}^2}} \times \sum\limits_p {\left[ {\frac{{\sigma _{\rm{noise}}^2}}{{{\tau _{\rm{obs}}}\left( p \right)}}} \right]}\,,
\label{eq:noisepow}
\end{equation}
where $N_{\rm{pix}}$ is the total number of pixels in the map, $\tau_{\rm{obs}}(p)$ is the total time spent observing pixel $p$ (summing-up the time of observation by all horns) and $\sigma_{\rm{noise}}$ is the white noise level
\begin{equation}
{\sigma _{\rm{noise}}} = \frac{{{T_{\rm{sys}}}}}{{\sqrt {\delta \nu } }}\,.
\label{eq:white}
\end{equation}
Hence, the amplitude of the noise power spectrum increases when the time distribution is not uniform. When the inhomogeneity is not too large, such an increase remains small. We found a noise level $\approx$ 7\% greater when the sky is scanned with the hexagonal horn configuration compared to the double-rectangular arrangement at $N_{\rm{side}}$ = 128. However, as the $N_{\rm{side}}$ of the maps is increased, large gaps appear and the noise level can reach values $\approx$ 70\% higher (in the extreme case where half of the pixels are observed five times more often than the other half).
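A direct transcription of Eq. \ref{eq:noisepow} makes it easy to quantify this penalty for a given coverage map; the following is a minimal sketch (the coverage in the example is a toy model, not a simulated BINGO hits map):
\begin{verbatim}
import numpy as np

def noise_power(tau_obs, sigma_noise):
    # White-noise power for inhomogeneous coverage (equation above);
    # tau_obs is the per-pixel observing time (unobserved pixels excluded).
    tau = tau_obs[tau_obs > 0]
    return 4.0 * np.pi / tau_obs.size**2 * np.sum(sigma_noise**2 / tau)

# Toy case: uniform coverage vs. half the pixels observed 5x more often
npix, total_time = 12 * 128**2, 1.0
uniform = np.full(npix, total_time / npix)
skewed = np.concatenate([np.full(npix // 2, 5.0), np.full(npix // 2, 1.0)])
skewed *= total_time / skewed.sum()
print(noise_power(skewed, 1.0) / noise_power(uniform, 1.0))  # > 1: penalty
\end{verbatim}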
In the double-rectangular configuration, two rows of detectors are shifted with respect to the original first two rows by one-quarter of the hexagon height, whereas in the hexagonal configuration the difference in horn heights between the first and second columns is half the hexagon height. This indicates we can reach a better-than-Nyquist configuration (i.e., a number of samples per beam equal to two) with the double-rectangular array by simply shifting the position of the horns once during the survey lifetime. Since the shifting occurs each year of the survey, we can obtain a map that is over-sampled in the $y$ direction compared to Nyquist sampling. This will allow us to use other techniques to extract further resolution from the maps, such as the drizzle technique applied to {\tt HEALPix} maps \citep{Paradis12}. Finally, we can say the double-rectangular array gives a better noise distribution, which is good for the quality of the data that can be expected from BINGO, but reduces the redundancy.
\section{Component separation}
\label{sec:gnilc}
\subsection{GNILC}
The signal measured by a radio telescope is the composition of cosmological signals emitted in the early Universe (e.g., CMB or cosmological \textsc{Hi} signal), astrophysical sources emitting in the late Universe (e.g., Galactic foregrounds and extragalactic point sources) and systematic noise of the instrument (e.g., thermal and $1/f$ noise). A component separation process aims to extract the signal of interest from the measured signal by evaluating the correlations of the measurements at different frequencies using physical emission models. This process is extremely important for \textsc{Hi} IM experiments, since the detected signal is typically smaller than the Galactic foreground contribution by a factor of roughly $10^{-4}$. In addition to these components, systematic contributions can also be removed during the separation process.
Several component separation techniques are available in the literature concerning foreground removal, in particular for CMB data. For instance, the principal component analysis (PCA) technique was successfully employed in the detection of \textsc{Hi} at $z$ = 0.8 by cross-correlation using the Green Bank 100 m Telescope \citep{Switzer13}.
The Generalized Needlet Internal Linear Combination \citep[{\tt GNILC};][]{Remazeilles11} is a component separation method developed for the \textit{Planck} collaboration and applied to IM experiments by \cite{Olivari16}. For this method, a data set containing the measurements of intensity (or temperature) $x_{i}(p)$ at a given frequency $i$ and at a given pixel $p$ can be represented by
\begin{equation}
{x_i}\left( p \right) = {s_i}\left( p \right) + {n_i}\left( p \right),
\label{eq:gnilc}
\end{equation}
where $s_{i}(p)$ is the map of the cosmological signal to be recovered and $n_{i}(p)$ is the map of foregrounds emission and instrumental noise.
Equation \ref{eq:gnilc} can also be written in vector form, with one entry per channel, where $n_{ch}$ is the number of channels (frequency bins):
\begin{equation}
{\textbf{x}}\left( p \right) = {\textbf{s}}\left( p \right) + {\textbf{n}}\left( p \right)\,.
\label{eq:gnilc2}
\end{equation}
In this work, the ``noise'' is taken to be the foregrounds plus the $1/f$ component. \textsc{Hi} and thermal noise are of about the same order of magnitude on some multipole scales and so, as we try to recover \textsc{Hi}, some thermal noise will be recovered as
well. In the {\tt GNILC} method a set of windows (``needlets'') is defined in harmonic space so that specific different ranges of
angular scales of the input map are isolated. Needlets are a particular construction of wavelets family that can be interpreted as
band-pass filters, $h_l^{\left( j \right)}$, in harmonic space and can be defined such that
\begin{equation}
\sum\limits_j^{} {\left[ {h_l^{\left( j \right)}} \right]^2} = 1\,,
\end{equation}
where $j$ defines an interval of multipoles. The $a_{lm}$ harmonic coefficients are filtered and transformed back into a map, so that the statistical information it contains is relative only to a certain range of angular scales. This means that, for each frequency map, there are several needlet maps. This step permits a more localized analysis of the signal. Next, the data covariance matrix \textbf{R}, whose dimensions are $n_{ch}$ $\times$ $n_{ch}$, is defined at pixel $p$ by
\begin{equation}
\textbf{R}\left( p \right) = {\textbf{R}_\textsc{Hi}}\left( p \right) + {\textbf{R}_n}\left( p \right)\,,
\end{equation}
where ${\textbf{R}_\textsc{Hi}}\left( p \right)$ = $\langle$${\textbf{s}}\left( p \right)$ ${\textbf{s}^{T}}\left( p \right)$$\rangle$ is the \textsc{Hi} covariance matrix, and ${\textbf{R}_n}\left( p \right)$ = $\langle$${\textbf{n}}\left( p \right)$ ${\textbf{n}^{T}}\left( p \right)$$\rangle$ is the covariance matrix of the noise (foregrounds plus $1/f$). The number of components representing the observation data is limited to the number of channels of the experiment, $n_{ch}$. The foreground components are frequency correlated, so that the foreground signal plus noise \textbf{n} can be represented by a linear combination of $m$ independent vectors.
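As an illustration of the needlet decomposition described above, the sketch below builds cosine-shaped band-pass windows whose squares sum to unity and applies them to a (here random, stand-in) channel map with {\tt healpy}; the band peaks are illustrative and do not correspond to the set actually adopted in our analysis:
\begin{verbatim}
import numpy as np
import healpy as hp

def cosine_bands(lmax, peaks):
    # Band-pass windows h_ell^(j) with sum_j [h_ell^(j)]^2 = 1
    # over the covered multipole range.
    ell = np.arange(lmax + 1)
    edges = [0] + list(peaks) + [lmax]
    bands = []
    for j in range(1, len(edges) - 1):
        lo, pk, hi = edges[j - 1], edges[j], edges[j + 1]
        h = np.zeros(lmax + 1)
        rise = (ell >= lo) & (ell <= pk)
        fall = (ell > pk) & (ell <= hi)
        h[rise] = np.cos(0.5 * np.pi * (pk - ell[rise]) / (pk - lo))
        h[fall] = np.cos(0.5 * np.pi * (ell[fall] - pk) / (hi - pk))
        bands.append(h)
    return bands

nside, lmax = 128, 330
channel_map = np.random.standard_normal(hp.nside2npix(nside))  # stand-in
alm = hp.map2alm(channel_map, lmax=lmax)
needlet_maps = [hp.alm2map(hp.almxfl(alm, h), nside)
                for h in cosine_bands(lmax, [30, 80, 150, 250])]
\end{verbatim}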
The signal to be recovered can be estimated by
\begin{equation}
{\rm{\hat s}} = \textbf{W}\textbf{x}\,,
\end{equation}
where \textbf{W} is the $n_{ch}$ $\times$ $n_{ch}$ weight matrix and \textbf{x} the data vector described in Eq. \ref{eq:gnilc2}. The matrix \textbf{W} minimizes the total variance of the estimated vector ${\rm{\hat s}}$, under the condition \textbf{W}\textbf{S} = \textbf{S}, so that
\begin{equation}
{\rm{\textbf{W} = \textbf{S}}}{\left( {{{\rm{\textbf{S}}}^T}{{\rm{\textbf{R}}}^{ - 1}}{\rm{\textbf{S}}}} \right)^{ - 1}}{{\rm{\textbf{S}}}^T}{{\rm{\textbf{R}}}^{ - 1}}\,,
\label{eq:gnilc3}
\end{equation}
where \textbf{S} is the estimated \textsc{Hi} plus noise mixing matrix. In order to use Eq. \ref{eq:gnilc3} to recover the signal of interest, it is necessary to estimate the \textbf{S} matrix. To achieve this aim, a theoretical power spectrum of \textsc{Hi} is used to determine the local ratio between the cosmological signal and the total observed signal.
At this point, we have an estimate for the reconstructed signal ${\rm{\hat s}}$ for each needlet scale. Finally, for each frequency channel, the needlet maps are added to give a complete {\tt GNILC} recovered \textsc{Hi} plus thermal noise map.
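The weighting of Eq. \ref{eq:gnilc3} itself is a few lines of linear algebra. Below is a minimal {\tt numpy} sketch, with random stand-ins for the estimated mixing matrix \textbf{S} and covariance \textbf{R} (in the actual method these are estimated per needlet scale and per sky region):
\begin{verbatim}
import numpy as np

def gnilc_weights(S, R):
    # W = S (S^T R^-1 S)^-1 S^T R^-1; W x preserves the wanted
    # subspace (W S = S) while minimizing the estimate variance.
    Rinv_S = np.linalg.solve(R, S)
    return S @ np.linalg.solve(S.T @ Rinv_S, Rinv_S.T)

n_ch, m = 30, 3
S = np.random.standard_normal((n_ch, m))      # stand-in mixing matrix
A = np.random.standard_normal((n_ch, n_ch))
R = A @ A.T + n_ch * np.eye(n_ch)             # stand-in SPD covariance
W = gnilc_weights(S, R)
assert np.allclose(W @ S, S)                  # constraint W S = S
\end{verbatim}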
\subsection{Results}
\label{sec:results_p}
Most of the
pipeline is written in the {\tt Python} programming language. Maps are generated using the {\tt HEALPix} pixelization scheme. The code simulates the $1/f$ noise-contaminated TODs used for generating maps. Time-ordered
data sets with correlated $1/f$ noise properties result in images containing stripes along the telescope drift directions that can dominate the astronomical signal. A set of simulations has been used to test the performance of the BINGO telescope and the quality of the component separation method.
\begin{table}
\centering
\caption{Simulation parameters.}
\resizebox{7cm}{!}{
\begin{tabular}{c||c} \toprule
{\bf Parameter} & {\bf Value} \\ \midrule \midrule
Beam resolution ($^{\circ}$) & 0.67 \\ \midrule
Observation time (yr) & 1, 2 \\ \midrule
Frequency range (MHz) & 980 - 1260 \\ \midrule
Number of feeds $n_{f}$ & 28, 56 \\ \midrule
Number of channels $n_{ch}$ & 30 \\ \midrule
Knee frequency (Hz) & 0.001 \\ \midrule
$1/f$ spectral index $\beta$ & 0.001, 0.12, 0.25, 0.6 \\ \midrule
$T_{\textsc{sys}}$ (K) & 70 \\ \bottomrule
\end{tabular}
}
\label{parameters_simul}
\end{table}
The {\tt GNILC} method is also investigated in the BINGO paper V \citep{2020_component_separation}, with contaminants generated from the PSM code. In this paper, the efficiency of {\tt GNILC} in reconstructing \textsc{Hi} plus thermal noise maps in the presence of contaminants (already described in this work) is analyzed with respect to how the $1/f$ parameters, the number of feed horns, and the observation time influence this process.
To estimate the dimension of the \textsc{Hi} plus noise subspace in its PCA step, {\tt GNILC} makes use of theoretically known \textsc{Hi} plus noise power spectra (or \textsc{Hi} plus noise template maps). The reason for using the \textsc{Hi} signal plus the instrumental thermal noise as the signal of interest is that these two emissions, for most of the current IM experiments, are roughly of the same order of magnitude for some of the scales of interest (smaller scales or higher multipoles).
Therefore, even when we try to recover the \textsc{Hi} signal alone, we end up recovering some thermal noise as well at these scales. This, however, is not an optimal reconstruction of the \textsc{Hi} plus noise signal, since {\tt GNILC} will try to remove as much noise as possible from the data. To avoid creating artifacts in the noise maps, the most efficient strategy for \textsc{Hi} IM is then to recover both the \textsc{Hi} and noise signals as one single component.
We used a galaxy mask, similar to the GAL70 Planck \textsc{Hi} mask, with a cosine apodization of $3^{\circ}$ to avoid boundary artifacts in the power spectrum estimation. All maps are generated with $N_{\rm{side}}$ = 128. Table \ref{parameters_simul} shows the instrumental parameters used for the simulations. Some values have been modified in each tested scenario, such as the observation time, the number of feed horns and the $1/f$ spectral index. The number of channels has been limited to $n_{ch}$ = 30. This is a compromise between the increase in the thermal noise amplitude (Eq. \ref{eq:noise}), computational processing time and the improvement in the {\tt GNILC} performance with an increase in the number of channels.
The {\tt GNILC} method has two parameters that must be set before the component separation is performed: the set of needlets and the internal linear combination (ILC) bias $b$. These parameters control the localization applied by {\tt GNILC} when calculating the covariance matrices. Needlets determine the localization in harmonic and real (or pixel) space. The most appropriate set of needlets for BINGO maps is the one that combines the strengths of a mild localization at low multipoles and a fine localization at high multipoles.
Figure \ref{fig:needlet} shows the set adopted in our analysis.
The ILC bias, however, is not a totally free parameter, as its increase leads to an increase in the artificial anticorrelation between the component of interest and the contaminants. It should, therefore, be made as small as possible without increasing the resulting localized area of the sky and computational processing too much.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{figures/needlets.jpg}
\caption{Particular set of needlets used in this work. A needlet is a tool that permits a certain range of multipoles (or physical scales) to be isolated for the benefit of a particular analysis or procedure.}
\label{fig:needlet}
\end{figure}
In order to test the sensitivity of the method as a function of the simulation parameters, we attempted to recover the power spectra for scenarios with a different number of feeds (double-rectangular with 28 and 56, and hexagonal with 31), different $1/f$ noise correlations, and different observation times.
To obtain a quantitative measure of the performance of the {\tt GNILC} method, we compare the results with the standard PCA method, which is commonly used in \textsc{Hi} IM simulations \citep{Sazy15, Alonso15}. The PCA method consists of the transformation of the independent maps of each frequency channel into orthogonal modes of the covariance matrix between frequencies. This method relates the foreground components to the eigenvectors with the largest variance modes, which are called the principal components. These principal components are then removed from the data.
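A minimal sketch of this PCA cleaning step is given below, with a random stand-in for the $(n_{ch}, n_{\rm pix})$ array of channel maps:
\begin{verbatim}
import numpy as np

def pca_clean(maps, n_modes=3):
    # Remove the n_modes largest-variance frequency eigenmodes from a
    # (n_ch, n_pix) array; these trace the frequency-coherent foregrounds.
    cov = np.cov(maps)                            # n_ch x n_ch covariance
    eigval, eigvec = np.linalg.eigh(cov)          # ascending eigenvalues
    fg = eigvec[:, -n_modes:]                     # principal components
    return maps - fg @ (fg.T @ maps)              # project them out

cleaned = pca_clean(np.random.standard_normal((30, 1000)), n_modes=3)
\end{verbatim}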
As a way to quantify the recovery performance of the methods, the average absolute difference between the input power spectrum $C_{\ell}^{S}$ and the reconstructed power spectrum $C_{\ell}^{R}$, normalized by the input power spectrum, is considered:
\begin{equation}
{N_{Rec}} = \frac{1}{{{N_\ell }{n_{ch}}}}\sum\limits_i^{{n_{ch}}} {\sum\limits_\ell ^{{\ell _{\rm{max}}}} {\left| {\frac{{C_\ell ^R\left( {{\nu _i}} \right) - C_\ell ^S\left( {{\nu _i}} \right)}}{{C_\ell ^S\left( {{\nu _i}} \right)}}} \right|} }\,,
\label{eq:Ngnilc}
\end{equation}
\noindent where $N_{\ell}$ is the total number of multipoles considered and $\nu_{i}$ specifies a frequency channel. The ideal scenario would be when $N_{Rec}$ = 0, indicating a perfect \textsc{Hi} plus noise map reconstruction.
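For reference, this figure of merit is a one-line reduction of the input and recovered spectra (arrays of shape $(n_{ch}, N_\ell)$); the sketch below is purely illustrative:
\begin{verbatim}
import numpy as np

def n_rec(cl_in, cl_rec):
    # Mean absolute fractional difference between input and recovered
    # spectra, averaged over channels and multipoles (equation above).
    return np.mean(np.abs((cl_rec - cl_in) / cl_in))

cl_in = np.random.uniform(1.0, 2.0, (30, 300))
print(n_rec(cl_in, 1.2 * cl_in))   # -> 0.2, i.e., N_Rec = 20%
\end{verbatim}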
We summarize, in Table \ref{tab:n_results}, the values of $N_{Rec}$ obtained for each configuration. All the results shown have been obtained with the same value of the ILC bias (0.01). For the choice of needlets, we used a set that combines, as mentioned before, the strengths from a mild localization at low multipoles and a fine localization at high multipoles. Regarding the standard PCA, the recovery of the \textsc{Hi} signal with the smallest bias is obtained when we choose the number of principal components to be equal to 3.
The power spectral density of the spectral gain fluctuations is modeled as a power law characterized by the parameter $\beta$, whose value depends heavily on the receiver setup. Small values of $\beta$ ($< 0.25$), corresponding to high frequency correlation, are preferred, as they make it easier to remove the $1/f$ fluctuations with current component separation techniques. Measurements made with a digital back-end for an earlier version of the BINGO receiver yielded $\beta \approx 0.25$, which we use as our fiducial value.
For our analysis, we considered the frequency that is closest to the middle point of the BINGO band.
\begin{figure}
\centering
\includegraphics[width=\hsize]{figures/28Horns_1year_025}
\caption{Component separation results using the simulation pipeline. $Top$: Angular power spectra for the input \textsc{Hi} plus noise signal ($black$), the {\tt GNILC} recovered \textsc{Hi} plus noise signal ($red$), and the PCA recovered \textsc{Hi} plus noise signal with three modes removed ($blue$) at $\approx$ 1.1\,GHz. For this particular channel and configuration (double-rectangular, 28 feeds, one-year observation time, and $\beta$ = 0.25), $N_{Rec}$ ({\tt GNILC}) equals 21.43$\%$. $Bottom$: Cross-correlation coefficient $r(\ell)$ among the recovered signals ({\tt GNILC} and PCA) and the input signal.}
\label{fig:gnilc1}
\end{figure}
\begin{table}
\centering
\caption{Average normalized absolute difference between the input power spectrum and the recovered power spectrum of the \textsc{Hi} plus noise signal ($N_{Rec}$) for the central frequency ($\approx$ 1.1\,GHz) with $\beta$ = 0.25. ``D.R.'' stands for ``double-rectangular'' and ``HEX'' for ``hexagonal'' arrays.}
\resizebox{9cm}{!}{
\begin{tabular}{c||ccc||c||c} \toprule
{\bf Number of feeds} & \multicolumn{3}{c||}{28 (D.R.)} & 31 (HEX) & 56 (D.R.) \\ \midrule
{\bf Integration time (yr)} & 1 & 2 & 5 & 1 & 1 \\ \midrule
{\bf $N_{Rec}$ (\%)} {\tt GNILC} & 21.43 & 20.12 & 14.31 & 21.05 & 19.6 \\ \midrule
{\bf $N_{Rec}$ (\%)} PCA & 20.37 & 18.93 & 12.87 & 20.13 & 18.2 \\ \bottomrule
\end{tabular}
}
\label{tab:n_results}
\end{table}
Figure \ref{fig:gnilc1} shows the power spectra of the input \textsc{Hi} plus noise signal and the {\tt GNILC} and PCA recovered \textsc{Hi} plus noise signal, relative to the scenario with 28 feeds and one-year observation time. In the bottom panel of Fig. \ref{fig:gnilc1} we plot the cross-correlation coefficient, defined as
\begin{equation}
r\left( \ell \right) = \frac{C_\ell ^R\left( {{\nu _i}} \right)}{C_\ell ^S\left( {{\nu _i}} \right)}.
\end{equation}
The cross-correlation coefficient is a complementary way to measure the signal recovery sensitive to scale-dependent signal loss.
Depending on the number of principal components that are removed, the PCA either underestimates the \textsc{Hi} power spectrum or is contaminated by residual foregrounds. The best result was obtained with 3 modes removed.
We can see that the recovery of the two methods is comparable throughout the entire range of multipoles $\approx$ 12 $<$ $\ell$ $<$ 330. The performance also depends on the range of angular scales. The cross-correlation coefficient is $<$ 1 throughout almost the entire range of multipoles, while on smaller scales its value increases, showing that the recovered spectrum is contaminated. The {\tt GNILC} recovered and residuals maps covering the BINGO sky region for this scenario are shown in Fig. \ref{fig:stripes}.
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=.75\textwidth]{figures/stripes2}
\end{center}
\caption{BINGO \textsc{Hi} plus noise maps with a Galactic mask applied at $\approx$ 1.1\,GHz (double-rectangular, 28 feeds, one-year integration time, and $\beta$ = 0.25). In the $top$ panel we have the input map, in the $middle$ panel the {\tt GNILC} recovered map, and in the $bottom$ panel the residuals map (temperatures are given in K).}
\label{fig:stripes}
\end{figure*}
For the three different configurations, the input power spectrum is recovered with good accuracy, with an improvement as the number of feeds is increased (Table \ref{tab:n_results}). The results show that the component separation method does not significantly affect the statistics of the wanted signal, but it underestimates the power spectrum (Fig. \ref{fig:gnilc1}). We show in Fig. \ref{fig:hist} the values of $N_{Rec}$ ({\tt GNILC}) obtained for each channel. Performances are worse at the edges of the frequency band, where there is less freedom for the method to fit for the independent components of emission without compromising the reconstruction of the wanted signal.
\begin{figure}
\centering
\includegraphics[width=\hsize]{figures/hist_28horns_1year_2}
\caption{Histogram showing the average absolute difference between the input and the recovered \textsc{Hi} power spectrum normalized by the input \textsc{Hi} power spectrum ($N_{Rec}$) obtained with {\tt GNILC} for each channel of the BINGO frequency band.}
\label{fig:hist}
\end{figure}
We finally summarize the {\tt GNILC} performance for different multipole ranges in Table \ref{tab:range} (scenario with 28 feeds and one-year observation time). The difference between the total signal and the \textsc{Hi} plus noise signal is largest in the middle part of the interval, where the Galactic foregrounds are more intense. For smaller scales ($\ell$ $>$ 200) instead, we can recover the cosmological \textsc{Hi} plus noise power spectrum with very good accuracy. This happens because in this range of scales the difference in power between the total signal and the wanted signal is smaller. It is worth noting that the {\tt GNILC} performance may be adjusted by changing the ILC bias, whose value sets the degree of localization in real space; its influence is direct when the localization in harmonic space is not fine enough. We are investigating this aspect for a future publication.
\begin{table}
\centering
\caption{$N_{Rec}$ values for different ranges of multipoles (double-rectangular, 28 feeds, one-year observation time, and $\beta$ = 0.25) obtained with {\tt GNILC} at $\approx$ 1.1 GHz.}
\label{tab:range}
\begin{tabular}{cc} \toprule
{\bf Range of multipoles} & $N_{Rec}$ \\ \midrule \midrule
15--30 & 0.18 \\ \midrule
30--60 & 0.28 \\ \midrule
60--90 & 0.19 \\ \midrule
90--120 & 0.23 \\ \midrule
120--150 & 0.30 \\ \midrule
150--180 & 0.31 \\ \midrule
180--210 & 0.31 \\ \midrule
210--240 & 0.22 \\ \midrule
240--270 & 0.14 \\ \midrule
270--300 & 0.12 \\ \midrule
300--330 & 0.08 \\ \bottomrule
\end{tabular}
\end{table}
In this context, we also used the {\tt GNILC} method to investigate the impact of $1/f$ noise correlation on the reconstruction of the \textsc{Hi} plus thermal noise signal. Table \ref{tab:1/f} shows the results when varying $\beta$. It can be noted that the degree of $1/f$ noise correlation between the receiver channels affects the recovery of the input power spectrum. According to these results, the {\tt GNILC} method is capable of recovering the spectrum, but with different performances.
Our analysis shows that, with real data, the $1/f$ noise contribution will be the most challenging contaminant to remove. Figure \ref{fig:gnilc1_corr} shows the {\tt GNILC} and PCA recovered \textsc{Hi} plus noise power spectra when there is a high degree of $1/f$ noise correlation ($\beta$ = 0.001, D.R. 28 feeds, one-year observation time). As expected, an improvement in the {\tt GNILC} performance with an increase of correlation in frequency is found. The $1/f$ noise can be reduced to significantly below the thermal noise for a one-year observation time with the same instrumental parameters described in Table \ref{parameters_simul}. Further improvements could also be achieved by using more advanced map-making codes; some discussed in the literature (e.g., \citealp{Natoli01,de_Gasperis_2016}) should be able to partially suppress the $1/f$ noise.
For the real observations, the $1/f$ noise effect in the data can be diminished in two ways: either by using very stable receivers, so that the knee frequency is reduced, or by increasing the scanning speed, so that the signal is shifted to higher frequencies. For a static telescope like BINGO, this implies that the receiver gains need to be as stable as possible.
\begin{table}
\centering
\caption{$N_{Rec}$ for different $\beta$ values (double-rectangular, 28 feeds, one-year observation time) obtained with {\tt GNILC} at $\approx$ 1.1\,GHz.}
\label{tab:1/f}
\begin{tabular}{cc} \toprule
{\bf Spectral index $\beta$} & $N_{Rec}$ \\ \midrule \midrule
0.001 & 0.12\\ \midrule
0.12 & 0.2 \\ \midrule
0.25 & 0.21 \\ \midrule
0.6 & 0.32 \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\hsize]{figures/Correlated_Fnoise_rebeamed}
\caption{Angular power spectra for the input \textsc{Hi} plus noise signal ($black$), the {\tt GNILC} recovered \textsc{Hi} plus noise signal ($red$), and the PCA recovered \textsc{Hi} plus noise signal with three modes removed ($blue$) at $\approx$ 1.1\,GHz. For this particular channel and configuration (double-rectangular, 28 feeds, one-year observation time, and $\beta$ = 0.001), $N_{Rec}$ ({\tt GNILC}) equals $\approx$ 12 $\%$. The recovery improvement, due to the increased $1/f$ noise correlation, is clear.}
\label{fig:gnilc1_corr}
\end{figure}
\section{Conclusions}
BINGO is a transit telescope designed to make the first detection of BAOs at radio wavelengths through the detection of the 21 cm \textsc{Hi} emission and, consequently, the construction of a 3D map of the total matter distribution. In this way, the telescope will provide independent cosmological data to complement optical surveys in the redshift interval $0.127 < z < 0.449$.
We have performed an optimization analysis to determine the best solution for the BINGO focal plane arrangement. Two different arrays have been considered, and other possible configurations can be found in previous works \citep{Battye13, Sazy15}. According to the results obtained in this work, the optimal solution for the BINGO focal plane design is the double-rectangular array. This design concept will use a large structure to mount an array of 28 horns, allowing the displacement of each horn along the vertical axis (elevation axis). The structure will be able to host, in the future, up to 56 horns.
In this work we have presented, for the first time, results from the application of the {\tt GNILC} method to a set of end-to-end simulations generated with the IM pipeline developed by the BINGO collaboration.
The foreground cleaning method showed satisfactory results with the parameters adopted in this work.
The simulations were carried out with $n_{ch}$ = 30 due to computational processing time constraints. This parameter has a direct influence on the efficiency of {\tt GNILC}. With more frequency channels, there is more freedom for the method to fit for the independent components of emission (the foregrounds) without compromising the reconstruction of the wanted signal. The telescope is expected to work with more channels during the data collection, and this will improve the capability of the foreground cleaning method.
In line with the results of this work, we can say that {\tt GNILC} can reconstruct the \textsc{Hi} plus noise signal, for the IM BINGO experiment, with an (absolute) accuracy of $\approx $12\% after five years of observation. This result may be improved by adding more horns to the telescope. The reconstruction is good even in the presence of systematics, such as $1/f$ noise, which was not considered in previous works \citep{Olivari16}. This has been an encouraging result since this systematic will be present, to a great extent, in the real data and is the most complex contaminant to remove. Map-making methods can help to reduce the contribution of $1/f$ noise but at present are not optimized for our \textsc{Hi} IM simulations. This is an important point because it indicates that it is preferable, where possible, to try to suppress $1/f$ noise in hardware, such as with the BINGO correlated channels, than to rely entirely on post-processing in software.
We note that the current implementation of {\tt GNILC} in our analysis relies on a simple choice of the needlet bands, without any attempt at optimizing localization inside the small sky area observed by BINGO. The set was determined with equal intervals between peaks, with prioritization for high multipoles ($\ell$ $>$ 100), and is not definitive. In a future paper, for a more accurate cosmological signal recovery, a study to optimize their quantity and their distribution will be required. We intend to optimize the needlet localization inside the sky area observed by BINGO and anticipate some improvement of {\tt GNILC} over a simple PCA since such localization will allow the number of principal components estimated by {\tt GNILC} to vary inside the BINGO area. This is expected because {\tt GNILC} estimates the number of principal components locally across the sky and, thus, allows the effective number of principal components to vary across the sky, while a PCA assumes a fixed number of principal components (e.g., three), all over the sky, which is a rough approximation given the expected variation in non-Gaussian foregrounds and signal-to-noise across the sky area observed by BINGO.
We can summarize the work presented in this paper as follows:
\begin{itemize}
\item Simulated cosmological signal with {\tt FLASK};\\
\item Foreground modeling with Galactic synchrotron, free-free, and AME;\\
\item Simulated instrumental noise (thermal noise plus $1/f$ noise);\\
\item Sky masking for different observational strategies;\\
\item Cosmological \textsc{Hi} signal recovery with {\tt GNILC};\\
\item Comparison of simulated and recovered angular power spectra.
\end{itemize}
As the telescope construction proceeds, we will also work on the improvement of this pipeline to include other component separation methods and intend to address the following items in the near future:
\begin{itemize}
\item A more careful estimation of the errors in the recovered maps. One way to accomplish this is through Monte Carlo runs of different realizations of the observed sky. The main problem here is that it will require the simulation of a large number of different foreground skies, much larger than the number of available models that exist in the literature. A solution to this problem could be the use of some toy models for the spatial and frequency distribution of the different types of foreground emissions.\\
\item Testing of different sets of needlets;\\
\item Refining the $1/f$ noise model;\\
\item Including more realistic terrestrial RFI measurements in the simulations;\\
\item Including extragalactic point radio sources maps in the simulations.
\end{itemize}
The accomplishment of these points will allow us to produce more realistic simulations, which are a valuable tool for testing and verification of the quality of the component separation steps during the BINGO data analysis.
\section*{Acknowledgements}
The BINGO project is supported by FAPESP grant 2014/07885-0.
V. L. acknowledges the postdoctoral FAPESP grant 2018/02026-0.
C.A.W. acknowledges CNPq through grant 313597/2014-6.
T.V. acknowledges CNPq grant 308876/2014-8.
F.B.A. acknowledges the UKRI-FAPESP grant 2019/05687-0, and FAPESP and USP for Visiting Professor Fellowships where this work has been developed.
E. M. acknowledges a Ph.D. CAPES fellowship.
F. V. acknowledges a CAPES M.Sc. fellowship.
C.P.N. thanks S{\~a}o Paulo Research Foundation (FAPESP) for financial support through grant 2019/06040-0.
K.S.F.F. thanks S{\~a}o Paulo Research Foundation (FAPESP) for financial support through grant 2017/21570-0.
Support from CNPq is gratefully acknowledged (E.A.).
R.G.L. thanks CAPES (process 88881.162206/2017-01) and the Alexander von Humboldt Foundation for the financial support.
A.R.Q., F.A.B., L.B., and M.V.S. acknowledge PRONEX/CNPq/FAPESQ-PB (Grant no. 165/2018).
M.P. acknowledges funding from a FAPESP Young Investigator fellowship, grant 2015/19936-1.
J.Z was supported by IBS under the project code, IBS-R018-D1.
L.S. is supported by the National Key R\&D Program of China (2020YFC2201600).
M.R. acknowledges funding from the European Research Council Grant CMBSPEC (No. 725456).
A.A.C. acknowledges financial support from the China Postdoctoral Science Foundation, grant number 2020M671611.
B. W. and A.A.C. were also supported by the key project of NNSFC under grant 11835009.
Some of the results in this paper have been derived using the HEALPix package (http://healpix.sourceforge.net) \citep{Gorski05}.
This research made use of Astropy,\footnote{http://www.astropy.org} a community-developed core Python package for Astronomy \citep{astropy:2018}, NumPy \citep{harris2020array} and Healpy \citep{Zonca2019}.
\bibliographystyle{aa}
Stars are tidally disrupted in galactic nuclei when orbital perturbations reduce their angular momentum and place them on nearly radial orbits. Once the stellar pericenter is reduced below a critical value, a strong tidal encounter with the central supermassive black hole (SMBH) destroys it on an orbital time \citep{Hills75}. Roughly half of the stellar mass falls back onto the SMBH, circularizing into an accretion disk and powering a luminous flare \citep{Rees88}.
Approximately two dozen of these tidal disruption events (TDEs) have been observed, found with a diverse mixture of optical \citep{vanVelzen+11, Cenko+12b, Gezari+12, Arcavi+14, Chorno+14}, UV \citep{Gezari+06, Gezari+08, Gezari+09}, X-ray \citep{Bade+96, KomGre99, Maksym+13}, and gamma ray \citep{Bloom+11, Levan+11, Zauderer+11, Cenko+12} detections (see \citealt{Gezari13} for a review). The mass fallback curves of these events encode information on the mass, radius, and structure of the disrupted star \citep{Lodato+09, GuiRam13}, the SMBH mass \citep{Rees88, EvaKoc89}, and, more subtly, the pericenter of disruption \citep{GuiRam13, Stone+13}. TDE light curves and spectra can in principle be used to measure these dynamical quantities, although it is currently unclear how mass fallback rates translate into luminosities. The spin of the SMBH may also be imprinted into these observables \citep{StoLoe12, Kesden12a, Lei+13, SheMat14}. Individual TDEs are valuable tools for probing SMBHs in distant galactic nuclei, but in this paper we focus on the information that can be obtained from large statistical samples of TDEs, and in particular from the rates of these events and their distributions of parameters, such as SMBH mass.
The rates of stellar TDEs are currently uncertain; typically a value $\sim 10^{-5}~{\rm yr}^{-1}$ per galaxy is inferred from X-ray (e.g.~\citealt{Donley+02}), UV (\citealt{Gezari+08}) and optically (e.g., ~\citealt{vanVelzen&Farrar14}) selected events. These observed rates are generally an order of magnitude lower than previous theoretical predictions of $\gtrsim 10^{-4}$ yr$^{-1}$ gal$^{-1}$ (e.g., the two-body relaxation calculations of \citealt{MagTre99,Wang&Merritt04}), although selection effects, small number statistics, and the possible influence of dust or photoelectric extinction could all contribute to this disagreement. The rates of TDEs accompanied by relativistic jets appear to be smaller still (e.g.~\citealt{Bower+13}, \citealt{VanVelzen+13}). Many candidate TDE flares furthermore possess much higher optical luminosities than predicted by previous models for TDE emission (e.g.,~\citealt{Gezari+12,Arcavi+14}); some of the tension in rates could thus be alleviated if the detected events represent only the brightest tail of the luminosity function. Conversely, two body relaxation is generally thought to set a conservative floor on the true TDE rate, which can be enhanced by more exotic dynamical processes. This would worsen the existing tension between observed and theoretical rates.
Fortunately, the observed sample of candidate TDEs is expanding rapidly, especially at optical frequencies, due to the advent of wide-field sensitive surveys such as Pan-STARRs (\citealt{Kaiser+02}) and the Palomar Transient Factory (soon to be Zwicky Transient Facility; \citealt{Kulkarni12}). The study of TDEs will be further revolutionized in the next decade by the Large Synoptic Survey Telescope (LSST) and by the wide-field X-ray satellite eROSITA (\citealt{Merloni+12}), which could detect hundreds or thousands of TDEs per year \citep{Gezari+08, StrQua09, vanVelzen+11, Khabibulli+14}. One potential drawback of the future LSST/eROSITA era is that the large rate of detections could overwhelm what resources are available for photometric or spectroscopic follow-up observations. This situation enhances the value of statistical studies of TDEs, such as how the TDE rate varies with galaxy type or SMBH mass.
Past theoretical work has estimated TDE rates by deprojecting high resolution surface brightness profiles for small samples of galaxies, and making simplifying assumptions to obtain stellar densities and distribution functions (\citealt{MagTre99, Wang&Merritt04}). A key conclusion of these works is that the rate of TDEs is dominated by the lowest mass galaxies that host black holes. This is due to a combination of three factors: the larger numbers of small galaxies, the negative correlation between SMBH mass and central stellar density, and the nontrivial dynamical result that the TDE rate is higher for lower SMBH masses in steeply sloped (``cuspy'') galaxies\footnote{\citet{Wang&Merritt04} find that in cuspy stellar profiles, the TDE rate scales roughly inversely with SMBH mass.}. Although the intrinsic TDE rate is thus likely to be highest among the smallest extant SMBHs, this is not necessarily true of the {\it detected} TDE rate, because the latter also depends on how the TDE luminosity in a given waveband scales with the SMBH mass.
In this paper, we update past theoretical estimates of stellar tidal disruption rates, using the newest calibration of the $M_\bullet-\sigma$ relation and a much larger sample of galaxy surface brightness profiles than was employed in the past ($\S\ref{sec:TDErates}$). We examine the robustness of the TDE rate to a number of uncertainties, including the stellar mass function, the SMBH mass function, choices of galaxy scaling relations, and the (somewhat arbitrary) choice of surface brightness parametrization. Whenever possible, we make a conservative choice of assumptions, to test the robustness of the disagreement between theory and observation. Our main conclusions are that the poorly constrained occupation fraction of SMBHs in low mass galaxies represents the largest current uncertainty in the intrinsic TDE rate, and that the tension between theory and observation is persistent.
We briefly overview the observable quantities of interest for TDEs (\S \ref{sec:observables}), and then translate our volumetric rates into detection rates by future surveys ($\S\ref{sec:results}$), considering several different models for the optical light curves of TDEs. An analytic parameterization of the SMBH occupation fraction is used to clarify how TDE samples can be used to constrain the ubiquity of low mass SMBHs. We also estimate for the first time the relative abundances of deeply penetrating and grazing tidal disruptions. We next discuss our results ($\S\ref{sec:discussion}$) in the context of the current TDE sample and describe possible resolutions to the ``rates dilemma" discussed above. Finally ($\S\ref{sec:conclusions}$), we provide a bulleted summary of our conclusions.
Appendix \ref{sec:analytic} provides a derivation of closed-form analytic expressions for several theoretical quantities of interest (e.g. orbit-averaged diffusion coefficients and per-energy flux of stars into the loss cone) in limiting regimes. In Appendix \ref{sec:optical} we review four different optical emission mechanisms, and in several cases update or improve existing theoretical models for these. Our full results are tabulated in Appendix \ref{sec:fullResults}.
\section{TDE Rates}
\label{sec:TDErates}
Although many different dynamical processes may contribute to observed rates of tidal disruption, the most robust and ubiquitous is the collisional two-body relaxation of stars into the phase space ``loss cone,'' the region of $\{\vec{x}, \vec{v}\}$ space where orbital pericenters $r_{\rm p}$ are less than the tidal radius $r_{\rm t}$ and stars are destroyed on a dynamical time (e.g.~\citealt{Alexander12} for a review).
Other processes may contribute to, and in some cases, dominate, observed TDE rates. Secular resonances in the vicinity of an SMBH may lead to ``resonant relaxation'' of angular momentum \citep{RauTre96}, although this is likely a subdominant contributor to the total TDE rate \citep{RauIng98, HopAle06, Madiga+11}. Massive perturbers, such as intermediate mass black holes (IMBHs), giant molecular clouds, or infalling globular clusters, can strongly perturb stellar orbits and lead to rapid refilling of an empty loss cone \citep{Perets+07}. Analogous dynamical processes in the vicinity of an SMBH binary can temporarily enhance the TDE rate \citep{Ivanov+05, Chen+09, Chen+11, StoLoe11}, potentially by several orders of magnitude, but SMBH mergers are sufficiently rare that this channel nonetheless contributes subdominantly to the total TDE rate \citep{WegBod14}. Finally, in nonspherical stellar systems, non-conservation of angular momentum allows stars to ergodically explore a large portion of phase space, and some will wander into the loss cone even absent collisional relaxation. This modestly enhances TDE rates in axisymmetric systems \citep{MagTre99, VasMer13} and can increase them dramatically in triaxial ones \citep{MerPoo04}. In general, all of these processes involve much greater observational and theoretical uncertainty than does simple two-body relaxation, and we therefore neglect them in the remainder of this paper. Neglecting these additional processes is in part justified by the fact that the observed TDE rate is already significantly less than the minimum rate estimated from two-body relaxation alone.
TDE rates in spherical star clusters containing massive black holes were first estimated in an analytic way \citep{FraRee76}, which was quickly supplemented by semi-analytic calculations \citep{LigSha77, Cohn&Kulsrud78} treating angular momentum diffusion as a Fokker-Planck process. The results of these semi-analytic calculations were used to calibrate analytic approximations, which enable accurate calculation of the TDE rate using integrals over moments of the stellar distribution function \citep{MagTre99, Wang&Merritt04}. Increasingly, direct N-body simulations are used to estimate tidal disruption rates \citep{Brocka+11, VasMer13, Zhong+14}. A detailed comparison of diffusion in N-body simulations finds good agreement with the Fokker-Planck approximation \citep[Fig. 7]{VasMer13} except in regions inside the SMBH influence radius, where resonant relaxation may enhance diffusion coefficients above their two-body values. In the following subsections, we follow the analytic prescriptions of \citet[hereafter WM04]{Wang&Merritt04} to compute TDE rates in a large sample of observed galaxies.
\subsection{Galaxy Sample and Parametrization}
Unfortunately, it is not feasible to measure the distribution function of stars in distant galactic nuclei. Instead, 2D surface brightness profiles $I(R)$ are measured as a function of the projected radial distance from the center of light $R$. Under various assumptions, these can be used to determine the 3D stellar density profile and the implied phase space distribution ($\S\ref{sec:losscone}$).
Observed isophotes can be fit by many different parametrizations. These include the Sersic model \citep{Sersic68},
\begin{equation}
I_{\rm S}(R) = I_{\rm S}(0) \exp(-b_{\rm n}(R/R_{\rm e})^{1/n}),
\end{equation}
where $I_{\rm S}(0)$ is the central intensity, $R_{\rm e}$ is the galaxy half-light radius, $n$ is a parameter encoding the curvature of the profile, and $b_{\rm n}$ is a constant given by the solution to the equation $2\Gamma (2n, b_{\rm n}) = \Gamma (2n),$ where $\Gamma(x)$ and $\Gamma(x, y)$ are complete and incomplete\footnote{Specifically, we define $\Gamma (x, y)=\int_0^y t^{x-1} \exp(-t)dt$.} Gamma functions, respectively.
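The constant $b_{\rm n}$ has no closed form, but it is a one-line root-find; a minimal sketch in {\tt Python}, noting that {\tt scipy}'s {\tt gammainc} is the regularized lower incomplete Gamma function, so the condition $2\Gamma (2n, b_{\rm n}) = \Gamma (2n)$ reads ${\tt gammainc}(2n, b_{\rm n}) = 1/2$:
\begin{verbatim}
from scipy.optimize import brentq
from scipy.special import gammainc   # regularized lower incomplete Gamma

def b_n(n):
    # Solve 2*Gamma(2n, b) = Gamma(2n); with the regularized function
    # this is gammainc(2n, b) = 1/2. Bracket uses b_n ~ 2n - 1/3.
    return brentq(lambda b: gammainc(2.0 * n, b) - 0.5, 1e-6, 4.0 * n + 20.0)

print(b_n(4.0))   # ~7.669 for a de Vaucouleurs n = 4 profile
\end{verbatim}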
A more complex variant of this is the core-Sersic model \citep{Graham+03}, given by
\begin{equation}
I_{\rm CS}(R) = \tilde{I}_{\rm CS}(1+(R_{\rm b}/R)^\alpha)^{\Gamma/\alpha} \exp(-b((R^\alpha + R_{\rm b}^\alpha)/R_{\rm e}^\alpha)^{1/(n\alpha)}).
\end{equation}
This profile behaves like a standard Sersic profile at radii $R\gg R_{\rm b}$, but interior to this break radius it obeys a power law with slope $\Gamma$. The sharpness of the transition at $R_{\rm b}$ is mediated by the parameter $\alpha$, while the normalization is given by $\tilde{I}_{\rm CS} = 2^{-\Gamma/\alpha} I_{\rm b} \exp(2^{1/(n \alpha)} b (R_{\rm b}/R_{\rm e})^{1/n})$, where $I_{\rm b}$ is the surface brightness at the break radius. We will only consider the $\alpha=\infty$ limit; if we make the further (reasonable) assumption that $R_{\rm b} \ll R_{\rm e}$, then $b$ is a constant given by the solution of $
\Gamma(2n)+\Gamma(2n, b(R_{\rm b}/R_{\rm e})^{1/n}) = 2\Gamma(2n, b)$.
The final parametrization that we consider is the ``Nuker'' profile \citep{Lauer+95}, which is essentially a broken power law with inner slope $\gamma$ and outer slope $\beta$:
\begin{equation}
I_{\rm N}(R) = I_{\rm b} 2^{(\beta - \gamma)/\alpha} (R/R_{\rm b})^{-\gamma} \left( 1 + (R/R_{\rm b})^\alpha \right)^{-(\beta-\gamma)/\alpha}.
\label{eq:nuker}
\end{equation}
Our primary galaxy sample is from \citet{Lauer+07a}, who provide distances, luminosities $L_{\rm gal}$, and a complete set of Nuker parameters for 219 galaxies, of which 137 also have tabulated half-light radii $R_{\rm e}$. Tabulated velocity dispersions $\sigma$ for every galaxy in this sample are taken from \citet{Lauer+07b}. Moreover, 21 galaxies in this sample overlap with the tabulated Sersic and core-Sersic parameters given in \citet{Trujil+04}, allowing us to perform a detailed comparison between the TDE rate calculated from these three $I(R)$ parametrizations. When $R_{\rm e}$ is available, we calculate the mass-to-light ratio as
\begin{equation}
\Upsilon = \frac{2\sigma^2R_{\rm e}}{GL_{\rm gal}},
\label{eq:upsilon}
\end{equation}
but when $R_{\rm e}$ is not available we instead use the empirically derived galaxy scaling relationship \citep{Magorr+98}
\begin{equation}
\frac{\Upsilon_{V}}{\Upsilon_\odot} = 4.9 \left( \frac{L_{\rm V}}{10^{10}L_\odot} \right)^{0.18}.
\end{equation}
Here $L_{\rm V}$ is the {\it V}-band luminosity; this can be calculated from the {\it V}-band absolute magnitudes tabulated in \citet{Lauer+07a}.
Whenever possible, SMBH masses are computed for all galaxies in our sample using the $M_\bullet-\sigma$ relation,
\begin{equation} M_{\bullet} = M_{200}(\sigma/200\,{\rm km\,s^{-1}})^{p},
\label{eq:msigma}
\end{equation}
where in most cases we adopt the recent calibration of \citet{McConnell&Ma13} ($M_{200} = 10^{8.32}M_{\odot}; p = 5.64$). For the sake of comparison to WM04, we also consider the older relation of \citet{MerFer01}, where $M_{200} = 10^{8.17}M_{\odot}$ and $ p = 4.65$ (technically, this is the updated version of the \citealt{MerFer01} result used in WM04). However, $\approx 10\%$ of our galaxies lack tabulated $\sigma$ values; for these we estimate SMBH masses using the $M_\bullet - L_{\rm V}$ relation of \citet{McConnell&Ma13}.
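For concreteness, Eq. \ref{eq:msigma} with either calibration is a one-line function; the sketch below (in {\tt Python}, purely illustrative) returns the SMBH mass in solar masses for $\sigma$ in km s$^{-1}$:
\begin{verbatim}
def m_bh(sigma, m200=10**8.32, p=5.64):
    # M-sigma relation; defaults: McConnell & Ma (2013).
    # Use m200=10**8.17, p=4.65 for the Merritt & Ferrarese (2001)
    # calibration adopted for comparison with WM04.
    return m200 * (sigma / 200.0) ** p

print(m_bh(100.0))   # ~4e6 Msun for a sigma = 100 km/s spheroid
\end{verbatim}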
We note that many low-mass galaxies possess nuclear star clusters, which are specifically excluded from the Nuker power-law fits in \citet{Lauer+07a}. Neglecting these central overdensities is a conservative choice in our two-body rate estimates, which would in general be enhanced by the presence of additional central density.
\subsection{Loss Cone Dynamics}
\label{sec:losscone}
A star of mass $M_\star$ and radius $R_\star$ is tidally disrupted if it passes within a tidal radius,
\begin{equation}
r_{\rm t} = R_\star \left( \frac{M_\bullet}{M_\star} \right)^{1/3},
\label{eq:rt}
\end{equation}
of an SMBH with mass $M_\bullet$. Geometrically this criterion can be thought of as creating a loss cone in the six dimensional phase space of stellar orbits. The loss cone is defined as the set of orbits with pericenters $r_{\rm p} < r_{\rm t}$, a condition which translates into a specific angular momentum $J$ less than the loss-cone value $J_{\rm LC} = \sqrt{2GM_{\bullet} r_{\rm t}}$ if one assumes a spherical potential and considers only highly eccentric orbits. In calculating the TDE rate we also demand that $r_{\rm p} > 2r_{\rm g}$ so as not to count stars swallowed whole, where $r_{\rm g} = GM_{\bullet}/c^{2}$ is the black hole gravitational radius. This criterion is appropriate for non-spinning SMBHs; corrections due to SMBH spin are modest \citep{Kesden12b} for black holes significantly beneath the Hills mass, $M_{\rm H}^{2/3}=R_{\star}c^2/(2GM_{\star}^{1/3})$. As we shall see, these heavier SMBHs contribute only marginally to the total TDE rate, so we neglect spin corrections to the Hills mass and tidal radius for the remainder of this paper.
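These two scales are simple to evaluate; the following sketch (cgs units, illustrative only) computes the tidal radius of Eq. \ref{eq:rt} and the Hills mass for a sun-like star:
\begin{verbatim}
G, c = 6.674e-8, 2.998e10            # cgs
Msun, Rsun = 1.989e33, 6.957e10

def r_tidal(M_bh, M_star=Msun, R_star=Rsun):
    # Tidal radius r_t = R_star (M_bh / M_star)^(1/3)
    return R_star * (M_bh / M_star) ** (1.0 / 3.0)

def hills_mass(M_star=Msun, R_star=Rsun):
    # SMBH mass above which r_t < 2 r_g and the star is swallowed whole:
    # M_H^(2/3) = R_star c^2 / (2 G M_star^(1/3))
    return (R_star * c**2 / (2.0 * G * M_star ** (1.0 / 3.0))) ** 1.5

print(hills_mass() / Msun)           # ~1e8 Msun for a sun-like star
\end{verbatim}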
Giant stars can be disrupted by much larger SMBHs, but their greatly increased tidal radii also greatly reduce the rate of mass fallback onto (and thus luminosity of) the black hole. Furthermore, the long timescales associated with these events can make them difficult to capture in a time-limited transient survey. For these reasons we neglect giant disruptions in this paper, but we note that the rates of these events can be comparable to main sequence TDE rates \citep{SyeUlm99, MacLeo+12}, and of course dominate the total TDE rate in galaxies with $M_\bullet > M_{\rm H}$.
Assuming spherical symmetry, the observed surface brightness profile can be deprojected into a 3D stellar density profile $\rho_{\star}(r)$ using an Abel inversion,
\begin{equation}
\rho_\star(r) = -\frac{\Upsilon}{\pi}\int_r^\infty \frac{{\rm d}I}{{\rm d}R}\frac{{\rm d}R}{\sqrt{R^2-r^2}},
\end{equation}
where $\Upsilon$ is the mass-to-light ratio from Eq.~\eqref{eq:upsilon}. Under the same assumptions, the total gravitational potential is calculated according to\footnote{We adopt the stellar dynamics definition of the potential, which is the negative of the more commonly used definition.}
\begin{equation}
\psi(r) = \frac{GM_\bullet}{r} + \frac{GM_\star(r)}{r} + 4\pi G \int_r^\infty \rho_\star(r') r' {\rm d}r',
\end{equation}
where $M_\star(r)$ is the stellar mass enclosed within a radius $r$.
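Numerically, the Abel integral is most conveniently evaluated after the substitution $R = \sqrt{r^2 + t^2}$, which removes the integrable singularity at the lower limit. A minimal sketch, using a toy Nuker-like profile and a finite-difference derivative (illustrative, not our production code):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def I_nuker(R, Ib=1.0, Rb=1.0, alpha=2.0, beta=4.0, gamma=0.5):
    # Nuker surface brightness profile with toy parameters
    return (Ib * 2.0 ** ((beta - gamma) / alpha) * (R / Rb) ** (-gamma)
            * (1.0 + (R / Rb) ** alpha) ** (-(beta - gamma) / alpha))

def rho_star(r, Upsilon=1.0, eps=1e-6):
    # Abel deprojection with R = sqrt(r^2 + t^2), so that
    # dR / sqrt(R^2 - r^2) = dt / R and the integrand is regular.
    def integrand(t):
        R = np.sqrt(r * r + t * t)
        dIdR = (I_nuker(R + eps) - I_nuker(R - eps)) / (2.0 * eps)
        return dIdR / R
    val, _ = quad(integrand, 0.0, np.inf)
    return -Upsilon / np.pi * val

print(rho_star(0.1))   # steepens to ~ r^-(gamma+1) at small radii
\end{verbatim}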
From the stellar density profile and $\psi(r)$, the stellar distribution function (DF) $f(\epsilon)$ is calculated using Eddington's formula,
\begin{equation}
f(\epsilon) = \frac{1}{8^{1/2}\pi^2 M_\star} \frac{{\rm d}}{{\rm d}\epsilon} \int_0^\epsilon \frac{{\rm d}\rho_\star}{{\rm d}\psi} \frac{{\rm d}\psi}{\sqrt{\epsilon - \psi}},
\label{eq:Eddington}
\end{equation}
where we have made the additional assumption of isotropic velocities and here $\epsilon$ is the negative of the specific orbital energy. The use of Eddington's formula is only justified if it produces a positive-definite value of $f(\epsilon)$. When this is not the case, it indicates that the assumption of velocity isotropy is incompatible with $\rho_\star (r)$. We discard such galaxies from our sample; in practice this occurs for a relatively small number of galaxies with very flat interior slopes ($\gamma \approx 0$ in the Nuker parametrization).
The DF can be used to calculate the orbit-averaged angular momentum diffusion coefficient for highly eccentric orbits,
\begin{equation}
\bar{\mu}(\epsilon) = \frac{2}{P(\epsilon)} \int_{r_{\rm p}}^{r_{\rm a}} \frac{{\rm d}r}{v_{\rm r}(r)} \lim_{R\rightarrow0} \frac{\langle ( \Delta R)^2 \rangle}{2R},
\end{equation}
where $R \equiv J^{2}/J_{\rm c}^{2}$ and $J_{\rm c}(\epsilon)$ is the angular momentum of a circular orbit with energy $\epsilon.$ Here, $r_{\rm p}$, $r_{\rm a}$, $P(\epsilon) = 2\int_0^{r_{\rm a}(\epsilon)} $d$r/\sqrt{2(\psi - \epsilon)}$, and $v_{\rm r}$ are the pericenter radius, apocenter radius, orbital period, and radial velocity of the orbit. The local diffusion coefficient
\begin{equation}
\lim_{R\rightarrow0} \frac{\langle ( \Delta R)^2 \rangle}{2R} = \frac{32 \pi^2 r^2 G^2 \langle M_\star^2 \rangle \ln \Lambda}{3J_{\rm c}^2(\epsilon)} \left( 3I_{1/2}(\epsilon) - I_{3/2} (\epsilon) + 2 I_0 (\epsilon) \right),
\label{eq:diffusion}
\end{equation}
can be expressed in terms of the DF moments
\bea
I_0(\epsilon) \equiv & \int_0^\epsilon f(\epsilon ') {\rm d}\epsilon ' \\
I_{n/2}(\epsilon) \equiv & \left[2(\psi(r) - \epsilon) \right]^{-n/2} \\
& \times \int_\epsilon^{\psi(r)} \left[2(\psi(r) - \epsilon ') \right]^{n/2} f(\epsilon ') {\rm d}\epsilon ', \notag
\eea
where ln$\Lambda \approx \ln (0.4 M_\bullet / M_{\star})$ is the Coulomb logarithm \citep{SpiHar71} and
\begin{equation}
\langle M_{\star}^{2} \rangle \equiv \int \frac{dN_{\star}}{dM_{\star}}M_{\star}^{2}dM_{\star}
\label{eq:Mstardiff}
\end{equation}
averages the contributions of different stars to orbital diffusion over the stellar mass distribution ${\rm d}N_\star/{\rm d}M_\star$. For most mass functions, the largest stars generally dominate the diffusion coefficients.
The flux of stars that scatter into the loss cone per unit time and energy is given by:
\begin{equation}
\mathcal{F}(\epsilon){\rm d}\epsilon = 4\pi^2 J_{\rm c}^2(\epsilon) \bar{\mu}(\epsilon) \frac{f(\epsilon)}{\ln R_0^{-1}}{\rm d}\epsilon,
\label{eq:flux}
\end{equation}
where $R_{0}(\epsilon)< R_{\rm LC}(\epsilon)$ defines the angular momentum below which no stars remain. $R_{0}(\epsilon)$ generally resides inside the nominal loss cone because stars can scatter into and out of the $R < R_{\rm LC}$ parts of phase space during a single orbit. \citet{Cohn&Kulsrud78} show that
\begin{equation}
R_0(\epsilon) = R_{\rm LC}(\epsilon)
\begin{cases}
\exp (-q), &q>1 \\
\exp (-0.186q-0.824q^{1/2}), &q<1,
\end{cases}
\end{equation}
where
\begin{equation}
q(\epsilon) = \bar{\mu}(\epsilon) \frac{P(\epsilon)}{R_{\rm LC}(\epsilon)},
\label{eq:q}
\end{equation}
is the dimensionless ratio of the per-orbit change in $R$ to its loss cone value.
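The boundary condition above admits a direct numerical transcription (a sketch; the function name is ours):
\begin{verbatim}
import numpy as np

def R0_over_RLC(q):
    # Cohn & Kulsrud (1978) fit: q > 1 is the pinhole limit,
    # q < 1 the diffusive limit
    q = np.asarray(q, dtype=float)
    return np.where(q > 1.0, np.exp(-q),
                    np.exp(-0.186 * q - 0.824 * np.sqrt(q)))
\end{verbatim}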
Physically, $q(\epsilon)$ can be thought of as demarcating two different regimes of loss cone refilling. When $q \gg 1$ ($R_0 \ll R_{\rm LC}$), as applies to orbits far from the SMBH, stars wander in and out of the loss cone many times during the course of a single orbit (the so-called ``pinhole'' limit). Very near the SMBH, $q \ll 1$ ($R_0 \approx R_{\rm LC}$) and stars instead diffuse into the loss cone over many orbits (the so-called ``diffusion'' limit). The observational consequences of this are potentially of interest; pinhole-dominated galaxies are capable of producing TDEs with large penetration parameter $\beta \equiv r_{\rm t}/r_{\rm p}$. In particular, the rate of events with penetration parameter exceeding $\beta$ scales as $\dot{N}_{\rm TDE}(>\beta) \propto \beta^{-1}$ in the pinhole limit. For diffusion-dominated galaxies, however, virtually all TDEs will have $\beta \approx 1$.
In practice, $\mathcal{F}(\epsilon)$ is a very sharply peaked function, and the vast majority of flux into the loss cone comes from energies near $\epsilon_{\rm crit}$, where $q(\epsilon_{\rm crit})\equiv 1$. In coordinate terms this corresponds to a critical radius from the SMBH, $\psi(r_{\rm crit}) \equiv \epsilon_{\rm crit}$, that sources most TDEs in a given galaxy. For the lower-mass galaxies that produce observable TDEs, the SMBH influence radius\footnote{Defined here as the radius that contains a mass in stars equal to $M_\bullet$.} $r_{\rm infl} \sim r_{\rm crit}$. However, this is largely a coincidence: as we show in Appendix \ref{sec:fullResults}, when $M_\bullet \gtrsim 10^8 M_\odot$, $r_{\rm crit}$ typically exceeds $10\,r_{\rm infl}$.
We apply the above procedure to all Nuker galaxies in our sample, discarding only those which cannot be spherically deprojected ($\gamma<0$) and those whose DFs are incompatible with the assumption of isotropic velocities ($\gamma \lesssim 0.05$, although this criterion varies from galaxy to galaxy). The TDE rate per galaxy is calculated by integrating the total flux into the loss cone, $\dot{N}_{\rm TDE} = \int \mathcal{F}(\epsilon) {\rm d}\epsilon$, for stars of a given mass, and then by integrating over the stellar mass function ($\S\ref{sec:PDMF}$). Although the functions of interest to this calculation (e.g. $f(\epsilon)$, $q(\epsilon)$, $\mathcal{F}(\epsilon)$) are in general too complex to be written in closed form, in Appendix \ref{sec:analytic} we derive analytic expressions for them in limiting regimes. These expressions are useful for checking the results of numerical integration.
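Schematically, this final step reduces to a few lines (a sketch with illustrative helper names, where \verb|Jc2|, \verb|mu_bar|, \verb|f_eps|, and \verb|R0| stand for the tabulated quantities defined above, following Eq.~\eqref{eq:flux}):
\begin{verbatim}
import numpy as np

def loss_cone_flux(Jc2, mu_bar, f_eps, R0):
    # F(eps) per unit energy, as in the flux equation above
    return 4.0 * np.pi**2 * Jc2 * mu_bar * f_eps / np.log(1.0 / R0)

def tde_rate(eps, Jc2, mu_bar, f_eps, R0):
    # N_dot for stars of a single mass: integrate F(eps) over energy
    return np.trapz(loss_cone_flux(Jc2, mu_bar, f_eps, R0), eps)
\end{verbatim}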
\subsection{Stellar Mass Function}
\label{sec:PDMF}
In addition to the properties of the galaxy, the TDE rate depends on the present-day mass function ${\rm d}N_{\star}/{\rm d}M_{\star}$ (PDMF) of stars. \citet{MagTre99} considered a PDMF resulting from a Salpeter initial mass function (IMF)
\begin{equation}
\chi_{\rm Sal} = \left.\frac{{\rm d}N_{\star}}{{\rm d}M_{\star}}\right|_{\rm Sal} =
\begin{cases}
0.046(M_\star/M_{\odot})^{-2.35}, & M_\star^{\rm min}<M_\star<M_\star^{\rm max} \\
0, & {\rm otherwise},
\end{cases}
\end{equation}
truncated at a maximum mass $M_\star^{\rm max} = M_{\odot}$, where $M_\star^{\rm min}=0.08M_{\odot}$. The upper truncation was chosen to approximate an old stellar population, and we keep it for the same reason. A further motivation for this upper truncation is that it is a conservative choice with regard to the total TDE rate, for reasons described below. We also consider a second PDMF, derived by applying the same $1M_\odot$ cutoff to the Kroupa IMF, viz.~
\begin{equation}
\chi_{\rm Kro} = \left.\frac{{\rm d}N_{\star}}{{\rm d}M_{\star}}\right|_{\rm Kro} =
\begin{cases}
0.98(M_\star/M_{\odot})^{-1.3}, & M_\star^{\rm min}<M_\star<0.5 M_\odot \\
2.4(M_\star/M_{\odot})^{-2.3}, & 0.5M_\odot<M_\star<M_\star^{\rm max} \\
0, & {\rm otherwise},
\label{eq:Kroupa}
\end{cases}
\end{equation}
where $M_\star^{\rm min}$ and $M_\star^{\rm max}$ take the same values as for the Salpeter PDMF.
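For reference, the truncated Kroupa PDMF and the $\langle M_\star^2 \rangle$ moment of Eq.~\eqref{eq:Mstardiff} can be evaluated as follows (a sketch; masses in solar units, normalizations as quoted above):
\begin{verbatim}
import numpy as np

def chi_kroupa(m, m_min=0.08, m_max=1.0):
    # dN/dM for the truncated Kroupa PDMF
    m = np.asarray(m, dtype=float)
    chi = np.where(m < 0.5, 0.98 * m**-1.3, 2.4 * m**-2.3)
    return np.where((m > m_min) & (m < m_max), chi, 0.0)

m = np.linspace(0.081, 0.999, 4000)
chi = chi_kroupa(m)
m2_moment = np.trapz(chi * m**2, m)  # <M_star^2> weighting the diffusion
m1_moment = np.trapz(chi * m, m)     # normalization, dominated by small m
\end{verbatim}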
As compared to a monochromatic distribution of $M_\star = M_\odot$ stars, including a realistic PDMF increases the TDE rate due to the greater number of sub-solar mass stars, but decreases it because of the reduced angular momentum diffusion coefficients $\bar{\mu}\propto \langle M_{\star}^{2}\rangle$ (Eq.~\ref{eq:Mstardiff}). For high mass black holes, the TDE rate is also reduced because of the smaller tidal radii $r_{\rm t} \propto R_{\star}M_{\star}^{-1/3} \propto M_{\star}^{0.47}$ (Eq.~\ref{eq:rt}) of low mass stars, which reduce the Hills mass $M_{\rm H}$. Here we have used the fact that $R_\star \propto M_\star^{0.8}$ on the lower main-sequence.
Both the Salpeter and Kroupa PDMFs give comparable rate enhancements (relative to the monochromatic stellar population) of 1.63 and 1.53, respectively. Interestingly, TDE rates depend on the age of the nuclear stellar population, as the diffusion coefficients are largely set by the most massive extant stars. If we extend the Kroupa IMF to values of $M_\star^{\rm max}/M_\odot=\{2, 5, 10 \}$ we find enhancements (again relative to a monochromatic $M_\star=M_\odot$ stellar population) of $\{1.91, 2.75, 3.80\}$, corresponding to stellar populations of age $t_{\rm age}/{\rm yr} = \{1.77 \times10^9, 1.79\times 10^8, 3.16\times 10^7 \}$.
In addition to main sequence stars, scattering by stellar remnants (white dwarfs, neutron stars, and stellar mass black holes) may contribute to the TDE rate; white dwarfs can also themselves be disrupted by smaller massive black holes.\footnote{A white dwarf can be disrupted by a SMBH with a mass as high as $\sim 10^{6} M_{\odot}$ if the SMBH is nearly maximally spinning.} White dwarfs and neutron stars, being both less common and comparable in mass to solar type stars, make little difference for the total rate of angular momentum diffusion. Stellar mass black holes, on the other hand, possess considerably greater masses than the Sun and hence could contribute if their number densities are sufficiently high.
To explore the possible influence of stellar mass black holes, we calculate the enhancement to the diffusion coefficients from inclusion of the black hole mass functions in \citet[Fig. 1]{Belczy+10}. We consider three different mass functions for stellar remnants, corresponding to metallicities $Z=\{Z_\odot, 0.3Z_\odot, 0.01Z_\odot\}$. All three tabulated mass functions (Belczynski, private communication) have slightly different minimum masses, $M_{\rm SN} \approx 7M_\odot$, for the onset of supernovae. Stars drawn from the IMF with zero-age-main-sequence masses $1M_\odot < M_{\rm ZAMS} < M_{\rm SN}$ are assigned final masses of $0.5M_\odot$; these white dwarf stars are of minimal importance for the diffusion coefficients. We find that in all three models, the stellar mass black holes dominate the total relaxation rate, with the high, medium, and low metallicity cases increasing diffusion coefficients by factors of 1.3, 2.8, and 4.9, respectively (for a Kroupa IMF). The total loss cone flux increases by roughly the same amount, because the normalization of the PDMF is relatively unchanged (i.e. the black holes dominate $\int M_\star^2 {\rm d}N_\star$ but not $\int M_\star {\rm d}N_\star$).
This enhancement to the TDE rate can be prevented if mass segregation moves most black holes inward from the critical radius that dominates flux into the loss cone. The mass segregation timescale for a stellar mass black hole of mass $M_{\rm SBH}$ in a stellar population with average mass $\langle M_\star \rangle$ is $t_{\rm seg}(r) = (M_{\rm SBH}/\langle M_\star \rangle)t_{\rm rel}(r)$, where the energy relaxation timescale is
\begin{equation}
t_{\rm rel}(r) = 0.34 \frac{\sigma^3(r)}{G^2 \sqrt{\langle M_{\star}^2\rangle}\rho_\star(r) \ln \Lambda}.
\end{equation}
Figure \ref{fig:massSegregation} shows, as a function of SMBH mass, the segregation radius $r_{\rm seg}$ interior to which stellar mass black holes will have segregated from the sphere of influence radius $r_{\rm infl}$ into a more compact subcluster within a Hubble time. Cusp galaxies with low mass SMBHs ($M_\bullet \lesssim 10^6 M_\odot$) have $r_{\rm seg} > r_{\rm infl}$ and hence will have stellar mass BHs removed from the radii $\sim r_{\rm infl}$ that dominate the TDE rate. For cusp galaxies with larger SMBHs or for core galaxies, however, black hole segregation is generally unimportant. This implies that stellar remnants will indeed enhance the TDE rate by factors of a few in most galaxies. Because of astrophysical uncertainties in the metallicity distribution of stars in distant galactic nuclei, as well as our approximate treatment of mass segregation, we neglect this enhancement for the remainder of the paper, but we emphasize that all but the smallest cusp galaxies could see a rate enhancement $\gtrsim 1.5$ due to stellar remnants. This enhancement could be even larger if top-heavy IMFs are common in galactic nuclei, as has been suggested for some stars in the center of the Milky Way \citep{Bartko+10}. Although this simplified discussion of mass segregation largely agrees with simulations of two-component stellar systems \citep{Watters00}, star clusters with a realistic mass spectrum see a more complicated evolution of their stellar mass black holes towards mass segregation \citep[e.g.][]{Baumga+04}.
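For orientation, the timescales above can be evaluated directly (a sketch in cgs units, with $\ln\Lambda = 10$ assumed by default):
\begin{verbatim}
import numpy as np

G = 6.674e-8   # cgs

def t_rel(sigma, rho, m2_avg, ln_lambda=10.0):
    # energy relaxation time, per the expression above
    return 0.34 * sigma**3 / (G**2 * np.sqrt(m2_avg) * rho * ln_lambda)

def t_seg(sigma, rho, m2_avg, m_sbh, m_avg, ln_lambda=10.0):
    # mass segregation time for a stellar-mass BH of mass m_sbh
    return (m_sbh / m_avg) * t_rel(sigma, rho, m2_avg, ln_lambda)
\end{verbatim}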
\begin{figure}
\includegraphics[width=85mm]{massSegregation.pdf}
\caption{The radius $r_{\rm seg}$ out to which stellar mass black holes of mass $M_{\rm SBH}=10M_\odot$ can reach energy equipartition at radii $\sim r_{\rm infl}$, in units of the SMBH influence radius $r_{\rm infl}$, for stellar density profiles $\rho_\star(r) \propto r^{-g}$. The solid black curve is for core galaxies with $g=1$, and the dashed red curve is for cusp galaxies with $g=2$. Because the tidal disruption critical radius $r_{\rm crit} \approx r_{\rm infl}$ in the small galaxies that dominate the TDE rate, this plot indicates that stellar remnants will increase TDE rates by a factor of a few for cusp galaxies with $M_\bullet \gtrsim 10^6 M_\odot$. Below this mass, stellar remnants will mass segregate into a smaller subcluster in cusp galaxies. The rate enhancement from stellar remnants likely occurs in all core galaxies; when $g \le 1.5$, relaxation times do not monotonically decrease with decreasing radius. The $g=1$ curve cuts off at a finite value of $M_\bullet$ because of this effect (i.e. larger SMBHs in core profiles will see {\it no} mass segregation in a Hubble time).}
\label{fig:massSegregation}
\end{figure}
More speculatively, a large population of freely floating gas giant planets could also contribute to the total TDE rate, especially at low SMBH masses for which the tidal radius $r_{\rm t} \approx 50r_{\rm g}(M_{\bullet}/10^{6}M_{\odot})^{-2/3}(M_{\rm p}/M_{\rm J})^{-1/3}$ (Eq.~\ref{eq:rt}) of a planet of mass $M_{\rm p}$ exceeds the gravitational radius $r_{\rm g} = GM_{\bullet}/c^{2}$, where $M_{\rm J}$ is the mass of Jupiter and we have assumed a constant planetary radius of $R_{\rm p} = R_{\rm J} = 7\times 10^{9}$ cm. Although freely floating planets may be common relative to main sequence stars (e.g.~\citealt{Sumi+11}), the much lower accretion rates produced by the disruption of a planet $\propto M_p^{2}$ (Eq.~\ref{eq:Mdotpeak}) implies much dimmer events which are unlikely to contribute appreciably to the {\it detected} TDE rate.
\section{TDE Observables}
\label{sec:observables}
Following tidal disruption, the gaseous stellar debris travels on approximately geodesic trajectories, with a ``frozen-in'' specific energy spread \citep{Rees88} given by
\begin{equation}
\Delta \epsilon = \frac{GM_\bullet R_\star}{r_{\rm t}^2}.
\end{equation}
If one assumes a simple top-hat distribution of debris mass with respect to specific energy (width $\Delta \epsilon$), then the most tightly bound debris returns to pericenter after a time (e.g.~\citealt{Stone+13})
\begin{equation}
t_{\rm fall} = 3.5\times 10^6~{\rm s}~ M_6^{1/2}m_\star^{-1}r_\star^{3/2},
\label{eq:tfb}
\end{equation}
where $M_6=M_\bullet/10^6M_\odot$, $m_\star=M_\star/M_\odot$, and $r_\star=R_\star/R_\odot$. The peak mass fallback rate occurs at this time, and has an Eddington-normalized value of
\begin{equation}
\frac{\dot{M}_{\rm peak}}{\dot{M}_{\rm edd}} = 133 \eta_{-1} M_6^{-3/2}m_\star^2r_\star^{-3/2},
\label{eq:Mdotpeak}
\end{equation}
where $\dot{M}_{\rm edd} \equiv L_{\rm edd}/\eta c^{2}$ is the Eddington accretion rate, $L_{\rm edd} \simeq 1.5\times 10^{46}M_{8}$ erg s$^{-1}$ is the Eddington luminosity (with $M_8 = M_\bullet/10^8M_\odot$), and $\eta = 0.1\eta_{-1}$ is the constant accretion efficiency. Assuming the initial mass fallback rate is super-Eddington, it will remain so for a timescale
\begin{equation}
t_{\rm edd} = 6.6\times 10^7~{\rm s}~ \eta_{-1}^{3/5}M_6^{-2/5}m_\star^{1/5}r_\star^{3/5}.
\label{eq:tedd}
\end{equation}
The above equations have the correct parameter scalings, but their prefactors can be in error by factors of $\approx 2$ because the debris mass distribution is not actually a top-hat and depends weakly on stellar structure and $\beta$ \citep{GuiRam13}, effects that we neglect here. In our rate calculations we assume $r_{\star} = m_{\star}^{0.8}$, as is appropriate for low mass main sequence stars.
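These scalings are simple enough to tabulate directly; the sketch below evaluates Eqs.~\eqref{eq:tfb}--\eqref{eq:tedd} with $r_\star = m_\star^{0.8}$ folded in (the prefactors inherit the factor $\approx 2$ uncertainty just noted):
\begin{verbatim}
def t_fall(M6, m=1.0):             # seconds
    return 3.5e6 * M6**0.5 / m * (m**0.8)**1.5

def mdot_peak_edd(M6, m=1.0, eta1=1.0):
    return 133.0 * eta1 * M6**-1.5 * m**2 * (m**0.8)**-1.5

def t_edd(M6, m=1.0, eta1=1.0):    # seconds
    return 6.6e7 * eta1**0.6 * M6**-0.4 * m**0.2 * (m**0.8)**0.6

# e.g. a solar-type star and M6 = 1: t_fall ~ 41 d, a peak fallback
# rate ~ 133 x Eddington, and super-Eddington fallback for ~ 2 yr
\end{verbatim}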
Another observable in TDE flares is the total radiated energy, $E_{\rm rad}$. The exact value of $E_{\rm rad}$ will depend on the particular emission model, but we can gain intuition by considering a very simple model for the luminosity $L(t)$ of initially super-Eddington TDEs, where $L=L_{\rm edd}$ if $t<t_{\rm edd}$, and $L=\eta \dot{M}c^{2}$ if $t>t_{\rm edd}$. Assuming $\eta$ is constant in time, we then have
\bea
E_{\rm rad} &=& L_{\rm edd}(\tilde{t}_{\rm edd}-t_{\rm fall}) + \frac{M_\star\eta c^2}{2} \left(\frac{t_{\rm fall}}{\tilde{t}_{\rm edd}} \right)^{2/3} \notag \\
&\approx& \begin{cases} 8.1\times 10^{51}~{\rm erg}~\eta_{-1}^{3/5}M_6^{3/5}m_\star^{1/5}r_\star^{3/5}, &t_{\rm edd} \gg t_{\rm fall} \\ 8.9\times 10^{52}~{\rm erg}~\eta_{-1}m_\star, &t_{\rm edd} \lesssim t_{\rm fall}, \end{cases}
\label{eq:ERad}
\eea
where $\tilde{t}_{\rm edd}=\max( t_{\rm edd}, t_{\rm fall} )$. Figure \ref{fig:ERad} shows the dependence of $E_{\rm rad}$ on $M_{\bullet}$, from which one observes that stars disrupted by low-mass SMBHs ($\sim 10^5 M_\odot$) radiate an order of magnitude less energy than their high mass, initially sub-Eddington counterparts. In practice, $E_{\rm rad}$ is a difficult quantity to measure: X-ray selected TDEs (or TDEs found in SDSS) suffer from poor cadence and generally miss the peak of the light curve, where most of the energy is emitted. Optically- or UV-selected TDEs avoid this problem, but unless followup observations reveal the peak of the SED, the bolometric correction to the observed light is quite uncertain. We take five optically- or UV-selected TDE candidates and show their {\it lower} limits for $E_{\rm rad}$. While two events (D3-13 and D23H-1) fall in the expected portion of parameter space, the other three (D1-9, PS1-10jh, and PS1-11af) possess $E_{\rm rad} \ll 0.1M_\odot \eta c^2$, for $\eta \approx 0.1$. We list here four possible explanations for the low $E_{\rm rad}$ seen in these events:
\begin{enumerate}
\item All data points in Fig. \ref{fig:ERad} are lower limits, because of uncertainties in the bolometric correction. If the brightness temperature is larger than the minimal value assumed, the true $E_{\rm rad}$ values will be higher. This inherent underestimate may be worsened by dust extinction, particularly in events with UV data.
\item Partial tidal disruption of a stellar envelope (i.e. $\beta \lesssim 1$) will reduce the amount of mass fed to the SMBH, often by orders of magnitude \citep{GuiRam13}.
\item An accretion efficiency $\eta \ll 0.1$. This could arise from the hydrodynamics of super-Eddington accretion flows, or (because $r_{\rm crit} \gg r_{\rm g}=GM_\bullet/c^2$) from the isotropic distribution of pre-disruption orbits with respect to the SMBH spin axis. The subset of TDEs approaching rapidly spinning SMBHs on retrograde, roughly equatorial orbits will have accretion efficiencies $\eta \lesssim 0.01$.
\item As mentioned above, the number of free-floating planets in the Milky Way may be comparable to or greater than the number of stars \citep{Sumi+11}. Tidal disruption of a planet or brown dwarf would reduce the available mass budget for the TDE flare.
\end{enumerate}
Possibility (i) is almost certainly contributing at some level, but absent better followup observations of future TDEs (or a much better theoretical understanding of their optical emission mechanisms), its importance is difficult to quantify. Possibility (ii) is promising, as partial tidal disruptions should be a nontrivial fraction of all TDEs. In the diffusive regime of relaxation, partial disruptions will make up a large majority of TDEs; in the pinhole regime of relaxation (i.e. $q(\epsilon)> 1$; see \S \ref{sec:losscone}), they will make up a non-negligible fraction of events. If we take $0.6<\beta < 1.0$ ($0.9<\beta<1.8$) as the parameter space for observable partial disruptions of polytropic $\gamma=5/3$ ($\gamma=4/3$) stars \citep[Fig. 3]{GuiRam13}, then $\approx 40 \%$ ($\approx 51\%$) of pinhole-regime TDEs from $M_\bullet \sim 10^6 M_\odot$ SMBHs will be partial disruptions.
The viability of the final two hypotheses is more ambiguous. The retrograde orbits explanation for possibility (iii) is sufficiently fine-tuned that we can discount it on the basis of the five events in Fig. \ref{fig:ERad}, and recent general relativistic radiation hydrodynamic simulations of super-Eddington accretion suggest $\eta \sim 0.1$ is typical \citep{Sadows+14, Jiang+14}. Finally, possibility (iv) is hard to disprove but fairly speculative. A full explanation of the unexpectedly low $E_{\rm rad}$ seen in many TDEs is beyond the scope of this paper, but is an important topic for future work.
\begin{figure}
\includegraphics[width=85mm]{ERad.pdf}
\caption{Total energy radiated in a TDE, $E_{\rm rad}$, as a function of SMBH mass $M_{\bullet}$, calculated assuming the radiated luminosity is limited to the Eddington luminosity of the SMBH, and that radiative efficiency $\eta=0.1$. Gray solid, orange dashed, and red dotted curves are for stellar masses of $1M_\odot$, $0.3M_\odot$, and $0.1M_\odot$, respectively. Low mass black holes with highly super-Eddington peak accretion rates radiate an order of magnitude less total energy than will TDEs from high mass SMBHs (Eq.~\ref{eq:Mdotpeak}). We also plot lower limits on the radiated energy for all five TDE candidates selected in the optical or UV with published $E_{\rm rad}$ estimates: D1-9, D3-13 \citep{Gezari+08}, D23H-1 \citep{Gezari+09}, PS1-10jh \citep{Gezari+12}, and PS1-11af \citep{Chorno+14}. In all five events we take SMBH mass estimates and error bars from the discovery papers.}
\label{fig:ERad}
\end{figure}
\subsection{Optical Emission Models}
\label{sec:opticalmodels}
Several physical processes may contribute to the optical emission from TDEs, which depend in different ways on the properties of the disrupted star and the SMBH. These processes, as described in Appendix \ref{sec:optical}, include thermal emission from the (viscously spreading) accretion disk \citep{SheMat14}; super-Eddington outflows from the accretion disk \citep{StrQua09}; reprocessing of the accretion luminosity \citep{Guillochon+14} by an extended outer dense screen (e.g., nonvirialized tidal debris); and synchrotron emission from an off-axis relativistic jet that has decelerated to transrelativistic speeds through its interaction with the circumnuclear medium. In $\S\ref{sec:detection}$ we calculate the observed rates of TDEs using several of these models. We focus on the emission at optical wavebands because sensitive wide-field surveys, such as the Zwicky Transient Facility (ZTF) and LSST, are expected to greatly expand the current TDE sample in the near future.
Figure \ref{fig:LPeak} shows the peak {\it g}-band optical luminosities predicted by each emission model as a function of SMBH mass for our fiducial choice of model parameters (see Appendix \ref{sec:optical}). Both the spreading disk and the reprocessing models represent ``Eddington-limited'' emission mechanisms, and hence their peak luminosities decrease for lower mass SMBHs (Eq.~\ref{eq:ERad}). By contrast, super-Eddington outflows and off-axis jets are not limited to the Eddington luminosity, and instead predict higher peak luminosities for lower SMBH masses. This difference has important implications for the sensitivity of the detected TDE fraction to the SMBH mass distribution ($\S\ref{sec:detection}$).
Figure \ref{fig:LPeak} also shows observed peak {\it g}-band luminosities\footnote{PTF09axc and PTF09djl only have peak {\it r}-band magnitudes (Arcavi, private communication); we present the {\it g}-band extrapolation of a blackbody spectrum assuming both the {\it g}- and {\it r}-band are on the Rayleigh-Jeans tail of these events. The peak {\it g}-band magnitudes of D1-9 and D3-13 are likewise calculated by correcting (observed) peak {\it r}-band magnitudes in \citet{Gezari+09}.} for five optically-selected TDE candidates, represented with filled points: PTF09ge, PTF09axc, PTF09djl \citep{Arcavi+14}, PS1-10jh \citep{Gezari+12}, and PS1-11af \citep{Chorno+14}. Three UV-selected TDE candidates are plotted as open points: D1-9, D3-13 \citep{Gezari+08}, and D23H-1 \citep{Gezari+09}. All five of the optically-selected events are tightly clustered in the diagram, at least relative to the huge uncertainties in the predicted optical emission of our four different mechanisms. This is likely due to the flux limitations of PTF and Pan-STARRS. Notably, all eight of these events appear compatible with only a single optical emission mechanism: an extended reprocessing layer that converts a fraction $\sim 10\%$ of the accretion power into optical luminosity. The spreading disk and super-Eddington outflow scenarios are unable to produce optical luminosities comparable to those observed.\footnote{Technically, super-Eddington outflows can reach the observed peak luminosities $\sim 10^{43}~{\rm erg~s}^{-1}$, but only if all galaxies hosting these TDEs have SMBHs that are undermassive by a factor $\sim 30$.} Our model for optical synchrotron emission from a decelerating jet possesses enough free parameters that it can be brought up into the observed luminosity range (and indeed, our choice of parameters was quite conservative), but none of these events were seen to possess the nonthermal spectra characteristic of synchrotron radiation.
\begin{figure}
\includegraphics[width=85mm]{LPeak.pdf}
\caption{Peak {\it g}-band optical luminosities $P$ as a function of SMBH mass $M_{\bullet}$ for different models of optical TDE emission (Appendix \ref{sec:optical}), assuming a $\beta = 1$ tidal disruption of a solar type star. Models shown include thermal emission from the viscously spreading accretion disk ({\it orange, solid}; $\S\ref{sec:thermal}$); reprocessing by an extended outer layer of nonvirialized debris ({\it green, dashed}; $\S\ref{sec:reprocessing}$); super-Eddington outflows ({\it red, dotted}, $\S\ref{sec:SE}$); and off-axis emission from a decelerating relativistic jet ({\it blue, dot-dashed}; $\S\ref{sec:jet}$). Thin black lines represent upper limits: the solid, thin black line corresponds to the total bolometric $\dot{M}c^2$ power available (assuming a radiative efficiency of $\eta=0.1$), while the dashed, thin black line is the same, but Eddington-limited. We also plot the peak {\it g}-band luminosities for all eight claimed TDE candidates that capture the peak of the optical light curve. In particular, we plot PTF09ge ({\it circle}), PTF09axc ({\it square}), PTF09djl ({\it diamond}), PS1-10jh ({\it upward triangle}), PS1-11af ({\it downward triangle}), D1-9 ({\it open circle}), D3-13 ({\it open square}), and D23H-1 ({\it open diamond}).}
\label{fig:LPeak}
\end{figure}
\section{Results}
\label{sec:results}
Following the prescriptions of the preceding section, we compute TDE rates $\dot{N}_{\rm TDE}$ for the 146 galaxies in our sample which can be deprojected and Eddington-inverted assuming spherical symmetry and isotropic velocities. In this section, we present the raw results, provide power law fits for $\dot{N}_{\rm TDE}$ as a function of other galaxy parameters, and then use these fits to calculate distributions of TDE observables and rates of detectability by optical surveys.
\subsection{Rates}
\label{sec:rates}
Table \ref{tab:rates} presents our results for TDE rates in every galaxy in our extended sample of Nuker galaxies, calculated under the assumption of a Kroupa PDMF and the latest calibration of the \citet[hereafter MM13]{McConnell&Ma13} $M_{\bullet}-\sigma$ relationship (this relation is broadly consistent with other recent analyses, e.g. \citealt{GraSco13}). Because of its length, we relegate this table to a separate appendix, but analyze its data here. Our most important findings are in Figure \ref{fig:SampleNDotMBH}, which shows how the TDE rate varies as a function of SMBH mass.
Although we start with a sample of 219 galaxies, we are forced to discard 51 which cannot be deprojected assuming spherical symmetry ($\gamma<0$), as well as a further 22 whose distribution functions are incompatible with the assumption of velocity isotropy ($f(\epsilon)\ge 0$ is not everywhere satisfied)\footnote{Distribution functions unable to satisfy positivity generally occur for $\gamma < 0.05$. Both of these rejection criteria generally occur for very massive galaxies with SMBHs incapable of disrupting main sequence stars.}. We discard 2 more outlier galaxies with extremely small $\sigma$ values, leaving a final 144 galaxies with calculated TDE rates. A best-fit power-law regression to these results gives
\begin{equation}
\dot{N}_{\rm TDE} = \dot{N}_0 \left( \frac{M_\bullet}{10^8M_\odot} \right)^{B},
\label{eq:bestfit}
\end{equation}
with $\dot{N}_0 = 2.9 \times 10^{-5}$ yr$^{-1}$ gal$^{-1}$ and $B = -0.404$, when we consider our entire galaxy sample. If we limit ourselves to core (cusp) galaxies alone, we find $\dot{N}_0 = 1.2 \times 10^{-5}$ yr$^{-1}$ gal$^{-1}$ ($\dot{N}_0 = 6.5 \times 10^{-5}$ yr$^{-1}$ gal$^{-1}$) and $B=-0.247$ ($B=-0.223$). The significantly steeper slope of our fit to the full sample occurs because of the transition from a core-dominated to a cusp-dominated galaxy population as one moves from high- to low-mass galaxies. These fits are shown in Fig. \ref{fig:SampleNDotMBH}. Our fit to the overall galaxy sample is comparable to that found using the results of WM04, which yield $\dot{N}_0 = 2.3\times 10^{-5}~{\rm yr}^{-1}$ and $B=-0.519$.
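In practical terms, the fit can be applied as follows (a convenience sketch; the full-sample parameters are the defaults, with the core and cusp values quoted above as alternatives):
\begin{verbatim}
def ndot_tde(M_bh, N0=2.9e-5, B=-0.404):
    # per-galaxy TDE rate in yr^-1; M_bh in solar masses
    return N0 * (M_bh / 1e8)**B
\end{verbatim}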
All of these fits are somewhat sensitive to the sample restrictions used in their calculation; beyond the choice of core, cusp, or both, we can also limit ourselves to galaxies under some Hills limit (say, $M_\bullet < 10^8 M_\odot$). If we fit a power law to this subsample, we find almost no dependence of $\dot{N}_{\rm TDE}$ on SMBH mass, with $B=0.061$. However, as we show in Appendix C, our most trustworthy results are for those galaxies with $M_\bullet \gtrsim 10^7 M_\odot$: for these galaxies, HST photometry resolves $r_{\rm crit}$, the radius from which the vast majority of loss cone flux originates. Generally speaking, the critical radius is marginally resolved for $M_\bullet \approx 10^7 M_\odot$, and better resolved at larger SMBH masses. For this reason, our fiducial model uses a power law fit to the entire galaxy sample ($B=-0.404$), but we will briefly comment on this choice at later points in the paper.
\begin{figure}
\includegraphics[width=85mm]{SampleNDotMBH.pdf}
\caption{Tidal disruption rates $\dot{N}_{\rm TDE}$ for every galaxy in our sample, measured in units of stars per year. Results are plotted against SMBH mass $M_\bullet$. Cusp galaxies are shown as blue diamonds, core galaxies are shown as red circles, and rare intermediate galaxies ($0.3\le \gamma \le 0.5$) are shown as purple squares. The solid black line, dotted blue line, and dashed red line are best fit power laws to the full sample, the cusp subsample, and the core subsample, respectively. The power law fit to the full sample is significantly steeper than the fits to the subsamples because of the transition from cusp to core galaxies with increasing $M_\bullet$.}
\label{fig:SampleNDotMBH}
\end{figure}
Figure \ref{fig:SampleNDotGamma} shows the TDE rate as a function of the inner stellar slope $\gamma$ of the Nuker profile. Galaxies with larger $\gamma$ possess higher rates, as is expected because their denser central stellar populations naturally feature shorter relaxation times and faster rates of energy and angular momentum diffusion. This correlation is a relatively strong one: employing our entire sample, we find $\dot{N}_{\rm TDE} \propto \gamma^{0.705}$. No strong correlations exist between the per-galaxy TDE rate and the other Nuker structural parameters of $\alpha$, $\beta$, $R_{\rm b}$, and $I_{\rm b}$.
\begin{figure}
\includegraphics[width=85mm]{SampleNDotGamma.pdf}
\caption{Tidal disruption rates $\dot{N}_{\rm TDE}$ for every galaxy in our sample, measured in units of stars per year. Results are plotted against the inner Nuker profile power law slope $\gamma$. Green circles indicate galaxies with SMBH masses below the Hills mass for a solar type star; orange squares are larger galaxies with $M_\bullet$ above that Hills mass. The solid black, dashed green, and dotted orange lines show best fit power laws for the full sample, the $M_\bullet < M_{\rm Hills}$ subsample, and the $M_\bullet > M_{\rm Hills}$ subsample, respectively.}
\label{fig:SampleNDotGamma}
\end{figure}
The TDE rate is relatively insensitive to assumptions about the parametrization of the galaxy profile. Table \ref{tab:compare} compares our fiducial (Nuker, MM13) rates to rates calculated using instead the Sersic and Core-Sersic galaxy parameterizations, for a subsample of 21 galaxies from \citet{Trujil+04}.\footnote{Two galaxies in our sample, NGC1700 and NGC4478, do not have computable event rates for the Nuker parametrization because their best-fit $\gamma<0$.} In general the Nuker parameterizations result in slightly higher TDE rates than the Sersic parameterization, but comparable to the more realistic Core-Sersic models. This result is not too surprising, as the fit residuals in Figs. 4--5 of \citet{Trujil+04} show very little difference between Nuker and Core-Sersic fits (pure Sersic fits sometimes do differ, and generally have greater residuals).
Our results are very insensitive to changes in the $M_\bullet-\sigma$ relation; we have rerun the WM04 sample with both the \citet{MerFer01} and the MM13 calibrations of this relation, and find that the mean change in individual rates $\dot{N}_{\rm TDE}$ is $15\%$. However, some rates go down and others go up, with no systematic shift: if we take sample-averaged rates for both sets of $M_\bullet-\sigma$, the difference in these average rates is only $2\%$.
Figure \ref{fig:SamplePinhole} shows $f_{\rm pinhole}$, the fraction of TDEs occurring in the pinhole regime for each galaxy, as a function of SMBH mass (see also the entries in Table \ref{tab:rates}). As described in $\S\ref{sec:losscone}$, pinhole events possess a distribution ${\rm d}\dot{N}_{\rm TDE}/{\rm d}\beta \propto \beta^{-2}$ in penetration factor $\beta$, while non-pinhole (diffusive) TDEs instead possess $\beta \approx 1$, with the rate of higher $\beta$ strongly suppressed. In other words, we expect the total $\beta$-distribution to be approximately
\bea
\frac{{\rm d}\dot{N}_{\rm TDE}}{{\rm d}\beta} &\approx& \begin{cases} f_{\rm pinhole}\beta^{-2} &\beta \gtrsim 1, \\ (1-f_{\rm pinhole}) & \beta \approx 1, \end{cases}
\label{BetaDist}
\eea
where $f_{\rm pinhole}$ is the TDE rate-weighted pinhole fraction, which we estimate to be $\approx 0.3$ based on our galaxy sample. The maximum attainable $\beta$ for a given Schwarzschild SMBH is $\beta_{\rm max}\approx r_{\rm t}/(2r_{\rm g})$. We find a strong correlation of the pinhole fraction with SMBH mass, and plot this in Fig.~\ref{fig:SamplePinhole}. Specifically, $f_{\rm pinhole}$ rises with decreasing $M_\bullet$. As we shall see, small SMBHs dominate the volumetric TDE rate, so this result indicates that high-$\beta$ events are relatively common among TDEs. Our best fit power law is
\begin{equation}
f_{\rm pinhole} = 0.22 \left( \frac{M_\bullet}{10^8M_\odot} \right)^{-0.307}.
\end{equation}
Interestingly, at fixed $M_\bullet$, $f_{\rm pinhole}$ is largest for core galaxies.
In this section we have crudely approximated pinhole TDEs as occurring when $q > 1$, and diffusive TDEs as occurring when $q<1$; in practice, there is a large intermediate zone with $0.1 \lesssim q \lesssim 3$ where high-$\beta$ TDEs can occur, but at a rate that is somewhat suppressed relative to the logarithmically flat rate of Eq. \eqref{BetaDist}. A formalism for more precise calculation of $\dot{N}(\beta)$ is presented in \citet{Strubb11}.
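As an illustration of Eq.~\eqref{BetaDist}, penetration factors can be drawn by inverse-CDF sampling of the pinhole component (a sketch; \verb|beta_max| is set by the horizon-size argument above):
\begin{verbatim}
import numpy as np

def sample_beta(f_pin, beta_max, size, seed=0):
    rng = np.random.default_rng(seed)
    pin = rng.random(size) < f_pin
    u = rng.random(size)
    # inverse CDF of p(beta) ~ beta^-2 on [1, beta_max]
    beta_pin = 1.0 / (1.0 - u * (1.0 - 1.0 / beta_max))
    return np.where(pin, beta_pin, 1.0)
\end{verbatim}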
\begin{figure}
\includegraphics[width=85mm]{SamplePinhole.pdf}
\caption{The fraction, $f_{\rm pinhole}$, of all TDEs in a galaxy which fall into the pinhole regime of tidal disruption. TDEs in this regime can access any $\beta$ value up to the maximum permitted by the size of the horizon, with a cumulative distribution $\dot{N}_{\rm TDE}(>\beta) \propto \beta^{-1}$. TDEs in the opposite, diffusive regime see $\beta \approx 1$ almost always, with higher $\beta$ values exponentially suppressed. Data point styles are the same as in Fig. \ref{fig:SampleNDotMBH}. Although $f_{\rm pinhole}$ increases with decreasing $M_\bullet$, and is highest among small cusp galaxies, at any given $M_\bullet$ value it is higher in core galaxies.}
\label{fig:SamplePinhole}
\end{figure}
\begin{table*}
\begin{minipage}{180mm}
\caption{Intrinsic tidal disruption event rates for the 21 galaxies parametrized in \citet{Trujil+04}. We show event rates for the Nuker ($\dot{N}_{\rm TDE}^{N}$), Sersic ($\dot{N}_{\rm TDE}^{S}$), and core-Sersic ($\dot{N}_{\rm TDE}^{CS}$) surface brightness parametrizations, as well as the most relevant structural parameters.}
\begin{tabular}{l|c|r|r|r|r|r|r|r|r}
\hline
Galaxy & Type$^{a}$ & $R_{\rm e}$$^{b}$ & $n$$^{c}$ & $R_{\rm b}$$^{d}$ & $\gamma$$^{e}$ & $M_\bullet$$^{f}$ & $\dot{N}_{\rm TDE}^{N}$$^{g}$ & $\dot{N}_{\rm TDE}^{S}$$^{h}$ & $\dot{N}_{\rm TDE}^{CS}$$^{i}$ \\
\hline
NGC2986 & $\cap$ & 3.66 & 3.29 & 226 & 0.18 & 8.96 & -5.15 & -4.92 & -4.91 \\
NGC3348 & $\cap$ & 3.95 & 3.09 & 199 & 0.09 & 8.76 & -5.00 & -5.03 & -5.00 \\
NGC4168 & $\cap$ & 3.88 & 2.68 & 365 & 0.17 & 8.14 & -5.52 & -5.27 & -5.36 \\
NGC4291 & $\cap$ & 1.99 & 3.75 & 72.7 & 0.01 & 9.13 & -4.76 & -4.31 & -4.33 \\
NGC5557 & $\cap$ & 5.16 & 3.74 & 204 & 0.02 & 8.95 & -5.11 & -4.74 & -4.73 \\
NGC5903 & $\cap$ & 5.13 & 2.96 & 257 & 0.13 & 8.52 & -5.35 & -5.22 & -5.26 \\
NGC5982 & $\cap$ & 4.19 & 3.24 & 68.7 & 0.05 & 8.92 & -4.99 & -4.87 & -4.82 \\
NGC3613 & $\cap$ & 4.82 & 3.63 & 50.8 & 0.04 & 8.38 & -4.83 & -4.62 & -4.65 \\
NGC5077 & $\cap$ & 3.86 & 3.56 & 351 & 0.23 & 9.08 & -5.03 & -4.68 & -4.74 \\
NGC1426 & $\backslash$ & 4.15 & 4.95 & 5.10 & 0.26 & 7.69 & -4.58 & -4.22 & -3.68 \\
NGC1700 & $\cap$ & 7.39 & 5.99 & 11.8 & -0.10 & 8.80 & - & -4.27 & -4.27 \\
NGC2634 & $\backslash$ & 2.93 & 4.54 & 8.6 & 0.81 & 7.95 & -4.38 & -4.33 & -3.76 \\
NGC2872 & $\backslash$ & 4.29 & 4.56 & 11.5 & 1.01 & 9.18 & -4.31 & -4.37 & -3.93 \\
NGC3078 & $\backslash$ & 3.91 & 4.37 & 9.0 & 0.95 & 8.74 & -3.96 & -4.46 & -4.51 \\
NGC4458 & $\cap$ & 4.09 & 10.1 & 7.8 & 0.16 & 6.76 & -4.39 & -2.71 & -3.96 \\
NGC4478 & $\cap$ & 1.13 & 3.11 & 19.1 & -0.10 & 7.50 & - & -4.74 & -4.46 \\
NGC5017 & $\backslash$ & 1.91 & 5.11 & 9.8 & 1.12 & 7.98 & -2.89 & -3.99 & -3.42 \\
NGC5576 & $\cap$ & 3.96 & 4.74 & 550 & 2.73 & 0.4 & -4.47 & -4.09 & -4.11 \\
NGC5796 & $\wedge$ & 5.03 & 4.79 & 232 & 0.41 & 9.23 & -4.87 & -4.32 & -4.33 \\
NGC5831 & $\backslash$ & 3.36 & 4.72 & 7.0 & 0.33 & 7.89 & -4.63 & -4.11 & -4.08 \\
NGC5845 & $\backslash$ & 0.57 & 2.74 & 13.9 & 0.51 & 8.81 & -4.71 & -4.74 & -4.76
\label{tab:compare}
\end{tabular}
\\ $^{a}$ Galaxy type, with $\cap$, $\backslash$, and $\wedge$ denoting core, cusp, and intermediate galaxies, respectively; $^{b}$ half-light radius $R_{\rm e}$; $^{c}$ Sersic index $n$; $^{d}$ Nuker break radius $R_{\rm b}$; $^{e}$ inner power law slope $\gamma$; $^{f}$ SMBH mass $M_\bullet$ (given as $\log_{10}$ of the mass in $M_\odot$), as computed from the $M_\bullet - \sigma$ relation; $^{g}$ $\log_{10}$ of the TDE rate in yr$^{-1}$, calculated using the Nuker parametrization; $^{h}$ same, for the Sersic parametrization; $^{i}$ same, for the core-Sersic parametrization.
\end{minipage}
\end{table*}
\subsection{SMBH Occupation Fraction}
Volumetric TDE rates are obtained by combining well known galaxy scaling relations with our best-fit power law for $\dot{N}_{\rm TDE}(M_\bullet)$ (Eq.~\ref{eq:bestfit}). The galaxy luminosity function in {\it R}-band luminosity $L_{\rm R}$ is assumed to follow the Schechter form \citep{Schechter76}
\begin{equation}
\phi(L_{\rm R}){\rm d}L_{\rm R} = \phi_\star \left( \frac{L_{\rm R}}{L_\star} \right)^{-1.1}\exp (-L_{\rm R}/L_\star){\rm d}L_{\rm R},
\end{equation}
where $\phi_\star = 4.9 \times 10^{-3} h_7^3~{\rm Mpc}^{-3}$, $L_\star = 2.9 \times 10^{10} h_7^{-2} L_{\odot}$, and we take the normalized Hubble constant $h_7 = 1$ \citep{Brown+01}. Combining the Faber-Jackson law, $\sigma \approx 150~{\rm km~s}^{-1} ( L_{\rm R}/10^{10}L_{\odot})^{1/4}$, with the MM13 calibration of the $M_\bullet - \sigma$ relationship (Eq.~\ref{eq:msigma})
allows us to rewrite the Schechter function in terms of the SMBH mass,
\begin{align}
\phi_\bullet (M_\bullet) {\rm d}M_\bullet = & 2.56 \phi_\star f_{\rm occ} \left( \frac{M_\bullet}{10^8M_\odot} \right)^{-1.07} \\
& \times \exp \left(-0.647 \left( \frac{M_\bullet}{10^8 M_\odot} \right)^{0.709} \right) {\rm d}M_\bullet, \notag
\end{align}
where $f_{\rm occ}(M_{\bullet})$ is the occupation fraction of SMBHs.
The intrinsic TDE rates are relatively robust to uncertainties in the stellar mass function and the parameterization of galaxy surface brightness profiles ($\S\ref{sec:results}$; Table \ref{tab:compare}). However, the SMBH occupation fraction is much less certain, especially in low mass galaxies (e.g.~\citealt{Greene&Ho07}). In order to explore the sensitivity of the TDE rate to the SMBH mass function, we follow \citet{Miller+14} in parameterizing the occupation fraction as
\begin{equation}
f_{\rm occ} = \begin{cases} \label{eq:fOcc}
0.5+0.5\tanh\left(\ln \left(\frac{M_{\rm bulge}}{M_{\rm c}} \right) \times 2.5^{8.9-\log_{10} (M_{\rm c}/M_\odot)} \right), & M_{\rm bulge} < 10^{10}M_\odot \\
1, & M_{\rm bulge} > 10^{10}M_\odot
\end{cases}
\end{equation}
where $M_{\rm bulge}$ is the bulge mass, which we relate to the SMBH mass using the $M_{\rm bulge}-M_\bullet$ relation from MM13. The parameter $M_{\rm c}$ is the approximate mass below which the occupation fraction turns over, the value of which is not well constrained observationally but is likely to be less than $\sim 10^{8.5}M_{\odot}$ (Miller, private communication). In what follows we consider five fiducial occupation fractions (Fig. \ref{fig:fOcc}), corresponding to $M_{\rm c}/M_\odot$ = $10^{9}$ (case A), $10^{8.5}$ (case B), $10^{8}$ (case C), and $10^{7.5}$ (case D), along with a uniform $f_{\rm occ} = 1$ (case E).
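In code form, this parameterization reads (a direct transcription of Eq.~\eqref{eq:fOcc}, with masses in $M_\odot$):
\begin{verbatim}
import numpy as np

def f_occ(M_bulge, M_c):
    # occupation fraction; natural log, as in the equation above
    arg = np.log(M_bulge / M_c) * 2.5**(8.9 - np.log10(M_c))
    return np.where(M_bulge < 1e10, 0.5 + 0.5 * np.tanh(arg), 1.0)
\end{verbatim}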
\begin{figure}
\includegraphics[width=85mm]{fOcc.pdf}
\caption{Occupation fraction of SMBHs $f_{\rm occ}$ as a function of SMBH mass $M_{\bullet}$, based on the parameterization given in equation (\ref{eq:fOcc}) from \citet{Miller+14} for different values of the critical turnover mass $M_{\rm c}/M_{\odot} = 10^9$ ({\it gray, solid}), $10^{8.5}$ ({\it brown, dashed}), $10^{8}$ ({\it green, dot-dashed}), $10^{7.5}$ ({\it blue, dot-dot-dashed}), $\sim 0$ (uniform occupation fraction; {\it purple, dotted}). Where appropriate, the turnover mass is labeled.}
\label{fig:fOcc}
\end{figure}
With the adoption of an occupation fraction, we can now calculate the volumetric rate of TDEs, $\dot{n}_{\rm TDE}$. First we compute the differential volumetric rate of TDEs with respect to SMBH mass,
\begin{equation}
\dot{n}_{\rm TDE}'(\ln M_\bullet)= \int_{M_{\star}^{\rm min}}^{M_{\star}^{\rm max}} \dot{N}_{\rm TDE}(M_{\bullet},M_{\star})\,\phi_{\bullet}(M_{\bullet})\, \chi_{\rm Kro}(M_{\star})\,{\rm d}M_{\star},
\label{eq:dNdV}
\end{equation}
which is shown in Figure \ref{fig:NDot} for the different occupation fraction models. For brevity we have written $\dot{n}_{\rm TDE}'(\ln M_\bullet)={\rm d} \dot{n}_{\rm TDE}/{\rm d}\ln M_\bullet$. Here $\dot{N}_{\rm TDE}(M_\bullet, M_{\star})$ is given by our power law fit for $\dot{N}_{\rm TDE}$ (i.e. Eq. \ref{eq:bestfit}) if $M_\bullet < M_{\rm Hills}(M_{\star})$, but is $0$ otherwise. Consequently, low-mass stars dominate the volumetric TDE rate (a consequence of the bottom-heavy PDMF), although their smaller Hills masses enable solar-type stars to compete for $2\times 10^7 \lesssim M_\bullet/M_\odot \lesssim 10^8$.
The full volumetric rate is then simply
\begin{equation}
\dot{n}_{\rm TDE} = \int \frac{{\rm d}\dot{n}_{\rm TDE}}{{\rm d}\ln M_{\bullet}} \,{\rm d}\ln M_\bullet .
\end{equation}
One can also define a galaxy-averaged TDE rate, $\langle \dot{N}_{\rm TDE} \rangle =\dot{n}_{\rm TDE}/n_{\rm gal}$, where $n_{\rm gal} = \int \phi_{\bullet}\,{\rm d}M_{\bullet} = 0.015~{\rm Mpc}^{-3}$ is the total spatial density of galaxies (calculated assuming a lower limit of $M_{\rm bulge} = 10^7 M_\odot$ on the bulge mass defining a ``galaxy''). The five occupation fraction distributions give different average TDE rates, with $\langle \dot{N}_{\rm TDE} \rangle$ of $2.0\times 10^{-4}~{\rm yr}^{-1}$ (case A), $3.7 \times 10^{-4}~{\rm yr}^{-1}$ (case B), $6.7 \times 10^{-4}~{\rm yr}^{-1}$ (case C), $1.2\times 10^{-3}~{\rm yr}^{-1}$ (case D), and $4.6 \times 10^{-3}~{\rm yr}^{-1}$ (case E).
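Schematically, these pieces assemble as in the sketch below (\verb|phi_bh| and \verb|m_hills| are illustrative helpers for $\phi_\bullet$ and the Hills mass, with \verb|ndot_tde| and \verb|chi_kroupa| as sketched earlier):
\begin{verbatim}
import numpy as np

def dndot_dlnM(M_bh, m, chi, phi_bh, m_hills):
    # integrand of the volumetric rate: zero the per-galaxy rate above
    # the Hills mass, then integrate over the stellar mass function
    rate = np.where(M_bh < m_hills(m), ndot_tde(M_bh), 0.0)
    return phi_bh(M_bh) * np.trapz(rate * chi, m)

def ndot_volumetric(lnM_grid, per_lnM):
    # total volumetric rate: integrate over d ln(M_bh)
    return np.trapz(per_lnM, lnM_grid)
\end{verbatim}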
\begin{figure}
\includegraphics[width=85mm]{NDot.pdf}
\caption{Volumetric rate of TDEs $\dot{n}_{\rm TDE}'(\ln M_\bullet)={\rm d}\dot{n}_{\rm TDE}/{\rm d}\ln M_{\bullet}$ per unit log SMBH mass $M_{\bullet}$, as a function of $M_{\bullet}$. Different lines correspond to different assumptions about the low mass cut-off in the SMBH occupation fraction $f_{\rm occ}$, with line styles and colors corresponding to $M_{\rm c}$ values in Fig. \ref{fig:fOcc}. The decrease in the TDE rate at $M_{\bullet} \gtrsim 2\times 10^{7}M_{\odot}$ occurs because low mass stars (which dominate the total TDE rate for the assumed Kroupa PDMF) possess small radii and hence fall into the BH without being disrupted. We also show the volumetric rate integrated over ${\rm d}\ln M_\bullet$ for each of our five cases, with colors corresponding to the associated curves.}
\label{fig:NDot}
\end{figure}
\subsection{Distributions of Observables}
The TDE rate can also be translated into distributions of variables that are either directly observable, or dynamically important for TDE observables. We calculate differential volumetric TDE rates with respect to a variable $X$, once again denoting ${\rm d}\dot{n}_{\rm TDE}/{\rm d}\ln X=\dot{n}_{\rm TDE}'(\ln X)$, by using Eq.~\eqref{eq:dNdV} and changing variables while integrating over the Kroupa PDMF $\chi_{\rm Kro}(M_{\star})$:
\begin{equation}
\dot{n}_{\rm TDE}'(\ln X) = \int_{M_\star^{\rm min}}^{M_\star^{\rm max}} \frac{{\rm d}\dot{n}_{\rm TDE}}{{\rm d}\ln M_{\bullet}} \chi_{\rm Kro} \frac{{\rm d}\ln M_\bullet}{{\rm d}\ln X} \,{\rm d}M_{\star} .
\end{equation}
Figure \ref{fig:NDotObs} shows our results for the TDE distributions with respect to peak fallback rates $\dot{M}_{\rm peak}/\dot{M}_{\rm edd}$ (Eq.~\ref{eq:Mdotpeak}), fallback timescales $t_{\rm fall}$ (Eq.~\ref{eq:tfb}), Eddington timescales $t_{\rm edd}$ (Eq.~\ref{eq:tedd}), and total radiated energy $E_{\rm rad}$ (Eq. \ref{eq:ERad}). The peak Eddington ratio is very sensitive to the occupation fraction, with the most probable value of $\dot{M}_{\rm peak}/\dot{M}_{\rm edd}$ varying between $\sim 10-1000$ as the turn-over mass decreases from $M_{\rm c} = 10^{8.5}M_{\odot}$ to $10^{7.5}M_{\odot}$. For all our choices of occupation fraction except for case A ($M_{\rm c}=10^9 M_\odot$, which is in any case disfavored by observations of nearby galactic nuclei), most TDEs are characterized by a phase of highly super-Eddington accretion. This also implies that TDE emission mechanisms that scale with absolute accretion power, instead of those which are limited to the Eddington luminosity, provide the most sensitive probe of the SMBH occupation fraction.
The distribution of fall-back times $t_{\rm fall}$ is less sensitive to the occupation fraction, with typical values ranging from a few weeks to a few months (except for case E, where $t_{\rm fall} \sim 1~{\rm day}$). All models for TDE emission predict light curves with characteristic durations $\gtrsim t_{\rm fall}$. Because the characteristic fallback time generally exceeds a couple weeks, even for a relatively low value of $M_{\rm c} = 10^{7.5}M_{\odot}$, this shows that optical surveys such as ZTF or LSST, with planned cadences of several days, should (modulo selection criteria) be limited by flux rather than cadence in TDE discovery.
The distribution of Eddington timescales $t_{\rm edd}$ also depends only weakly on the occupation fraction; other than setting the overall normalization of the distribution, $f_{\rm occ}$ mainly determines the steepness of the cutoff at large $t_{\rm edd}$. The value of $t_{\rm edd} \approx 500$ days inferred by the time at which the beamed X-ray emission shut off following the jetted TDE {\it Swift} J1644+57 (e.g., \citealt{Zauderer+13}; \citealt{Tchekhovskoy+13}; \citealt{Kawashima+13}) appears in line with theoretical expectations. The distribution of (Eddington-limited) energy radiated, $E_{\rm rad}$, is more sensitive to $f_{\rm occ}$, but as discussed in \S \ref{sec:observables}, it is challenging to measure anything but lower limits on this quantity.
\begin{figure*}
\centering
\begin{tabular}{cc}
\includegraphics[width=85mm]{NDotMDot.pdf} &
\includegraphics[width=85mm]{NDotTFall.pdf} \\
\includegraphics[width=85mm]{NDotTEdd.pdf} &
\includegraphics[width=85mm]{NDotERad.pdf}
\end{tabular}
\caption{The volumetric TDE rate, weighted by different potentially observable quantities, shown for different assumptions about the SMBH occupation fraction $f_{\rm occ}$, with the line styles and colors the same as in Fig.~\ref{fig:fOcc}. Observable quantities shown include the peak fallback rate $\dot{M}_{\rm peak}/\dot{M}_{\rm Edd}$ (Eq.~\ref{eq:Mdotpeak}; {\it top left}); characteristic fallback time $t_{\rm fall}$ (Eq.~\ref{eq:tfb}; {\it top right}); Eddington timescale $t_{\rm Edd}$ (Eq.~\ref{eq:tedd}; {\it bottom left}); total Eddington-limited radiated energy $E_{\rm rad}$ (Eq.~\ref{eq:ERad}; {\it bottom right}).}
\label{fig:NDotObs}
\end{figure*}
\subsection{Detection Rate of TDEs by Optical Surveys}
\label{sec:detection}
The population of TDEs that will be selected by optical surveys depends sensitively on which emission mechanism dominates ($\S\ref{sec:opticalmodels}$; Appendix \ref{sec:optical}). We calculate the detection rate of TDEs, $\dot{N}_{\rm obs}$, using a simple flux threshold criterion, as is motivated by the generally long duration of TDE flares relative to the planned cadence of upcoming surveys. Our results are normalized to those detected by an all-sky survey with a {\it g}-band limiting magnitude of $g_{\rm lim} = 19$, in order to represent the approximate sensitivity of the planned survey by ZTF (e.g.~\citealt{Rau+09}; \citealt{Kulkarni12}). Although the $5\sigma$ limiting magnitude of PTF is formally $\sim 21$, it is hard to determine whether a detected optical transient is in fact a TDE without high signal to noise and the ability to resolve the light curve for several epochs away from peak. All three TDE flares discovered by iPTF possess peak {\it g}-band magnitudes $g \approx 19$ (\citealt{Arcavi+14}), motivating our choice of this limit. Absolute rates can be readily scaled to other limiting magnitudes $g$ according to $\dot{N}_{\rm obs} \propto f_{\rm sky} \times 3.95^{(g-19)}$, where $f_{\rm sky}$ is the fraction of the sky covered (e.g., 20$\%$ for PTF). We neglect cosmological corrections to the light curves, as well as a possible evolution in the TDE rate with redshift, because most detected events occur at $z < 1$. These assumptions are approximately correct for PTF and ZTF, but may not be justified for the brightest emission mechanisms, when applied to LSST.
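For reference, the magnitude scaling quoted above can be encapsulated as follows (a convenience sketch anchored to the all-sky, $g_{\rm lim}=19$ normalization):
\begin{verbatim}
def scale_detection_rate(N_allsky_19, g_lim, f_sky=1.0):
    # N_obs scales as f_sky * 3.95**(g_lim - 19) times the fiducial rate
    return f_sky * N_allsky_19 * 3.95**(g_lim - 19.0)
\end{verbatim}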
Figure \ref{fig:NDotObs2} shows the detected TDE distribution with respect to SMBH mass, assuming different models for the optical emission (and setting $f_{\rm sky}=1$). Thermal emission from the spreading disk represents the dimmest mechanism we consider, and the resulting detection rates ({\it upper left panel}) are correspondingly low ($\sim 0.1-3~{\rm yr}^{-1}$ for an all-sky survey). Since the emission is Eddington limited in this scenario, the SMBH distribution measured by such a survey depends only weakly on the SMBH occupation fraction, with the total number of events ranging from $0.2 - 0.9~{\rm yr}^{-1}$ as the turn-over mass of the occupation fraction decreases from $M_{\rm c} = 10^{8.5}$ to $10^{7.5}M_{\odot}$.
\begin{figure*}
\centering
\begin{tabular}{cc}
\includegraphics[width=85mm]{NDotSMDisk.pdf} &
\includegraphics[width=85mm]{NDotJet.pdf} \\
\includegraphics[width=85mm]{NDotSE.pdf} &
\includegraphics[width=85mm]{NDotRL.pdf}
\end{tabular}
\caption{Observed rates of TDEs by an all-sky optical survey of limiting magnitude $g = 19$ per unit log SMBH mass, shown for different SMBH occupation fractions (colors and line styles are the same as in Fig.~\ref{fig:fOcc}) and for different models of the TDE optical emission mechanism (different panels). Total rates (integrated over all SMBH masses) are marked next to each line. Emission models shown include thermal emission from the spreading disk ({\it upper left}; $\S\ref{sec:thermal}$), synchrotron radiation from an off-axis jet (assumed to accompany 1 per cent of all TDEs; {\it upper right}; $\S\ref{sec:jet}$), super-Eddington outflows ({\it lower left}; $\S\ref{sec:SE}$), and reprocessing by an optically thick layer ({\it lower right}; \S\ref{sec:reprocessing}). Note that the off-axis jet and super-Eddington outflows strongly differentiate between SMBH occupation models, while the Eddington-limited models (spreading disks and reprocessing layers) do not.}
\label{fig:NDotObs2}
\end{figure*}
The other scenarios result in more optimistic TDE rates. Both super-Eddington outflows and off-axis jets predict a bottom-heavy SMBH mass distribution among detected events. In these scenarios, the total observed TDE rate depends sensitively on the occupation fraction. If super-Eddington outflows (synchrotron jets) are the dominant optical emission mechanism, a SMBH mass cutoff of $M_{\rm c}=10^{7.5}M_\odot$ yields 6.6 (14) detections per year, compared to a much smaller 0.14 (0.5) per year if $M_{\rm c}=10^{8.5}M_\odot$. In both of these scenarios, case E ($f_{\rm occ}=1$) produces $\dot{N}_{\rm obs}\sim1\times10^{3}~{\rm yr}^{-1}$.
By contrast, the reprocessing layer model ($\S\ref{sec:reprocessing}$) is Eddington limited, cutting off detections at low $M_\bullet$ and producing a sharp peak in $\dot{N}_{\rm obs}(M_\bullet)$ near $M_{\bullet} \sim 10^{7}M_{\odot}$, with a total rate $\sim 10^3$ yr$^{-1}$ that is much less sensitive to the low-$M_{\bullet}$ occupation fraction. Although this model provides the closest match to observed peak luminosities of TDE candidates, the predicted detection rates (after accounting for PTF's limited sky coverage) are a factor $\sim 10^{2}$ higher than what was actually found with PTF \citep{Arcavi+14}\footnote{We have conservatively set the reprocessing layer efficiency to $\epsilon_{\rm opt}=0.03$, which is already slightly low compared to the observed peak luminosities of some TDEs in Fig. \ref{fig:LPeak}.}. As we will discuss in the subsequent section, this overprediction could be alleviated if only a small fraction, $\sim 1-10\%$, of all TDEs possess a reprocessing layer.
\section{Discussion}
\label{sec:discussion}
We have seen that volumetric tidal disruption event rates are fairly sensitive to the bottom end of the SMBH mass function. A flux-limited TDE sample will be extremely sensitive to the choice of $f_{\rm occ}$ if optical emission is not Eddington-limited, a point raised in \citet{StrQua09} and quantified for samples of X-ray selected TDE jets in \citet{De-ColleGuillochon+:2012a}. However, the Eddington-limited emission mechanisms we consider give detection rates $\dot{N}_{\rm obs}$ that are highly insensitive to $f_{\rm occ}$. The current sample of TDEs is inhomogeneous and suffers from many selection effects, but is still informative because of the enormous variance in both $\int \dot{N}_{\rm TDE}(M_\bullet){\rm d}M_\bullet$ and in $\dot{N}_{\rm TDE}(M_\bullet)$ with respect to $f_{\rm occ}(M_{\bullet})$ and choice of optical emission mechanism (Fig. \ref{fig:NDotObs2}).
\subsection{Rate Tension}
We have calculated a per-galaxy TDE rate of $\langle \dot{N}_{\rm TDE}\rangle \sim {\rm few} \times 10^{-4}~{\rm yr}^{-1}$ gal$^{-1}$, which exceeds the best observationally inferred values by at least an order of magnitude. This disagreement is with respect to the flare rate of $\sim 10^{-5}$ yr$^{-1}$ inferred by both X-ray \citep{Donley+02} and optical/UV \citep{Gezari+09} surveys\footnote{Although we do note that one analysis of X-ray TDEs inferred a rate more consistent with our calculations, of $2.3 \times 10^{-4}~{\rm yr}^{-1}~{\rm gal}^{-1}$ \citep{Esquej+08}.}. Although many of these rate estimates are troubled by selection effects, the TDE rates inferred from the flux-limited sample of \citet{vanVelzen&Farrar14} also fall an order of magnitude below our lowest estimates. This discrepancy is also apparent from comparing our direct estimate of the optical flare detection rate (Fig.~\ref{fig:NDotObs2}) to the optically-selected TDE sample. For instance, for the reprocessing emission model tuned to best reproduce the observed light curves, our estimated detection rate of $\sim 100$ per year for PTF ($f_{\rm sky} = 0.2$) greatly exceeds the three accumulated TDE flare candidates reported by PTF over its three year survey (\citealt{Arcavi+14}).
In this paper we have relaxed and updated a number of assumptions used in past theoretical rate calculations \citep{MagTre99, Wang&Merritt04} in an attempt to alleviate this rate discrepancy, but in general this has had little effect, and if anything may have only heightened the tension between theory and observation. Our rate calculations employ a significantly larger galaxy sample than in the past\footnote{Our sample $N=144$ is significantly larger than the $N=29$ galaxy sample of \citet{MagTre99}, or the $N=41$ galaxy sample used in \citet{Wang&Merritt04}.} and we have incorporated updated galaxy scaling relations, but neither of these changes has a significant effect on the volumetric TDE rate. Theoretically calculated TDE rates are, furthermore, relatively unchanged by the use of alternate (non-Nuker) parametrizations for galactic surface brightness profiles, or by including a realistic stellar mass function. We emphasize that the theoretical TDE rates calculated in this paper are in most ways {\it conservative floors on the true TDE rate}, as they neglect alternative relaxational mechanisms, the non-conservation of angular momentum in aspherical potentials, and the potentially enhanced rates of angular momentum diffusion due to stellar-mass black holes ($\S\ref{sec:PDMF}$). The robustness of the tension between predicted and observed TDE rates motivates alternate ways to bring these two into alignment.
Perhaps most conservatively, the observed flare rate could be reduced by environmental or selection effects. Galactic nuclei can suffer from significant dust obscuration, which would reduce the optical flux and the corresponding detection rate. Significant dust extinction was inferred for the jetted TDE {\it Swift J1644+57} (\citealt{Bloom+11}), but the SEDs of the other, thermal TDE flares show little to no evidence of reddening (Cenko, private communication). Dust extinction cannot account for the similar rate tension present in the X-ray selected sample (\citealt{Donley+02}), although photoelectric absorption by large columns of neutral gas could in principle play a similar role. TDE searches must also take care to distinguish actual TDE flares from impostor transients with much higher intrinsic event rates; in particular, AGN variability and nuclear supernovae must be excluded from TDE searches through careful cuts on the candidate sample. For example, the completed PTF survey was strongly biased against TDE detection due to frequent rejection of transients in galactic nuclei (Arcavi, private communication). Cuts such as these, and other factors related to choice of events for followup, make it clear that our ``detectable rates'' predicted in Fig. \ref{fig:NDotObs2} represent upper limits on the TDEs detectable by optical time domain surveys. Although the large future TDE samples of optical transient surveys will resolve many of the questions raised in this section, for now it is likely more useful to compare our volumetric ($\dot{n}_{\rm TDE}$) or per-galaxy ($\langle \dot{N}_{\rm TDE} \rangle$) rates to smaller, flux-limited samples \citep{vanVelzen&Farrar14}.
If observational selection effects can be reduced in the future and this rate tension still persists, potentially more interesting explanations could exist on the theoretical side. While almost all of our assumptions were conservative (spherical symmetry, two-body relaxation, absence of stellar mass black holes), our assumption of isotropic stellar velocities was not necessarily so. Two-body relaxation calculations assuming isotropic velocities will overestimate the physical TDE rate in a galaxy if the true velocity distribution is significantly anisotropic in a tangentially biased way. This is because the longer angular momentum relaxation times of tangential orbits make them less promising sources for tidal disruption. Conversely, a radial bias in stellar orbits would increase TDE rates even further. From both observational and theoretical perspectives, it is unclear whether galactic nuclei are sufficiently anisotropic (and overwhelmingly in the tangential direction) as to reduce TDE rates by an order of magnitude.
Recent N-body simulations have indicated that the presence of a loss cone will bias orbits towards tangential anisotropy near the SMBH, although this bias is minor at $r_{\rm crit}$ and a radial bias appears at larger radii \citep[Figs. 9, 15]{Zhong+14}. If steady state loss cone dynamics are indeed insufficient to provide the required tangential bias, it could arise from more exotic dynamical processes. For example, the presence of a SMBH binary (and its ``effective loss cone'') will strongly deplete radial orbits in a galactic nucleus, and the anisotropic scar left by such a binary on stellar orbits can persist (and reduce TDE rates) for Gyr \citep{MerWan05}. However, an extrapolation of the fitted curve in \citet{MerWan05}, Fig. 4, would indicate that the TDE rate reduction persists for $t \lesssim 10^9~{\rm yr}$ in the small galaxies that dominate $\dot{n}_{\rm TDE}$; binary-induced anisotropy would therefore be most effective at reducing $\dot{n}_{\rm TDE}$ if SMBH binaries often fail to solve the final parsec problem.
A rate discrepancy could also result from current uncertainties in the physical processes that power the observed optical emission from TDEs, in particular given that the effective temperatures $\sim 10^{4}$ K of the current sample of optical/UV flares are much lower than those predicted by simple theoretical models for the accretion disk (e.g., Appendix $\ref{sec:thermal}$). For example, if only 10$\%$ of TDEs possess a reprocessing layer that greatly enhances their optical luminosities relative to the majority of ``unshielded" events, then current flare samples could be dominated by this minority of high luminosity events. Observational inferences of the true event rate would then underestimate it by a factor $\sim 10$.
Although the development of a physically-motivated model for a reprocessing layer is beyond the scope of this work, such a layer might naturally be limited to a minority of TDEs if it requires a high value of the penetration parameter $\beta$. For instance, we estimate that $\sim 10\%$ of all TDEs should occur with $\beta \gtrsim 3$ (Fig.~\ref{fig:SamplePinhole}). TDE debris circularization is not yet well understood, but of the two existing models for this process, both relativistic precession \citep{Hayasa+13} and hydrodynamic compression at pericenter \citep{Guillochon+14} depend sensitively on $\beta$. This scenario is investigated in greater detail in the following subsection.
\subsection{Non-Fiducial Scenarios Limiting Flare Production}
Rates of detectable TDEs could also be reduced if TDEs from the diffusive regime of relaxation are unable to produce bright flares, instead only shedding small amounts of mass each pericenter passage as they drift towards lower angular momentum orbits. A more detailed analysis of angular momentum diffusion suggests that the number of pericenter passages between the onset of partial disruptions and a final, full TDE is $\sim$few for $q \gtrsim 0.3$ \citep[Figs. 4.1, 4.2]{Strubb11}. If we repeat our analysis and only count TDEs from the $q>0.3$ regime as observable, the mean TDE rate in our sample is reduced to $55\%$ of its fiducial value: not enough to explain the rate tension between theory and observation. The differences between pinhole and diffusive TDEs may even worsen the rate discrepancy. In many observed optically-bright TDEs, the energy release appears to be a small fraction of $0.1M_\odot c^2$, consistent with a partial disruption (Fig. \ref{fig:ERad}, see also \citealt{Campan+15}). If partial disruptions power a fraction of the existing TDE sample, then disruptions from the diffusive regime may generate many visible flares per star, exacerbating the rate discrepancy.
\begin{figure}
\includegraphics[width=85mm]{ratesCircularization.pdf}
\caption{The same as Fig. \ref{fig:NDot} (volumetric rate of TDEs, as a function of $M_{\bullet}$), but in a non-fiducial model where luminous flares are only generated by disruptions with relativistic ($R_{\rm p} < 12R_{\rm g}$) pericenters, as is motivated by \citet{Shioka+15, Hayasa+15}. Line styles are the same as in previous figures, and the colored numbers correspond to integrated volumetric TDE rates in each scenario for $f_{\rm occ}$. The two more conservative scenarios do not see large decreases in their integrated TDE rates, but the other three do.}
\label{fig:ratesCirc}
\end{figure}
A more extreme version of the above non-fiducial scenario is to postulate that luminous flares can only be generated assuming rapid circularization of debris streams\footnote{We are grateful to the anonymous referee for suggesting this to us.}, assuming that this circularization efficiency depends strongly on $R_{\rm p}/R_{\rm g}$. The question of debris circularization in TDEs is very much an open one, as the dynamic range of the problem is too extensive to have been simulated from first principles in realistic TDEs. Existing circularization simulations cheat by using one of two different non-physical limits to reduce the spatial range covered by debris streams: either (1) tidal disruption of stars on parabolic orbits by IMBHs \citep{Guillochon+14, Shioka+15}, or (2) tidal disruption of stars on eccentric orbits by SMBHs \citep{Hayasa+13, Hayasa+15, Bonner+15}. Of the subset of these simulations that incorporate relativistic precession, efficient circularization is seen in most of the simulations of \citet{Hayasa+15, Bonner+15} but not in \citet{Shioka+15}. The key difference is the location of the self-intersection point $R_{\rm si}$ where streams collide to dissipate kinetic energy in shocks \citep{GuiRam15, Dai+15, Stone+15}; if $R_{\rm si} \gg R_{\rm p}$, it will take many self-intersections to thermalize a large fraction of the excess specific energy, $\approx GM_\bullet / (2R_{\rm p})$.
The simulations of \citet{Shioka+15}, which see quite inefficient circularization, have $R_{\rm si} \approx 1000R_{\rm g}$, while efficient circularization is seen in all of the \citet{Hayasa+15} simulations with an adiabatic gas equation of state and $R_{\rm si} \lesssim 250 R_{\rm g}$. This self-intersection radius corresponds to $R_{\rm p} \approx 12.5R_{\rm g}$. The circularization efficiency of Models 1-2 in \citet{Hayasa+15} is more ambiguous (these models have $R_{\rm si} \approx 420, 890 R_{\rm g}$), so we neglect them in this discussion.
As a conservative implementation of this idea, we postulate that luminous flares are not produced unless $R_{\rm p} < 12 R_{\rm g}$, and repeat our fiducial calculations under this assumption. This results in all diffusive-regime TDEs being discarded when $M_\bullet \lesssim 10^7 M_\odot$, and a fraction of pinhole-regime TDEs as well. We show the resultant volumetric TDE rates in Fig. \ref{fig:ratesCirc}. The overall rate tension is not removed by this (rather conservative) assumption: the per-galaxy TDE rates in cases A, B, C, D, and E are reduced to $1.6\times 10^{-4}~{\rm yr}^{-1}$, $2.5\times 10^{-4}~{\rm yr}^{-1}$, $3.4\times 10^{-4}~{\rm yr}^{-1}$, $4.3\times 10^{-4}~{\rm yr}^{-1}$, and $5.8\times 10^{-4}~{\rm yr}^{-1}$, respectively\footnote{These revised rates are $81\%$, $67\%$, $50\%$, $35\%$, and $13\%$, respectively, of their fiducial values.}. However, the distribution of $M_\bullet$ in a volume-complete TDE sample is shifted away from the smallest SMBHs, and is spread more evenly across black hole mass. We note that the relatively low energy releases seen in optically-selected TDE flares suggest that observed flares can be accommodated by the accretion of only a small fraction of the bound mass \citep{Piran+15}, which is why we retain our earlier calculations as fiducial.
Finally, we note two important caveats to the above discussion. The first is the role of SMBH spin in the circularization process. Rapid and misaligned SMBH spin induces nodal precession in tidal debris streams that winds them into different orbital planes and can retard circularization. In the simulations of \citet{Hayasa+15}, no delay occurs in the adiabatic equation of state limit that is likely relevant for most TDEs\footnote{Delays do occur for high values of SMBH spin when the gas follows a polytropic equation of state (corresponding to efficient cooling), but order of magnitude photon diffusion timescale considerations suggest that the adiabatic limit is more physical \citep{Hayasa+15}.}; this is because heating of the debris streams increases their thickness to a size greater than the spin-induced ``gap'' at the nominal self-intersection point. On the other hand, the semi-analytic model of \citet{GuiRam15} finds a larger role for spin-induced circularization delays, primarily due to the different analytic model employed for stream thickness. The second caveat is the role of magnetohydrodynamic stresses in debris circularization. These forces have not been included in any circularization simulation to date, and were initially suggested as an extra source of dissipation to aid the circularization process \citep{Guillochon+14}, but more recent work indicates that they may also be able to hinder circularization by mediating angular momentum transport \citep{Svirsk+15}. Ultimately, the complex dynamics of TDE circularization and emission mechanisms are beyond the scope of this work, but Fig. \ref{fig:ratesCirc} provides a preliminary examination of how rates would change if highly relativistic pericenters are required to circularize debris and produce a flare.
\subsection{Black Hole Mass Distribution}
Once observational selection effects are mitigated, and our understanding of the physics of TDE emission improved, the demographics of TDE flares may prove to be a powerful probe of the occupation fraction of SMBHs in low mass galaxies. Alternatively, given prior observational constraints on the SMBH occupation fraction, the distribution of TDE flares with SMBH mass $M_{\bullet}$ could inform our understanding of the physical processes that produce TDE emission.
Figure \ref{fig:NDotObserved} shows the $M_{\bullet}$ distribution of the current TDE sample, calculated using 8 X-ray and $\gamma$-ray selected tidal disruption flares, and 11 flares found in the optical or UV (see caption). This sample is composed of all strong TDE candidates with available SMBH masses based on observed properties of the host galaxy (e.g. $M_{\rm bulge}, \sigma$) in combination with known scaling relations. SMBH mass estimates based on theoretical fits to the observed light curves are not included, given the many uncertainties in the emission process. The small and inhomogeneous sample used to create Figure \ref{fig:NDotObserved} is likely hampered by selection effects, thus warranting caution in its interpretation. Nevertheless, given the huge range of predictions shown in Figure \ref{fig:NDotObs2}, even our preliminary version of this plot has significant utility for constraining uncertainties in the SMBH occupation fraction $f_{\rm occ}(M_\bullet)$, and in the nature of TDE optical emission.
The bias towards moderately massive SMBHs visible in Figure \ref{fig:NDotObserved} is incompatible with super-Eddington outflows or blastwaves from decelerating jets, even in our case A scenario where the SMBH occupation fraction cuts off at very high values $M_\bullet \approx 10^{6.5}M_\odot$. This suggests that Fig. \ref{fig:NDotObserved} can be interpreted as evidence that an Eddington-limited optical emission mechanism dominates the current TDE flare sample: a highly nontrivial conclusion given the enormously super-Eddington typical values of $\dot{M}_{\rm peak}$ (Fig. \ref{fig:NDotObs}) and recent numerical results on the viability of super-Eddington luminosities \citep{Sadows+14, Jiang+14}. Alternatively, this could be seen as evidence for the non-fiducial model proposed in the prior subsection, where visible flares are only produced in events with quite relativistic pericenters ($R_{\rm p} \lesssim 12 R_{\rm g}$).
A selection effect that could influence this interpretation is the possible existence of systematic biases against detecting TDE flares in particularly low mass galaxies, for instance if such host galaxies were too dim to detect, or if the angular resolution of the telescope was insufficient to constrain the location of the TDE to the center of the galaxy. However, the recent discovery of a potential TDE in an intermediate mass galaxy (\citealt{Donato+14}; \citealt{Maksym+13}) shows that TDEs can in fact be associated with low mass hosts in practice. Future observational efforts will hopefully improve upon our Fig. \ref{fig:NDotObserved} by expanding the observational sample and by combining data from different surveys in a more self-consistent way.
\begin{figure*}
\includegraphics[width=170mm]{NDotObserved.pdf}
\caption{The SMBH mass $M_\bullet$ distribution of observed TDE flares, including 11 optical/UV-selected events ({\it blue dashed}), 8 X-ray selected events ({\it purple dotted}), or the full sample ({\it black solid}). Estimates of the SMBH mass $M_\bullet^{\rm est}$ for each flare are determined using galaxy scaling relations. $M_\bullet - \sigma$ is used when $\sigma$ is available, but other correlations between bulge luminosity and SMBH mass are used otherwise. Measurement errors are combined in quadrature with the intrinsic scatter of the galaxy scaling relations to produce error bars for each $M_\bullet^{\rm est}$; each individual TDE is modeled as a Gaussian probability density function $P(\log_{10}M_\bullet)=\exp(-(\log_{10}M_\bullet - \log_{10}M_\bullet^{\rm est})^2/(2s^2))/(s \sqrt{2 \pi})$, where the standard deviation is approximated as $s = \log_{10}M_\bullet^{\rm up} - \log_{10}M_\bullet^{\rm low}$. The X-ray/$\gamma$-ray sample consists of NGC5905 \citep{Bade+96}, RXJ1420 \citep{Greine+00}, SDSS J1323 \citep{Esquej+07}, SDSS J1311 \citep{Maksym+10}, Swift J1644 \citep{Bloom+11}, SDSS J1201 \citep{Saxton+12}, WINGS J1348 \citep{Maksym+13}, and GRB060218 \citep{Shcher+13}. The optical/UV sample consists of D1-9, D3-13 \citep{Gezari+06, Gezari+08}, D23H-1 \citep{Gezari+09}, VV-1, VV-2 \citep{vanVelzen+11}, PS1-10jh \citep{Gezari+12}, PS1-11af \citep{Chorno+14}, ASASSN-14ae \citep{Holoie+14}, and PTF09ge, PTF09axc, PTF09djl \citep{Arcavi+14}. }
\label{fig:NDotObserved}
\end{figure*}
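For concreteness, the Gaussian-mixture construction described in the caption of Fig.~\ref{fig:NDotObserved} can be summarized by the following minimal sketch (an illustration only; the event tuples below are hypothetical placeholders rather than the measured values, and only the mixture procedure follows the caption):
\begin{verbatim}
# Minimal sketch (illustrative): building the observed log10(M_BH)
# distribution as a sum of per-event Gaussians, as described in the
# caption above.  The tuples are hypothetical placeholders
# (log10 M_est, log10 M_up, log10 M_low), not real data.
import numpy as np

events = [(6.8, 7.3, 6.3), (7.4, 7.8, 7.0), (6.2, 6.9, 5.6)]

logM = np.linspace(5.0, 9.0, 400)
pdf = np.zeros_like(logM)
for m_est, m_up, m_low in events:
    sdev = m_up - m_low  # std. dev. as approximated in the caption
    pdf += (np.exp(-(logM - m_est)**2 / (2*sdev**2))
            / (sdev*np.sqrt(2*np.pi)))
pdf /= len(events)       # normalize the mixture to unit probability
\end{verbatim}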
\subsection{E+A galaxies}
\citet{Arcavi+14} point out that all three PTF candidate TDEs in their sample occur in E+A galaxies (\citealt{Dressler&Gunn83}), which are known to be post-starburst galaxies produced by the relatively recent ($\lesssim 1$ Gyr) major merger of two galaxies \citep{Yang+08, Snyder+11}. Because E+A galaxies represent only a fraction $\sim 10^{-3}$ of those in the low redshift Universe \citep{Goto07, Snyder+11}, if the TDE rate in E+A galaxies was the same as that in normal galaxies, then the odds of
seeing $\approx3/19$ of all TDEs in E+As would be only $\sim 10^{-2}$.
This apparent coincidence led \citet{Arcavi+14} to suggest that the apparent rate enhancement was due to a recent SMBH merger, which can increase the TDE rate by producing a SMBH binary. The SMBHs are brought together by dynamical friction, which can happen in less than a Hubble time so long as their mass ratio $q \lesssim 10$, as is favored for E+A progenitor mergers. After the binary forms, it will quickly harden through three-body interactions that eject nearby stars. Eventually, depletion of the stellar population stalls the binary hardening at $\sim {\rm pc}$ scales, giving rise to the well-known ``final parsec problem,'' but before this there is a phase where TDE rates are enhanced by many orders of magnitude, up to $\dot{N}_{\rm TDE} \sim 0.1~{\rm yr}^{-1}$ \citep{Ivanov+05, Chen+09, Chen+11}. However, these rate enhancements are predicted to be short lived, lasting only $\sim 10^{5}-10^{6}$ years, so that the fraction of all TDEs coming from hardening SMBH binaries is $\sim 3\%$ \citep{WegBod11}.
If we optimistically conjecture that E+A galaxies are the hosts of all hardening SMBH binaries, then the $\sim 10\%$ of TDEs associated with E+As is not too different from theoretical predictions. However, this conjecture is difficult to reconcile with the finding of \citet{Chen+11} that the greatest enhancements to TDE rates come from $10\lesssim q \lesssim 100$. Such mass ratios are disfavored as the origins of E+As, and furthermore will have difficulty forming binaries within 1 Gyr due to their long dynamical friction timescales \citep{Taffoni+03}.
We propose an alternative hypothesis: that the post-starburst nature of E+As implies these galaxies have anomalously large central stellar densities and are able to produce huge TDE rates ($\dot{N}_{\rm TDE} \sim 10^{-2}~{\rm yr}^{-1}$ would be required to match the observed prevalence of E+As in the TDE sample) through enhanced two-body relaxation. The younger stellar population may also assist in increasing the E+A TDE rates, but only by a factor $\approx 2$ for ages $\sim 100~{\rm Myr}$ (\S \ref{sec:PDMF}). Since per-galaxy TDE rates go roughly as $\dot{N}_{\rm TDE} \propto \rho(r_{\rm crit})^2$, the stellar populations in E+A galaxies (within $r\lesssim r_{\rm crit}$) would need to be roughly an order of magnitude denser than those in normal galaxies. Because $r_{\rm crit} \sim r_{\rm infl}$, this does not put unreasonable mass requirements on star formation during the starburst that preceded the birth of the E+A, but it does require a significant amount of star formation to be concentrated within the critical radius. Whether this is realized in practice is unclear.
\section{Conclusions}
\label{sec:conclusions}
We have calculated the rates of stellar tidal disruption events due to two-body relaxation in galactic nuclei, and explored the implications for current and future optical TDE flare samples. Motivated by the substantial tension between theoretical (high) and observed (low) TDE rates, we have relaxed, updated, or improved upon several assumptions that go into theoretical rate calculations; however, the novel components of our paper have either maintained or heightened the discrepancy between theory and observation. We stress that our neglect of alternate relaxational mechanisms, our assumption of spherical symmetry, the neglect of nuclear star clusters in \citet{Lauer+07a}, and our remnant-free stellar mass function have set a conservative floor on the true TDE rate in our sample of galaxies. Of all our assumptions, the only one which {\it may} cause an overestimate of TDE rates in individual galaxies is that of velocity isotropy; strongly anisotropic velocities could either increase or decrease the true TDE rate. Our major results are summarized as follows.
\begin{itemize}
\item The Nuker surface brightness profile $I_{\rm N}(R)$ is a robust choice of parametrization for use in TDE rate calculations. Adopting the alternate core-Sersic parametrization produces little change in per galaxy TDE rates $\dot{N}_{\rm TDE}$. Use of the Sersic parametrization will modestly decrease TDE rates, but this surface brightness profile was not designed to fit the innermost regions of galactic nuclei.
\item Adoption of a realistic stellar PDMF will modestly increase $\dot{N}_{\rm TDE}$ relative to a calculation where all stars are taken to have mass $M_\star = M_\odot$. Our fiducial choice of the Kroupa IMF increases the total TDE rate by a factor $\approx 1.5$. Incorporating stellar remnants into the PDMF can produce a significantly greater increase in $\dot{N}_{\rm TDE}$, but this may be prevented by mass segregation of stellar mass black holes in the small cusp galaxies that dominate the volumetric TDE rate. In systems where mass segregation does not occur, TDE rates will inversely correlate with nuclear metallicity.
\item A significant fraction ($\sim 30\%$) of TDEs come from the ``pinhole'' regime of relaxation, and can access large values of the penetration parameter $\beta$. In this regime, roughly half of TDEs are partial disruptions.
\item The volumetric rate of tidal disruption events is sensitive to the uncertain occupation fraction of low-mass SMBHs, and a volume-complete sample will share that sensitivity. However, if optical emission from TDEs is Eddington-limited, then flux-limited TDE samples will be insensitive to $f_{\rm occ}(M_\bullet)$. Flux-limited samples of TDEs found through super-Eddington emission mechanisms will be extremely sensitive to $f_{\rm occ}(M_\bullet)$. Sensitivity to $f_{\rm occ}$ will also be reduced if, speculatively, luminous flares require rapid circularization, and rapid circularization requires relativistic pericenters ($R_{\rm p} \lesssim 12R_{\rm g}$).
\item The current sample of observed TDEs is small and inhomogeneous, but nonetheless suggests that super-Eddington mechanisms {\it do not} dominate the optical emission of most TDEs. Of the Eddington-limited emission mechanisms we consider here, the direct emission from viscously spreading TDE disks is too faint to produce observed tidal disruption flares. Along with other lines of evidence, this suggests that some sort of reprocessing layer downgrades hard bolometric emission from the inner disk to softer wavelengths, but the nature of this reprocessing layer is as of yet unclear.
\item An even stronger rate tension between optically-selected TDE samples and our predictions for the size of these samples suggests that reprocessing layers may exist for only a small fraction of TDEs. The only remaining theoretical avenues for reducing the discrepancy between theoretical and observed TDE rates appear to be this hypothesis (strong bimodality in optical emission mechanisms), or alternatively strong and predominantly tangential velocity anisotropies in galactic nuclei.
\end{itemize}
More technical results of interest can be found in our appendices; in particular, we have derived for the first time the light curves and peak luminosities of viscously spreading TDE disks, have corrected past models of super-Eddington outflows in light of new models for $\Delta \epsilon$, and have also found new closed-form analytic expressions for $\bar{\mu}(\epsilon)$ and $\mathcal{F}(\epsilon)$ in regions close to a SMBH.
In closing, we note that the reliability of our results is limited by two extrapolations we are required to make. The first concerns the critical radius $r_{\rm crit}$ from which most TDEs are sourced: while this is marginally resolved or better for galaxies with $M_\bullet \gtrsim 10^7 M_\odot$ in the \citet{Lauer+07a} sample, it is unresolved in smaller galaxies which have an outsize effect on the TDE rate. If small galaxies preferentially turn over to shallow density profiles at small radii $\sim r_{\rm infl}$ (in a way that large galaxies do not), our results may overestimate the true TDE rates. We reiterate, however, that the opposite is more likely true: the Nuker fits we employ explicitly ignore excess inner light characteristic of nuclear star clusters, which are common in small galaxies. Our second extrapolation is that our power law TDE rate fit, Eq. \eqref{eq:bestfit}, extends from the smallest galaxies in our sample ($M_\bullet \sim 10^6 M_\odot$) down to even lower masses, where the galaxy scaling relations we employ (e.g. $M_\bullet -\sigma$) are untested\footnote{We note here that in calibrating Eq. \eqref{eq:bestfit}, we explicitly excluded two very small galaxies due to concerns about our ability to estimate their SMBH masses.}. Our two most conservative choices of occupation fraction $f_{\rm occ}$ do not significantly extrapolate in this way and should be regarded as more reliable than our three more liberal $f_{\rm occ}$ scenarios.
Overall, TDEs offer a unique and promising probe of the bottom end of the SMBH mass function. At present we are limited by both the small size of today's TDE sample and our limited understanding of optical emission mechanisms in these events. Amelioration of these problems offers important avenues for future observational and theoretical work, respectively, and improvements in both will allow us to fully realize the scientific potential of these dramatic events.
\section*{Acknowledgments}
We thank Chris Belczynski for providing tabulated data to map between ZAMS stellar masses and final compact remnant masses, and Tod Lauer for assistance in interpreting the Nuker data sets. We thank Iair Arcavi and Suvi Gezari for providing useful data on the peak luminosities of observed TDEs. We thank Brad Cenko, Jacqueline van Gorkum, Morgan MacLeod, Jeremiah Ostriker, Greg Snyder, Linda Strubbe, and Sjoert van Velzen for helpful conversations. Finally, we also thank the anonymous referee for many useful suggestions. BDM gratefully acknowledges support from the NSF grant AST-1410950 and the Alfred P. Sloan Foundation. This work was supported in part by the National Science Foundation under Grant No. PHYS-1066293 and the hospitality of the Aspen Center for Physics.
\bibliographystyle{mn2e}
\section{Introduction}
Quantum phase transition (QPT), which is closely associated with
the fundamental changes that can occur in the macroscopic nature
of matter at zero temperature due to small variations in a given
external parameter, is certainly one of the major interests in
condensed matter physics. The past decade has seen a
substantial rejuvenation of interest in the study of quantum phase
transitions, driven by experiments on the cuprate superconductors,
the heavy fermion materials, the insulator-superfluid transition in
ultracold atoms, organic conductors and related
compounds\cite{Sachdev,Wen}. Quantum phase transitions are
characterized by the dramatic changes in the ground state
properties of a system driven by quantum fluctuations.
Traditionally, phases and phase transitions are described by the
Ginzburg-Landau symmetry-breaking theory based on order parameters
and long-range correlations. Recently, substantial effort has
been devoted to the analysis of quantum phase transitions from
other intriguing perspectives, such as topological
order\cite{Wen}, quantum entanglement\cite{Osterloh,Gu}, geometric
phases\cite{Carollo,Zhu2006} and some other geometric
quantities\cite{Zanardi0,Zanardi1,Zhou,Venuti,Quan}.
It is well-known that geometric ideas have played an important
role in physics. For example, Minkowski's geometric reformulation
of special relativity by means of a space-time geometry was very
useful in the construction of general relativity by Einstein. In
this paper we will address another example: the study of quantum
phase transitions from the perspective of geometric phase (GP)
factors. Actually, the phase factor of a wave function is the
source of all interference phenomena and one of the most fundamental
concepts in quantum physics. The first considerable progress in
this field was achieved by Aharonov and Bohm in
1959\cite{Aharonov59}. They proposed that the loop integral of the
electromagnetic potentials gives an observed nonintegrable phase
factor in electron interference experiments. By using the
non-Abelian phase factor, Yang reformulated the concept of gauge
fields in an integral formalism in 1974\cite{Yang74}, and then Wu
and Yang showed that the gauge phase factor gives an intrinsic and
complete description of electromagnetism. It neither
underdescribes nor overdescribes it\cite{Wu_Yang}. The recent
considerable interest in this field was motivated by a pioneering
work by Berry in 1984\cite{Berry}, where he discovered that a
geometric phase, in addition to the usual dynamical phase, is
accumulated on the wave function of a quantum system, provided
that the Hamiltonian is cyclic and adiabatic. It was Simon who
first recognized the deep geometric meaning underlying Berry's
phase. He observed that geometric phase is what mathematicians
would call a $U(1)$ holonomy in the parameter space, and the
natural mathematical context for holonomy is the theory of fiber
bundles\cite{Simon}. A further important generalization of Berry's
concept was introduced by Aharonov and Anandan\cite{Aharonov87},
which applies provided that the evolution of the state is cyclic. In addition,
Samuel and Bhandari introduced a more general geometric phase in
the nonadiabatic noncyclic evolution of the system\cite{Samuel}.
Nowadays, applications of Berry phases and their generalizations
\cite{Berry,Simon,Aharonov87,Samuel,Sjoqvist,Zhu2000} can be found
in many physical fields, such as optics, magnetic resonance,
molecular and atomic physics, condensed matter physics and quantum
computation, {\sl
etc}.\cite{Shapere,Li98,Bohm,Thouless,Morpurgo,Zanardi}.
Very recently, investigations of the geometric phase in
many-body systems have revealed a so-called ``criticality of
geometric phase''\cite{Carollo,Zhu2006}, in which the geometric phase
associated with the ground state exhibits universality, or scaling
behavior, around the critical point\cite{Zhu2006}. The close
relation between quantum phase transitions and geometric phases
may be understood from an intuitive view: quantum phase
transitions occur for a parameter region where the energy levels
of the ground state and the excited state cross or have an avoided
crossing, while geometric phase, as a measure of the curvature of
Hilbert space, can reflect the energy structures and then can
capture certain essential features of quantum phase
transitions\cite{Zhu2006}.
A typical example to show the significant connection between
geometric phase and quantum phase transition is the one-dimensional XY
spin chain\cite{Carollo,Zhu2006}. Since the XY spin chain model is
exactly solvable and still presents a rich structure, it has
become a benchmark to test many new concepts. The XY spin chain
model and the geometric phase that corresponds to the quantum
phase transition have been analyzed in detail in
Ref.\cite{Carollo,Zhu2006}. The XY model is parameterized by
$\gamma$ and $\lambda$ (see the definitions below
Eq.(\ref{Hamiltonian})). Two distinct critical regions appear in
parameter space: the segment $(\gamma,\lambda)=(0,(0,1))$ for the
XX chain and the critical line $\lambda_c=1$ for the whole family
of the XY model\cite{Sachdev,Lieb}. It has been shown that
geometric phase can be used to characterize the above two critical
regions\cite{Carollo,Zhu2006,Hamma}. As for the first critical
region, a noncontractible geometric phase
itself\cite{Zhu2006,Hamma} or its difference between the ground
state and the first excited state\cite{Carollo} exists in the XX
chain if and only if the closed evolution path circulates a region
of criticality. There is much more physics in the second critical
region, since second-order quantum phase transitions occur there.
The geometric phase of the ground state has been shown to have
scaling behavior near the critical point of the XY model. In
particular, it has been found that the geometric phase is
non-analytical and its derivative with respect to the field
strength $\lambda$ diverges logarithmically near the critical line
described by $\lambda_c=1$. Together with a logarithmic divergence
of the derivative as a function of system size, the critical
exponents are derived based on the scaling ansatz in the case of
logarithmic divergence\cite{Barber}. Furthermore, universality in
the critical properties of geometric phase for a family of XY
models is verified. These results show that the key ingredients
of quantum criticality are present in the ground-state geometric
phase, and can therefore serve as indicators of
quantum criticality\cite{Zhu2006}.
\begin{figure}[tbph]
\centering
\includegraphics[height=5cm]{fig1.eps}
\caption{Schematic diagrams of the physical patterns reviewed in
the paper. (a) Pattern I: N spins in a one-dimensional chain form the
whole system. The geometric phase of the whole N-spin system has a
close relation with the quantum phase transitions of that system.
(b) Pattern II: N spins are arranged in a circle and a test qubit
in the center is homogeneously coupled to all N spins in
the ring. The geometric phase of the test qubit may be used to
locate the criticality of the quantum phase transition exhibited in the
N-spin system. Depending on the couplings between the spins, the
N-spin chain (ring) in (a) and (b) can be classified as the XY
model, the Dicke model or the Lipkin-Meshkov-Glick model. All
three models exhibit quantum phase transitions whose
features can be captured by the geometric phases or some other
geometric quantities.} \label{Fig1}
\end{figure}
Motivated by these results in the XY model\cite{Carollo,Zhu2006},
the criticality of geometric phase for other many-body models has been
investigated\cite{Plastina,Chen,Cui,Yi,Yuan,Cozzini}. Roughly
speaking, there are two patterns (see Fig. 1) in the literature for
investigating the criticality of geometric phase in many-body
systems: (i) Pattern I investigates the relation
between the geometric phase of the whole many-body system and the
system's quantum phase transition. As illustrated in Fig. 1(a), the
geometric phase of the whole N-spin system is calculated and its
scaling features in the vicinity of critical points are
discussed\cite{Carollo,Zhu2006,Hamma,Plastina,Chen,Cui}. (ii)
Pattern II is concerned with the geometric phase of a test qubit,
as shown in Fig. 1(b): N spins are arranged in a circle and a
test qubit in the center is homogeneously coupled to all
N spins in the ring\cite{Quan,Yi,Yuan}. The geometric phase of the
test qubit may be used to locate the criticality of the quantum phase
transition exhibited in the N-spin system\cite{Yi,Yuan}.
Depending on the couplings between the spins, the N-spin chain (ring)
in (a) and (b) can be classified as the XY model, the Dicke model
or the Lipkin-Meshkov-Glick model. All three models exhibit
quantum phase transitions, whose features can be captured by the
geometric phases in both patterns I and II.
Furthermore, the study of QPTs by using other geometric
quantities, such as quantum overlap (quantum
fidelity)\cite{Zanardi0}, the Riemannian tensor\cite{Zanardi1}
etc., has been put forward and fruitful results have been reported
in the literature. In particular, the GP is related to the imaginary part of the quantum
geometric tensor and the quantum fidelity to its real part; therefore a
unified theory for studying QPTs from the perspective of the quantum
geometric tensor has been developed\cite{Venuti}.
In this paper we will review some aspects of the theoretical
understanding that has emerged over the past several years of
the close relation between GPs and QPTs. In Section
2, we present the connection between the Berry curvature and QPTs.
Section 3 describes the detailed relation between QPTs and GPs in
Pattern I. Section 4 discusses the results in Pattern II.
Finally, Section 5 presents some discussion and perspectives on the
topics reviewed in this paper; in particular, we address recent
advances in the connection between some other geometric quantities and
QPTs.
\section{Berry curvature and quantum phase transitions}
Let us first address the close relation between quantum phase
transitions and geometric phases from an intuitive view. Consider
a generic many-body system described by the Hamiltonian $H(\eta)$
with $\eta$ a dimensionless coupling constant. For any reasonable
$\eta$, all observable properties of the ground state of $H$ will
vary smoothly as $\eta$ is varied. However, there may be special
points denoted as $\eta_c$, where there is a non-analyticity in
some properties of the ground state at zero temperature; such an $\eta_c$
is identified as the position of a quantum phase transition.
Non-analytical behavior generally occurs at level crossings or
avoided level crossings\cite{Sachdev}. Surprisingly, the geometric
phase is able to capture such kinds of level structures and is
therefore expected to signal the presence of quantum phase
transitions. To address this relation in greater detail, we review
geometric phases in a generic many-body system where the
Hamiltonian can be changed by varying the parameters ${\bf R}$ on
which it depends. The state $|\psi (t)\rangle$ of the system
evolves according to the Schr\"odinger equation
\begin{equation}
\label{Schrodinger} i\hbar
\partial_t |\psi (t)\rangle=H ({\bf R} (t))|\psi(t)\rangle.
\end{equation}
At any instant, the natural basis consists of the eigenstates
$|n({\bf R})\rangle$ of $H({\bf R})$ for ${\bf R}={\bf R}(t)$,
that satisfy $H ({\bf R})|n ({\bf R})\rangle=E_n ({\bf R})|n ({\bf
R})\rangle$ with energy $E_n ({\bf R})$ $(n=1,2,3\cdots)$. Berry
showed that the GP for a specific eigenstate, such as the ground
state ($|g\rangle=|1\rangle$) of the many-body system considered
here, adiabatically undergoing a closed path in parameter space
denoted by $C$, is given by\cite{Berry}
\begin{equation}\beta_g
(C)=-\int\int_C V_g ({\bf R})\cdot d{\bf S},
\end{equation} where
$d{\bf S}$ denotes the area element in ${\bf R}$ space and $V_g ({\bf
R})$ is the Berry curvature given by
\begin{equation}
\label{Curvarure} V_g ({\bf R})=Im\sum_{n \not= g}\frac{\langle
g|\nabla_{\bf R} H|n\rangle \times \langle n|\nabla_{\bf R}
H|g\rangle}{(E_n-E_g)^2}.
\end{equation}
The energy denominators in Eq.(\ref{Curvarure}) show that the
Berry curvature usually diverges at the point of parameter space
where energy levels cross and may have maximum values at
avoided level crossings. Thus level crossings or avoided level
crossings (see Fig. 2), the two specific level structures related
to quantum phase transitions, are reflected in the geometry of the
Hilbert space of the system and can be captured by the Berry
curvature of the ground state. However, although the Berry
curvature is gauge invariant and is therefore an observable
quantity, no feasible experimental setup has been proposed to
directly observe it. On the other hand, the area integral of the Berry
curvature, i.e., the geometric phase, may be measured in
interference experiments. Therefore, rather than the Berry
curvature, hereafter we will focus on the relation between
geometric phase and quantum phase transition, so that the
proposed relation between them may be experimentally tested.
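As a concrete illustration of Eq.~(\ref{Curvarure}), the following minimal numerical sketch (our own, for illustration only; the field values are arbitrary) evaluates the ground-state Berry curvature of the two-level Hamiltonian $H=\mathbf{B}\cdot\boldsymbol{\sigma}$ directly from the sum over excited states, and compares it with the monopole field, whose magnitude $1/(2|\mathbf{B}|^2)$ diverges at the level crossing $\mathbf{B}=0$:
\begin{verbatim}
# Minimal sketch (illustrative): Berry curvature of the ground state
# of H = B . sigma, evaluated from the sum over excited states in
# Eq. (3), with the sign convention implied by Eqs. (2)-(3).
import numpy as np

sig = [np.array([[0, 1], [1, 0]], complex),     # sigma_x
       np.array([[0, -1j], [1j, 0]], complex),  # sigma_y
       np.array([[1, 0], [0, -1]], complex)]    # sigma_z

def berry_curvature(B):
    H = sum(b * s for b, s in zip(B, sig))
    E, U = np.linalg.eigh(H)            # E[0] <= E[1]
    g, n = U[:, 0], U[:, 1]
    v = np.array([g.conj() @ s @ n for s in sig])  # <g|grad_B H|n>
    return np.imag(np.cross(v, v.conj())) / (E[1] - E[0])**2

B = np.array([0.3, 0.1, 0.2])
print(berry_curvature(B))                # numerical, from Eq. (3)
print(-B / (2 * np.linalg.norm(B)**3))   # analytic monopole field
\end{verbatim}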
\begin{figure}[tbph]
\centering
\includegraphics[height=4cm,width=9cm]{fig2.eps}
\caption{Schematic representation of the energy levels for the
many-body systems. The energy levels of the ground state and the
excited state cross in (a) and have an avoided crossing in (b). On
the one hand, quantum phase transition occurs at level crossings
or avoided level crossings, which represents by the parameter
$\eta_c$; on the other hand, the Berry curvature usually diverges
or may have maximum values at the point of parameter $\eta_c$. }
\end{figure}
\section{Pattern I: QPT and GP of the many-body systems}
In this section we review the close relation between QPTs and GPs
for Pattern I, as shown in Fig. 1(a), where the N-spin chain
can be classified as the XY model, the Dicke model and the
Lipkin-Meshkov-Glick model.
\subsection{The XY spin chain}
Our first example is the one-dimensional XY spin chain investigated in
detail in Ref.~\cite{Zhu2006}. The XY model concerns N spin-1/2
particles (qubits) with nearest neighbor interactions and an
external magnetic field. The Hamiltonian of the XY spin chain has
the following form
\begin{equation}
\label{Hamiltonian} H=-\sum_{j=-M}^M \left
(\frac{1+\gamma}{2}\sigma_j^x\sigma_{j+1}^x+\frac{1-\gamma}{2}\sigma_j^y\sigma_{j+1}^y+\lambda\sigma_j^z
\right), \end{equation} where $\sigma^\mu_j\ (\mu=x,y,z)$ are the
Pauli matrices for the $j$th spin, $\gamma$ represents the
anisotropy in the $x-y$ plane and $\lambda$ is the intensity of
the magnetic field applied in the $z$ direction. We assume
periodic boundary conditions for simplicity and choose $N\
(=2M+1)$ odd to avoid the subtleties connected with the boundary
terms. Nevertheless, the differences with other boundary
conditions and the even $N$ case are of order O(1/N) and thus
negligible in the thermodynamic limit where quantum phase
transitions occur\cite{Lieb,Osterloh}. This XY model encompasses
two other well-known spin models: it turns into the transverse Ising
chain for $\gamma=1$ and the XX (isotropic XY) chain in a
transverse field for $\gamma=0$.
In order to derive the geometric phase of the ground state in this
system, we introduce a new family of Hamiltonians that can be
described by applying a rotation of $\phi$ around the $z$
direction to each spin \cite{Carollo}, {\sl i.e.},
\begin{equation}H_\phi=U_\phi^\dagger H U_\phi,\ \ U_\phi=\prod_{j=-M}^M
\exp(-i\phi\sigma_j^z/2).\end{equation} The critical behavior is
independent of $\phi$ as the spectrum $\Lambda_k$ (see below) of
the system is $\phi$ independent. This class of models can be
diagonalized by means of the Jordan-Wigner transformation that
maps spins to one-dimensional spinless fermions with creation and
annihilation operators $a_j$ and $a_j^\dagger$ via the relations,
$a_j=(\prod_{l<j} \sigma_l^z)\sigma_j^{+}$ with $\sigma_j^{+}=(\sigma_j^x+i\sigma_j^y)/2$
\cite{Lieb,Sachdev}. Due to the (quasi) translational symmetry of
the system we may introduce Fourier transforms of the fermionic
operator described by $d_k=\frac{1}{\sqrt{N}}\sum_j a_j
\exp(-i2\pi jk/N)$ with $k=-M,\cdots,M$. The Hamiltonian $H_\phi$
can be diagonalized by transforming the fermion operators in
momentum space and then using the standard Bogoliubov
transformation. In this way, we obtain the following diagonalized
form of the Hamiltonian,
\begin{equation}H=\sum_k \Lambda_k
(c_k^\dagger c_k-1),\end{equation} where the one-particle
excitation energy is given by
\begin{equation}\Lambda_k=\sqrt{(\lambda-\cos(2\pi
k/N))^2+\gamma^2\sin^2(2\pi k/N)}\end{equation} and $c_k=d_k
\cos\frac{\theta_k}{2}-id_{-k}^\dagger e^{2i\phi}
\sin\frac{\theta_k}{2}$ with the angle $\theta_k$ defined by
$\cos\theta_k=(\cos\frac{2\pi k}{N}-\lambda)/\Lambda_k$.
The ground state $|g\rangle$ of $H_\phi$ is the vacuum of the
fermionic modes described by $c_k |g\rangle=0$. Substituting the
operator $c_k$ into this equation, one obtains the ground state as
\begin{equation} |g\rangle=\prod_{k=1}^M \left (
\cos\frac{\theta_k}{2}|0\rangle_k |0\rangle_{-k} -i e^{2i\phi}
\sin\frac{\theta_k}{2}|1\rangle_k|1\rangle_{-k} \right),
\end{equation}
where $|0\rangle_k$ and $|1\rangle_k$ are the
vacuum and single excitation of the $k$th mode, respectively. The
ground state is a tensor product of states, each lying in the
two-dimensional Hilbert space spanned by
$|0\rangle_k|0\rangle_{-k}$ and $|1\rangle_k|1\rangle_{-k}$. The
geometric phase of the ground state, accumulated by varying the
angle $\phi$ from $0$ to $\pi$ (because the Hamiltonian $H_\phi$
has a bilinear form, $H_\phi$ is $\pi$-periodic in $\phi$), is
described by
\begin{equation}\beta_g=-\frac{i}{M}\int_0^\pi \langle g|\partial_\phi
|g\rangle d\phi.\end{equation} A direct calculation
shows\cite{Carollo}
\begin{equation}
\label{Phase} \beta_g=\frac{\pi}{M}\sum_{k=1}^M (1- \cos\theta_k).
\end{equation}
The term $\beta_k\equiv \pi (1-\cos\theta_k)$ is a geometric phase
for the $k$th mode, and represents the area in the parameter space
(which is the Bloch sphere) enclosed by the loop determined by
$(\theta_k,\phi)$. To study the quantum criticality, we are
interested in the thermodynamic limit when the spin lattice number
$N\ \to \infty$. In this case the summation
$\frac{1}{M}\sum_{k=1}^M$ can be replaced by the integral
$\frac{1}{\pi}\int_0^\pi d\varphi$ with $\varphi=\frac{2\pi
k}{N}$; and then the geometric phase in the thermodynamic limit is
given by
\begin{equation}
\label{Limit} \beta_g=\int_0^\pi (1-\cos\theta_\varphi)d\varphi,
\end{equation}
where $\cos\theta_\varphi=(\cos\varphi-\lambda)/\Lambda_\varphi$
with the energy spectrum
$\Lambda_{\varphi}=\sqrt{(\lambda-\cos\varphi)^2+\gamma^2\sin^2\varphi}$.
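As a simple illustration, Eqs.~(\ref{Phase}) and (\ref{Limit}) can be evaluated numerically; the following minimal sketch (ours; the parameter values are arbitrary, and $\lambda=1$ exactly must be avoided in Eq.~(\ref{Limit}), whose integrand is then singular at $\varphi=0$) computes $\beta_g$ for a finite chain and in the thermodynamic limit:
\begin{verbatim}
# Minimal sketch (illustrative): ground-state geometric phase of the
# XY chain from Eq. (Phase) for finite N, and from Eq. (Limit) in
# the thermodynamic limit, versus the transverse field lambda.
import numpy as np

def beta_finite(lam, gamma, N):          # Eq. (Phase), N = 2M+1 odd
    M = (N - 1) // 2
    phi = 2 * np.pi * np.arange(1, M + 1) / N
    Lam = np.sqrt((lam - np.cos(phi))**2 + gamma**2*np.sin(phi)**2)
    return np.pi / M * np.sum(1 - (np.cos(phi) - lam) / Lam)

def beta_infinite(lam, gamma, n=20001):  # Eq. (Limit)
    phi = np.linspace(0, np.pi, n)
    Lam = np.sqrt((lam - np.cos(phi))**2 + gamma**2*np.sin(phi)**2)
    return np.trapz(1 - (np.cos(phi) - lam) / Lam, phi)

for lam in (0.5, 0.99, 1.5):
    print(lam, beta_finite(lam, 1.0, 1001), beta_infinite(lam, 1.0))
\end{verbatim}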
As for quantum criticality in the XY model, there are two regions
of criticality, defined by the existence of gapless excitations in
the parameter space $(\gamma,\lambda)$: (i) the XX region of
criticality described by the segment $(\gamma,\lambda)=(0,(0,1))$;
(ii) the critical line $\lambda_c=1$ for the whole family of the
XY model. For the second critical region, we need to distinguish
two universality classes depending on the anisotropy $\gamma$. The
critical features are characterized in terms of a critical exponent
$\nu$ defined by $\xi \sim |\lambda-\lambda_c|^{-\nu}$ with $\xi$
representing the correlation length. For any value of $\gamma $,
quantum criticality occurs at a critical magnetic field
$\lambda_c=1$. For the interval $0<\gamma\le 1$ the models belong
to the Ising universality class characterized by the critical
exponent $\nu=1$, while for $\gamma=0$ the model belongs to the XX
universality class with $\nu=1/2$ \cite{Lieb,Sachdev}. The close
relation between geometric phase and quantum criticality for the
first region has been addressed in
Refs.~\cite{Carollo,Zhu2006,Hamma}; here we mainly review the
results for the second region, which is clearly more interesting
in the sense that second-order quantum phase transitions occur
there.
\begin{figure}[tbph]
\centering
\includegraphics[height=5cm,width=8cm]{fig3.eps}
\caption{(color online). (a) Geometric phase $\beta_g$ of the
ground state and (b) its derivative $d\beta_g/d\lambda$ as a
function of the Hamiltonian parameters $\lambda$ and $\gamma$. The
lattice size is $N=10001$. There are clear anomalies for the
derivative of geometric phase along the critical line
$\lambda_c=1$. } \label{Fig3}
\end{figure}
To demonstrate the relation between geometric phase and quantum
phase transitions, we plot the geometric phase $\beta_g$ and its
derivative $d\beta_g/d\lambda$ as functions of the field strength
$\lambda$ and the anisotropy $\gamma$ in Fig. 3. A significant feature is notable:
the nonanalytical property of the geometric phase along the whole
critical line $\lambda_c=1$ in the XY spin model is clearly shown
by anomalies for the derivative of geometric phase along the same
line.
\begin{figure}[tbph]
\centering \vspace{1.5cm}
\includegraphics[height=3.5cm,width=8cm]{fig4.eps}
\vspace{-1.0cm}
\caption{(color online). The derivative $d\beta_g/d\lambda$ for
the Ising model ($\gamma=1$) as a function of the Hamiltonian
parameter $\lambda$. The curves correspond to different lattice
sizes $N=21,101,501,1001,\infty$. With increasing system
size, the maximum becomes more pronounced. The inset shows that
the position of the maximum tends towards the critical point
$\lambda_c=1$ as $N^{-1.803}$.} \label{Fig4}
\end{figure}
To further understand the relation between geometric phase and
quantum criticality, we study the scaling behavior of geometric
phases by the finite size scaling approach\cite{Barber}. We first
look at the Ising model. The derivatives $d\beta_g/d\lambda$ for
$\gamma=1$ and different lattice sizes are plotted in
Fig.\ref{Fig4}. There is no real divergence for finite $N$, but
the curves exhibit marked anomalies whose height
increases with lattice size. The position $\lambda_m$ of the peak
can be regarded as a pseudo-critical point \cite{Barber} which
tends towards the critical point as $N^{-1.803}$ and
clearly approaches $\lambda_c$ as $N \to \infty$. In addition, as
shown in Ref.~\cite{Zhu2006}, the value of $d\beta_g/d\lambda$ at
the point $\lambda_m$ diverges logarithmically with increasing
lattice size as:
\begin{equation}
\label{Scaling1} \frac{d\beta_g}{d\lambda}|_{\lambda_m} \approx
\kappa_1 \ln N +\mbox{const.},
\end{equation}
with $\kappa_1=0.3121$. On the other hand, the singular behavior
of $d\beta_g/d\lambda$ for the infinite Ising chain can be
analyzed in the vicinity of the quantum critical point, and we find
the asymptotic behavior as
\begin{equation} \label{Scaling2}
\frac{d\beta_g}{d\lambda}\approx \kappa_2
\ln |\lambda-\lambda_c|+\mbox{ const.}, \end{equation}
with $\kappa_2=-0.3123$. According to the scaling ansatz in the
case of logarithmic divergence \cite{Barber}, the ratio
$|\kappa_2/\kappa_1|$ gives the exponent $\nu$ that governs the
divergence of the correlation length. Therefore, $\nu \sim 1$ is
obtained in our numerical calculation for the Ising chain, in
agreement with the well-known solution of the Ising model
\cite{Lieb}.
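The finite-size analysis described above can be reproduced schematically as follows (a minimal sketch; the lattice sizes and grids are illustrative, and much larger $N$ is required for precise exponents):
\begin{verbatim}
# Minimal sketch (illustrative): locating the peak of
# d(beta_g)/d(lambda) for the Ising chain (gamma = 1) and fitting
# Eq. (Scaling1).  The fitted slope approximates kappa_1; an
# analogous fit of the infinite-chain derivative against
# ln|lambda - lambda_c| gives kappa_2, and nu ~ |kappa_2/kappa_1|.
import numpy as np

def beta_finite(lam, gamma, N):          # Eq. (Phase), as above
    M = (N - 1) // 2
    phi = 2 * np.pi * np.arange(1, M + 1) / N
    Lam = np.sqrt((lam - np.cos(phi))**2 + gamma**2*np.sin(phi)**2)
    return np.pi / M * np.sum(1 - (np.cos(phi) - lam) / Lam)

def dbeta(lam, N, d=1e-5):               # numerical derivative
    return (beta_finite(lam + d, 1.0, N)
            - beta_finite(lam - d, 1.0, N)) / (2 * d)

sizes = [101, 301, 1001, 3001]
peaks = [max(dbeta(l, N) for l in np.linspace(0.95, 1.05, 801))
         for N in sizes]
kappa1, const = np.polyfit(np.log(sizes), peaks, 1)
print(kappa1)    # slope of the ln N fit, cf. kappa_1 ~ 0.31
\end{verbatim}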
A cornerstone of QPTs is the universality principle, according to which the
critical behavior depends only on the dimension of the system and
the symmetry of the order parameter. The XY model for the interval
$\gamma \in (0,1]$ belongs to the same universality class with
critical exponent $\nu=1$. To verify the universality principle in
this model, the scaling behavior for different values of the
parameter $\gamma$ has been numerically calculated in Ref.
\cite{Zhu2006}. The results there show that the asymptotic
behaviors are still described by Eqs. (\ref{Scaling1}) and
(\ref{Scaling2}) with $\kappa_1$ and $\kappa_2$ being
$\gamma$-dependent constants, and the same critical exponent
$\nu=1$ can be obtained for any $\gamma \in (0,1]$.
Compared with the $\gamma\not= 0$ case, the nature of the
divergence of $d\beta_g/d\lambda$ at the critical point
$(\gamma=0,\lambda=1)$ belongs to a different universality class,
and the scaling behavior of geometric phase can be directly
extracted from the explicit expression of the geometric phase in
the thermodynamic limit. The geometric phase in the
thermodynamic limit can be obtained explicitly from
Eq.(\ref{Limit}) for $\gamma=0$ as
\begin{eqnarray}
\beta_g =\left\{
\begin{array}
{ll}%
2\pi-2 \arccos(\lambda), & (\lambda \leq 1)\\
2\pi, & (\lambda > 1)%
\end{array}
\right.
\end{eqnarray}
However, it appears from
Eq.(\ref{Phase}) that the geometric phase $\beta_g$ is always
trivial for strictly $\gamma=0$ and every finite lattice size $M$,
since $\theta_k=0$ or $\pi$ for every $k$. The difference between
the finite and infinite lattice sizes can be understood from the
two limits $N\to \infty$ and $\gamma \to 0$. Assume
$\gamma=\epsilon$ with $\epsilon$ an arbitrarily small but still
finite value; then we can still find a solution $\varphi_0$ (this
requires $N\to \infty$) of $\cos\varphi_0-\lambda=0$, but $\Lambda
_{\varphi_0}=\epsilon\sqrt{1-\lambda^2}\neq 0$ for $\lambda<
1$. Then a $\pi$ geometric phase appears for such $\varphi_0$
since $\theta_{\varphi_0}=\pi/2$. Since
$d\beta_g/d\lambda=2/\sqrt{1-\lambda^2}\approx\sqrt{2}(1-\lambda)^{-1/2}$ $(\lambda \to
1^{-})$, we can infer the known result that the critical exponent
$\nu=1/2$ for the XX model.
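This limiting behavior can be checked numerically by setting $\gamma=\epsilon$ small in Eq.~(\ref{Limit}); a minimal sketch (ours; the values of $\epsilon$ and the integration grid are illustrative):
\begin{verbatim}
# Minimal sketch (illustrative): the gamma -> 0 limit of Eq. (Limit)
# against the closed form above, 2*pi - 2*arccos(lambda) for
# lambda <= 1 and 2*pi for lambda > 1.
import numpy as np

def beta_xx(lam, eps=1e-6, n=200001):  # Eq. (Limit), gamma = eps
    phi = np.linspace(0, np.pi, n)
    Lam = np.sqrt((lam - np.cos(phi))**2 + eps**2*np.sin(phi)**2)
    return np.trapz(1 - (np.cos(phi) - lam) / Lam, phi)

for lam in (0.3, 0.8, 1.2):
    exact = 2*np.pi - 2*np.arccos(lam) if lam <= 1 else 2*np.pi
    print(lam, beta_xx(lam), exact)
\end{verbatim}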
Furthermore, we can confirm the known relation $z\nu=1$ between
$\nu$ and the dynamical exponent $z$ from the calculations of
geometric phases. The dynamical behavior is determined by the
expansion of the energy spectrum, i.e., $\Lambda_{\varphi
\rightarrow 0} \sim \varphi^z [1+(\varphi\xi)^{-z}]$. Then $z=1$
for $\gamma\in(0,1]$ and $z=2$ for $\gamma=0$ are found by the
expansion of $\Lambda_\varphi$ in the case $\varphi\rightarrow 0$.
So we have $z\nu=1$, which is indeed the case for the XY
criticality\cite{Sachdev}.
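Explicitly, expanding the spectrum at the critical field $\lambda=1$ for small $\varphi$ gives (a short check of the quoted exponents)
$$\Lambda_\varphi=\sqrt{(1-\cos\varphi)^2+\gamma^2\sin^2\varphi}\approx
\varphi\sqrt{\gamma^2+\varphi^2/4},$$
which behaves as $\gamma\varphi$ (i.e., $z=1$) for $\gamma\in(0,1]$ and as $\varphi^2/2$ (i.e., $z=2$) for $\gamma=0$.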
Therefore, the above results clearly show that all the key
ingredients of quantum criticality are present in the
geometric phases of the ground state in the XY spin model.
\subsection{The Dicke model}
Our second example is the Dicke model \cite{Dicke} studied in
Refs.~\cite{Plastina,Chen}. It consists of $N$ two-level (qubit)
systems coupled to a single bosonic mode. The Hamiltonian is given
by ($\hbar=1$)
\begin{equation}
H=\omega a^{\dagger}a+\Delta
J_x+\frac{\lambda}{\sqrt{N}}(a^\dagger+a)J_z,
\end{equation}
where $a$, $a^{\dagger}$ are the annihilation and creation operators of
the bosonic mode, respectively;
$J_{x,z}=\sum_{j=1}^{N}\sigma_{x,z}^j$, with $\sigma_{x,z}^j$ being
the Pauli matrices for qubit $j$, are collective angular
momentum operators for all qubits; $\lambda$ denotes the coupling
strength between the atoms and the field; the parameters $\Delta$ and
$\omega$ represent the transition frequency of the atoms and the
bosonic mode frequency, respectively. The prefactor $1/\sqrt{N}$
is inserted to have a finite free energy per atom in the
thermodynamical limit $N\rightarrow\infty$. This Hamiltonian is
canonically equivalent to the Dicke Hamiltonian by a $\pi/2$
rotation around the $y$ axis.
As illustrated in Refs.\cite{Hepp1,Emary}, exact solutions may be
obtained in the thermodynamic limit by employing a
Holstein-Primakoff transformation of the angular momentum algebra.
In the thermodynamical limit, the Dicke Hamiltonian undergoes a
second-order quantum phase transition at the critical point
$\lambda_c=\sqrt{\omega\Delta/2}.$ When $\lambda<\lambda_c$, the
system is in its normal phase in which the ground state is highly
unexcited, while for $\lambda>\lambda_c$, the system is in its
superradiant phase in which both the bosonic field occupation and
the spin magnetization acquire macroscopic values.
Similarly to the XY spin model, in order to investigate the
geometric phase one changes the original Hamiltonian by the
unitary transformation $U_\phi=\exp(-i\phi J_x/2)$ where $\phi$ is
a slowly varying parameter, and then the transformed Hamiltonian
is given by
\begin{equation}
\label{Dicke} H_\phi=U^\dagger_\phi H
U_\phi=\frac{\omega}{2}[p^2+q^2+\mathbf{B}\cdot \mathbf{J}],
\end{equation}
where the Hamiltonian of the free bosonic field is expressed in
terms of canonical variables $q=(a^\dagger+a)/\sqrt{2}$ and
$p=i(a^\dagger-a)/\sqrt{2}$ that obey the standard quantization
condition $[q,p]=i$.
$\mathbf{B}=(D,\frac{Lq\sin\phi}{\sqrt{N}},\frac{Lq\cos\phi}{\sqrt{N}})$
with dimensionless parameters $D=2\Delta/\omega$ and
$L=2\sqrt{2}\lambda/\omega$ is an effective magnetic field felt by
the qubits.
In the adiabatic limit, the geometric phase associated with the
ground state of the system can be obtained by the Born-Oppenheimer
approximation \cite{Plastina,Liberti}. In this case, the total
wave function of the ground state of the system can be
approximated by
\begin{equation}
\label{Ground_state} |\psi_{tot}\rangle=\int dq \varphi
(q)|q\rangle \otimes |\chi (q,\phi)\rangle.
\end{equation}
Here the state $|\chi (q,\phi)\rangle$ is the eigenstate of the
adiabatic equation for the qubit (``fast'') part for each fixed value
of the slow variable $q$, i.e.,
\begin{equation}
\mathbf{B} \cdot \mathbf{J} |\chi (q,\phi)\rangle=E (q) |\chi
(q,\phi)\rangle
\end{equation}
with $E(q)$ the eigenenergy. It can be proven that the state
$|\chi (q,\phi)\rangle$ can be expressed as a direct product of
$N$ qubits as $|\chi (q,\phi)\rangle=\otimes_{j=1}^N |\chi
(q,\phi)\rangle_j,$ and the state of each qubit can be written
as $$|\chi
(q,\phi)\rangle_j=\sin\frac{\alpha}{2}|\uparrow\rangle_j-\cos\frac{\alpha}{2}
e^{-i\eta} |\downarrow\rangle_j$$ with $\cos\alpha=Lq
\cos\phi/(\sqrt{N} E(q))$ and $\tan\eta=Lq \sin\phi/(\sqrt{N} D).$
On the other hand, the ground state wave function for the
oscillator $\varphi (q)$ is governed by the one-dimensional
time-independent Schr\"odinger equation
$$
H_{ad}\, \varphi (q)=\frac{\omega}{2} \left( -\frac{d^2}{d
q^2}+q^2-N E(q) \right)\varphi (q)=\varepsilon_0\, \varphi (q),
$$
where $\varepsilon_0$ is the lowest eigenvalue of the adiabatic
Hamiltonian $H_{ad}$.
Once the total wave function of the ground state is known, the
geometric phase $\beta_g$ of the ground state may be derived by
the standard method as $\beta_g=i\oint
\langle\psi_{tot}|d/d\phi|\psi_{tot}\rangle d\phi$, and the
final result is given by
\begin{equation}
\beta_g=N\pi \left(1+\frac{\langle J_x\rangle}{N}
\right).\end{equation} In the thermodynamic limit, one can show
that, in terms of the scaled coupling $\alpha\equiv(\lambda/\lambda_c)^2$
(not to be confused with the qubit mixing angle $\alpha$ above),
\begin{equation}
\frac{\beta_g}{N} \left|_{N\to\infty} \right.=\left\{
\begin{array}{ll}\ \ \ \ 0, & (\alpha \leq 1 ) \\ \pi
(1-\frac{1}{\alpha}),\ \ \ \ & (\alpha > 1 ).\end{array} \right.
\end{equation}
\begin{figure}[tbph]
\centering
\includegraphics[height=4cm]{fig5.eps}
\caption{The geometric phase $\gamma$ $(\equiv \beta_g)$ of the
ground state and its derivative (inset) for the Dicke model with
respect to the parameter $\alpha$ for different qubit numbers
$N$ and the parameter $D=10$. The geometric phase increases with
$\alpha$, and there is a cusplike behavior in the thermodynamic
limit at the critical transition point $\alpha=1$. } \label{Fig5}
\end{figure}
The scaled geometric phase $\beta_g/N$ and its derivative with
respect to the parameter $\alpha$ for $D=10$ are shown in
Fig.~\ref{Fig5} \cite{Plastina}. It is evident that the geometric
phase increases with the coupling constant for finite qubit
number $N$, while in the thermodynamic limit the
geometric phase vanishes for $\alpha <\alpha_{c}$ and exhibits a
cusplike behavior at the critical point $\alpha=\alpha_{c}$. In
addition, the derivative is discontinuous at the critical point.
These results are consistent with the expected behavior of the
geometric phase across the critical point, and therefore provide
another example of the close relation between geometric
phase and quantum phase transition.
\subsection{The Lipkin-Meshkov-Glick model}
Our third example is the Lipkin-Meshkov-Glick (LMG) model
discussed in Ref.~\cite{Cui}. The LMG model was first introduced in
nuclear physics \cite{Lipkin}. It describes a set of $N$
qubits, each coupled to all of the others with a strength independent
of their positions and of the nature of the elements, in a magnetic
field $h$ along the $z$ direction; i.e., the Hamiltonian is given by
\begin{equation}\label{LMG}
H= - \frac{1}{N}(S^2_x + \gamma S^2_y) - h S_z,
\end{equation}
where $\gamma$ is the anisotropy parameter,
$S_{\alpha}=\sum_{i=1}^{N}\sigma^i_{\alpha}/2$ $(\alpha=x, y, z)$
with $\sigma_{\alpha}$ the Pauli operators, and $N$ is the total
particle number of the system. The prefactor $1/N$ is essential
to ensure the convergence of the free energy per spin in the
thermodynamic limit. As widely discussed in the literature (see,
e.g., Ref.~\cite{Botet}), this system displays a second-order
quantum phase transition at the critical point $h=1$.
The diagonalization of the LMG Hamiltonian and the derivation of the
geometric phase can be obtained by a standard procedure, which can
be summarized in the following steps \cite{Cui}: (i) Perform a
rotation of the spin operators around the $y$ direction that
aligns the $z$ axis with the so-called semiclassical magnetization
\cite{Dusuel}, for which the Hamiltonian in Eq.~(\ref{LMG})
takes its minimal value in the semiclassical approximation. (ii)
Similarly to the XY model and the Dicke model, to introduce a
geometric phase of the ground state, we consider a system
rotated by $U(\phi)=e^{-i\phi\tilde{S}_z}$ around the new $z$
direction, so that the Hamiltonian becomes $H(\phi)=U^\dagger
(\phi)H U(\phi)$. (iii) Then use the
Holstein-Primakoff representation,
\begin{eqnarray}\label{hp}
\tilde{S}_z(\phi) &=& N/2 - a^{\dagger}a, \nonumber\\
\tilde{S}^{+}(\phi)&=&(N-a^{\dagger}a)^{1/2}a e^{i\phi},\nonumber\\
\tilde{S}^-(\phi)&=&a^{\dagger} e^{- i\phi}(N-a^{\dagger}a)^{1/2},
\end{eqnarray}
in which $a^{\dagger}$ is a bosonic creation operator. Since the $z$
axis is along the semiclassical magnetization, $a^{\dagger}a/N\ll 1$ is a
reasonable assumption in the low-energy approximation, in which $N$
is large but finite. (iv) Finally, apply the Bogoliubov transformation,
which defines the bosonic operator $b(\phi)=\cosh x\, a
e^{i\phi} + \sinh x\, a^{\dagger}e^{-i\phi}$, where
$\tanh 2x=2\Gamma/\Delta$ with $\Delta=\sin^2\theta - \frac{\gamma+
\cos^2\theta}{2}+h\cos\theta$ and
$\Gamma=\frac{\gamma-\cos^2\theta}{4}.$ These steps
diagonalize the Hamiltonian into the form
\begin{equation}
H_{diag}(\phi)=Nd+\xi + \Delta^D b^{\dagger}(\phi)b(\phi),
\end{equation}
where $d=- \frac{1}{4}(\sin^2\theta + 2h\cos\theta)$,
$\xi=\frac{\Delta}{2}(\sqrt{1-\epsilon^2}-1)$,
$\Delta^D=\Delta\sqrt{1 - \epsilon^2}$, and
$\epsilon=\tanh2x=2\Gamma/\Delta.$ The ground state
$|g(\phi)\rangle$ is determined by the relation
$b(\phi)|g(\phi)\rangle=0.$ Substituting $b(\phi)$ into the
equation above, one finds the ground state,
\begin{eqnarray}
\nonumber |g(\phi)\rangle &
=&\frac{1}{C}\sum_{n=0}^{[N/2]}\sqrt{\frac{(2n-1)!!}{(2n)!!}}\left(-
\frac{e^{-i\phi}\sinh x}{e^{i\phi}\cosh x }\right)^{n-1}\\
& & \cdot(-\sqrt{2}e^{-i\phi}\sinh x ) |2n\rangle,
\end{eqnarray}
where $n!!=n(n-2)(n-4)\cdots$ and $n!!=1$ for $n\leq0$.
$|n\rangle$ is the Fock state of the bosonic operator $a^{\dagger}$
and the normalization constant is
$C^2=\sum_{n=0}^{[N/2]}2\sinh^2x\,\frac{(2n-1)!!}{(2n)!!}\tanh^{2(n-1)}x$.
The geometric phase $\beta_g$ of the ground state accumulated by
changing $\phi$ from $0$ to $\pi$ can be derived by the standard
method as shown before, and the final result is given by \cite{Cui}
\begin{equation}\label{g}
\beta_g=\pi\left[ 1 -
\frac{\sum_{n=0}^{[N/2]}2n\frac{(2n-1)!!}{(2n)!!}\tanh^{2(n-1)}x}
{\sum_{n=0}^{[N/2]}\frac{(2n-1)!!}{(2n)!!}\tanh^{2(n-1)}x} \right].
\end{equation}
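Because Eq.~(\ref{g}) involves only ratios of double factorials, it is
straightforward to evaluate for finite $N$. The following short Python
sketch (our own illustration; the inputs $t=\tanh x$ and $N$ are taken
as free parameters) builds the two sums iteratively:
\begin{verbatim}
import numpy as np

# Evaluate Eq. (g): beta_g = pi * (1 - S1/S0), with
#   S0 = sum_n r_n * t^(2(n-1)),   S1 = sum_n 2n * r_n * t^(2(n-1)),
# where r_n = (2n-1)!!/(2n)!! and t = tanh(x).
def lmg_gp(t, N):
    s0, s1, r = 0.0, 0.0, 1.0      # r_0 = 1 since n!! = 1 for n <= 0
    for n in range(N // 2 + 1):
        w = r * t ** (2 * (n - 1))
        s0 += w
        s1 += 2 * n * w
        r *= (2 * n + 1) / (2 * n + 2)   # r_{n+1} from r_n
    return np.pi * (1.0 - s1 / s0)

for t in (0.5, 0.9, 0.99):           # t -> 1 near the critical line
    print(t, [round(lmg_gp(t, N), 1) for N in (50, 200, 800)])
\end{verbatim}
As $t\rightarrow1$ the magnitude of $\beta_g$ grows with $N$, in line
with the divergence along $h=1$ and the scaling $\beta_g\approx -N$
quoted below.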
To gain some basic insight into the relation between the geometric
phase and the phase transition in the LMG model, the geometric phase
$\beta_g$ as a function of the parameters $(\gamma,h)$ has been
plotted in Fig.~6 \cite{Cui}. It is notable that the geometric phase
$\beta_g$, independently of the anisotropy, is divergent along the line
$h=1$, where the LMG model has been proven to exhibit a
second-order phase transition \cite{Botet}. The divergence of the
geometric phase itself, rather than of its derivative,
distinguishes this case from the XY and Dicke models.
This difference stems from the collective interaction in the
LMG model, which is absent in the XY model \cite{Cui}.
\begin{figure}
\centering \label{Fig_LMG}
\includegraphics[height=4cm]{fig6.eps}
\put(-40, 5){$h$} \put(-20, 100){$\gamma$} \put(10, 60){$\beta_g$}
\caption{The geometric phase $\beta_g$ of the ground state for the
LMG model as a function of the parameters $(\gamma,h)$ for $N=200$.
The divergence of $\beta_g$ is evident at the critical line
$h_c=1$.}
\end{figure}
The scaling behavior of $\beta_g$ has also been studied in Ref.~
\cite{Cui}. A relatively simple relation $\beta_g\approx -N$ is
obtained there. Furthermore, the scaling is independent of
$\gamma$, which means that for different $\gamma$ the phase
transitions belong to the same universality class. This phenomenon
is different from the XY model, in which the isotropic and
anisotropic interactions belong to different
universality classes \cite{Zhu2006}.
\section{Pattern II: GP of the test qubit and QPT}
In this section, we consider a test qubit coupled to a quantum
many-body system\cite{Yi,Yuan,Quan}. The Hamiltonian of the whole
system may have the form
\begin{equation}
H=H_t+H_S+H_I,
\end{equation}
where $H_t=\mu \mathbf{B} \cdot \mathbf{\sigma}$ stands for the
Hamiltonian of the test qubit in a general form, $H_S$ represents
the Hamiltonian of a many-body system which we are going to study,
and $H_I$ denotes the coupling between them. We assume that the
quantum system described by $H_S$ undergoes a quantum phase
transition at certain critical points. It is expected that the
geometric phase of the test qubit can be used to identify the
quantum phase transition of the many-body system. A relatively
general formalism to show the close relation between geometric
phase of the test qubit and quantum phase transition of the many
body system has been developed in Ref.~\cite{Yi}. For concreteness,
here we address a detailed example studied in Ref.~\cite{Yuan},
where the many-body system undergoing the quantum phase transition is an
XY spin chain, i.e.,
\begin{equation}
H_t=\mu\sigma^z/2+\nu\sigma^x/2,
\end{equation}
\begin{equation}
H_S=-\sum_{l=-M}^M \left
(\frac{1+\gamma}{2}\sigma_l^x\sigma_{l+1}^x+\frac{1-\gamma}{2}\sigma_l^y\sigma_{l+1}^y+\lambda\sigma_l^z
\right),
\end{equation}
\begin{equation}
H_I=\frac{\eta}{N}\sum_{l=1}^{N}\sigma^z\sigma_l^z
\end{equation}
where the Pauli matrices $\sigma^{x,y,z}$ and $\sigma_l^{x,y,z}$
act on the test qubit and on the XY spin-chain subsystem,
respectively. The parameter $\eta$ represents the coupling
strength between the test qubit and all spins (qubits) in the spin
chain. This model is similar to the Hepp-Coleman model\cite{Hepp},
which was initially proposed as a model for quantum measurement,
and its generalization\cite{Nakazato,Sun}.
Following Ref.~\cite{Yuan}, we assume that the test qubit is initially in
a superposition state
$|\phi_{t}(0)\rangle=c_{g}|g\rangle+c_{e}|e\rangle$, where
$|g\rangle=\left(
\sin\frac{\theta_0}{2},-\cos\frac{\theta_0}{2}\right)
^T$ and $|e\rangle=\left( \cos\frac{\theta_0}{2},\sin\frac{\theta_0}%
{2}\right)^T$ with $\theta_0=\tan^{-1}(\nu/\mu)$ are ground and
excited states of $H_{t}$, respectively. The coefficients $c_{g}$
and $c_{e}$ satisfy the normalization condition,
$|c_{g}|^{2}+|c_{e}|^{2}=1$. Then the evolution of the $XY$ spin
chain, initially prepared in $|\varphi(0)\rangle$, will split into
two branches $|\varphi_{\alpha}(t)\rangle=\exp(-iH_{\alpha
}t)|\varphi(0)\rangle$ ($\alpha=g,e$), and the total wave function
is obtained
as $|\psi(t)\rangle=c_{g}|g\rangle\otimes|\varphi_{g}(t)\rangle+c_{e}%
|e\rangle\otimes|\varphi_{e}(t)\rangle$. Here, the evolutions of
the two branch wave functions $|\varphi_{\alpha}(t)\rangle$ are
driven, respectively, by the two effective Hamiltonians
\begin{equation}
H_{g}=\langle
g|H|g\rangle=H_{S}-\delta\sum_{l=1}^N\sigma_{l}^{z}-\Delta,
\end{equation}
\begin{equation}
H_{e}=\langle e|H|e\rangle=H_{S}+\delta\sum_{l=1}^N
\sigma_{l}^{z}+\Delta,
\end{equation}
where $\Delta=\sqrt{\mu^{2}+\nu^{2}}/2$ and
$\delta=\eta\cos\theta_0/N$. Both $H_{g}$ and $H_{e}$ describe the
$XY$ model in a transverse field, but with a tiny difference in
the field strength. Similarly to the method used to diagonalize the
standard XY spin chain addressed in pattern I, the ground
states of the Hamiltonians $H_\alpha$ are given by
\begin{equation} |G_\alpha\rangle=\prod_{k=1}^M \left (
\cos\frac{\theta_k^\alpha}{2}|0\rangle_k |0\rangle_{-k} +i
\sin\frac{\theta_k^\alpha}{2}|1\rangle_k|1\rangle_{-k} \right),
\end{equation}
where $\cos\theta_k^\alpha=\epsilon_{k,\alpha}/\Lambda_k^\alpha$
with
$\Lambda_k^\alpha=\sqrt{\epsilon_{k,\alpha}^{2}+\gamma^{2}\sin^{2}\frac{2\pi
k}{N}}$ and $\epsilon_{k,\alpha}=\lambda-\cos\frac{2\pi
k}{N}+\kappa_{\alpha}\delta$ ($\kappa_g=-\kappa_e=1$).
$|0\rangle_k$ and $|1\rangle_k$ are the vacuum and single
excitation of the $k$th mode $d_k$, respectively, where $d_k$ is
defined as for the standard XY model (see section 3.1).
Now we turn to study the behavior of the geometric phase of the
test qubit when the XY spin chain is in its ground state. Due to
the coupling, it is expected that the geometric phase of the test
qubit will be profoundly influenced by the occurrence of a quantum
phase transition in the spin-chain environment. Since we are
interested in the quantum phase transition, which is a property
of the ground state, we assume that the $XY$ spin chain remains
adiabatically in the ground state $|G_g(\{\theta_{k}^g\})\rangle$
of $H_{g}$. In this case the effective mean-field Hamiltonian for
the test qubit is given by
\begin{eqnarray}
H_{eff} &=& H_{t}+\langle G_g|H_{I}|G_g\rangle\\
&=& \left( \frac{\mu}{2}+\frac{2\eta}{N}\sum_{k=1}^{M}\cos\theta_{k}%
^{(g)}\right) \sigma^{z}+\frac{\nu}{2}\sigma^{x}.
\end{eqnarray}
In order to generate a geometric phase for the test qubit, as
usual, we change the Hamiltonian by means of a unitary
transformation: $U(\phi)=\exp\left(
-i\frac{\phi}{2}\sigma_{z}\right),$ where $\phi$ is a slowly
varying parameter, changing from $0$ to $\pi$. The transformed
Hamiltonian can be written as $H(\phi)
=U^{\dagger}(\phi)H_{eff}U(\phi)$, i.e.,
\begin{equation}
H (\phi)=\left( \frac{\mu}{2}+\frac{2\eta}{N}\sum_{k=1}^{M}\cos\theta_{k}%
^{(g)}\right)
\sigma^{z}+\frac{\nu}{2}(\sigma^{x}\cos\phi-\sigma^{y}\sin \phi).
\end{equation}
Then the eigenenergies of the effective Hamiltonian for the test
qubit are given by
\begin{equation}
E_{e,g}=\pm\sqrt{\left(
\frac{\mu}{2}+\frac{2\eta}{N}\sum_{k=1}^{M}\cos
\theta_{k}^{(g)}\right) ^{2}+\frac{\nu^{2}}{4}},
\end{equation}
and the corresponding eigenstates are given by
\begin{equation}
|g\rangle=\left( \begin{array}{l}
\sin\frac{\theta}{2} \\
-\cos\frac{\theta}{2} e^{-i\phi}
\end{array} \right),
|e\rangle=\left(
\begin{array}{l}
\cos\frac{\theta}{2}\\
\sin\frac{\theta}{2}e^{-i\phi}
\end{array}\right),
\end{equation}
where $\sin\theta=\nu/(2E_{e})$.
The ground-state geometric phase $\beta_{g}$ accumulated by the
test qubit as $\phi$ varies from zero to $\pi$ can be derived
from the standard integral $i\int_0^\pi \langle
g|\partial_\phi|g\rangle d\phi,$ and it is easy to find that
\begin{equation}
\label{T_phase} \beta_{g} =\pi\left( 1+\frac{\mu+4\eta
f(\lambda,\gamma,N)}{\sqrt{[\mu+4\eta f(\lambda,\gamma
,N)]^{2}+\nu^{2}}}\right),
\end{equation}
where $f(\lambda,\gamma,N)=\frac{1}{N}\sum_{k=1}^{M}\cos
\theta_{k}^{(g)}$. In the thermodynamic limit
$N\rightarrow\infty$, the summation in $f(\lambda,\gamma,N)$ can
be replaced by the integral as follows:
\begin{equation}
\label{F_function}
f(\lambda,\gamma,N)|_{N\rightarrow\infty}=\frac{1}{2\pi}\int_{0}^{\pi}%
\frac{\lambda-\cos\varphi}{\sqrt{(\lambda-\cos\varphi)^{2}+\gamma^{2}\sin
^{2}\varphi}}d\varphi.
\end{equation}
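A quick numerical implementation of this limit is simple: evaluate
$f$ by quadrature and insert it into Eq.~(\ref{T_phase}). The sketch
below (Python; our own illustration, using the parameter values
$\mu=0.1$, $\nu=2$, $\eta=0.5$ of Fig.~7) does this along the Ising
line $\gamma=1$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# f(lambda, gamma) in the thermodynamic limit, Eq. (F_function)
def f_inf(lam, gamma):
    integrand = lambda p: (lam - np.cos(p)) / np.sqrt(
        (lam - np.cos(p))**2 + gamma**2 * np.sin(p)**2)
    val, _ = quad(integrand, 0.0, np.pi)
    return val / (2.0 * np.pi)

# Ground-state geometric phase of the test qubit, Eq. (T_phase)
def beta_g(lam, gamma, mu=0.1, nu=2.0, eta=0.5):
    x = mu + 4.0 * eta * f_inf(lam, gamma)
    return np.pi * (1.0 + x / np.hypot(x, nu))

for lam in (0.5, 0.9, 1.0, 1.1, 1.5):
    print(lam, round(beta_g(lam, 1.0), 4))
\end{verbatim}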
The geometric phase $\beta_g$ and its derivative
$d\beta_g/d\lambda$ as functions of the parameters
$(\lambda,\gamma)$ of the XY model are plotted in Fig.~7. As
expected, the nonanalytic behavior of the geometric phase and the
corresponding anomalies in its derivative $d\beta_g/d\lambda$
along the critical line $\lambda_c=1$ are clear. All these
features are very similar to those of the XY spin chain in
pattern I (see section 3.1).
\begin{figure}
\centering \label{fig7}
\includegraphics[height=4cm,width=8cm]{fig7.eps}
\caption{(a) Ground-state geometric phase $\beta_g$ of the test
qubit and (b) its derivative $d\beta_g/d\lambda$ as functions of
the spin-chain parameters $(\lambda,\gamma)$. The anomalies in the
derivative of the geometric phase are clear along the critical line
$\lambda_c=1$. The other parameters are $\mu=0.1$, $\nu=2$, and
$\eta=0.5$. }
\end{figure}
\begin{figure}
\centering \label{Yuan_2_LMG}
\includegraphics[height=4cm]{fig8.eps}
\caption{The derivatives $d\beta_g/d\lambda$ for the test qubit
coupled to the Ising spin chain ($\gamma=1$), with
respect to the parameter $\lambda$ for different lattice sizes
$N=12,51,251,501,\infty$. With increasing system size, the
maximum becomes more pronounced and the position of the maximum
clearly approaches $\lambda_c=1$ as $N \rightarrow \infty$. The
inset shows the size scaling of the position of the peak
in $d\beta_g/d\lambda$ (circles) and in the function
$f(\lambda,\gamma,N)$ (squares). }
\end{figure}
To further understand the relation between GPs and QPTs in this
system, let us consider the case of the $XX$ spin model ($\gamma=0$),
in which the geometric phase can be derived analytically. In the
thermodynamic limit, the function $f(\lambda,\gamma,N)$ in
Eq.~(\ref{F_function}) can be evaluated explicitly for $\gamma=0$ as
$f=1/2-\arccos(\lambda)/\pi$ when $\lambda\leq1$ and $f=1/2$ when
$\lambda>1$. In this case, the geometric phase of the test qubit
is given by
\begin{equation}
\beta_{g}\bigr |_{N\rightarrow\infty}=\left\{
\begin{array}
[c]{l}%
\pi\left( 1+\frac{\mu+2\eta[1-2\arccos(\lambda)/\pi]}{\sqrt{\left(
\mu+2\eta[1-2\arccos(\lambda)/\pi]\right) ^{2}+\nu^{2}}}\right)
\ \ \ \ \ (\lambda\leq1)\\
\pi\left( 1+\frac{\mu+2\eta}{\sqrt{\left( \mu+2\eta\right)
^{2}+\nu^{2}}}\right)
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (\lambda>1)%
\end{array}
\right.
\end{equation}
which is clearly nonanalytic at $\lambda=\lambda_{c}=1$: $\beta_g$
itself is continuous there, but its derivative with respect to
$\lambda$ diverges.
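Continuing the previous sketch, the closed form above is easily
checked against direct quadrature at $\gamma=0$, where the integrand
of Eq.~(\ref{F_function}) reduces to ${\rm sgn}(\lambda-\cos\varphi)$
with a kink at $\varphi=\arccos\lambda$:
\begin{verbatim}
# gamma = 0 check: split the integral at the kink for clean quadrature
def f_xx(lam):
    kink = np.arccos(np.clip(lam, -1.0, 1.0))
    val, _ = quad(lambda p: np.sign(lam - np.cos(p)), 0.0, np.pi,
                  points=[kink])
    return val / (2.0 * np.pi)

for lam in (0.2, 0.6, 0.95):
    print(lam, round(f_xx(lam), 6),
          round(0.5 - np.arccos(lam) / np.pi, 6))
\end{verbatim}
The two columns agree to quadrature accuracy for all $\lambda\leq1$.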
The derivative $d\beta_g/d\lambda$ as a function of $\lambda$ for
$\gamma=1$ and different lattice sizes is plotted in Fig.~8
\cite{Yuan}. It is notable that the derivative of the geometric phase
is peaked around the critical point $\lambda_c=1$. The amplitude
of the peak is prominently enhanced by increasing the lattice size
of the spin chain. The size dependence of the peak position
$\lambda_m$ of $d\beta_g/d\lambda$ is shown in the inset of
Fig.~8. For comparison, the size dependence of the peak position in
$\lambda$ space for the derivative $d f/d\lambda$ is also shown
in the inset (squares). The scaling behavior of
$d\beta_g/d\lambda$ and $d f/d\lambda$ is evident in the figure.
All these features are similar to those exhibited by the XY spin
chain of pattern I. Therefore, we can see that the QPTs of the XY
spin chain are faithfully reflected by the behavior of the
ground-state GP of the coupled test qubit and its derivative.
\section{Summary and concluding remarks}
Quantum phase transitions play a key role in condensed matter
physics, while the concept of geometric phase is fundamental in
quantum mechanics. However, no direct relation between them was
recognized until the recent work. In this paper, we have presented a
review of the connection recently established between these two
interesting fields. Phases and phase transitions are traditionally
described by the Ginzburg-Landau symmetry-breaking theory based on order
parameters and long-range correlations. Recent developments offer other
perspectives from which to understand quantum phase transitions, such as
topological order, quantum entanglement, geometric phases and
other geometric quantities. Before concluding, we would like to
note briefly that, besides the geometric phase reviewed in this
paper, a deep relationship between some other geometric
quantities and quantum phase transitions has also been revealed.
Quantum fidelity. Recently an approach to quantum phase
transitions based on the concept of quantum fidelity has been put
forward\cite{Zanardi0,Zhou}. In this approach, quantum phase
transitions are characterized by investigating the properties of
the overlap between two ground states corresponding to two
slightly different sets of parameters. The overlap between two
states can be considered as a Hilbert-space distance, and is also
called quantum fidelity from the perspective of quantum
information. A drop of the fidelity with scaling behavior is
observed in the vicinity of quantum phase transition and then
quantitative information about critical exponents can be
extracted\cite{Cozzini,Cozzini1}. The physical intuition behind
this relation is straightforward. Quantum phase transitions mark
the separation between regions of the parameter space which
correspond to ground states having deeply different structural
properties. Since the fidelity is a measure of the state-state
distance, the dramatic change of the structure of the ground state
around the quantum critical point should result in a large
distance between two ground states. The study of QPTs based on
quantum fidelity (overlap) has been reported for several
statistical models\cite{Zanardi0,Zhou,Zanardi_JSM,Gu1,Yi_Hann}. In
addition, the dynamic analogy of quantum overlap is the Loschmidt
echo; it has been shown that the Loschmidt echo also exhibits
scaling behavior in the vicinity of the critical
point\cite{Quan,Quan2,Ou}.
The Riemannian tensor. It
has been shown that the fidelity approach can be better understood
in terms of a Riemannian metric tensor $g$ defined over the
parameter manifold\cite{Zanardi1}. In this approach, the manifold
of coupling constants parameterizing the system's Hamiltonian can
be equipped with a (pseudo) Riemannian tensor $g$ whose
singularities correspond to the critical regions.
We have shown that one can study quantum phase transitions
from the perspective of several geometric objects, such as the geometric
phase, the quantum fidelity and the Riemannian tensor. Surprisingly,
all these approaches share the same origin and can therefore be
unified by the concept of the quantum geometric tensor. We now
briefly recall the formal setting developed in Ref.~\cite{Venuti}.
For each element $\eta$ of the parameter manifold $\mathcal{M}$
there is an associated Hamiltonian $H
(\eta)=\sum_{n=0}^{{\rm dim}\,\mathcal{H}} E_n (\eta) |\Psi_n
(\eta)\rangle \langle \Psi_n (\eta)|$ $(E_{n+1}>E_n)$, acting over
a finite-dimensional state space $\mathcal{H}$. If $|\Psi_0
(\eta)\rangle$ represents the unique ground state of $H(\eta)$,
then one has the mapping $\Psi_0:
\mathcal{M}\rightarrow\mathcal{H}:\eta\rightarrow |\Psi_0
(\eta)\rangle$. In this case, one can define a quantum geometric
tensor, a complex Hermitian tensor on the parameter
manifold $\mathcal{M}$, given by \cite{Provost}
\begin{equation}
Q_{\mu\nu} \equiv \langle \partial_\mu \Psi_0|\partial_\nu
\Psi_0\rangle -\langle \partial_\mu \Psi_0|\Psi_0\rangle
\langle\Psi_0|\partial_\nu \Psi_0\rangle,
\end{equation}
where the indices $\mu$ and $\nu$ denote the coordinates of
$\mathcal{M}$. The real part of the quantum geometric tensor $Q$
is the Riemannian metric, while the imaginary part is the
curvature form giving rise to a geometric phase\cite{Venuti}.
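As an elementary illustration of these objects (a standard
single-qubit example, stated here for orientation rather than taken
from Ref.~\cite{Venuti}), consider the test-qubit ground state
$|g\rangle=(\sin\frac{\theta}{2},\,-\cos\frac{\theta}{2}e^{-i\phi})^T$
encountered in the preceding section, with parameters
$(\theta,\phi)$. A direct computation gives
\begin{equation}
Q_{\theta\theta}=\frac{1}{4},\qquad
Q_{\phi\phi}=\frac{\sin^{2}\theta}{4},\qquad
Q_{\theta\phi}=\frac{i\sin\theta}{4},
\end{equation}
so that the Riemannian metric components are
$g_{\theta\theta}=1/4$ and $g_{\phi\phi}=\sin^{2}\theta/4$, while the
imaginary part of $Q_{\theta\phi}$ yields (up to a sign convention)
the familiar Berry curvature $\sin\theta/2$, whose flux underlies the
$\pi(1+\cos\theta)$ structure of Eq.~(\ref{T_phase}).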
Similar to the heuristic argument that we have addressed for the
singularity of Berry curvature in the vicinity of quantum phase
transition, it has been shown that the quantum geometric tensor
also obeys critical scaling
behavior\cite{Venuti,Zanardi0,Zhu2006}. Therefore, viewing quantum
phase transitions from the perspectives of geometric phase and
quantum fidelity can be unified by the concept of quantum
geometric tensor.
In conclusion, we have presented a review of the recently established
criticality of the geometric phase, in which the geometric phase
associated with the many-body ground state exhibits universality,
or scaling behavior, in the vicinity of the critical point. In
addition, we have shown that one can investigate quantum phase
transitions from the viewpoint of several related geometric
quantities. The close relation recently recognized between quantum
phase transitions and the quantum geometric tensor may open
attractive avenues and fruitful dialog between different scientific
communities.
\section*{Acknowledgements}
This work was supported by the State Key Program for Basic
Research of China (No. 2006CB921800), the NCET and NSFC (No.
10674049).
The focus of this thesis
is on (1) the role of Ka\v c-Moody algebras in string theory and the
development of techniques for
systematically building string theory models based on higher
level ($K\geq 2$) KM algebras and (2) fractional superstrings, a new class of
solutions based on $SU(2)_K/U(1)$ conformal field theories. The content of
this thesis is as follows.
In chapter two we review KM algebras and their role in string theory.
In the next chapter,
we present two results concerning the construction of modular invariant
partition functions for conformal field theories built by tensoring
together other conformal field theories.
This is based upon our research in ref.~\pr{cleaver93a}.
First we show how the possible modular invariants for the tensor product
theory are constrained if the allowed modular
invariants of the individual conformal field theory factors have been
classified. We illustrate the use of these constraints for theories of
the type $SU(2)_{K_A}\otimes SU(2)_{K_B}$,
finding all consistent theories for $K_A$ and $K_B$ odd. Second we show how
known diagonal
modular invariants can be used to construct inherently asymmetric invariants
where the holomorphic and anti-holomorphic theories do not share the
same chiral algebra. Explicit examples are given.
Next, in chapter four
we investigate some issues relating to recently
proposed fractional superstring theories with $D_{\rm critical}<10$.
Using the factorization
approach of Gepner and Qiu, we systematically rederive
the partition functions of the $K=4,\, 8,$ and $16$ theories
and examine their spacetime
supersymmetry. Generalized GSO projection operators for the $K=4$ model
are found. Uniqueness of the twist field, $\phi^{K/4}_{K/4}$,
as source of spacetime fermions, is demonstrated. Our research
was originally presented in
refs.~[\putref{cleaver92a}, \putref{cleaver93b}].
\hfill\vfill\eject
{\parindent=.6cm
\noindent{\bf\bigfonts Table of Contents}\hfill\\
\vskip .5cm
\noindent
{\bf Acknowledgements}\quad\dotfill\quad iii\\
{\bf Dedication}\quad\dotfill\quad iv\\
{\bf Quotation}\quad\dotfill\quad v\\
{\bf Abstract}\quad\dotfill\quad vi\\
{\bf Table of Contents}\quad\dotfill\quad vii\\
{\bf List of Figures and Tables}\quad\dotfill\quad ix\\
{\ha{\bf 1.}{\bf Introduction}\quad\dotfill\quad 2\\}
{\hb{1.1}Reasons for String Theory Research\quad\dotfill\quad 2\\}
{\hb{1.2}Status of String Theory Phenomenology\quad\dotfill\quad 3\\}
{\hc{1.2.a} Phenomenological Restrictions\quad\dotfill\quad 7\\}
{\ha{\bf 2.}{\bf Ka\v c-Moody Algebras and Superstring
Theory}\quad\dotfill\quad 9\\}
{\hb{2.1}Review of Ka\v c-Moody Algebras\quad\dotfill\quad 9\\}
{\hc{2.1.a}Categories of Ka\v c-Moody Algebras\quad\dotfill\quad 12\\}
{\hc{2.1.b}Affine Algebras\quad\dotfill\quad 15\\}
{\hb{2.2}Application to String Theory\quad\dotfill\quad 18\\}
{\ha{\bf 3.}{\bf Modular Invariant Partition Functions}\quad\dotfill\quad 24\\}
{\hb{3.1}Review of Modular Invariance\quad\dotfill 24\\}
{\hb{3.2}Complications for Models Based on General Ka\v c-Moody
Models\quad\dotfill\quad 31\\}
{\hb{3.3}Constraints on Tensor Product Modular Invariants\quad\dotfill\quad
36\\}
{\hc{3.3.a} Example: SU(2)$_{K_A}\otimes$SU(2)$_{K_B}$ Tensor Product
Theories\quad\dotfill\quad 38\\}
{\hb{3.4}Left-Right Asymmetric Modular Invariants\quad\dotfill\quad 42\\}
{\hb{3.5}Concluding Comments on MIPFs\quad\dotfill\quad 46\\}
{\ha{\bf 4.}{\bf Fractional Superstrings}\quad\dotfill\quad 47\\}
{\hb{4.1}Introduction to Fractional Superstrings\quad\dotfill\quad 47\\}
{\hc{4.1.a}Parafermion Characters\quad\dotfill\quad 49\\}
{\hb{4.2}Fractional Superstring Partition Functions\quad\dotfill\quad 51\\}
{\hc{4.2.a}New Derivation of the Partition Functions\quad\dotfill\quad 54\\}
{\hc{4.2.b}Affine Factor and ``W'' Partition Function\quad\dotfill\quad 57\\}
{\hc{4.2.c}Theta-Function Factor and the ``V'' Partition
Function\quad\dotfill\quad 60\\}
{\hb{4.3}Beyond the Partition Function: Additional
Comments\quad\dotfill\quad 69\\}
{\hc{4.3.a}Bosonization of the $K=4$ Theory\quad\dotfill\quad 69\\}
{\hc{4.3.b}Generalized Commutation Relations and the GSO
Projection\quad\dotfill\quad 75\\}
{\hc{4.3.c}Unique Role of Twist Field
$(\phi^{K/4}_{K/4})^{D-2}$\quad\dotfill\quad 85\\}
{\hb{4.4}Concluding Discussion\quad\dotfill\quad 92\\}
{{\bf Appendix A:} Dynkin Diagrams for Lie Algebras
and Ka\v c-Moody Algebras\quad\dotfill\quad 94\\}
\noindent {{\bf Appendix B:} Proof that Completeness of the A-D-E Classification
of\hfill\\}
\noindent{\phantom{\bf Appendix B:} Modular Invariant Partition Functions
for $SU(2)_K$ is Unrelated\hfill\\}
\noindent{\phantom{\bf Appendix B:} to Uniqueness of the
Vacuum\quad\dotfill\quad 97\\}
{{\bf References}\quad\dotfill\quad 106\\}}
\hfill\vfill\eject
{\parindent=.6cm
\noindent{\bf\bigfonts List of Figures and Tables}\hfill\\
\vskip .5cm
\noindent
\hbox to 2.1cm{Figure 3.1\hfill}Two Conformally
Inequivalent Tori\quad\dotfill\quad 28\\
\hbox to 2.1cm{Figure 3.2\hfill}Lattice Representation
of a Two-Dimensional Torus Defined by\hfill\\
\hbox to 2.1cm{\hfill}Complex Number $\tau$\quad\dotfill\quad 28\\
\hbox to 2.1cm{Figure 3.3\hfill}Lattice Representation of a
Two-Dimensional Torus Defined by\hfill\\
\hbox to 2.1cm{\hfill}Complex Numbers $\lambda_1$
and $\lambda_2$\quad\dotfill\quad 28\\
\hbox to 2.1cm{Figure 3.4\hfill}The Two Independent Cycles on a
Torus\quad\dotfill\quad 29\\
\hbox to 2.1cm{Figure 3.5\hfill}Transformation of $\tau$ from Dehn Twist
around the $a$ Cycle\quad\dotfill\quad 29\\
\hbox to 2.1cm{Figure 3.6\hfill}Transformation of $\tau$ from Dehn Twist
around the $b$ Cycle\quad\dotfill\quad 29\\
\hbox to 2.1cm{Figure 3.7\hfill}Fundamental Domain $\cal F$ in Moduli Space
and Its Images under $S$\hfill\\
\hbox to 2.1cm{\hfill}and $T$\quad\dotfill\quad 30\\
\hbox to 2.1cm{Figure 4.1\hfill}Supersymmetry of Lowest Mass States of
Fractional Open String\quad\dotfill\quad 41\\
\hbox to 2.1cm{Figure A.1\hfill}Generalized Dynkin Diagrams of the Finite
KM Algebras\quad\dotfill\quad 94\\
\hbox to 2.1cm{Figure A.2\hfill}Generalized Dynkin Diagrams of the
Untwisted Affine KM\hfill\\
\hbox to 2.1cm{\hfill}Algebras\quad\dotfill\quad 95\\
\hbox to 2.1cm{Figure A.3\hfill}Generalized Dynkin Diagrams of the Twisted
Affine KM\hfill\\
\hbox to 2.1cm{\hfill}Algebras\quad\dotfill\quad 96\\
\hbox to 2.1cm{Figure B.1\hfill}The Integers (mod $N$)
Mapped to a Circle of Radius $N/2\pi$}\quad\dotfill\quad 98\\
\hbox to 2.1cm{\hfill}\hfill\\
\noindent\hbox to 1.9cm{Table 4.1\hfill}$Z\!\!\!Z_4$ Primary Fields\quad\dotfill\quad 70\\
\hbox to 1.9cm{Table 4.2\hfill}Primary Field Representation from Orbifold
Bosonization\quad\dotfill\quad 72\\
\hbox to 1.9cm{Table 4.3\hfill}Primary Field Representation from $R={\sqrt 6}$
Bosonization\quad\dotfill\quad 74\\
\hbox to 1.9cm{Table 4.4\hfill}Masses of $K=4$ Highest Weight
States\quad\dotfill\quad 83\\
\hbox to 1.9cm{Table 4.5\hfill}Mass Sectors as Function of $Z\!\!\!Z_3$
Charge\quad\dotfill\quad 84\\
\hbox to 1.9cm{Table 4.6\hfill}Fields with $\phi^{j_1}_{m_1}\neq
\phi^1_0$ with Conformal Dimensions in Integer Ratio\hfill\\
\hbox to 1.9cm{\hfill}with
$h(\phi^{K/4}_{K/4})$\quad\dotfill\quad 88\\
\hbox to 1.9cm{Table 4.7\hfill}Potential Alternatives, $\phi^{j_3}_{\pm j_3}$,
to $\phi^{K/4}_{K/4}$ for Spin Fields\quad\dotfill\quad 90\\
\hbox to 1.9cm{Table B.1\hfill}A--D--E Classification in Terms of
$\Omega_{\delta}$ Basis Set\quad\dotfill\quad 98
\hfill\vfill\eject
\noindent Figures 3.1-7 are from ref.~\pr{lust89}.
\noindent Figures A.1-3 are from ref.~\pr{cornwell89}.
\hfill\vfill\eject
\pagenum=0\pagenumstyle{arabic}
\pagenumstyle{arabic}\pagenum=0
\hbox to 1in{\hfill}
\vskip 3in
\centertext{\bf Ka\v c-Moody Algebras and String Theory}
\hfill\vfill\eject
\chapternumstyle{blank}
\noindent {\bf Chapter 1: Introduction}\vskip .8cm
{\hb{1.1}{\bfs Reasons for String Theory Research}}\vskip .5cm
\chapternumstyle{arabic}\sectionnumstyle{arabic}
\chapternum=1\sectionnum=1\equationnum=0
Elementary particle physics has achieved phenomenal success in recent
decades, resulting in the Standard Model (SM),
$SU(3)_C\times SU(2)_L\times U(1)_Y$,
and the verification, to high
precision, of many of its predictions. However, there are still several
shortcomings or unsatisfying aspects of the theory. Consider, for example,
the following:\mpr{langacker92}\vskip 1cm
\item {1.} The SM is very complicated, requiring measurement of some 21 free
parameters, such as the masses of the quarks and leptons
and the coupling constants. We should expect the true fundamental theory to
have at most one free parameter.
\item {2.} The SM has a complicated gauge structure. A gauge group
that is the direct product of three gauge groups with independent couplings
does not seem fundamental.
\item {3.} There seems to be a naturalness problem concerning
the scale at which the
electroweak (EW) symmetry, $SU(2)_L\times U(1)_Y$, breaks to the
electromagnetic
$U(1)_{EM}$. Why is this scale of 100 GeV so much smaller than the Planck
scale of $10^{19}$ GeV? Although this is ``explained'' by the scale of the
Higgs mass, fine-tuning is required in renormalization theory to keep
the Higgs mass on the order of the symmetry breaking scale. This seems to
suggest the need for supersymmetry at a higher scale.
\item{4.} Fine-tuning is also required to solve the strong CP problem.
\item{5.} The SM provides no unification with gravity, {{\it i.e.}}, no means of
forming a consistent theory of quantum gravity.
\item{6.} The cosmological constant resulting from EW symmetry breaking
should be approximately 50 orders of magnitude higher than the experimental
limit. Solving this problem from the SM viewpoint again requires a
fine-tuned
cancellation.
\vskip 1cm
These shortcomings have
motivated a search for phenomenologically
viable
Grand Unified Theories (GUT's) that would unify
SM physics through a single force and even for a Theory of Everything
(TOE) that
could consistently combine the SM with gravity. In the last decade,
this pursuit has resulted in an intensive study of string theory, which
involves only one truly
elementary ``particle,'' a (closed) string-like (rather than point-like)
object with a length on the order of the Planck scale,
$l_{\rm Pl}=10^{-33}$cm. In this theory all particles ordinarily
regarded as
``elementary'' are explained as vibrational or
internal modes of this fundamental string.
One of the advantages of string theory
is that it removes the infinities resulting from high-energy
interactions of point-like
particles, without requiring renormalization techniques. The supersymmetric
version of string theory contains no ultraviolet divergences.
String theory is the first theory to successfully
combine the SM forces with gravity.
Any string theory with $D>2$ contains an infinite tower of
vibrational/internal
excitation modes. Included in the closed (super)string spectrum is a massless
spin-2 (and spin-3/2) state which can be identified with the graviton
(and its supersymmetric partner, the gravitino).
\sectionnumstyle{blank}\vskip .5cm
{\hb{1.2}{\bfs Status of String Theory Phenomenology}}\vskip .5cm
\sectionnumstyle{arabic}\sectionnum=2\equationnum=0
In a sense string theory has been too successful following the explosion
of interest in the mid-80's. The superstring (bosonic string)
theory is inherently a ten- (26-) dimensional spacetime theory. Although
in both cases there
are only a very few solutions for the theories when all
spacetime dimensions are uncompactified, for each dimension that is
compactified, there arise many more possible solutions.
With only four uncompactified
spacetime dimensions, there is a plethora (on the order of several
million) of distinct solutions to the superstring theory.
Many different approaches to ``compactification,'' {\it e.g.},
bosonic lattices and orbifolds, free fermions, Calabi-Yau manifolds,
and $N=2$ minimal models, have been devised.
(Often there is, however, much overlap and
sometimes even complete equivalence between alternative methods of
compactification.) Four-dimensional solutions
can be classified into two broad categories: (1) those solutions
involving an actual geometrical
compactification from ten uncompactified dimensions,
and (2) those with
internal degrees of freedom having no equivalent representation
in terms of six well-defined compactified dimensions.
There is a
potential problem with solutions in the first class: such models with
$N=1$ spacetime supersymmetry (SUSY) and/or chiral fermions cannot
contain massless spacetime scalar fields in the adjoint or higher dimensional
representations of the gauge group.\mpr{lewellen,font90,ellis90}
This presents a possible difficulty for string theory, because typical
GUT's depend upon scalars in these representations to break the
gauge symmetry down to the SM. In the usual approach,
spontaneous symmetry breaking is brought about
by vacuum expectation values (VEV's) of these scalars.
Thus, the gauge groups of these string models either must break to
the standard model near the string (Planck) scale or a non-standard
Higgs breaking is required. An example of the first method is symmetry
breaking by Wilson lines in Calabi-Yau vacua.\mpr{candelas85}
Flipped $SU(5)$ is the primary example of the second
approach.\mpr{antoniadis87b}
However, standard GUT's such as $SU(5)$ or $SO(10)$
are excluded from this class of string theory models.
In the first class of models,
the absence of spacetime scalars in higher representations results
from the association of geometrical compactification with level-one
KM algebras.
In other words,
the connection of these models to level-one KM algebras is basically a byproduct of
the classical idea of ``compactification.''
Because of this,
basing a model on level-one KM algebras has been
the standard approach to string theory phenomenology.
Starting from
either the ten dimensional type-II or heterotic superstrings,
four-dimensional spacetime has most often been derived through
``spontaneous compactification'' of the extra six dimensions.
In ten uncompactified
dimensions the only modular invariant heterotic
string models with spacetime SUSY and gauge symmetry are the
level-one $E_8\otimes E_8$ and level-one $SO(32)$ solutions.
(In ten uncompactified dimensions, the type-II string has $N=2$ SUSY,
but no gauge group.)
Compactification of the extra six dimensions
on a Calabi-Yau manifold or symmetric orbifold,
naturally keeps
the KM algebra at level-one. The resulting gauge group
$g$, is a subgroup of either $E_8\otimes E_8$ or $SO(32)$,
and the representations of the gauge group that appear are
determined by the level of the algebra.
Models using bosonic lattice compactification, or equivalently
complex world sheet fermions,\mpr{kawai87a,antoniadis87,antoniadis88}
likewise have level-one
KM algebras, with the associated gauge group being a subgroup of
either $SO(12)\otimes E_8\otimes E_8$ or $SO(44)$.
Models can be based on higher-level KM algebras, if the demand for
a classical interpretation of compactification is relaxed.
Such models fall into the second general
class of string solutions and can contain scalars in the adjoint or higher
representations.
These states can exist in the spectrum if
their gauge group arises from a level-$K\geq 2$ KM algebra on
the world sheet.
Examples are given in \pr{lewellen}, where the approach to
such models is via {\it real} fermions.
The unitary representations of a level-$K$ KM algebra in a string model
are required to satisfy (see section 2.2)
$$ K\geq \sum_{i=1}^{{\rm rank}\, {\cal L}}
n_i m_i\,\, , \eqno\eqnlabel{unitaryrep}$$
where $n_i$ are the Dynkin labels of the highest weight representation of the
associated Lie algebra, ${\cal L}$,
and $m_i$ are the related co-marks.
Based on this unitarity constraint, at level-one only the singlet, spinor,
conjugate spinor and vector representations of $SO(4n+2)$ can
appear. For $SU(N)$ level-one, only
the ${N\choose 0}$ ({\it i.e.,} the singlet),
${N\choose 1}$, ${N\choose 2}$, $\dots$, ${N\choose N-1}$
representations are allowed,\footnote{For $SU(N)$ at level-$K$,
the rule for determining all unitary representations
is the following:
Only those representations that correspond to a Young tableau
with $K$ or fewer columns are allowed. Henceforth
a level-$K$ KM algebra, ${\tilde {\cal L}}$, based on a Lie algebra, ${\cal L}$, will often be
denoted by ${{\cal L}}_K$, with the exception of those Lie algebras that already
carry a subscript denoting the rank, {\it e.g.,} $E_6$.}
while for $E_6$ level-one the $1$, $27$ and $\overline{27}$
representations can be present.
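For instance, for $SU(2)_K$ a spin-$j$ representation has the single
Dynkin label $n_1=2j$ and co-mark $m_1=1$, so
eq.~(\puteqn{unitaryrep}) reduces to $2j\leq K$: at level-one only the
singlet and the doublet survive, and the adjoint ($j=1$) first becomes
unitary at level-two. This is the simplest instance of the general
pattern noted above, that adjoint matter requires $K\geq 2$.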
Until geometric ``compactification'' from ten dimensions is sacrificed,
higher-level models cannot be reached.
However, when the basic strategy is generalized,
level-one models become much less special.
If one starts with all spatial dimensions initially
compactified, and not well defined spatially,
the occurrence of level-one KM algebras is not necessarily favored.
That is, after ``decompactification'' of three spatial dimensions,
a gauge group in the $(3+1)$-dimensional
space based on higher level algebras
becomes possible (and not unlikely).
The difference between the two classes of ``four-dimensional'' string
models relates
to the question of how valid it is to think geometrically about
physics at the Planck scale.
Are the lattice, free fermion, and Calabi-Yau approaches
to compactification
too classical for Planck scale physics?
Going beyond the classical notion of spatial dimensions
was one reason that Gepner considered $N=2$ minimal
models,\mpr{gepner87b} (even though
Calabi-Yau manifolds and $N=2$
minimal models were eventually found to be
equivalent\mpr{gepner87b}).\footnote{We suggest that
this indicates more than just the
mathematical equivalences of the approaches
as demonstrated through Landau-Ginsburg potentials.
It is another example that makes us question the meaningfulness of the
concept of well-defined compactified
spatial dimensions of Planck scale length.}
Many considerations suggest investigating string models
based on higher-level KM algebras, even though the
degrees of freedom of the models generally cannot be
expressed in terms of compactified spatial dimensions.
The central charge of the
level-$K$ algebra
(which measures the contribution to the conformal anomaly of the world
sheet theory) is
$$c_{{\tilde {\cal L}}}= {K\, {\rm dim}\, {\cal L}\over K+ {1\over 2} C_A} \,\, ,
\eqno\eqnlabel{centralcharge}$$
where $C_A$ is the quadratic Casimir for the adjoint representation.
For simply-laced groups the central charge equals the rank of the
group at level one and monotonically approaches the dimension of the group
as $K\rightarrow\infty$. This demonstrates that heterotic models constructed
from free real world-sheet bosons, $\partial X^i$, compactified on a lattice
(or equivalently free complex fermions)
include only simply-laced level-one
algebras.\footnote{The rank of the group is at least equal to the
number of bosons, since each bosonic operator $\partial X^i$ generates
a $U(1)$ KM algebra. Also note that we have specified simply-laced,
because, as eqs.~(2.2.3) and (2.2.9) together indicate,
the central charge for non-simply-laced algebras
is greater than the rank of the group even at level-one.}
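As a simple check of eq.~(\puteqn{centralcharge}), consider
$SU(2)_K$, for which ${\rm dim}\,{\cal L}=3$ and ${1\over 2}C_A=2$:
then $c=3K/(K+2)$, which equals the rank ($c=1$) at level-one and
approaches ${\rm dim}\, SU(2)=3$ as $K\rightarrow\infty$. Similarly,
$E_8$ at level-one has $c=248/(1+30)=8={\rm rank}\, E_8$.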
Hence, as stated above, both the $E_8\otimes E_8$ and $SO(32)$ ten-dimensional
models are level-one. With
compactification that treats left-moving and right-moving modes of the
string symmetrically the level remains one.
Construction of models with higher-level gauge groups requires
asymmetry between the left- and right-moving fields on the
world-sheet.\mpr{lewellen}
Associated with this property of the fields there
are asymmetric modular invariants.
Systematically constructing asymmetric modular invariants has proven
very difficult, except for the special case of models based
on free bosons or fermions.
However, even for asymmetric models,
use of lattice bosons (or equivalently complex fermions) limits
the possibilities to level-one models.
The first and simplest alternative is
to use real fermions instead.\mpr{lewellen} However, to date, no
phenomenologically viable model has been found using this
approach. A more general method for constructing
(asymmetric) modular invariant tensor products of KM algebras (and of
conformal field theories) has not been developed. Several years of
research has shown that an enormous collection of consistent
free fermion models exist, of which only a small percentage are
actually left-right
symmetric. Perhaps, a systematic approach to developing
asymmetric modular invariants for tensor products of higher-level KM algebras
could produce a new class of string models with viable
phenomenology. Steps toward developing this approach
are the focus of chapter 3 of this thesis.
\vskip .5cm
{\hc{1.2.a}{\sl Phenomenological Restrictions}}
\vskip .5cm
Recent results from LEP have placed tighter constraints on viable
string models.
Using renormalization group equations (RGE), the measured high precision
values of
the standard model coupling constants have been extrapolated
from $M_Z$ to near the Planck scale. It was found that the RGE for the
minimal supersymmetric standard model with just two Higgs doublets predict
a unification of the three coupling constants $g_3$, $g_2$ and $g_1$ for
$SU(3)\times SU(2)_L\times U(1)_Y$, respectively,
at about $10^{16}$ GeV. For string theory
this naively poses a problem since the string unification scale is generally
required, at tree level, to be near the Planck scale
(around $10^{18-19}$ GeV).
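To make the numbers concrete, recall the standard one-loop estimate
(a textbook result quoted here for orientation, not derived in this
thesis):
$$\alpha_i^{-1}(\mu)=\alpha_i^{-1}(M_Z)-{b_i\over 2\pi}
\ln\left({\mu\over M_Z}\right)\,\, ,$$
with MSSM coefficients $(b_1,b_2,b_3)=(33/5,\, 1,\, -3)$ for the
GUT-normalized $U(1)_Y$, $SU(2)_L$, and $SU(3)_C$ couplings. Starting
from the measured values at $M_Z$, the three inverse couplings then
meet near $\mu\sim 2\times 10^{16}$ GeV, the unification scale quoted
above.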
Three classes of solutions have been proposed for resolving
the potential inconsistency between these extrapolations and string
theory.\markup{[\putref{bailin92}]}
The first proposal is to regard the unification of the couplings at
$10^{16}$ GeV using the minimal SUSY standard model RGE as a coincidence,
and to allow additional states between the electroweak scale and
the string unification scale that raise the RGE unification scale.
A second suggestion is that string threshold effects could
significantly lower the
string scale down to the minimal SUSY standard model RGE unification
scale.
The third possibility is that a
grand unified gauge group results from
a Ka\v c-Moody algebra at level $K\geq 2$.
As we have discussed,
adjoint (and higher) representations for Lorentz scalars become possible
when the level of the KM algebra is greater than one.
These adjoint scalars might allow $SU(5)$ or $SO(10)$ grand
unification. Thus, the SUSY standard model couplings could
unify at $10^{16}$
GeV and run upward from there
with a common value to the string unification scale.
The last proposal appears most natural and
appealing. The concept of a grand unified gauge group
fits with the idea of successive levels of increasing
symmetry much better than
going directly from the symmetry of the standard model to the symmetry of
the string. It seems far more natural for the strong force to merge
with the electroweak force significantly
below the string scale, rather than at the scale where the gravitational
coupling
(and, additionally, all hidden sector gauge couplings) finally merges.
Thus, we will examine various aspects of higher-level Ka\v c-Moody
algebras in string models. In chapter 2 we review Ka\v c-Moody
algebras in greater depth
and discuss their applications to string theory, including
general properties of and restrictions on higher level models.
In chapter 3 we develop
tools for systematically constructing string
models containing (asymmetric) higher-level KM algebras.
Chapter 4 heads along a different direction as we
investigate aspects of a potentially new class of string models
with spacetime SUSY and critical dimensions below ten.
These models seem to have a local world sheet symmetry that pairs
the world sheet boson not with a fermion, but rather with
a primary field of a higher-level $SU(2)_K/U(1)$
conformal field theory.
\hfill\vfill\eject
\chapternumstyle{blank}
\noindent {\bf Chapter 2: Ka\v c-Moody Algebras and String Theory}\vskip .8cm
\chapternumstyle{arabic}\sectionnumstyle{arabic}
\chapternum=2\sectionnum=1\equationnum=0
{\hb{2.1}{\bfs Review of Ka\v c-Moody Algebras}}
\sectionnumstyle{arabic}\sectionnum=1\equationnum=0
\vskip .5cm
At the heart of the gauge symmetries of string theory are not only Lie
algebras, but the more complicated Ka\v c-Moody (KM) algebras,\mpr{kac83}
of which the former are subalgebras.
Because of the
importance of KM algebras in string theory, we review them in this chapter
before proceeding in the next chapter
with our study of modular invariant partition functions
for tensor products of KM algebras.
Often in string theory the terms ``affine algebra,''
``affine Ka\v c-Moody algebra,'' and ``Ka\v c-Moody algebra''
are used interchangeably. The imprecise use of these
terms can be confusing, since there are actually three distinct
classes of KM algebras, only
one of which is ``affine.''\mpr{cornwell89}
The basic step required to progress
from Lie algebras\footnote{Specifically, compact simple or compact
semi-simple Lie algebras, which is henceforth implied.}
to KM algebras is to relax the
{\it finite}-dimension restriction on Lie algebras and
consider {\it infinite}-dimensional generalizations.
As we shall show, many of the features of semi-simple
Lie algebras reappear in KM algebras. In fact, Lie
algebras can be regarded as particular cases of KM algebras with the
special property of being finite-dimensional.
Analogous to Lie algebras, KM algebras are defined by
generalized Cartan matrices (or equivalently by
Dynkin diagrams).
We will discuss KM algebras in terms of these matrices.
After lifting the finite-dimension restriction, examination of
generalized Cartan matrices shows that KM algebras can be grouped into
three distinct classes, called the ``finite'' (corresponding to
standard Lie algebras), ``affine,''
and ``indefinite'' types. Within the affine class are two subclasses,
denoted as ``twisted'' and ``untwisted.''
Recall that the elements of an $l\times
l$-dimensional Cartan matrix, $\bmit A$, for a Lie algebra, ${\cal L}$,
of rank-$l$ are defined by
$$ A_{jk}= {2\langle {\bmit \alpha}_j,{\bmit \alpha}_k\rangle\over
\langle {\bmit \alpha}_j,{\bmit \alpha}_j\rangle} \,\, ,
\quad j,k\in I^{{\cal L}}\equiv \{ 1,2,\dots ,l\}\, ,\eq{cella}$$
where ${\bmit \alpha}_j$ is a simple root of the algebra.
For Lie algebras, the inner product of two roots is defined by
$$\langle {\bmit \alpha}_j,{\bmit \alpha}_k\rangle
\equiv {\bmit \alpha}_j\cdot{\bmit \alpha}_k =\sum_{m=1}^{l} (\alpha_j)_m (\alpha_k)_m
\,\, .
\eq{defdotprod}$$
(As we show, a more general definition applies for the inner product
of roots in a KM algebra. See section 2.1.a.)
Cartan matrices are defined by four properties:
\noindent\hbox to .2cm{\hfill} {(a)} $A_{jj}= 2$ for $j= 1,2,\dots ,l$;
\noindent\hbox to .2cm{\hfill} {(b)} $A_{jk}= 0$, $-1$, $-2$, or $-3$ if $j\neq k$;
\noindent\hbox to .2cm{\hfill} {(c)} for $j\neq k$, $A_{jk}=0$ if and only if (iff) $A_{kj}=0$;
\noindent\hbox to .2cm{\hfill} {(d)} det$\,\bmit A$ and all proper principal minors of $\bmit A$
are positive.\footnote{A {\it principal minor of} $\bmit A$ is the
determinant of a {\it principle submatrix of} $\bmit A$, which is a
submatrix consisting of elements $A_{jk}$ in which $j$ and $k$ both vary
over the same subset of indices. These quantities are {\it proper} if the
subset of indices is a proper subset of the set of indices.}
\noindent Classification of all $l\times l$-dimensional
matrices with these properties
completely classifies Lie algebras of rank-$l$.
In the late 1960's Ka\v c and Moody discovered that some of these
properties could be relaxed to produce a new, enlarged set of algebras,
with the primary difference being that the new algebras were
infinite-dimensional. By infinite-dimensional, we mean that there is
an infinite
number of roots (equivalently an infinite number of generators)
of the algebra.
Their generalized\footnote{There is additionally a slight
change of notation. For Lie algebras, $I^{{\tilde {\cal L}}}$ should be altered to
$I^{{\cal L}}= \{ 1,2,\dots , l\equiv d_{\bmit A}\}$.}
Cartan matrix, ${\bmit A}^{\rm KM}$,
is defined as $d_{{{\bmit A}^{\rm KM}}}\times d_{{{\bmit A}^{\rm KM}}}$-dimensional with the
properties that
\noindent\hbox to .2cm{\hfill} {(a')} $A_{jj}=2$ for $j\in I^{{\tilde {\cal L}}} = \{0,1,\dots ,d_{{{\bmit A}^{\rm KM}}}-1\}$;
\noindent\hbox to .2cm{\hfill} {(b')} for $j\neq k$ $(j,k\in I)$, $A_{jk}$ is a non-positive integer;
\noindent\hbox to .2cm{\hfill} {(c')} for $j\neq k$ $(j,k\in I)$, $A_{jk}= 0$ iff $A_{kj}= 0$.
\noindent One modification is that property (d) has
been lifted.
No longer must the determinant or all proper minors
of the matrix be positive.
det$\, {{\bmit A}^{\rm KM}} \leq 0$ is now
allowed, with the rank-$l$ of the matrix, ${\tilde {\cal L}}$,
determined by the largest square submatrix of
${{\bmit A}^{\rm KM}}$ with non-zero determinant.
Thus, $l= d_{{{\bmit A}^{\rm KM}}}$
only when det$\, {{\bmit A}^{\rm KM}}\neq 0$.
Otherwise $l<d_{{{\bmit A}^{\rm KM}}}$.
Second, non-diagonal elements $A_{jk}< -3$, for $j\neq k$,
are permitted.
The basic ideas and terminology for roots and root subspaces for a complex
KM algebra, ${\tilde {\cal L}}$, are very similar to those for a semi-simple complex Lie
algebra.
The commutative subalgebra, $\cal H$, of ${\tilde {\cal L}}$ is referred to as the
Cartan subalgebra (CSA) of ${\tilde {\cal L}}$, and the set of elements $E^{{\bmit \alpha}}$
of ${\tilde {\cal L}}$
possessing the property that
$$\lbrack {\bmit h}, E^{{\bmit \alpha}} \rbrack = \langle {{\bmit \alpha}}, {\bmit h} \rangle
E^{{\bmit \alpha}}
\,\, ;\quad\quad {\rm\,\, for~all~} {\bmit h}\in {\cal H}\,\, ,
\eq{comrelrv}$$
form the root subspace ${\tilde {\cal L}}_{{\bmit \alpha}}$ corresponding to
the root ${\bmit \alpha}$. The set of roots ${\bmit \alpha}_i$, for $i\in I^{{\tilde {\cal L}}}$, are the
simple
roots upon which the generalized Cartan matrix is based. A generic root,
${\bmit \alpha}$, has the form
$${\bmit \alpha} = \sum_{i\in I} c^{{\bmit \alpha}}_i {\bmit \alpha}_i, \eq{defgenroot}$$
where the $c^{{\bmit \alpha}}_i$ are either all non-negative integers
or all non-positive integers.
A distinctive difference between Lie algebras and KM algebras is that,
whereas
the dimension, $n_{{\cal L}}$, of the CSA of a Lie algebra
is equal to the rank, $l$, of the Cartan matrix, this relation does not hold
for KM algebras. Rather, for KM algebras\mpr{cornwell89}
the dimension of the generalized CSA is
$$n_{{\tilde {\cal L}}}= 2d_{{{\bmit A}^{\rm KM}}} - l\,\, .\eq{dimkmcsa}$$
Only when $l= d_{{{\bmit A}^{\rm KM}}}$ does $n_{{\tilde {\cal L}}}= l$.
For any KM algebra, the CSA $\cal H$ can be divided into two parts,
$\cal H'$ and $\cal H''$:
$\cal H'$ being a
$d_{{{\bmit A}^{\rm KM}}}$-dimensional algebra with
$ \{ H^i ,\, i\in I^{{\tilde {\cal L}}}\}$ as its basis; and
$\cal H''$ simply defined to be the
$(d_{{{\bmit A}^{\rm KM}}} - l)$-dimensional complementary subspace of
$\cal H'$ in $\cal H$.
The $ H^i$ are
the generators giving the first $d_{{{\bmit A}^{\rm KM}}}$ components,
$$\alpha_j (H^i) \equiv \langle {{\bmit \alpha}}_j , H^i \rangle \,\, ,
\eq{lacompsr}$$
of the simple roots, ${\bmit \alpha}_j$.
$\cal H''$ is non-trivial only when det ${{\bmit A}^{\rm KM}} = 0$.
Within $\cal H'$ is a subset, $\cal C$,
that forms the center of the KM algebra.
The elements ${\bmit h}\in {\cal C}$ commute with all the members of ${\tilde {\cal L}}$.
That is, if ${\bmit h}\in {\cal C}$ then
$$\langle {{\bmit \alpha}}_j, {\bmit h}\rangle =0 {\rm ~for~all~}
j\in I^{{\tilde {\cal L}}}.\eq{defcenter}$$
That $\cal C$ is $(d_{{{\bmit A}^{\rm KM}}} - l)$-dimensional is shown by elementary matrix
theory. The proof is short and is as follows:
Any element of ${\bmit h}\in\cal H'$ has the form
$${\bmit h}= \sum_{j\in I} \mu_j H^j \,\, .\eq{hincenter}$$
If ${\bmit h}$ is also in $\cal C$, then
$$ \sum_{j\in I} \mu_j \langle {{\bmit \alpha}}_i , H^j \rangle = 0
\,\, {\rm ~for~all~} i\in I^{{\tilde {\cal L}}}\,\, .\eq{deqns}$$
The $\cal H'$ can always be rotated into the Chevalley basis\mpr{cornwell89}
where the set of
eqs.~(\puteqn{deqns})
becomes the matrix eq.
$$ {{{\bmit A}^{\rm KM}}} {\bmit \mu} =0\,\, .\eq{mateqnmu}$$
($\bmit \mu$ is a column vector with entries $\mu_j$.)
Since ${{\bmit A}^{\rm KM}}$ has rank $l$, elementary matrix theory shows that there are
$d_{{{\bmit A}^{\rm KM}}} - l$ linearly independent solutions to $\bmit \mu$.
The basis of the $d_{{{\bmit A}^{\rm KM}}}$-dimensional subspace
${\cal H}^-\equiv {\cal H} - {\cal C}$
(which includes $\cal H''$)
can be formed from those elements ${{\bmit h}^k_+}\in {\cal H}$,
where $\langle {{\bmit \alpha}}_j, {\bmit h}^k_+ \rangle = \delta_{jk}$
and $j,k\in I^{{\tilde {\cal L}}}$.
Thus, no non-trivial element
${\bmit h}''= \sum_{k\in I}\lambda_k {\bmit h}^k_+\in{\cal H}''\subset {\cal H}^-$
can be in $\cal C$, since
$$ \langle {{\bmit \alpha}}_j , {\bmit h}'' \rangle = \lambda_j\,\, . \eq{noch}$$
\vskip .5cm
{\hc{2.1.a}{\sl Categories of Ka\v c-Moody Algebras}}
\vskip .5cm
Matrices satisfying properties (a'--c') defining a generalized Cartan
matrix can be divided
into three categories, each corresponding to a unique
class of KM algebras. The following three theorems define these
classes.
\vskip 2mm
\noindent {\bf Theorem} 2.1: A complex KM algebra, ${\tilde {\cal L}}$, is ``finite''
(equivalently, it is a Lie algebra), iff
all the principal minors of the corresponding
generalized Cartan matrix, ${{\bmit A}^{\rm KM}}$, are positive.
\noindent This constraint on the principle minors is equivalent to
demanding that:
\noindent\hbox to .2cm{\hfill} {(F.1)} det$\, {{\bmit A}^{\rm KM}} \neq 0$;
\noindent\hbox to .2cm{\hfill} {(F.2)} there exists a
vector
${\bmit u}> 0$
of dim $d_{{{\bmit A}^{\rm KM}}}$ such that
${{\bmit A}^{\rm KM}} {\bmit u}>0$;\footnote{${\bmit u}> 0$ is defined to mean
$u_j>0$ for all $j\in I$. Similar definitions apply when
``$<$'', ``$\geq$'', or ``$\leq$'' appear in vector relations.} and
\noindent\hbox to .2cm{\hfill} {(F.3)} ${{\bmit A}^{\rm KM}} {\bmit v}\geq 0$ implies ${\bmit v}\geq 0$.
\noindent Properties (F.1-3) imply that the associated algebra does not contain
any {\it imaginary} roots, {\it i.e.}, roots $\alpha$ such that
$\langle \alpha , \alpha \rangle \leq 0$,\mpr{cornwell89}
which corresponds to reimposing constraints (b) and (d).
Hence, these Cartan matrices define finite Lie algebras.
\vskip 2mm
\noindent {\bf Theorem} 2.2: A complex KM algebra, ${\tilde {\cal L}}$, is ``affine''
iff its generalized Cartan matrix, ${{\bmit A}^{\rm KM}}$, satisfies
det$\,{{\bmit A}^{\rm KM}} =0$ and all the proper principal minors of ${{\bmit A}^{\rm KM}}$
are positive.
\noindent An equivalent definition of this class is to
require that the matrix is such that:
\noindent\hbox to .2cm{\hfill} {(A.1)} det$\, {{\bmit A}^{\rm KM}} =0$ but $l= d_{{{\bmit A}^{\rm KM}} } -1$;
\noindent\hbox to .2cm{\hfill} {(A.2)} there exists a vector ${\bmit u}>0$ such that
${{\bmit A}^{\rm KM}} {\bmit u} = 0$; and
\noindent\hbox to .2cm{\hfill} {(A.3)} ${{\bmit A}^{\rm KM}} {\bmit v}\geq 0$ implies
${{\bmit A}^{\rm KM}} {\bmit v}= 0$.
\noindent With these properties, this class of KM algebras
must contain imaginary roots. The term ``affine'' is derived
from the special characteristics of its generalized Weyl
group.\mpr{cornwell89} Each complex affine KM algebra,
${\tilde {\cal L}}^{\rm aff}$,
can be constructed from an associated complex simple Lie algebra,
${\cal L}$. The property that $l= d_{{{\bmit A}^{\rm KM}} } -1$, together with det${\, {{\bmit A}^{\rm KM}}}=0$,
places a severe constraint on the one
additional simple root, ${{\bmit \alpha}}_0$.
In terms of its $l$-dimensional projection onto the Lie algebra subspace,
which we denote by ${{\bmit \alpha}}_0^{\cal L}$, the constraint is
$${{\bmit \alpha}}_0^{\cal L}= -\sum_{j=1}^{d_{{{\bmit A}^{\rm KM}}} -1}{{\bmit \alpha}}_j \equiv {-{\bmit\psi}}
\,\, ,\eq{alzerodef}$$
where ${{\bmit \alpha}}_j$ are the simple roots of the Lie algebra, ${\cal L}$, and
${\bmit \psi}$ is its highest root.
The affine algebras are the class of KM algebras
upon which the spacetime gauge groups of string theory are based;
therefore, affine algebras are discussed in greater detail in the following
(sub)sections.
The last class of KM algebras, the ``indefinite'' algebras, is most simply
defined by those generalized Cartan matrices that satisfy the conditions of
neither Theorem 2.1 nor Theorem 2.2. Indefinite matrices have the following
properties.
\noindent\hbox to .2cm{\hfill} {(I.1)} there exists a ${\bmit u}>0$ such that
${{\bmit A}^{\rm KM}} {\bmit u}<0$; and
\noindent\hbox to .2cm{\hfill} {(I.2)} ${{\bmit A}^{\rm KM}} {\bmit v}\geq0$ and ${\bmit v}\geq 0$ imply that
${\bmit v}=0$.
\noindent As (I.1-2) indicate, indefinite algebras also have imaginary roots.
For a specific $d_{{{\bmit A}^{\rm KM}} }$ there are only a finite number
of possible generalized Cartan matrices in the finite or affine classes.
In the finite case,
where $l=d_{{{\bmit A}^{\rm KM}} }$, these matrices
correspond to the standard simple Lie algebras, which are denoted by
$A_l\,$, $B_l\,$, $C_l\,$, $D_l\,$, $E_{6,7,8}\,$,
$F_4\,$, and $G_2\,$. In the affine case, where
$l= d_{{{\bmit A}^{\rm KM}}} -1$, there is an untwisted generalization
of each Cartan matrix associated with a Lie algebra of rank-$l$.
Common notation for the affine algebras is to add
a superscript of $(1)$ to the Lie algebra symbol.
For example, the untwisted affine version of $A_l$
is denoted by $A_l^{(1)}$.
Additionally, for $A_l\,$, $D_l\,$, and $E_{6}$ there is
a ``twisted'' generalization denoted by superscripts $(2)$.
There is also a
second twisted affinization of $D_4$, denoted by $D_4^{(3)}$. The
twisted algebras result from
either a $Z\!\!\!Z_2$ or a $Z\!\!\!Z_3$ rotation and projection
of the roots of the untwisted
affine algebra,\mpr{cornwell89}
and are non-simply-laced affinizations
of simply-laced Lie algebras.
The third type of KM algebra,
the ``indefinite'' class, is appropriately named because
there is an infinite number of matrices
that meet neither ``finite'' nor ``affine'' requirements
for each value of $d_{{{\bmit A}^{\rm KM}}}$.
All of these matrices
correspond to non-isomorphic algebras not grounded in generalizations of
the Lie algebras. Very few (if any) applications for indefinite
KM algebras have been found.
In particular, they appear to play no part in string theory.
To illustrate the differences of the three classes, consider the simplest
non-trivial generalized Cartan matrix possible, the $2\times 2$-dimensional
$${{{\bmit A}^{\rm KM}}}= \left(\matrix{2&-r\cr
-s&2\cr}\right)\,\, ,$$
where $r$ and $s$ are positive integers.
Now let us classify all of the possible KM algebras in each class
associated with specific values for $r$ and $s$.
\vskip 2mm
\item{1.} Finite (Lie) algebras: det$\, {{{\bmit A}^{\rm KM}}}> 0$ so $rs<4$. There are only
three possibilities for non-equivalent algebras,
\item\item{a.} $r=1$, $s=1$ corresponding to $A_2$;
\item\item{b.} $r=1$, $s=2$ corresponding to $B_2$; and
\item\item{c.} $r=1$, $s=3$ corresponding to $G_2$.
\vskip 2mm
\item{2.} Affine algebras: det$\, {{{\bmit A}^{\rm KM}}} =0 $ so $rs=4$. There are
only two inequivalent possibilities,
\item\item{a.} $r=1$, $s=4$ corresponding to $A^{(2)}_2$; and
\item\item{b.} $r=2$, $s=2$ corresponding to $A^{(1)}_1$.
\vskip 2mm
\item{3.} Indefinite algebras: det$\, {{{\bmit A}^{\rm KM}}}<0 $ so $rs>4$. There is
an infinite number of choices for $r$ and $s$ resulting in non-isomorphic
algebras.
\vskip 2mm
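This bookkeeping is easily mechanized. The short Python sketch below (our
own illustration) classifies the $2\times 2$ matrices above purely by the
sign of det$\,{{\bmit A}^{\rm KM}}=4-rs$, reproducing the three lists:
\begin{verbatim}
# Sketch: classify [[2,-r],[-s,2]] by the sign of det = 4 - rs.
def classify(r, s):
    det = 4 - r * s
    if det > 0:
        return "finite"      # rs < 4: A_2, B_2, or G_2
    if det == 0:
        return "affine"      # rs = 4: A_2^(2) or A_1^(1)
    return "indefinite"      # rs > 4: infinitely many algebras

for r, s in [(1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (2, 3)]:
    print((r, s), classify(r, s))
\end{verbatim}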
Since the finite class of KM algebras is simply composed of
Lie algebras
and the indefinite class, although the largest, seems to have
little application to physics, we
cease our study of them with this example
of classification of $2\times 2$-dimensional generalized Cartan matrices.
We now focus in greater detail on the affine
algebras and their role in string theory.
\vskip .5cm
{\hc{2.1.b}{\sl Affine Algebras}}\vskip .5cm
Having discussed the three classes of KM algebras, we focus here in detail
on affine KM algebras, beginning with their
generalized Cartan-Weyl basis.
Recall from eq.~(\puteqn{dimkmcsa})
that the CSA, $\cal H$, of a KM algebra, ${\tilde {\cal L}}$, has dimension
$$n_{{\tilde {\cal L}}}= 2d_{{{\bmit A}^{\rm KM}}} - l\,\, ,$$
where $d_{{{\bmit A}^{\rm KM}}}$ is the number of simple roots and $l$ is the rank of the
associated generalized Cartan matrix.
$\cal H$ can be divided into two parts,
$\cal H'$ of dimension $d_{{{\bmit A}^{\rm KM}}}$, and its complement $\cal H''$ of
dimension $d_{{{\bmit A}^{\rm KM}}} -l$.
Within $\cal H'$ is the $(d_{{{\bmit A}^{\rm KM}}} -l)$-dimensional
center, $\cal C$, of ${\tilde {\cal L}}$.
Applying this to the affine class of KM algebras
shows that:
\item{(1)} $\cal H$ has dimension $l+2$.
\item{(2)} $\cal H'$ is $(l+1)$-dimensional with
only one generator in the center.
\item{(3)} $\cal H''$ is one-dimensional.
The $l$ generators, denoted by
$H^p$ for $p\in I^{{\cal L}}=\{ 1,\, 2,\, \dots \, l\}$,
in $\cal H'$ but not in $\cal C$
form the CSA of the Lie algebra ${\cal L}\subset{\tilde {\cal L}}$.
Thus, affine CSA's contain two additional generators of
$\cal H$ not present in
the Lie subalgebra. The single generator of the
center is known as the level, $K$, of the algebra,
and the generator of $\cal H''$ is called the scaling element, $d$.
We can express generic roots of the KM algebra in the
form\footnote{$\alpha_j({\bmit h})\equiv \langle {{\bmit \alpha}}_j, {\bmit h}\rangle$.}
$$ {{\bmit \alpha}}_j = \left( {{\bmit \alpha}}_j^l, \alpha_j(K), \alpha_j(d) \right)\,\, ;
\quad j\in I^{{\tilde {\cal L}}}\,\, ,
\eq{defkmsimcur}$$
where ${{\bmit \alpha}}_j^l$ forms the $l$-dimensional subvector that is
associated solely with the Lie algebra ${\cal L}$.
In this notation, the simple roots can be taken as
$${{\bmit \alpha}}_p=\left({{\bmit \alpha}}_p^l,0,0\right) {\rm ~~for~~} p\in I^{{\cal L}}
\quad\quad {\rm and} \quad\quad
{{\bmit \alpha}}_0 = \left( -{\bmit \psi}, 0, \alpha_0(d) \right) \,\, .\eq{simroots}$$
(Since $K$ forms the center of the algebra,
$\alpha_j (K) = 0$ by (\puteqn{comrelrv}).)
Based on eq.~(\puteqn{mateqnmu}), to
a given affine Cartan matrix ${{{\bmit A}^{\rm KM}}}$
is associated a single linearly independent $d_{{{\bmit A}^{\rm KM}}}$-dimensional
vector ${\bmit \mu}>0$ such that ${{{\bmit A}^{\rm KM}}}{\bmit \mu}=0$.
This vector is related to $\alpha_0(d)$ by
$$ {\bmit \delta} = {\sum_{j=0}^{l} \mu_j {{\bmit \alpha}}_j}
= \left( {\bmit \delta}^{l}= 0, 0, \delta(d)= \alpha_0(d)\right)
\,\, .\eq{deltadefal}$$
In other words, $\delta (H^p) = \delta(K) = 0$.
$d$ can be defined so that $\delta(d)\equiv\alpha_0(d)=1$.
${\bmit \delta}$ is an actual root of the theory, as are all integer
multiples, $m{\bmit \delta}$; $m\in Z\!\!\!Z$.
Thus, a general root has the form ${\bmit \alpha}= ({\bmit \alpha}^l,0,m)$.
For ${\bmit \alpha}^l=0$ we denote the associated operator by
$H^0_m$; otherwise we denote the operator by $E^{{\bmit \alpha}}_m$.
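The marks $\mu_j$ of eq.~(\puteqn{deltadefal}) are easily obtained
numerically. As an illustration (ours, assuming the numpy library), the
sketch below extracts the one-dimensional null space of the untwisted
affine $A_2^{(1)}$ Cartan matrix and fixes the normalization $\mu_0=1$:
\begin{verbatim}
# Sketch: the marks mu solving A^KM mu = 0 for affine A_2^(1).
import numpy as np

A = np.array([[ 2, -1, -1],
              [-1,  2, -1],
              [-1, -1,  2]])      # rank l = d_A - 1 = 2

_, _, Vt = np.linalg.svd(A)
mu = Vt[-1] / Vt[-1][0]           # null vector, normalized to mu_0 = 1
print(np.round(mu, 6))            # -> [1. 1. 1.]
\end{verbatim}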
Consistency of the algebra\mpr{cornwell89} forces ${\bmit \delta}$ to be
a ``null root'' (imaginary root as previously defined)
with the property that
$$\langle {\bmit \delta}, {\bmit \delta} \rangle =
\langle {\bmit \delta}, {{\bmit \alpha}}_j \rangle= 0
\,\, ,\eq{imagdelta}$$
(this is more clearly seen in the Chevalley basis\mpr{cornwell89})
and determines the generalization of (\puteqn{defdotprod})
for two generic roots, ${\bmit \alpha}$ and ${\bmit \beta}$ of an affine theory:
$$\langle {{\bmit \alpha}} , {\bmit \beta}\rangle
= {{\bmit \alpha}}^l\cdot {\bmit \beta}^l + \alpha(K)\beta(d)
+ \alpha(d)\beta(K)
\,\, .\eq{newdefdotprod}$$
Thus, only the Lie algebra components
contribute to the inner product of any two simple roots,
$$\langle {{\bmit \alpha}}_i, {{\bmit \alpha}}_j \rangle=
{{\bmit \alpha}}_i^l \cdot {{\bmit \alpha}}_j^l\,\, .\eq{inprodsimcur}$$
Using this generalized definition of an inner product,
the Weyl reflection of a weight,
${\bmit \lambda}= ({\bmit \lambda}^l, k, n)$,
about a root, ${\bmit \alpha}= ({\bmit \alpha}^l,0,m)$, is
\subequationnumstyle{alphabetic}
$$\eqalignno{
w_{{\bmit \alpha}}({\bmit\lambda})
&= {\bmit\lambda} - 2\,{\langle {\bmit\lambda}, {{\bmit \alpha}} \rangle \over
\langle {{\bmit \alpha}}, {{\bmit \alpha}} \rangle}\, {{\bmit \alpha}}
& \eqnlabel{weyldef-a}\cr
&= \left(
{\bmit \lambda}^l
-2\lbrack
{\bmit \lambda}^l\cdot{\bmit \alpha}^l + k m
\rbrack
{{\bmit \alpha}^l\over {{\bmit \alpha}\cdot{\bmit \alpha}}}, k, n
- 2\lbrack
{\bmit \lambda}^l\cdot{\bmit \alpha}^l + k m\rbrack
{m\over {{\bmit \alpha}\cdot{\bmit \alpha}}}
\right)\,\, .
&\eqnlabel{weyldef-b}}$$
This reflection can be split into two parts, a series of $m$ translations by
$$t_{{\bmit \alpha}^l}({\bmit\lambda}) = \left(
{\bmit\lambda}^l + k {{{\bmit \alpha}}^l\over {{\bmit \alpha}\cdot{\bmit \alpha}}}, k,
n + {1\over 2k}\left\{ {\bmit \lambda}^l \cdot{\bmit\lambda}^l
- ({\bmit\lambda}^l
+ 2k {{\bmit \alpha}^l\over {{\bmit \alpha}\cdot{\bmit \alpha}}})^2 \right\}
\right)
\,\, ,\eq{weyldef-c}$$
followed by a Weyl reflection about ${{\bmit \alpha}}^l$.
The affine Weyl rotation is the product of these
transformations,
$$ w_{{\bmit \alpha}}({\bmit\lambda}) =
w_{{\bmit \alpha}^l}\left( t^m_{{\bmit \alpha}^l}({\bmit\lambda}) \right)
\,\, .\eq{weyldef-d}$$
\subequationnumstyle{blank}
We conclude this general discussion of affine KM algebras with
a listing of the algebra itself.
Adding the two additional generators, $K$ and $d$,
to the CSA of a Lie algebra,
$\{ H^p\equiv H^p_0\, ;\, \, p \in I^{{\cal L}}=\{1,\, 2,\, \dots,\, l\} \}$,
forms the affine CSA and
enlarges the Lie algebra,
${\cal L}$,\footnote{Eqs.~(2.1.21) correspond to an {\it untwisted} affine
KM algebra. The {\it twisted} algebras involve a
$Z\!\!\!Z_{q=2 {\rm ~or~} 3}$
rotation by an outer automorphism of the untwisted
KM algebra, which creates a $Z\!\!\!Z_q$ projection on the roots.
The roots, ${\bmit \alpha} = ({\bmit \alpha}^{{\cal L}}, 0 , m )$, may be classified
by their
eigenvalues, $\exp \{ 2\pi i p/q \}$ with
$p\in \{ 0, 1,\, 2,\, \dots\, , q-1\}$,
under this rotation.
The related projection requires
$m = p \pmod{q}$. The surviving $m=0$ roots
are isomorphic to the simple roots of a non-simply-laced Lie subalgebra,
${\cal L}^{(q)}$, of ${\cal L}$.}
from:
\subequationnumstyle{alphabetic}
$$
\eqalignno{
\lbrack H^p, H^q \rbrack
&= 0
& \eqnlabel{eqcr-a}\cr
\lbrack H^p, E^{{\bmit \alpha}} \rbrack
& = {\bmit \alpha}(H^p) E^{{\bmit \alpha}}
& \eqnlabel{eqcr-b}\cr
\lbrack E^{{\bmit \alpha}}, E^{{\bmit \beta}}\rbrack
& =
{\cases {\epsilon({\bmit \alpha},{\bmit \beta}) E^{{\bmit \alpha}+{\bmit \beta}}
& if ${\bmit \alpha}+{\bmit \beta}$ is a root\cr
{2\over {\bmit \alpha}^2} {\bmit \alpha} \cdot {\bf H}\, ,
& if ${\bmit \alpha}+{\bmit \beta} = 0$\cr
0
& otherwise\cr}}
& \eqnlabel{eqcr-c}\cr
\cr}
$$
(with $p, q \in I^{{\cal L}}$), to the full affine algebra
\subequationnumstyle{blank}
\subequationnumstyle{alphabetic}
$$
\eqalignno{
\lbrack H^i_m, H^j_n \rbrack
&= Km\delta^{ij}\delta_{m,-n}
& \eqnlabel{eqcrkm-a}\cr
\lbrack H^p_m, E^{{\bmit \alpha}}_n \rbrack
& = {\bmit \alpha}(H^p)\, E^{{\bmit \alpha}}_{m+n}
& \eqnlabel{eqcrkm-b}\cr
&\cr
\lbrack E^{{\bmit \alpha}}_m, E^{{\bmit \beta}}_n\rbrack
& =
{\cases {\epsilon({\bmit \alpha},{\bmit \beta}) E^{{\bmit \alpha}+{\bmit \beta}}_{m+n}
& if ${\bmit \alpha}+{\bmit \beta}$ is a root\cr
{2\over {\bmit \alpha}^2}
\lbrack {\bmit \alpha} \cdot {\bf H}_{m+n} + Km\delta_{m,-n}\rbrack\, ,
& if ${\bmit \alpha}+{\bmit \beta} = 0$\cr
0
& otherwise\cr}}
& \eqnlabel{eqcrkm-c}\cr
&\cr
\lbrack K, T^a_n\rbrack
&= 0 &\eqnlabel{eqcrk-d}\cr
\lbrack d, T^a_n\rbrack
&= n T^a_n &\eqnlabel{eqcrk-e}\cr}
$$
where $i,j\in I= \{0,\, 1,\, \dots ,\, l\}$; $m,\, n\in Z\!\!\!Z$,
and $T^a_n$ is any element of the algebra.
\subequationnumstyle{blank}
The operators of the algebra have the hermiticity properties
$$ {H^i_m}^{\dagger}= H^i_{-m}\, ,\quad
{E^{{\bmit \alpha}}_m}^{\dagger}= E^{-{\bmit \alpha}}_{-m}\, ,\quad
K^{\dagger}= K\, , {\rm~~and~~} d^{\dagger}=d\,\, .
\eq{hermitprop}$$
\vskip .5cm
{\hb{2.2}{\bfs Application to String Theory}}
\sectionnumstyle{arabic}\sectionnum=2\equationnum=0
\vskip .5cm
Now we consider the specific role of affine KM algebras in string theory,
where they provide the world sheet realization of spacetime
gauge theories.
Present in string models are sets of $(h,\bar h)=(1,0)$
conformal fields, $J^a(z)$, called currents, which satisfy the OPE of a KM
generator,
$$J^a(z)J^b(w) = {\tilde K^{ab}\over (z-w)^2}
+ {i f^{abc}\over (z-w)} J^c(w) +
({\rm non-singular~terms})\,\, .\eq{kmope}$$
$f^{abc}$ are the structure constants for the Lie algebra, ${\cal L}$,
with normalization
$$f^{abc}f^{dbc}= C_A\delta^{ad}= {\bmit \psi}^2\tilde h \delta^{ad}
\,\, .\eq{defstrcon}$$
$C_A$ is the quadratic Casimir of the adjoint representation of ${\cal L}$,
$\tilde h$ is the dual Coxeter number, and ${\bmit \psi}$
denotes the highest root.
For each simple factor of the algebra, a basis can be chosen such that
$\tilde K^{ab}= \tilde K\delta^{ab}$.
$K\equiv 2\tilde K/{\bmit \psi}^2$
is defined as the level of a simple factor of
the KM algebra (as discussed in 2.1.b).
Commonly in string theory the normalization of ${\bmit \psi}^2=2$ is used,
which results in $K=\tilde K\in Z\!\!\!Z$ and $\tilde h=C_A/2$.
Also recall that
\subequationnumstyle{alphabetic}
$$\eqalignno{
C_A &= r^{-1}_{\cal L}\sum^{{\rm dim}\, {\cal L}}_{a=1} {{\bmit \alpha}}^2_{a}
&\eqnlabel{cadef-a}\cr
& = {1\over r_{{\cal L}}}\left( n_L + ({S\over
L})^2 n_S \right){\bmit \psi}^2
&\eqnlabel{cadef-b}\cr
& \rightarrow \left( {{\rm dim}\, {\cal L}\over r_{{\cal L}}} -1 \right)
{\bmit \psi}^2
\quad {\rm for~simply-laced~algebras}
&\eqnlabel{cadef-c}\cr}$$
where $r_{{\cal L}}$ is the
rank of the algebra and ${{\bmit \alpha}}_{a}$ is the root associated with
the $a^{\rm th}$ generator (zero for the Cartan generators).
$n_S$ and $n_L$ are the number of short and long roots, respectively,
and $S$ and $L$ are their respective lengths.
\subequationnumstyle{blank}
The presence of an underlying KM algebra is alternatively seen from the
related commutation relations of the modes of the
currents,\footnote{Any field, $\phi$, with conformal dimension,
$h_{\phi}$,
can be written in terms of the normal modes, $\phi_n$,
in a Laurent expansion,
$$\phi(z) = \sum_{n= -\infty}^{\infty} z^{-n-h_{\phi}}\phi_n\, ,
{\rm ~~where~~}
\phi_n = \oint {{\rm d}z\over (2\pi i z)} z^{n+h_{\phi}} \phi (z)\,\, .$$}
$$J^a(z)= \sum_{n\in Z\!\!\!Z} J^a_n z^{-n-1}\,\, ,
{\rm ~where~} J^a_n= \oint {{\rm d}z\over (2\pi i z)} z^{n+1} J^a(z)\, .
\eq{jmodcur}$$
The commutation relations have the form
$$ \lbrack J^a_m,J^b_n \rbrack = i f^{abc} J^c_{m+n} +
\tilde K m\delta^{ab}\delta_{m,-n}\,\, , \eq{kmcommrel}$$
where $m,n\in Z\!\!\!Z} \def\R{R\!\!\!R} \def\C{C\!\!\!C} \def\tenBbb{ $.
As was discussed previously, these commutators define the untwisted affine
KM algebra, ${\tilde {\cal L}}$, associated with a compact (semi)-simple Lie
algebra, ${\cal L}$.
The horizontal Lie subalgebra, ${\cal L}$, is formed from the algebra
of the zero modes, $J^a_0$, for which the level does not appear.
The full (infinite) set of $J^a_n$'s provides the affinization of the
finite dimensional subalgebra of $J^a_0$'s.
In a heterotic string model these currents appear in the vertex operator
for a spacetime gauge boson,\mpr{lewellen} {\it e.g.},
$$V^a= \zeta_{\mu} \psi^{\mu}(\bar z) J^a(z)\exp\{ip\cdot X\}\,\, ;
\quad p^{\mu}p_{\mu}= p^{\mu}\zeta_{\mu}=0\,\, .\eq{vertexop}$$
$X^{\mu}$ is the spacetime string coordinate and $\psi^{\mu}$ is
a left-moving Ramond-Neveu-Schwarz fermion. Thus,
spacetime gauge fields
imply the existence of a KM algebra on the world sheet.
In other words,
there is an extension to the standard Virasoro algebra that includes
the affine KM currents. In OPE language, the extended Virasoro--KM algebra
takes the form
\subequationnumstyle{alphabetic}
$$\eqalignno{
T(z) T(w)
&= {c/2\over (z-w)^4} + {2\over (z-w)^2}T(w) +
{1\over (z-w)}\partial T(w) + \dots
{\hbox to .5cm{\hfill}}
&\eqnlabel{tjope-a}\cr
T(z)J^a(w)
&= {J^a(w)\over (z-w)^2} + {\partial J^a(w)\over (z-w)} + \dots
&\eqnlabel{tjope-b}\cr
J^a(z)J^b(w)
&= {\tilde K^{ab}\over (z-w)^2}
+ {i f^{abc}\over (z-w)} J^c(w) + \dots \,\, .
&\eqnlabel{tjope-c}}
$$
\subequationnumstyle{blank}
Equivalently, the algebra can be expressed in terms of commutation
relations of normal modes:
\subequationnumstyle{alphabetic}
$$\eqalignno{
\lbrack L_m, L_n \rbrack
&= (m-n) L_{m+n}
+ {c\over 12} (m^3 -m)\delta_{m,-n}
&\eqnlabel{tjcr-a}\cr
\lbrack L_m , J^a_n \rbrack
&= -n J^a_{m+n}
&\eqnlabel{tjcr-b}\cr
\lbrack J^a_m , J^b_n \rbrack
&= i f^{abc} J^c_{m+n} +
\tilde K m\delta^{ab}\delta_{m,-n}\,\, .
&\eqnlabel{tjcr-c}}$$
\subequationnumstyle{blank}
In eq.~\pe{tjope-a}
the contribution to the Virasoro central charge (conformal anomaly)
from the KM algebra is
$$c_{{\tilde {\cal L}}} = {\tilde K {\rm dim}\, {\cal L} \over \tilde K + C_A/2}
= {K {\rm dim}\, {\cal L} \over K + \tilde h}\,\, .\eq{cccakm}$$
The
energy-momentum tensor, itself, may be written in terms of
the KM currents:\mpr{ginsparg89}
$$ T(z)= {1\over \beta} \sum_{a=1}^{{\rm dim}\, {\cal L}}
: J^a(z) J^a(z): = {1\over \beta}\left( \lim_{z\rightarrow w}
\sum_{a=1}^{{\rm dim}\, {\cal L}} J^a(z) J^a(w)
- {\tilde K {\rm dim}\, {\cal L}\over (z-w)^2}
\right)\,\, . \eq{emtensm}$$
$\beta\equiv 2\tilde K + C_A= 2(K+\tilde h)$ is a constant fixed either by the
requirement that $T(z)$ satisfy (\puteqn{tjope-a})
or, equivalently,
that the $J^a(z)$'s transform as dimension $(1,\, 0)$ primary fields.
In terms of the mode expansion for $T(z)$, (\puteqn{emtensm}) translates into
\subequationnumstyle{alphabetic}
$$\eqalignno{
L_n
&= \oint {{\rm d} z\over (2\pi i z)} z^{n+2} T(z)
&\eqnlabel{ln-a}\cr
&= {1\over \beta} \sum_{m= -\infty}^{\infty}\sum_{a=1}^{{\rm dim}\, {\cal L}} : J^a_{m+n} J^a_{-m}:
\,\, .
&\eqnlabel{ln-b}}$$
\subequationnumstyle{blank}
All states in the
theory necessarily fall into representations of the
Virasoro--KM
algebra.\mpr{lewellen}
Each representation (Verma module),
$\lbrack \phi_{(r)} \rbrack$,
is composed of a primary field $\phi_{(r)}$
(actually, a multiplet of fields $\phi_{(r)}^{\lambda}$),
and all of its ``descendent'' fields.
The descendent fields are the set of fields formed by acting
on a primary field with all possible products of
the raising operators $L_{-m}$ and $J^a_{-n}$ for $m,\, n\in Z\!\!\!Z^+$,
$$\left\{ \prod_{i=1}^{\infty}(L_{-i})^{A_i}
\prod_{a=1}^{{\rm dim}\, {\cal L}} (J^a_{-i})^{B^a_i}\vert \phi_{(r)}
\rangle\right\} \,\, ,\eq{vermamod}$$
where $A_i\, ,\,\, B^a_i\in \{ 0\}\cup Z\!\!\!Z^+ $.
$\phi_{(r)}$ transforms as a
highest weight representation $(r)$ of ${\cal L}$, as indicated by
the leading term in the OPE of $\phi_{(r)}$ with the current $J^a(z)$,
\subequationnumstyle{alphabetic}
$$\eqalignno{
J^a(z) \phi_{(r)}(w)
&= {(T^a)_{(r)}^{(r')}\over (z-w)}
\phi_{(r')}(w)\,\, ,
&\eqnlabel{opejphi-a}\cr
&= {(t^a_{(r)})^{\lambda'\lambda}\over (z-w)}
\phi_{(r)}^{\lambda}(w)\,\, .
&\eqnlabel{opejphi-b}}$$
\subequationnumstyle{blank}
$t^a_{(r)}$ are representation matrices for ${\cal L}$ in the representation
$(r)$.
These primary fields create states, called highest weight states, defined by
$$\vert \phi_{(r)}\rangle \equiv \phi_{(r)}(0) \vert {\rm vacuum}\rangle\,\, ,
\eq{hws}$$
that are representations of the zero-mode (Virasoro-Lie)
algebra,\footnote{$-L_0$ can be identified with the scaling
element, $d$, of the KM algebra.}
\subequationnumstyle{alphabetic}
$$
L_0 \vert \phi_{(r)}\rangle = h_{(r)}\vert \phi_{(r)}\rangle
\,\, , \quad\quad{\rm and} \quad\quad
J^a_0 \vert \phi_{(r)}\rangle
= {(T^a)}_{(r)}^{(r')}\vert \phi_{(r')}\rangle
\,\, ,
\eq{reptrans-a}$$
and
$$
L_n \vert \phi_{(r)}\rangle =
J^a_n \vert \phi_{(r)}\rangle = 0 \,\, \,\, {\rm for~} n>0\,\, .
\eq{reptrans-b}
$$
{}From (\puteqn{reptrans-a}), the general form for
the conformal dimension, $h_{(r)}$, of the primary field,
$\phi_{(r)}$, is
\subequationnumstyle{alphabetic}
$$\eqalignno{
h_{(r)} &=
{C_{(r)}/2\over \tilde K + C_A/2}
& \eqnlabel{cdpf-a}\cr
&= {C_{(r)}/\psi^2\over K + \tilde h} \,\, ,
& \eqnlabel{cdpf-b}}$$
\subequationnumstyle{blank}
where
$$C_{(r)} \equiv l_{(r)} { {\rm dim}\, {\cal L} \over {\rm dim}\, (r)}$$
is the quadratic Casimir of the representation $(r)$,
with ${\rm tr}\, t^a_{(r)} t^b_{(r)}= l_{(r)} \delta^{ab}$.
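For a quick numerical feel for eq.~\pe{cdpf-b} (our own illustration), one
can specialize to $SU(2)_K$, where $C_{(r)}/{\bmit\psi}^2 = j(j+1) = l(l+2)/4$
for the spin-$j$ ($l=2j$) representation and $\tilde h = 2$:
\begin{verbatim}
# Sketch: h_(r) = (C_(r)/psi^2)/(K + h~), specialized to SU(2)_K.
from fractions import Fraction

def h_su2(l, K):                  # l = twice the spin
    return Fraction(l * (l + 2), 4 * (K + 2))

print(h_su2(1, 1))   # j=1/2 at level 1: 1/4
print(h_su2(2, 1))   # adjoint (j=1) at level 1: 2/3
print(h_su2(2, 2))   # adjoint at level 2: 1/2
\end{verbatim}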
The dimensions of the descendent fields are $h_{(r)} + Z\!\!\!Z^+$. Specifically,
$$h= h_{(r)} + \sum_{i=1}^{\infty} \left( i A_i +
\sum_{a=1}^{{\rm dim}\, {\cal L}} i B^a_i\right)\eq{hdf}$$
for the field
$${ \prod_{i=1}^{\infty}(L_{-i})^{A_i}
\prod_{a=1}^{{\rm dim}\, {\cal L}} (J^a_{-i})^{B^a_i}\vert \phi_{(r)}
\rangle} \,\, .\eq{dfdef2}$$
An issue we wish to stress is that
only states in representations $(r)$ satisfying,
$$K\geq \sum_{i=1}^{r_{{\cal L}}} n_i m_i\,\, ,\eq{unitaryreq}$$
may appear in sensible string models.
$n_i$ are the Dynkin labels of the highest weight of the
representation $(r)$ and $m_i$ are the related co-marks\footnote{See
Figure A.1 of Appendix A for listings of the co-marks for each
of the compact simple Lie algebras.}
of the affine KM algebra associated with the Lie algebra, ${\cal L}$.\mpr{lust89}
Eq.~(\puteqn{unitaryreq})
is the condition for unitarity of a representation.
Within any KM algebra, the subset
$$\{ J^{-\bmit\alpha}_1, J^{\bmit\alpha}_{-1},
K- {\bmit\alpha}\cdot {\bmit H}\}, $$
where ${\bmit\alpha}$ is a root in ${\cal L}$ and ${\bmit H}$ is the vector of
currents in the Cartan subalgebra, forms an $SU(2)$ subalgebra.
If $\lambda$ is the weight of a component, $\phi_{(r)}^{\lambda}$,
of the multiplet $\phi_{(r)}$, then
$${\eqalign{
0\leq \langle\phi_{(r)}^{\lambda}\vert J^{-\bmit\alpha}_1
J^{\bmit\alpha}_{-1}\vert\phi_{(r)}^{\lambda}\rangle
& = \langle\phi_{(r)}^{\lambda}\vert \lbrack J^{-\bmit\alpha}_1,
J^{\bmit\alpha}_{-1}\rbrack \vert\phi_{(r)}^{\lambda}\rangle \cr
& = \langle\phi_{(r)}^{\lambda}\vert (K- {\bmit\alpha}\cdot {\bmit H})
\vert\phi_{(r)}^{\lambda}\rangle \cr
& = (K-{\bmit\alpha}\cdot{\bmit\lambda})\langle\phi_{(r)}^{\lambda}
\vert\phi_{(r)}^{\lambda}\rangle \,\, .}}\eq{unitproof}$$
Hence, $(K-{\bmit\alpha}\cdot{\bmit\lambda})$ must be non-negative for all
roots ${\bmit\alpha}$ and all weights ${\bmit\lambda}$ in the
representation $(r)$. Thus,
\subequationnumstyle{alphabetic}
$$\eqalignno{
K & \geq {\bmit\psi}\cdot {\bmit\Lambda} & \eqnlabel{unitproof2-a}\cr
& = \sum_{i=1}^{r_{{\cal L}} } n_i m_i\,\, , & \eqnlabel{unitproof2-b}}$$
\subequationnumstyle{blank}
where ${\bmit\psi}$ is the highest root of ${\cal L}$ and ${\bmit\Lambda}$ is the
highest weight in the $(r)$ representation.
This is the first major constraint placed on
highest weight states of Lie algebras and the associated primary fields
that can appear in consistent string models.
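A sketch of this unitarity test (ours; the co-mark vectors quoted in the
code are the standard ones tabulated in Appendix A):
\begin{verbatim}
# Sketch: unitarity test K >= sum_i n_i m_i, eq. (unitaryreq).
# n = Dynkin labels of the highest weight; m = co-marks.
COMARKS = {"SU(2)": [1],
           "SU(3)": [1, 1],
           "SO(10)": [1, 2, 2, 1, 1]}

def unitary(group, n, K):
    return K >= sum(ni * mi for ni, mi in zip(n, COMARKS[group]))

print(unitary("SU(2)", [2], 1))               # adjoint at K=1: False
print(unitary("SO(10)", [0, 0, 0, 0, 1], 1))  # spinor 16 at K=1: True
\end{verbatim}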
One consequence of this, as we mentioned in chapter 1, is that
string models based on level-1 KM algebras cannot have spacetime scalars
in the adjoint representation. Naively, there would appear a way of
escaping this. Since the KM currents transform in the adjoint
representation we might use them to form spacetime scalars.
Unfortunately, this cannot be done, at least for models with
{\it chiral} fermions.\mpr{dixon87,dreiner89a,lewellen}
The basic argument is as follows:
The vertex operator, $V_{\rm scalar}^a$ for a spacetime scalar in a
level-1 adjoint representation would necessarily have the form
$$ V_{\rm scalar}^a= O(\bar z) J^a(z)\,\, , \eq{vertscalad}$$
where $J^a$ is one of the KM currents. Masslessness of the state requires
that the anti-holomorphic operator, $O(\bar z)$, have
conformal dimension $\bar h_O={1\over 2}$ and behave both
in its OPE's and under GSO
projections like an additional RNS fermion. Hence, the spacetime spinor
degrees of freedom would fall into representations of the five-dimensional
Lorentz group, $SO(4,1)$. Decomposition into $SO(3,1)$ spinors always gives
non-chiral pairs. Thus, adjoint scalars and chiral fermions are mutually
exclusive. Further,
$N=1$ SUSY, at least for models based on free field construction,
also disallows these adjoint scalars.\mpr{lewellen,dreiner89b}
We have assumed in these arguments that the currents are not
primary fields of the full Virasoro-KM algebra;
comparison of
eq.~(\puteqn{tjope-c}) with (\puteqn{opejphi-a})
proves this is, indeed, a valid assumption.
A second constraint on states is a bit more trivial.
Since the gauge groups come from the
bosonic sector of the heterotic string, the total contribution to the
conformal anomaly from the gauge groups cannot exceed 22, {\it i.e.,}
$$ c_{\rm KM} = \sum_i { K_i{\rm dim}\, {\cal L}_i\over K_i + \tilde h_i}\leq 22\,\, ,
\eq{maxckma}$$
where the sum is over the different factors in the algebra
and every $U(1)_K$
contributes 1 to the sum. This condition gives an upper bound to the
levels for a GUT.\mpr{font90,ellis90}
For example, if the gauge group is $SO(10)$, the maximum
level is seven, for $E_6$ it is four, while it is $55$ for $SU(5)$.
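These maximal levels follow directly from eq.~(\puteqn{maxckma}); the
sketch below (ours; the $({\rm dim}\,{\cal L},\, \tilde h)$ values quoted in
the code are the standard ones) reproduces them:
\begin{verbatim}
# Sketch: largest K with c = K dim(L)/(K + h~) <= 22, eq. (maxckma),
# for one simple factor taking up the entire bosonic sector.
DATA = {"SU(5)": (24, 5), "SO(10)": (45, 8), "E6": (78, 12)}

for name, (dim, h) in DATA.items():
    K = 1
    while (K + 1) * dim <= 22 * (K + 1 + h):   # is level K+1 allowed?
        K += 1
    print(name, "max level =", K)   # 55, 7, and 4, respectively
\end{verbatim}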
In terms of the massless representations of the Lie algebras that appear,
there is one additional constraint that is stronger
than either of
the first two. This constraint is on the conformal
dimension of a primary field, \pe{cdpf-a}.
Since the intercept for the bosonic sector of a heterotic string is one,
a potentially massless state in an $(r)$ representation cannot have
$h_{(r)}$ greater than one. That is,
$$h_{(r)}= {C_{(r)}/{\bmit \psi}^2\over K + \tilde h}\leq 1\,\, .\eq{maxcdone}$$
\hfill\vfill\eject
\pagenum=23
\chapternumstyle{blank}\subsectionnumstyle{blank}
\noindent {\bf Chapter 3: Modular Invariant Partition Functions}\vskip .8cm
{\hb{3.1}{\bfs Review of Characters, Partition Functions, and
Modular Invariance}}\vskip .5cm
\chapternumstyle{arabic}\chapternum=3
\sectionnumstyle{arabic}\sectionnum=1\equationnum=0
Recently, studies of classical string solutions
have provided impetus for further research into
two-dimensional conformal field theories.
In particular, considerable effort has been spent in classifying
modular invariant partition functions (MIPF's) of these
theories. In any string model,
corresponding to each (chiral)
Verma module representation, $\lbrack \phi (z)\rbrack$,
of the Virasoro algebra
(or an extension of it such as a super-Virasoro or Virasoro-KM algebra),
there is a character (a.k.a. partition function), $\chi_{\lbrack \phi \rbrack}$.
The character is a trace over the Verma module on a cylinder,
$$\chi_{\lbrack \phi \rbrack}=
{\rm Tr}_{\lbrack \phi \rbrack} q^{L_{0_{\rm cyl}}}
= q^{-c/24} {\rm Tr}_{\lbrack \phi \rbrack}
q^{L_{0_{\rm plane}}}\,\, , \eq{defchar}$$
where $q= \exp(2\pi i\tau)$, $\tau= \tau_1 + i\tau_2$,
with $\tau_1\, $, $\tau_2\in\R$,
and the trace containing the conformal anomaly
factor is defined on the complex plane.\footnote{The factor of $q^{-c/24}$
results from the stress-energy tensor, $T(z)$ not transforming
homogeneously under a conformal transformation, but picking up a quantity
equal to $c/12$ times the Schwarzian, $S(z,w)$. That is, under
$w\rightarrow z= e^w$,
$$T_{\rm cyl}(w)= \left({ \partial z\over \partial w}\right)^2 T(z)
+ {{c\over12}}\, S(z(w),w)\,\, ,$$
where
$S(z(w),w)\equiv {\partial z\partial^3 z - (3/2)(\partial^2 z)^2\over
(\partial z)^2} = -{1\over 2}\,\, .$ Thus, the $L_0$ defined on the cylinder is
not equivalent to the $L_0$ defined on the complex plane, rather
$L_{0_{\rm cyl}}= L_{0_{\rm plane}} - c/24$. If the only purpose for
partition functions were to count the number of states at each level, the
anomaly term could be effectively discarded. However, this term is very
important with regard to modular invariance.}
If we expand this character in terms of powers of $q$,
$$\chi_{\lbrack \phi \rbrack} = q^{-c/24}\sum_{i=0}^{\infty}
n_i q^{h_{\phi} +i}\,\, ,\eq{charexpq}$$
the integer coefficient, $n_i$, counts how many (descendent) fields the
Verma module contains at the $i^{th}$ energy level.
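As a small concrete illustration (ours): for a generic Verma module of the
Virasoro algebra alone (no null vectors and no additional KM currents), the
$n_i$ of eq.~(\puteqn{charexpq}) are simply the partition numbers $p(i)$,
generated by $\prod_n (1-q^n)^{-1}$:
\begin{verbatim}
# Sketch: n_i = p(i) for a generic Virasoro Verma module, computed
# by multiplying in the factors 1/(1 - q^n) of the Euler product.
def partition_counts(N):
    p = [1] + [0] * N
    for n in range(1, N + 1):
        for i in range(n, N + 1):
            p[i] += p[i - n]
    return p

print(partition_counts(6))   # [1, 1, 2, 3, 5, 7, 11]
\end{verbatim}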
The one-loop partition function of the string model can be expressed in
terms of bi-linears of the characters of the Verma modules,
$$\eqalignno{Z(\tau,\bar\tau)
&= \sum_{a,b} N_{ab}\chi_a(\tau)\bar\chi_b(\bar\tau)
&\eqnlabel{pfintro1}\cr
&= \sum_{a,b} N_{ab}{\rm Tr}{\rm e}^{2\pi i\tau_1 P}{\rm e}^{- 2\pi \tau_2 H}\,\, .
&\eqnlabel{pfintro2}\cr}$$
$H= L_0 + \bar L_0$ and $P= L_0 - \bar L_0$
are the Hamiltonian and momentum operators of the
theory\footnote{Thus, from the statistical mechanics perspective,
$\tau_2$ can be viewed as either a Euclidean time
that propagates the fields over the one-loop world sheet cylinder
or as the inverse of a temperature.
Analogously, $P$ can be interpreted
as a momentum operator that twists an end of
the world sheet cylinder by $\tau_1$
before both ends meet to form a torus.}
and $N_{ab}\in Z\!\!\!Z$ corresponds to the number of times that the primary field
associated with $\chi_a\bar\chi_b$ appears in the theory.
The term ``modular invariant partition function''
is understood, as above,
to generally mean the MIPF for a genus-1 world sheet
(a torus). In conformal field theory,
a torus is characterized by a single complex parameter,
the $\tau$ of the above equations.
Geometrically, $\tau$
may be defined by making the following identifications on the
complex plane:\mpr{lust89}
$$ z\approx z + n + m\tau\,\, , \quad\quad n,m\in Z\!\!\!Z\,\, ,
\quad\quad \tau\in\C\,\, . \eq{deftorus1}$$
The more general definition
$$ z\approx z + n\lambda_1 + m\lambda_2\,\, , \quad\quad n,m\in Z\!\!\!Z\,\, ,
\quad\quad \lambda_1,\lambda_2\in\C\,\, , \eq{deftorus2}$$
leads to conformally equivalent tori under rescaling and rotation of
$\lambda_1$ and $\lambda_2$ by the conformal transformation
$z\rightarrow \alpha z$. Hence, only their ratio,
$\tau\equiv {\lambda_2\over \lambda_1}$, is a conformal invariant. Therefore
$\lambda_1$ is set to one. Also, the freedom to interchange
$\lambda_1$ and $\lambda_2$ allows us to impose Im$\,\tau >0$. Thus, tori
are characterized by complex $\tau$ in the upper-half plane.
(See Figure 3.1.)
This is not the whole story though. It is not quite true that $\tau$ is
a conformal invariant that cannot be changed by rescalings and
diffeomorphisms. There are global diffeomorphisms, not smoothly
connected to the
identity, that leave the torus invariant, but change the parameter $\tau$.
They correspond to cutting the torus along either cycle $a$ or $b$,
twisting
one of the ends by a multiple of $2\pi$, and then gluing
the ends back together. (See Figure 3.2.) Such
operations are known as Dehn twists and generate all global diffeomorphisms
of the torus. A Dehn twist around the $a$ cycle transforms $\tau$ into
$\tau +1$. (The related transformations of $\lambda_1$ and $\lambda_2$ are
$\lambda_1\rightarrow\lambda_1$ and $\lambda_2\rightarrow\lambda_1 +
\lambda_2$.)
This transformation is commonly denoted as ``$T$'',
\subequationnumstyle{alphabetic}
$$ \hbox to 1cm{\noindent $T$:\hfill}\tau\rightarrow\tau +1 \,\, .\eq{deftranst-a}$$
The twist
around the $b$ cycle corresponds (after rotation and rescaling to bring
$\lambda_1$ to one) to $\tau\rightarrow {\tau\over\tau+1}$
and can be expressed in terms of $T$ and another transformation,
``$S$'', defined by
$$ \hbox to 1cm{\noindent $S$:\hfill}\tau\rightarrow -{ 1\over \tau}\,\, .
\eq{deftransst-b}$$
\subequationnumstyle{blank}
Specifically, $TST: \tau\rightarrow{\tau\over\tau+1}$.
$S$ and $T$ are the generators of the symmetry group of
$PSL(2,Z\!\!\!Z)=SL(2,Z\!\!\!Z)/Z\!\!\!Z_2$, called the modular group of the torus.
General modular transformations take the form
$$PSL(2,Z\!\!\!Z): \tau\rightarrow \tau^{'} = {a\tau + b\over c\tau + d}\,\, ,
\eq{psltrans}$$
with $a,b,c,d\in Z\!\!\!Z$ and $ad-bc=1$. (The $Z\!\!\!Z_2$ projection equates
$(a,b,c,d)$ with $(-a,-b,-c,-d)$ since both correspond to the same
transformation of $\tau$.)
Thus, the true moduli space of conformally inequivalent tori is the
upper-half plane modded out by the modular group.
This region is called the fundamental domain, ${\cal F}$, of $\tau$.
The range for the fundamental domain
is normally chosen to be
$${\cal F} = \left\{ \tau : -{1\over 2}\leq {\rm Re}\,\tau\leq 0,\,
\vert\tau\vert^2\geq 1\right\} \cup \left\{ \tau : 0<{\rm Re}\,\tau<{1\over 2},\,
\vert\tau\vert^2>1\right\}
\,\, .\eq{funddomain}$$
A value of $\tau$ outside of the fundamental domain corresponds to a
torus that is conformally equivalent to another produced by a $\tau$ in the
fundamental domain.
Any value of $\tau$ in the complex plane
outside of the fundamental domain can be transformed,
by a specific element of $PSL(2,Z\!\!\!Z} \def\R{R\!\!\!R} \def\C{C\!\!\!C} \def\tenBbb{ )$,
to the inside.
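This reduction is algorithmic. The sketch below (ours) implements the
standard procedure of repeated integer $T$-shifts followed by an $S$
inversion whenever $\vert\tau\vert<1$, using the conventions for $S$ and $T$
given above; identifications on the boundary of ${\cal F}$ are ignored.
\begin{verbatim}
# Sketch: map tau into the fundamental domain F by T-shifts and
# S-inversions; conventions T: tau -> tau+1, S: tau -> -1/tau.
def to_fundamental_domain(tau, max_steps=100):
    for _ in range(max_steps):
        tau = complex(tau.real - round(tau.real), tau.imag)
        if abs(tau) < 1:
            tau = -1 / tau
        else:
            return tau
    return tau

print(to_fundamental_domain(0.3 + 0.05j))
\end{verbatim}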
For a consistent string model,
physical quantities, such as amplitudes, must be invariant under
transformations of $\tau$ that produce conformally equivalent tori.
That is, physical quantities must be ``modular invariant''.
This implies the necessity of a modular invariant
partition function, because
the one-loop vacuum-to-vacuum
amplitude, $A$, of a theory
is the integral of the partition function, $Z(\tau,\bar\tau)$
over the fundamental domain, ${\cal F}$,
$$A= \int\limits_{\cal F} {d\tau d\bar\tau\over({\rm Im}\tau)^2}
Z(\tau,\bar\tau)\,\, .\eq{amplitude}$$
Thus, consistency of a string theory requires that the one-loop partition
function be invariant under both $S$ and $T$ transformations.
Please note that although
invariance of the one-loop partition function under $S$ and $T$ is
necessary for a consistent model, it is not
sufficient.\mpr{alvarez86,kawai87a,antoniadis87, antoniadis88}
Multi-loop partition
functions must also be invariant under generalized
modular transformations. Multi-loop
invariance holds if, in addition to invariance at one-loop,
there is invariance under a symmetry that mixes the cycles of
neighboring tori of Riemann surfaces of genus $g>1$ world sheets.
This mixing is generally referred to as a $U$
transformation.
\hfill\vfill\eject
\hbox to 10cm{\hfill}
\vskip 3.5cm
\centertext{Fig.~3.1 Two conformally inequivalent tori}
\vskip 7.5cm
\centertext{Fig.~3.2 Lattice representation of a two-dimensional torus\\
defined by complex number $\tau$}
\hfill\vfill
\centertext{Fig.~3.3 Lattice representation of a two-dimensional torus\\
defined by complex numbers $\lambda_1$ and $\lambda_2$}
\eject
\hbox to 10cm{\hfill}
\vskip 3.5cm
\centertext{Fig.~3.4 The two independent cycles on the torus}
\vskip 7.7cm
\centertext{Fig.~3.5 Transformation of $\tau$ from Dehn twist around the
$a$ cycle}
\hfill\vfill
\centertext{Fig.~3.6 Transformation of $\tau$ from Dehn twist around the
$b$ cycle}
\eject
\hbox to 10cm{\hfill}
\vskip 17.3cm
\centertext{Fig.~3.7 Fundamental domain $\cal F$ in moduli space\\
and its images under $S$ and $T$}
\hfill\vfill\eject
{\hb{3.2}{\bfs Complications for Models Based on General KM Algebras}}
\sectionnumstyle{arabic}\sectionnum=2\equationnum=0
\vskip .5cm
Complete classification of modular invariant one-loop partition functions
exists only for some of the simplest
conformal field theories, in particular the minimal discrete series with
$c<1$, and the
models based on $SU(2)_K$ Ka\v c-Moody algebras.\mpr{capelli87}
These MIPFs are formed from bilinears of
characters, $\chi_l^{(K)}$, of $SU(2)_K$,
which we label by twice the spin,
$l=2s$, of the corresponding $SU(2)$ representation
($l = 0$ to $K$).\footnote{The values of $l$
correspond to the dimensions of the highest weight representations
(primary fields) meeting unitary conditions
for an $SU(2)_K$ algebra. Throughout this chapter generic highest weight
representations of an $SU(2)_K$ algebra are denoted by $\Phi_l$ or,
where there will be no confusion,
simply by $l$. When discussing holomorphic and
anti-holomorphic primary fields, generically $l$ will represent the former
and $\bar l$ the latter.}
These MIPF's
were constructed and found to be in one-to-one correspondence
with the simply-laced Lie algebras:
$$\eqalign{Z(A_{K+1}) &= \sum_{l = 0}^{K} |\chi_l|^2\,;\,\, K \geq
1\cr
Z(D_{{K\over 2} + 2}) &= \cases{\sum_{l_{EVEN}=0}^{{K\over 2} -2}
|\chi_l + \chi_{K-l}|^2 + 2|\chi_{{K\over 2}}|^2;\,\, K\in
4 Z\!\!\!Z^+\cr
\sum_{l_{EVEN}=0}^{K} |\chi_l|^2 + |\chi_{{K\over
2}}|^2\cr
\, \, \, \, + \sum_{l_{ODD}=1}^{{K\over 2}-2} (\chi_l
\bar\chi_{K-l} + c.c.); \,\, K\in 4 Z\!\!\!Z^+ +2\cr}\cr
Z(E_6) &= |\chi_0 + \chi_6|^2 + |\chi_3 + \chi_7|^2 + |\chi_4 +
\chi_{10}|^2 \,; \,\, K = 10\cr
Z(E_7) &= |\chi_0 + \chi_{16}|^2 + |\chi_4 + \chi_{12}|^2 +
|\chi_6 + \chi_{10}|^2 + |\chi_8|^2\cr
&\,\,\,\,\,\,\,\,\,\, + [(\chi_2 + \chi_{14})\chi_8^* + c.c.]\,;\,\,
K = 16\cr
Z(E_8) &= |\chi_0 + \chi_{10} + \chi_{18} + \chi_{28}|^2 +
|\chi_6 + \chi_{12} + \chi_{16} + \chi_{22}|^2\,; \,\, K =
28\cr}\eqno\eqnlabel{c1}$$
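Modular invariance of these combinations is easy to verify numerically.
The sketch below (ours, assuming numpy) checks the $E_6$ invariant at
$K=10$ against the standard $SU(2)_K$ matrices: $S$ as given explicitly in
section 3.3.a below, and $T$ diagonal with phases
$\exp\{2\pi i(h_l - c/24)\}$, where $h_l = {l(l+2)\over 4(K+2)}$ and
$c=3K/(K+2)$.
\begin{verbatim}
# Sketch: check S N S^T = N and T N T^dagger = N for Z(E_6), K = 10.
import numpy as np

K = 10
c = 3.0 * K / (K + 2)
l = np.arange(K + 1)
S = np.sqrt(2.0/(K+2)) * np.sin(np.pi*np.outer(l+1, l+1)/(K+2))
T = np.diag(np.exp(2j*np.pi*(l*(l+2)/(4.0*(K+2)) - c/24)))

N = np.zeros((K + 1, K + 1))
for block in [(0, 6), (3, 7), (4, 10)]:     # |x0+x6|^2 + ...
    for a in block:
        for b in block:
            N[a, b] = 1

print(np.allclose(S @ N @ S.T, N))          # True
print(np.allclose(T @ N @ T.conj().T, N))   # True
\end{verbatim}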
The $D_{{K\over 2}+2}$ partition function is formed from twisting of the
$A_{K+1}$ partition function by the simple current $J_S = (0, K)$ for
$K\in 4Z\!\!\!Z^+$ or by the non-simple current
$J_{NS} = (1,K-1)$\footnote[\phantom{4}]{Fields with both holomorphic and
anti-holomorphic components are denoted by either
$(\Phi _l,\bar {\Phi } _{\bar l})$ or $(l,\bar l)$. In either case
the first element is holomorphic and the second is
anti-holomorphic. The product of two such fields, resulting from
tensoring $SU(2)_{K_A}$ and $SU(2)_{K_B}$ algebras, is
denoted by $(l_A,\bar l_A;m_B,\bar m_B)$, where $l_A$ and $\bar l_A$ are
holomorphic and antiholomorphic primary fields for the $SU(2)_{K_A}$
algebra. $m_B$ and $\bar m_B$ are to be interpreted
similarly for the $SU(2)_{K_B}$.}
for $K\in 4Z\!\!\!Z^+ +2$.\footnote{We follow the standard definition for simple
currents.\mpr{schellekens89c}
A simple current $J$ is a primary field which when fused with
any other primary field (including itself) $\Phi_l$ of the K-M algebra produces
only one primary
field as a product state:
$$ J\otimes \Phi_l = \Phi_{l'}$$
A non-simple current $J'$, when fused with at least one other primary field
(possibly itself), produces more than one primary field:
$$ J'\otimes \Phi_l = \sum_{l'}\Phi_{l'}$$}
The exceptional invariants of $E_6$ and
$E_8$ originate via conformal embeddings\mpr{bouwknegt87}
of $A_1 \subset C_2$ and $A_1
\subset G_2$
respectively. $Z(E_7)$ can be derived by the more involved process of first
conformally embedding ${SU(2)\over Z\!\!\!Z_2} \otimes {SU(3)\over Z\!\!\!Z_3}$ in $E_8$,
and then gauging away the ${SU(3)\over Z\!\!\!Z_3}$ contribution.
The reason for the correspondence between $SU(2)_K$ modular invariants and
simply-laced Lie groups is not fully understood. General arguments have shown
that for any simply-laced Lie group a modular invariant solution can be
constructed for affine ${SU}(2)$ at a specific level.\mpr{kaku91}
But we are not aware of a
complete explanation
as to why these are one-to-one.
Expressing these partition functions in the general form
$$ Z = \sum_{l,\bar l} N_{l,\bar l}\chi_l\bar\chi_{\bar l}\,\, ,
\eq{aorbpartfn} $$
it was realized that: (1) for each MIPF the values of $N_{l,\bar l}$ for
$l=\bar l$ coincide with the exponents of the associated simply-laced Lie
algebra. These exponents give the degrees (minus one) of a system of
independent generators of the ring of invariant polynomials in these
algebras; (2) the level $K$ at which a specific modular invariant exists
obeys the rule $K+2=\kappa$, where $\kappa$ is the Coxeter number of the Lie
algebra. Classification of MIPFs for tensor products of $SU(2)_{K_i}$ may
shed more light on the underlying significance of this.
For tensor products of
other theories no procedures have been developed that give all of the
possible
modular invariants, but a few simple algorithms exist for modifying a known
modular invariant to produce another one, in particular the orbifold
construction\mpr{dixon85} and the related
operation of twisting by a simple current.\mpr{schellekens89}
In this chapter,
we make some proposals aimed at the general problem of
classifying all possible modular invariants for conformal field theories
constructed by tensoring together models whose modular invariants are already
known. By a tensor product of two theories, say $A$ and $B$, we mean a theory
whose chiral algebra includes the chiral algebras of both the $A$ and $B$
theories. As a consequence, the central charge of the combined theory will be
the sum of those for the individual factors, the chiral blocks that make up
amplitudes will be constructed from the products of the individual chiral
blocks, and the characters will be
products of the individual characters. Thus the partition function
of the $A\otimes B$ theory is restricted to
the form
$$Z^{AB}=\sum_{l,m,\bar{l},\bar{m}}
N^{AB}_{lm\bar{l}\bar{m}}\chi^A_l\chi^B_m\bar{\chi}^A_{\bar{l}}
\bar{\chi}^B_{\bar{m}}\,\, .\eqno\eqnlabel{zab}$$
The approach taken here derives rules by iteration in the
number of terms in the tensor products,
{\it i.e.}, we consider the conditions placed on higher
order tensor products by the requirements of modular invariance of lower
order tensor products. We also discuss the degrees of freedom in MIPFs
that remain after these conditions have been applied to the higher order
terms.
For an application of this process, we concentrate
on the specific case of tensor products of two
$SU(2)_{K_i}$ K-M algebras
and their MIPFs.
This is investigated for two reasons: ~(1) for insight
into the density of MIPFs derived by simple currents compared to the
total space of
MIPFs;\footnote{Knowledge of the density of simple current MIPFs will play a
significant role in understanding the total space of MIPFs. In the last
few years A.N. Schellekens {\it et al.}\mpr{schellekens89} have made
significant progress
towards complete classification of simple current modular
invariants (SCMI's)
for rational conformal field theories (RCFTs). These classifications
appear amenable for
generalization to SCMI's for tensor products of
RCFTs. Therefore, understanding of the density of
SCMI's compared to the total space of MIPFs is very constructive,
for this will reveal the size of the space of solutions that cannot be found
through Schellekens' approach.}
and (2) as a first step towards developing a
systematic set of rules for constructing MIPFs out of tensor products of
characters for general K-M algebras and minimal models.\mpr{warner90}
The latter issue was first discussed in Ref.~\pr{lewellen}.
Completion of this set of rules generalizes the work
in \pr{kawai87a}, \pr{antoniadis87} and \pr{antoniadis88}, wherein the
process for creating consistent
({\it i.e.}, modular invariant) models from tensor products of Ising
models (the free fermion
approach) is derived. These papers reveal how an infinite set of
consistent free fermion models can be constructed, with the majority based on
left-right (L-R) asymmetric
modular invariants. Ref.~\pr{lewellen} suggests that the majority of
consistent models formed from tensor products of K-M algebras and
minimal models may likewise be L-R asymmetric. As
with the free fermion models, the L-R asymmetric cases may comprise the
larger, and perhaps more interesting, class of models.
The combined tensor product theory is {\it not} restricted to be simply the
product of the
individual theories; the operators in the combined theory need not
be diagonal ({\it i.e.}, left-right symmetric), and in general the fusion rules
for
the operator products will be modified. The latter point is the chief
complication in the general problem. The allowed tensor product theories built
from free bosons or fermions have been successfully categorized, because the
possible fusion rules in these theories are almost trivial; likewise
twisting a theory by a simple current gives unambiguously a new theory, because
the new fusion rules are unambiguous.
The difficulty of this
procedure in general (compared to that for the free fermion or boson models)
becomes clear from the transformation properties of the
characters under $S$ and $T$ transformations,
the generators of the modular group $PSL(2,Z\!\!\!Z)$.
For an Ising (free fermion) model, there are three non-zero characters. Each of
these transforms
under $S$ or $T$ into another one of the three characters
(possibly times a phase):
$$ \eqalign{T:\qquad &\chi\pmatrix{A\cr A\cr} \rightarrow e^{i\pi/24}
\chi\pmatrix{A\cr P\cr}\cr
\qquad &\chi \pmatrix{A\cr P\cr} \rightarrow e^{-i\pi/24} \chi \pmatrix{A\cr
A\cr}\cr
\qquad &\chi\pmatrix{P\cr A\cr} \rightarrow e^{i\pi/12} \chi\pmatrix{P\cr
A\cr}\cr
\qquad & \cr
S:\qquad &\chi \pmatrix{A\cr A\cr} \rightarrow \chi \pmatrix{A\cr A\cr}\cr
\qquad & \chi \pmatrix{P\cr A\cr} \rightarrow \chi \pmatrix{A\cr P\cr}\cr
\qquad & \chi \pmatrix{A\cr P\cr} \rightarrow \chi \pmatrix{P\cr A\cr}\cr}
\eqno\eqnlabel{c2}$$
where,
$$
\chi \pmatrix{A\cr A\cr} = \chi_0 + \chi_{1/2}\, , \,\,
\chi \pmatrix{A\cr P\cr} = \chi_0 - \chi_{1/2}\, , \,\,
{\rm ~and~}\,\,
\chi \pmatrix{P\cr A\cr} = \sqrt{2} \chi_{1/16}\, .
\eq{chisttrans}$$
Here $\chi_i$, $i = 0,\, 1/16,\, 1/2$, are the characters of the primary
fields of conformal dimension $h = 0$,
$1/16$, $1/2$ in the $c = 1/2$ critical Ising model.
$P(A)$ denotes (anti-)periodic
boundary conditions around one of the two non-contractible
loops of the world sheet torus.\mpr{kaku91} In this case, the $S$ and $T$
transformations act on the
characters in the manner of generic simple currents denoted by
$J_S$ or $J_T$,
respectively, twisting the corresponding primary states $\Phi_i$.
In other words, the outcome of the
transformation or fusion is, respectively, a single character or primary
field:
{\settabs 7\columns
\+ && $S:\quad\chi_i \rightarrow \chi_{j}$
&$\,\,$ ;& $T:\quad\chi_i \rightarrow \chi_{k}$\cr
\+ && $J_S \otimes \Phi_i = \Phi_{j}$
&$\,\,$ ;& $J_T \otimes \Phi_i = \Phi_{k}\,\, .$\cr}
However, in the generic case for a K-M algebra or minimal model, $S$
transforms a character $\chi_l$
in the manner of a
non-simple current, $J_{NS}$, acting on a generic primary field $\Phi_l$.
The outcome of the transformation or fusion is in general not a single
term, but a sum of terms:
$$\eqalign {S: ~\chi_l &\Rightarrow \sum_{l'} S(l,l') \chi_{l'}\cr
J_{NS} \otimes \Phi_l &= \sum_{l'} N_{J_{NS}l}^{l'}\Phi_{l'}}\,\, .
\eqno\eqnlabel{c4}$$
(As shown by E. Verlinde\mpr{verlinde88}, the (non-negative integer) coefficients
$N_{J_{NS}l}^{l'}$ are related to the matrix elements of the $S$
transformation matrix:
$$ N_{J_{NS}l}^{l'}= \sum_{n} {S(J_{NS},n)~S(l,n)~S(l',n)\over S(0,n)}
\,\, ,\eq{verlindeneq}$$
where $0$ denotes the vacuum state or identity field.) The complicated
transformations of tensor products of generic K-M characters $\chi_l$
make
complete classification of associated MIPFs much more difficult than in the
free fermion or boson case.
In section 3.3 we consider the extent to which the integer coefficients
$N^{AB}_{lm\bar{l}\bar{m}}$ in the partition function of the tensor product
theory are constrained once we know all of the allowed possibilities for the
corresponding coefficients $N^A_{l\bar{l}}$ and $N^B_{m\bar{m}}$ in the factor
theories. In section 3.4 we investigate the more general problem of combining
theories whose holomorphic and anti-holomorphic degrees of freedom
need not possess the same chiral algebras. That is, we consider
partition functions of the form,
$Z^{AB}=\sum_{l,\bar{m}}N_{l\bar{m}}\chi^A_l\bar{\chi}^B_{\bar{m}}$.
In the following
sections we are interested ultimately
in classifying consistent conformal field theories, not just modular invariant
combinations of characters. Accordingly, we invoke consistency
conditions for amplitudes on the plane when they constrain
the states that can appear in the partition function.
\sectionnumstyle{blank}\vskip .5cm
{\hb{3.3}{\bfs Constraints on Tensor Product Modular Invariants}}\vskip .5cm
\sectionnumstyle{arabic}\sectionnum=3\equationnum=0
In order for the tensor product partition function (\puteqn{zab}) to be
invariant
under the generators of modular transformations, $T$ and $S$, we must have,
$$\eqalign{T\quad{\rm invariance:}\quad &h_l+h_m=h_{\bar{l}}+
h_{\bar{m}}\pmod{1}\quad {\rm if}\quad
N^{AB}_{lm\bar{l}\bar{m}}\ne 0\cr
S\quad{\rm invariance:}\quad &N^{AB}_{lm\bar{l}\bar{m}}=\sum_{l^\prime,
m^\prime,\bar{l}^\prime,\bar{m}^\prime} N^{AB}_{l^\prime m^\prime
\bar{l^\prime}\bar{m^\prime}}S^A_{ll^\prime}S^B_{mm^\prime}
\bar{S}^A_{\bar{l}\bar{l}^\prime}\bar{S}^B_{\bar{m}\bar{m}^\prime}
\,\, ,\cr}\eqno\eqnlabel{minv}$$
where $h_l$ denotes the conformal dimension of the primary field represented
by the label $l$, {\it etc.}
We assume that the solutions to the corresponding equations for the
factor theories are known. That is, we are given all possibilities
(labeled by $i$)
for non-negative integer coefficients $N^{A,i}_{l\bar{l}}$ such that,
$$\eqalign{&h_l=h_{\bar{l}}
\pmod{1}\quad {\rm if}\quad N^{A,i}_{l\bar{l}}\ne 0\cr
{\rm and}\quad\quad&N^{A,i}_{l\bar{l}}=\sum_{l^\prime,
\bar{l}^\prime} N^{A,i}_{l^\prime \bar{l^\prime}}S^A_{ll^\prime}
\bar{S}^A_{\bar{l}\bar{l}^\prime}\,\, ,\cr}\eqno\eqnlabel{aminv}$$
and similarly for $N^{B,j}_{m\bar{m}}$. We can get relations between the
integer coefficients in equations (\puteqn{minv}) and (\puteqn{aminv})
by multiplying (\puteqn{minv}) by $N^{A,i}_{l\bar{l}}$ and summing
over $l$ and $\bar{l}$,
$$\eqalign{\sum_{l,\bar{l}}N^{A,i}_{l\bar{l}}N^{AB}_{lm\bar{l}\bar{m}}
&=\sum_{l,\bar{l},l^\prime,
m^\prime,\bar{l}^\prime,\bar{m}^\prime}N^{A,i}_{l\bar{l}}
N^{AB}_{l^\prime m^\prime
\bar{l^\prime}\bar{m^\prime}}S^A_{ll^\prime}S^B_{mm^\prime}
\bar{S}^A_{\bar{l}\bar{l}^\prime}\bar{S}^B_{\bar{m}\bar{m}^\prime}\cr
&=\sum_{l^\prime,
m^\prime,\bar{l}^\prime,\bar{m}^\prime}N^{A,i}_{l^\prime\bar{l^\prime}}
N^{AB}_{l^\prime m^\prime
\bar{l^\prime}\bar{m^\prime}}S^B_{mm^\prime}
\bar{S}^B_{\bar{m}\bar{m}^\prime}
\,\, ,\cr }\eqno\eqnlabel{amom}$$
where we have used (\puteqn{aminv}) and the symmetry of $S$ to
simplify the right-hand side. The resulting
equation is precisely of the form (\puteqn{aminv}) for the $B$ theory,
therefore we must have,
\subequationnumstyle{alphabetic}
$$\sum_{l,\bar{l}}N^{A,i}_{l\bar{l}}N^{AB}_{lm\bar{l}\bar{m}}
=\sum_j n^{A,i}_j N^{B,j}_{m\bar{m}}\,\, ,\eqno\eqnlabel{constr-a}$$
where $n^{A,i}_j$ are integers.
This constrains some combinations of coefficients in the $AB$ theory to be
linear combinations (with integer coefficients) of the allowed
coefficients in the $B$ theory, which are presumed known. There is an analogous
constraint arising from taking the appropriate traces over the $B$ theory
indices in (\puteqn{minv}),
$$\sum_{m,\bar{m}}N^{B,j}_{m\bar{m}}N^{AB}_{lm\bar{l}\bar{m}}
=\sum_i n^{B,j}_i N^{A,i}_{l\bar{l}}\,\, ,\eqno\eqnlabel{constr-b}$$
and a further constraint arising from taking
appropriate
traces over both sets of indices in either possible order,
$$\eqalignno{
\sum_{l,\bar{l}\atop m,\bar m}
N^{B,j}_{m\bar{m}}N^{A,i}_{l\bar{l}}
N^{AB}_{lm\bar{l}\bar{m}}
& = \sum_{m,\bar m}N^{B,j}_{m\bar m}
\left( \sum_{j'} n^{A,i}_{j'} N^{B,j'}_{m\bar{m}}\right)\,\, ,
& \eqnlabel{constr-c}\cr
& = \sum_{l,\bar l}N^{A,i}_{l\bar l}
\left( \sum_{i'} n^{B,j}_{i'} N^{A,i'}_{l\bar{l}}\right)\,\, .
& \eqnlabel{constr-d}}$$
\subequationnumstyle{blank}
Note that the number
of constraint equations increases as the factor theories become more complex
(in the sense of having more possible modular invariants), and also as the
tensor product theories have more factors.
These equations constrain part of the
operator content of the tensor product theories, which we wish to classify.
Often, this information, together with some simple consistency requirements
for
conveniently chosen amplitudes on the plane, serves to completely determine
the
allowed possibilities for the tensor product modular invariants. For
concreteness, we illustrate with a simple example.
\vskip .5cm
{\hc{3.3.a} {\sl Example: $SU(2)_{K_A}\otimes SU(2)_{K_B}$ Tensor Product
Theories.}}
\vskip .5cm
$SU(2)_{K}$ has $K+1$ unitary primary fields, which we label by
twice the spin, $l=2s$, of the corresponding SU(2) representation.
Their conformal dimensions are $h_l= {{l(l+2)}\over 4(K+2)}$.
The matrix $S$,
implementing the modular transformation $\tau\rightarrow-1/\tau$
on the Ka\v c-Moody characters, is
$$S^K_{ll^\prime}=\left(2\over K+2\right)^{1/2}\sin\left(\pi (l+1)(l^\prime +1)
\over K+2\right)\,\, .\eqno\eqnlabel{ssu}$$
The fusion rules, which we will make use of
momentarily, are
$$\phi_l\times\phi_{l^\prime}=\sum^{{\rm min}(l+l^\prime,2K-l-l^\prime)}_{
{m=\vert l-l^\prime\vert \atop m-\vert l-l^\prime\vert\,\,{\rm even}}}
\phi_m \,\, .\eqno\eqnlabel{fra}$$
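As a consistency check (ours, assuming numpy), the fusion rules
(\puteqn{fra}) are reproduced from the $S$ matrix (\puteqn{ssu}) via the
Verlinde formula, eq.~(\puteqn{verlindeneq}):
\begin{verbatim}
# Sketch: SU(2)_K fusion rules vs. the Verlinde formula.
import numpy as np

def S_matrix(K):
    l = np.arange(K + 1)
    return np.sqrt(2.0/(K+2)) * np.sin(np.pi*np.outer(l+1, l+1)/(K+2))

def fusion_direct(l1, l2, K):               # eq. (fra)
    top = min(l1 + l2, 2*K - l1 - l2)
    return set(range(abs(l1 - l2), top + 1, 2))

def fusion_verlinde(l1, l2, K):             # eq. (verlindeneq)
    S = S_matrix(K)
    N = (S[l1] * S[l2] / S[0]) @ S.T        # N_m for m = 0..K
    return {m for m in range(K + 1) if N[m] > 0.5}

K = 4
assert all(fusion_direct(a, b, K) == fusion_verlinde(a, b, K)
           for a in range(K + 1) for b in range(K + 1))
print("fusion rules match the Verlinde formula for K =", K)
\end{verbatim}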
For simplicity we only consider the tensor product
theories with holomorphic and anti-holomorphic chiral algebras
SU(2)$_{K_A}\otimes$SU(2)$_{K_B}$ for both $K_A$ and $K_B$ odd. Then the only
possible modular invariants for the factor theories are the diagonal ones,
$N_{l\bar{l}}=\delta_{l\bar{l}}$.\footnote{For our purposes we need to
consider all sets of
non-negative integer coefficients $N_{l\bar{l}}$ that give rise to $S$ and
$T$ invariant partition functions, but not necessarily only ones
with a unique vacuum state ($N_{00}=1$).
Relaxing this condition in the SU(2) case
does not expand the space of possible solutions, aside from a trivial
multiplicative constant.} Applying the constraint equations
(\puteqn{constr-a}-d) gives the conditions,
$$\eqalign{\sum_{l=\bar{l}}
N^{AB}_{lm\bar{l}\bar{m}}&=a\delta_{m\bar{m}}\,\, ;\quad a\in Z\!\!\!Z^+\cr
\sum_{m=\bar{m}} N^{AB}_{lm\bar{l}\bar{m}}&=b\delta_{l\bar{l}}\,\, ;\quad\quad
b\in Z\!\!\!Z^+\cr
\sum_{l=\bar{l}}\sum_{m=\bar{m}}
N^{AB}_{lm\bar{l}\bar{m}}&=a(K_B+1)=b(K_A+1)\,\, .
\cr}\eqno\eqnlabel{nabcon}$$
If we label the primary operators in the tensor product theory by the
corresponding $l$ values of the factor theories, {\it e.g.},
$(l,m\vert\bar{l},\bar{m})$, then the integer $a$ is equal, in particular, to
the number of primary operators in the theory of the form $(j,0\vert j,0)$.
These are pure $A$ theory operators and so must form a closed operator
subalgebra of the $A$ theory. Similarly, $b$ must equal the dimension of some
closed operator algebra in the $B$ theory. This is useful because we know
(from studying the consistency of amplitudes on the plane) all
consistent closed operator sub-algebras of
SU(2) Ka\v c-Moody theories.\mpr{christe87} For $K$
odd these sub-algebras (labeling them by their dimensions, $d$) are
$$ \eqalign{d = 1: &~~\{\Phi_0\} {\rm \, \, (the~identity)}\cr
d = 2: &~~\{\Phi_0,\, \, \Phi_{K}\}\cr
d = {K + 1\over 2}: &~~\{\Phi_l;~~0\leq {\rm ~even~}l\le K\}{\rm
{}~(the~allowed~integer~spin~representations)}\cr
d = K + 1: &~~\{\Phi_l;~0\leq l\leq K\}\cr}\,\, .\eqno\eqnlabel{clsa}$$
Thus, in the tensor product theory we know all of the possibilities for
operators of the form $(j,0\vert j,0)$ or $(0,j\vert 0,j)$. Given
(\puteqn{nabcon}) and
the uniqueness of the vacuum state $(0,0\vert 0,0)$ in the tensor product
theory, the multiplicities of the operators in the closed sub-algebras must be
as given in (\puteqn{clsa}).
We can now write down all of the possibilities for $a,b,K_A$ and $K_B$
that are consistent with (\puteqn{nabcon}) and (\puteqn{clsa}),
and consider each type of tensor product modular invariant individually:
\vskip .5cm
\noindent (1) $a = K_A +1, ~~b = K_B+1$ : \hskip 1.5em
Here we have $N^{AB}_{l\bar l m\bar m} =
\delta_{l\bar l} \delta_{m\bar m} + M^{AB}_{l \bar l m \bar m}$,
with $M^{AB}_{l \bar l m \bar m}$ traceless with respect to both $l, \bar l$
and $m, \bar m$. It is easy to see that $M^{AB}$ must in fact vanish, leaving
us with the simple uncorrelated tensor product of the SU(2)$_{K_A}$ and
SU(2)$_{K_B}$ diagonal modular invariants. Were this not the case, then
$M^{AB}$ by itself would give rise to a modular invariant which did not include
the term containing the identity operator. But this is not possible since
(from (\puteqn{ssu}) and quite generally
in a unitary theory) $S_{0l}>0$ for all $l$.
\vskip .5cm
\noindent (2) $a = {K_A + 1\over 2}, ~~b = {K_B +
1\over2}$ : \hskip 1.5em
In this case the operators diagonal in either of the factor theories comprise
the set $\{(l,m\vert l,m)\}$ with $l$ and $m$ both odd or both even. This set
contains the operators $(1,K_B\vert 1,K_B)$ and $(K_A,1\vert K_A, 1)$.
The non-diagonal operators in the theory, $(i,j\vert m,l)$ must have a
consistent operator product with these two operators; in particular, at least
some of the operators appearing in the naive fusion with them (using the rules
(\puteqn{fra})) must have integer spins ($h-\bar{h}\in Z\!\!\!Z$).
This restricts
the non-diagonal operators $(i,j\vert m,l),~i\ne m,~j\ne l$, to those
satisfying $i+m=K_A$ and $j+l=K_B$. For these operators, in turn, to have
integer spin we have either: $i-j$ even and $K_A+K_B=0 \pmod{4}$; or
$i-j$ odd and $K_A-K_B=0 \pmod{4}$. Taking all such operators,
the former case gives the modular invariant obtained from the
simple tensor product invariant of case (1) by twisting by the simple current
$(K_A,K_B\vert 0,0)$; the latter is obtained by twisting by $(K_A,0\vert
0,K_B)$. An extension of the argument given in (1) using the fact that
$S^{K_A}_{iK_A}S^{K_B}_{jK_B} >0$ for all $i-j$ even, shows that these are the
only possibilities in this category.
\vskip .5cm
\noindent (3) $a=1,~b=2,~K_B=2K_A+1$ or
$a=2,~b=1,~K_A=2K_B+1$: \hskip 1.5em
Take $a=1$, $b=2$ so $K_B=2K_A+1$. The model must include
the states $(0,0\vert 0,0)$ and $(0,K_B\vert 0,K_B)$ but no other states of the
form $(0,l\vert 0,l)$, $(i,0\vert i,0)$ or $(j,K_B\vert j,K_B)$.
There must also be two states of
the form $(K_A,j\vert K_A,j)$. Demanding that the fusion products of these
states with themselves are consistent with the above restriction requires
$j=0$, or $K_B$, but then the states themselves are inconsistent with the
restriction. Thus there are no possible consistent theories within this
category.
\vskip .5cm
\noindent (4) $a=2,~b={K_B+1\over 2},~K_A=3$ or
$a={K_A+1\over 2},~b=2,~K_B=3$: \hskip 1.5em
This case differs from case (2) with $K_A=3$ and/or $K_B=3$,
in that the $d=2$ closed subalgebra of the SU(2)$_3$ theory consists of
$\{\Phi_0,\Phi_3\}$ instead of $\{\Phi_0,\Phi_2\}$ as in (2). If $a=2$,
$b={K_B+1\over 2}$ and $K_A=3$, then the operators diagonal in either factor
theory comprise the set $\{ (0,l\vert 0,l), (3,l\vert 3,l)~l$ even; $(1,j\vert
1,j), (2,j\vert 2,j)~j$ odd$\}$. There must be additional non-diagonal
operators, $(i,j\vert l,m)$ $i\ne l$, $j\ne m$, if there are to be any modular
invariants in this category. If \hbox {$(i,j\vert l,m)$} appears then
\hbox {$(3-i,j\vert 3-l,m)$}
appears also. For both operators to have integer spin, $i$ and $l$ must be both
even or both odd. Thus there must be operators of the form $(0,j\vert 2,m)$
or $(1,p\vert 3,l)$. Fusing these
with the operators
\hbox {$(1,K_B\vert 1,K_B)$}
from
the diagonal part of the theory produces the operators
\hbox {$(1,K_B-j\vert 1,K_B-m)$}
and/or
\hbox {$(1,K_B-j\vert 3,K_B-m)$} and \hbox {$(0,K_B-p\vert 2,K_B-l)$}
and/or
\hbox {$(2,K_B-p\vert 2,K_B-l)$,}
respectively.
It is easy to see that if the former
fields have integer spin then none of the possible fusion products do. Thus,
there can be no consistent theories in this category.
\vskip .5cm
\noindent (5) $ K_A=K_B\equiv K $, $a= b= 1$ or $a = b = 2$ : \hskip 1.5em
The situation
becomes more complicated for $K_A = K_B \equiv K$. For these cases we have
additional trace equations,
$$\eqalign{\sum_{l=\bar{m}} N^{AB}_{lm\bar{l}\bar{m}}\, &=a^\prime
\delta_{m\bar{l}}\,\, ;\quad\quad
a^\prime\in Z\!\!\!Z^+\cr
\sum_{m=\bar{l}} N^{AB}_{lm\bar{l}\bar{m}}\, &=b^\prime
\delta_{l\bar{m}}\,\, ;\quad\quad b^\prime\in Z\!\!\!Z^+\,\,
.\cr}\eqno\eqnlabel{morcon}$$
If the values of $a^\prime$ and $b^\prime$ correspond to any of cases
(1)---(4),
then the invariants are precisely as given above, with the factor
theories permuted. Thus we only need to consider the cases:
(5a) $a=b=a^\prime=b^\prime=1$, (5b) $a=b=a^\prime=b^\prime=2$,
and (5c) $a=b=1,~
a^\prime=b^\prime=2$. Case (5b) is most quickly disposed of. The operators in
the theory include the closed subalgebras $\{(0,0\vert 0,0),(0,K\vert 0,K)\}$,
$\{(0,0\vert 0,0),(K,0\vert K,0)\}$, $\{(0,0\vert 0,0),(K,0\vert 0,K)\},$ and
$\{(0,0\vert 0,0),(0,K\vert K,0)\}$. For the operator algebra containing all
of these to close, the chiral fields $(K,K\vert 0,0)$ and $(0,0\vert K,K)$
must appear, but for $K$ odd these do not have integer conformal dimension.
Therefore this case is ruled out.
Cases (5a) and (5c) can also be ruled out as follows. In both cases there
must be fields $(1,j\vert 1,j)$ and $(p,1\vert p,1)$ with a single choice for
$j$ and $p$ in each case. Consider the four-point correlation function on the
plane $\langle (1,j\vert 1,j)(1,j\vert 1,j)(p,1\vert p,1)(p,1\vert p,1)
\rangle$. In one channel the only possible intermediate state primary fields
that can appear, consistent with the restrictions of cases (5a) or (5c), are
$(0,0\vert 0,0)$ and $(2,2\vert 2,2)$. In the cross channels only a subset of
the states of the form $(p\pm 1,j\pm 1\vert p\pm 1,j\pm 1)$ can appear as
intermediates. We know from the four-point amplitudes $\langle (1\vert 1)
(1\vert 1) (j\vert j) (j\vert j)\rangle$ and $\langle (1\vert 1)
(1\vert 1) (p\vert p) (p\vert p)\rangle$ in the factor theories that the chiral
blocks making up the amplitudes have two-dimensional monodromy, so the
blocks appearing in the tensor product theory must have four-dimensional
monodromy.
There is no way, then, to assemble the two chiral blocks corresponding to the
allowed intermediate primaries $(0,0\vert 0,0)$ and $(2,2\vert 2,2)$ in such a
way that the four-point function in the tensor product theory can be monodromy
invariant ({\it i.e.}, single-valued).
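The arithmetic underlying this case list can also be reproduced mechanically.
The sketch below (our own enumeration; the helper name is hypothetical) lists
the $(a,b)$ pairs drawn from the sub-algebra dimensions (\puteqn{clsa}) that
satisfy the trace condition $a(K_B+1)=b(K_A+1)$ of (\puteqn{nabcon}); its
output is precisely the $(a,b)$ bookkeeping of cases (1)--(5).

```python
# Sketch: enumerate (a, b) pairs allowed by the sub-algebra dimensions of
# eq. (clsa) together with the trace condition a(K_B+1) = b(K_A+1) of
# eq. (nabcon), for small odd K_A, K_B.
def allowed_dims(K):
    # dimensions of the closed operator sub-algebras of SU(2)_K, K odd
    return sorted({1, 2, (K + 1) // 2, K + 1})

for KA in (3, 5, 7, 9):
    for KB in (3, 5, 7, 9):
        pairs = [(a, b)
                 for a in allowed_dims(KA) for b in allowed_dims(KB)
                 if a * (KB + 1) == b * (KA + 1)]
        print(f"K_A={KA}, K_B={KB}: (a,b) in {pairs}")
```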
To summarize: We have used the constraints
(\puteqn{constr-a}-\puteqn{constr-d}) and the
consistency of
conveniently chosen fusion rules and amplitudes to find the only consistent
tensor product theories of the type $SU(2)_{K_A}\otimes SU(2)_{K_B}$, with
$K_A,K_B$ odd. These turn out to be the simple (uncorrelated) product of the
diagonal invariants of the factor theories and all theories obtained from
them by twisting by the allowed simple current fields that can be built
from the identity and fields labeled $K_A$ and $K_B$.
\sectionnumstyle{blank}\vskip .5cm
{\hb{3.4}{\bfs Left-Right Asymmetric Modular Invariants}}\vskip .5cm
\sectionnumstyle{arabic}\sectionnum=4\equationnum=0
So far we have considered tensor product conformal field theories that
are diagonal in the sense that for each holomorphic conformal field theory
factor there is a corresponding anti-holomorphic conformal field theory factor
with an isomorphic chiral algebra.
While these are the relevant theories to consider
for statistical mechanics applications, it is natural in the construction of
heterotic string theories to consider conformal field theories that are
inherently left-right asymmetric as well. For these, the methods discussed
above
do not apply. Nonetheless, we can exploit known properties of left-right
symmetric conformal field theories to construct modular invariants even for
inherently asymmetric theories by using the following result: Given two
consistent diagonal rational conformal field theories ({\it a priori}
with different
chiral algebras) with modular invariant partition functions
$Z^A=\sum\chi^A_i\bar{\chi}^A_i$ and $Z^B=\sum\chi^B_i\bar{\chi}^B_i$,
the left-right asymmetric partition function given by
$Z^{A\bar{B}}=\sum\chi^A_i\bar{\chi}^B_i$ will be modular invariant
if and only if: (1) the conformal dimensions
agree modulo 1, or more precisely
$h^A_i-c^A/24=h^B_i-c^B/24\pmod{1}$; and (2) the
fusion rules of the two theories coincide, $\phi^A_i\times \phi^A_j=\sum_k
N_{ij}^k\phi^A_k$ and $\phi^B_i\times \phi^B_j=\sum_k N_{ij}^k\phi^B_k$.
Condition (1) is obviously necessary and sufficient for $Z^{A\bar{B}}$ to be
$T$ invariant. Condition (2) is almost immediate given Verlinde's
results.\mpr{verlinde88} $Z^{A\bar{B}}$ is invariant under the $S$
transformation only if the
$S$ matrices implementing the modular transformations on the characters of the
$A$ and $B$ theories coincide, $S^A_{ij}=S^B_{ij}$. As Verlinde showed, the
fusion rule coefficients determine the $S$ matrix,\footnote{To be precise,
Verlinde showed that the eigenvalues, $\lambda_i^{(j)}$, of the matrices
$(N_i)_l^{\ k}$ satisfy
$\lambda_i^{(j)}=S_{ij}/S_{0j}$ but there could be an ambiguity in the
choice of superscript $(j)$ labeling each member of the set
of eigenvalues of $(N_i)_l^{\ k}$. We believe in the present case that this
ambiguity is fixed given $T$ and the requirement $(ST)^3=1$, but have no
proof.} so condition (2) is required
for $S^A=S^B$. In employing this relation it is crucial to define the primary
fields with respect to the full $A$ and $B$ chiral algebras.
As a simple example, consider the theories $A= SO(31)$ level-one,
and $B= (E_8)$ level-two, both with central charge $c=31/2$.
The consistent diagonal theories have partition functions,
$$Z^A=
\chi_0\bar{\chi}_0+ \chi_{{1\over 2}}\bar{\chi}_{{1\over 2}}+
\chi_{{31\over 16}}\bar{\chi}_{{31\over 16}}\eq{so31pf}$$
and
$$Z^B=
\chi_0\bar{\chi}_0+ \chi_{{3\over 2}}\bar{\chi}_{{3\over 2}}+
\chi_{{15\over 16}}\bar{\chi}_{{15\over 16}}\,\, ,\eq{e8pf}$$
where the characters are labeled
by the conformal dimensions of the associated primary fields.
The fusion rules in
both theories are analogous to those in the Ising model.
The asymmetric partition function,
$$Z^{AB}=
\chi^A_0\bar{\chi}^B_0+ \chi^A_{{1\over 2}}\bar{\chi}^B_{{3\over 2}}+
\chi^A_{{31\over 16}}\bar{\chi}^B_{{15\over 16}}\,\, ,\eq{asymppf}$$
satisfies conditions (1)
and (2), and so is itself a modular invariant. This can
also be constructed by choosing appropriate boundary conditions for a
collection of 31 free, real fermions.
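Condition (1) is quick to check for this pairing. Below is a small numeric
sketch (our own, in exact rational arithmetic) confirming
$h^A_i-c^A/24=h^B_i-c^B/24\pmod{1}$ for the three terms of \pe{asymppf}:

```python
# Sketch: condition (1) for the pairing of SO(31) level-one, eq. (so31pf),
# with (E8) level-two, eq. (e8pf); both central charges equal 31/2.
from fractions import Fraction as F

cA = cB = F(31, 2)
pairs = [(F(0), F(0)),            # identity with identity
         (F(1, 2), F(3, 2)),      # h = 1/2   paired with h = 3/2
         (F(31, 16), F(15, 16))]  # h = 31/16 paired with h = 15/16

for hA, hB in pairs:
    assert ((hA - cA / 24) - (hB - cB / 24)) % 1 == 0
print("h - c/24 agrees mod 1 for all three paired characters")
```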
A more interesting example, which cannot be constructed from free bosons or
fermions or by twisting a known invariant by a simple current, is the
following. For the $A$ theory we take the simple tensor product of the
diagonal
theories for G$_2$ level-one and $SU(3)$ level-two;
for the $B$ theory the simple
tensor product of the diagonal theories for F$_4$ level-one and the three
state
Potts model. The central charges coincide: $c^A=14/5+16/5=6$;
$c^B=26/5+4/5=6$.
The primary fields appearing in each theory are, for G$_2$ level-one
the identity and 7 ($h={2\over 5}$); for $SU(3)$ level-two the identity, 3 and
$\bar{3}$ ($h={4\over 15}$), 6 and ${\bar 6}$ ($h={2\over 3}$), and 8
($h={3\over 5}$); for F$_4$ level-one the identity and 26
($h={3\over 5}$); and for
the Potts model, the primaries, labeled by their conformal dimensions, are 0,
${2\over 5}$, ${2\over 3}$, ${\bar {2\over 3}}$, ${1\over 15}$, and
${\bar {1\over 15}}$.
To economically list the fusion rules for these theories we can simply list
the non-vanishing three-point amplitudes (where we represent the field by its
conformal dimension). Besides the obvious ones involving the identity operator
($\langle \phi\bar{\phi} 0\rangle$) these are: for G$_2$ level-one
$\langle {2\over 5}, {2\over 5}, {2\over 5}\rangle$; for $SU(3)$ level-two,
$\langle
{4\over 15},{4\over 15},{4\over 15}\rangle$, $\langle {4\over 15},{4\over 15},
\bar{{2\over 3}}\rangle$, $\langle
{4\over 15},\bar{{4\over 15}},{3\over 5}\rangle$, $\langle {4\over 15},{3\over
5},{2\over 3}\rangle$, $\langle
{2\over 3},{2\over 3},{2\over 3}\rangle$, $\langle {3\over 5},{3\over 5},
{3\over 5}\rangle$, and the conjugates of
these; for F$_4$ level-one, $\langle {3\over 5},{3\over 5},{3\over 5}\rangle$;
and for the three
state Potts model, $\langle
{1\over 15},{1\over 15},{1\over 15}\rangle$, $\langle {1\over 15},{1\over 15},
\bar{{2\over 3}}\rangle$, $\langle
{1\over 15},\bar{{1\over 15}},{2\over 5}\rangle$, $\langle {1\over 15},{2\over
5},{2\over 3}\rangle$, $\langle
{2\over 3},{2\over 3},{2\over 3}\rangle$, $\langle {2\over 5},{2\over 5},
{2\over 5}\rangle$, and the conjugates of
these. All of the non-zero fusion rule coefficients, $N_{ijk}$, for these four
theories are equal to one.
Given the obvious similarities of the fusion rules and conformal dimensions
of these theories, it is not difficult to verify that the asymmetric partition
function given by,
$${\eqalign{Z^{AA^{\prime}BB^{\prime}}=
&\chi^A_{0}\chi^{A^{\prime}}_{0}\bar{\chi}^B_{0}
\bar{\chi}^{B^{\prime}}_{0}+\chi^A_{0}\chi^{A^{\prime}}_{{3\over 5}}
\bar{\chi}^B_{{3\over 5}}
\bar{\chi}^{B^{\prime}}_{0}+\chi^A_{0}\chi^{A^{\prime}}_{{2\over 3}}
\bar{\chi}^B_{0}
\bar{\chi}^{B^{\prime}}_{{2\over 3}}+\chi^A_{0}\chi^{A^{\prime}}_{\bar{{2\over
3}}}
\bar{\chi}^B_{0}\bar{\chi}^{B^{\prime}}_{\bar{{2\over 3}}}+\cr &\chi^A_{0}
\chi^{A^{\prime}}_{{4\over 15}}\bar{\chi}^B_{{3\over 5}}
\bar{\chi}^{B^{\prime}}_{\bar{{2\over 3}}}+\chi^A_{0}\chi^{A^{\prime}}_{
\bar{{4\over 15}}}
\bar{\chi}^B_{{3\over 5}}\bar{\chi}^{B^{\prime}}_{{2\over 3}}+\chi^A_{{2\over
5}}
\chi^{A^{\prime}}_{0}\bar{\chi}^B_{0}\bar{\chi}^{B^{\prime}}_{{2\over 5}}+
\chi^A_{{2\over 5}}\chi^{A^{\prime}}_{{3\over 5}}\bar{\chi}^B_{{3\over 5}}
\bar{\chi}^{B^{\prime}}_{{2\over 5}}+\cr &\chi^A_{{2\over 5}}
\chi^{A^{\prime}}_{{2\over 3}}
\bar{\chi}^B_{0}\bar{\chi}^{B^{\prime}}_{\bar{{1\over 15}}}+\chi^A_{{2\over 5}}
\chi^{A^{\prime}}_{\bar{{2\over 3}}}\bar{\chi}^B_{0}\bar{\chi}^{B^{\prime}}_{
{1\over 15}}+
\chi^A_{{2\over 5}}\chi^{A^{\prime}}_{{4\over 15}}\bar{\chi}^B_{{3\over 5}}
\bar{\chi}^{B^{\prime}}_{{1\over 15}}+\chi^A_{{2\over 5}}\chi^{A^{\prime}}_{
\bar{{4\over 15}}}
\bar{\chi}^B_{{3\over 5}}\bar{\chi}^{B^{\prime}}_{\bar{{1\over 15}}}\,\,
,\cr}}\eq{gsfp1}
$$
satisfies the two conditions for modular invariance.
Here $A$, $A^{\prime}$, $B$, and $B^{\prime}$ denote the G$_2$, $SU(3)$,
F$_4$,
and Potts theories, respectively.
An alternative sewing of the operators in these four conformal
field theories gives rise to the diagonal E$_6$ level-one modular invariant
$${\eqalign{Z^{{\rm E}_6}=
&\chi^A_{0}\chi^{A^{\prime}}_{0}\bar{\chi}^B_{0}
\bar{\chi}^{B^{\prime}}_{0}+\chi^A_{{2\over 5}}\chi^{A^{\prime}}_{{3\over 5}}
\bar{\chi}^B_{0}
\bar{\chi}^{B^{\prime}}_{0}+\chi^A_{0}\chi^{A^{\prime}}_{0}\bar{\chi}^B_{
{3\over 5}}
\bar{\chi}^{B^{\prime}}_{{2\over 5}}+\chi^A_{{2\over 5}}\chi^{A^{\prime}}_{
\bar{{3\over 5}}}
\bar{\chi}^B_{{3\over 5}}\bar{\chi}^{B^{\prime}}_{{2\over 5}}+\cr &\chi^A_{0}
\chi^{A^{\prime}}_{{2\over 3}}\bar{\chi}^B_{0}
\bar{\chi}^{B^{\prime}}_{\bar{{2\over 3}}}+\chi^A_{0}\chi^{A^{\prime}}_{\bar{
{2\over 3}}}
\bar{\chi}^B_{0}\bar{\chi}^{B^{\prime}}_{{2\over 3}}+\chi^A_{0}
\chi^{A^{\prime}}_{{2\over 3}}\bar{\chi}^B_{{3\over 5}}\bar{\chi}^{
B^{\prime}}_{{1\over 15}}+
\chi^A_{0}\chi^{A^{\prime}}_{\bar{{2\over 3}}}\bar{\chi}^B_{{3\over 5}}
\bar{\chi}^{B^{\prime}}_{\bar{{1\over 15}}}+\cr &\chi^A_{{2\over 5}}\chi^{A^{
\prime}}_{{4\over 15}}
\bar{\chi}^B_{{3\over 5}}\bar{\chi}^{B^{\prime}}_{\bar{{1\over 15}}}+\chi^A_{
{2\over 5}}
\chi^{A^{\prime}}_{\bar{{4\over 15}}}\bar{\chi}^B_{{3\over 5}}\bar{\chi}^{B^{
\prime}}_{{1\over 15}}+
\chi^A_{{2\over 5}}\chi^{A^{\prime}}_{{4\over 15}}\bar{\chi}^B_{0}
\bar{\chi}^{B^{\prime}}_{{2\over 3}}+\chi^A_{{2\over 5}}\chi^{A^{\prime}}_{
\bar{{4\over 15}}}
\bar{\chi}^B_{0}\bar{\chi}^{B^{\prime}}_{\bar{{2\over 3}}}\,\, .\cr}}
\eq{gsfp2} $$
It is natural to suppose that the asymmetric modular invariant,
\pe{gsfp1}, can be
obtained from the symmetric one, \pe{gsfp2},
by twisting by the appropriate field or
fields.
This intuition is correct, but the twisting is not by a simple current
operator, and correspondingly there is no definite algorithm for achieving
it. In the symmetric theory the chiral algebra is enlarged (to
E$_6\otimes$E$_6$). Twisting by a simple current\mpr{schellekens89c}
cannot reduce the chiral algebra, and here gives back the same
theory. There is, however, a candidate field that is primary under the {\it
smaller} chiral algebra of the asymmetric theory, and which has simple fusion
rules when defined with respect to this algebra, namely the field $(0,{2\over
3}\vert 0,{2\over 3})$. Twisting $Z^{E_6}$ by this
operator, that is throwing
out those operators which when fused with
$(0,{2\over 3}\vert 0,{2\over3})$ give
$T$ noninvariant states while adding those $T$ invariant operators which
result from fusing, gives only a subset of the characters in the
asymmetric theory. To
get the full set we must add the operators formed by fusing $({2\over
5},{3\over 5}\vert {3\over 5},{2\over 5})$ with itself under the now modified
fusion rules of the new theory ({\it i.e.}, those preceding eq.~\pe{gsfp1}),
which {\it a priori} is an ambiguous procedure.
Similarly, twisting the asymmetric invariant by any combinations of simple
currents in that theory gives back the same invariant. In order to obtain
$Z^{E_6}$ we have to twist
by the non-simple current $({2\over 5},{3\over 5}\vert
0,0)$, with suitably modified fusion rules, which again is an ambiguous
procedure.
\hfill\vfill\eject
{\hb{3.5}{\bfs Concluding Comments on MIPFs}}
\vskip .5cm
The techniques introduced in sections 3.3 and 3.4
make the classification of modular
invariants for tensor product theories built from a small number of factors
feasible. A complete classification of the invariants for
$SU(2)_{K_A}\otimes SU(2)_{K_B}$ theories, that is the straightforward
extension of the results of section 3.2 to even $K$, may, in particular, prove
interesting if there is some generalization of the ADE classification found for
the single theories. Nonetheless, a complete classification for tensor product
theories built with many factors is not likely to be found,
given the enormous number of
possibilities. For the purposes of string model building a procedure for
constructing any new class of invariants,
such as the one known for free field
constructions, would be progress. Perhaps a generalization of the twisting
procedure to operators with nontrivial (or altered) fusion rules, as suggested
by the example in section 3.3, could produce one. In this regard, the results
of \pr{warner90, roberts92}
(which have been extensively exploited recently by Gannon\mpr{gannon92})
are an intriguing step, though not totally satisfactory. In these works,
new tensor product modular invariants
are obtained by shifting the momentum
lattice of a free boson theory, but at the cost of
sacrificing positivity of the coefficients in the partition function.
Finally we must stress that the condition of (one-loop)
modular invariance alone is
insufficient to guarantee a consistent conformal field theory; for
constructions not based on free fields we must still check that there is a
consistent operator algebra.
\hfill\vfill\eject
\pagenum=46
\chapternumstyle{blank}\sectionnumstyle{blank}
\noindent {\bf Chapter 4: Fractional Superstrings}\vskip .8cm
{\hb{4.1}{\bfs Introduction to Fractional Superstrings}}\vskip .5cm
\chapternumstyle{arabic}\chapternum=4
\sectionnumstyle{arabic}\sectionnum=1
\equationnum=0
In the last few years, several generalizations of standard (supersymmetric)
string
theory have been proposed.\mpr{schwarz87,schellekens89,kaku91,pope92}
One of
them\mpr{dienes92b,argyres91b,argyres91d,argyres91a,argyres91c}
uses the (fractional spin)
parafermions introduced from the perspective of
2-D conformal field
theory (CFT) by Zamolodchikov and Fateev\mpr{zamol87} in 1985
and further developed by Gepner\mpr{gepner87} and
Qiu.\footnote{This is not to be confused with
the original definition of ``parafermions.''
The term ``parafermion'' was introduced by H. S. Green in
1953.\mpr{green53} Green's parafermions are defined as
spin-1/2 particles that do not obey standard
anticommutation rules, but instead follow more general trilinear
relations.\mpr{ardalan74,mansouri87,antoniadis86}}
In a series of papers, possible
new string theories with local parafermionic world-sheet
currents (of fractional conformal spin) giving critical dimensions
\hbox{$D=6$, $4$, $3, {\rm ~and~}2$} have been
proposed.\mpr{dienes92b,argyres91b,argyres91d,argyres91a,argyres91c}
At the heart of these new ``fractional superstrings'' are $Z\!\!\!Z_K$
parafermion conformal field theories (PCFT's) with central charge
$c= {2(K-1)\over K+2}$. (Equivalently, these are $SU(2)_K/U(1)$ conformal
field theories.)
The (integer) level-$K$
PCFT contains a set of unitary primary fields $\phi^j_m$, where
$0\leq j$, $\vert m\vert\leq K/2$; $j,\, m\in Z\!\!\!Z/2$, and
$j-m = 0 \pmod{1}$.
These fields have the identifications
$$\phi^j_m = \phi^j_{m+K} = \phi^{{K\over 2}-j}_{m-{K\over 2}}
\,\, .\eqno\eqnlabel{phidents}$$
In the range $\vert m\vert\leq j$, the conformal dimension is
$h(\phi^j_m) = {j(j+1)\over K+2} - {m^2\over K}$.
At a given level
the fusion rules are
$$\phi^{j_1}_{m_1}\otimes\phi^{j_2}_{m_2} = \sum^r_{j=\vert j_1 - j_2\vert}
\phi^j_{m_1+m_2}\, ,\eqno\eqnlabel{fusion}$$
where $r\equiv {\rm min}\, (j_1 + j_2, K-j_1-j_2)$.
This CFT contains a subset of primary fields,
$$\{\phi_i\,\equiv\phi^0_i\equiv\phi^{K/2}_{-K/2 + i}\,\, ;\,\, 0\leq i
\leq
K-1\}
\eqno\eqnlabel{subset}$$
$(\phi^{ \dagger }_i\equiv\phi_{K-i})$
which, under fusion, form a closed subalgebra possessing
a $Z\!\!\!Z_K$ Abelian symmetry:
$$\phi_i\otimes\phi_j=\phi_{(i+j)}\pmod{K}\,.\eqno\eqnlabel{Zk}$$
The conformal dimensions, $h(\phi_i)$, of the fields in this subgroup
have the form
$$h(\phi_i)= {i(K-i)\over K}\,\, . \eq{condimzkcsa}$$
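As a cross-check of these formulas, the sketch below (our own, in exact
rational arithmetic) evaluates $h(\phi^j_m)$ from a representative with
$\vert m\vert\leq j$ and confirms both $h(\phi^1_0)={2\over K+2}$, used
shortly for the energy operator, and $h(\phi_i)={i(K-i)\over K}$ via the
identification $\phi_i=\phi^{K/2}_{-K/2+i}$ of (\puteqn{subset}).

```python
# Sketch: conformal dimensions of the Z_K parafermion fields.
from fractions import Fraction as F

def h(K, j, m):
    assert abs(m) <= j                   # representative with |m| <= j
    return j * (j + 1) / F(K + 2) - m * m / F(K)

for K in (4, 8, 16):                     # the fractional superstring levels
    assert h(K, 1, 0) == F(2, K + 2)     # the energy operator epsilon
    for i in range(K):                   # phi_i = phi^{K/2}_{i-K/2}
        assert h(K, F(K, 2), i - F(K, 2)) == F(i * (K - i), K)
print("h(phi_i) = i(K-i)/K and h(epsilon) = 2/(K+2) verified for K = 4, 8, 16")
```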
It has been proposed that string models based on tensor products of a
level-$K$ PCFT are
generalizations of the Type II $D=10$
superstring.\mpr{dienes92b,argyres91b,argyres91d,argyres91a,argyres91c}
In these potential models,
the standard $c={1\over 2}$ fermionic superpartner of the
holomorphic world sheet
scalar, $X(z)$, is replaced by the ``energy operator,''
$\epsilon\equiv\phi^1_0$,
of the {\tenBbb Z}$_K$ PCFT.\footnote{Note that $\epsilon$ is not in
the {\tenBbb Z}$_K$ Abelian subgroup, and thus is not a
{\tenBbb Z}$_K$ parafermion, except
for the degenerate $K=2$ superstring case where $\phi^1_0\equiv\phi^0_1$.}
$\epsilon$ has conformal dimension (spin)
$2\over K+2$, which is ``fractional''
({\it i.e.}, neither integral nor half-integral),
for $K\ne 2$.
This accounts for the name of these models.
Each $\epsilon-X$ pair has a total conformal anomaly (or central charge)
$c={3K\over K+2}$.
The naive generalization of the (holomorphic)
supercurrent (SC) of the standard superstring,
$J_{\rm SC}(z)= \psi (z)\cdot\partial X(z)$
(where $\psi$ is a real world sheet fermion),
to $J_{\rm FSC}=\phi^1_0(z)\cdot\partial X(z)$
proves to be inadequate.\mpr{argyres91c}
Instead, the proposed ``fractional supercurrent'' (FSC) is
$$J_{\rm FSC}(z)= \phi^1_0(z)\cdot \partial_z X(z) + :\phi^1_0 (z)\phi^1_0 (z):\,\, .
\eqno\eqnlabel{current}$$
$:\phi^1_0 (z)\phi^1_0 (z):$ (which vanishes for $K=2$ since $\phi^1_0=\psi$ at $K=2$)
is the first
descendent field of $\phi^1_0$.
$J_{\rm FSC}(z)$ is the generator of a local ``fractional'' world sheet
supersymmetry between $\epsilon(z)$ and $X(z)$,
extending the Virasoro algebra of the stress-energy
tensor $T(z)$. This local current of spin
\hbox{$h(J_{\rm FSC})= 1 + {2\over K+2}$}
has fractional powers of $1\over (z-w)$ in
the OPE with itself, implying a non-local world-sheet interaction and,
hence, producing cuts on the world sheet.
The corresponding chiral ``fractional superconformal
algebra''\mpr{argyres91c} is,
\subequationnumstyle{alphabetic}
$$T(z)T(w)={{1\over 2}c\over (z-w)^4}+{2T(w)\over
(z-w)^2}+...\eqno\eqnlabel{FSCalgebra-a}$$
$$T(z)J_{\rm FSC}(w)={h J_{\rm FSC}(w)\over (z-w)^2}+
{\partial J_{\rm FSC}(w)\over
(z-w)}+...\eqno\eqnlabel{FSCalgebra-b}$$
$$J_{\rm FSC}(z)J_{\rm FSC}(w)={1\over (z-w)^{2h}}+{{2h\over c}T(w)\over
(z-w)^{2h-2}}+{\lambda_K(c_0)J_{\rm FSC}(w)\over (z-w)^{h}}+{{1\over
2}\lambda_K(c_0)\partial J_{\rm FSC}(w)\over
(z-w)^{h-1}}+...\eqno\eqnlabel{FSCalgebra-c}$$
\subequationnumstyle{blank}
where $c=Dc_0$. $D$ is the critical dimension,
$c_0= {3K\over K+2}$ is the central charge for one dimension,
and $\lambda_K$ is a constant.\mpr{argyres91f}
The relationship between critical dimension, $D$,
and the level, $K$, of the parafermion CFT may be shown to be
$$ D= 2 + {16\over K}\, , \eqno\eqnlabel{dimeqn}$$
for $K=2,\, 4,\, 8,\, 16,\, {\rm and~} \infty$.
In \pr{dienes92b,argyres91b,argyres91d,argyres91a,argyres91c}
the relationship \pe{dimeqn} is
derived by requiring a massless spin-1 particle in the open string
spectrum, produced by $\phi^1_0(z)^{\mu}$
(where $\mu$ is the spacetime index) operating on the vacuum.
\vskip .5cm
{\hc{4.1.a}{\sl Parafermion Characters}}
\vskip .5cm
Before we present the computer generated
fractional superstring partition functions of
refs.~\pr{dienes92b, argyres91b,argyres91d,argyres91e} and follow with
a new derivation of these partition functions, as a prerequisite
we wish to
discuss the characters $Z(\phi^j_m)$ for the Verma
modules, $[\phi^j_m]$,\footnote{From here on, we do not distinguish between
the primary field $\phi^j_m$ and its complete Verma module $[\phi^j_m]$.
Thus, $\phi^j_m$ can represent either, depending on the context.}
for $j, \vert m\vert< K/2$.
Each Verma module contains
a single (holomorphic) parafermionic primary field
$\phi^j_m(z)$ and its parafermion descendents.
The form of the characters is
$$\eqalignno{
Z(\phi^j_m)&= q^{-c/24} {\rm tr}\, q^{L_0}\cr
&= \eta(\tau)c^{2j}_{2m}(\tau)\, ,&\eqnlabel{2a}}$$
where $q= e^{2\pi i\tau}$
and $\eta$ is the Dedekind eta-function,
$$ \eta(\tau) = q^{1/24}\prod^{\infty}_{n=1}(1-q^n)\,\, .
\eqno\eqnlabel{defeta}$$
$c^{2j}_{2m}(\tau)$ is a
string function\mpr{kac80} defined by
\subequationnumstyle{alphabetic}
$$\eqalignno{
c^{2j}_{2m}(\tau)
&= {1\over \eta^3(\tau)}\sum_{x,y} {\rm sign}(x)
q^{x^2(K+2) -y^2K} &\eqnlabel{cfn-a}\cr
&= q^{h^j_m + {1\over 4(K+2)}}{1\over \eta^3}
\sum^{\infty}_{r,s=0} (-1)^{r+s}
q^{r(r+1)/2 + s(s+1)/2 + rs(K+1)}\times\cr
&\quad \left\{ q^{r(j+m) + s(j-m)} - q^{K+1-2j+r(K+1-j-m)+s(K+1-j+m)}\right\}
&\eqnlabel{cfn-b}\cr
&= q^{h^j_m - {c(SU(2)_K)\over 24}}(1 + \cdots)& \eqnlabel{cfn-c}}$$
\subequationnumstyle{blank}
where in \pe{cfn-a} the conditions
\item{1.} $-\vert x\vert<y<\vert x\vert\, ,$
\item{2.} either $x={2j+1\over 2(K+2)} \pmod{1}$
or $({1\over 2} - x)= {2j+1\over 2(K+2)} \pmod{1}\, ;$ and
\item{3.} either $y= {m\over K} \pmod{1}$
or $({1\over 2} + y) = {m\over K} \pmod{1}$
\noindent must be met simultaneously.
($h^j_m \equiv h(\phi^j_m)$ and $c(SU(2)_K)= {3K\over K+2}$.)
These
string functions obey the same equivalences as their associated primary
fields $\phi^j_m$:
\subequationnumstyle{alphabetic}
$$c^{2j}_{2m}= c^{2j}_{2m+2K} = c^{K-2j}_{2m-K}\, .\eqno\eqnlabel{cid-a}$$
Additionally, since $\phi^j_{m}=(\phi^j_{-m})^{\dagger}$,
$$c^{2j}_{2m}= c^{2j}_{-2m}\, .\eq{cid-b}$$
\subequationnumstyle{blank}
Since the $K=2$ theory
is the standard Type II superstring
theory,\footnote{The $K=2$ parafermion model is a
$c={1\over2}$ CFT that
corresponds to a critical Ising (free fermion) model.}
expressing its partition function
in terms of string functions rather than
theta-functions
can be accomplished simply using the following set of identities:
$$ {K=2: \cases{2\eta^2(c^1_1)^2 = \vartheta_2/\eta\, ;\cr
\eta^2(c^0_0 + c^2_0)^2 = \vartheta_3/\eta\, ;\cr
\eta^2(c^0_0 - c^2_0)^2 = \vartheta_4/\eta\, .}}\eq{fpequiv}$$
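These identities are easy to test numerically. The sketch below (our own
check) assumes the standard free-fermion product forms of the Ising
characters for the $K=2$ theory,
$\eta(c^0_0\pm c^2_0)=q^{-1/48}\prod_n(1\pm q^{n-1/2})$ and
$\eta c^1_1=q^{1/24}\prod_n(1+q^n)$, and compares them with the Jacobi theta
sums at an arbitrary sample point.

```python
# Sketch: numeric check of eq. (fpequiv), with the K=2 string functions
# expressed through the Ising (free fermion) characters -- see the footnote
# above.  The sample point and series cutoffs are arbitrary.
import numpy as np

tau = 0.9j
q = np.exp(2j * np.pi * tau)
n = np.arange(1, 400)
k = np.arange(-60, 61)

eta = q**(1 / 24) * np.prod(1 - q**n)
th2 = np.sum(q**((k + 0.5)**2 / 2))          # theta sums, q = e^{2 pi i tau}
th3 = np.sum(q**(k**2 / 2))
th4 = np.sum((-1.0)**k * q**(k**2 / 2))

ns_plus  = q**(-1 / 48) * np.prod(1 + q**(n - 0.5))   # eta*(c^0_0 + c^2_0)
ns_minus = q**(-1 / 48) * np.prod(1 - q**(n - 0.5))   # eta*(c^0_0 - c^2_0)
ramond   = q**(1 / 24) * np.prod(1 + q**n)            # eta*c^1_1

assert np.isclose(2 * ramond**2, th2 / eta)
assert np.isclose(ns_plus**2,    th3 / eta)
assert np.isclose(ns_minus**2,   th4 / eta)
print("eq. (fpequiv) verified numerically at tau =", tau)
```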
For each spacetime dimension in these theories, a term in the
partition function of the form (\puteqn{2a})
is tensored with the partition function $Z\left(X\right)$
for an uncompactified chiral boson $X(z)$. Since
$$ Z\left(X\right)\propto {1\over\eta(\tau)}\,\, ,
\eqno\eqnlabel{partstdim}$$
the $\eta(\tau)$ factors cancel out in
$Z(\phi^j_m)\times Z(X)$. Similar cancellation of
$\bar\eta(\bar\tau)$ occurs in the antiholomorphic sector.
In the
following partition functions, we suppress the trivial factor of
$({\rm Im}\, \tau)^{-8/K}$ contributed together by the $D-2$ holomorphic and
anti-holomorphic world sheet boson partition functions.
The purpose of this chapter is to examine a number of issues relating
to these models:
In section 4.2 we derive the
partition functions of the $D=6$, $4$, and $3$ theories (corresponding to
$K=4$, $8$ and $16$ respectively), using the factorization method of Gepner
and Qiu,\mpr{gepner87}
as well as
demonstrating a new approach to obtaining the superstring partition function.
In section 4.3 we consider other necessary elements of string
theory. In particular, we propose a generalization of the GSO
projection that applies to the fractional superstring and we address
the question of whether similar theories at different Ka\v c-Moody levels
can be constructed.
Additionally, a comparison with the superstring is made and
we attempt to elucidate its features in the current, more general
context.
\vskip .5cm
{\hb{4.2}{\bfs Fractional Superstring Partition Functions}}
\vskip .5cm
\sectionnum=2\equationnum=0
Computerized searches demonstrated that for each (and only those) $K$
listed above, there is a unique one-loop partition function
(written in light-cone gauge) that is (1) modular invariant,
(2) contains a term, $(c^0_0)^{D-3}(c^2_0)$,
which is the character for a
massless spacetime spin-2 particle generated by an untwisted
non-chiral
$\phi^1_0(z){\bar{\phi}}^1_0(\overline z)$ field acting on the vacuum,
and (3) has no characters for
tachyonic states.\mpr{dienes92b, argyres91b,argyres91d,argyres91e}
Partition functions with these properties were found to exist only in
10, 6, 4, and 3 dimensions and were presented as:
{\settabs 8 \columns
\+ \cr
\+ $D=10$ & $(K=2)$: &&$Z = \vert A_2\vert^2$, ~where\cr}
$$\eqalignno{ A_2 &= 8(\cc0 0)^7(\cc2 0)+56(\cc0 0)^5(\cc2 0)^3+56(\cc0
0)^3(\cc2
0)^5+8(\cc0 0)(\cc2 0)^7-8(\cc1 1)^8\cr &= {1\over
2}\eta^{-12}(\vartheta^4_3-\vartheta^4_4-\vartheta^4_2) &\eqnlabel{ipart2}\cr}$$
{\settabs 8 \columns
\subequationnumstyle{alphabetic}
\+ $D=6$ & $(K=4)$: &&$Z= \vert A_4\vert^2 + 3\vert B_4\vert^2$, ~where\cr}
$$\eqalignno {A_4 &= 4(\cc0 0+\cc4 0)^3(\cc2 0)-4(\cc2 0)^4-4(\cc2 2)^4+32(\cc2
2)(\cc4 2)^3 &\eqnlabel{ipart4-a}\cr B_4 &= 8(\cc0 0+\cc4 0)(\cc2 0)(\cc4
2)^2+4(\cc0 0+\cc4 0)^2(\cc2 2)(\cc4 2)-4(\cc2 0)^2(\cc2
2)^2&\eqnlabel{ipart4-b}\cr}$$
{\settabs 8 \columns
\subequationnumstyle{blank}
\subequationnumstyle{alphabetic}
\+ $D=4$ & $(K=8)$:
&&$Z= \vert A_8\vert^2 + \vert B_8\vert^2 + 2\vert C_8\vert^2$, ~where\cr}
$$\eqalignno { A_8 & = 2(\cc0 0+\cc8 0)(\cc2 0+\cc6 0)-2(\cc4 0)^2-2(\cc4
4)^2+8(\cc6 4\cc8 4)&\eqnlabel{ipart8-a}\cr B_8 &= 4(\cc0 0+\cc8 0)(\cc6
4)+4(\cc2
0+\cc6 0)(\cc8 4)-4(\cc4 0\cc4 4)&\eqnlabel{ipart8-b}\cr
C_8 &= 4(\cc2 2+\cc6 2)(\cc8 2+\cc8 6)-4(\cc4 2)^2&\eqnlabel{ipart8-c}\cr}$$
{\settabs 8 \columns
\subequationnumstyle{blank}
\subequationnumstyle{alphabetic}
\+ $D=3$ & $(K=16)$:
&&$ Z = \vert A_{16}\vert^2 + \vert C_{16}\vert^2 $, ~where\cr}
$$\eqalignno{A_{16} &= \cc2 0+\cc{14} 0-\cc8 0-\cc8 8+2\cc{14}
8&\eqnlabel{ipart16-a}\cr
C_{16} &= 2\cc2 4+2\cc{14} 4-2\cc8 4\,\, .&\eqnlabel{ipart16-b}\cr}$$
\subequationnumstyle{blank}
These closed-string partition functions
$Z({\rm level-} K)$
all have the
general form $$ Z({\rm level-} K) = \vert A_K\vert^2 + \vert B_K\vert^2 + \vert
C_K\vert^2\,\, ,
\eqno\eqnlabel{pf1}$$
up to the integer multiplicities (and, for $K=2$ and $K=16$, the absent
sectors) displayed above.
The $D=10$ partition function, in string function format, was
obtained by the authors of refs.~\pr{dienes92b, argyres91b, argyres91d}
as a check of their program, both by computer generation and by the
$K=2$ string functions/Jacobi $\vartheta$-functions equivalences.
In the above partition functions,
the characters for the massless graviton (spin-2 particle) and
gravitino (spin-${3\over 2}$) are terms in the
$A_K$--sector, $\vert A_K\vert^2$.
The $D<10$ fractional superstrings have a new feature not
present in the standard $D=10$ superstrings.
This is the existence of the
massive $B_K$-- and $C_K$--sectors. These additional
sectors were originally derived in the computer program of the authors of
refs. \hbox{\pr{dienes92b,argyres91b,argyres91d} by applying $S$
transformations}
to the $A_K$--sector and then demanding modular invariance of the
theory.
An obvious question with respect to these partition functions is
how to interpret the relationship between the spacetime spin of the
physical
states and the subscripts of the corresponding characters in the partition
functions.
The solution is not immediately transparent for general $K$.
$K=2$ is, of course, the exception.
Based on the aforementioned identities \pe{fpequiv},
we see that terms with all $n_i\equiv 2m_i=0$
correspond to spacetime bosons, while those with all $n_i\equiv 2m_i= K/2$
correspond to spacetime fermions. This rule also seems to be followed
by terms in $A_K$ for all $K$.
In the $B_K$-- and $C_K$--sectors, interpretation is much less clear.
There have been two suggested meanings for
the terms in $B_K$ and $C_K$. The first hypothesis is that
these terms
correspond to massive spacetime anyons, specifically
spin-${1\over 4}$ particles for $B_K$, and
massive spin-${1\over 8}$ particles for $C_K$.
The second alternative is that
the $B_K$--sector particles are spacetime fermions and bosons, but
with one (for $K=8$) or two (for $K=4$) spatial dimensions
compactified.\mpr{dienes92b, argyres91b,argyres91d,argyres91e}
Along this line of thought, the $C_K$--sector particles are still
interpreted to be spacetime anyons,
but with spin-${1\over4}$ rather than spin-${1\over8}$.
Until present, the general consensus has been that spacetime
anyons can presumably exist only in three or fewer uncompactified dimensions.
This would seem to contradict suggestions that the $D=4$ or $D=6$ models
may contain spacetime anyons unless at least one or two dimensions,
respectively, are compactified.
Based on other, independent reasons,
we will suggest that this compactification does automatically occur in
fractional superstring models.
Further, we will also show that there are possibly no physical states in
the $C_K$--sector for $K=8$.
At each level of $K$, the contribution of each sector is separately zero.
This is
consistent with spacetime SUSY and suggests cancellation
between bosonic and fermionic terms at each mass level.
This leads to the following
identities:\mpr{dienes92b}
$$A_2=A_4=B_4=A_8=B_8=C_8=A_{16}=C_{16}=0\,\, .\eqno\eqnlabel{partident}$$
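The $K=2$ entry is the classical statement that
$\vartheta_3^4-\vartheta_4^4-\vartheta_2^4=0$ (Jacobi's identity), as is
evident from (\puteqn{ipart2}); a numeric sketch of our own, built from the
theta sums:

```python
# Sketch: A_2 = (1/2) eta^{-12} (theta_3^4 - theta_4^4 - theta_2^4) of
# eq. (ipart2) vanishes identically; we test the theta combination at a few
# points of the upper half plane.
import numpy as np

def jacobi_thetas(tau, N=60):
    q = np.exp(2j * np.pi * tau)
    k = np.arange(-N, N + 1)
    th2 = np.sum(q**((k + 0.5)**2 / 2))
    th3 = np.sum(q**(k**2 / 2))
    th4 = np.sum((-1.0)**k * q**(k**2 / 2))
    return th2, th3, th4

for tau in (0.8j, 0.3 + 1.1j, -0.45 + 0.65j):
    th2, th3, th4 = jacobi_thetas(tau)
    assert abs(th3**4 - th4**4 - th2**4) < 1e-12
print("theta_3^4 - theta_4^4 - theta_2^4 = 0, so A_2 = 0 identically")
```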
In this section, we will introduce
a new method for generating these partition functions that reveals
(1) new aspects of
the relationship between
the $B_K$-- and $C_K$--sectors and the $A_K$--sector,
and (2) the evidence for
spacetime supersymmetry in all sectors.
(Specifically, these type II models should have $N=2$ spacetime SUSY, with the
holomorphic and antiholomorphic sectors each effectively contributing an
$N=1$ SUSY. Hence, heterotic fractional superstrings would only possess $N=1$
SUSY.)
We will
demonstrate that cancellation suggestive of spacetime SUSY results from
the action of a simple twist current used in the
derivation of these partition functions. Only by this twisting can
cancellation between bosonic and fermionic terms occur at each mass level
in the $A_K$-- and $B_K$--sectors. The same twisting
results in a
``self-cancellation'' of terms in the $C_K$--sector, and does, indeed,
suggest the anyonic spin-${1\over4}$
interpretation of the $C_K$--sector states.
\vskip .5cm
{\hc{4.2.a}{\sl New Derivation of the Partition Functions}}\vskip .5cm
We find the computer generated partition functions listed above
not to be in the most suggestive form.
By using the string function equivalences,
(\puteqn{cid-a}-b),
the partition functions for the level-$K$ fractional superstrings in
refs.~\pr{dienes92b,argyres91b,argyres91d,argyres91e} with critical spacetime
dimensions
\hbox {$D= 2 + {16\over K}= 10, 6,$} {$\,4,\, {\rm~and~}3$}
can be rewritten (in light-cone gauge) in the form below.
\begin{ignore}
Effectively, the rewritten partition functions
correspond to first replacing a single character, $c^l_n$, by the symmetrized
sum of characters, ${1\over 2} (c^l_n + c^{K-l}_n)$ and then forming subsectors
in each mod-squared term, based on the values of the subscripts of the
characters.
\end{ignore}
{\settabs 8 \columns
\+ \cr
\+ $D=10$ & $(K=2)$: &&$Z = \vert A_2\vert^2$, ~where\cr}
$$\eqalignno{ A_2 &= {1\over 2}\left\{ (c^0_0 + c^2_0)^8 - (c^0_0 -
c^2_0)^8 \right\}_{\rm boson} - 8(c^1_1)^8_{\rm fermion}\cr &= 8\left\{
(c^0_0)^7
c^2_0 + 7(c^0_0)^5(c^2_0)^3 +7(c^0_0)^3(c^2_0)^5 +
c^0_0(c^2_0)^7\right\}_{\rm boson} -
8(c^1_1)^8_{\rm fermion} &\eqnlabel{part2}}$$
{\settabs 8 \columns
\subequationnumstyle{alphabetic}
\+ \cr
\+ $D=6$ & $(K=4)$: &&$Z= \vert A_4\vert^2 + 3\vert B_4\vert^2$, ~where\cr}
$$\eqalignno {A_4 &= {\rm\hskip .37 truecm}4\left\{(c^0_0 + c^4_0)^3
(c^2_0) - (c^2_0)^4\right\}\cr &\quad + 4\left\{(c^0_2 + c^4_2)^3 (c^2_2) -
(c^2_2)^4\right\}&\eqnlabel{part4-a}\cr B_4 &= {\rm\hskip
.37truecm}4\left\{(c^0_0 +
c^4_0)(c^0_2+c^4_2)^2(c^2_0) - (c^2_0)^2(c^2_2)^2\right\}\cr &\quad +
4\left\{(c^0_2 + c^4_2)(c^0_0+c^4_0)^2(c^2_2) -
(c^2_2)^2(c^2_0)^2\right\}&\eqnlabel{part4-b}\cr}$$
{\settabs 8 \columns
\subequationnumstyle{blank}
\hfill\vfill\eject
\subequationnumstyle{alphabetic}
\+ $D=4$ & $(K=8)$:
&&$Z= \vert A_8\vert^2 + \vert B_8\vert^2 + 2\vert C_8\vert^2$, ~where\cr}
$$\eqalignno { A_8 & = {\rm\hskip .37 truecm}2\left\{(c^0_0 + c^8_0)(c^2_0
+ c^6_0) - (c^4_0)^2\right\}\cr &\quad +2\left\{(c^0_4+ c^8_4)(c^2_4+c^6_4)
- (c^4_4)^2\right\}&\eqnlabel{part8-a}\cr B_8 &= {\rm\hskip .37
truecm}2\left\{(c^0_0+c^8_0)(c^2_4+c^6_4)-(c^4_0c^4_4)\right\}\cr &\quad +
2\left\{(c^0_4+c^8_4)(c^2_0+c^6_0)-(c^4_4c^4_0)\right\}&\eqnlabel{part8-b}\cr
C_8 &= {\rm
\hskip .37truecm}2\left\{(c^0_2 + c^8_2)(c^2_2 + c^6_2) -
(c^4_2)^2\right\}\cr &\quad +2\left\{(c^0_2 + c^8_2)(c^2_2 + c^6_2) -
(c^4_2)^2\right\}&\eqnlabel{part8-c}\cr}$$ {\settabs 8 \columns
\subequationnumstyle{blank}
\subequationnumstyle{alphabetic}
\+ $D=3$ & $(K=16)$:
&&$ Z = \vert A_{16}\vert^2 + \vert C_{16}\vert^2 $, ~where\cr}
$$\eqalignno{A_{16} &= {\rm\hskip .3 truecm}\left\{(c^2_0 + c^{14}_0) -
c^8_0\right\}\cr &\quad +\left\{(c^2_8 + c^{14}_8) - c^8_8\right\}
&\eqnlabel{part16-a}\cr
C_{16} &= {\rm\hskip .3 truecm}\left\{(c^2_4 + c^{14}_4) -
c^8_4\right\}\cr &\quad +\left\{(c^2_4 + c^{14}_4) -
c^8_4\right\}\,\, .&\eqnlabel{part16-b}\cr}$$
\subequationnumstyle{blank}
The factorization method of Gepner and Qiu\mpr{gepner87} for
string function partition functions allows us to rederive
the fractional superstring partition functions in this new form
systematically. Using this approach we can express a
general parafermion partition function (with the level
of the string functions henceforth suppressed),
\subequationnumstyle{alphabetic}
$$ Z= \vert \eta\vert^2 \sum N_{l,n,\bar l,\bar n} c^l_n \bar c^{\bar
l}_{\bar n}\, ,\eqno\eqnlabel{partfn2-a}$$
in the form
$$Z= \vert\eta\vert^2\sum
{1\over 2}L_{l,\bar l}M_{n,\bar n}c^l_n \bar c^{\bar l}_{\bar n}\,
,\eqno\eqnlabel{partfn2-b}$$
\subequationnumstyle{blank}
(with $c^{l=2j}_{n=2m}=0$ unless $l-n\in 2Z\!\!\!Z$ since $\phi^j_m=0$ for
$j-m\not\in Z\!\!\!Z$). As a result of the factorization,
$$N_{l,n,\bar l,\bar n} = {1\over 2}L_{l,\bar l}\, M_{n,\bar n}\,\, ,
\eqno\eqnlabel{Nfactorization}$$
we can construct all modular invariant
partition functions (MIPF's) for parafermions from a tensor product
of modular invariant solutions for the $(l,\bar l)$ and
$(n,\bar n)$ indices separately. This results from the
definition of level-$K$ string functions, $c^l_n$, in terms of the
$SU(2)_K$ characters $\chi_l$ and the Jacobi theta-function,
$\vartheta_{n,K}$:\footnote{The associated relationship between the level-$K$
$SU(2)$ primary fields $\Phi^j$ and the parafermionic $\phi^j_m$ is
$$\Phi^j= \sum_{m=-j}^j \phi^j_m\, :\exp\left\{ {i{m\over \sqrt{K}}
\varphi}\right\}:$$
where $\varphi$ is the $U(1)$ boson field of the $SU(2)$ theory.}
$$\chi_l(\tau)
= \sum^K_{n= -K+1} c^l_n(\tau)\vartheta_{n,K}(\tau)\,
,\eqno\eqnlabel{partfn4}$$ where the theta-function is defined by
$$\vartheta_{n,K}(\tau) = \sum_{p\in Z\!\!\!Z + {n\over 2K}}{\rm e}^{2\pi i K p^2\tau}\,
,\eqno\eqnlabel{thetafn}$$ and $\chi_l$ is the character for the spin-${l\over
2}$ representation of $SU(2)_K$, $$\chi_l(\tau) =
{\vartheta_{l+1,K+2}(\tau)-\vartheta_{-l-1,K+2}(\tau)\over
\vartheta_{1,2}(\tau)-\vartheta_{-1,2}(\tau)}
\, .\eqno\eqnlabel{chifn}$$
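As a consistency check on these definitions, the $S$ transformation of the
theta functions themselves is easily verified numerically. The sketch below
is our own; it assumes the standard Poisson-resummation result for
(\puteqn{thetafn}), whose phase is the complex conjugate of the one appearing
for the string functions in (\puteqn{ctrans-a}).

```python
# Sketch: numeric check that theta_{n,K} of eq. (thetafn) obeys
#   theta_{n,K}(-1/tau)
#     = sqrt(-i tau/(2K)) sum_{n' mod 2K} e^{-i pi n n'/K} theta_{n',K}(tau),
# the U(1) ingredient of the S transformation (ctrans-a).
import numpy as np

def theta(n, K, tau, N=40):
    p = np.arange(-N, N + 1) + n / (2 * K)       # p in Z + n/2K
    return np.sum(np.exp(2j * np.pi * K * p**2 * tau))

K, tau = 4, 0.37 + 0.9j
labels = range(-K + 1, K + 1)                    # n = -K+1, ..., K
for n in labels:
    lhs = theta(n, K, -1 / tau)
    rhs = np.sqrt(-1j * tau / (2 * K)) * sum(
        np.exp(-1j * np.pi * n * m / K) * theta(m, K, tau) for m in labels)
    assert np.isclose(lhs, rhs)
print("S transformation of theta_{n,K} verified numerically at K =", K)
```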
This factorization is seen in the transformation properties of
$c^l_n$ under the modular group generators $S$ and
$T$,
\subequationnumstyle{alphabetic}
$$\eqalignno{
S: c^l_n &\rightarrow {1\over \sqrt{-i\tau K(K+2)}}
\sum_{l'=0}^{K}\sum_{n'= -K+1 \atop l'-n'\in 2Z\!\!\!Z}^K \exp\left\{{i\pi n
n'\over K}\right\} \sin\left\{{\pi (l+1)(l'+1)\over K+2}\right\} c^{l'}_{n'}
{\rm ~~~~~~~~~}
&\eqnlabel{ctrans-a}\cr
T: c^l_n &\rightarrow \exp
\left\{ 2\pi i
\left(
{l(l+2)\over 4(K+2)} - {n^2\over 4K} - {K\over 8(K+2)}\right)
\right\} c^l_n\,\, .
&\eqnlabel{ctrans-b}\cr}$$
\subequationnumstyle{blank}
Thus, (\puteqn{partfn2-b}) is modular invariant if and only if the
$SU(2)$ affine partition function
$$ W(\tau,\overline\tau)=
\sum_{l,\bar l= 0}^K L_{l,\bar l}\chi_l(\tau)\bar\chi_{\bar
l}(\bar\tau)\eqno\eqnlabel{partfn3}$$
and the $U(1)$ partition function
$$ V(\tau,\overline\tau)= {1\over\vert\eta(\tau)\vert^2}
\sum_{n,\bar n= -K+1}^K M_{n,\bar n}\vartheta_{n,K}(\tau)
\bar\vartheta_{\bar n,K}(\overline\tau)
\eqno\eqnlabel{partfn10}$$ are simultaneously modular invariant.
That is, $N_{l,n,\bar l,\bar n}= {1\over 2}L_{l,\bar l}M_{n,\bar n}$
corresponds to a
MIPF (\puteqn{partfn2-a}) if and only if $L_{l,\bar l}$ and $M_{n,\bar n}$
correspond to MIPF's of the forms (\puteqn{partfn3}) and
(\puteqn{partfn10}), respectively.
This factorization is also possible for parafermion tensor product theories,
with matrices $\bmit L$ and $\bmit M$ generalized to tensors. Any
tensor $\bmit M$ corresponding to a MIPF for $p$ factors of $U(1)$
CFT's can be written as a tensor product of $p$ independent matrix $\bmit M$
solutions to (\puteqn{partfn10}) twisted by simple
currents $\cal J$.\mpr{cleaver}
This approach greatly simplifies the derivation of
the fractional superstring partition functions,
while simultaneously suggesting much about the meaning
of the different sectors, the origin of spacetime supersymmetry, and
related ``projection'' terms. We now proceed with the
independent derivations of $\bmit L$ and $\bmit M$ for the PCFT's.
\vskip .5cm
{\hc{4.2.b}{\sl Affine Factor and ``W'' Partition Function}}\vskip .5cm
In the $A_K$--sectors defined by
eqs.~(\puteqn{part4-a}, \puteqn{part8-a}, \puteqn{part16-a})
the terms inside the first (upper) set of brackets
carry ``$n\equiv 2m=0$'' subscripts
and can be shown, as our prior discussion suggested,
to correspond to spacetime bosons;
while the terms inside the second (lower) set carry ``$n\equiv 2m= K/2$''
and correspond to spacetime fermions.
(See eqs.~(4.2.36a-b).)
Expressing the $A_K$--sector in this form
makes a one--to--one correspondence
between bosonic and fermionic states in the
$A_K$--sector manifest. If we remove the subscripts on the string functions
in the bosonic and fermionic subsectors
(which is analogous to replacing $c^l_n$ with
$\chi_l$), we find the subsectors become equivalent.
In fact, under this
operation of removing the ``$n$'' subscripts and replacing
each string function by its corresponding affine character
(a process we denote by $\buildrel {\rm
affine}\over\Longrightarrow$), all sectors become the same up to an
integer coefficient:
\subequationnumstyle{alphabetic}
{\settabs 8 \columns
\+ \cr
\+ $D=6$ &$(K=4)$:\cr} $$A_4,B_4 \hbox to 1cm{\hfill}{\buildrel {\rm
affine}\over\Longrightarrow}\hbox to 1cm{\hfill}
A_4^{\rm aff}\equiv (\chi_0+\chi_K)^3\chi_{K/2}-(\chi_{K/2})^4
\eqno\eqnlabel{affine-a}$$
\hfill\vfill\eject
{\settabs 8 \columns
\+ $D=4$ &$(K=8)$:\cr} $$A_8,B_8,C_8
\hbox to 1cm{\hfill}{\buildrel {\rm affine}\over\Longrightarrow}
\hbox to 1cm{\hfill}
A_8^{\rm aff}\equiv (\chi_0+\chi_K)(\chi_2+\chi_{K-2})
- (\chi_{K/2})^2 \eqno\eqnlabel{affine-b}$$
{\settabs 8 \columns\+
$D=3$
&$(K=16)$:\cr} $$A_{16},C_{16}
\hbox to 1cm{\hfill}{\buildrel {\rm affine}\over\Longrightarrow}
\hbox to 1cm{\hfill}
A_{16}^{\rm aff}\equiv (\chi_2 + \chi_{K-2}) -\chi_{K/2}\,\, .
\eqno\eqnlabel{affine-c}$$
\subequationnumstyle{blank}
\noindent
We see that the $B$-- and $C$--sectors are not arbitrary additions,
necessitated only by modular invariance, but rather are naturally
related to the physically motivated $A$--sectors.
Thus, the affine factor in each
parafermion partition function is:
$$ Z_{{\rm affine}}
(K) = \vert A_K^{\rm aff}\vert^2\,\, ,\eqno\eqnlabel{affine2}$$
where we see that
eqs.~(\puteqn{affine-a}-c) all have the
general form
$$ A_K^{\rm aff} \equiv (\chi_0
+\chi_K)^{D-3}(\chi_2+\chi_{K-2}) -
(\chi_{K/2})^{D-2}\,\, . \eqno\eqnlabel{affall}$$
\noindent
(Note that the modular
invariance of $W$ requires that $A_K^{\rm aff}$ transforms back into
itself under $S$.)
The class of partition functions (\puteqn{affine2}) is indeed modular
invariant and possesses special properties.
This is easiest to show for
$K=16$. The $SU(2)_{16}$ MIPF's for $D=3$ are trivial to classify, since
at this level the A--D--E classification forms a complete basis set of modular
invariants, even for MIPF's containing terms with negative coefficients. The
only free parameters in $K=16$ affine partition functions $Z\left(
SU(2)_{16}\right)$ are integers $a$, $b$, and $c$, where $$Z\left(
SU(2)_{K=16}\right) =
a\times Z({\rm A}_{17})+b\times Z({\rm D}_{10})+c\times Z({\rm E}_7)\,\,
.\eqno\eqnlabel{affine3}$$
Demanding that neither a left- nor a right-moving tachyonic state be in
the Hilbert space of states in the $K=16$
fractional superstring when the intercept $v$, defined by $$L_0\vert
{\rm physical}\rangle = v\vert {\rm physical}\rangle\, ,
\eqno\eqnlabel{intercept}$$
is positive, removes these degrees of freedom and requires $a= -(b+c)=0$,
independent of the possible $U(1)$ partition
functions.
These specific values for $a$, $b$, and $c$ give us (\puteqn{affine2}) for
this level:
$$W\left( K=16 \right) = Z({\rm D}_{10})-Z({\rm E}_{7})
= \vert A^{\rm aff}_{16}\vert^2
\,\, .\eqno\eqnlabel{affine4}$$
Though not quite as straightforward a process,
we can also derive the affine partition functions $W(K)$ for the
remaining levels. The affine factors in the
\hbox {$K=4 {\rm ~and~} 8$}
partition
functions involve twisting by a non-simple current.
(See footnote p. for the definition of non-simple current.)
These cases
correspond to theories that are the difference between a
${\bigotimes\atop {D-2\atop {\rm factors}}} {\rm D}_{{K\over 2} +2}$ tensor
product model
and a ${\bigotimes\atop{D-2\atop {\rm factors}}} {\rm D}_{{K\over 2} +2}$
tensor
product model twisted by the affine
current
$$J_{\rm non-simple}^{K,~{\rm affine}}=(\Phi^{K\over
4})^{D-2}\bar\Phi^1(\bar\Phi^0)^{D-3}\,\, .\eqno\eqnlabel{jkaff}$$
The equivalent parafermionic twist current is obvious,
$$J_{\rm non-simple}^{K,~{\rm parafermion}}=
(\phi^{K\over 4}_0)^{D-2}(\bar\phi^1_0)
(\bar\phi^0_0)^{D-3}\,\, .\eqno\eqnlabel{affine11}$$ (This derivation
applies to
the $K=16$ case
also.)\footnote{We
have left off the spacetime indices on most of the following currents
and fields. We are working in light-cone gauge so only indices for
transverse modes are implied.
The $D-2$ transverse dimensions are assigned
indices in the range 1 to $D-2$ (and are generically represented by
lowercase Greek superscripts.) When spacetime indices
are suppressed,
the fields, and their corresponding characters in the partition
function,
acting along directions 1 to $D-2$
are ordered in equations from left to right, respectively, for both
the holomorphic and antiholomorphic sectors separately.
Often, we will be still more implicit
in our notation and will express $r$ identical factors of $\phi^j_m$ along
consecutive directions (when these directions are either all compactified or
uncompactified) as $(\phi^j_m)^r$. Thus, eq.~\pe{affine11} for $K=8$ means
$$J_{\rm non-simple}^{K=8,~{\rm parafermion}}\equiv
(\phi^{K/4}_0)^{\mu=1}(\phi^{K/4}_0)^{\nu=2}(\bar\phi^1_0)^{\bar\mu=1}
(\bar\phi^0_0)^{\bar\nu=2}\,\, .$$}
\hfill\vfill\eject
{\hc{4.2.c}{\sl Theta-Function Factor and the ``$V$'' Partition
Function}}\vskip .5cm
We now consider the theta-function factors, $\bmit M$,
carrying the $(n,\bar n)$-indices in the fractional superstring partition
functions.
Since all
$A_K$--, $B_K$--, $C_K$--sectors in the level-$K$ fractional superstring
partition
function (and even the boson and fermion subsectors separately in $A_K$)
contain the
same affine factor, it is clearly the choice of the theta-function
factor which determines the spacetime supersymmetry of the
fractional superstring theories. That is, spacetime spins of particles in the
Hilbert space of states depend upon the
${\bmit M}'s$ that are allowed in tensored versions of
eq.~(\puteqn{partfn10}). In the case of matrix $\bmit M$, rather than a more
complicated tensor, invariance of (\puteqn{partfn10})
under $S$ requires that the components $M_{n\bar n}$
be related by
\subequationnumstyle{alphabetic}
$$ M_{n',\bar n'} = {1\over 2K}\sum_{n,\bar n= -K+1}^{K}
M_{n,\bar n} {\rm e}^{i\pi n
n'/K}{\rm e}^{i\pi \bar n \bar n'/K}\,\, , \eqno\eqnlabel{m-a}$$
and $T$ invariance
demands that $$ {n^2 - \bar n^2\over 4K}\in Z\!\!\!Z\,\, ,
{\rm ~~if~} M_{n,\bar n}\neq 0\,\,. \eqno\eqnlabel{m-b}$$
\subequationnumstyle{blank}
At every level-$K$ there is a unique modular
invariant function corresponding to
each factorization\mpr{gepner87},
$\alpha\times\beta=K$, where $\alpha,\,\, \beta\in Z\!\!\!Z$.
Denoting the matrix elements of ${\bmit M}^{\alpha,\beta}$ by
$M^{\alpha,\beta}_{n,\bar n}$,
they are given by\footnote{By eq.~(2.23), $M^{\alpha,\beta}_{n,\bar n}
= M^{\beta,\alpha}_{n,-\bar n}$.
Hence, ${\bmit M}^{\alpha,\beta}$ and ${\bmit M}^{\beta,\alpha}$
result in equivalent fractional superstring partition functions.
To avoid this redundancy, we choose $\alpha\leq\beta$.
Throughout this subsection
we will view the $n$ as representing, simultaneously,
the holomorphic $\vartheta_{n,K}$ characters for $U(1)$ theories
and, in some sense,
the holomorphic string functions, $c^0_{n}$, for parafermions.
($\bar n$ represents the antiholomorphic parallels.)
However, we do not intend to imply that the string functions
can actually be factored into $c^l_0\times c^0_n=c^l_n$. Rather,
we mean to use this in eqs.~(2.31b, 2.33b, 2.35b) only as an
artificial construct for
developing a deeper understanding of the
function of the parafermion primary fields (Verma modules) $\phi^0_m$ in
these models. In the case of the primary fields, $\phi^j_m$,
factorization is, indeed, valid:
$\phi^j_0\otimes\phi^0_m=\phi^j_m$ (for integer $j,\, m$).}
$$M^{\alpha,\beta}_{n,\bar n} = {1\over 2}
\sum_{x\in Z\!\!\!Z_{2\beta}\atop y\in Z\!\!\!Z_{2\alpha}}
\delta_{n,\alpha x +\beta y}\delta_{\bar n,\alpha x -\beta y}\,\, .
\eqno\eqnlabel{m-c}$$
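These matrices can be tested mechanically. The sketch below (our own check;
we implement the $S$ condition with the antiholomorphic phase
complex-conjugated relative to the printed form of (\puteqn{m-a}), the
convention in which the ${\bmit M}^{\alpha,\beta}$ come out invariant)
confirms that every matrix used in this section satisfies both the $S$
condition and the $T$ condition (\puteqn{m-b}).

```python
# Sketch: build M^{alpha,beta} from eq. (m-c) and verify (i) S invariance,
# U M U* = M with U_{n n'} = e^{-i pi n n'/K} / sqrt(2K), and (ii) the T
# condition (m-b), (n^2 - nbar^2)/4K integral on the support of M.
import numpy as np

def M_ab(alpha, beta):
    K = alpha * beta
    M = np.zeros((2 * K, 2 * K))
    for x in range(2 * beta):
        for y in range(2 * alpha):
            M[(alpha * x + beta * y) % (2 * K),
              (alpha * x - beta * y) % (2 * K)] += 0.5
    return M

for alpha, beta in [(1, 4), (2, 2), (1, 8), (2, 4), (1, 16), (2, 8), (4, 4)]:
    K = alpha * beta
    M = M_ab(alpha, beta)
    n = np.arange(2 * K)
    U = np.exp(-1j * np.pi * np.outer(n, n) / K) / np.sqrt(2 * K)
    assert np.allclose(U @ M @ U.conj(), M)              # S condition
    assert all((a * a - b * b) % (4 * K) == 0
               for a, b in zip(*np.nonzero(M)))          # eq. (m-b)
print("every M^{alpha,beta} used here is S and T invariant")
```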
Thus,
for $K=4$ the two distinct choices for the matrix ${\bmit M}^{\alpha,\beta}$
are ${\bmit M}^{1,4}$ and ${\bmit M}^{2,2}$; for
$K=8$, we have ${\bmit M}^{1,8}$ and ${\bmit M}^{2,4}$; and
for $K=16$, the three alternatives are ${\bmit M}^{1,16}$,
${\bmit M}^{2,8}$, and ${\bmit M}^{4,4}$.
${\bmit M}^{1,K}$ represents the level-$K$ diagonal, {{\it i.e.}} $n=\bar n$,
partition function.
${\bmit M}^{\alpha, \beta={K\over\alpha}}$
corresponds to the
diagonal partition function twisted by a $Z\!\!\!Z_{\alpha}$ symmetry.
(Twisting by $Z\!\!\!Z_{\alpha}$ and $Z\!\!\!Z_{K/\alpha}$ produces isomorphic
models.)
Simple
tensor products of these ${\bmit M}^{\alpha,\beta}$ matrices are insufficient
for
producing fractional superstrings with spacetime SUSY (and, thus, without
tachyons).
We have found that twisting by a special simple $U(1)$ current is required to
achieve this.
Of the potential choices for the $U(1)$ MIPF's,
$V({\rm level~} K)$, the
following are the only ones that produce numerically zero
fractional superstring partition functions:
{\settabs 8 \columns
\+\cr
\+ $D=6$ & $(K=4)$:\cr
\+ \cr}
The ${\bmit M}= {\bmit M}^{2,2}\otimes{\bmit M}^{2,2}
\otimes{\bmit M}^{2,2}\otimes{\bmit M}^{2,2}$ model twisted by the
simple $U(1)$
current\footnote{Recall that the parafermion primary fields $\phi^0_m$
have simple fusion rules,
$$\phi^0_m\otimes\phi^0_{m'}=\phi^0_{m+m' \pmod{K}}$$
and form a {\tenBbb Z}$_K$ closed subalgebra. This fusion rule, likewise,
holds for
the $U(1)$ fields $:\exp\{i{m\over K}\varphi\}:$. This isomorphism
makes it clear that any simple $U(1)$ current,
${\cal J}_K$, in this subsection
that contains only integer $m$ can be
expressed equivalently either in terms of these parafermion fields
or in terms of $U(1)$ fields.
(We specify integer $m$ since $\phi^0_{m}=0$ for half--integer $m$.)
In view of the following discussion, we define all of the
simple twist currents, ${\cal J}_K$, as composed of the former.
(Please note, to distinguish between simple $U(1)$ currents
and affine currents, the U(1) currents
appear in calligraphy style, as above.)}
$${\cal J}_4\equiv \phi_{K/4}^0\phi_{K/4}^0\phi_{K/4}^0\phi_{K/4}^0
\bar\phi_0^0\bar\phi_0^0\bar\phi_0^0\bar\phi_0^0 \eqno\eqnlabel{j4}$$
results in the following $U(1)$ partition functions:
\subequationnumstyle{alphabetic}
$$\eqalignno{V\left(K=4\right) &= \hbox to
.25cm{\hfill}[(\vartheta_{0,4}+\vartheta_{4,4})^4(\bar\vartheta_{0,4}
+ \bar\vartheta_{4,4})^4 +
(\vartheta_{2,4} +\vartheta_{-2,4})^4(\bar\vartheta_{2,4}+ \bar\vartheta_{-2,4})^4\cr
&\hbox to 1em{\hfill}
+ (\vartheta_{0,4} +\vartheta_{4,4})^2(\vartheta_{2,4}+\vartheta_{-2,4})^2(\bar\vartheta_{
0,4} +\bar\vartheta_{4,4})^2(\bar\vartheta_{2,4} + \bar\vartheta_{-2,4})^2\cr
&\hbox to 1em{\hfill}
+ (\vartheta_{2,4}+\vartheta_{-2,4})^2(\vartheta_{0,4}+\vartheta_{4,4})^2(\bar\vartheta_{2,4} +\bar\vartheta_{-2,4})^2
(\bar\vartheta_{0,4} + \bar\vartheta_{4,4})^2]_{\rm untwisted}\cr
&&\eqnlabel{nn4-a}\cr
&\hbox to 1em{\hfill}
+ [(\vartheta_{2,4}+\vartheta_{-2,4})^4(\bar\vartheta_{0,4}+\bar\vartheta_{4,4})^4 + (\vartheta_{4,4}+\vartheta_{0,4})^4
(\bar\vartheta_{2,4}+\bar\vartheta_{-2,4})^4\cr
&\hbox to 1em{\hfill}
+ \hbox to
.15cm{\hfill}(\vartheta_{2,4}+\vartheta_{-2,4})^2(\vartheta_{4,4}+\vartheta_{0,4})^2(\bar\vartheta_{0,4}+\bar\vartheta_{4,4})^2
(\bar\vartheta_{2,4}+\bar\vartheta_{-2,4})^2\cr
&\hbox to 1em{\hfill}
+ \hbox to .15cm{\hfill}(\vartheta_{4,4}+\vartheta_{0,4})^2(\vartheta_{2,4}+\vartheta_{-2,4})^2
(\bar\vartheta_{2,4} + \bar\vartheta_{-2,4})^2(\bar\vartheta_{0,4} +\bar\vartheta_{4,4})^2]_{\rm twisted}\,\, .}$$
Writing this in parafermionic form, and then
using string function identities, followed by
regrouping according to $A_4$ and $B_4$ components,
results in
$$Z({\rm theta~ factor,~} K=4)=
\vert (c^0_0)^4 + (c^0_2)^4\vert^2_{_{(A_4)}}
+ \vert (c^0_0)^2(c^0_2)^2 + (c^0_2)^2(c^0_0)^2
\vert^2_{_{(B_4)}}\,\, .
\eqno\eqnlabel{nn4-b}$$
\subequationnumstyle{blank}
{\settabs 8\columns
\+ \cr
\+ $D=4$ & $(K=8)$:\cr
\+\cr}
The ${\bmit M}= {\bmit M}^{2,4}\otimes{\bmit M}^{2,4}$ model twisted by the
simple $U(1)$ current $${\cal J}_8\equiv
\phi_{K/4}^0\phi_{K/4}^0\bar\phi_0^0\bar\phi_0^0\eqno\eqnlabel{j8}$$
results in
\subequationnumstyle{alphabetic}
$$\eqalignno{V\left(K=8\right) &= \hbox to
.35cm{\hfill}[(\vartheta_{0,8}+\vartheta_{8,8})(\bar\vartheta_{0,8}
+ \bar\vartheta_{8,8}) +
(\vartheta_{4,8}+\vartheta_{-4,8})(\bar\vartheta_{4,8}+\bar\vartheta_{-4,8})]^2_{\rm untwisted}\cr
&\hbox to 1em{\hfill}
+ [(\vartheta_{2,8}+\vartheta_{-6,8})(\bar\vartheta_{2,8}+\bar\vartheta_{-6,8})
+(\vartheta_{-2,8}+\vartheta_{6,8})(\bar\vartheta_{-2,8}+\bar\vartheta_{6,8})]^2_{\rm untwisted}\cr
&&\eqnlabel{nn8-a}\cr
&\hbox to 1em{\hfill}
+ [(\vartheta_{4,8}+\vartheta_{-4,8})(\bar\vartheta_{0,8} + \bar\vartheta_{8,8})
+ (\vartheta_{0,8}+\vartheta_{8,8})(\bar\vartheta_{4,8} + \bar\vartheta_{-4,8})]^2_{\rm twisted}\cr
&\hbox to 1em{\hfill} +
[(\vartheta_{6,8}+\vartheta_{-2,8})(\bar\vartheta_{2,8}+\bar\vartheta_{-6,8})
+(\vartheta_{2,8}+\vartheta_{-6,8})(\bar\vartheta_{-2,8} +\bar\vartheta_{6,8})]^2_{\rm twisted}\,\, .}$$
Hence,
$$ Z({\rm theta~factor,~} K=8)= \vert (c^0_0)^2 +
(c^0_4)^2\vert^2_{_{(A_8)}} + \vert (c^0_0)(c^0_4)
+(c^0_4)(c^0_0)\vert^2_{_{(B_8)}} + 4\vert
(c^0_2)^2\vert^2_{_{(C_8)}}\,\, . \eqno\eqnlabel{nn8-b}$$
\subequationnumstyle{blank}
\hfill\vfill\eject
{\settabs 8\columns
\+ \cr
\+ $D=3$ & $(K=16)$:\cr
\+\cr}
The ${\bmit M}= {\bmit M}^{4,4}$ model twisted by the simple $U(1)$ current
$${\cal J}_{16}\equiv \phi_{K/4}^0\bar\phi_0^0\eqno\eqnlabel{j16}$$
produces,
\subequationnumstyle{alphabetic}
$$\eqalignno{ V\left(K=16\right) &= \hbox to .38cm{\hfill}\vert (\vartheta_{0,16} +
\vartheta_{16,16}) +
(\vartheta_{8,16} +\vartheta_{-8,16})\vert^2_{\rm untwisted}\cr
&\cr
&\hbox to 1em{\hfill}
+ \vert (\vartheta_{4,16} + \vartheta_{-4,16}) + (\vartheta_{12,16} +\vartheta_{-12,16})
\vert^2_{\rm untwisted}\,\, .
&\eqnlabel{nn16-a}}$$
Thus,
$$Z({\rm theta~factor,~} K=16)=\vert c^0_0 + c^0_8\vert^2_{_{(A_{16})}} +
4\vert c^0_4\vert^2_{_{(C_{16})}}\,\, .
\eqno\eqnlabel{nn16-b}$$
\subequationnumstyle{blank}
(In this case the twisting is trivial since ${\cal J}_{16}$ is in the initial
untwisted model.)
The partition function for the standard $D=10$ superstring can also be
factored into affine and theta-function parts:
{\settabs 8\columns
\+ \cr
\+ $D=10$ & $(K=2)$:\cr
\subequationnumstyle{alphabetic}
$$ A_2 {\buildrel {\rm affine}\over\Longrightarrow}
\sum_{i {\rm ~odd~}= 1}^7 {8\choose i}(\chi_0)^{i}(\chi_K)^{8-i} -
(\chi_{K/2})^8\,\, . \eqno\eqnlabel{affine10-a}$$
The accompanying $U(1)$ factor is
$$\eqalignno{
Z\left({\rm theta~factor},\, K=2\right)&=
\phantom{35\times}\vert (\vartheta_{0,2})^8 +
(\vartheta_{1,2})^8 + (\vartheta_{-1,2})^8 + (\vartheta_{2,2})^8\vert^2\cr
&\phantom{= } + 35\vert(\vartheta_{0,2} + \vartheta_{2,2})^4
(\vartheta_{1,2} + \vartheta_{-1,2})^4\vert^2\cr
&\phantom{= } + 35[(\vartheta_{1,2}+ \vartheta_{-1,2})^4
(\vartheta_{0,2}+\vartheta_{2,2})^4]
[(\bar\vartheta_{0,2}+\bar\vartheta_{2,2})^4
(\bar\vartheta_{1,2}+\bar\vartheta_{-1,2})^4]\cr
& &\eqnlabel{affine10-b}}$$
\subequationnumstyle{blank}
which\footnote{Note that the effective
$Z\left((n,\bar n),K=2\right)$ contributing to eq.~(\puteqn{part2})
reduces to just the first mod-squared term in eq.~\pe{affine10-b} since
$c^l_n\equiv 0$ for $l-n\neq 0 \pmod{2}$.}
originates from the
$${\bmit M}= {\bmit M}^{2,1}\otimes{\bmit M}^{2,1}\otimes{\bmit
M}^{2,1}\otimes{\bmit M}^{2,1}\otimes{\bmit M}^{2,1}\otimes{\bmit M}^{2,1}
\otimes{\bmit M}^{2,1}\otimes{\bmit M}^{2,1}$$
model twisted by the (simple) current
$${\cal J}^{\rm theta}_2\equiv ( :\exp\{i\varphi/2 \}:)^8\,\, .
\eqno\eqnlabel{j2}$$
The difference between the factorization for $K=2$
and those for $K>2$ is that
here we cannot define an actual parafermion twist current $(\phi_{K/4}^0)^8$
since $\phi^0_{K/4}=0$ for $K=2$.
All of the above simple $U(1)$ twist currents are of the general form
$${\cal J}_K =
(\phi^0_{K/4})^{D-2}(\bar\phi^0_0)^{D-2}\,\, {\rm for~}K>2\,\, .
\eqno\eqnlabel{gensc}$$
We believe this specific class of twist currents
is the key to spacetime
supersymmetry in the parafermion models.\footnote{$\bar {\cal J}_K=
(\phi^0_0)^{D-2}(\bar\phi^0_{K/4})^{D-2}$ is automatically generated as a
twisted state.} Without its twisting effect,
numerically zero fractional superstring MIPF's in three, four, and six
dimensions cannot be formed and, thus, spacetime SUSY would be impossible.
This twisting also reveals much about the necessity of
non-$A_K$--sectors. Terms from the twisted and untwisted
sectors of these models become equally mixed in the $\vert A_K\vert^2$,
$\vert B_K\vert^2$, and $\vert C_K\vert^2$ contributions to the level $K$
partition function.
Further, this twisting keeps the string functions with $n\not\equiv 0,\, K/2
\pmod{K}$ from mixing with those possessing $n\equiv 0,K/2 \pmod{K}$. This is
especially significant since we believe the former string functions
in the $C_K$--sector
likely
correspond to spacetime fields of fractional spin-statistics ({\it i.e.,}
anyons)
and the latter in both $A_K$ and $B_K$ to spacetime bosons and
fermions. If mixing were allowed, normal spacetime SUSY would be
broken and replaced by a fractional supersymmetry, most likely ruining
Lorentz invariance for $D>3$.
Since in the antiholomorphic sector ${\cal J}_K$ acts as the identity, we
will focus on its effect in the holomorphic sector.
In the $A_K$--sector the operator
$(\phi_{K/4}^0)^{D-2}$ transforms the bosonic (fermionic)
nonprojection
fields into the
fermionic (bosonic) projection fields and
vice-versa.\footnote{We use the same language as the authors of
refs.~\pr{argyres91e}. Nonprojection refers to the
bosonic and fermionic fields in the $A^{\rm boson}_K$ and $A^{\rm fermion}_K$
subsectors, respectively, corresponding to string functions with positive
coefficients, whereas projection fields refer to those
corresponding to string functions with negative signs. With this definition
comes an overall minus sign coefficient on $A_K^{\rm fermion}$, as shown in
eq.~(4.2.38a). For example,
in (4.2.38b), the bosonic non-projection fields are
\hbox {$(\phi^0_0 + \phi^2_0)^3(\phi^1_0)$}
and the bosonic projection is
$(\phi^1_0)^4$. Similarly, in (4.2.38c)
the fermionic non-projection
field is $(\phi^1_1)^4$ and the projections are $(\phi^0_1 +\phi^2_1)^3
(\phi^1_1)$.}
For example, consider
the effect of this twist current on the fields represented in
\subequationnumstyle{alphabetic}
$$ A_4= A^{\rm boson}_4 - A^{\rm fermion}_4\,\, ,
\eqno\eqnlabel{abosferm-a}$$
where
$$\eqalignno{A^{\rm boson}_4 &\equiv 4\left\{ (c^0_0 + c^4_0)^3(c^2_0) -
(c^2_0)^4\right\}
&\eqnlabel{abosferm-b}\cr
A^{\rm fermion}_4 &\equiv 4\left\{
(c^2_2)^4 - (c^0_2 + c^4_2)^3(c^2_2)\right\}
\,\, .
&\eqnlabel{abosferm-c}\cr}$$
\subequationnumstyle{blank}
Twisting by $(\phi^0_{K/4})^{D-2}$ transforms the related fields according to
\subequationnumstyle{alphabetic}
$$\eqalignno{ (\phi^0_0 + \phi^2_0)^3(\phi^1_0) &\hbox to 1cm{\hfill}
{\buildrel {(\phi^0_{K/4})^{D-2}}\over\Longleftrightarrow}\hbox to 1cm{\hfill}
(\phi^2_1 + \phi^0_1)^3 (\phi^1_1)
&\eqnlabel{phitwist-a}\cr (\phi^1_0)^4 &\hbox to 1cm{\hfill}
{\buildrel {(\phi^0_{K/4})^{D-2}}\over\Longleftrightarrow}\hbox to 1cm{\hfill}
(\phi^1_1)^4\,\, .
&\eqnlabel{phitwist-b}\cr}$$
\subequationnumstyle{blank}
Although the full meaning of the projection fields is not yet understood,
the authors of refs.~\pr{argyres91b} and \pr{argyres91e} argue that the
corresponding string functions
should be interpreted as ``internal'' projections, {\it i.e.},
cancellations of degrees of freedom in the fractional superstring models.
Relatedly, the authors show that when the $A_K$--sector is
written as $A^{\rm boson}_K - A^{\rm fermion}_K$, as defined above,
the $q$-expansions of
both $A^{\rm boson}_K$ and $A^{\rm fermion}_K$ are all positive.
Including the
fermionic projection terms results in the identity
\subequationnumstyle{alphabetic}
$$\eta^{D-2} A_K^{\rm fermion} = (D-2)\left( {(\vartheta_2)^4\over
16\eta^4}\right)^{{D-2\over 8}}\,\, .
\eqno\eqnlabel{af-a}$$
Eq.~(\puteqn{af-a}) is the standard theta-function expression for $D-2$
worldsheet Ramond Majorana-Weyl fermions. Further,
$$\eta^{D-2} A_K^{\rm boson} = (D-2)\left(
{(\vartheta_3)^4 - (\vartheta_4)^4\over 16\eta^4}\right)^{{D-2\over 8}}\,\, .
\eqno\eqnlabel{af-b}$$
\subequationnumstyle{blank}
Now consider the $B_K$--sectors. For $K=4$ and $8$, the operator
$(\phi^0_{K/4})^{D-2}$ transforms the primary fields corresponding to the
partition function terms in the first set of brackets on the RHS of
eqs.~(\puteqn{part4-b}, \puteqn{part8-b})
into the fields represented by the partition function terms in the
second set. For example, in the $K=4$ ($D=6$) case
\hfill\vfill\eject
\subequationnumstyle{alphabetic}
$$\eqalignno{(\phi^0_0 + \phi^2_0)(\phi^1_0)(\phi^0_1 + \phi^2_1)^2
&\hbox to 1cm{\hfill}
{\buildrel {(\phi^0_{K/4})^{D-2}}\over\Longleftrightarrow}\hbox to 1cm{\hfill}
(\phi^0_1 + \phi^2_1)
(\phi^1_1)(\phi^2_0+\phi^0_0)^2 \hbox to .5cm{\hfill}& \eqnlabel{phib-a}\cr
(\phi^1_0)^2(\phi^1_1)^2 &\hbox to 1cm{\hfill}
{\buildrel {(\phi^0_{K/4})^{D-2}}\over\Longleftrightarrow}
\hbox to 1cm{\hfill} (\phi^1_1)^2(\phi^1_0)^2\,\, .&
\eqnlabel{phib-b}\cr}$$
\subequationnumstyle{blank}
Making an analogy with what occurs in the $A_K$--sector, we suggest that
$(\phi^0_{K/4})^{D-2}$ transforms bosonic (fermionic)
nonprojection fields into fermionic (bosonic) projection fields and
vice-versa in the $B_K$--sector also.
Thus, use of the twist current ${\cal J}_K$ allows for bosonic and fermionic
interpretation of these
fields\footnote{Similar conclusions have been reached by K. Dienes and P.
Argyres for different reasons. They have, in fact, found theta-function
expressions for the $B_K^{\rm boson}$-- and $B_K^{\rm fermion}$--
subsectors.\mpr{dienes92a}}:
\subequationnumstyle{alphabetic}
$$B_4= B^{\rm boson}_4 - B^{\rm fermion}_4\,\, ,\eqno\eqnlabel{bbf-a}$$
where
$$\eqalignno{B^{\rm boson}_4 &\equiv
4\left\{(c^0_0 + c^4_0)(c^2_0)(c^0_2+c^4_2)^2 -
(c^2_0)^2(c^2_2)^2\right\}&\eqnlabel{bbf-b}\cr
B^{\rm fermion}_4 &\equiv
4\left\{(c^2_2)^2(c^2_0)^2 - (c^0_2+c^4_2)(c^2_2)(c^0_0
+c^4_0)^2\right\}\,\, .&\eqnlabel{bbf-c}\cr}$$
\subequationnumstyle{blank}
What appears as the projection term, $(c^2_0)^2(c^2_2)^2$, for the
proposed bosonic part acts as the nonprojection term for the fermionic half,
when the subscripts are reversed. One interpretation is that this implies a
compactification of two transverse dimensions.\footnote{This was also
suggested in ref.~\pr{argyres91b} working from a different approach.} The
spin-statistics of the physical states of the $D=6$
model as observed in four-dimensional uncompactified spacetime would
be determined
by the (matching) $n$ subscripts of the first two string
functions\footnote{Using the subscripts $n'$ of the last two string functions
to define spin-statistics in $D=4$ uncompactified spacetime
corresponds to interchanging the definitions of $B^{\rm boson}_4$ and
$B^{\rm fermion}_4$.}
(corresponding to the two uncompactified transverse dimensions) in each term
of four string functions, $c^{l_1}_n c^{l_2}_n c^{l_3}_{n'} c^{l_4}_{n'}$.
The $B_8$ terms can
be interpreted similarly when one dimension is compactified.
However, the
$C_K$--sectors are harder to interpret. Under
$(\phi^0_{K/4})^{D-2}$ twisting, string functions with $K/4$ subscripts
are invariant, transforming back into themselves. Thus, following
the pattern of $A_K$ and $B_K$ we would end up writing, for example,
$C_{16}$ as
\subequationnumstyle{alphabetic}
$$C_{16}= C_{16}^a - C_{16}^b \eqno\eqnlabel{cspin-a}$$ where,
$$\eqalignno{
C^a_{16} &\equiv (c^2_4 + c^{14}_4) - c^8_4
& \eqnlabel{cspin-b}\cr
C^b_{16} &\equiv c^8_4 -
(c^2_4 + c^{14}_4)\,\, . & \eqnlabel{cspin-c}\cr}$$
\subequationnumstyle{blank}
The transformations of the corresponding primary fields are not quite as
trivial, though.
$(\phi^1_2 + \phi^7_2)$ is transformed into its conjugate field
$(\phi^7_{-2} + \phi^1_{-2})$ and likewise $\phi^4_2$ into
$\phi^4_{-2}$,
suggesting that $C^a_{16}$ and $C^b_{16}$ are the partition functions
for conjugate fields. Remember, however, that $C_{16}=0$. Even though we may
interpret this sector as containing two conjugate spacetime fields, this
(trivially) means that the partition function for each is identically zero.
We refer to
this effect in the $C_K$--sector as ``self-cancellation.'' One
interpretation is that there are no states in the $C_K$--sector of the Hilbert
space that survive all of the internal projections. If this is correct,
a question may arise as to the consistency of the $K=8$ and
$16$ theories. Alternatively, perhaps anyon statistics allow two
(interacting?) fields of either identical fractional spacetime spins
$s_1=s_2={2m\over K}$, or spacetime spins related by $s_1={2m\over K}= 1- s_2$,
where in both cases $0<m<{K\over 2} \pmod{1}$,
to somehow cancel each other's contribution to the partition function.
Using the $\phi^j_m\equiv\phi^j_{m+K}\equiv\phi^{{K\over
2}-j}_{m-{K\over 2}}$ equivalences at level $K\in 4Z\!\!\!Z$, a PCFT has $K/2$
distinct classes of integer $m$ values. If one associates these classes with
distinct spacetime spins (statistics) and assumes $m$ and $-m$ are also
in the same classes since
$(\phi^0_m)^{\dagger}= \phi^0_{-m}$, then the
number of spacetime spin classes reduces to ${K\over 4} +1$. Since $m=0$
$(m= {K\over 4})$ is
associated with spacetime bosons (fermions),
we suggest that general $m$ correspond to particles of
spacetime spin ${2\vert m \vert \over K}$,
${2m\over K} + Z\!\!\!Z^+$, or $Z\!\!\!Z^+ - {2m\over K}$.
If this is so, most likely
${\rm spin}(m)\in \{ {2m\over K},
Z\!\!\!Z^+ + {2m\over K} \}$ for $0< m< K/4 \pmod{K/2}$
and ${\rm spin}(m)\in Z\!\!\!Z^+ - {2\vert m\vert \over K}$
for $-K/4< m<0 \pmod{K/2}$. This is one of the few
spin assignment rules that maintains the equivalences of the fields $\phi^j_m$
under
$(j,\, m)\rightarrow({K\over 2}-j,\, m-{K\over 2})\rightarrow(j,\, m+K)$
transformations. According
to this rule, the fields in the $C_K$--sectors have quarter spins
(statistics),
which agrees with prior claims.\mpr{dienes92b,argyres91b,argyres91d}
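As a quick check of this counting (our own illustration, not part of the original argument), one can enumerate the negation orbits of the integer $m$ values modulo $K/2$ directly:

```python
# Illustrative check (ours): count the spacetime-spin classes of the Z_K
# charge m.  m is defined modulo K/2 (from phi^j_m = phi^{K/2-j}_{m-K/2}),
# and m ~ -m since (phi^0_m)^dagger = phi^0_{-m}.
def spin_classes(K):
    assert K % 4 == 0
    orbits = {frozenset({m, (-m) % (K // 2)}) for m in range(K // 2)}
    return len(orbits)

for K in (4, 8, 16):
    assert spin_classes(K) == K // 4 + 1
    print(K, spin_classes(K))    # 4 -> 2, 8 -> 3, 16 -> 5
```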
Also, we do not believe
products of primary fields in different $m$ classes in the $B_K$--sectors
correspond to definite spacetime spin states unless some dimensions are
compactified. Otherwise by our interpretation of $m$ values above, Lorentz
invariance in uncompactified spacetime would be lost.
In particular, Lorentz invariance requires that either all or none of the
transverse modes in uncompactified spacetime be fermionic spinors.
Further, $B$--sector particles apparently cannot
correspond to fractional spacetime spin particles for a
consistent theory. Thus, the $D=6\, (4)$ model must have two (one) of its
dimensions compactified.\footnote{This implies the $D=6,\, 4$ partition
functions are incomplete. Momentum (winding) factors
for the two compactified dimensions would have to be added (with modular
invariance maintained).}
The $B_8$--sector of the $D=4$ model
appears necessary for more reasons than just modular invariance of the
theory.
By the above spacetime
spin assignments, this model suggests massive
spin-quarter states (anyons) in the $C_K$--sectors,
which presumably cannot exist in $D>3$ uncompactified dimensions.
However, the $B_K$--sector, by forcing compactification to three dimensions
where anyons are allowed, would save the model, making it self-consistent.
Of course, anyons in the $K=16$ theory with $D_{\rm crit}=3$ are physically
acceptable. (Indeed, no $B_K$--sector
is needed and none exists, which would otherwise reduce the theory to zero
transverse dimensions.) Thus,
the $K=8$ and $K=16$ models are probably both allowed solutions with three
uncompactified spacetime dimensions.
If this interpretation is correct then it is
the $B_K$--sector for $K=8$ which makes that theory self-consistent.
An alternative, less restrictive, assignment of spacetime spin
is possible. Another view is that the $m$ quantum number is not
fundamental for determining spacetime spin. Instead, the
transformation of states under $\phi^{j}_{K/4}$ can be considered to
be what divides the set of states into spacetime bosonic and fermionic
classes. With this interpretation, compactification in the $B_K$--sector
is no more necessary than in the $A_K$--sector. Unfortunately, it is not
{\it a priori} obvious, in this approach, which group of states is bosonic,
and which fermionic. In the $A_K$--sector, the assignment can also
be made phenomenologically.
In the $B_K$--sector, we have no such guide.
Of course, using the $m$ quantum number to determine spacetime spin does
not truly tell us which states have bosonic or fermionic statistics,
since the result depends on the arbitrary choice of which of the
two (one) transverse dimensions to compactify.
A final note of caution involves multiloop modular invariance.
One-loop modular invariance amounts
to invariance under $S$ and $T$ transformations. However modular
invariance at higher orders requires an additional
invariance under $U$ transformations: Dehn twists mixing cycles of
neighboring tori of $g>1$ Riemann
surfaces.\mpr{kawai87a,antoniadis87,antoniadis88}
We believe neither our new method of generating the one-loop
partition functions, nor the original method of Argyres {\it et al.}, firmly proves the
multi-loop modular invariance that is required for a truly consistent
theory.
\vskip .5cm
{\hb{4.3}{\bfs Beyond the Partition Function: Additional Comments}}\vskip .5cm
\sectionnum=3\equationnum=0
In the last section, we introduced a new derivation of the fractional
superstring partition functions. However,
that discussion
did not fully demonstrate the consistency of the
fractional superstrings. Further comparisons
to the $K=2$ superstring are of assistance here.
In this section, we comment on such related aspects of
potential string theories.
We consider the analog of the GSO
projection and the uniqueness of the ``twist'' field $\phi^{K/2}_{K/2}$ for
producing spacetime fermions.
First, however, we investigate bosonized representations of the fractional
superstrings and what better understanding of the models this approach
might reveal.
\vskip .5cm
\sectionnumstyle{blank}
{\hc{4.3.a}{\sl Bosonization of the $K=4$ Theory.}}
\sectionnumstyle{arabic}\sectionnum=3\equationnum=0
\vskip .5cm
Several papers\markup{[\putref{li88}]}
have examined the issue of bosonization of $Z\!\!\!Z_K$
parafermion CFTs. Since $0\leq c(K)\leq 2$
for these theories,
generically a $Z\!\!\!Z_K$ model can be bosonized using two distinct free bosonic
fields, with one carrying a background charge. The chiral
energy-momentum tensor
for a free bosonic field $X$ with background charge $\alpha_0$ is
$$ T(z)= {1\over 2}[\partial_z X(z)]^2 - {\alpha_0\over 2}\partial^2_z
X(z)\,\, ,
\eqno\eqnlabel{emtensor}$$
which results in
$$ c(X) = 1 - 3(\alpha_0)^2\,.\eqno\eqnlabel{central}$$
For $2<K<\infty$,\footnote{The only primary field for $K=1$ is the vacuum
and, as discussed previously, the $K=2$ theory is the
$c={1\over 2}$ critical Ising (free fermion) model.}
only two $Z\!\!\!Z_K$ theories (those at $K= 3,\, 4$) do not require two free
real bosonic fields in the bosonized version, and only for $K= 4$ is a
background charge unnecessary, since $c(K=4)=1$. The bosonization process
for the
$Z\!\!\!Z_4$ parafermion CFT is straightforward since $c=1$ CFTs have only
three classes
of solutions,
corresponding to a boson propagating on (1) a torus of radius
$R$,
(2) a
$Z\!\!\!Z_2$ orbifold of radius $R$, or (3) discrete orbifold spaces defined on
$SU(2)/\Gamma_i$, where $\Gamma_i$ are discrete subgroups of
$SU(2)$.\mpr{kiritsis88}
The $Z\!\!\!Z_4$ parafermion CFT is identical\mpr{ginsparg88}
to the $Z\!\!\!Z_2$ orbifold at radius
$R=\sqrt{6}/2$ (and $R=1/\sqrt{6}$ by duality).
The $Z\!\!\!Z_4$ primary fields with their conformal dimensions and corresponding
partition functions for the Verma modules are listed in Table 4.1.
\vskip .4cm
\centertext{Table 4.1 $Z\!\!\!Z_4$ Primary Fields}
$$\vbox{\settabs 3\columns
\+ {\underbar{\hbox to 3.7cm{\hfill Primary Fields\hfill}}}
& {\underbar{Conformal Dimension h}}
& {\underbar{Partition Fn.}}\cr
\+ \hbox to 3.7cm{\hfill $\phi^0_0\equiv\phi_0$\hfill}
&\hbox to 4.3cm{\hfill 0\hfill}
&\hbox to .9cm{\hfill}$\eta c^0_0$ \cr
\+ \hbox to 3.7cm{\hfill $\phi^2_{-1}=\phi^0_1\equiv\phi_1
= \phi^{ \dagger }_3$\hfill}
&\hbox to 4.3cm{\hfill $3\over 4$\hfill}
&\hbox to .9cm{\hfill}$\eta c^4_2$\cr
\+ \hbox to 3.7cm{\hfill $\phi^2_0=\phi^0_2\equiv\phi_2$\hfill}
&\hbox to 4.3cm{\hfill 1\hfill}
&\hbox to .9cm{\hfill}$\eta c^4_0$ \cr
\+ \hbox to 3.7cm{\hfill $\phi^2_{1}=\phi^0_3\equiv\phi_3
=\phi^{ \dagger }_1$\hfill}
&\hbox to 4.3cm{\hfill $3\over 4$\hfill}
&\hbox to .9cm{\hfill}$\eta c^4_2$ \cr
\+ \hbox to 3.7cm{\hfill $\phi^1_0\equiv\epsilon$\hfill}
&\hbox to 4.3cm{\hfill $1\over 3$\hfill}
&\hbox to .9cm{\hfill}$\eta c^2_0$ \cr
\+ \hbox to 3.7cm{\hfill $\phi^1_1=\phi^1_{-1}$\hfill}
&\hbox to 4.3cm{\hfill $1\over 12$\hfill}
&\hbox to .9cm{\hfill}$\eta c^2_2$ \cr
\+ \cr
\+ \hbox to 3.7cm{\hfill $\phi^{1/2}_{-1/2}$\hfill}
&\hbox to 4.3cm{\hfill $1\over 16$\hfill}
&\hbox to .9cm{\hfill}$\eta c^1_{-1}$ \cr
\+ \hbox to 3.7cm{\hfill $\phi^{1/2}_{1/2}$\hfill}
&\hbox to 4.3cm{\hfill $1\over 16$\hfill}
&\hbox to .9cm{\hfill}$\eta c^1_1$ \cr
\+ \hbox to 3.7cm{\hfill $\phi^{3/2}_{-1/2}$\hfill}
&\hbox to 4.3cm{\hfill $9\over 16$\hfill}
&\hbox to .9cm{\hfill}$\eta c^3_{-1}$ \cr
\+ \hbox to 3.7cm{\hfill $\phi^{3/2}_{1/2}$\hfill}
&\hbox to 4.3cm{\hfill $9\over 16$\hfill}
&\hbox to .9cm{\hfill}$\eta c^3_1$ \cr}$$
\vskip .2cm
An $S^1/Z\!\!\!Z_2$ orbifold at radius $R$ has the
partition function
\subequationnumstyle{alphabetic}
$$\eqalignno {Z_{\rm orb}(R) &= {1\over 2}\{Z(R)
+ {2\vert\eta\vert\over\vert\vartheta_2\vert}
+ {2\vert\eta\vert\over\vert\vartheta_3\vert}
+ {2\vert\eta\vert\over\vert\vartheta_4\vert}\}
& \eqnlabel{orb-a}\cr
&= {1\over 2}\{Z(R) + {\vert\vartheta_3\vartheta_4\vert\over
\vert\eta\vert^2} + {\vert\vartheta_2\vartheta_4\vert\over \vert\eta\vert^2}
+ {\vert\vartheta_2\vartheta_3\vert\over
\vert\eta\vert^2}\} & \eqnlabel{orb-b}\cr}$$
where $\vartheta_{i= 1{\rm ~to~} 4}$ are the classical Jacobi theta-functions and
$$ Z(R) = {1\over \eta\bar\eta}\sum^{\infty}_{m,n=-\infty}
q^{\{{m\over 2R} + nR\}^2/2}\bar q^{\{{m\over 2R} - nR\}^2/2}
\eqno\eqnlabel{pf99}$$
\subequationnumstyle{blank}
is the partition function for a free scalar boson compactified on a circle
of radius $R$.
For $R={\sqrt{6}\over 2}$ the generalized momentum states
$p={m\over\sqrt{6}}+{n\sqrt{6}\over 2}$ can be categorized into four
classes
based on the value of ${p^2\over 2} \pmod{1}$. The classes are
${p^2\over 2}= 0,{1\over 12},{1\over 3}, {\rm ~and~} {3\over 4}\pmod{1}$.
$p= {m\over\sqrt{6}} + {n\sqrt{6}\over 2}$ and
$\bar p= {m\over\sqrt{6}} - {n\sqrt{6}\over 2}$ belong to the same
class.
That is,
$${1\over 2}(p^2 -\bar p^2) \equiv 0 {\rm ~mod~} 1\, ,\eqno\eqnlabel{pf2}$$
(as required by modular invariance or, equivalently, by level
matching.)\mpr{narain86,narain87}
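This class structure is easy to confirm by brute force. The following sketch (ours; exact rational arithmetic, with an arbitrary truncation of the $(m,n)$ lattice) finds exactly the four classes above and checks the level-matching condition (\puteqn{pf2}):

```python
# Consistency check (ours, not from the text): at R = sqrt(6)/2 the momenta
# give p^2/2 = m^2/12 + mn/2 + 3n^2/4 and pbar^2/2 = m^2/12 - mn/2 + 3n^2/4,
# so (p^2 - pbar^2)/2 = mn is an integer and p^2/2 mod 1 takes four values.
from fractions import Fraction

classes = set()
for m in range(-12, 13):
    for n in range(-12, 13):
        half_p2  = Fraction(m * m, 12) + Fraction(m * n, 2) + Fraction(3 * n * n, 4)
        half_pb2 = Fraction(m * m, 12) - Fraction(m * n, 2) + Fraction(3 * n * n, 4)
        assert (half_p2 - half_pb2) % 1 == 0     # level matching, eq. (pf2)
        classes.add(half_p2 % 1)

print(sorted(classes))   # 0, 1/12, 1/3, 3/4
```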
The untwisted sector of the model corresponds to the first two terms on the
right-hand side of eq.~(\puteqn{orb-a}) and the twisted sector, the remaining
two terms. The factor of ${1\over 2}$ is due to the GSO projection from
the $Z\!\!\!Z_2$ orbifolding, requiring invariance of states under
$g: ~~X(z,\bar z)\rightarrow -X(z,\bar z)$. In the untwisted sector this
invariance requires pairing together of momentum
states $\vert m,n\rangle$ and $\vert -m,-n\rangle$ and so projects out
half the original number of states in the untwisted sector.
The second term in (\puteqn{orb-a}) and (\puteqn{orb-b}) corresponds to
states antiperiodic along the ``time'' loop and thus can only be states
built from a net even number of $\alpha(z)$ and $\bar \alpha(\bar z)$ oscillators acting
on $\left\vert m=n=0\right\rangle$.
The twisted sector states correspond to a total even number of
$\alpha_{r}$ and $\bar \alpha_{r}$, $r\in Z\!\!\!Z+{1\over2}$, oscillators acting on
the
$\left\vert m=n=0\right\rangle$
twisted vacuum with $h=\bar h={1\over16}$. Thus the twisted states have
conformal dimensions of the form
$(h,\bar h)\in ({1\over16}+Z\!\!\!Z,\, {1\over16}+Z\!\!\!Z)$
or $({1\over16}+Z\!\!\!Z+{1\over2},\, {1\over16}+Z\!\!\!Z+{1\over2})$.
The first six primary fields of the $Z\!\!\!Z_4$ PCFT listed in Table 4.1 have
representations in the untwisted sector of $Z$(orbifold, $R={\sqrt6\over2}$)
and the latter four have representations in the twisted
sector.\footnote {We note that independent of the choice of the affine factor
in the partition functions of section 4.2, the required ($n$,$\bar n$)
partition functions of (\puteqn{nn4-a}, \puteqn{nn8-a}, \puteqn{nn16-a})
effectively remove from the theory
the primary fields with half integer $j$, $m$. The only theory which uses
the twisted sector is the $K=2$ superstring. The significance of this
observation is under investigation.} From the classes of
${p^2\over2}=0$, ${1\over12}$, $1\over3$,
$3\over4$ $\pmod{1}$
states we find the following identities for string functions:
\subequationnumstyle{alphabetic}
$$\eqalignno
{\vert\eta c^0_0\vert^2 + \vert\eta c^4_0\vert^2
&={1\over2}
\left\{
{1\over\vert\eta\vert^2}
\sum_{{p^2\over2}\equiv 0 {\rm~mod~} 12}
q^{({m\over2R}+nR)^2/2}
\bar q^{({m\over2R}-nR)^2/2} +
{\vert\vartheta_3\vartheta_4\vert\over\vert\eta\vert^2}
\right\}{\hbox to .75cm{\hfill}}
&\eqnlabel{defc-a}
\cr
\vert\eta c^2_2\vert^2
&={1\over2}{1\over\vert\eta\vert^2}
\sum_{{p^2\over2}\equiv 1 {\rm ~mod~} 12}
q^{({m\over2R}+nR)^2/2}
\bar q^{({m\over2R}-nR)^2/2}
&\eqnlabel{defc-b}
\cr
\vert\eta c^2_0\vert^2
&={1\over2}{1\over\vert\eta\vert^2}
\sum_{{p^2\over2}\equiv 4 {\rm ~mod~} 12}
q^{({m\over2R}+nR)^2/2}
\bar q^{({m\over2R}-nR)^2/2}
&\eqnlabel{defc-c}
\cr
\vert\eta c^4_{2}\vert^2 =
\vert\eta c^4_{-2}\vert^2
&={1\over4}{1\over\vert\eta\vert^2}
\sum_{{p^2\over2}\equiv 9 {\rm ~mod~} 12}
q^{({m\over2R}+nR)^2/2}
\bar q^{({m\over2R}-nR)^2/2}
&\eqnlabel{defc-d}
\cr
\vert\eta c^1_{1}\vert^2 +
\vert\eta c^3_{1}\vert^2
&=
\vert\eta c^1_{-1}\vert^2 +
\vert\eta c^3_{-1}\vert^2
={1\over4}
\left\{
{\vert\vartheta_2\vartheta_4\vert\over\vert\eta\vert^2} +
{\vert\vartheta_2\vartheta_3\vert\over\vert\eta\vert^2}
\right\}\,\, .
&\eqnlabel{defc-e}
\cr}$$
\subequationnumstyle{blank}
Identities for the primary fields' partition functions are
possible from this bosonization. Since only the $\phi^j_m$ with integer
$j$, $m$
are in the $Z\!\!\!Z_4$ model, we will henceforth concentrate on the untwisted
sector of the $S^1/Z\!\!\!Z_2$ model. We make the following identifications between
the primary fields of the two models:
\vskip .4cm
\centertext{Table 4.2 Primary Field Representation From Orbifold Bosonization}
$$\vbox{\settabs 3 \columns
\+{$Z\!\!\!Z_4$ Primary Field}
&{\hbox to 1.2cm{\hfill} $S^1/Z\!\!\!Z_2$ \hbox to 1.2cm{\hfill}}
&\hbox to .4cm{\hfill $h$\hfill}
\cr
\+ {\overline{\phantom{$Z\!\!\!Z_4$ Primary Field}}}
&{\overline{\phantom{\hbox to 1.2cm{\hfill} $S^1/Z\!\!\!Z_2$ \hbox to 1.2cm{\hfill}}}}
&{\overline{\phantom{\hbox to .4cm{\hfill $h$\hfill}}}}
\cr
\+\hbox to 3.1cm{\hfill $\phi_0(z)$\hfill}
&\hbox to 3.6cm{\hfill 1\hfill}
&\hbox to .4cm{\hfill 0\hfill}
\cr
\+\hbox to 3.1cm{\hfill $\phi_1(z)+\phi_{-1}(z)$\hfill}
&\hbox to 3.6cm{\hfill ${\rm e}^{i{3\over\sqrt6} X(z)}
+ {\rm e}^{-i{3\over\sqrt6} X(z)}$\hfill}
&\hbox to .4cm{\hfill $3\over4$\hfill}
\cr
\+\hbox to 3.1cm{\hfill $\phi_2(z)$\hfill}
&\hbox to 3.6cm{\hfill $i\partial X$\hfill}
&\hbox to .4cm{\hfill 1\hfill}
\cr
\+\hbox to 3.1cm{\hfill $\epsilon(z)$\hfill}
&\hbox to 3.6cm{\hfill ${\rm e}^{i{2\over\sqrt6} X(z)}
+ {\rm e}^{-i{2\over\sqrt6} X(z)}$\hfill}
&\hbox to .4cm{\hfill $1\over3$\hfill}
\cr
\+\hbox to 3.1cm{\hfill $\phi^1_1(z)$\hfill}
&\hbox to 3.6cm{\hfill ${\rm e}^{i{1\over\sqrt6} X(z)}
+ {\rm e}^{-i{1\over\sqrt6} X(z)}$\hfill}
&$1\over12$\cr}$$\vskip .2cm
\noindent ($\phi_1$ and $\phi_{-1}$ must be paired together since the
$S^1/Z\!\!\!Z_2$ physical state is $\epsilon^{+}+\epsilon^{-}$, where $\epsilon^{+}\equiv
{\rm e}^{i{3\over\sqrt6} X}$, $\epsilon^{-}\equiv {\rm e}^{-i{3\over\sqrt6} X}$.)
Perhaps the first aspect that becomes apparent is how to represent the
fractional supercurrent, $J_{\rm FSC}$, as $J_{\rm FSC}^+ + J_{\rm FSC}^-$:
\subequationnumstyle{alphabetic}
$$\eqalignno{
J_{{\rm FSC},\, {-{4\over3}}}
&=\epsilon\partial X + :\epsilon\epsilon:
&\eqnlabel{supcur-a}\cr
&=\epsilon^+\partial X + :\epsilon^+\epsilon^+:+\epsilon^-\partial X +
:\epsilon^-\epsilon^-:
&\eqnlabel{supcur-b}\cr
&= J^+_{{\rm FSC},\, {-{4\over3}}}
+ J^-_{{\rm FSC},\, {-{4\over3}}}
&\eqnlabel{supcur-c}\cr}$$
\subequationnumstyle{blank}
with ${\rm e}^{\pm i{4\over\sqrt6} X}$ the only candidates for
$:\epsilon^{\pm}\epsilon^{\pm}:\,$.\footnote{Subsequently,
ref.~[\putref{argyres91e}]
has shown that closure of the fractional current and energy-momentum OPEs
requires $:\epsilon^{\pm}\epsilon^{\pm}:= {\rm e}^{\pm i{4\over\sqrt6} X}$
to be the descendent term in $G^{\mp}$, respectively.}
Since the identities (\puteqn{defc-a}--\puteqn{defc-e}) involve
$\vert\eta c^{2j}_{2m}\vert^2$ rather
than just $\eta c^{2j}_{2m}$, they do not necessarily imply the exact
equivalence of the parafermion and orbifold models. However, more
fundamental identities for the string functions do exist.
Since none of the $Z\!\!\!Z_4$ parafermion fields
connected with the twisted orbifold sector appear in the
$K=4$ FSC model, we can
look just at a left-moving (holomorphic) boson compactified
on a circle
with $R={\sqrt6}$, but not $Z\!\!\!Z_2$ twisted.
\subequationnumstyle{alphabetic}
$$Z(z,R={\sqrt6})=
{1\over\eta}
\sum_{m=-\infty}^{\infty}
q^{[{m\over R}]^2/2}\,\, .\eqno\eqnlabel{zpart-a}$$
If we change summation using $m=6n+i$, $i=0 {\rm ~to~}5$,
then the partition
function can be split into\footnote{Note that $m=i \pmod{6}$ terms
are equivalent to $m=-i \pmod{6}$ terms, so if we include a factor of
two, we need only sum over $i=0 {\rm ~to~} 3$.}
$$Z(z,R={\sqrt6})=
{1\over\eta}
\sum_{n={-\infty}\atop{i=0 {\rm ~to~} 5}}^{\infty}
q^{[{(6n+i)\over R}]^2/2}\,\, .\eqno\eqnlabel{zpart-b}$$\subequationnumstyle{blank}
This suggests the following more succinct
identities:\footnote{These were verified up to $q^{1300}$ using Mathematica.}
\subequationnumstyle{alphabetic}
$$\eqalignno
{\eta c^2_2
&={1\over\eta}q^{1\over12}
\sum_{n=-\infty}^{\infty}
q^{3n^2+n}
&\eqnlabel{id4-a}
\cr
\eta c^2_0
&={1\over\eta}q^{1\over3}
\sum_{n=-\infty}^{\infty}
q^{3n^2+2n}
&\eqnlabel{id4-b}
\cr
\eta c^4_2 = \eta c^4_{-2}
&={1\over 2\eta}q^{3\over4}
\sum_{n=-\infty}^{\infty}
q^{3n^2+3n}
&\eqnlabel{id4-c}
\cr
\eta (c^0_0+c^4_0)
&={1\over\eta}
\sum_{n=-\infty}^{\infty}
q^{3n^2}\,\, .
&\eqnlabel{id4-d}
\cr}$$
\subequationnumstyle{blank}
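These identities amount to the statement that the residue class $m = 6n+i$ of eq.~(\puteqn{zpart-b}) contributes the exponents $3n^2 + in + i^2/12$. A small numerical sketch (ours) confirms that the six classes exactly reassemble the full sum of eq.~(\puteqn{zpart-a}):

```python
# Sketch (ours): verify the m = 6n+i split behind eqs. (zpart-a)/(zpart-b).
# Exponents are tracked in units of 1/12, so q^{m^2/12} is stored as m^2.
from collections import Counter

full = Counter(m * m for m in range(-300, 301))
split = Counter()
for i in range(6):
    for n in range(-51, 51):
        split[12 * (3 * n * n + i * n) + i * i] += 1   # = (6n+i)^2

cutoff = 300 ** 2    # both truncated sums are complete below this exponent
assert {e: c for e, c in full.items() if e < cutoff} == \
       {e: c for e, c in split.items() if e < cutoff}
print("residue-class split verified for all exponents below", cutoff)
```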
The corresponding free boson representations of the parafermion primary
fields are given in Table 4.3:
\centertext{\hbox to 1cm{\hfill}}
\centertext{Table 4.3 Primary Field Representation From $R= {{\sqrt{6}}}$
Bosonization}
$$\vbox{\settabs 4 \columns
\+ {{$Z\!\!\!Z_4$ Parafermion}}\hskip .5cm {{$R={\sqrt 6}$ Boson Rep.}}
&&\hbox to 2.65cm{\hfill Verma Module\hfill}
&\hbox to 4cm{\hfill Boson Rep.\hfill}\cr
\+ {\overline{\phantom{$Z\!\!\!Z_4$ Parafermion}}}\hskip .5cm
{\overline{\phantom{\hbox to 3.63cm{$R={\sqrt 6}$ Boson Rep.\hfill}}}}
&&\hbox to .6cm{\hfill}{\overline{\phantom{\hbox to 2.65cm{\hfill Verma
Module\hfill}}}}
&\overline{\phantom{\hbox to 4cm{\hfill Boson Rep.\hfill}}}\cr
\+ \hbox to 2.8cm{\hfill $\phi_0$\hfill}\hskip .5cm
\hbox to 3.6cm{\hfill 1\hfill}
&&\hbox to 2.65cm{\hfill $[\phi_0]$\hfill}
&$\{ 1,\, {\rm e}^{i{6n\over\sqrt 6}X} + {\rm e}^{-i{6n\over\sqrt 6}X}\, ;$\cr
\+ &&& \hskip 2.7cm $n>0 \} $\cr
\+ \hbox to 2.8cm{\hfill $\phi_1$\hfill}\hskip .5cm
\hbox to 3.6cm{\hfill ${\rm e}^{i{3\over\sqrt6} X}$\hfill}
&&\hbox to 2.65cm{\hfill [$\phi_1$]\hfill}
&\hbox to 4cm{\hfill $\{{\rm e}^{i{6n+3\over\sqrt6} X}\}$\hfill}\cr
\+ \hbox to 2.8cm{\hfill $\phi_{-1}$\hfill}\hskip .5cm
\hbox to 3.6cm{\hfill ${\rm e}^{-i{3\over\sqrt6} X}$\hfill}
&&\hbox to 2.65cm{\hfill [$\phi_{-1}$]\hfill}
&\hbox to 4cm{\hfill $\{{\rm e}^{-i{6n+3\over\sqrt6} X}\}$\hfill}\cr
\+ \hbox to 2.8cm{\hfill $\phi_2$\hfill}\hskip .5cm
\hbox to 3.6cm{\hfill $i\partial X$\hfill}
&&\hbox to 2.65cm{\hfill [$\phi_{2}$]\hfill}
&\hbox to 4cm{\hfill $\{\alpha_{-n}\}$\hfill}\cr
\+ \hbox to 2.8cm{\hfill $\epsilon$\hfill}\hskip .5cm
\hbox to 3.6cm{\hfill ${\rm e}^{i{2\over\sqrt6} X}$,
${\rm e}^{-i{2\over\sqrt6} X}$\hfill}
&&\hbox to 2.65cm{\hfill [$\epsilon$]\hfill}
&\hbox to 4cm{\hfill $\{{\rm e}^{i{6n+2\over\sqrt6} X}\}$,
$\{{\rm e}^{-i{6n+2\over\sqrt6} X}\}$\hfill}\cr
\+ \hbox to 2.8cm{\hfill $\phi^1_1$\hfill}\hskip .5cm
\hbox to 3.6cm{\hfill ${\rm e}^{i{1\over\sqrt6} X}$,
${\rm e}^{-i{1\over\sqrt6} X}$\hfill}
&&\hbox to 2.65cm{\hfill [$\phi^1_1$]\hfill}
&\hbox to 4cm{\hfill $\{{\rm e}^{i{6n+1\over\sqrt6} X}\}$,
$\{{\rm e}^{-i{6n+1\over\sqrt6} X}\}$\hfill}\cr}$$\vskip .2cm
In this representation
$\phi_1$ and $\phi_{-1}$ need not be paired together.
Also, $\epsilon$ and $\phi^1_1$ have double representations.
For $\epsilon$, this allows the fractional supercurrent,
$J_{\rm FSC}$, to be
expressed as
$J^+_{{\rm FSC}}+ J^-_{{\rm FSC}}$.
For $\phi^1_1$, this should correspond to the two
spin modes, call them $(+)$ and $(-)$. The zero mode of one representation
of $\epsilon$
should act as a raising operator between these spin states and the other as
a lowering operator:
$${\eqalign{ \epsilon^+_0 (+) &= (-)\cr
\epsilon^+_0 (-) &= 0\cr
\epsilon^-_0 (-) &= (+)\cr
\epsilon^-_0 (+) &= 0\,\, .\cr}}\eqno\eqnlabel{spinmodes}$$
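As a toy consistency check (ours, not from the original text), the zero-mode relations of eq.~(\puteqn{spinmodes}) are realized by $2\times 2$ lowering and raising matrices acting on the two spin modes:

```python
# Toy matrix realization (ours) of eq. (spinmodes): on the spin modes
# (+) = (1,0) and (-) = (0,1), eps^+_0 lowers and eps^-_0 raises.
import numpy as np

plus, minus = np.array([1, 0]), np.array([0, 1])
eps_plus_0  = np.array([[0, 0], [1, 0]])   # eps^+_0: (+) -> (-), (-) -> 0
eps_minus_0 = np.array([[0, 1], [0, 0]])   # eps^-_0: (-) -> (+), (+) -> 0

assert (eps_plus_0  @ plus  == minus).all() and (eps_plus_0  @ minus == 0).all()
assert (eps_minus_0 @ minus == plus ).all() and (eps_minus_0 @ plus  == 0).all()
```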
The free boson/orbifold representations of the $Z\!\!\!Z_4$ parafermion
CFT should be a valuable tool for better understanding the $K=4$ FSC model,
especially its associated partition function.
\hfill\vfill\eject
{\hc{4.3.b}{\sl Generalized Commutation Relations and the GSO Projection}}
\vskip .5cm
One of the major complications of
generalizing from the $K=2$ fermion case to $K>2$ is that the parafermions
(and bosonic field representations) do not have simple commutation
relations.\markup{[\putref{zamol87}]}
What are the commutation relations for non-(half) integral spin particles?
Naively,
the first possible generalization of standard (anti-)commutation
relations for two
fields $A$ and $B$ with fractional spins seems to be:
$$AB-{\rm e}^{[i4\pi\, {\rm spin}(A)\, {\rm spin}(B)]}BA=0\eqno\eqnlabel{wrong}$$
(which reduces to the expected result for bosons and fermions). This is too
simple a generalization, however.\markup{[\putref{wilczek90}]}
Fractional spin particles must be representations
of the braid group.\mpr{wilczek90}
Zamolodchikov and Fateev\markup{[\putref{zamol87}]} have
shown that worldsheet parafermions (of fractional spin) have complicated
commutation relations that
involve an infinite number of modes of a given field. For example:
\subequationnumstyle{alphabetic}
$$\eqalignno{\sum_{l=0}^{\infty} C^{(l)}_{(-1/3)}&\left[
A_{n+(1-q)/3-l} A^{\dagger}_{m-(1-q)/3 +l} +
A^{\dagger}_{m- (2-q)/3-l}A_{n-(2-q)/3 +l}
\right]=\cr
&-{1\over 2}\left( n-{q\over 3}\right) \left( n+1 -{q\over 3}\right)
\delta_{n+m,0} + {8\over 3c}L_{n+m}&\eqnlabel{commut-a}}$$
and
$$\sum_{l=0}^{\infty} C^{(l)}_{(-2/3)}
\left[
A_{n-q/3-l}A_{m+(2-q)/3+l}-
A_{m-q/3-l}A_{n+(2-q)/3+l}\right]
= {\lambda\over 2}\left( n-m \right)A^{\dagger}_{(2-2q)/3 +n+m}\,\, ,
\eqno\eqnlabel{commut-b}$$
\subequationnumstyle{blank}
where $A$ is a parafermion field, and
$L_n$ are the generators of the
Virasoro algebra.
$\lambda$ is a real coefficient,
$n$ is an integer, and $q=0,1,2 \pmod{3}$ is a $Z\!\!\!Z_3$ charge of
Zamolodchikov and Fateev that can be
assigned to each primary field in the $K=4$ model.
The coefficients, $C^{(l)}_{(\alpha)}$, are determined by
the power expansion
$$(1-x)^{\alpha}= \sum_{l=0}^{\infty} C^{(l)}_{(\alpha)}x^l\,\,
.\eqno\eqnlabel{cla}$$ As usual,
$c\equiv {2(K-1)\over K+2}$ is the central charge of the level-$K$ PCFT.
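For concreteness, the coefficients of eq.~(\puteqn{cla}) follow from the elementary recursion $C^{(l)}_{(\alpha)} = -C^{(l-1)}_{(\alpha)}(\alpha-l+1)/l$; the short sketch below (ours) generates them for the $\alpha=-1/3$ case appearing in eq.~(\puteqn{commut-a}) and cross-checks the expansion numerically:

```python
# Sketch (ours) of eq. (cla): C^(l)_(alpha) = (-1)^l binom(alpha, l),
# generated by the recursion C^(l) = -C^(l-1) (alpha - l + 1)/l.
from fractions import Fraction

def C(alpha, lmax):
    coeffs = [Fraction(1)]
    for l in range(1, lmax + 1):
        coeffs.append(-coeffs[-1] * (alpha - l + 1) / l)
    return coeffs

alpha = Fraction(-1, 3)        # the value appearing in eq. (commut-a)
print(C(alpha, 4))             # [1, 1/3, 2/9, 14/81, 35/243]

x = 1e-3                       # numerical cross-check of (1-x)^alpha
series = sum(float(c) * x**l for l, c in enumerate(C(alpha, 10)))
assert abs(series - (1 - x) ** float(alpha)) < 1e-12
```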
These commutation relations were derived from the OPE of the related
fields.\markup{[\putref{zamol87}]}
(Hence more terms in a given OPE result in more complicated
commutation relations.)
Similar relations between the modes of two different primary
fields can also be derived from their OPE's.
The significance of these commutation relations is that they
severely reduce the number of
distinct physical states in parafermionic models. There are
several equivalent ways of creating a given physical state from the
vacuum using different mode excitations from different parafermion
primary fields
in the same CFT. Thus, the actual Hilbert space of states for this $K=4$
model will be much reduced compared to the space prior to modding
out by these equivalences.\footnote{These equivalences
have subsequently been explicitly shown and the distinct low ${\rm mass}^2$
fields determined in Argyres {\it et al.}\markup{[\putref{argyres91e}]}}
Although the fields in the PCFT do not (anti-)commute,
and instead have complicated commutation relations, some insight can be
gained by comparing the $D=6$, $K=4$ FSC model to the standard $D=10$
superstring. We can, in fact, draw parallels between $\epsilon$ and the
standard fermionic superpartner, $\psi$, of an uncompactified boson X.
In the free fermion approach, developed simultaneously
by Kawai, Lewellen and Tye
and by
Antoniadis, Bachas and Kounnas, generalized GSO projections based on boundary
conditions of the world sheet fermions are
formed.\mpr{kawai87a,antoniadis87,antoniadis88}
Fermions with
half-integer modes (NS-type) are responsible for $Z\!\!\!Z_1$ (trivial)
projections; fermions
with integer modes (R-type) induce $Z\!\!\!Z_2$ projections. In the non-Ramond
sectors these $Z\!\!\!Z_2$ projections remove complete states, while in the
Ramond sector itself they
eliminate half of the spin modes, giving chirality. Fermions
with general complex boundary conditions,
$$\psi (\sigma_1=2\pi)
=-{\rm e}^{i{\pi x}}\psi (\sigma_{1}=0)\,\, ,\eqno\eqnlabel{fbc}$$
where $x\equiv {a\over b}$ is rational with
$a$ and $b$ coprime and chosen in the range
$-1\leq x < 1$,
form in the non-Ramond sector $Z\!\!\!Z_{2b}$ projections if $a$ is odd and $Z\!\!\!Z_b$
projections if $a$ is even.
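The projection order here is just the smallest $N$ for which $N x \in 2Z\!\!\!Z$; a quick sketch (ours) verifies the $Z\!\!\!Z_{2b}$ (odd $a$) versus $Z\!\!\!Z_b$ (even $a$) pattern on sample twists:

```python
# Check (ours) of the claimed projection orders for a twist x = a/b,
# gcd(a,b) = 1: the order is the least N with N*x in 2Z, i.e. 2b or b.
from fractions import Fraction
from math import gcd

def projection_order(a, b):
    x, N = Fraction(a, b), 1
    while (N * x) % 2 != 0:
        N += 1
    return N

for a, b in [(1, 3), (3, 5), (-1, 4), (2, 3), (4, 7), (-2, 5)]:
    assert gcd(a, b) == 1
    assert projection_order(a, b) == (2 * b if a % 2 else b)
```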
For free-fermion models, the GSO operator, originating
from a sector where the world sheet
fermions $\psi^i$ have boundary conditions
\subequationnumstyle{alphabetic}
$$\psi^i(2\pi)= -{\rm e}^{i\pi x^i}\psi^i(0)\, ,\eqno\eqnlabel{fbc-a}$$
and that acts on a physical state $\vert {\rm phys}\rangle_{\vec y}$
in a sector where the same
fermions have boundary conditions
$$\psi^i(2\pi)= -{\rm e}^{i\pi y^i}\psi^i(0)\, ,\eqno\eqnlabel{fbc-b}$$
\subequationnumstyle{blank}
takes the form,
$$\left\{{\rm e}^{i\pi {\vec x}\cdot {\vec F}_{\vec y}}=\delta_{\vec y}
C({\vec y}\vert {\vec x})\right\}\vert {\rm phys}\rangle_{\vec y}
\eqno\eqnlabel{gso}$$
for states surviving the projection.
Those states not satisfying the demands of the GSO operator for at least
one sector $\vec x$ will not appear in the partition function of the
corresponding model.\footnote{In eq.~(\puteqn{gso}),
${\vec F}_{\vec y}$ is the (vector) fermion number
operator for states in sector $\vec y$. $\delta_{\vec y}$
is $-1$ if either the left-moving or right-moving $\psi^{\rm spacetime}$ are
periodic and 1 otherwise.
$C(\vec y\vert\vec x)$ is a phase with value chosen from an allowed set
of order $g_{\vec y,\vec x} = GCD(N_{\vec y},N_{\vec x})$, where
$N_{\vec y}$ is the lowest positive integer such that
$$N_{\vec y}\times \vec y = \vec 0 \pmod{2}\,\, .$$}
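To make the action of eq.~(\puteqn{gso}) concrete, here is a minimal sketch (ours; the convention of storing every phase ${\rm e}^{i\pi r}$ as the rational $r$, and the example sector, are purely illustrative):

```python
# Minimal sketch (ours) of the GSO condition, eq. (gso).  Every phase
# e^{i pi r} is stored as the rational r, so equality of phases is r mod 2.
from fractions import Fraction

def survives(x_vec, F_y, delta_y, C_phase):
    """x_vec: sector boundary conditions; F_y: fermion numbers of the state;
    delta_y: +1 or -1; C_phase: C(y|x) written as e^{i pi C_phase}."""
    lhs = sum(Fraction(x) * F for x, F in zip(x_vec, F_y))   # e^{i pi x.F}
    rhs = C_phase + (1 if delta_y == -1 else 0)              # delta_y e^{i pi C}
    return (lhs - rhs) % 2 == 0

# Hypothetical example: an all-periodic sector x = (1,1,1,1) acting on a
# state with fermion numbers F = (1,0,1,0), delta_y = +1:
print(survives([1, 1, 1, 1], [1, 0, 1, 0], 1, Fraction(1)))   # False
print(survives([1, 1, 1, 1], [1, 0, 1, 0], 1, Fraction(0)))   # True
```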
The boundary conditions are encoded in
the mode expansions of the complex fermion field, $\psi^+$,
and its complex conjugate, $\psi^-$, on a torus.
These have the following form for a general twist by $x\equiv {a\over b}$:
\subequationnumstyle{alphabetic}
$$ \eqalignno{
\psi^+ (\sigma_1, \sigma_2) &=
\sum_{n= 1}^{\infty}
[
\psi_{n-1/2-x/2}^{\alpha}\,
\exp\left\{ -i(n - 1/2 - x/2) (\sigma_1 +\sigma_2) \right\} \cr
&\phantom{\sum_{n= 1}^{\infty}}\hbox to .2truecm{\hfill}
+ \psi_{1/2 - n - x/2}^{\beta}\,
\exp\left\{-i(1/2 - n - x/2)(\sigma_1+\sigma_2 )\right\}
]
&\eqnlabel{psimodes-a}\cr
\psi^- (\sigma_1, \sigma_2) &=
\sum_{n= 1}^{\infty}[
\psi_{1/2 - n + x/2}^{\alpha}\,
\exp\left\{-i(1/2 - n + x/2 )(\sigma_1+\sigma_2)
\right\}\cr
&\phantom{\sum_{n= 1}^{\infty}}\hbox to .2truecm{\hfill}
+ \psi_{n - 1/2 +x/2}^{\beta}\,
\exp\left\{-i( n - 1/2 +x/2)(\sigma_1 +\sigma_2)\right\}]
&\eqnlabel{psimodes-b}}$$
\subequationnumstyle{blank}
(where ${\psi^{\alpha}_r}^{ \dagger }\equiv\psi^{\alpha}_{-r}$ and
${\psi^{\beta}_r}^{ \dagger }\equiv\psi^{\beta}_{-r}$).
$\psi_r^{\alpha}$ and $\psi_r^{\beta}$ are independent modes. Thus,
\subequationnumstyle{alphabetic}
$$\eqalignno{
\psi^+(\sigma_1+2\pi)&=
e^{+i 2\pi (1/2)}\, e^{i\pi x}\, \psi^+(\sigma_1)
&\eqnlabel{twistbcnspsi-a}\cr
\psi^-(\sigma_1+2\pi)&=
e^{-i 2\pi(1/2)}\, e^{-i\pi x}\, \psi^-(\sigma_1).
&\eqnlabel{twistbcnspsi-b}}$$
\subequationnumstyle{blank}
The specification of the fields is completed by stating the commutation
relation that the modes obey,
\subequationnumstyle{alphabetic}
$$\eqalignno{
\left\{{\psi^{\alpha}_c}^{ \dagger },\psi^{\alpha}_d\right\}
&=\left\{ {\psi^{\beta}_c}^{ \dagger },\psi^{\beta}_d\right\}=
\delta_{cd}\,\, , &\eqnlabel{anticomm-a}\cr
\left\{{\psi^{\alpha}_c}^{ \dagger },\psi^{\beta}_d\right\}
&=\left\{ {\psi^{\beta}_c}^{ \dagger },\psi^{\alpha}_d\right\}= 0
\,\, . &\eqnlabel{anticomm-b}}
$$
\subequationnumstyle{blank}
A similar analysis can be done with the $\epsilon = \phi^1_0$
fields of the $K=4$ parafermion theory.
The normal untwisted ({\it i.e.,} Neveu-Schwarz) modes of $\epsilon$ are
$\epsilon^+_{-{1\over3}-n}$ and $\epsilon^-_{{1\over3}-n}$
where $n\in Z\!\!\!Z$. That is, untwisted $\epsilon= (\epsilon^+, \epsilon^-)$
has the following normal-mode expansions.
{\settabs 3\columns
\+ \cr
\+ N-S Sector:\cr}
\subequationnumstyle{alphabetic}
$$ \eqalignno{
\epsilon^+ (\sigma_1, \sigma_2) &=
\sum_{n= 1}^{\infty}
[
\epsilon_{n-1/3}^{\alpha}
\,
\exp\left\{ -i(n - 1/3) (\sigma_1 +\sigma_2) \right\} \cr
&\phantom{\sum_{n= 1}^{\infty}}\hbox to .2truecm{\hfill}
+ \epsilon_{2/3 - n }^{\beta}
\,
\exp\left\{-i(2/3 - n )(\sigma_1+\sigma_2 )\right\}
]
&\eqnlabel{emodes-a}\cr
\epsilon^- (\sigma_1, \sigma_2) &=
\sum_{n= 1}^{\infty}[
\epsilon^{\alpha}_{1/3 - n }\, \exp\left\{-i(1/3 - n
)(\sigma_1+\sigma_2)\right\}\cr
&\phantom{\sum_{n= 1}^{\infty}}\hbox to .2truecm{\hfill}
+ \epsilon^{\beta}_{n - 2/3 }\,
\exp\left\{-i( n - 2/3 )(\sigma_1 +\sigma_2)\right\}]
&\eqnlabel{emodes-b}}$$
\subequationnumstyle{blank}
(where ${\epsilon^{\alpha}_r}^{ \dagger }=\epsilon^{\alpha}_{-r}$ and
${\epsilon^{\beta}_r}^{ \dagger }=\epsilon^{\beta}_{-r}$).
Similarly, the associated boundary conditions in this sector are
\subequationnumstyle{alphabetic}
$$ \eqalignno{
\epsilon^+(\sigma_1+2\pi)&=
{\rm e}^{+i 2\pi/3}\, \epsilon^+(\sigma_1)
&\eqnlabel{bcns-a}\cr
\epsilon^-(\sigma_1+2\pi)&=
{\rm e}^{-i 2\pi/3}\, \epsilon^-(\sigma_1)\,\, .
&\eqnlabel{bcns-b}}$$
\subequationnumstyle{blank}
Like the standard fermion, the $\epsilon$ operators at $K=4$
can be in twisted sectors,
where the normal-mode
expansions have the following form.
\hfill\vfill\eject
{\settabs 3\columns
\+ General Twisted Sector:\cr}
\subequationnumstyle{alphabetic}
$$\eqalignno{
\epsilon^+ (\sigma_1, \sigma_2) &= \sum_{n= 1}^{\infty}[
\epsilon^{\alpha}_{n - {1/3} - {x/2} }\,
\exp\left\{ -i(n - {1/3} - {x/2} )(\sigma_1 +\sigma_2)\right\}\cr
&\phantom{\sum_{n= 1}^{\infty}}\hbox to .2truecm{\hfill}
+ \epsilon^{\beta}_{{2/3} - n - {x/2} }\,
\exp\left\{-i({2/3} - n - {x/2} )(\sigma_1 +\sigma_2)\right\} ]
&\eqnlabel{twistmodes-a}\cr
\epsilon^- (\sigma_1, \sigma_2) &=
\sum_{n= 1}^{\infty}[
\epsilon^{\alpha}_{{1/3} - n + {x/2} }\,
\exp\left\{-i({1/3} - n + {x/2} )(\sigma_1 +\sigma_2)\right\}\cr
&\phantom{\sum_{n= 1}^{\infty}}\hbox to .2truecm{\hfill}
+ \epsilon^{\beta}_{n - {2/3} + {x/2}}\,
\exp\left\{ -i( n - {2/3} + x/2)(\sigma_1 +\sigma_2)\right\} ]\,\, .
&\eqnlabel{twistmodes-b}}$$
\subequationnumstyle{blank}
The associated boundary conditions are
\subequationnumstyle{alphabetic}
$$\eqalignno{
\epsilon^+(\sigma_1+2\pi)&=
{\rm e}^{+i 2\pi (1/3)}\, {\rm e}^{i\pi {x}}\, \epsilon^+(\sigma_1)
&\eqnlabel{twistbcns-a}\cr
\epsilon^-(\sigma_1+2\pi)&=
{\rm e}^{-i 2\pi(1/3)}\, {\rm e}^{-i\pi {x}}\, \epsilon^-(\sigma_1)\,\, .
&\eqnlabel{twistbcns-b}}$$
\subequationnumstyle{blank}
The complicated
commutation relations of the modes of $\epsilon$ have already been discussed.
(See eq.~\pe{commut-a}.)
{}From the analogy of free-fermion models, we suggest that in $K=4$ parafermion
models the presence of a sector containing twisted $\epsilon$ fields
with boundary conditions
(\puteqn{twistbcns-a}) or (\puteqn{twistbcns-b}) will result in
$Z\!\!\!Z_{b}$ or $Z\!\!\!Z_2\times Z\!\!\!Z_{b}$ GSO projections,
depending on whether $a$ is even or odd, respectively. (We assume as before
that $a$ and $b$ are relatively prime but now use the range
$-2/3\leq x\equiv a/b< 4/3$.)
Zero-modes correspond to a twist by $x= -2/3$.
Whatever the GSO projection is, states resulting from $D$ factors
of $\epsilon_0$ acting on the fermionic
vacuum must survive, in order to have spacetime fermions. Thus, we
conjecture that the presence of these (twisted) zero-modes
$\epsilon_n$, $n\in Z\!\!\!Z$, in a model results in a
generalized $Z\!\!\!Z_3$ GSO projection.
Likewise for $K=8$ and $16$,
one might expect $Z\!\!\!Z_5$ and
$Z\!\!\!Z_9$ projections, respectively. Such projections for
$K=8$ and $16$ could be significantly altered though, by the effects of the
non-Abelian braiding of the non-local interactions.
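A heuristic way to see this pattern (ours, not a derivation): the N-S monodromy of $\epsilon = \phi^1_0$ is ${\rm e}^{\pm 2\pi i h}$ with conformal dimension $h = j(j+1)/(K+2) = 2/(K+2)$, and the denominator of $h$ reproduces the conjectured orders:

```python
# Heuristic check (ours): the denominator of h(phi^1_0) = 2/(K+2)
# matches the conjectured Z_3, Z_5, Z_9 projections for K = 4, 8, 16.
from fractions import Fraction

for K, expected in [(4, 3), (8, 5), (16, 9)]:
    h = Fraction(2, K + 2)       # 1/3, 1/5, 1/9
    assert h.denominator == expected
    print(K, h, h.denominator)
```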
One other aspect to notice is that within the range $-2/3\leq x<4/3$
there are actually two distinct N-S sectors, corresponding not just to
$x=0$, but also to $x= 2/3$. This is associated with the
$Z\!\!\!Z_2$ symmetry that interchanges
$\epsilon^+$ and $\epsilon^-$. This symmetry may
explain the origin of the additional $Z\!\!\!Z_2$ GSO-type projection we will
shortly discuss.
For the $K=4$ FSC model, one expects a GSO projection to depend on a
generalization of fermion number. However, the naive generalization to
parafermion number, $F(\phi^1_0)$, is insufficient. We find that we must also
consider the multiplicities of the twist field, $\phi^1_1$, and the field
$\phi^0_1$, which increases the $m$ quantum number by one while keeping
$j$ constant.
In order to derive the MIPF we discovered that, indeed, a $Z\!\!\!Z_3$ projection
must be applied to both the left-moving modes
(LM) and right-moving modes (RM) independently.
Survival of a physical state, $\vert {\rm phys}\big>$, in the Hilbert space
under this $Z\!\!\!Z_3$ projection requires\footnote{Note that this projection alone
does not prevent mixing holomorphic $A_4$--sector and
antiholomorphic $B_4$--sector terms.
This is prevented by the standard requirement
$M^2_{\rm LM}=M^2_{\rm RM}$, {\it i.e.,} $L_0 = \bar L_0$, which here results
in
the RM factors in the partition function being the complex conjugates of
the LM, giving only mod-squared terms in the partition functions.
This allows us to examine only the left-movers in detail in the
following.}
\subequationnumstyle{alphabetic}
$$\left\{{\rm e}^{\left\{i\pi \vec {2\over 3}\cdot\left[ {\vec F}_{{\rm
LM}\,{\rm (RM)}}(\phi^1_0)
+ {\vec F}_{{\rm LM}\, {\rm (RM)}}(\phi^1_1)\right]\right\}}
= {\rm e}^{i\pi {2\over3}}\right\}
\vert {\rm phys}\big>\,\, ,\eqno\eqnlabel{gsoz3-a}$$
or equivalently
$$\left\{Q_{3,\, {\rm LM}\,{\rm (RM)}}\equiv\sum_i F_{i,\, {\rm LM} \,
{\rm (RM)}}(\phi^1_0)
+ \sum_i F_{i,\, {\rm LM} \,{\rm (RM)}}(\phi^1_1) = 1 \pmod{3}
\right\} \vert {\rm phys}\big>\,\, , \eqno\eqnlabel{gsoz3-b}$$
where $F_i(\phi^j_m)_{{\rm LM}\,{\rm (RM)}}$ is the number operator for the
field
$\phi^j_m$ along
the $i$ direction for left-moving (right-moving) modes.
Prior to projection by this extended GSO operator,
we consider all physical states associated with
the LM partition function terms in the expansion of
$(c^0_0 + c^4_0 + c^2_0)^4$ or $(c^2_2 + c^4_2)^4$
to be in the $A_4$--sector.
Similarly, we initially place in the $B_4$--sector
all the LM physical states associated with the partition function terms in the
expansion of $(c^2_2 + c^4_2)^2(c^0_0 + c^4_0 +c^2_0)^2$ or
$(c^0_0 + c^4_0 + c^2_0)^2(c^2_2 + c^4_2)^2$.
There is, however, a
third class of states; let us call this the ``$D_4$'' class. This last
class would be present in the original Hilbert space if not for an additional
$Z\!\!\!Z_2$ GSO projection. Left-moving states in the $D_4$ class would have partition
functions that are terms in the expansion of
$(c^0_0 + c^4_0 + c^2_0)^3(c^2_2 + c^4_2)$ or
$(c^2_2 + c^4_2)^3(c^0_0+c^4_0+c^2_0)$. The thirty-two $D_4$ terms in
the expansions are likewise divisible into subclasses based on their
associated $Z\!\!\!Z_3$ charges, $Q_3$.\footnote{For clarity, we are
always pairing $c^0_0$ and $c^4_0$ in these partition functions, rather than
expanding $(c^0_0 +c^4_0)^n\,$, as is done in Table 4.4.}
Twelve have charge $0 \pmod{3}$, twelve have charge $1 \pmod{3}$
and eight have charge $2 \pmod{3}$. Without the $Z\!\!\!Z_2$ projection, it
is impossible
to keep only the wanted terms in the $A_4$-- and $B_4$--sectors,
while projecting away all of the $D_4$--sector terms.
Simple variations of the projection (\puteqn{gsoz3-a}) cannot accomplish this.
All $D_4$ terms can be eliminated, without further projections on the $A$ and
$B$ terms, by a $Z\!\!\!Z_2$ projection defined by
$$\left\{\sum_i F_{i,\, {\rm LM} \,{\rm (RM)}}(\phi^1_1)
+ \sum_i F_{i,\,{\rm LM} \,{\rm (RM)}}(\phi^0_{\pm 1}) = 0 {\rm ~mod~} 2
\right\} \vert {\rm phys}\big>\,\, . \eqno\eqnlabel{gsoz3-c}$$
\subequationnumstyle{blank}
(Note that for $K=2$,
$\phi^1_1$ is equivalent to the identity field and $\phi^1_0$ is
just the usual fermion, $\psi$. Thus for $K=2$
there is no additional $Z\!\!\!Z_2$ GSO projection.)
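The combined projection can be summarized compactly as follows (a sketch of ours; the example multiplicity assignments are hypothetical and purely illustrative):

```python
# Sketch (ours) of eqs. (gsoz3-b) and (gsoz3-c) acting on a state described
# by its multiplicities of phi^1_0, phi^1_1 and phi^0_{+-1} factors.
def survives_GSO(n_phi10, n_phi11, n_phi0pm1):
    Q3 = (n_phi10 + n_phi11) % 3        # Z_3 charge, must equal 1 (mod 3)
    Z2 = (n_phi11 + n_phi0pm1) % 2      # Z_2 charge, must equal 0 (mod 2)
    return Q3 == 1 and Z2 == 0

print(survives_GSO(1, 0, 0))   # True:  a single phi^1_0 factor
print(survives_GSO(0, 2, 0))   # False: Q3 = 2 (mod 3)
print(survives_GSO(0, 4, 0))   # True:  Q3 = 4 = 1 (mod 3), Z2 = 0
```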
Consideration
of these $D_4$ class states reveals some physical meaning to
our particular $Z\!\!\!Z_3$ charge and the additional $Z\!\!\!Z_2$ projection.
First, in all sectors the charge $Q_3$ commutes with
$(\phi^0_{K/4})^{D-2}$, which transforms between non-projection and
projection states of opposite spacetime statistics in the $A_4$-- and
$B_4$--sectors.
Second, the values of this charge are also associated with specific
${\rm mass}^2\pmod{1}$ levels. Third, only for the $A_4$-- and
$B_4$--sector
states does ${\rm mass}^2\pmod{1}$ commute with the same
twist operator $(\phi^0_{K/4})^{D-2}$.
Recall, in section 4.2 we suggested that
twisting by this latter field was the key
to spacetime SUSY. Without any of our projections the
${\rm mass}^2$ levels $\pmod{1}$
of states present would be
${\rm mass}^2= 0,~ {1\over 12},~ {2\over 12},~\dots {11\over 12}$.
When acting on $D_4$--sector fields,
$(\phi^0_{K/4})^{D-2}$ transforms
\hbox {${\rm mass}^2={i\over 12} \pmod{1}$} states into
\hbox {${\rm mass}^2={ i +6\over 12} \pmod{1}$} states.
Thus, if present in a model, states in the $D_4$--sector paired by
the supersymmetry operator $(\phi^0_{K/4})^{D-2}$
would have to be associated with
different mod-squared terms of the
partition function, in order to preserve $T$ invariance. As a result, the
paired contributions to the partition function could not cancel,
proving that $D$
terms cannot be part of any supersymmetric theory.
Although ${\rm mass}^2 \pmod 1$ commutes with $(\phi^0_{K/4})^{D-2}$ in the
$A(Q_3=0)$,
$A(Q_3=-1)$, $B(Q_3=0)$, $B(Q_3=-1)$ subsectors, within these subsectors
(1) there is either a single bosonic state or fermionic state of lowest
mass without superpartner of equal mass, and/or
(2) the lowest mass states are tachyonic.
(See Table 4.4.)\footnote{Our assignments of states as spacetime
bosons or fermions in the $B$-sector
use an additional projection that we believe distinguishes between the
two. Following the pattern in
eq.~\pe{part4-b}
with bosonic/fermionic assignment of related states defined in
eqs.~(4.2.33a-c), we suggest that for these states the two primary fields,
$\phi^{j_3}_{m_3}$ and $\phi^{j_4}_{m_4=m_3}$
(implicitely) assigned compactified spacetime
indices must be the same, {\it i.e.,} $j_3= j_4$,
or else must form a term in the expansion of
$(\phi^0_0 + \phi^2_0)^2$. This second case is related to $\phi^0_0$ and
$\phi^2_0$ producing the same spacetime fermion field, $\phi^2_1$, when
separately twisted by $\phi^0_{K/4}$.
(Note however that $\phi^2_1\otimes\phi^0_{K/4}=\phi^0_0$ only.)
Following this rule, neither the states
corresponding to
$(c^2_0)(c^0_0)(c^2_2)(c^4_2)$ and $(c^2_2)(c^4_2)(c^2_0)(c^0_0)$,
(which transform between each other under twisting by
$\phi^0_{K/4}\phi^0_{K/4}\phi^0_{K/4}\phi^0_{K/4}$)
nor those associated with
$(c^2_0)(c^4_0)(c^2_2)(c^4_2)$ and $(c^2_2)(c^4_2)(c^2_0)(c^4_0)$,
survive the projections as either spacetime bosons or fermions.
However, for completeness we include these partition functions
in the B-sector columns of Table 4.4. We define the associated states as
either spacetime bosons or fermions based on the value of $m_3=m_4$.
This is academic, though, because the states do not survive the
{\tenBbb Z}$_3$ projection.}
Thus, our specific GSO
projections in terms of
our $Z\!\!\!Z_3$ charge projection and our $Z\!\!\!Z_2$ projection equate to spacetime
SUSY, uniquely so.
\hfill\vfill\eject
\def\cbox#1{\hbox to 0 pt{\hss#1\hss}}
\centertext{Table 4.4 Masses of $K=4$ Highest Weight States\\
(Represented by Their Associated Characters)\\
\hbox to 1cm{\hfill}}
\halign to \hsize{%
#\hfil \tabskip=1em& \hfil#\tabskip=0em& #\hfil \tabskip=1em plus 0.2em&
#\hfil \tabskip=0.9em plus 0.1em& \hfil#\tabskip=0.2em& \hfil#\hfil&
#\hfil \tabskip=0.1em& \hfil#\tabskip=0em& #\hfil \tabskip=1em plus 0.2em&
#\hfil \tabskip=0em\cr
& \cbox{$A$-Sector}&&&&&& \cbox{$B$-Sector}&& \cr
\multispan4\hrulefill& & Survives & \multispan4\hrulefill \cr
\ Boson& \cbox{Mass$^2$}&&
\ Fermion& $Q_3$& GSO&
\ Boson ?& \cbox{Mass$^2$}&&
\ Fermion ?\cr
$(c^4_0)^2(c^4_0)^2$& 3& ${2\over3}$&
& 0& No&
$(c^4_0)^2(c^4_2)^2$& 3& ${1\over6}$&
$(c^4_2)^2(c^4_0)^2$ \cr
$c^2_0\>\> c^4_0\>\>(c^4_0)^2$ & 3&&
& 1& Yes&
$c^2_0\>\> c^4_0\>\> (c^4_2)^2$& 2& ${1\over2}$&
$c^2_2\>\> c^4_2\>\>(c^4_0)^2$ \cr
$c^0_0\>\> c^4_0\>\> (c^4_0)^2$& 2& ${2\over 3}$&
$(c^4_2)^2(c^4_2)^2$& 0& No&
$c^0_0\>\> c^4_0\>\> (c^4_2)^2$& 2& ${1\over6}$&
$(c^4_2)^2c^0_0\>\> c^4_0\>\>$ \cr
$(c^4_0)^2(c^2_0)^2$& 2& ${1\over 3}$&
& $-1$& No
& $(c^4_0)^2(c^2_2)^2$& 1& ${5\over 6}$&
$(c^4_2)^2(c^2_0)^2$ \cr
$(c^2_0)^2(c^4_0)^2$&&&
&&&
$(c^2_0)^2(c^4_2)^2$&&&
$(c^2_2)^2(c^4_0)^2$ \cr
&&&
&&&
$c^2_0\>\> c^4_0\>\> c^2_2\>\> c^4_2\>\>$&&&
$c^2_2\>\> c^4_2\>\> c^2_0\>\> c^4_0\>\>$ \cr
$c^2_0\>\> c^0_0\>\> (c^4_0)^2$& 2&&
$c^2_2\>\>c^4_2\>\>(c^4_2)^2$& 1& Yes&
$c^2_0\>\> c^0_0\>\> (c^4_2)^2$& 1& ${1\over 2}$&
$c^2_2\>\> c^4_2\>\> c^0_0\>\> c^4_0\>\>$ \cr
$c^2_0\>\> c^4_0\>\> (c^2_0)^2$& 1& ${2\over 3}$&
& 0& No&
$c^2_0\>\> c^4_0\>\> (c^2_2)^2$& 1& ${1\over6}$&
$c^2_2\>\> c^4_2\>\> (c^2_0)^2$ \cr
$(c^0_0)^2(c^4_0)^2$&&&
&&&
$(c^0_0)^2(c^4_2)^2$&&&
\cr
$(c^4_0)^2(c^0_0)^2$&&&
&&&
&&&
$(c^4_2)^2(c^0_0)^2$ \cr
$c^0_0\>\> c^4_0\>\> (c^2_0)^2$& 1& ${1\over 3}$&
$(c^2_2)^2(c^4_2)^2$& $-1$& No&
$c^0_0\>\> c^4_0\>\> (c^2_2)^2$&& ${5\over6}$&
\cr
&&&
$(c^4_2)^2(c^2_2)^2$&&&
$c^0_0\>\> c^2_0\>\> c^2_2\>\> c^4_2\>\>$&&&
$c^2_2\>\> c^4_2\>\> c^0_0\>\> c^2_0\>\>$ \cr
$(c^2_0)^2(c^2_0)^2$& 1&&
& 1& Yes&
$(c^2_0)^2(c^2_2)^2$&& ${1\over 2}$&
$(c^2_2)^2(c^2_0)^2$ \cr
$c^2_0\>\> c^4_0\>\> (c^0_0)^2$&&&
&&&
&&&
$c^2_2\>\> c^4_2\>\> (c^0_0)^2$ \cr
$c^2_0\>\> c^0_0\>\> (c^2_0)^2$&& ${2\over 3}$&
$c^2_2\>\>c^4_2\>\>(c^2_2)^2$& 0& No&
$c^2_0\>\> c^0_0\>\> (c^2_2)^2$&& ${1\over6}$&
\cr
$c^0_0\>\> c^4_0\>\> (c^0_0)^2$&&&
&&&
&&&
\cr
$(c^0_0)^2(c^2_0)^2$&& ${1\over 3}$&
& $-1$& No&
$(c^0_0)^2(c^2_2)^2$& $-$& ${1\over6}$&
\cr
$(c^2_0)^2(c^0_0)^2$&&&
&&&
&&&
$(c^2_2)^2(c^0_0)^2$ \cr
$c^2_0\>\> c^0_0\>\> (c^0_0)^2$& 0&&
$(c^2_2)^2(c^2_2)^2$& 1& Yes&
&&&
\cr
$(c^0_0)^2(c^0_0)^2$& $-$& ${1\over 3}$&
& 0& No&
&&&
\cr
}
\hfill\vfill\eject
\centertext{Table 4.5 Mass Sectors as Function of $Z\!\!\!Z_3$ Charge}
{\settabs 7\columns
\+\cr
\+ Lowest $M^2$ & $M^2$ mod 1 & Sector & $Z\!\!\!Z_3$ Charge & Sector
& $M^2$ mod 1 & Lowest $M^2$\cr
\+ \overline{\hbox to 5.9cm{\hfill}}&&
&\overline{\hbox to 1.8cm{\hfill}}
&\overline{\hbox to 6.7cm{\hfill}}\cr
\+\hskip .65cm \hbox to .35cm{\hfill} 0
&\hskip .8cm 0
&\hskip .4cm $A$
&\hskip .15cm $Q_3={\hbox to .3cm{\hfill}}1$
&\hskip .4cm $B$
&\hskip .8cm ${6\over 12}$
&\hskip .65cm ${6\over 12}$\cr
\+\cr
\+\hskip .65cm $-{1\over 12}$
&\hskip .8cm ${11\over 12}$
&\hskip .4cm $D$
&\hskip .15cm $Q_3= {\hbox to .3cm{\hfill}}0$
&\hskip .4cm $D$
&\hskip .8cm ${5\over 12}$
&\hskip .65cm ${5\over12}$\cr
\+\cr
\+\hskip .65cm $-{2\over 12}$
&\hskip .8cm ${10\over 12}$
&\hskip .4cm $B$
&\hskip .15cm $Q_3= -1$
&\hskip .4cm $A$
&\hskip .8cm ${4\over 12}$
&\hskip .65cm ${4\over12}$\cr
\+\cr
\+\hskip .65cm $-{3\over 12}$
&\hskip .8cm ${9\over 12}$
&\hskip .4cm $D$
&\hskip .15cm $Q_3= {\hbox to .3cm{\hfill}}1$
&\hskip .4cm $D$
&\hskip .8cm ${3\over 12}$
&\hskip .65cm ${3\over 12}$\cr
\+\cr
\+\hskip .65cm $-{4\over 12}$
&\hskip .8cm ${8\over 12}$
&\hskip .4cm $A$
&\hskip .15cm $Q_3={\hbox to .3cm{\hfill}}0$
&\hskip .4cm $B$
&\hskip .8cm ${2\over 12}$
&\hskip .65cm ${2\over 12}$\cr
\+\cr
\+\hskip .65cm\hbox to .35cm{\hfill} ${7\over 12}$
&\hskip .8cm ${7\over 12}$
&\hskip .4cm $D$
&\hskip .15cm $Q_3= -1$
&\hskip .4cm $D$
&\hskip .8cm ${1\over 12}$
&\hskip .65cm ${1\over 12}$\cr
\+ \cr}
\noindent (In Table 4.5, columns one and seven give the lowest ${\rm mass}^2$ of a
state with center
column $Z\!\!\!Z_3$ charge in the appropriate sector. For the $D$ sector states,
under $(\phi^0_{K/4})^{D-2}$ twistings, ${\rm mass}^2$ values in column two
transform into ${\rm mass}^2$ values in column six of the same row and
vice-versa.)
Unlike in the $K=2$ case, for $K=4$
the $Z\!\!\!Z_3$ projection in the Ramond sector wipes out complete spinor fields,
not
just some of the modes within a given spin field. This type of projection
does not occur in the Ramond sector for $K=2$ since there are no fermionic
states with fractional ${\rm mass}^2$ values in the $D=10$ model.
Note also that
our $Z\!\!\!Z_3$ GSO projections relate to the $Z\!\!\!Z_3$ symmetry
pointed out in
[\putref{zamol87}] and briefly commented on following
eqs.~(\puteqn{commut-a}-b).
For $K=8$, a more generalized $Z\!\!\!Z_5$ projection
holds true for all
sectors. For the $K=16$ theory, there are too few terms and products of
string functions to determine if a $Z\!\!\!Z_9$ projection
is operative. In the
$K=4$ case, the value of our LM (RM) $Q_3$ charges for states surviving the
projection is set by demanding that the massless spin-2 state
$\epsilon^{\mu}_{-{1\over3}}\bar\epsilon^{\bar\nu}_{-{1\over3}}\left\vert 0
\right\rangle$ survives. In the $A_K$--, $B_K$--,
(and $C_K$-- for $K=8,16$) sectors, these
projections result in states with squared masses of $0+$ integer,
${1\over 2} +$ integer, and ${3\over 4} +$ integer, respectively.
\vskip .5cm
{\hc{4.3.c}{\sl The Unique Role of the Twist Field, $\phi_{K/4}^{K/4}$.}}
\vskip .5cm
In this subsection
we examine whether other consistent models are possible if one
generalizes from the twist
field $\phi^{K/4}_{K/4}$ to another that could fulfill its role.
When it is demanded that the standard
twist and $\epsilon\equiv\phi^1_0$ fields of references
[\putref{dienes92b,argyres91b,argyres91d}] be used,
we can derive
the critical dimensions of possible models simply by observing that
$K=2,\, 4,\, 8,$ and $16$ are the only levels for which
\subequationnumstyle{alphabetic}
$$h(\phi^1_0)/h(\phi^{K/4}_{K/4})\in Z\!\!\!Z\,\, . \eqno\eqnlabel{stdim-a}$$
If we assume (as in [\putref{dienes92b}]) that the
operator $(\phi_{K/4}^{K/4})^{\mu} $ acting on the (tachyonic) vacuum produces
a massless spacetime
spinor vacuum along the direction $\mu$, and $(\phi^1_0)^{\mu}$ produces a
massless spin-1 state, then for spacetime supersymmetry
(specifically $N=2$ SUSY for fractional type II theories and $N=1$
for fractional heterotic)
$h(\phi^1_0)/h(\phi^{K/4}_{K/4})$ must equal the number
of transverse
spin modes, {\it i.e.,}
$$\eqalignno{ h(\phi^1_0) &= (D-2) h(\phi^{K/4}_{K/4})\cr
{2\over K+2}&= (D-2) {K/8\over K+2}\,\, .&\eqnlabel{stdim-b}\cr}$$
Hence,
$$ D= 2 + {16\over K}\in Z\!\!\!Z\,\, .\eqno\eqnlabel{stdim-c}$$
\subequationnumstyle{blank}
Thus, from this one assumption, the possible integer spacetime
dimensions are determined along with the associated levels.
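Explicitly, eq.~(\puteqn{stdim-c}) yields the level--dimension pairs
$(K,D)= (2,10)$, $(4,6)$, $(8,4)$, and $(16,3)$.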
Perhaps not coincidentally, the allowed dimensions are precisely the ones
in which classical supersymmetry is possible. This is clearly a
complementary method to the approach for determining $D$ followed
in refs.~[\putref{dienes92b,argyres91b,argyres91d}].
Demanding eq.~(\puteqn{stdim-a}) guarantees
\hbox{spin-1} and \hbox{spin-1/2} superpartners in the open string
(\hbox{spin-2} and \hbox{spin-3/2} in the closed string) with
$${\rm mass}^2= {\rm mass}^2({\rm vacuum}) + h(\phi^1_0) = {\rm mass}^2({\rm
vacuum}) +
(D-2)* h(\phi^{K/4}_{K/4})\, .\eqno\eqnlabel{mass1}$$
(Double the total mass$^2$ for the closed string.)
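As a check, at $K=4$ (where $D=6$) the vacuum has
${\rm mass}^2({\rm vacuum})= -{1\over 3}$ (the lowest $A$--sector value in
Table 4.5, consistent with the massless state
$\epsilon^{\mu}_{-{1\over3}}\vert 0\rangle$ above), while
$h(\phi^1_0)= {1\over 3}$ and $(D-2)\, h(\phi^1_1)= 4\times{1\over 12}=
{1\over 3}$, so both members of the open string pair are indeed massless.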
{\it A priori} simply demanding the ratio
be integer in eq.~(\puteqn{stdim-a}) is not
sufficient to guarantee local spacetime supersymmetry in the closed string.
However, in the previous
subsections it proved to be;
for the $K=4$ model the masslessness of the open string
\hbox{(spin-1, spin-1/2)} pair occurred automatically
and hence also in the closed string for the \hbox{(spin-2, spin-3/2)}
pair.
\centertext{\hbox to 1cm{\hfill}}
\centertext{Figure 4.1 Supersymmetry of Lowest Mass States of Fractional
Open String}
\vskip .5cm
{\centertext{\underline{\hbox to 4in{$m^2({\rm spin}-1)\hfill =\hfill
m^2({\rm spin}-1/2)$}}}}\\
{\centertext{\hbox to 1in{\hfill}}}\\
{\centertext{\hbox to 4in{$h(\phi^1_0)$ $\big\Uparrow$\hfill
$(D-2)\times h(\phi^{K/4}_{K/4})$ $\big\Uparrow$}}}\\
{\centertext{\hbox to 1 in{\hfill}}}\\
{\centertext{\underline{\hbox to 4 in{\hfill $m^2({\rm vacuum})$\hfill}}}}
\vskip 1cm
In fractional superstrings, the primary field
$\phi^{K/4}_{K/4}\equiv\phi^{K/4}_{-K/4}$ for $K=4,\, 8,\, {\rm and~} 16$,
and its associated character
\hbox{$Z^{K/4}_{K/4}=\eta c^{K/2}_{K/2}$}, are
viewed as the generalizations of
$\phi^{1/2}_ {1/2}$ at $K=2$ and $(\vartheta_2/\eta)^{1/2}$.
Are there any other parafermion operators at
additional levels $K$ that could be used to transform the bosonic vacuum
into a massless fermionic vacuum
and bring about local spacetime supersymmetric models? The
answer is that by demanding masslessness\footnote{Masslessness of
at least the left- (right)-moving spin-1 spacetime fields (whose
tensor product forms the massless spin-2 graviton in a closed string)
is of course required
for a consistent string theory. Consistent two-dimensional field
theories with
$$\eqalignno{
{\rm ~lowest~mass}&{\rm ~of~left-~(right-)moving~spacetime~spin-1~fields}
=\cr
& {\rm ~lowest~mass~of~left-~(right-)moving~spacetime~spin-1/2~fields}
\equiv M_{\rm min} > 0}$$
may exist (as we discuss below), but the physical interpretation of such
models is not clear (other than to say they would not be theories with
gravity).}
of the (spin-1, spin-1/2) pair,
there is clearly no other choice for $K<500$. (We believe this will
generalize to $K<\infty$.)
The proof is short. We do not start from the assumption that the
massless spin-1 fields are a result of the $\phi^1_0$ fields. Rather,
to the contrary, the necessity of choosing $\phi^1_0$
appears to be the result of the uniqueness of
$\phi^{K/4}_{K/4}$.
{\settabs 2\columns
\+ \cr}
Proof: Assume we have a consistent (modular invariant) closed fractional
superstring theory
at level-$K$ with supersymmetry in $D$ dimensional spacetime,
($N=2$ for type-II and $N=1$ for heterotic).
Let the massless left- (right-)moving spin-1 field be
$(\phi^{j_1}_{m_1})^{\mu} \vert {\rm vacuum}>$. This requires that
$\phi^{j_1}_{m_1}$ have conformal dimension
$$h(\phi^{j_1}_{m_1})= c_{\rm eff}/24= (D-2) {K\over 8(K+2)}\,\,
.\eqno\eqnlabel{spin1cd}$$
Thus, the twist field $\phi^{j_2}_{m_2}$ that produces the spinor vacuum along
one of the $D-2$
transverse dimensions must have conformal dimension
$$h(\phi^{j_2}_{m_2})= {K\over 8(K+2)}\,\, . \eqno\eqnlabel{spin1half}$$
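Using the parafermion conformal dimensions
$h(\phi^j_m)= {j(j+1)\over K+2} - {m^2\over K}$
(the form used again in eq.~(\puteqn{ratioj}) below), the standard twist
field indeed satisfies this condition:
$$h(\phi^{K/4}_{K/4})= {{K\over 4}\left({K\over 4}+1\right)\over K+2}
- {(K/4)^2\over K}= {K\over 8(K+2)}\,\, .$$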
For $K<500$ the only primary fields with this dimension are the series
of $\phi^{K/4}_{K/4}$ for $K\in 2Z\!\!\!Z$, and the accidental solutions
$\phi^2_0$ for $K=48$, $\phi^3_0$ for $K=96$,
and $\phi^{9/2}_{7/2}$ for $K=98$. Being fields with $m=0$, neither
$\phi^2_0$ nor $\phi^3_0$ at any level
can be used to generate spacetime fermions. The $\phi^{9/2}_{7/2}$
alternative is not acceptable either because at $K=98$ there is not an
additional field to replace $\epsilon$. In other words, there is not
a field to be paired with $\phi^{9/2}_{7/2}$ whose
conformal dimension is an integer multiple of $\phi^{9/2}_{7/2}$'s.
(A proof of the uniqueness of $\phi^{K/4}_{K/4}$ for all
$K$ is being prepared by the author.)
Confirmation of $\phi^{K/4}_{K/4}$ as the spin-1/2 operator, though, does not
immediately lead one to conclude that $\epsilon$ is the only possible
choice for producing massless boson fields. Table 4.6 shows alternative
fields at new levels $K\not= 2,\, 4,\, 8,$ or $16$ whose conformal
dimension is one, two, or four times the conformal dimension of
$\phi^{K/4}_{K/4}$.
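As a check of the first entry: at $K=12$,
$h(\phi^2_0)= {2\cdot 3\over 14}= {3\over 7}$ while
$h(\phi^{3}_{3})= {12\over 8\cdot 14}= {3\over 28}$, so that the ratio is
indeed $4$.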
(Note that successful alternatives to
$\epsilon$ would lead to a relationship between level and
spacetime dimension differing from eq.~(\puteqn{stdim-c}).) However, nearly
all alternatives are of the form $\phi^{j>1}_0$ and we would expect that
modular invariant models using
$\phi^{j>1}_0$ to create massless bosons, would necessarily include
(at least) the tachyonic state, $(\phi^1_0)^{\mu}\vert {\rm vacuum}\rangle$.
That is, we do not believe valid GSO projections
exist which can project away these tachyons while simultaneously
keeping the massless graviton and gravitino and giving modular invariance.
Further, the remaining fields on the list have $m\not= 0\pmod{K}$.
Each of these
would not have the correct fusion rules with itself, nor with
$\phi^{K/4}_{K/4}$ to be a spacetime boson.
\centertext{\hbox to 1cm{\hfill}}
\centertext{Table 4.6 Fields $\phi^{j_1}_{m_1}\neq \phi^1_0$ with Conformal
Dimensions in Integer Ratio with $h(\phi^{K/4}_{K/4})$}
{\settabs 5 \columns
\+\cr
\+ & {$K$} & {$\phi^j_m$} &
{$h(\phi^j_m)/h(\phi^{K/4}_{K/4})$}\cr
\+ & $\overline{\hbox to 9.3cm{\hfill}}$\cr
\+ & 12 & $\phi^2_0$ & \hskip 1.3cm 4\cr
\+ & 24 & $\phi^2_0$ & \hskip 1.3cm 2\cr
\+ & & $\phi^3_0$ & \hskip 1.3cm 4\cr
\+ & 36 & $\phi^7_6$ & \hskip 1.3cm 4\cr
\+ & 40 & $\phi^4_0$ & \hskip 1.3cm 4\cr
\+ & 48 & $\phi^2_0$ & \hskip 1.3cm 1\cr
\+ & & $\phi^3_0$ & \hskip 1.3cm 2\cr
\+ & 60 & $\phi^5_0$ & \hskip 1.3cm 4\cr
\+ & 80 & $\phi^4_0$ & \hskip 1.3cm 2\cr
\+ & 84 & $\phi^6_0$ & \hskip 1.3cm 4\cr
\+ & 96 & $\phi^3_0$ & \hskip 1.3cm 1\cr
\+ & 112 & $\phi^7_0$ & \hskip 1.3cm 4\cr
\+ & 120 & $\phi^5_0$ & \hskip 1.3cm 2\cr
\+ & $\vdots$ & $\vdots$ & \hskip 1.3cm $\vdots$\cr
\+\cr}
Lastly, we want to consider the possibility that there is meaning
to (non-stringy) two-dimensional field theories that
contain neither supergravity nor even gravity. Instead let a
model of this type contain only a global supersymmetry. The
lowest mass spin-1 ($(\phi^1_0)^{\mu}\vert {\rm vacuum}>$)
and spin-1/2 ($(\phi^{j_3}_{m_3})^{D-2}\vert {\rm vacuum}>$)
left- or right-moving fields would be related by
$${\rm mass}^2({\rm vacuum}) + h(\phi^1_0) = {\rm mass}^2({\rm vacuum}) +
(D-2)\times h(\phi^{j_3}_{m_3})\, .\eqno\eqnlabel{massive}$$
In PCFT's there
is only a very small number (12) of potential candidates for
$\phi^{j_3}_{m_3}$.
(Like $\phi^{K/4}_{K/4}$ these twelve are all of the form
$\phi^{j_3}_{\pm j_3}$.) We are
able to reduce the number of candidates down to this finite number very
quickly by proving no possible candidate could have $j_3>10$, independent of
the level. We demonstrate this as follows:
Any potential level-$K$ candidate $\phi^{j_3}_{m_3}$ must satisfy the
condition of
$${K\over K+2}\left[ j_3(j_3+1) -2\right] \leq (m_3)^2 \leq (j_3)^2 \leq K^2/4
\,\, .\eqno\eqnlabel{constraint}$$
By parafermion equivalences (\puteqn{cid-a}-b),
$\vert m\vert\leq j\leq K/2$ can be required for any level-$K$
fields.
The other half of the inequality,
${K\over K+2}\left[ j_3(j_3+1) -2 \right]\leq m^2$
results from the weak requirement that the
conformal dimension of the candidate (spacetime) spin-1/2 field,
$\phi^{j_3}_{m_3}$, creating the fermion ground state
along one spacetime direction cannot be greater than the conformal
dimension of $\epsilon$, {\it i.e.,} $h(\phi^{j_3}_{m_3})\leq h(\phi^1_0)$.
{}From eq.~(\puteqn{constraint}),
we can determine both the minimum and maximum values
of $K$ for a given $j_3$ (independent of $m_3$).
These limits are \hbox{$K_{\rm min}= 2j_3$} and
\hbox{$K_{\rm max}= {\rm ~int}\left( {2(j_3)^2\over j_3-2} \right)$.}
Thus, the number of different levels that can correspond to the field
$\phi^{j_3}_{m_3}$ is int$\left({{5j_3-2}\over {j_3-2}}\right)$.
This number quickly decreases to six as $j_3$ increases to ten and
equals five for $j_3$ greater than ten.
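For example, at $j_3=10$ these limits give $K_{\rm min}=20$ and
$K_{\rm max}= {\rm int}(200/8)= 25$, {\it i.e.,}
${\rm int}(48/8)= 6$ levels.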
For a given $j_3$, we will express the levels under consideration as
\hbox{$K_i= 2j_3 +i$}.
Also, since \hbox{$K_{\rm min}=2j_3$},
the weak constraint on $m_3$
implies that we need only consider $\phi^{j_3}_{m_3= \pm j_3}$ fields.
Thus, our search reduces to finding fields $\phi^{j_3}_{\pm j_3}$ whose
conformal dimensions satisfy
$${{h(\phi^1_0)\over h(\phi^{j_3}_{\pm j_3})} =
{{2\over K_i +2}\over {j_3(j_3+1)\over K_i+2} - {(j_3)^2 \over K_i}}\in Z\!\!\!Z}
\,\, .
\eqno\eqnlabel{ratioj}$$
Clearly, there are no solutions to
eq.~(\puteqn{ratioj}) for $i=0 {\rm ~~to~~} 4$ and $j_3>10$.
Hence, our range of possible alternative sources for fermionic
ground states reduces
to only considering
those $\phi^{j_3}_{\pm j_3}$ with $0<j_3\leq 10$. Within this
set of $j_3$'s, a computer
search reveals the complete set of fields that
obey eq.~(\puteqn{ratioj}), as shown in Table 4.7.
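The search itself is elementary. The following minimal sketch (in Python,
shown purely as an illustration and not the program actually used; it
assumes the parafermion dimension formula
$h(\phi^j_m)= j(j+1)/(K+2)- m^2/K$ and scans $K<500$ as in the text)
reproduces the entries of Table 4.7:

from fractions import Fraction

def h(j, m, K):
    # parafermion conformal dimension h(phi^j_m) = j(j+1)/(K+2) - m^2/K
    j, m = Fraction(j), Fraction(m)
    return j * (j + 1) / (K + 2) - m * m / K

solutions = []
for twice_j in range(1, 21):            # j3 = 1/2, 1, 3/2, ..., 10
    j3 = Fraction(twice_j, 2)
    for K in range(twice_j, 500):       # K_min = 2*j3; scan K < 500
        h_spin = h(j3, j3, K)           # candidate spin field phi^{j3}_{+-j3}
        if h_spin <= 0:
            continue
        ratio = h(1, 0, K) / h_spin     # h(phi^1_0)/h(phi^{j3}_{j3}) = D - 2
        if ratio >= 1 and ratio.denominator == 1:
            solutions.append((j3, K, int(ratio) + 2))

for j3, K, D in solutions:
    print(j3, K, D)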
\hfill\vfill\eject
\centertext{Table 4.7 Potential Alternatives, $\phi^{j_3}_{\pm m_3}$,
to $\phi^{K/4}_{K/4}$ for Spin Fields}
{\settabs 7 \columns
\+ \cr
\+ {$j_3$} & {$\pm m_3$} & {$K$}
& {$i$} & {$h(\phi^1_0)$} & {$h(\phi^{j_3}_{m_3})$}
& $D$\cr
\+ $\overline{\hbox to 14.1cm{\hfill}}$\cr
\+ 1/2 & 1/2 & 2 & 1 & 1/2 & 1/16 & 10 **\cr
\+ & & 3 & 2 & 2/5 & 1/15 & 8\cr
\+ & & 5 & 4 & 2/7 & 2/35 & 7 \cr
\+ \cr
\+ 1 & 1 & 3 & 1 & 2/5 & 1/15 & 8 \cr
\+ & & 4 & 2 & 1/3 & 1/12 & 6 **\cr
\+ & & 6 & 4 & 1/4 & 1/12 & 5 \cr
\+ \cr
\+ 3/2 & 3/2 & 9 & 6 & 2/11 & 1/11 & 4 \cr
\+ \cr
\+ 2 & 2 & 5 & 1 & 2/7 & 2/35 & 7\cr
\+ & & 6 & 2 & 1/4 & 1/12 & 5 \cr
\+ & & 8 & 4 & 1/5 & 1/10 & 4 **\cr
\+ \cr
\+ 5/2 & 5/2 & 25 & 20 & 2/27 & 2/27 & 3 \cr
\+ \cr
\+ 3 & 3 & 9 & 3 & 2/11 & 1/11 & 4 \cr
\+ & & 18 & 12 & 1/10 & 1/10 & 3 \cr
\+ \cr
\+ 4 & 4 & 16 & 8 & 1/9 & 1/9 & 3 **\cr
\+ \cr
\+ 6 & 6 & 18 & 6 & 1/10 & 1/10 & 3 \cr
\+ \cr
\+ 10 & 10 & 25 & 5 & 2/27 & 2/27 & 3 \cr
\+ \cr}
\hfill\vfill\eject
The sets of solutions for
$j_3= {1\over 2},\, 1,\, {\rm and~} 2$
are related. The existence of a set
\hbox{$\{i=1,\, 2,\, {\rm and~} 4\}$} of solutions for any
one of these $j_3$ implies identical sets \hbox{$\{ i\}$} for the remaining
two $j_3$ as well.
The known $\phi^{K/4}_{K/4}$ solutions (marked with a **)
correspond to the \hbox{$i=1,\, 2,\, {\rm and~} 4$} elements in the
\hbox{$j_3={1\over 2},\, 1,\, {\rm and~}2$} sets respectively. Whether this pattern
suggests anything about the additional related $\phi^{j_3}_{\pm j_3}$ in
these sets, other than explaining their appearance in the above table, remains
to be seen.
The set of distinct fields can be further reduced, since there is
a redundancy in the above list:
for all but the standard $\phi^{K/4}_{K/4}$ solutions,
there are two fields at each level, with different values of $j_3$.
However, these pairs are related by
the field equivalences (\puteqn{phidents}):
\subequationnumstyle{alphabetic}
$$\eqalignno{
\hbox to 1cm{$\phi^{1/2}_{\pm 1/2}$\hfill}
&\equiv\hbox to 1cm{$\phi^1_{\mp 1}$\hfill} {\rm~~at~level~~} K=3
&\eqnlabel{id-a}\cr
\hbox to 1cm{$\phi^{1/2}_{\pm 1/2}$\hfill}
&\equiv\hbox to 1cm{$\phi^2_{\mp 2}$\hfill} {\rm~~at~level~~} K=5
&\eqnlabel{id-b}\cr
\hbox to 1cm{$\phi^1_{\pm 1}$\hfill}
&\equiv\hbox to 1cm{$\phi^2_{\mp 2}$\hfill} {\rm~~at~level~~} K=6
&\eqnlabel{id-c}\cr
\hbox to 1cm{$\phi^{3/2}_{\pm 3/2}$\hfill}
&\equiv\hbox to 1cm{$\phi^3_{\mp 3}$\hfill} {\rm~~at~level~~} K=9
&\eqnlabel{id-d}\cr
\hbox to 1cm{$\phi^3_{\pm 3}$\hfill}
&\equiv\hbox to 1cm{$\phi^6_{\mp 6}$\hfill} {\rm~~at~level~~} K=18
&\eqnlabel{id-e}\cr
\hbox to 1cm{$\phi^{5/2}_{\pm 5/2}$\hfill}
&\equiv\hbox to 1cm{$\phi^{10}_{\mp 10}$\hfill} {\rm~~at~level~~} K= 25
\,\, . &\eqnlabel{id-f}\cr}$$
\subequationnumstyle{blank}
Because $\phi^j_m$ and $\phi^j_{-m}$ have identical partition functions
and $\phi^j_{-m}\equiv(\phi^j_m)^{\dagger}$
we can reduce the number of possible alternate fields by half, down to
six. (Note that we have not been distinguishing between $\pm$ on $m$ anyway.)
If we want models with {\it minimal} (global) super Yang-Mills Lagrangians
we can reduce the number of the fields to investigate further.
Such theories
exist classically only in $D_{\rm SUSY}=10,\, 6,\, 4,\, 3,\, ({\rm and~} 2)$
spacetime dimensions. Thus we need consider only
those $\phi^{j_3}_{\pm j_3}$ in the above list that have integer conformal
dimension
ratios of $D_{\rm SUSY}-2 = h(\phi^1_0)/h(\phi^{j_3}_{j_3})= 8,\, 4,\, 2,\,
{\rm and~} 1$.
This reduces the fields to consider to just the two
new possibilities, for $D=4$ and $3$, since there are no new
candidates for $D=10$ or $6$.\newpage
{\hb{4.4}{\bfs Concluding Discussion}}\vskip .5cm
\sectionnum=4
A consistent generalization of the superstring would be an
important
development. Our work has shown that the fractional
superstring has many intriguing features that merit further study. The
partition functions for these theories have simple
origins when derived systematically
through the factorization approach of Gepner and Qiu.
Furthermore, using this affine/theta-function factorization of the
parafermion partition functions, we have related the $A_K$--sector
containing the graviton and
gravitino with the massive sectors, $B_K$ and $C_K$. A bosonic/fermionic
interpretation
of the $B_K$--subsectors was given. Apparent ``self-cancellation'' of the
$C_K$--sector was shown, the meaning of which is under investigation.
A possible GSO projection was found, adding
hope that the partition functions have a natural physical
interpretation.
Nevertheless, fundamental questions
remain concerning the ghost system and current algebra, which prevent a
definite conclusion as to whether or not these are consistent theories.
Perhaps most important are arguments suggesting that fractional superstrings
in $D$ dimensions are not formed from tensor products of $D$ separate
$SU(2)_{K}/U(1)$ CFT's. Rather, a tensor product
CFT may be an illusion of the tensor product form of the partition function.
Instead of having a total conformal anomaly contribution of $c=12$ from
matter, the appearance in the six-dimensional ($K=4$) theory of
extra null states at the
critical value suggests that $c=10$.
This would require a non-tensor product representation of
the fractional superconformal algebra
(\puteqn{FSCalgebra-a}-\puteqn{FSCalgebra-c}).
However, even if the theories
are ultimately shown to be inconsistent, we believe that this
program will at least provide interesting identities and new insight into
the one case that we know is consistent, $K=2$. In other words, viewed in this
more general context, we may understand better what is special about
the usual superstring.
On the other hand, fractional superstrings may eventually prove to
be a legitimate class of solutions to a new string theory. This class would
then join the ranks of bosonic, type-II, and heterotic string theories.
Further, it is claimed
that MIPF's for heterotic fractional superstrings with left-movers at
level-$K_1$ and right-movers at level-$K_2$ are also possible.\mpr{dienes92b}
Let us call these heterotic$_{(K_1,K_2)}$ models. The simplest of this class,
the heterotic$_{(1,K)}$ model,
should have $SO(52-2D)= SO(48- 32/K)$ gauge symmetry.
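(As a check, $K=2$ gives $D=10$ and the familiar $SO(32)$ gauge symmetry of
the standard heterotic string.)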
The dominant view holds that the bosonic, type-II, and (standard)
heterotic theories
are uniquely defined by their underlying extended Virasoro
algebras, with many solutions (models) existing for each
theory.\footnote{The
number of uncompactified dimensions is regarded as a parameter in the space of
solutions.} An alternate view\mpr{schellekens89c} suggests that heterotic
and type-II strings are related to subregions in the parameter space of
bosonic strings.
Specifically, the claim is that for {\it any} heterotic (type-II)
string there exists a bosonic string that faithfully represents its
properties.
This means that the classification of heterotic and type-II
strings is contained within that of the bosonic strings,
$${\rm Bosonic~Strings} \supset {\rm Heterotic~Strings}
\supset {\rm Type~II~Strings}\,\, . \eq{subclasses}$$
Thus, theoretically,
once the conditions for modular invariance of bosonic strings
are known, determination of them for heterotic or type-II is transparent.
The basis for this mapping
is that the non-unitary supersymmetric ghost system can be
transformed into a unitary conformal field theory.
This transformation preserves both conformal and modular invariance.
The partition function of the new
unitary theory satisfies all the consistency conditions to serve
as a partition function for
a bosonic string compactified on a lattice.\mpr{schellekens89c}
Where might (heterotic) fractional superstrings, if proven consistent,
fit in this scheme? If their ghost system is finally understood,
perhaps the same mapping technique can be applied. If so, can
fractional superstrings be represented by subclasses of bosonic
strings? We suspect that the answer is ``yes,''
given that fractional (heterotic) strings
are found to be consistent.
Further, we would expect the fractional heterotic superstrings to
correspond specifically to a subset of heterotic (or type-II if $K_1=K_2$)
strings. This is suggested by the apparent spacetime SUSY of
heterotic$_{(K_1,K_2)}$ strings,
even though the local world sheet symmetry is ``fractional.''
\hfill\vfill\eject
\noindent {\bf Appendix A: Dynkin Diagrams for Lie Algebras and KM algebras }
\vskip 7truecm
\hfill\vfill
\noindent Figure A.1 Generalized Dynkin diagrams of the finite KM algebras
({\it i.e.}, of the compact simple Lie algebras)
\vskip 1.6truecm
Each simple root of a Lie algebra or a KM algebra is represented by a
circle. Two circles not directly connected by at least one line imply two
orthogonal roots. One line connecting circles signifies a $120^{\circ}$
angle between corresponding roots, two lines $135^{\circ}$, three lines
$150^{\circ}$, and four lines $180^{\circ}$.
For non-simply-laced algebras, arrows point toward the shorter of two
connected roots. The co-marks associated with the simple roots of a Lie
algebra appear inside the circles.
\eject
\hbox to 1cm{\hfill}
\hfill\vfill
\centertext{Figure A.2 Generalized Dynkin diagrams of the untwisted
affine KM algebras}
\eject
\hbox to 1cm{\hfill}
\hfill\vfill
\centertext{Figure A.3 Generalized Dynkin diagrams of the twisted
affine KM algebras}
\eject
\noindent{{\bf Appendix B:} Proof that completeness of the A-D-E classification of
modular\hfill\\}
\noindent{\phantom{\bf Appendix B:} invariant partition functions for $SU(2)_K$ is
unrelated to uniqueness\hfill\\}
\noindent{\phantom{\bf Appendix B:} of the vacuum\hfill}
\vskip .5cm
In this appendix we prove that relaxing the condition of uniqueness of
the vacuum does not allow new solutions to ${SU}(2)_K$ MIPFs. The
allowed solutions are still just the A-D-E classes. We prove this through a
review of Cappelli, Itzykson, and Zuber's (CIZ's) derivation of the A-D-E
classification.\mpr{capelli87} We treat the coefficients
$N_{l\bar l}$ in the partition function of eq.~(B.5) as the components of a
symmetric matrix, $\bmit N$, that operates between vectors of characters
$\vec\chi$:
$$\eqalign {
Z &= \sum_{l\bar l} N_{\bar l l}\bar\chi_{\bar
l}^{(K)}(\bar\tau)\chi_{l}^{(K)}(\tau)\cr
&\equiv \bigl\langle\vec\chi\vert{\bmit
N}\vert{\vec\chi}\bigr\rangle}\,\, . \eqno ({\rm B}.1)$$
In this notation, under ${\bmit V}\in \{{\bmit S},{\bmit T}\}$,
${\bmit V}^{ \dagger }{\bmit V}=1$,
the partition function
transforms as
$$ Z_{\bmit V}= \bigl\langle\vec\chi\vert{\bmit V^{ \dagger }}{\bmit N}{\bmit
V}\vert{\vec\chi}\bigr\rangle\,\, , \eqno ({\rm B}.2)$$
where $\bmit S$ and $\bmit T$ are the matrix representations of the standard
$S$ and $T$ modular transformations for a specific conformal field theory
(CFT).
Thus, a partition function $Z$ for a specific CFT is modular invariant
iff $\bmit N$ commutes with $\bmit S$ and $\bmit T$ in the given CFT.
CIZ proved that for a general $SU(2)_K$ algebra the commutant
of ${\bmit S}$ and ${\bmit T}$ is generated by a
set of linearly independent symmetric matrix operators, $\Omega_{\delta}$, labeled by
$\delta$, a divisor of $n\equiv K+2$. Thus, for $(B.1)$ to be a MIPF
$\bmit N$
must be formed from the basis set of $\Omega_{\delta}$.
In \pr{capelli87}, CIZ showed that the
additional requirements of (1) uniqueness of the vacuum ($N_{11}=1$) and
(2) absence of $\bar\chi_{\bar l}\chi_l$ with coefficients
$N_{\bar l l}<0$
constrain MIPFs for $SU(2)_K$ to the A-D-E classification.
This is apparent from the
limited possibilities
for $\sum_{\delta} c_{\delta}\Omega_{\delta}$ with $c_{\delta}\in Z\!\!\!Z$
that produce MIPF's satisfying both (1) and (2). (See Table B.1 below.)
\vskip .4cm
\noindent Table B.1 A--D--E Classification in Terms of $\Omega_{\delta}$ Basis Set
$$\vbox{\settabs 3 \columns
\+ \cr
\+ {\underbar {$n\equiv$ level $+2$}}\hskip 1.35cm
{\underbar {Basis Elements For MIPF's}}
&& {\underbar {A--D--E Classification}}\cr
\+ \hbox to 2.3cm{\hfill $n\geq
2$\hfill}\hskip 1.35cm\hbox to 4.95cm{\hfill $\Omega_n$\hfill}
&& \hbox to 3.9cm{\hfill $(A_{n-1})$\hfill}\cr
\+ \hbox to 2.3cm{\hfill $n {\rm ~even}
$\hfill}\hskip 1.35cm\hbox to 4.95cm{\hfill $\Omega_n + \Omega_2$\hfill}
&& \hbox to 3.9cm{\hfill $(D_{n/2 +1})$\hfill}\cr
\+ \hbox to 2.3cm{\hfill $n = 12$\hfill}\hskip 1.35cm\hbox to 4.95cm{\hfill $
\Omega_{n=12} + \Omega_3 + \Omega_2$\hfill}
&& \hbox to 3.9cm{\hfill $(E_6)$\hfill}\cr
\+ \hbox to 2.3cm{\hfill $n =
18$\hfill}\hskip 1.35cm\hbox to 4.95cm{\hfill $\Omega_{n=18}
+ \Omega_3 + \Omega_2$\hfill}
&& \hbox to 3.9cm{\hfill $(E_7)$\hfill}\cr
\+ \hbox to 2.3cm{\hfill $n = 30$\hfill}\hskip 1.35cm\hbox to 4.95cm{\hfill $
\Omega_{n=30} + \Omega_5 + \Omega_3 + \Omega_2$\hfill}
&& \hbox to 3.9cm{\hfill $(E_8)$\hfill}\cr}$$
In actuality, relaxing requirement (1) to
$N_{11}\geq 1$ does not enlarge this solution
set. Rather, the solutions in column two of Table B.1
are simply multiplied by an
overall constant, $N_{1 1}=c_n$. Our proof proceeds along the lines of
\pr{capelli87}:
\vskip .5cm
Let
$\alpha (\delta)= {\rm ~GCF}(\delta,\bar \delta \equiv n/\delta)$ and
$N=2(K+2)$.
Next we define $p\equiv \bar \delta/\alpha$ and
$p'\equiv \delta/\alpha$. Then we choose a pair of
integers $\rho$, $\sigma$ such that $\rho p - \sigma p' = 1$. From these
we form $\omega(\delta) = \rho p + \sigma p' \pmod{N/\alpha^2}$,
which leads to the following equations:\footnote{Note for future reference that
interchanging the roles of $\delta$ and $\bar \delta$ in these equations
amounts to replacing $\omega$ by $-\omega$.}
$$\eqalign{\omega^2 -1 &= 4\rho\sigma p p' = 0 \pmod{2 N/\alpha^2};\cr
\omega + 1 &= 2\rho p \pmod{N/\alpha^2};\cr
\omega - 1 &= 2\sigma p' \pmod{N/\alpha^2}\,\, .\cr}
\eqno ({\rm B}.3)$$
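As an illustration of these definitions, take $n=12$ and $\delta=3$: then
$\bar\delta= 4$, $\alpha= 1$, $p= 4$, $p'= 3$; the choice $\rho=\sigma= 1$
satisfies $\rho p -\sigma p'= 1$ and gives $\omega(3)= 7 \pmod{24}$, for
which indeed $\omega^2 - 1= 48\equiv 0 \pmod{2N}$.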
An $N\times N$-dimensional matrix $\Omega_{\delta}$
operates on an enlarged set of
characters $\chi_{\lambda}$, with $\lambda$ defined mod $N$.
The ``additional'' characters carry indices in the range
$-(K+2)<\lambda \pmod{N}<0$ and
are trivially related to the customary $SU(2)_K$ characters,
carrying positive
indices $\pmod{N}$, by
$\chi_{\lambda}= -\chi_{-\lambda}$ and $\chi_{\xi (K+2)}=0$
for $\xi \in Z\!\!\!Z$.\footnote{The
character $\chi_l$, for $0<l<K+1$, is associated here with
the primary field that is an $l$-dimensional representation
of $SU(2)$. This notation differs from that used
in the rest of the thesis.
Outside of this appendix,
the character corresponding to the primary field
in the $l$-dimensional representation is denoted by $\chi_{l-1}$.
In particular, while here the character for the singlet representation is
denoted by $\chi_1$, elsewhere it is denoted by $\chi_0$.}
This means the overall sign coefficient in the $q$-expansion of
$(\chi_{\lambda})$ is positive for $0<\lambda \pmod{N} <n $ and negative
for $n<\lambda \pmod{N} <N $.
The components of a matrix, $\Omega_{\delta}$, are defined to be:
$$({\Omega_{\delta}})_{\lambda,\lambda'}=\cases{
0,&if $\alpha\not\vert\lambda$ or $\alpha\not\vert\lambda'$;\cr
\sum_{\xi \pmod{\alpha}}\delta_{\lambda',\omega\lambda + \xi N/\alpha},
&otherwise.\cr}\eqno({\rm B}.4)$$
Thus, $\Omega_n$ is the $N\times N$-dimensional identity matrix: for
$\delta=n$ one has $\bar\delta=1$, $\alpha=1$, $p=1$, and $p'=n$, so that
$\rho=1$, $\sigma=0$ solves $\rho p - \sigma p'=1$ and gives $\omega(n)=1$,
whence $({\Omega_n})_{\lambda,\lambda'}= \delta_{\lambda',\lambda}$.
A general MIPF for $SU(2)$ at level $K$ can be written as
$$ Z(\tau,\, \bar\tau) = {1\over 2}\sum_{\lambda,\lambda'
{\pmod{N}}}\bar\chi_{\lambda}
(\bar \tau)(\sum_{\delta} c_{\delta}\Omega_{\delta})_{\lambda \lambda'} \chi_{\lambda'}
(\tau)~~.\eqno({\rm B}.5)$$
We divide the integers $\lambda\neq 0\pmod{N}$ into two disjoint sets $U$
and $L$ with $\lambda \in U$ if $1\leq\lambda\leq n-1$ and $\lambda\in L$
if $n+1\leq\lambda\leq 2n-1$. Therefore, $L\equiv -U\pmod{N}$
and we choose U as the
fundamental domain over which $\lambda$ is varied for $\chi_{\lambda}$.
The matrices $\Omega_{\delta}$ have the following properties between their elements:
$$\eqalignno{(\Omega_{\delta})_{\lambda,\lambda'} &= (\Omega_{\delta})_{-\lambda,-\lambda'} &(B.6)\cr
(\Omega_{\delta}\chi)_{\lambda} &= (\Omega_{n/\delta}\chi)_{-\lambda}\cr
&= -(\Omega_{n/\delta}\chi)_{\lambda}\,\, .&(B.7)\cr}$$
We use these relationships
to reexpress the partition function $(B.5)$ as
$$\eqalignno{Z &= \sum_{\lambda\in U, \lambda' \pmod{N}}
\bar\chi_{\lambda}(\bar \tau)\sum_{\delta\mid n}(c_{\delta}\Omega_{\delta})_{\lambda,
\lambda'}
\chi_{\lambda'}(\tau)&(B.8)\cr
&= \sum_{\lambda,\lambda'\in U}\bar\chi_{\lambda}(\bar \tau)
\bigl\{\sum_{\delta\mid
n}c_{\delta}[(\Omega_{\delta})_{\lambda,\lambda'}-(\Omega_{\delta})_{\lambda,-\lambda'}]\bigr\}\chi_{\lambda'}(\tau)&(B.9)\cr
&= {1\over 4}\sum_{\lambda,\lambda'\in U,L}
\bar\chi_{\lambda}(\bar\tau)\bigl\{\sum_{(\delta,\bar \delta)
{\rm ~pairs},\,\delta\mid n} (c_{\delta} - c_{\bar
\delta})[(\Omega_{\delta})_{\lambda,\lambda'}
- (\Omega_{\bar
\delta})_{\lambda,\lambda'}]\bigr\}\chi_{\lambda'}(\tau)&(B.10)\cr}$$
with $c_{\delta}\geq 0$ and $c_{\delta}> 0$ implying $c_{\bar
\delta=n/\delta}=0$ by
convention. Two properties of these partition functions become apparent:
(i) either $\Omega_n$ or $\Omega_1$ contribute to the coefficient of
the vacuum state $\bar\chi_1 \chi_1$ but not both,
and (ii) the $\Omega_{\delta}$ corresponding to
$\delta^2=n$ makes no net contribution
to the partition function.\footnote{CIZ
shows (ii) from a different approach.}
The coefficient of $\bar\chi_1$ is (choosing $c_n\geq 1$
and, therefore, $c_1=0$)
$$c_n\chi_1 + \sum_{\delta \not= n,1;~\alpha(\delta)=1}c_{\delta}
\chi_{\omega(\delta)}\eqno ({\rm B}.11)$$
(since $(\Omega_{\delta})_{\lambda,\lambda'}=0 {\rm ~~unless~~} \alpha\vert
\lambda^{(')}$). For $1<\delta<n$, $\omega(\delta)\in (Z\!\!\!Z/NZ\!\!\!Z)^*$
are all distinct from
$\pm 1$.\footnote{Here
({\tenBbb Z}$/N${\tenBbb Z})$^*\equiv$ integers (mod $N$) that are prime to
$N$. We also define
\hbox {$ U^* \equiv $({\tenBbb Z}$/N${\tenBbb Z})$^*\cap U$}
and\break
\hbox {$ L^* \equiv $({\tenBbb Z}$/N${\tenBbb Z})$^*\cap L$.}}
Additionally, $\omega(\delta)= -\omega(\delta')$
implies $\delta' = n/\delta$ for $\alpha(\delta')=\alpha(\delta)$.
This forces $\omega(\delta)$ to
belong to $U$ for $c_{\delta}>0$ and
$\alpha(\delta)=1$. Otherwise, if $\omega(\delta)\in L$, then
$c_{\delta}\chi_{\omega(\delta)}= -c_{\delta}\chi_{-\omega(\delta)}$ would
contribute negatively to the coefficient
of $\bar\chi_1$. This would require a positive
$c_{\delta'}\chi_{\omega'(\delta')}$ contribution from a $\delta'$,
such that $\alpha(\delta')=1$
and $\omega(\delta')= -\omega(\delta)$. But this implies that
$\delta'=n/\delta$. Hence,
$c_{\delta}$ and $c_{n/\delta}$ must both be positive definite, which
is excluded.
\hfill\break
\noindent {\it Lemma 1}:
\rm (i) $\alpha_{\rm min}= 1$ or $2$, (ii) if
$\alpha_{\rm min}=2$, then the unique partition function (other than the
diagonal A-type) is
$$ \Omega_n + \Omega_2 {\rm ~with~} n= 0 \pmod{4}\,\, .\eqno ({\rm B}.12)$$
\vskip .5cm
\hskip .6cm Proof:
$\alpha_{\rm min}$ is defined as the lowest $\alpha(\delta)$ of those
$\delta \not= n,1$ with $c_{\delta}>0$. The coefficient of
$\bar\chi_{\lambda=\alpha_{\rm min}}$ is
$$ c_n\chi_{\lambda'=\alpha_{\rm min}} + \sum_{\delta\neq
1;\alpha(\delta)=\alpha_{\rm min}} \sum_{\xi= 0}^{\alpha_{\rm
min}-1}c_{\delta}\chi_{\lambda'=\omega(\delta)
\alpha_{\rm min} + \xi N/\alpha(\delta)}.\eqno ({\rm B}.13)$$
For $\alpha(\delta)>1$ the
$\lambda' = \omega(\delta)\lambda + \xi N/\alpha(\delta)$ of $\Omega_{\delta}$ in
eq.~(B.8) correspond to vertices of an $\alpha$-sided
polygon. (Eq.~(B.8) is the form of the
partition function that we use from here on.)
These $\lambda'$ can be viewed as points on a circle
of radius $N/2\pi$, where one half of the circle corresponds to the region $U$
and the opposite half to the region $L$. Any $\lambda'$ in $L$ must be
compensated by a vertex point
$-\lambda' \pmod{N}$ in $U$ either (1) from the same polygon, (2) a
different polygon, or (3) from the point corresponding to
$\chi_{\lambda'=\lambda}$ from $\Omega_n$.\newpage
\centertext{Figure B.1 The integers (mod $N$) mapped to a circle of radius
$N/2\pi$}
\vskip 9cm
Independent of the value of $c_n\geq 1$,
only the negative contribution to the coefficient of
$\bar\chi_{\lambda}$
from at most one
point, $\lambda'\in L$, on the $\alpha$-gon of a generic $\Omega_{\delta}$
can be compensated
by the contribution from the single point $-\lambda'>0 \pmod{N}$
from $\Omega_n$.
If a $\lambda'<0 \pmod{N}$ term from $\Omega_{\delta}$ must be cancelled by a
$\lambda'>0 \pmod{N}$ term from
$\Omega_n$, then $c_n\geq c_{\delta}$.
Any $\Omega_{\delta}$ with $\alpha\geq 4$ must have at least one point of its related
$\alpha$-gon in $L$. The case of only one point in $L$ corresponds to
$\alpha(\delta) = 4$ and a set of indices
$$ \lambda' = \omega\alpha + \xi N/\alpha\in \bigl\{0,{n\over 2},n,{3n\over 2}\bigr\}
\pmod{N}\,\, .\eqno ({\rm B}.14)$$
This set of indices
only exists at $n=\alpha^2$, {\it i.e.,} $n=\delta^2$.
Therefore, the corresponding $\Omega_{\delta}=\Omega_{\sqrt{n}}$ cannot
contribute to the partition function. Consider then partition functions
with $\alpha_{\rm min}\geq 4$.
Based on the above, the coefficient
of $\bar\chi_{\alpha_{\rm min}}$ in any of
these models will contain at least two different negative $\chi_{\lambda'\in
L}$ terms. Hence at least one of these terms must be
cancelled by methods (1) or (2). First consider method (2).
Assume $\Omega_{\delta'}$ can compensate a negative term from $\Omega_{\delta}$.
This requires that
$$\omega'(\delta')\alpha_{\rm min}\equiv -\omega(\delta)\alpha_{\rm min}
\pmod{N/\alpha_{\rm min}}.\eqno ({\rm B}.15)$$
Equivalently,
$\omega'(\delta')\equiv -\omega(\delta) \pmod{N/\alpha^2_{\rm min}}$,
which again implies $\delta\delta'=n$, in contradiction to
$c_{n/\delta}=0$
if
$c_{\delta}>0$. Hence method (2) cannot be used. Similarly,
cancelling the negative terms from $\Omega_{\delta}$ with positive ones from the same
$\alpha$-gon of $\Omega_{\delta}$ implies that $2\omega\equiv 0 \pmod{N/\alpha_{\rm min}}$,
which again is only possible for $n=\alpha_{\rm min}^2 = \alpha^2(\delta)$.
Thus,
neither method (1) nor (2) can be used to cancel negative terms
in partition functions, and
MIPFs with $\alpha_{\rm min}\geq 4$ cannot exist for any value of $c_n$.
We now consider potential MIPFs with $\alpha_{\rm min}=3$. In this case
it is possible for just one point, $\lambda'$, from a $\Omega_{\delta}$ to be in $L$.
This can be cancelled by a $\chi_3$ term
from $\Omega_n$. However, then $\omega\equiv -1 \pmod{N/9}$ and the terms
from $\Omega_{\delta}$ would be $\chi_{-3} + \chi_{-3+N/3} + \chi_{-3-N/3}$, where
$\pm N/3-3\in U\cup \{0,n\}$. This implies $n\leq 9$ while
$9\vert n$. Hence, we have another case of $n=\alpha_{\rm min}^2$.
Therefore, MIPFs cannot have $\alpha_{\rm min}>2$, independent of the
coefficient
$c_n$. The $2-$gon (line) resulting from a $\Omega_{\delta}$ with $\alpha=2$ has as
its vertices the two points $2\omega$ and $2\omega + n$. The coefficient of
$\bar\chi_2$ for a model with $\alpha_{\rm min}=2$ is
$$c_n\chi_2 + \sum_{\alpha(\delta)=2} c_{\delta}(\chi_{\lambda'=2\omega}+
\chi_{\lambda'=2\omega+n}).\eqno ({\rm B}.16)$$
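(For example, at $n=12$ with $\delta=2$: $\alpha=2$ and
$\omega= 5 \pmod 6$, so for $\lambda=2$ the two vertices are
$\lambda'= 10$ and $\lambda'= 22\equiv -2 \pmod{24}$; the negative
$\chi_{-2}= -\chi_2$ term is compensated by $c_n\chi_2$ from $\Omega_n$.)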
Three sets of values are possible for
$\lambda'=2\omega, 2\omega+n$. One corresponds to the excluded case
$n=\alpha^2_{\rm min}$, the second to another excluded by contradiction
(requiring both $2\omega=0 \pmod{n}$ and $\omega^2 = 1 \pmod{n}$).
The remaining case corresponds to a unique $\delta$ with $\alpha(\delta)=2$.
Choosing $\omega \pmod{N/4}$ such that $2\omega=-2 \pmod{N}$ and
$2\omega+n\in U$,
we find the negative term, $\chi_{2\omega}$, is compensated by
$\chi_2$ from $\Omega_n$. This requires $n\equiv 4m\in 4Z\!\!\!Z$, and
$\omega(\delta)\equiv
-1$ mod $2m$, which is only satisfied by $\delta=2$. Since there is
one-to-one cancellation between $\chi_2$ and
$\chi_{2\omega}$, $c_n\geq c_2$ is also mandatory. If $c_n=c_2$ and all other
$c_{\delta}=0$ for even $n$, then the resulting partition function is
simply an integer multiple of the D-type solution. Further,
$c_n>c_2$, with all other $c_{\delta}=0$, can be expressed as
$(c_n - c_2)Z(A) + (c_2)Z(D)$.
Thus, for $\alpha_{\rm min}=2$, the freedom to have
$c_n\geq 1$ for even $n$ only increases the solution set of MIPFs beyond the
A-D-E
class if and only if (iff)
additional $\Omega_{\delta}$'s (not in E class for $n=12,18,30$) with $\alpha(\delta)\geq
3$ can be included
with $c_n(\Omega_n +\Omega_2)$. We now show that it is not possible to include
such terms and keep the partition function positive.
For $\lambda\in U$, $(\Omega_n + \Omega_2)\chi_{\lambda}$ equals
$\chi_{\lambda}$ if $\lambda$ is odd and $\chi_{n-\lambda}$ if $\lambda$
is even. That no other $\Omega_{\delta}$'s are allowed in MIPFs
with $\alpha_{\rm min}=2$ is evident
by repetition of prior arguments after replacing
$\chi_{\lambda}$ with $\chi_{n-\lambda}$ if $\lambda$ is even.
The only remaining candidates for non-zero $c_{\delta}$ are
those $\Omega_{\delta}$ corresponding to $\alpha$-gons with exactly 2 vertices in $L$. This
limits consideration to $\Omega_{\delta}$ with $\alpha(\delta)= 3,4,5,6$. The negative
contributions from any such $\Omega_{\delta}$ must be cancelled by positive contributions from
$\Omega_n$ and $(\Omega_n + \Omega_2)$.
First we consider the coefficients of
$\bar\chi_3$ ($\bar\chi_5$) coming
from a $\Omega_{\delta}$ with $\alpha(\delta)= 3\,(5)$,
respectively. In these cases $(\Omega_n + \Omega_2)$ acts identically to
$\Omega_n$ since $\lambda$ is odd. The prior arguments that eliminated
partition functions with $\alpha_{\rm min}= 3\, ,\, 5$ likewise exclude any
$\Omega_{\delta}$ with $\alpha(\delta)=3\, ,\, 5$. The two $\delta'\in L$ for
a $\Omega_{\delta}$ with $\alpha(\delta)=4$ that contribute
negative coefficients $\chi_{\delta'\in L}$ to $\bar\chi_4$ can be
cancelled by a combination
of $\Omega_n$ and $(\Omega_n + \Omega_2)$ iff
$\delta'= N-4\, ,~ n+4$. However, this only occurs for $N=32$, {\it i.e.,}
$n=\alpha^2$. By similar argument the last possibility,
which is including a $\Omega_{\delta}$ with $\alpha(\delta)=6$,
requires $N=36$, which by factorization of $n$ only allows $\alpha=1\, ,~
3$. Thus no additional $\Omega_{\delta\neq n,2}$ are allowed for
$\alpha_{\rm min}=2$ even when $c_n>1$.
So {\it lemma} 1 of
CIZ is independent of the restriction of $c_n = 1$.
\hfill\break
\noindent {\it Lemma 2}:
If $n$ is odd, then the unique possibility is $\Omega_n$.
\vskip .5cm
\hskip .6cm Proof:
We show this by contradiction. Assume, contrary to {\it lemma} 2, that
MIPFs can be formed from additional combinations of $\Omega_{\delta}$'s.
Recall that
{\it lemma 1} requires that $\alpha_{\rm min}=1$ for odd $n$ and
consider specifically
the coefficient of $\bar\chi_{2^{\gamma}}$ for $2^{\gamma}<n$.
Since an odd $n$ limits the $\Omega_{\delta}$'s that contribute terms to the coefficient
of $\bar\chi_{2^{\gamma}}$,
to those with $\alpha(\delta)=1$, this coefficient is:
$$ c_n\chi_{2^{\gamma}} + \sum_{\alpha(\delta)=1}
c_{\delta}\chi_{\omega(\delta)
2^{\gamma}}~~.\eqno ({\rm B}.17)$$
By prior argument all of these $\omega(\delta)\in U$.
That is, $(0<\omega<n)$.
Consider the case of $2^{\gamma}\omega\in L$. The resulting
negative contribution to the coefficient would require
an additional $\omega'(\delta')$ from some $\Omega_{\delta'}$
(including the possibility $\omega'=1$) such that
$2^{\gamma}(\omega + \omega')\equiv 0 \pmod{N}$.
For $\gamma =1$, $2(\omega +\omega')\equiv 0 \pmod{N}$. However,
$\omega + \omega'<N$ implies
$2(\omega +\omega')<2N$. This leads to
$2(\omega + \omega')= N$, {{\it i.e.}} $\omega + \omega'=n$;
but $\omega$ and $\omega'$ are both odd by
$(\omega^{(')})^2\equiv 1 \pmod{2N}$ (from $\omega^2 - 1\equiv 0
\pmod{2N/\alpha^2}$). Thus $\omega +\omega'$ is even
while $n$ is by assumption odd.
To resolve this potential contradiction requires
$2\omega \in U$ and
$\omega < n/2$. This argument can be iterated to prove that any $\gamma$,
with the property that
$2^{\gamma} \in U$, requires $2^{\gamma}\omega\in U$ with
$0<\omega<n/2^{\gamma}$.
Now consider $\gamma_{\rm max}$ defined by
$2^{\gamma_{\rm max}}<n<2^{\gamma_{\rm max}+1}$.
Based on above arguments, we require that
$0<\omega<n/2^{\gamma_{\rm max}}$. From the defining equation of
$\omega(\delta)$ we can show that
$\omega(\delta)\bar\delta\equiv \bar\delta \pmod{N}$
for $\bar\delta=n/\delta$. Since $n$ has been chosen odd, $\delta>2$.
So $\bar\delta<2^{\gamma_{\rm max}}{1\over{\delta/2}}<2^{\gamma_{\rm max}}$
while
$\omega\bar\delta<{n\over{2^{\gamma_{\rm max}}}}2^{\gamma_{\rm max}}=n$.
Thus $\omega\bar\delta\equiv\bar\delta \pmod{N}$ implies $\omega(\delta)=1$,
in contradiction to the assumption that $1<\delta<n$ (for which
$\vert\omega(\delta)\vert >1$). The only solution is that
$c_{\delta\neq n}=0$ in eq.~(B.17).
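(As a concrete illustration, take $n=15$, for which $\gamma_{\rm max}=3$:
the divisor $\delta=3$ has $\bar\delta= 5< 2^{\gamma_{\rm max}}$, and
$\omega\bar\delta\equiv \bar\delta \pmod{30}$ together with
$0<\omega< 15/8$ forces $\omega(3)= 1$, the advertised contradiction.)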
Thus, {\it lemma} 2 is also true independent of whether
$c_n=1$.
Since {\it lemmas} 1 and 2 are still valid for $c_n>1$, the only remaining
possibility for new types of MIPFs corresponds to $n$ even and
$\alpha_{\rm min}=1$ (the latter implying $\omega^2\equiv 1 \pmod{2N}$
and $\omega\in U$).
Henceforth
we assume these values for $n$ and $\alpha_{\rm min}$ and
consider the last {\it lemma} of CIZ.
\hfill\break
\noindent{\it Lemma 3}:
For $n$ even, $n\not=12,30$,
$\omega\in U^*$, $\omega^2=1 \pmod{2N}$, $\omega\not=1$,
and $\omega\not= n-1$ if $n=2 \pmod{4}$, there exists
$\lambda\in U^*$ such that $\omega\lambda\in L^*$.
\vskip .5cm
The $\{\delta,\bar\delta\}$ pairs excluded from the claims of {\it lemma 3}
by the conditions imposed within it are $\{n,1\}$ for any even $n$ since
$w(n)=1$, $w(1)= -1\not\in U^*$;
$\{2,n/2\}$ for $n=2 \pmod{4}$ where $\alpha=1$ and $\omega(2)= n-1$;
$\{3,4\}$ for $n=12$; $\{2,15\}$, $\{3,10\}$, $\{5,6\}$ for $n=30$.
As {\it lemma 3} in no way involves the coefficients $c_{\delta}$,
generalizing $c_n=1$ to
{\hbox{$c_n\geq 1$ clearly does not invalidate it.}}
So we assume {\it lemma} 3 and show that the conclusions based upon it
do not alter if $c_n>1$.
Consider the coefficient of $\bar\chi_{\lambda\in U^*}$ (for $n$ even and
$\alpha_{\rm min}=1$).
Only $\Omega_{\delta}$ with
$\alpha(\delta)=1$ can contribute and
the $\omega(\delta)$ corresponding to $c_{\delta}\not= 0$ must
have the property that
$\omega\lambda\in U$, for all $\lambda\in U$.
This can be shown by contradiction: Assume instead
that $\omega\lambda\in L^*$. Cancellation of this term requires another
$\omega'(\delta')$ such that $\omega'\lambda\in U^*$ and
$(\omega' + \omega)\lambda \equiv 0 \pmod{N}$, which requires that
$\omega' + \omega \equiv 0 \pmod{N}$, since $\lambda$ is invertible
$\pmod{N}$. However,
$\omega'\equiv -\omega \pmod{N}$ implies $\delta'=n/\delta$ in
contradiction to $c_{\delta}>0$ implying $c_{n/\delta}=0$.
Now apply {\it lemma} 3:
For $n\equiv 0 \pmod{4}$ and $n\not= 12$, all $\Omega_{\delta}$
with $\delta\not= n$ and $\alpha(\delta)=1$ are
excluded by this {\it lemma}, since no $\omega$ exists that
always gives $\omega\lambda\in U$ for
all $\lambda$.
For $n\equiv 2 \pmod{4}$, if $n\not= 30$, the only possible allowed $\Omega_{\delta}$
with $\alpha(\delta)=1$ is $\Omega_2$ ($\omega=n-1$). From prior arguments
$c_2$ can be no greater than $c_n$. The matrix, $\bmit N$,
for MIPFs with even $n$,
$\alpha_{\rm min}=1$ and $c_n$, $c_2\not=0$
can be expressed as:
$$ {\bmit N} = c^A_n\Omega_n + c^D_n(\Omega_n + \Omega_2)
+ [c^E_n(\Omega_n +\Omega_2) + \sum_{{\rm odd~}\alpha(\delta)\geq
3}c_{\delta\not= n,2}\Omega_{\delta}]\,\, .\eqno ({\rm B}.18)$$
The first term on the RHS is just a multiple of the (diagonal) A-type
and the second term a multiple of the D-type.
In $(B.18)$, all $c_{\delta\not= n,2}$ for
$n\not= 12, 18, 30$ must vanish. The arguments demanding this parallel those
for {\it lemma 1}. Let $\hat\alpha_{\rm min}$ be the smallest of odd
$\alpha(\delta)\geq 3$ associated
with $c_{\delta}>0$ and consider the coefficients of
$\bar\chi_{\hat\alpha_{\rm min}}$.\footnote{We consider
only odd $\alpha$ since $4{\not\vert}(n\equiv 2 \pmod{4})$.}
$(c^D_n + c^E_n)(\Omega_n+\Omega_2)_{\lambda,\lambda'}\chi_{\lambda'}$ contributes
the coefficients
$ (c^D_n + c^E_n)(\chi_{\hat\alpha_{\rm min}} + \chi_{n-\hat\alpha_{\rm min}})$
while $c^A_n\Omega_n$ contributes an additional
$ c^A_n\chi_{\hat\alpha_{\rm min}}$.
Any $\lambda'\in L$ from an $\Omega_{\delta}$ with $c_{\delta}>0$ and
$\alpha(\delta)=\hat\alpha_{\rm min}$ can only be compensated by the positive
$\lambda'= \hat\alpha_{\rm min}, n-\hat\alpha_{\rm min}$ coming from
$\Omega_{\delta=n}$ or $\Omega_{\delta=2}$.
Odd $\alpha$-gons with just one or two
vertices in $L$ are limited to $\alpha= 3, 5$. In the event
of two vertices in $L$, $\hat\alpha_{\rm min}=3$ requires
${2n\over 3}= (n+\hat\alpha_{\rm min}) - {n\over 2}$,
{\it i.e.,} $n=18$.
Thus $\delta=3$ and $c_{\delta=3}\leq c^E$. $c_3=c^E$ for $n=18$ forms the
standard $E_7$ invariant.\footnote{We let $c_3= c^E$ since
$c^E - c_3$ can be redefined as a contribution to $c^D$.}
By the same logic, $\hat\alpha_{\rm min}= 5$ requires
${4n\over 5}= (n+\hat\alpha_{\rm min}) - {n\over 2}$,
{\it i.e.,} $n= {50\over 3}$,
which is not allowed since $n\in Z\!\!\!Z$.
The last possibility, a 3-gon having only one vertex in $L$ that is
compensated
by either $\lambda'= 3$ from $\Omega_n$ or $\lambda'= n-3$ from $\Omega_2$,
was shown not to be possible in the discussion of {\it lemma} 1.
The only remaining cases not covered by any of the {\it lemmas} are
$n= 12, 30$ with $\alpha_{\rm min}=1$. It is straightforward to show that
in these cases too,
$c_n>1$ does not lead to new MIPFs, only to multiples of A, D, or E classes.
Therefore, we conclude that
relaxing the condition of uniqueness of the vacuum does not enlarge
the solution space of MIPFs beyond the A-D-E classification of Cappelli,
Itzykson, and Zuber.
Whether this rule can be applied to
MIPFs of other Ka\v c-Moody algebras, we do not know.
\hfill\vfill\eject
\chapternumstyle{blank}
\noindent {\bf\chapter{References:}}
\begin{putreferences}
\reference{abbott84}{L.F.~Abbott and M.B.~Wise, {\it Nucl.~Phys.~}{\bf B244}
(1984) 541.}
\reference{alvarez86}{L.~Alvarez-Gaum\' e, G.~Moore, and C.~Vafa,
{\it Comm.~Math.~Phys.~}{\bf 106} (1986) 1.}
\reference{antoniadis86} {I.~Antoniadis and C.~Bachas, {\it Nucl.~Phys.~}{\bf B278}
(1986) 343;\\
M.~Hama, M.~Sawamura, and H.~Suzuki, RUP-92-1.}
\reference{li88} {K.~Li and N.~Warner, {\it Phys.~Lett.~}{\bf B211} (1988)
101;\\
A.~Bilal, {\it Phys.~Lett.~}{\bf B226} (1989) 272;\\
G.~Delius, preprint ITP-SB-89-12.}
\reference{antoniadis87}{I.~Antoniadis, C.~Bachas, and C.~Kounnas,
{\it Nucl.~Phys.~}{\bf B289} (1987) 87.}
\reference{antoniadis87b}{I.~Antoniadis, J.~Ellis, J.~Hagelin, and D.V.~Nanopoulos,
{\it Phys.~Lett.~}{\bf B149} (1987) 231.}
\reference{antoniadis88}{I.~Antoniadis and C.~Bachas, {\it Nucl.~Phys.~}{\bf B298}
(1988) 586.}
\reference{ardalan74}{F.~Ardalan and F.~Mansouri, {\it Phys.~Rev.~}{\bf D9} (1974)
3341; {\it Phys.~Rev.~Lett.~}{\bf 56} (1986) 2456;
{\it Phys.~Lett.~}{\bf B176} (1986) 99.}
\reference{argyres91a}{P.~Argyres, A.~LeClair, and S.-H.~Tye,
{\it Phys.~Lett.~}{\bf B235} (1991).}
\reference{argyres91b}{P.~Argyres and S.~-H.~Tye, {\it Phys.~Rev.~Lett.~}{\bf 67}
(1991) 3339.}
\reference{argyres91c}{P.~Argyres, J.~Grochocinski, and S.-H.~Tye, preprint
CLNS 91/1126.}
\reference{argyres91d}{P.~Argyres, K.~Dienes and S.-H.~Tye, preprints CLNS 91/1113;
McGill-91-37.}
\reference{argyres91e} {P.~Argyres, E.~Lyman, and S.-H.~Tye
preprint CLNS 91/1121.}
\reference{argyres91f}{P.~Argyres, J.~Grochocinski, and S.-H.~Tye,
{\it Nucl.~Phys.~}{\bf B367} (1991) 217.}
\reference{dienes92a}{K.~Dienes, Private communications.}
\reference{dienes92b}{K.~Dienes and S.~-H.~Tye, {\it Nucl.~Phys.~}{\bf B376} (1992)
297.}
\reference{athanasiu88}{G.~Athanasiu and J.~Atick, preprint IASSNS/HEP-88/46.}
\reference{atick88}{J.~Atick and E.~Witten, {\it Nucl.~Phys.~}{\bf B310}
(1988) 291.}
\reference{axenides88}{M.~Axenides, S.~Ellis, and C.~Kounnas,
{\it Phys.~Rev.~}{\bf D37} (1988) 2964.}
\reference{bailin92}{D.~Bailin and A.~Love, {\it Phys.~Lett.} {\bf B292}
(1992) 315.}
\reference{barnsley88}{M.~Barnsley, {\underbar{Fractals Everywhere}} (Academic
Press, Boston, 1988).}
\reference{bouwknegt87}{P.~Bouwknegt and W.~Nahm,
{\it Phys.~Lett.~}{\bf B184} (1987) 359;\\
F.~Bais and P.~Bouwknegt, {\it Nucl.~Phys.~}{\bf B279} (1987) 561;\\
P.~Bouwknegt, Ph.D.~Thesis.}
\reference{bowick89}{M.~Bowick and S.~Giddings, {\it Nucl.~Phys.~}{\bf B325}
(1989) 631.}
\reference{bowick92}{M.~Bowick, SUHEP-4241-522 (1992).}
\reference{bowick93}{M.~Bowick, Private communications.}
\reference{brustein92}{R.~Brustein and P.~Steinhardt, preprint UPR-541T.}
\reference{capelli87} {A.~Cappelli, C.~Itzykson, and
J.~Zuber, {\it Nucl.~Phys.~}{\bf B280 [FS 18]} (1987) 445;
{\it Commun.~Math.~Phys.~}{\bf 113} (1987) 1.}
\reference{carlitz}{R.~Carlitz, {\it Phys.~Rev.~}{\bf D5} (1972) 3231.}
\reference{candelas85}{P.~Candelas, G.~Horowitz, A.~Strominger, and E.~Witten,
{\it Nucl.~Phys.~}{\bf B258} (1985) 46.}
\reference{cateau92}{H.~Cateau and K.~Sumiyoshi,
{\it Phys.~Rev.~}{\bf D46} (1992) 2366.}
\reference{christe87}{P.~Christe, {\it Phys.~Lett.~}{\bf B188} (1987) 219;
{\it Phys.~Lett.~}{\bf B198} (1987) 215; Ph.D.~thesis (1986).}
\reference{clavelli90}{L.~Clavelli {\it et al.}, {\it Int.~J.~Mod.~Phys.~}{\bf A5}
(1990) 175.}
\reference{cleaver92a}{G.~Cleaver, {\it ``Comments on Fractional Superstrings,''}
To appear in the Proceedings of the International Workshop on String
Theory, Quantum Gravity and the Unification of Fundamental Interactions,
Rome, 1992.}
\reference{cleaver93a}{G.~Cleaver and D.~Lewellen, {\it Phys.~Lett.~}{\bf B300}
(1993) 354.}
\reference{cleaver93b}{G.~Cleaver and P.~Rosenthal, preprint CALT 68/1756.}
\reference{cleaver93c}{G.~Cleaver and P.~Rosenthal, preprint CALT 68/18__.}
\reference{cleaver}{G.~Cleaver, Unpublished research.}
\reference{cornwell89}{J.~F.~Cornwell, {\underbar{Group Theory in Physics}},
{\bf Vol. III}, (Academic Press, London, 1989).}
\reference{deo89a}{N.~Deo, S.~Jain, and C.~Tan, {\it Phys.~Lett.~}{\bf
B220} (1989) 125.}
\reference{deo89b}{N.~Deo, S.~Jain, and C.~Tan, {\it Phys.~Rev.~}{\bf D40}
(1989) 2626.}
\reference{deo92}{N.~Deo, S.~Jain, and C.~Tan, {\it Phys.~Rev.~}{\bf D45}
(1992) 3641.}
\reference{deo90a}{N.~Deo, S.~Jain, and C.-I.~Tan,
in {\underbar{Modern Quantum Field Theory}},
(World Scientific, Bombay, S.~Das {\it et al.} editors, 1990).}
\reference{distler90}{J.~Distler, Z.~Hlousek, and H.~Kawai,
{\it Int.~Jour.~Mod.~Phys.~}{\bf A5} (1990) 1093.}
\reference{distler93}{J.~Distler, private communication.}
\reference{dixon85}{L.~Dixon, J.~Harvey, C.~Vafa and E.~Witten,
{\it Nucl.~Phys.~}{\bf B261} (1985) 651; {\bf B274} (1986) 285.}
\reference{dixon87}{L.~Dixon, V.~Kaplunovsky, and C.~Vafa,
{\it Nucl.~Phys.~}{\bf B294} (1987) 443.}
\reference{drees90}{W.~Drees, {\underbar{Beyond the Big Bang},}
(Open Court, La Salle, 1990).}
\reference{dreiner89a}{H.~Dreiner, J.~Lopez, D.V.~Nanopoulos, and
D.~Reiss, preprints MAD/TH/89-2; CTP-TAMU-06/89.}
\reference{dreiner89b}{H.~Dreiner, J.~Lopez, D.V.~Nanopoulos, and
D.~Reiss, {\it Phys.~Lett.~}{\bf B216} (1989) 283.}
\reference{ellis90}{J.~Ellis, J.~Lopez, and D.V.~Nanopoulos,
{\it Phys.~Lett.~}{\bf B245} (1990) 375.}
\reference{fernandez92}{R.~Fern\' andez, J.~Fr\" ohlich, and A.~Sokal,
{\underbar{Random Walks, Critical Phenomena, and Triviality in}}
{\underbar{Quantum Mechanics}}, (Springer-Verlag, 1992).}
\reference{font90}{A.~Font, L.~Ib\'a\~ nez, and F.~Quevedo,
{\it Nucl.~Phys.~}{\bf B345} (1990) 389.}
\reference{frampton88}{P.~Frampton and M.~Ubriaco, {\it Phys.~Rev.~}{\bf D38} (1988) 1341.}
\reference{francesco87}{P.~di Francesco, H.~Saleur, and J.B.~Zuber,
{\it Nucl.~Phys.~}{\bf B285 [FS19]} (1987) 454.}
\reference{frautschi71}{S.~Frautschi, {\it Phys.~Rev.~}{\bf D3} (1971) 2821.}
\reference{gannon92}{T.~Gannon, Carleton preprint 92-0407.}
\reference{gasperini91}{M.~Gasperini, N.~S\'anchez, and G.~Veneziano,
{\it Int.~Jour.~Mod.~Phys.~}{\bf A6} (1991) 3853;
{\it Nucl.~Phys.~}{\bf B364} (1991) 365.}
\reference{gepner87}{D.~Gepner and Z.~Qiu, {\it Nucl.~Phys.~}{\bf B285} (1987)
423.}
\reference{gepner87b}{D.~Gepner, {\it Phys.~Lett.~}{\bf B199} (1987) 380.}
\reference{gepner88a}{D.~Gepner, {\it Nucl.~Phys.~}{\bf B296} (1988) 757.}
\reference{ginsparg88}{P.~Ginsparg, {\it Nucl.~Phys.~}{\bf B295 [FS211]}
(1988) 153.}
\reference{ginsparg89}{P.~Ginsparg, in \underbar{Fields, Strings and Critical
Phenomena}, (Elsevier Science Publishers, E.~Br\' ezin and
J.~Zinn-Justin editors, 1989).}
\reference{gross84}{D.~Gross, {\it Phys.~Lett.~}{\bf B138} (1984) 185.}
\reference{green53} {H.~S.~Green, {\it Phys.~Rev.~}{\bf 90} (1953) 270.}
\reference{hagedorn68}{R.~Hagedorn, {\it Nuovo Cim.~}{\bf A56} (1968) 1027.}
\reference{kac80}{V.~Ka\v c, {\it Adv.~Math.~}{\bf 35} (1980) 264;\\
V.~Ka\v c and D.~Peterson, {\it Bull.~AMS} {\bf 3} (1980) 1057;
{\it Adv.~Math.~}{\bf 53} (1984) 125.}
\reference{kac83}{V.~Ka\v c, {\underbar{Infinite Dimensional Lie Algebras}},
(Birkh\" auser, Boston, 1983);\\
V.~Ka\v c editor, {\underbar{Infinite Dimensional Lie Algebras and Groups}},
(World Scientific, Singapore, 1989).}
\reference{kaku91}{M.~Kaku, \underbar{Strings, Conformal Fields and Topology},
(Springer-Verlag, New York, 1991).}
\reference{kawai87a} {H.~Kawai, D.~ Lewellen, and S.-H.~Tye,
{\it Nucl.~Phys.~}{\bf B288} (1987) 1.}
\reference{kawai87b} {H.~Kawai, D.~Lewellen, J.A.~Schwartz,
and S.-H.~Tye, {\it Nucl.~Phys.~}{\bf B299} (1988) 431.}
\reference{kazakov85}{V.~Kazakov, I.~Kostov, and A.~Migdal,
{\it Phys.~Lett.~}{\bf B157} (1985) 295.}
\reference{khuri92}{R.~Khuri, CTP/TAMU-80/1992; CTP/TAMU-10/1993.}
\reference{kikkawa84}{K.~Kikkawa and M.~Yamasaki, {\it Phys.~Lett.~}{\bf B149}
(1984) 357.}
\reference{kiritsis88}{E.B.~Kiritsis, {\it Phys.~Lett.~}{\bf B217} (1988) 427.}
\reference{langacker92}{P.~Langacker, preprint UPR-0512-T (1992).}
\reference{leblanc88}{Y.~Leblanc, {\it Phys.~Rev.}{\bf D38} (1988) 38.}
\reference{lewellen87}{H.~Kawai, D.~Lewellen, and S.-H.~Tye,
{\it Nucl.~Phys.~}{\bf B288} (1987) 1.}
\reference{lewellen}{D.~C.~Lewellen, {\it Nucl.~Phys.~}{\bf B337} (1990) 61.}
\reference{lizzi90}{F.~Lizzi and I.~Senda, {\it Phys.~Lett.~}{\bf B244}
(1990) 27.}
\reference{lizzi91}{F.~Lizzi and I.~Senda, {\it Nucl.~Phys.~}{\bf B359}
(1991) 441.}
\reference{lust89}{D.~L\" ust and S.~Theisen,
{\underbar{Lectures on String Theory,}} (Springer-Verlag, Berlin, 1989).}
\reference{maggiore93}{M.~Maggiore, preprint IFUP-TH 3/93.}
\reference{mansouri87} {F.~Mansouri and X.~Wu, {\it Mod.~Phys.~Lett.~}{\bf A2}
(1987) 215; {\it Phys.~Lett.~}{\bf B203} (1988) 417;
{\it J.~Math.~Phys.~}{\bf 30} (1989) 892;\\
A. Bhattacharyya {\it et al.,} {\it Mod.~Phys.~Lett.~}{\bf A4} (1989)
1121; {\it Phys.~Lett.~}{\bf B224} (1989) 384.}
\reference{narain86} {K.~S.~Narain, {\it Phys.~Lett.~}{\bf B169} (1986) 41.}
\reference{narain87} {K.~S.~Narain, M.H.~Sarmadi, and C.~Vafa,
{\it Nucl.~Phys.~}{\bf B288} (1987) 551.}
\reference{obrien87}{K.~O'Brien and C.~Tan, {\it Phys.~Rev.~}{\bf D36} (1987)
1184.}
\reference{parisi79}{G.~Parisi, {\it Phys.~Lett.~}{\bf B81} (1979) 357.}
\reference{polchinski88}{J.~Polchinski, {\it Phys.~Lett.~}{\bf B209} (1988)
252.}
\reference{polchinski93}{J.~Polchinski, Private communications.}
\reference{pope92}{C.~Pope, preprint CTP TAMU-30/92 (1992).}
\reference{raiten91}{E.~Raiten, Thesis, (1991).}
\reference{roberts92}{P.~Roberts and H.~Terao, {\it Int.~J.~Mod.~Phys.~}{\bf A7}
(1992) 2207;\\
P.~Roberts, {\it Phys.~Lett.~}{\bf B244} (1990) 429.}
\reference{sakai86}{N.~Sakai and I.~Senda, {\it Prog.~Theo.~Phys.~}
{\bf 75} (1986) 692.}
\reference{salomonson86}{P.~Salomonson and B.-S.~Skagerstam, {\it
Nucl.~Phys.~}{\bf B268} (1986) 349.}
\reference{schellekens89} {A.~N.~Schellekens and S.~Yankielowicz,
{\it Nucl.~Phys.~}{\bf B327} (1989) 3;\\
A.~N.~Schellekens, {\it Phys.~Lett.~}{\bf B244} (1990) 255;\\
B.~Gato-Rivera and A.~N.~Schellekens, {\it Nucl.~Phys.~}{\bf B353} (1991)
519; {\it Commun.~Math.~Phys.~}{\bf 145} (1992) 85.}
\reference{schellekens89b}{B.~Schellekens, ed. \underbar{Superstring Construction},
(North-Holland Physics, Amsterdam, 1989).}
\reference{schellekens89c}{B.~Schellekens, CERN-TH-5515/89.}
\reference{schwarz87}{M.~Green, J.~Schwarz, and E.~Witten,
\underbar{Superstring Theory}, {\bf Vols. I \& II},
(Cambridge University Press, New York, 1987).}
\reference{turok87a}{D.~Mitchell and N.~Turok, {\it Nucl.~Phys.~}{\bf B294}
(1987) 1138.}
\reference{turok87b}{N.~Turok, Fermilab 87/215-A (1987).}
\reference{verlinde88}{E.~Verlinde, {\it Nucl.~Phys.~}{\bf B300}
(1988) 360.}
\reference{warner90}{N.~Warner, {\it Commun.~Math.~Phys.~}{\bf 130} (1990) 205.}
\reference{wilczek90} {F.~Wilczek, ed. \underbar {Fractional Statistics and Anyon
Superconductivity}, (World Scientific, Singapore, 1990) 11-16.}
\reference{witten92}{E.~Witten, preprint IASSNS-HEP-93-3.}
\reference{vafa1}{R.~Brandenberger and C.~Vafa, {\it Nucl.~Phys.}
{\bf B316} (1989) 391.}
\reference{vafa2}{A.A.~Tseytlin and C.~Vafa, {\it Nucl.~Phys.}
{\bf B372} (1992) 443.}
\reference{zamol87}{A.~Zamolodchikov and V.~Fateev, {\it Sov.~Phys.~JETP~}
{\bf 62} (1985) 215;
{\it Teor.~Mat.~Phys.~}{\bf 71} (1987) 163.}
\end{putreferences}
\bye
\section{Introduction}
\vspace{-1mm}
Recently, the machine learning research community has devoted considerable effort and financial outlay to scaling deep neural networks (DNNs) to enormous sizes (e.g., $175$ billion parameters in GPT-3~\citep{brown2020language}). Although such overparameterization simplifies the training of DNNs and dramatically improves their generalization~\citep{bartlett2021deep,du2018gradient,kaplan2020scaling}, it may severely obstruct practical usage on resource-limited platforms like mobile devices, due to the large memory footprint and inference time~\citep{hoefler2021sparsity}. Pruning is one of the effective remedies, dating back to~\citet{lecun1990optimal}: it can eliminate substantial redundant model parameters and boost the computational and storage efficiency of DNNs.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{Figs/res18_cf10.pdf}
\vspace{-8mm}
\caption{{\small Test accuracy achieved at different sparsity levels by diverse unstructured and structural subnetworks. Sparse models from classical channel-wise structural pruning algorithms~\citep{he2017channel,liu2017learning,bartoldson2019generalization,molchanov2019importance} cannot match the full accuracy of the dense model.}}
\label{fig:tisser}
\vspace{-5mm}
\end{figure}
Such benefits drive numerous interests in designing model pruning algorithms~\citep{han2015deep,han2015learning,ren2018admmnn,he2017channel,liu2017learning}. Among this huge family, an emerging representative studies the prospect of training \textit{sparse subnetworks} in lieu of the full dense models without impacting performance~\citep{frankle2018the,chen2020lottery}. For instance, \citet{frankle2018the} demonstrates that dense models contain sparse, matching subnetworks~\citep{frankle2020linear} (a.k.a. \textit{winning tickets}) capable of training in isolation from the original initialization to match or even surpass the full accuracy. This phenomenon is referred to as the \textit{lottery tickets hypothesis} (LTH), which indicates several impressive observations: $(i)$ usually extreme sparsity levels (e.g., $90\%$, $95\%$) can be achieved without sacrificing the test accuracy; $(ii)$ the located winning ticket maintains undamaged expressive power as its dense counterpart, and can be easily trained from scratch or early-epoch weights~\citep{Renda2020Comparing,frankle2020linear} to recover the full performance. These advances are positive signs of the substantial potential of sparse DNNs.
However, almost all LTH literature investigates unstructured sparsity only. In practical scenarios, it brings little hardware efficiency benefits due to the poor data locality and low parallelism~\citep{he2017channel,mao2017exploring,wen2016learning} caused by highly irregular sparse patterns. Meanwhile, most of the accelerators are optimized for dense matrix operations~\citep{han2016eie}, which means there is limited speedup for unstructured pruned subnetworks even if the sparsity level exceeds $95\%$~\citep{wen2016learning}. Structural pruning~\citep{he2017channel,liu2017learning} as an alternative to exploring sparse subnetworks, removes the entire filter or channel in DNNs to gain more computational efficiency at the cost of (more) accuracy degradation. As shown in Fig.~\ref{fig:tisser}, traditional channel-wise structural pruning approaches~\citep{he2017channel,bartoldson2019generalization,molchanov2019importance} quickly degrade performance and cannot lead to winning tickets, which was also echoed in~\citet{You2020Drawing}.
\vspace{-0.5mm}
In our paper, we present the first study of \textit{structural lottery tickets}, which explores hardware-friendly structural sparsity (including channel-wise and group-wise patterns) in order to find lottery tickets. Specifically, we start from unstructured sparse subnetworks, and then adopt our proposed \textit{refilling} technique to create channel-wise structural sparsity by growing back the pruned elements within the most important channels and abandoning the rest. Our results (Section~\ref{sec:main_res}) show that such refined channel-wise structural subnetworks win the lottery at moderate sparsity levels with $\sim 50\%$ running time savings on an Nvidia 2080 TI GPU. In order to push the compression ratio higher, we introduce a \textit{regrouping} algorithm based on hypergraph partitioning~\citep{rumi2020accelerating} to establish group-wise structural patterns, which are more amenable to pruning due to the shape flexibility of the grouped dense blocks. These group-wise structural winning tickets achieve $\sim60\%$ running time savings at $50\%\sim80\%$ sparsity without any performance degradation compared to the dense models.
\vspace{-0.5mm}
Note that this paper focuses on \textit{general} structural sparse patterns capable of acceleration, including conventional channel-wise sparsity and other fine-grained structural sparsity. The latter has become prevalent recently since it achieves superior performance while maintaining satisfactory speedup, sparking great interest in industry, e.g., at NVIDIA (N:M)~\citep{zhou2021learning} and Google (Block-wise)~\citep{shangguan2019optimizing}. Meanwhile, unlike~\citet{zhou2021learning}, our group-wise sparse patterns do NOT need any specific hardware accelerators and are generally applicable to common GPU devices. Lastly, although we mainly investigate inference efficiency, our proposals can also enable efficient training in transfer learning paradigms, as demonstrated in Appendix~\ref{sec:more_results}. Our main contributions lie in the following aspects:
\vspace{-1mm}
\begin{itemize}
\item To our best knowledge, we are the first to demonstrate the existence of structurally sparse winning tickets at non-trivial sparsity levels (i.e., $>30\%$), and with both channel-wise and group-wise sparse patterns.
\item We propose the \textit{refilling} technique and introduce the \textit{regrouping} algorithm to form channel-wise and group-wise structural sparsity. Such refined structural subnetworks match the trainability and expressiveness of dense networks, while enabling the inference speedup on practical hardware platforms like GPU machines (general and not tied to particular hardware).
\item Extensive experiments validate our proposal on diverse datasets (i.e., CIFAR-10/100, Tiny-ImageNet, and ImageNet) across multiple network architectures, including ResNets, VGG, and MobileNet. Specifically, our structural winning tickets achieve $53.75\%\sim64.93\%$ GPU running time savings at $45\%\sim80\%$ channel- and group-wise sparsity.
\end{itemize}
\vspace{-1mm}
\vspace{-1mm}
\section{Related Work}
\vspace{-1mm}
\begin{figure*}[t]
\centering
\vspace{-0.5em}
\includegraphics[width=0.95\linewidth]{Figs/StructureLTH.pdf}
\vspace{-4mm}
\caption{{\small Overview of our proposals including refilling, refilling+, and regrouping, which turn unstructured sparse mask into channel-wise and group-wise structured sparse masks.}}
\vspace{-5mm}
\label{fig:methods}
\end{figure*}
\paragraph{Pruning.} Network pruning is a technique that aims at eliminating unnecessary model parameters~\citep{blalock2020state}, which can effectively shrink models for deployment on resource-constrained devices~\citep{lecun1990optimal,hanson1988comparing}. Pruning algorithms are roughly categorized into two groups: (1) unstructured pruning~\citep{lecun1990optimal,han2015deep,han2015learning,ren2018admmnn,Zhang_2018} with irregular sparse patterns; (2) structural pruning~\citep{he2017channel,liu2017learning,li2016pruning,hu2016network,wen2016learning,hong2018efficient} with structural sparse patterns such as layer-wise, channel-wise, block-wise, and column-wise.
Within the group of unstructured pruning methods, \citet{han2015deep,han2015learning} remove insignificant connections in the post-training stage, according to certain heuristics like weight/gradient magnitudes; sparsification during training is another popular approach to pruning, leveraging $\ell_0$ regularization~\citep{louizos2017learning} or the alternating direction method of multipliers (ADMM)~\citep{ren2018admmnn,Zhang_2018}. Recently, several pruning-at-initialization methods~\citep{Wang2020Picking,snip,tanaka2020pruning} have been proposed to identify critical unstructured connections that preserve gradient flow, without any training. Although unstructured sparse models have superior performance, they usually suffer from poor data locality and low parallelism~\citep{he2017channel,mao2017exploring,wen2016learning}, which make them hard to accelerate on real-world hardware platforms.
On the contrary, structural pruning is more hardware-friendly, at the cost of notable accuracy loss when the compression ratio increases. \citet{he2017channel,liu2017learning} slim the network channels via $\ell_1$ regularization, and~\citet{bartoldson2019generalization} selects important channels according to heuristics of feature maps. To combine the benefits of structural and unstructured pruning, hybrid pruning strategies have been introduced to pursue more general structural sparse patterns that are still capable of acceleration, for example, convolution kernels with half regular sparsity~\citep{chen2018sc}, pattern-based structural sparsity~\citep{ma2020pconv}, and vector-wise~\citep{Zhu2019STC} or group-wise~\citep{rumi2020accelerating} regular sparsity.
\vspace{-1mm}
\paragraph{The lottery tickets hypothesis (LTH).} The lottery ticket hypothesis (LTH)~\citep{frankle2018the} conjectures that there exists a sparse subnetwork, called the winning ticket, within a dense network, whose performance can match that of the dense network when trained from the same initialization. With the assistance of weight rewinding techniques~\citep{Renda2020Comparing,frankle2020linear}, the original LTH can be scaled up to larger networks and datasets. The existence of winning tickets has been broadly verified in diverse contexts, such as image classification~\citep{frankle2018the,pmlr-v139-zhang21c,chen2020lottery2,ma2021good,gan2021playing,chen2021you}, natural language processing~\cite{gale2019state,chen2020lottery}, generative adversarial networks~\cite{chen2021gans,chen2021ultra}, graph neural networks~\cite{chen2021unified}, and reinforcement learning~\cite{yu2019playing}. However, all of the above LTH literature only locates \textit{unstructured} sparse winning tickets, which can hardly bring a hardware efficiency boost to real-world applications.
As the most related work, \citet{You2020Drawing} finds structural winning tickets at only low sparsity levels around $30\%$ in a few cases. It again reveals the complication and difficulty of identifying computation-friendly sparse patterns. Another concurrent work~\citep{alabdulmohsin2021generalized} investigates a generalized LTH with weight space factorization, which is orthogonal to our work.
\begin{table*}[t]
\centering
\vspace{-4mm}
\caption{Implementation details which follow the standard settings in~\citet{ma2021sanity}.}
\label{tab:all_exp}
\begin{adjustbox}{width=1\textwidth}
\begin{threeparttable}
\begin{tabular}{l|cccccccccc}
\toprule
\multirow{2}{*}{Settings} & \multicolumn{4}{c}{CIFAR-10} & \multicolumn{4}{c}{CIFAR-100} & \multicolumn{1}{c}{Tiny-ImageNet} & \multicolumn{1}{c}{ImageNet}\\ \cmidrule(lr){2-5} \cmidrule(lr){6-9} \cmidrule(lr){10-10} \cmidrule(lr){11-11}
& WRN-32-2 & RN-18 & MBNet-v1 & VGG-16 & WRN-32-2 & RN-18 & MBNet-v1 & VGG-16 & RN-50 & RN-50\\ \midrule
Batch Size & 128 & 128 & 128 & 128 & - & - & 64 & - & 32 & - \\ \midrule
Weight Decay & \multicolumn{1}{c}{$1\times10^{-4}$} & $1\times10^{-4}$ & $1\times10^{-4}$ & \multicolumn{1}{c}{$2\times10^{-4}$} & $2\times10^{-4}$ & $2\times10^{-4}$ & $2\times10^{-4}$ & \multicolumn{1}{c}{$5\times10^{-4}$} & $5\times10^{-4}$ & $1\times10^{-4}$\\ \midrule
\multirow{1}{*}{Learning Rate} & \multicolumn{9}{c}{0.1;$\times0.1$ at 80,120 epoch of total 160 epochs} & 0.1;$\times0.1$ at 30,60 epoch of total 95 epochs\\ \midrule
Optimizer & \multicolumn{10}{c}{SGD~\citep{ruder2016overview} with a momentum of 0.9}\\ \midrule
Model Size & $1.86$ M & $11.22$ M & $3.21$ M & $14.72$ M & $1.86$ M & $11.22$ M & $3.21$ M & $14.72$ M & $25.56$ M & $25.56$ M \\
\bottomrule
\end{tabular}
\end{threeparttable}
\end{adjustbox}
\vspace{-4mm}
\end{table*}
\vspace{-3mm}
\paragraph{Sparse convolutional neural network acceleration on GPU.} Previous works have explored the acceleration of sparse convolution operations in two different directions. \underline{One direction} is to design efficient implementations of unstructured pruned networks for improved data locality and hardware utilization~\citep{chen2018escort,park2016faster}. For example, \citet{dong2019acorns} proposes ``Acorns'' to accelerate the sparse computations of convolution kernels with input sparsity. \citet{peng2017adaptive} proposes a matrix splitting algorithm for efficient inference of convolutional neural networks (CNNs). Nvidia's cuSPARSE\footnote{\scalebox{0.66}{\url{https://docs.nvidia.com/cuda/archive/10.2/cusparse/index.html}}} library contains various efficient sparse matrix computation algorithms like SpMM on GPUs, drawing great attention in efficient scientific computing. Furthermore, advanced approaches have been developed based on SpMM, such as Adaptive Sparse Tiling (ASpT)~\citep{hong2019adaptive}. ASpT significantly improves the data usage of SpMM and achieves the current state-of-the-art performance among SpMM implementation variants. \underline{Another direction} focuses on more hardware-friendly pruning methods~\citep{chen2018sc,ma2020pconv,niu2020patdnn}. During model pruning, these works aim to maintain certain regular sparse patterns, which benefit the hardware processing/computing of the corresponding sparse matrices. However, \citet{chen2018sc} achieves an unsatisfactory compression ratio, while the pruning methods used in~\citet{ma2020pconv} and \citet{niu2020patdnn} require dedicated compiler optimization to accelerate network execution.
\vspace{-1mm}
\section{Methodology}
\subsection{Notations and Preliminaries} \label{sec: preliminaries}
\paragraph{Sparse subnetworks and pruning methods.} In this paper, we mainly follow the routine notations in~\cite{frankle2018the, Renda2020Comparing}. For a network $f(x;\theta)$ with input samples $x$ and model parameters $\theta$, a sparse subnetwork is a network $f(x;m\odot\theta)$ with a binary pruning mask $m\in\{0,1\}^{|\theta|}$, where $\odot$ is the element-wise product. In other words, it is a copy of dense network $f(x;\theta)$ with some weights fixed to $0$. If the non-fixed remaining weights are distributed irregularly, we call it \textbf{unstructured} sparse patterns (e.g., the \textit{left} of Figure~\ref{fig:methods}); if they are clustered into channels or groups, we name it \textbf{structural} sparse patterns (e.g., the \textit{right} of Figure~\ref{fig:methods}).
To obtain the desired sparse subnetworks, we consider and benchmark multiple classical pruning algorithms: (1) \textit{random pruning} (\texttt{RP}), which usually serves as a necessary baseline for sanity checks~\citep{frankle2018the}; (2) \textit{one-shot magnitude pruning} (\texttt{OMP}), which eliminates a portion of the model parameters with the globally smallest magnitudes~\citep{han2015deep}; (3) \textit{the lottery ticket hypothesis}~\citep{frankle2018the} with iterative weight magnitude pruning (\texttt{LTH-IMP}, or \texttt{IMP} for simplicity)~\citep{han2015deep}. As adopted in the LTH literature~\citep{frankle2018the}, we identify sparse lottery tickets by iteratively removing $20\%$ of the remaining weights with the globally smallest magnitudes, and rewinding the model weights to the original random initialization~\citep{frankle2018the} or to early training epochs~\citep{Frankle2020The,chen2020lottery2}. In this paper, the model weights are rewound to the eighth epoch (i.e., $5\%$ of the entire training process) for all CIFAR, Tiny-ImageNet, and ImageNet experiments. (4) \textit{Pruning at initialization} mechanisms. We choose several representative approaches such as \texttt{SNIP}~\citep{lee2018snip}, \texttt{GraSP}~\citep{Wang2020Picking}, and \texttt{SynFlow}~\citep{tanaka2020pruning}, which explore sparse patterns at random initialization with gradient-flow-based criteria. (5) \textit{Alternating Direction Method of Multipliers} (\texttt{ADMM}) for pruning. It is a well-known optimization-based pruning method~\citep{niu2020patdnn,Zhang_2018}, which can obtain superior compression ratios with little performance degradation for deep neural networks. Note that all pruning approaches are applied to the networks without counting their classification heads~\citep{frankle2018the}.
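For concreteness, below is a minimal PyTorch-style sketch of the \texttt{OMP} baseline (a single global magnitude threshold over all prunable layers); the function name and the name-based skipping of classification heads are illustrative choices of ours, not code from the cited works.
\begin{verbatim}
import torch

def omp_masks(model, sparsity, skip=("fc", "classifier")):
    # One-shot magnitude pruning (OMP): zero out the `sparsity`
    # fraction of weights with the globally smallest magnitudes.
    params = [(n, p) for n, p in model.named_parameters()
              if p.dim() > 1 and not any(s in n for s in skip)]
    scores = torch.cat([p.detach().abs().flatten() for _, p in params])
    k = int(sparsity * scores.numel())
    thresh = scores.kthvalue(max(k, 1)).values  # global threshold
    # Weights strictly above the global threshold survive.
    return {n: (p.detach().abs() > thresh).float() for n, p in params}
\end{verbatim}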
\vspace{-1mm}
\paragraph{Structural winning tickets.} We begin by extending the original lottery tickets hypothesis to the context of structural sparse patterns. A subnetwork $f(x;m\odot\theta)$ is a structural winning ticket for an algorithm $\mathcal{A}^{\mathcal{T}}_t$ if it satisfies: \ding{172} training subnetworks $f(x;m\odot\theta)$ with algorithm $\mathcal{A}_t^{\mathcal{T}}$ results in performance measurement on task $\mathcal{T}$ no lower than training dense networks $f(x;\theta)$ with algorithm $\mathcal{A}^{\mathcal{T}}_t$, where $\theta$ is the original random initialization $\theta_0$ or early rewound weights like $\theta_{5\%}$, and $t$ is the training iterations; \ding{173} the non-zero elements in pruning mask $m$ are clustered as channels, groups or other hardware-friendly structural patterns.
\vspace{-1mm}
\paragraph{Implementation details.} We conduct experiments on diverse combinations of network architectures and datasets. Specifically, we adopt Wide-ResNet-32-2~\citep{zagoruyko2016wide} (or WRN-32-2), ResNet-18~\citep{he2016deep} (or RN-18), MobileNet-v1 (or MBNet-v1)~\citep{howard2017mobilenets}, and VGG-16~\citep{simonyan2014very} on both CIFAR-10~\citep{krizhevsky2009learning} and CIFAR-100 datasets. ResNet-50 (or RN-50) is evaluated on both Tiny-ImageNet~\citep{le2015tiny} and ImageNet~\citep{deng2009imagenet} datasets. Table~\ref{tab:all_exp} includes more training and evaluation details of our experiments.
\subsection{Refilling for Structural Patterns}
\vspace{-1mm}
It is well known that the irregular sparsity patterns from unstructured magnitude pruning block acceleration on practical hardware devices. To overcome this limitation, we propose a simple \textit{refilling} strategy to reorganize the unstructured sparse patterns and make them more hardware-friendly. Specifically, we \underline{first} select important channels from the unstructured subnetwork according to certain criteria; the number of picked channels depends on the desired sparsity level. \underline{Then}, the pruned elements in these channels are grown back to be trainable (i.e., unpruned) and are reset to the same random initialization or early rewound weights. \underline{Lastly}, the remaining parameters in the insignificant channels are removed. In this way, we refill important channels and empty the rest to create a channel-wise structural sparse pattern that brings substantial computational reductions. Note that the picking criterion can be the number of remaining weights in the channel, or the channel's weight statistics, feature statistics, or salience scores, which are comprehensively investigated in the ablation (Section~\ref{sec:more_results}). The complete pipeline and an illustration are given in Algorithm~\ref{alg:IMP_Refill} and Figure~\ref{fig:methods}, respectively.
\vspace{-4mm}
\begin{minipage}{0.48\textwidth}
\begin{algorithm}[H]
\caption{\texttt{IMP} with rewinding step $i$} \label{alg:IMP}
\begin{algorithmic}[1]
\REQUIRE {$f(x;\theta_0)$, unstructured sparsity $s$}
\ENSURE {$f(x; m\odot\theta_i)$}
\STATE Set the pruning mask $m=\boldsymbol{1}\in\mathbb R^{|\theta|}$\\
\STATE Train $f(x;\theta_0)$ for $i$ steps: $f(x;\theta_i)=\mathcal{A}_i^{\mathcal{T}}(f(x;\theta_0))$ \\
\WHILE {not reach sparsity $s$}
\STATE Train $f(x;m\odot\theta_i)$ for $t-i$ steps: $f(x;m\odot\theta_t)=\mathcal{A}_{t-i}^{\mathcal{T}}(f(x;m\odot\theta_i))$ \\
\STATE Pruning $20\%$ of remaining weight of $m\odot\theta_t$, and update $m$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\end{minipage}
\vspace{-6mm}
\begin{minipage}{0.48\textwidth}
\begin{algorithm}[H]
\caption{\texttt{IMP-Refill(+)}} \label{alg:IMP_Refill}
\begin{algorithmic}[1]
\REQUIRE {$f(x;m\odot\theta_i)$ with unstructured sparsity $s$ (Algo.~\ref{alg:IMP})}
\ENSURE {$f(x;m\odot\theta_i)$ with channel-wise structural mask $m$ at sparsity $\tilde{s}$}
\STATE Calculate importance scores of each channel according to certain criterion \\
\STATE Pick top-$k$ channels in $m$, refill back their $0$ (pruned) elements with $1$ (trainable) and update $m$, maintaining $\tilde{s}\sim s$ \\
\STATE Pick and refill back extra channels in $m$ with $\tilde{s}^+< s$ \\
\textcolor{gray}{\# Optional for \texttt{Refill+}}
\end{algorithmic}
\end{algorithm}
\end{minipage}
\vspace{-6mm}
\begin{minipage}{0.48\textwidth}
\begin{algorithm}[H]
\caption{\texttt{IMP-Regroup}} \label{alg:regroup}
\begin{algorithmic}[1]
\REQUIRE {$f(x;m\odot\theta_i)$ with unstructured sparsity $s$ from Algorithm~\ref{alg:IMP}, hyperparameters $t_1$, $t_2$, $b_1$, and $b_2$}
\ENSURE {$f(x;m\odot\theta_i)$ with group-wise structural mask $m$ at sparsity $s^*$}
\WHILE {dense block can be found}
\STATE {Divide the rows of the sparse pruning mask $m$ into $t_1$ groups using hypergraph partitioning (hMETIS)\footnote{\scalebox{0.75}{\url{http://glaros.dtc.umn.edu/gkhome/metis/hmetis/overview}}} \\
\FOR{group $c_i\in\{c_1,c_2,\dots,c_{t_1}\}$ }{
\IF{$c_i$ has $\ge b_1$ rows}{
\STATE Select columns in $c_i$ that has no less than $t_2$ non-zero items \\
\IF{$\ge b_2$ columns are selected}{
\STATE Group and Refill the selected columns as well as rows to a dense block, and update $m$ \\
}
\ENDIF
}
\ENDIF
}
\ENDFOR
}
\ENDWHILE
\STATE Set other elements out of dense blocks to $0$ \\
\end{algorithmic}
\end{algorithm}
\end{minipage}
\vspace{-4mm}
Here we provide a detailed description of how many and which channels we choose to refill. Our main experiments adopt the $\ell_1$ norm of channel weights as the picking criterion to score the channel importance, due to its superior performance. Let $\theta^l\in\mathbb{R}^{c_{\mathrm{out}}\times n}$ denote the parameters of convolutional layer $l$, where $c_{\mathrm{out}}$ is the number of output channels and $n$ is the continued product of the number of input channels, kernel height, and kernel width, as shown in Figure~\ref{fig:methods}. $\theta^l_i\in\mathbb{R}^{n}$ represents the weights in the $i$th kernel and $m^l_i\in\{0,1\}^{|\theta^l_i|}$ is the corresponding mask. We first calculate the $\ell_1$ norm of $m^l_i\odot\theta^l_i$, which is the sum of the absolute values of the remaining weights in kernel $i$. Then we use it to pick the top-$k$ scored kernels, which are fully refilled, with $k=\lceil (1-s^l)\times c_{\mathrm{out}}\rceil$, where $s^l$ is the original layerwise sparsity; in this way the $k$ refilled kernels contain roughly as many weights ($k\times n$) as the original unstructured subnetwork. Meanwhile, the remaining $c_\mathrm{out}-k$ kernels are dropped for efficiency gains.
Furthermore, we propose a soft version, \textit{refilling+}, which compensates for the aggressive step of wiping out all remaining channels. It picks and re-activates an extra proportion of channels to slow down the reduction in network capacity, as indicated by the light blue blocks in Figure~\ref{fig:methods}.
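To make the procedure concrete, the following is a minimal NumPy sketch of \textit{refilling} for a single convolutional layer whose mask and weights are reshaped to $c_{\mathrm{out}}\times n$; the function name is ours, and $k$ is chosen to roughly preserve the parameter count, matching the $\tilde{s}\sim s$ condition of Algorithm~\ref{alg:IMP_Refill}.
\begin{verbatim}
import numpy as np

def refill(mask, weight, extra=0):
    # mask, weight: arrays of shape (c_out, n); mask is binary (0/1).
    c_out, n = mask.shape
    # Channel importance: l1 norm of the remaining (unpruned) weights.
    scores = np.abs(mask * weight).sum(axis=1)
    # Keep enough channels to roughly preserve the unstructured
    # parameter count; extra > 0 re-activates more (refilling+).
    k = min(c_out, int(np.ceil(mask.sum() / n)) + extra)
    keep = np.argsort(-scores)[:k]
    new_mask = np.zeros_like(mask)
    new_mask[keep, :] = 1  # refill the selected channels entirely
    return new_mask
\end{verbatim}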
\begin{figure*}[t]
\centering
\vspace{-0.5em}
\includegraphics[width=1\linewidth]{Figs/ImageNet.pdf}
\vspace{-9mm}
\caption{{\small (\textit{Curve plots}) Testing accuracy (\%) over network sparsity (\%) on Tiny-ImageNet and ImageNet datasets with ResNet-50 ($25.56$ M). (\textit{Radar plots}) The end-to-end inference time saving of extreme structural winning tickets. Unstructured subnetworks or dense models do not have structural sparsity, and thus they are plotted as dots in the axes of accuracy in the corresponding radar plot. The rightmost plot includes three extreme regroup tickets with accuracy drop $<1\%$, where ``RG S: $x\%$" indicates unstructured sparsity before regrouping.}}
\vspace{-0.7em}
\label{fig:imagenet_res}
\end{figure*}
\subsection{Regrouping for Structural Patterns}
Although the proposed \textit{refilling+} reorganizes the unstructured mask and produces useful channel-wise structural subnetworks, it is rigid and inelastic, since the smallest manageable unit is a kernel. In other words, the dense matrices in the identified structural patterns have a restricted shape in which one dimension must align with the kernel size $n$, i.e., the continued product of the number of input channels, kernel height, and kernel width. Motivated by~\citet{rumi2020accelerating}, we introduce a \textit{regrouping} strategy (Figure~\ref{fig:methods}) to create more fine-grained group-wise structural patterns with flexible shapes for the remaining dense matrices.
$\rhd$ \textbf{How to perform regrouping?} \textit{Regrouping} aims to find and extract dense blocks of non-pruned elements in the sparse weight matrix. These blocks have diverse shapes, as demonstrated in Figure~\ref{fig:methods}, which are usually smaller in size compared to the original sparse matrix. Note that a channel/kernel can be regarded as a special case of the dense block.
As described in Algorithm~\ref{alg:regroup}, to achieve this goal we first need to find similar rows and columns and then bring them together. Specifically, we adopt the Jaccard similarity~\citep{rumi2020accelerating,10.1145/3332466.3374546} between the sets of non-zero columns as the similarity between two rows of the sparse matrix, calculated as the ratio of the cardinality of the intersection to that of the union of their non-zero columns. For instance, kernel $1$ and kernel $2$ in Figure~\ref{fig:methods} (upper left) share three of eight distinct non-zero columns, so their similarity is $\frac{3}{8}$. The larger the similarity of two rows, the denser the block they form when grouped together. Take Figure~\ref{fig:methods} as an example: we can group kernels $1,2,3$'s non-zero columns $1,3,6,11$ with at least two elements together, which leads to the first orange dense block.
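The row similarity can be computed as in the following small sketch (the function name is ours):
\begin{verbatim}
def jaccard(row_a, row_b):
    # Jaccard similarity between two kernel rows of a binary mask:
    # |intersection| / |union| of their non-zero column indices.
    a = {j for j, v in enumerate(row_a) if v}
    b = {j for j, v in enumerate(row_b) if v}
    return len(a & b) / len(a | b) if (a | b) else 0.0
\end{verbatim}
Applied to kernels $1$ and $2$ above, it returns $3/8$.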
More precisely, we use hypergraph partitioning in the regrouping algorithm to generate dense blocks. It treats each row and column of the sparse matrix as a node and a hyperedge in the hypergraph, respectively, where a hyperedge (i.e., column) connects the corresponding nodes (i.e., rows). Then, the pair-wise similarity is leveraged to locate an optimal partitioning, which can be computed with hMETIS\footnote{\scalebox{0.75}{\url{http://glaros.dtc.umn.edu/gkhome/metis/hmetis/overview}}}. More details are given in~\citet{rumi2020accelerating}. After obtaining the desired dense blocks, we make all their parameters trainable by refilling the corresponding pruned elements. Note that refilling these pruned weights does not cause any efficiency loss, since the size of each block is fixed, while it potentially maximizes the usage of the blocks and brings accuracy gains. Meanwhile, the parameters not included in the dense blocks are discarded, i.e., the corresponding positions in the binary mask $m$ are set to zero, to reduce the computational overhead, as illustrated in Figure~\ref{fig:methods}. This is because any parameter outside the dense blocks requires extra weight loading and has little data reuse~\citep{rumi2020accelerating}, which harms the trade-off between accuracy and efficiency.
$\rhd$ \textbf{How are refilled / regrouped dense blocks beneficial?} We notice that common tools like cuDNN~\citep{chetlur2014cudnn} have a significant drawback: the inference time does not change linearly with the number of kernels, since they are only optimized for kernel matrices whose row count is a multiple of $32$~\citep{radu2019performance}. For example, as stated in~\citet{rumi2020accelerating}, a convolutional layer with $10$ kernels might have an inference time similar to that of a convolutional layer with $32$ kernels. However, the number of kernels in our dense blocks is almost arbitrary, so a more sophisticated, efficient GEMM-based implementation~\citep{rumi2020accelerating} is needed to better accelerate our refilled / regrouped structural patterns. Following~\citet{rumi2020accelerating}, we split a block with $r$ rows into two parts: one with $\lfloor r/32\rfloor\times 32$ rows and the other with $r \bmod 32$ rows. For the first part, we directly apply the standard GEMM-based convolution algorithm, with shared memory used to cache the input and output matrices. For the second part, due to the poor data reuse of the input matrices, we instead cache the kernel and output matrices for an improved cache hit rate and overall performance. More details are given in~\citet{rumi2020accelerating}.
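The bookkeeping of this split can be stated compactly (a sketch only; the actual GEMM kernels are those of~\citet{rumi2020accelerating}):
\begin{verbatim}
def split_rows(r, tile=32):
    # Part 1: row count is a multiple of `tile`; handled by the
    # standard shared-memory GEMM that caches input and output.
    main = (r // tile) * tile
    # Part 2: the r mod `tile` leftover rows; handled by the variant
    # that caches the kernel and output matrices instead.
    return main, r - main  # e.g., split_rows(50) -> (32, 18)
\end{verbatim}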
\begin{figure*}[t]
\centering
\vspace{-0.5em}
\includegraphics[width=1\linewidth]{Figs/CIFAR_small.pdf}
\vspace{-8mm}
\caption{{\small Testing accuracy (\%) over sparsity (\%) on CIFAR-10/100 with Wide-ResNet-32-2 ($1.86$ M) and MobileNet-v1 ($3.21$ M).}}
\vspace{-0.4em}
\label{fig:cifar_res_small}
\end{figure*}
\begin{figure*}[t]
\centering
\vspace{-0.5em}
\includegraphics[width=1\linewidth]{Figs/CIFAR_large_model.pdf}
\vspace{-8mm}
\caption{{\small (\textit{Curve plots}) Testing accuracy (\%) over sparsity (\%) on CIFAR-10/100 with large models VGG-16 ($14.72$ M) and RN-18 ($11.22$ M). (\textit{Radar plots}) The end-to-end inference time saving of extreme structural winning tickets. Note that unstructured subnetworks or dense models do not have structural sparsity, and thus they are plotted as dots in the axes of accuracy in the corresponding radar plot.}}
\vspace{-1em}
\label{fig:cifar_res_large}
\end{figure*}
\section{The Existence of Structural Winning Ticket} \label{sec:main_res}
\paragraph{Tiny-ImageNet and ImageNet.} In this section, we reveal the existence of our proposed structural winning tickets on ImageNet and Tiny-ImageNet with the ResNet-50 backbone. Results of unstructured \texttt{IMP}, channel-wise structural \texttt{IMP-Refill(+)}, and group-wise structural \texttt{IMP-Regroup} are collected in Figure~\ref{fig:imagenet_res}. The end-to-end inference time\footnote{TorchPerf \scalebox{0.85}{(\url{https://github.com/awwong1/torchprof})} is adopted as our tool to benchmark both the end-to-end and layer-wise running time on GPU devices.} of the obtained structural winning tickets at extreme sparsity levels is also presented, measured on a single 2080 TI GPU with a batch size of $64$. Extreme sparsity is defined as the maximum sparsity at which the subnetwork still has accuracy superior to its dense counterpart.
From the Tiny-ImageNet results in Figure~\ref{fig:imagenet_res} (\textit{left}), several positive observations can be drawn: \ding{182} Structural winning tickets with $60\%$ channel-wise structural sparsity and $74\%$ group-wise structural sparsity are located by \texttt{IMP-Refill} and \texttt{IMP-Regroup} respectively, which validates the effectiveness of our proposals. \ding{183} Although at high sparsity levels (i.e., $>50\%$) \texttt{IMP-Refill+} outperforms \texttt{IMP-Refill} when both start from the same unstructured \texttt{IMP} subnetworks, \texttt{IMP-Refill} shows a clear advantage in the overall trade-off between channel-wise structural sparsity and accuracy. A possible explanation is that \textit{refilling+} brings in undesired channels, which results in a degraded performance trade-off. \ding{184} \texttt{IMP-Regroup} performs better at high sparsities. This is expected, since fine-grained group-wise structural patterns make the networks more amenable to pruning. \ding{185} Extreme channel- / group-wise structural winning tickets with $45\%\sim50\%$ / $74\%$ sparsity from \texttt{IMP-Refill(+)} / \texttt{IMP-Regroup} achieve $57.53\%\sim61.79\%$ / $64.84\%$ GPU running time savings, without sacrificing accuracy.
As for the large-scale ImageNet experiments, the conclusions are slightly different: \ding{182} There is almost no difference between the performance of \texttt{IMP-Refill} and \texttt{IMP-Refill+}, and neither can find channel-wise structural winning tickets. This nevertheless suggests that our picking rule (i.e., the $\ell_1$ norm of channel weights) provides a good estimate of channel importance, although it is too aggressive for the ImageNet experiments. \ding{183} The group-wise structural winning ticket at $31\%$ sparsity still exists for (RN-50, ImageNet), although this low sparsity brings only a limited $1\%$ time saving. For a better efficiency-performance trade-off, \texttt{IMP-Regroup} is capable of locating structural subnetworks at $51\%$ / $58\%$ sparsity with $53.75\%$ / $60.23\%$ time savings and $0.33\%$ / $0.95\%$ accuracy drops.
\begin{figure*}[t]
\centering
\vspace{-0.1em}
\includegraphics[width=1\linewidth]{Figs/Ablation.pdf}
\vspace{-8mm}
\caption{{\small (\textit{Left}) Performance of structural tickets grouped from diverse initial unstructured masks. (\textit{Middle}) Performance of group-wise structural tickets with different weight rewinding. (\textit{Right}) Performance comparisons between \texttt{IMP-Regroup} and group-aware \texttt{IMP} as described in Algorithm~\ref{alg:G_IMP}. Testing accuracies (\%) over network sparsity levels (\%) are reported on (RN-18,C10).}}
\vspace{-2mm}
\label{fig:source_res}
\end{figure*}
\vspace{-2mm}
\paragraph{CIFAR with diverse network architectures.} We then validate our approaches on CIFAR-10/100 (C10/100) with diverse network backbones, including Wide-ResNet-32-2, MobileNet-v1, VGG-16, and ResNet-18. Based on the extensive results in Figures~\ref{fig:cifar_res_small} and~\ref{fig:cifar_res_large}, we find: \ding{182} On the \{(WRN-32-2,C10), (WRN-32-2,C100), (MBNet-v1,C10), (MBNet-v1,C100), (VGG-16,C10), (VGG-16,C100), (RN-18,C10), (RN-18,C100)\} schemes, we consistently disclose the existence of structural winning tickets with \{$53\%$, $28\%$, $67\%$, $0\%$, $60\%$, $40\%$, $50\%$, $0\%$\} channel-wise sparsity and \{$66\%$, $36\%$, $72\%$, $56\%$, $80\%$, $80\%$, $78\%$, $78\%$\} group-wise sparsity from \texttt{IMP-Refill(+)} and \texttt{IMP-Regroup}, respectively. \ding{183} With the same network, pursuing channel-wise sparse patterns on CIFAR-100 is more challenging than on CIFAR-10, possibly due to the larger dataset complexity. On the same dataset, larger networks tend to have larger extreme sparsities for both channel- and group-wise structural winning tickets, with the exception of \texttt{IMP-Refill(+)} on (RN-18, C100). \ding{184} At moderate sparsity levels (i.e., $<50\%$), \texttt{IMP-Regroup} behaves similarly to \texttt{IMP-Refill(+)}, while \texttt{IMP-Regroup} has superior performance at high sparsity levels. \ding{185} Up to \{$57.75\%$, $60.60\%$, $55.45\%$, $64.93\%$\} GPU running time savings are obtained by group-wise structural winning tickets with undamaged performance on \{(VGG-16,C10), (VGG-16,C100), (RN-18,C10), (RN-18,C100)\}, which surpass \texttt{IMP}, \texttt{IMP-Refill(+)}, and the dense models by a significant efficiency margin. An exception is that \texttt{IMP-Refill} on (VGG-16,C10) achieves the best time savings, i.e., $63.11\%$.
\vspace{-2mm}
\paragraph{Layer-wise speedups.} Figures~\ref{fig:layerwise} and~\ref{fig:layerwise_c100} show the layer-wise speedup of convolution operations in VGG-16's extreme structural winning tickets from different algorithms. \texttt{IMP-Regroup} presents impressive layer-wise speedups of up to $6.67$x compared to the others, especially in the last few layers (e.g., conv. $12$). The possible reasons lie in two aspects: ($i$) the later layers reach a larger compression ratio and have greater potential for acceleration; ($ii$) the \textit{regrouping} algorithm prefers convolutional layers (i.e., the later layers in VGG-16) with a larger number of kernels, which makes it easier to form appropriate dense blocks, as also suggested by~\citet{rumi2020accelerating}.
\begin{figure}[!ht]
\centering
\vspace{-0.1em}
\includegraphics[width=1\linewidth]{Figs/layerwise_c10.pdf}
\vspace{-8mm}
\caption{{\small The layer-wise performance of convolution operations in extreme structural winning tickets of (VGG-16, C10). The first six conv. operations are omitted since there is no meaningful speedup, coincided with~\citet{rumi2020accelerating}. Marks like ``C: 2.77" indicate the layer-wise compression ratio of \texttt{IMP-Regroup}.}}
\vspace{-2mm}
\label{fig:layerwise}
\end{figure}
\section{Ablation Study and Visualization}
\vspace{-1mm}
\paragraph{Different sources of unstructured masks.} Intuitively, the initial unstructured sparse mask should play an essential role in the achievable performance of our proposed ``post-processing'' techniques. We therefore conduct a comprehensive ablation study on the various sources of initial sparse masks in Figure~\ref{fig:source_res}, including \texttt{IMP}, \texttt{OMP}, \texttt{RP}, \texttt{SNIP}, \texttt{GraSP}, \texttt{SynFlow}, and \texttt{ADMM}. The details of the compared methods are in Section~\ref{sec: preliminaries}. We observe that \texttt{IMP} and \texttt{OMP} provide the initial unstructured masks of the top-$2$ highest quality for our \textit{regrouping} algorithm, in terms of the train-from-scratch accuracy of the grouped structural subnetworks.
\vspace{-1mm}
\paragraph{Different initializations for re-training.} Initialization~\citep{frankle2018the,Renda2020Comparing}, another key factor in LTH, also contributes significantly to the existence of winning tickets. To exhaustively investigate the effect of different initializations (e.g., rewound weights), we launch experiments starting from diverse rewound weights ($\{5\%, 10\%, 20\%, 50\%, 100\%\}$ of the total training epochs) as well as from a random re-initialization. In Figure~\ref{fig:source_res}, using the $50\%$ rewound weights reaches the overall best performance; the other weight rewinding setups perform similarly and clearly surpass random re-initialization at sparsity levels $>30\%$.
\vspace{-1mm}
\paragraph{Group-aware \texttt{IMP}.} This work mainly focuses on the post-processing of unstructured sparse masks. Another possibility is integrating \textit{regrouping} into \texttt{IMP} by alternately performing unstructured magnitude pruning and regrouping, which we term group-aware \texttt{IMP}. As shown in Fig.~\ref{fig:source_res}, it performs worse than \texttt{IMP-Regroup} due to the stricter constraint on sparse patterns.
\vspace{-1mm}
\paragraph{Extra study.} More investigations about ($1$) transfer tickets and training efficiency; ($2$) comparison with random tickets; ($3$) ablation on different training settings; ($4$) FLOPs saving; ($5$) visualization of sparse masks are in Appendix~\ref{sec:more_results}.
\vspace{-1mm}
\section{Conclusion}
\vspace{-1mm}
In this paper, we challenge the ``common sense'' view that an identified \texttt{IMP} winning ticket can only have unstructured sparsity, which severely limits its practical usage due to the irregular patterns. We demonstrate, for the first time, the existence of structural winning tickets by leveraging post-processing techniques, i.e., \textit{refilling(+)} and \textit{regrouping}. The located channel- and group-wise structural subnetworks achieve significant inference speedups, up to $6.67$x, on real hardware platforms. In this sense, our positive results bridge the gap between the lottery ticket hypothesis and practical acceleration in real-world scenarios. In future work, we would be interested in examining LTH with more effective structural sparsity for real-time mobile computing.
\vspace{-0.2em}
\section*{Acknowledgment}
\vspace{-1mm}
Z.W. is in part supported by an NSF EPCN project (\#2053272). Y.W. is in part supported by an NSF CMMI project (\#2013067).
\section{Introduction}
The purpose of a measurement is
to determine the value of a physical quantity.
One often speaks of the {\it true value}, an idealized concept
achieved by an infinitely precise
and accurate measurement, i.e. immune from errors. In practice the result
of a measurement is expressed in terms of the best
estimate of the true value and of a related uncertainty.
Traditionally the various
contributions to the overall uncertainty are classified
in terms of {\it ``statistical''} and {\it ``systematic''}
uncertainties, expressions
which reflect the sources of the experimental
errors (the quote marks indicate that a different
way of classifying uncertainties
will be adopted in this paper).
``Statistical'' uncertainties
arise from variations in the results of repeated observations
under (apparently) identical conditions. They vanish
if the number of observations becomes very large
(``the uncertainty is dominated by systematics'', is the typical
expression used in this case) and can be treated - in most
of cases, but with some exceptions of great relevance
in High Energy Physics - using conventional statistics based on the
frequency-based definition of probability.
On the other hand, it is not possible to treat
``systematic'' uncertainties coherently in the
frequentistic framework. Several ad hoc prescriptions for how
to combine ``statistical'' and ``systematic'' uncertainties
can be found in text books and in the literature:
{\sl ``add them linearly''};
{\sl ``add them linearly if $\ldots$, else add them
quadratically''};
{\sl ``don't add them at all''}, and so on (see, e.g.,
part 3 of \cite{DIN}). The ``fashion'' at the moment is to
add them quadratically if they are considered
independent, or to build a covariance matrix of ``statistical''
and ``systematic'' uncertainties
to treat general cases.
These procedures are not justified by conventional
statistical theory, but they are accepted
because of the pragmatic good sense of physicists.
For example, an experimentalist may be
reluctant to add twenty or more
contributions linearly to evaluate the uncertainty
of a complicated measurement, or decides
to treat
the correlated ``systematic'' uncertainties
``statistically'', in both cases
unaware of, or simply not caring about, violating
frequentistic principles.
The only way to deal with these and related
problems in a consistent way
is to abandon the frequentistic interpretation
of probability introduced at the beginning of this century,
and to recover the intuitive concept of probability
as {\it degree of belief}. Stated differently, one needs to associate
the idea of probability with the lack of knowledge,
rather than with the outcome of repeated experiments.
This has been recognized also by the International Organization
for Standardization (ISO) which assumes the subjective definition
of probability
in its
{\it ``Guide to the expression of uncertainty in measurement''}\cite{ISO}.
These notes are organized as follow:
\begin{itemize}
\item
sections 1-5 give a general introduction
to subjective probability;
\item
sections 6-7 summarize some concepts and formulae concerning
random variables, needed for many applications;
\item
section 8 introduces the problem of measurement uncertainty
and deals with the terminology.
\item
sections 9-10 present the analysis model;
\item
sections 11-13 show several physical applications of the model;
\item
section 14 deals with the approximate methods needed
when the general solution becomes complicated; in this context
the ISO recommendations
will be presented and discussed;
\item
section 15 deals with uncertainty propagation. It
is particularly short because, in this scheme, there
is no difference between the treatment of ``systematic'' uncertainties
and indirect measurements; the section simply
refers the results of sections
11-14;
\item
section 16 is dedicated to a detailed discussion about
the covariance matrix of correlated data and the trouble
it may cause;
\item
section 17 was added as an
example of a more complicated inference (multidimensional unfolding)
than those treated in sections 11-15.
\end{itemize}
\newpage
\section{Probability}
\subsection{What is probability?}
The standard answers to this question are
\begin{enumerate}
\item
``the ratio of the number of favorable cases to the
number of all cases'';
\item
``the ratio of the times the event occurs in a test series
to the total number of trials in the series''.
\end{enumerate}
It is very easy to show that neither of these
statements can define the concept of probability:
\begin{itemize}
\item
Definition (1) lacks
the clause ``if all the cases are
\underline{equally probable}''. This has
been done here intentionally, because people often forget it.
The fact that the definition of probability makes use of the term
``probability'' is clearly embarrassing. Often in text books the
clause is replaced by ``if all the cases are
equally possible'', ignoring that in this
context ``possible''
is just a synonym of ``probable''. There is no way out.
This statement does not
define probability but
gives, at most, a useful rule for evaluating it -
assuming we
know what probability is, i.e. of what we are talking about.
The fact that this definition is labelled
``classical'' or ``Laplace'' simply shows that some
authors are not
aware of what the ``classicals'' (Bayes, Gauss, Laplace, Bernoulli, etc.)
thought about this matter. We shall call this ``definition''
{\it combinatorial}.
\item
Definition (2) is also incomplete, since it lacks the condition
that the number of trials must be very large (``it goes to infinity'').
But this is a minor point. The crucial point is that the
statement merely defines the relative {\it frequency} with
which an event
(a ``phenomenon'')
occurred in the past. To use frequency as a measurement of
probability we have to assume that the phenomenon
occurred in the past, and will occur in the future,
\underline{with the same probability}. But who can tell if this hypothesis
is correct? Nobody: \underline{we}
have to guess in every single case. Notice that, while in the
first ``definition'' the assumption of equal probability
was explicitly stated, the analogous clause is often
missing from the second one. We shall call this ``definition''
{\it frequentistic}.
\end{itemize}
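In formulae, the two statements above amount to the evaluation rules
\begin{displaymath}
P(E) = \frac{\#\ \mbox{favorable cases}}{\#\ \mbox{possible cases}}\,,
\hspace{1.5cm}
P(E) = \lim_{n\rightarrow\infty} \frac{n_E}{n}\,,
\end{displaymath}
where $n_E$ is the number of times the event $E$ occurred in $n$ trials.
As argued above, they can serve, at best, as rules for
\underline{evaluating} probability in special circumstances,
not for defining it.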
We have to conclude that if we want to make use of these
statements
to assign a numerical value to probability, in those cases
in which \underline{we judge} that the clauses are satisfied, we need
a better definition of probability.
\subsection{Subjective definition of probability}
\begin{figure}[t]
\centering\epsfig{file=dago29.eps,width=\linewidth,clip=}
\caption{\sf Certain and uncertain events.}
\label{fig:probability}
\end{figure}
So, {\it ``what is probability?''}
Consulting
a good dictionary
helps.
Webster's states, for example, that ``{\it probability}
is the quality, state, or degree of being probable'',
and then that {\it probable} means ``supported by evidence strong
enough to make it likely though not certain to be true''.
The concept of probable arises in reasoning when the concept
of {\it certain} is not applicable. When it is impossible to
state firmly if an {\it event} (we use this word as a synonym
for {\it any possible statement}, or {\it proposition},
relative to past, present or future)
is {\it true} or {\it false}, we just say that this is
{\it possible}, {\it probable}. Different events may have different
{\it levels} of probability, depending on whether we think that they are more
likely to be true or false (see Fig.~\ref{fig:probability}).
The concept of probability
is then simply
\begin{quote}
{\it a measure of the \underline{degree of belief}
that an event will}\footnote{The use of the future tense
does not imply that this definition can only be
applied for future events. ``Will occur'' simply
means that the statement ``will be proven to be true'',
even if it refers to the past. Think for example of
``the probability that it was raining in Rome on the day
of the battle of Waterloo''.} {\it occur}.
\end{quote}
This is the kind of definition that one finds in
Bayesian books\cite{Jeffreys,Winkler,Definetti3,Press,Bernardo}
and the formulation cited here is that given in
the ISO {\it ``Guide to the Expression of
Uncertainty in Measurement''}\cite{ISO},
of which we will talk later.
At first sight this definition does not seem to be superior to
the combinatorial or the frequentistic ones.
At least they give some practical
rules to calculate ``something''. Defining
probability as {\it ``degree of belief''} seems too vague to
be of any utility. We need, then, some explanation of its
meaning; a tool to evaluate it - and
we will look at this tool (Bayes' theorem)
later. We will end this section with some
explanatory remarks on the definition, but
first let us discuss the
\underline{advantages} of this definition:
\begin{itemize}
\item
it is natural, very general and it can be applied to any thinkable
event, independently of the feasibility of making an
inventory of all (equally) possible and favorable cases, or
of repeating the experiment under conditions
of equal probability;
\item
it avoids the linguistic schizophrenia of having to
distinguish ``scientific'' probability from
``non scientific'' probability used in
everyday reasoning (though a meteorologist
might feel offended to hear that
evaluating the probability of rain tomorrow
is ``not scientific'');
\item
as far as measurements are concerned, it allows
us to talk about the probability of the {\it true value} of a physical
quantity. In the frequentistic
frame it is only possible to talk about the probability
of the {\it outcome} of an experiment, as the true value is
considered to be a constant. This approach is
so unnatural that most physicists speak of
``$95\,\%$ probability that the mass of the Top quark is
between $\ldots$'',
\underline{although} they believe that the correct definition of probability
is the limit of the frequency;
\item
it is possible to make a very general theory of uncertainty
which can take into account any source of statistical and
systematic error, independently of their distribution.
\end{itemize}
To get a better understanding of the subjective definition of
probability let us take a look at \underline{odds in betting}.
The higher the
degree of belief
that an event will occur, the higher
the amount of money $A$ that someone (``a rational bettor'')
is ready to pay in order to receive a sum of money $B$ if the event
occurs. Clearly the bet must be acceptable (``coherent''
is the correct adjective), i.e. the amount of money $A$
must be smaller than or equal to $B$
and not negative (who would accept such a bet?).
The cases of $A=0$ and $A=B$ mean that the events are considered
to be false or true, respectively,
and obviously it is not worth betting on certainties.
They are just limit cases, and in fact they can be
treated with standard logic.
It seems reasonable\footnote{This is not always
true in real life. There are also other
practical problems related to betting which have been
treated in the
literature. Other variations of the
definition have also been proposed, like the one
based on the {\it penalization rule}. A discussion of the
problem goes beyond the purpose of these notes.}
that the amount of money $A$ that one is willing to pay
grows linearly
with the degree of belief.
It follows that if someone thinks that
the probability of the event $E$ is $p$, then he
will bet $A=pB$ to get
$B$ if the event occurs, and to lose $pB$
if it does not. It is easy to
demonstrate that the condition of ``coherence''
implies that $0\le p\le 1$.
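As a minimal illustration (the little function below is mine, just a toy
sketch of the scheme described above), the implied degree of belief of a
coherent bettor is simply the ratio of the stakes:
\begin{verbatim}
def implied_probability(A: float, B: float) -> float:
    """Degree of belief p = A/B of a coherent bettor who pays A
    to receive B if the event occurs."""
    if B <= 0 or not (0 <= A <= B):
        raise ValueError("incoherent bet: need 0 <= A <= B")
    return A / B

print(implied_probability(20.0, 100.0))   # 0.2, i.e. 20% degree of belief
\end{verbatim}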
What has gambling to do with physics? The
definition of probability through
betting odds has to be considered {\it operational}, although there is no
need to make a bet (with whom?) each time one
presents a result. It has the important role of forcing
one to make an
{\it honest} assessment of the value of probability that
one believes. One could replace money with other forms
of gratification or penalization, like the increase or
the loss of scientific reputation. Moreover, the
fact that this operational procedure is not to
be taken literally should not be surprising. Many
physical quantities are defined in a similar way.
Think, for example, of the text book definition of
the electric field, and try to use it
to measure $\vec{E}$
in the proximity of an electron.
A nice example comes from the definition of a poisonous chemical
compound: it {\it would be lethal \underline{if} ingested}.
Clearly it is preferable to keep this operational definition
at a hypothetical level, even though it is the
best definition of the concept.
\subsection{Rules of probability}
\begin{figure}
\centering\epsfig{file=dago19.eps,width=\linewidth,clip=}
\caption{\sf Venn diagrams and set properties.}
\label{fig:Venn}
\end{figure}
The subjective definition of probability, together with the condition
of {\it coherence}, requires that $0 \le p \le 1$. This is one of the rules
which probability has to obey. It is possible, in fact, to demonstrate
that coherence leads to the standard rules of probability,
generally known as {\it axioms}. At this point
it is worth
clarifying the relationship between the axiomatic approach
and the others:
\begin{itemize}
\item
combinatorial and frequentistic ``definitions''
give
useful rules for evaluating probability, although
they do not, as is often claimed,
define the concept;
\item
in the axiomatic approach one refrains
from defining what the probability
is and how to evaluate it: probability
is just any real number which satisfies the axioms.
It is easy to demonstrate that the probabilities
evaluated using the combinatorial and the frequentistic
prescriptions do in fact satisfy the axioms;
\item
the subjective approach to probability, together with the
coherence requirement,
\underline{defines} what probability is and provides
the rules which its evaluation must obey; these rules
turn out to
be the same as the axioms.
\end{itemize}
Since everybody is familiar with the axioms and with the analogy
{\it events}\,$\Leftrightarrow$\,{\it sets} (see Tab.~\ref{tab:eventi_insiemi}
and Fig.~\ref{fig:Venn})
let us remind ourselves of the {\it rules of probability} in this form:
{ \small
\begin{table}[tb]
\begin{center}
\begin{tabular}{|l|l|c|} \hline
\multicolumn{1}{|c|}{Events} & \multicolumn{2}{|c|}{sets} \\ \hline
& & symbol \\ \hline
event & set & $E$ \\
certain event & sample space & $\Omega$ \\
impossible event & empty set & $\emptyset$ \\
implication & inclusion & $E_1\subseteq E_2$\\
& (subset) & \\
opposite event & complementary set & $\overline{E}$
\hspace{0.4cm}($E\cup \overline{E} = \Omega$) \\
(complementary) & & \\
logical product (``AND'') & intersection & $E_1 \cap E_2$ \\
logical sum (``OR'') & union & $E_1 \cup E_2$ \\
incompatible events& disjoint sets & $E_1 \cap E_2 = \emptyset$\\
complete class & finite partition&
$\left\{ \begin{array}{l}
E_i \cap E_j = \emptyset \ (i\ne j)\\
\cup_i E_i = \Omega
\end{array}\right.$ \\ \hline
\end{tabular}
\end{center}
\caption{\sf Events versus sets.}
\label{tab:eventi_insiemi}
\end{table}
}
\begin{description}
\item[Axiom 1] $0 \leq P(E) \leq 1$;
\item[Axiom 2] $ P(\Omega) = 1$ (a certain event has probability 1);
\item[Axiom 3]
$ P(E_1 \cup E_2) = P(E_1)+P(E_2)$, if
$ E_1 \cap E_2 = \emptyset$
\end{description}
From the basic rules the following properties can be derived:
\begin{description}
\item[1:]
$P(E) = 1 - P(\overline E) $;
\item[2:]
$P(\emptyset) = 0$;
\item[3:]
if $A\subseteq B$ then $P(A) \leq P(B) $;
\item[4:]
$P(A\cup B) = P(A) + P(B) - P(A\cap B)$\,.
\end{description}
We also anticipate here a fifth property which will be discussed
in section ~\ref{ss:conditional}:
\begin{description}
\item[5:] $ P(A\cap B) = P(A|B)\cdot P(B) = P(A)\cdot P(B|A)\,.$
\end{description}
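These rules are easy to check numerically. The following sketch (the die
and the events are my own choice) evaluates probabilities on a finite
sample space with the combinatorial rule and verifies the properties
listed above:
\begin{verbatim}
from fractions import Fraction

omega = set(range(1, 7))                   # a regular die
def P(E):                                  # combinatorial evaluation rule
    return Fraction(len(E & omega), len(omega))

A, B = {2, 4, 6}, {4, 5, 6}                # "even", "greater than 3"
assert P(omega) == 1                       # axiom 2
assert P(set()) == 0                       # property 2
assert P(omega - A) == 1 - P(A)            # property 1
assert P(A | B) == P(A) + P(B) - P(A & B)  # property 4
print(P(A | B))                            # 2/3
\end{verbatim}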
\subsection{Subjective probability and ``objective''%
description of the physical world}
The subjective definition of probability seems
to contradict
the aim of physicists to describe the laws of Physics
in the most
objective way (whatever this means $\ldots$).
This is one of the reasons why many regard
the subjective definition of probability
with suspicion (but
probably the main reason is because
we have been taught at University that
``probability is frequency''). The main philosophical
difference between this concept of probability and an
objective definition that
{\it ``we would have liked''} (but which does not exist in reality)
is the fact that $P(E)$ is not an intrinsic characteristic
of the event $E$, but depends on the {\it status of information}
available to whoever evaluates $P(E)$.
The ideal concept of ``objective''
probability is recovered when everybody has the ``same'' status of
information. But even in this case it would be better to speak of
{\it intersubjective} probability. The best way to convince
ourselves about
this aspect of probability is to try to ask practical
questions and to evaluate the probability in specific cases,
instead of seeking refuge in abstract questions. I find, in fact,
that, paraphrasing a famous statement about Time,
``Probability is objective as long as I am not asked to evaluate it''.
Some examples:
\begin{description}
\item[Example 1:]
``What is the probability
that a molecule of nitrogen at atmospheric pressure and room
temperature has a velocity between 400 and 500 m/s?''. The answer
appears easy: ``take the Maxwell distribution formula from a text
book, calculate an integral and get a number''. Now
let us change the question:
{\it ``I give you a vessel containing nitrogen
and a detector}
{\it capable of
measuring the speed of a single molecule and you
set up the apparatus. Now, what is the probability that the
\underline{first}
molecule that hits the detector has a velocity between
400 and 500 m/s?''}. Anybody who has minimal experience (direct
or indirect) of experiments would hesitate before answering.
He would study the problem carefully and perform
preliminary measurements and checks.
Finally he would {\it probably}
give not just a single number, but a range of possible numbers
compatible with the formulation of the problem. Then
he starts the experiment and eventually, after 10 measurements,
he may form
a different opinion about the outcome of the eleventh measurement.
\item[Example 2:]
``What is the probability that the gravitational constant $G_N$
has a value between $6.6709\cdot 10^{-11}$ and $6.6743 \cdot 10^{-11}$
$\mbox{m}^3\mbox{kg}^{-1}\mbox{s}^{-2}$?''. Last year you
could have looked at the latest
issue of the Particle Data Book\cite{PDG}
and answered that the probability was $95\,\%$. Since then - as you
probably know - three new measurements of $G_N$ have been
performed\cite{gn}
and we now have \underline{four} numbers which do not agree
with each other (see Tab. \ref{tab:Gn}).
The probability of the true value of $G_N$
being in that range is currently dramatically decreased.
{ \small
\begin{table}
\begin{center}
\begin{tabular}{cccc} \hline
Institute & $G_N$ $\left(10^{-11}
\frac{\mbox{m}^3}{\mbox{kg}\cdot\mbox{s}^{2}}\right)$ &
$\frac{\sigma(G_N)}{G_N}$ (ppm) &
$\frac{G_N-G_N^C}{G_N^C}$ ($10^{-3}$)\\ \hline
CODATA 1986 (``$G_N^C$'') & $6.6726\pm 0.0009$ & 128 & -- \\
PTB (Germany) 1994 & $6.7154\pm 0.0006$ & 83 & $+6.41\pm 0.16$ \\
MSL (New Zealand) 1994& $6.6656\pm 0.0006$ & 95 & $-1.05\pm 0.16$ \\
Uni-Wuppertal & $6.6685\pm 0.0007$ & 105& $-0.61\pm 0.17$ \\
(Germany) 1995 & & & \\ \hline
\end{tabular}
\end{center}
\caption{\sf Measurement of $G_N$ (see text).}
\label{tab:Gn}
\end{table}
}
\item[Example 3:]
``What is the probability that the mass of the Top
quark, or that of any of the supersymmetric particles, is below
20 or $50\,\mbox{GeV}/c^2$?''. Currently it looks
as if it must be zero. Ten years ago
many experiments were intensively looking
for these particles in those energy ranges.
Because so
many people were searching for them, with
enormous human and capital investment, it means that,
\underline{at that time},
the probability was considered rather \underline{high}, $\ldots$
high enough for fake signals
to be reported as strong evidence for them\footnote{We will talk
later about the influence of {\it a priori} beliefs on the
outcome of an experimental investigation.}.
\end{description}
The above examples show how the evaluation of probability
is \underline{conditioned} by some {\it a priori} (``theoretical'')
prejudices and by some facts (``experimental data''). ``Absolute''
probability makes no sense. Even the classical example
of probability $1/2$ for each of the results in tossing a coin
is only acceptable if: the coin is regular;
it does not remain vertical (not impossible
when playing on the beach);
it does not fall into a manhole; etc.
The subjective point of view is expressed
in a provocative way
by de Finetti's\cite{Definetti3} famous dictum:
\begin{quote}
\begin{center}
``PROBABILITY DOES NOT EXIST''.
\end{center}
\end{quote}
\section{Conditional probability and Bayes' theorem}
\subsection{Dependence of the probability on the
status of information}\label{ss:conditional}
If the status of information changes, the evaluation of
the probability also has to be modified. For example
most people would agree that the probability
of a car being stolen depends on the model, age and parking site.
To take an example from physics, the probability that
in a HERA detector a charged particle
of $1\, \mbox{GeV}$
gives a certain number of ADC counts due
to the energy loss in a gas detector can be evaluated
in a very general way - using High Energy Physics jargon - making a
(huge) Monte Carlo simulation which takes into account all
possible reactions (weighted with their cross sections),
all possible backgrounds, changing all physical and detector
parameters within {\it reasonable} ranges, and also taking into
account the trigger efficiency. The probability
changes if one knows that the particle is a $K^+$: instead of very
complicated Monte Carlo one can just run a single particle
generator. But then it changes further if one also knows the
exact gas mixture,
pressure, $\ldots$, up to the latest
determination of the pedestal and the temperature of the ADC module.
\subsection{Conditional probability}
Although everybody knows the formula of conditional probability,
it is useful to derive it here. The notation is $P(E|H)$,
to be read ``probability of $E$ given $H$'', where $H$ stands for
{\it hypothesis}.
This means: the probability that $E$ will occur if one
already knows that $H$ has occurred\footnote{$P(E|H)$ should not be
confused with $P(E\cap H)$, ``the probability that both
events occur''. For example $P(E\cap H)$ can be very small, but
nevertheless $P(E|H)$ very high: think of the limit case
$$ P(H)\equiv P(H\cap H) \le P(H|H) = 1 \,:$$
``$H$ given $H$'' is a certain event no matter how small $P(H)$ is,
even if $P(H)=0$ (in the sense of Section ~\ref{sec:cont_var}).}.
\newpage
The event $E|H$ can have three values:
\begin{description}
\item[TRUE:] if $E$ is TRUE \underline{and} $H$ is TRUE;
\item[FALSE:] if $E$ is FALSE \underline{and} $H$ is TRUE;
\item[UNDETERMINED:] if $H$ is FALSE; in this case we are simply
not interested in what happens to $E$. In terms
of betting, the bet is
invalidated and nobody loses or gains.
\end{description}
Then $P(E)$ can be written $P(E|\Omega)$,
to state explicitly that it is the probability of
$E$ whatever happens to the rest of the world
($\Omega$ means all possible events). We realize immediately
that this condition is really too vague and nobody would
bet a cent on such a statement. The reason for usually
writing $P(E)$
is that many conditions
are implicitly - and reasonably - assumed in most
circumstances. In the classical problems of coins and dice, for example,
one \underline{assumes} that they are regular. In the example
of the energy loss,
it was implicit -``obvious''- that the
High Voltage was on (at which voltage?)
and that HERA was running (under which condition?).
But one has to take care: many riddles are
based on the fact that one tries to find a solution which is
valid under stricter conditions than those explicitly stated
in the question (e.g. many people make bad business deals signing
contracts in which what ``was obvious''
was not explicitly stated).
In order to derive the formula of conditional probability
let us assume for a moment that it is reasonable to
talk about
``absolute probability'' $P(E)=P(E|\Omega)$,
and let us rewrite
\begin{eqnarray}
P(E) \equiv P(E|\Omega)
&\underset{\bf a}{=}& P(E\cap\Omega) \nonumber \\
&\underset{\bf b}{=} &
P\left(E\cap (H \cup \overline{H})\right) \nonumber \\
& \underset{\bf c}{=}&
P\left((E\cap H) \cup (E\cap\overline{H})\right) \nonumber \\
& \underset{\bf d}{=}& P(E\cap H) + P(E\cap\overline{H})\,,
\label{eq:cond0}
\end{eqnarray}
where the result has been achieved through the following steps:
\begin{description}
\item[(a):] $E$ implies $\Omega$ (i.e. $E\subseteq \Omega$)
and hence $E\cap\Omega=E$;
\item[(b):] the complementary events $H$ and $\overline{H}$
make a {\it finite partition} of $\Omega$,
i.e. $H \cup \overline{H} = \Omega$;
\item[(c):] distributive property;
\item[(d):] axiom 3.
\end{description}
The final result of (\ref{eq:cond0}) is very simple:
$P(E)$ is equal to the probability that $E$ occurs and $H$ also
occurs, plus the probability that $E$ occurs but $H$ does not
occur. To obtain $P(E|H)$ we just get rid of the part of
$E$ lying outside $H$ (i.e. $E\cap\overline{H}$)
and renormalize the probability
dividing by $P(H)$, assumed to be different from zero. This guarantees
that if $E=H$ then $P(H|H)=1$.
The expression of the conditional probability is finally
\begin{equation}
P(E|H) = \frac{P(E\cap H)}{P(H)}\hspace{1.0cm}(P(H)\ne 0)\,.
\label{eq:cond1}
\end{equation}
In the most general (and realistic)
case, where both $E$ and $H$ are conditioned by the occurrence of
a third event $H_\circ$, the formula becomes
\begin{equation}
P(E|H, H_\circ) =
\frac{P\left(E\cap H| H_\circ\right) }
{P(H|H_\circ)}\hspace{1.0cm}(P(H|H_\circ)\ne 0)\,.
\label{eq:cond2}
\end{equation}
Usually we shall make use of (\ref{eq:cond1})
(which means $H_\circ=\Omega$) assuming that $\Omega$ has been
properly chosen.
We should also remember that (\ref{eq:cond1}) can be resolved
with respect to $P(E\cap H)$, obtaining the well known
\begin{equation}
P(E\cap H) = P(E|H)P(H)\,,
\label{eq:pcomp1}
\end{equation}
and by symmetry
\begin{equation}
P(E\cap H) = P(H|E)P(E)\,.
\label{eq:pcomp2}
\end{equation}
Two events are called {\it independent} if
\begin{equation}
P(E\cap H) = P(E)P(H)\,.
\end{equation}
This is equivalent to saying that $P(E|H) = P(E)$ and $P(H|E)=P(H)$,
i.e. the knowledge that one event has occurred does not change the
probability of the other. If $P(E|H) \ne P(E)$ then the events
$E$ and $H$ are {\it correlated}. In particular:
\begin{itemize}
\item
if $P(E|H) > P(E)$ then $E$ and $H$ are {\it positively} correlated;
\item
if $P(E|H) < P(E)$ then $E$ and $H$ are {\it negatively} correlated.
\end{itemize}
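A toy calculation (again a die of my own choosing) makes these
definitions concrete, using (\ref{eq:cond1}):
\begin{verbatim}
from fractions import Fraction

omega = set(range(1, 7))                     # a regular die
P = lambda S: Fraction(len(S & omega), len(omega))

E, H = {2, 4, 6}, {4, 5, 6}                  # "even", "greater than 3"
print(P(E & H) / P(H), P(E))                 # 2/3 > 1/2: positive correlation
assert P(E & H) != P(E) * P(H)               # E and H are not independent
\end{verbatim}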
\subsection{Bayes' theorem}
Let us think of all the possible, mutually exclusive,
hypotheses $H_i$ which could condition
the event $E$. The problem here is the inverse of the previous one:
what is the probability of $H_i$ under the hypothesis that $E$
has occurred? For example,
``what is the probability that a charged
particle which went in a certain direction and has lost
between 100 and $120\,\mbox{keV}$
in the detector,
is a $\mu$, a $\pi$, a $K$, or a $p$?'' Our event $E$
is ``energy loss between 100 and $120\,\mbox{keV}$'',
and $H_i$
are the four ``particle hypotheses''.
This example sketches the basic problem for
any kind of measurement: having observed an {\it effect},
to assess the probability of each of the {\it causes } which
could have produced it. This intellectual
process is called {\it inference}, and it will be discussed
after section ~\ref{sec:inference}.
In order to calculate $P(H_i|E)$ let us rewrite the
joint probability $P(H_i\cap E)$, making use of
(\ref{eq:pcomp1}-\ref{eq:pcomp2}),
in two different ways:
\begin{equation}
P(H_i|E)P(E) = P(E|H_i)P(H_i)\,,
\end{equation}
obtaining
\begin{equation}
\boxed{
P(H_i|E) = \frac{P(E|H_i)P(H_i)}{P(E)}\,,
}
\label{eq:bayes1}
\end{equation}
or
\begin{equation}
\boxed{
\frac{P(H_i|E)}{P(H_i)} = \frac{P(E|H_i)}{P(E)}\,.
}
\label{eq:bayes1a}
\end{equation}
Since the hypotheses $H_i$ are mutually exclusive
(i.e. $H_i\cap H_j=\emptyset$, $\forall\, i\ne j$) and exhaustive
(i.e. $\bigcup_i H_i = \Omega$),
$E$ can be written as $\bigcup_i(E\cap H_i)$, the union of the
intersections of $E$ with each of the hypotheses $H_i$. It follows that
\begin{eqnarray}
P(E) \equiv P(E \cap \Omega) &=& P\left(E \cap\bigcup_i H_i\right)
= P\left(\bigcup_i (E \cap H_i)\right) \nonumber \\
&=& \sum_i P(E\cap H_i) \nonumber \\
&=& \sum_i P(E|H_i)P(H_i)\,,
\end{eqnarray}
where we have made use of (\ref{eq:pcomp1})
again in the last step.
It is then possible to rewrite (\ref{eq:bayes1})
as
\begin{equation}
\boxed{
P(H_i|E) = \frac{P(E|H_i)P(H_i)}{\sum_j P(E|H_j)P(H_j)}\,.
}
\label{eq:bayes2}
\end{equation}
This is the standard form by which {\it Bayes' theorem}
is known. (\ref{eq:bayes1})
and (\ref{eq:bayes1a}) are also different ways
of writing it. As the denominator of
(\ref{eq:bayes2}) is nothing but a normalization
factor (such that $\sum_i P(H_i|E)=1$), the formula
(\ref{eq:bayes2}) can be
written as
\begin{equation}
\boxed{
P(H_i|E) \propto P(E|H_i)P(H_i) \,.
}
\label{eq:bayes3}
\end{equation}
Factorizing $P(H_i)$ in (\ref{eq:bayes2}), and explicitly writing
the fact that all the events were already
conditioned by $H_\circ$, we can rewrite the formula
as
\begin{equation}
\boxed{
P(H_i|E, H_\circ) = \alpha P(H_i|H_\circ)\,,
}
\label{eq:bayes4}
\end{equation}
with
\begin{equation}
\alpha=\frac{P(E|H_i,H_\circ)}
{\sum_j P(E|H_j, H_\circ)P(H_j|H_\circ)}\,.
\label{eq:bayes5}
\end{equation}
These five ways of rewriting the same formula simply reflect
the importance that we shall give to this simple theorem.
They stress different aspects of the same concept:
\begin{itemize}
\item
(\ref{eq:bayes2}) is the standard way of writing it (although some
prefer (\ref{eq:bayes1}));
\item
(\ref{eq:bayes1a}) indicates that $P(H_i)$ is altered
by the condition $E$ with the same ratio with which
$P(E)$ is altered by the condition $H_i$;
\item
(\ref{eq:bayes3})
is the simplest and the most intuitive way to
formulate the theorem: ``the probability of $H_i$ given $E$ is
proportional to the {\it initial} probability of $H_i$ times
the probability of $E$ given $H_i$'';
\item
(\ref{eq:bayes4}-\ref{eq:bayes5})
show explicitly how
the probability of a certain hypothesis is updated when the
{\it status of information} changes:
\begin{description}
\item[\fbox{ $P(H_i|H_\circ)$}] (also indicated as $P_\circ(H_i)$) is
the {\it initial}, or {\it a priori}, probability (or simply
{\it ``prior''}) of $H_i$, i.e. the probability of this hypothesis
with the status of information available
\underline{before} the
knowledge that $E$ has occurred;
\item[\fbox{$P(H_i|E, H_\circ)$}] (or simply $P(H_i|E)$) is the
{\it final}, or {\it ``a posteriori''}, probability of $H_i$
\underline{after} the new information;
\item[\fbox{ $P(E|H_i, H_\circ)$}] (or simply $P(E|H_i)$) is
called {\it likelihood}.
\end{description}
\end{itemize}
To better understand the terms ``initial'', ``final'' and
``likelihood'', let us formulate the problem in a way closer
to the physicist's mentality, referring to {\it causes} and
{\it effects}: the causes can be all the physical sources
which may produce a certain {\it observable} (the effect). The
likelihoods are - as the word says - the
likelihoods that the effect follows from each of the causes.
Using our example of the $dE/dx$ measurement again, the
causes are all the possible charged particles which can
pass through the detector; the effect is the amount of observed
ionization;
the likelihoods are the probabilities that each of the particles
give that amount of ionization.
Notice that in this example we have fixed all
the other sources of influence: physics process,
HERA running conditions, gas mixture, High Voltage,
track direction, etc. This is our $H_\circ$.
The problem immediately gets rather complicated (all real cases,
apart from tossing coins and dice, are complicated!).
The real inference would be of the kind
\begin{equation}
P(H_i|E,H_\circ) \propto P(E|H_i, H_\circ)
P(H_i|H_\circ)P(H_\circ)\,.
\end{equation}
For each status of $H_\circ$ (the set of all the possible values
of the influence parameters) one gets a different result
for the final probability\footnote{The symbol $\propto$
could be misunderstood if one forgets that the proportionality
factor depends on all likelihoods and priors (see (\ref{eq:bayes4})).
This means that, for a given hypothesis $H_i$,
as the status of information $E$ changes,
$P(H_i|E,H_\circ)$ may change
even if $P(E|H_i, H_\circ)$ and
$P(H_i|H_\circ)$ remain constant,
because some of the other likelihoods are modified by the new information.}.
So, instead of getting a single number
for the final probability we have a distribution of values. This spread
will result in a large uncertainty of $P(H_i|E)$. This is what
every physicist knows: if the calibration constants of the detector
and the physics process \underline{are not under control},
the \underline{``systematic errors''} are large and the result is
of poor quality.
\subsection{Conventional use of Bayes' theorem}
Bayes' theorem follows
directly from the rules of probability,
and it can be used in any kind of approach. Let us take an
example:
\begin{description}
\item[Problem 1:]
A particle detector has a $\mu$ identification efficiency of $95\,\%$,
and a probability of identifying a $\pi$ as a $\mu$ of $2\,\%$. If a
particle is identified as a $\mu$, then
a trigger is issued. Knowing that
the particle beam is a mixture of $90\,\%$ $\pi$ and $10\,\%$ $\mu$,
what is the probability that a trigger is really fired by a $\mu$?
What is the signal-to-noise ($S/N$) ratio?
\item[Solution:]
The two hypotheses (causes) which could condition the event (effect)
$T$ (=``trigger fired'') are ``$\mu$'' and ``$\pi$''. They are incompatible
(clearly) and exhaustive (90\,\%+10\,\%=100\,\%). Then:
\begin{eqnarray}
P(\mu|T) & = & \frac{P(T|\mu)P_\circ(\mu)}
{P(T|\mu)P_\circ(\mu) + P(T|\pi)P_\circ(\pi)} \\
& = & \frac{0.95\times 0.1}{0.95\times 0.1 + 0.02\times 0.9}=0.84\,,
\nonumber
\label{eq:pi_mu}
\end{eqnarray}
and $P(\pi|T)=0.16$.
The signal to noise ratio is $P(\mu|T)/P(\pi|T)=5.3$. It is interesting
to rewrite the general expression of the signal to noise ratio
if the effect $E$ is observed as
\begin{equation}
S/N = \frac{P(S|E)}{P(N|E)}=\frac{P(E|S)}{P(E|N)}\cdot
\frac{P_\circ(S)}{P_\circ(N)}\,.
\end{equation}
This formula explicitly shows that when there are
{\it noisy conditions}
$$P_\circ(S) \ll P_\circ(N)$$ the experiment must be {\it very selective}
$$P(E|S) \gg P(E|N)$$ in order to have a decent $S/N$ ratio.\\
(How does the $S/N$ change if the particle has to be identified by
two independent detectors in order to give the trigger?
Try it yourself, the answer is $S/N=251$.)
\item[Problem 2:]
Three boxes contain two rings each, but in one of
them they are both gold, in the second both silver,
and in the third one of each type. You have the choice of
randomly extracting
a ring from one of the boxes, the content of which
is unknown to you. You look at the selected ring,
and you then have the possibility of extracting a second ring,
again from
any of the three boxes. Let us assume
the first ring you extract is a gold one.
Is it then preferable to extract the second one
from the same or from a different box?
\item[Solution:] Choosing the same box you have a $2/3$ probability
of getting a second gold ring.
(Try to apply the theorem,
or help yourself with intuition.)
\end{description}
The difference between the two problems, from the conventional
statistics point of view, is that the first
is only meaningful
in the frequentistic approach, the second only in the
combinatorial one. They are, however, both acceptable
from the Bayesian point of view. This is simply
because in this framework there is no
restriction on the definition of probability.
In many important cases of life and science,
neither of the two
conventional definitions is applicable.
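It is instructive to redo Problem 1 numerically. The following sketch
(the helper function is mine; the numbers are those of the text)
reproduces $P(\mu|T)=0.84$, $S/N=5.3$ and the two-detector result
$S/N=251$:
\begin{verbatim}
def posterior_mu(L_mu, L_pi, prior_mu=0.1, prior_pi=0.9):
    """Bayes' theorem for the two exhaustive hypotheses mu and pi."""
    num = L_mu * prior_mu
    return num / (num + L_pi * prior_pi)

p = posterior_mu(0.95, 0.02)                 # one detector
print(round(p, 2), round(p / (1 - p), 1))    # 0.84  5.3

p2 = posterior_mu(0.95**2, 0.02**2)          # two independent detectors
print(round(p2 / (1 - p2)))                  # 251
\end{verbatim}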
\subsection{Bayesian statistics: learning by experience}
The advantage of the Bayesian approach
(leaving aside the ``little philosophical detail''
of trying to define what probability is) is that one
may talk about the probability of
\underline{any} kind of event, as already
emphasized.
Moreover, the procedure of updating the probability
with increasing information is very similar
to that followed by the
mental processes of rational people. Let us consider
a few examples of ``Bayesian use'' of Bayes' theorem:
\begin{description}
\item[Example 1:] Imagine some persons
listening to a common friend
having a phone conversation with an unknown person $X_i$,
and who
are trying to guess who $X_i$ is. Depending on the knowledge
they have about the friend, on the language spoken,
on the tone of voice, on the subject of conversation, etc.,
they will attribute some probability to several
possible persons. As the conversation goes on they begin
to consider some possible candidates for $X_i$, discarding others,
and eventually fluctuating between two possibilities,
until
the status of information $I$ is such that they are
{\it practically sure} of the identity of $X_i$. This experience
has happened to most of us, and it is not difficult to
recognize the Bayesian scheme:
\begin{equation}
P(X_i|I,I_\circ) \propto P(I|X_i,I_\circ)P(X_i|I_\circ)\,.
\label{eq:telefonata}
\end{equation}
We have put the initial status of information
$I_\circ$ explicitly in (\ref{eq:telefonata})
to remind us that likelihoods and initial probabilities
depend on it. If we know nothing about the person, the final
probabilities will be very {\it vague}, i.e.
for many persons $X_i$ the probability will
be different from zero, without necessarily
favoring any particular person.
\item[Example 2:] A person $X$ meets an old friend $F$ in a pub.
$F$ proposes
that the drinks should be paid for by
whichever
of the two extracts
the card of lower value
from a pack
(according to some rule which is of no
interest to us). $X$ accepts and $F$
wins. This situation happens again in the following days
and it is always $X$ who has to pay.
What is the probability that $F$ has become a cheat, as
the number of consecutive wins $n$ increases?
The two hypotheses are: {\it cheat} ($C$) and {\it honest} ($H$).
$P_\circ(C)$ is low because $F$ is an ``old friend'',
but certainly not zero (you know $\ldots$): let us \underline{assume}
$5\,\%$. To make the problem simpler let us make the approximation
that a cheat always wins (not very clever$\ldots$):
$P(W_n|C)=1$. The probability of winning if he is honest is, instead,
given by the rules of probability {\it assuming} that
the chance
of winning at each trial is $1/2$ (``why not?'', we shall
come back to this point later): $P(W_n|H)=2^{-n}$. The result
\begin{eqnarray}
P(C|W_n) & = & \frac{P(W_n|C)\cdot P_\circ(C)}
{P(W_n|C)\cdot P_\circ(C) + P(W_n|H)\cdot P_\circ(H)}
\nonumber \\
& = & \frac{1\cdot P_\circ(C)}
{1\cdot P_\circ(C) + 2^{-n} \cdot P_\circ(H)}
\label{eq:baro}
\end{eqnarray}
is shown in the following table:
\begin{center}
\vspace{0.5 cm}
\begin{tabular}{|c|c|c|}\hline
$n$ & $ P(C|W_n)$ & $P(H|W_n)$ \\
& (\%) & (\%) \\ \hline
0 & 5.0 & 95.0 \\
1 & 9.5 & 90.5\\
2 & 17.4 & 82.6\\
3 & 29.6 & 70.4\\
4 & 45.7 & 54.3\\
5 & 62.7 & 37.3\\
6 & 77.1 & 22.9\\
$\ldots$ & $\ldots$ & $\ldots$ \\ \hline
\end{tabular}
\vspace{0.5 cm}
\end{center}
Naturally, as $F$ continues to win the suspicion
of $X$ increases. It is important to make two remarks:
\begin{itemize}
\item
the answer is always probabilistic. $X$ can never reach
absolute certainty that $F$ is a cheat,
unless he catches $F$ cheating, or $F$
confesses to having cheated. This is coherent
with the fact that we are dealing with random events
and with the fact that any sequence of outcomes has the
same probability (although there is only one possibility over
$2^n$ in which $F$ is \underline{always} luckier). Making \underline{use}
of $P(C|W_n)$, $X$ can take a \underline{decision} about the
next action to take:
\begin{itemize}
\item
\underline{continue} the game, with
probability $P(C|W_n)$
of \underline{losing}, with certainty, the next time too;
\item
\underline{refuse} to play further, with probability $P(H|W_n)$
of \underline{offending} the innocent friend.
\end{itemize}
\item
If $P_\circ(C)=0$ the final probability will
always remain zero: if $X$ fully trusts $F$,
then he has just to
record the occurrence of a rare event when $n$ becomes large.
\end{itemize}
To better follow the process of updating the probability
when new experimental data become available,
according to the Bayesian scheme
\begin{quote}
{\it ``the final probability of the
present inference is the initial probability
of the next one''}\,,
\end{quote}
let us call $P(C|W_{n-1})$ the probability assigned
after the previous win. The iterative application
of the Bayes formula yields:
\begin{eqnarray}
P(C|W_n) &=& \frac{P(W|C)\cdot P(C|W_{n-1})}
{P(W|C)\cdot P(C|W_{n-1}) +
P(W|H)\cdot P(H|W_{n-1})} \nonumber \\
& = & \frac{1\cdot P(C|W_{n-1})}
{1\cdot P(C|W_{n-1}) + \frac{1}{2} \cdot P(H|W_{n-1})}\,,
\end{eqnarray}
where $P(W|C)=1$ and $P(W|H)=1/2$ are the probabilities of
\underline{each} win.
The interesting result is that
\underline{exactly} the same values of $P(C|W_n)$ of (\ref{eq:baro})
are obtained (try to believe it!).
\end{description}
It is also instructive to see the dependence of the final
probability on the initial probabilities, for a given
number of wins $n$:
\begin{center}
\vspace{0.5 cm}
\begin{tabular}{|c|c|c|c|c|}\hline
& \multicolumn{4}{|c|}{$ P(C|W_n)$} \\
$P_\circ(C)$ & \multicolumn{4}{|c|}{ $(\%)$} \\ \hline
& $n=5$ & $n=10$ &$n=15$ & $n=20$ \\ \hline
$1\,\%$ & 24 & 91 & 99.7 & 99.99 \\
$5\,\%$ & 63 & 98 & 99.94 & 99.998 \\
$50\,\%$ & 97 & 99.90 & 99.997& 99.9999 \\ \hline
\end{tabular}
\vspace{0.5 cm}
\end{center}
As the number of experimental observations increases the conclusions
no longer depend, practically,
on the initial assumptions. This is a crucial
point in the Bayesian scheme and it will be discussed in more detail
later.
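Both tables can be reproduced with a few lines of code. Here is a
minimal sketch (function and variable names are mine) which iterates
Bayes' theorem win after win, with $P(W|C)=1$ and $P(W|H)=1/2$:
\begin{verbatim}
def p_cheat(prior, n, p_win_honest=0.5):
    """Probability that F is a cheat after n consecutive wins."""
    p = prior
    for _ in range(n):                       # one Bayes update per win
        p = p / (p + p_win_honest * (1 - p))
    return p

for n in range(7):                           # first table (prior 5%)
    print(n, round(100 * p_cheat(0.05, n), 1))
for prior in (0.01, 0.05, 0.50):             # second table (other roundings)
    print([round(100 * p_cheat(prior, n), 4) for n in (5, 10, 15, 20)])
\end{verbatim}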
\section{Hypothesis test (discrete case)}\label{sec:hyp_test_discr}
Although in conventional statistics books this
subject is usually dealt with in one of the
later chapters, in the Bayesian
approach this is so natural that it is in fact
the first application, as we have seen in the
above examples. We summarize here the procedure:
\begin{itemize}
\item
{\it probabilities are attributed to the different
hypotheses} using initial probabilities and
experimental data (via the likelihood);
\item
the person who makes the inference
- or the ``user'' - will take a decision
for which he is \underline{fully responsible}.
\end{itemize}
If one needs to compare
two hypotheses, as in the example of the signal to noise
calculation, the ratio of the final probabilities
can be taken as a quantitative result of the test.
Let us rewrite the $S/N$ formula in the most general case:
\begin{equation}
\frac{P(H_1|E,H_\circ)}{P(H_2|E,H_\circ)}
= \frac{P(E|H_1, H_\circ)}{P(E|H_2,H_\circ)} \cdot
\frac{ P(H_1|H_\circ)}{P(H_2|H_\circ)}\,,
\label{eq:bayes_factor}
\end{equation}
where again we have reminded ourselves
of the existence of $H_\circ$.
The ratio depends on the
product of two terms: the ratio of the priors
and the ratio of the likelihoods. When there is absolutely
no reason for choosing between the two hypotheses the
prior ratio is 1 and the decision depends only on the
other term, called {\it the Bayes factor}.
If one firmly believes in either hypothesis,
the Bayes
factor is of minor importance, unless it is zero or infinite
(i.e. one and only one of the likelihoods is vanishing).
Perhaps this is disappointing for those who expected
objective certainties from a probability theory, but
\underline{this} is in the nature of things.
\section{Choice of the initial probabilities
(discrete case)}\label{sec:choice1}
\subsection{General criteria}
The dependence of
Bayesian inferences on initial probability is
pointed to by opponents as
the fatal flaw in the theory.
But this criticism is less severe than one might think
at first sight. In fact:
\begin{itemize}
\item
It is impossible to construct a theory
of uncertainty which is not affected by this
``illness''. Those methods which are advertised as being
``objective'' tend in reality to hide the hypotheses on
which they are grounded.
A typical example is
the maximum likelihood method, of which we
will talk later.
\item
as the amount of information increases
the dependence on initial prejudices diminishes;
\item
when the amount of information is very limited,
or completely lacking, there is nothing to be ashamed of if
the inference is dominated by {\it a priori} assumptions.
\end{itemize}
The fact that
conclusions drawn from an experimental result
(and sometimes even the ``result'' itself!)
\underline{often} depend
on prejudices about the phenomenon under study is well known
to all experienced physicists. Some examples:
\begin{itemize}
\item
when doing quick checks on a device, a single
measurement is usually performed if the value is
``what it should be'', but if it is not then
many measurements tend to be made;
\item
results are sometimes influenced by
previous results or by theoretical
predictions. See for example Fig.~\ref{fig:sistematiche} taken from
the Particle Data Book\cite{PDG}.
The interesting book {\it ``How experiments end''}\cite{end}
discusses, among others, the issue
of \underline{when}
experimentalists are ``happy with the result'' and stop
``correcting for the systematics'';
\item
it can happen that slight deviations from the background
are interpreted as a signal
(e.g. as for the first claim of discovery of
the Top quark in spring '94),
while larger ``signals'' are viewed with suspicion if they
are unwanted by the physics ``establishment''\footnote{A case,
concerning
the search for electron compositeness in $e^+e^-$ collisions,
is discussed in \cite{comp}.};
\item
experiments are planned and financed according to the
prejudices of the moment\footnote{For a recent delightful report,
see \cite{wroblewski}.};
\end{itemize}
\begin{figure}
\centering\epsfig{file=dago24.eps,width=12.5cm,height=18cm,clip=}
\caption{\sf Results on two physical quantities as a function
of the publication date.}
\label{fig:sistematiche}
\end{figure}
These comments are not intended to justify unscrupulous behaviour
or sloppy analysis. They are intended, instead, to remind us
- if need be - that scientific research is ruled by
subjectivity much more than
outsiders imagine. The transition from subjectivity
to ``objectivity'' begins when there
is a large consensus among the most influential people about
how to interpret the results\footnote{{\sl ``A theory
needs to be confirmed by experiments. But it is
also true that an experimental result needs to be} {\it confirmed
by a theory''}. This sentence
expresses clearly - though paradoxically -
the idea that it is difficult to accept a result which is
not rationally justified. An example of results ``not confirmed by
the theory'' are the $R$ measurements in Deep Inelastic Scattering
shown in Fig.~\ref{fig:RDIS}. Given the conflict
in this situation,
physicists tend to believe more in QCD and use the
``low-x'' extrapolations (\underline{of what?})
to correct the data for the
unknown values of $R$.}.
\begin{figure}
\centering\epsfig{file=rqcd3.eps,height=19cm,clip=}
\caption{\sf $R=\sigma_L/\sigma_T$ as a function of the Deep Inelastic
Scattering variable $x$ as measured by experiments and as predicted by
QCD.}
\label{fig:RDIS}
\end{figure}
In this context, the subjective approach to statistical
inference at least teaches us that every assumption must be
\underline{stated clearly}
and all available
information which could influence conclusions
must be weighed
with the maximum \underline{attempt at objectivity}.
What are the rules for choosing the ``right''
initial probabilities?
As one can imagine, this is an open and
debated question
among scientists and philosophers.
My personal point of view is that
one should avoid pedantic discussion of the matter,
because the idea of universally true priors
reminds me terribly of the famous ``angels' sex'' debates.
If I had to give recommendations, they would be:
\begin{itemize}
\item
the {\it a priori} probability should be chosen in the same
spirit as the rational person who places a bet,
seeking to minimize the risk
of losing;
\item
general principles - like those that we will discuss in a while -
may help, but since it may be difficult to apply
elegant theoretical ideas
in all practical situations,
in many circumstances the {\it guess} of the ``expert''
can be relied on for guidance;
\item
avoid using as prior the results of other experiments
dealing with the same open problem, otherwise correlations
between the results would prevent all comparison between the experiments
and thus the detection of any
systematic errors. I find that this point is
generally overlooked by statisticians.
\end{itemize}
\subsection{Insufficient Reason and Maximum Entropy}
The first and most famous criterion for choosing
initial probabilities is the simple
{\it Principle of Insufficient Reason}
(or {\it Indifference Principle}): if there is no reason
to prefer one hypothesis over alternatives, simply attribute
the same probability to all of them. This
was stated as a principle
by Laplace\footnote{It may help in understanding
Laplace's approach
if we consider that he called the theory of probability ``good sense
turned into calculation''.}
in contrast to
Leibniz's famous {\it Principle of Sufficient Reason}, which, in simple
words, states that ``nothing happens without a reason''.
The indifference principle applied to coin and die tossing,
to card games or to other simple and symmetric
problems, yields the well-known rule of probability
evaluation that we have called combinatorial.
Since it is impossible not to agree with this point of
view, in the cases in which \underline{one judges} that it does apply,
the combinatorial ``definition'' of probability is
recovered in the Bayesian approach if the
word ``definition'' is simply replaced by ``evaluation rule''.
We have in fact already used this reasoning
in previous examples.
A modern and more sophisticated version of the Indifference Principle
is the Maximum Entropy Principle. The information entropy
function of $n$
mutually exclusive events, to each of which a probability $p_i$
is assigned, is defined as
\begin{equation}
H(p_1, p_2,\ldots p_n) = - K\sum_{i=1}^np_i\ln{p_i},
\end{equation}
with $K$ a positive constant. The principle states that
``in making inferences on the basis of partial
information we must use that probability distribution which
has the maximum entropy subject to whatever is known\cite{Jaynes}''.
Notice that, in this case, ``entropy'' is synonymous with
``uncertainty''\cite{Jaynes}.
One can show that, in the case of \underline{absolute}
ignorance about the events $E_i$, the maximization of the
information uncertainty, with the constraint that $\sum_{i=1}^np_i=1$,
yields the classical
$p_i=1/n$ (any other result would have been worrying$\ldots$).
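The uniform solution can be derived with a standard Lagrange multiplier
argument, sketched here for completeness (it is not worked out in the
text): requiring stationarity of
$$ {\cal L} = - K\sum_{i=1}^np_i\ln{p_i}
+ \lambda\left(\sum_{i=1}^np_i - 1\right) $$
with respect to each $p_i$ gives $-K(\ln{p_i}+1)+\lambda=0$, i.e.
$p_i=\exp{(\lambda/K-1)}$, the same value for all $i$; the constraint
$\sum_{i=1}^np_i=1$ then fixes $p_i=1/n$.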
Although this principle is sometimes used in combination
with the Bayes' formula for inferences
(also applied to measurement uncertainty, see
\cite{Weise2}), it will not be
used for applications
in these notes.
\section{Random variables}\label{sec:variables}
In the discussion which follows I will assume that the reader is
familiar with random variables, distributions, probability
density functions, expected values, as well as with the
most frequently used distributions. This section is
only intended as a summary of concepts and as a presentation of the
notation used in the subsequent sections.
\subsection{Discrete variables}
Stated simply, to define a {\it random variable} $X$
means to find a rule which allows a real number
to be related univocally
(but not biunivocally)
to an event ($E$), chosen from those events which
constitute a finite partition of $\Omega$ (i.e. the events
must be exhaustive and mutually exclusive).
One could write this as
$X(E)$. If the number of possible events is finite then the
random variable is discrete, i.e. it can assume only a
finite number of values.
Since the chosen set of events are mutually exclusive,
the probability of $X=x$ is the sum of the probabilities of all
the events for which $X(E_i)=x$. Note that we shall indicate
the variable
with $X$
and
its numerical realization
with $x$,
and that, differently from
other notations, the symbol $x$
(in place of $n$ or $k$) is also used for discrete variables.
After this short introduction, here is a list of
definitions, properties and notations:
\begin{description}
\item[Probability function:]
\begin{equation}
f(x)=P(X=x)\,.
\end{equation}
It has the following properties:
\begin{eqnarray}
1) & & 0 \leq f(x_i) \leq 1\,;\\
2) & & P(X = x_i\,\cup\, X = x_j)= f(x_i)+f(x_j) \hspace{0.7cm} (x_i\ne x_j)\,;\\
3) & & \sum_i f(x_i) = 1\,.
\end{eqnarray}
\item[Cumulative distribution function:]
\begin{equation}
F(x_k) \equiv P(X\leq x_k) = \sum_{x_i\leq x_k} f(x_i)
\, .
\end{equation}
Properties:
\begin{eqnarray}
1) & & F(-\infty) = 0\\
2) & & F(+\infty) = 1\\
3) & & F(x_i) - F(x_{i-1}) = f(x_i)\\
4) & & \lim_{\epsilon \rightarrow 0^+} F(x+\epsilon) = F(x)
\hspace{1.0 cm} \mbox{(right-side continuity)}\,.
\end{eqnarray}
\item[Expected value (mean):]
\begin{equation}
\mu \equiv E[X] = \sum_i x_i f(x_i)\,.\label{eq:media}
\end{equation}
In general, given a function $g(X)$ of $X$:
\begin{equation}
E[g(X)] = \sum_i g(x_i) f(x_i)\,.
\end{equation}
$E[\cdot]$ is a linear operator:
\begin{equation}
E[a X+b] = a E[X] + b \,.
\end{equation}
\item[Variance and standard deviation:]
Variance:
\begin{equation}
\sigma ^2 \equiv Var(X) = E[(X-\mu)^2] = E[X^2] - \mu ^2 \,.
\label{eq:varianza}
\end{equation}
Standard deviation:
\begin{equation}
\sigma = +\sqrt{\sigma^2}\,.
\end{equation}
Transformation properties:
\begin{eqnarray}
Var(a X+b) & = & a^2 Var(X)\,;\\
\sigma(aX+b) & = & |a|\sigma(X)\,.
\end{eqnarray}
\item[Binomial distribution:]
$X\sim {\cal B}_{n,p}$ (hereafter ``$\sim$'' stands for ``follows'');
${\cal B}_{n,p}$ stands for {\it binomial} with parameters
$n$ (integer) and $p$ (real):
\begin{equation}
f(x|{\cal B}_{n,p}) =
\frac{n!}{(n-x)!x!} p^x (1-p)^{n-x} \, ,
\hspace{1.0 cm}
\left\{ \begin{array}{l} n = 1, 2, \ldots, \infty \\
0 \le p \le 1 \\
x = 0, 1, \ldots, n \end{array}\right.\,.
\label{eq:binomial}
\end{equation}
Expected value, standard deviation and {\it variation coefficient}:
\begin{eqnarray}
\mu & = & np \\
\sigma & = & \sqrt{np (1-p)}\\
v \equiv \frac{\sigma}{\mu} &=&
\frac{\sqrt{np(1-p)}}{n p} \propto \frac{1}{\sqrt{n}}\, .
\end{eqnarray}
$1-p$ is often indicated by $q$.
\item[Poisson distribution:]
$X\sim {\cal P}_\lambda$:
\begin{equation}
f(x|{\cal P}_\lambda)=\frac{\lambda^x}{x!} e^{-\lambda}
\hspace{1.0 cm}
\left\{ \begin{array}{l} 0 < \lambda < \infty \\
x = 0, 1, \ldots, \infty\\
\end{array} \right.\,.
\end{equation}
($x$ is integer, $\lambda$ is real.)\\
Expected value, standard deviation and variation coefficient:
\begin{eqnarray}
\mu & = & \lambda \\
\sigma & = & \sqrt{\lambda} \\
v &=& \frac{1}{\sqrt{\lambda}}
\end{eqnarray}
\item[Binomial $\rightarrow$ Poisson:]
\begin{equation}
{\cal B}_{n,p}\
\xrightarrow[\ \lambda = np\ ]
{\ n\rightarrow\mbox{``}\infty\mbox{''},\ p\rightarrow\mbox{``}0\mbox{''}\ }
\ {\cal P}_\lambda\,.
\end{equation}
\end{description}
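The binomial $\rightarrow$ Poisson limit can be checked numerically with
a minimal sketch (standard library only; the function names are mine),
keeping $\lambda=np$ fixed:
\begin{verbatim}
from math import comb, exp, factorial

def binom_pmf(x, n, p):
    return comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, lam):
    return lam**x * exp(-lam) / factorial(x)

lam = 2.0
for n in (10, 100, 1000):
    p = lam / n
    # largest discrepancy over x = 0..10 shrinks as n grows
    d = max(abs(binom_pmf(x, n, p) - poisson_pmf(x, lam)) for x in range(11))
    print(n, round(d, 5))
\end{verbatim}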
\subsection{Continuous variables: probability and
density function}\label{sec:cont_var}
Moving from discrete to continuous variables there are the
usual problems with infinite possibilities,
similar to those found in
Zeno's ``Achilles and the tortoise'' paradox.
In both cases
the answer is given by infinitesimal
calculus. But some comments are needed:
\begin{itemize}
\item
the probability of
each of the realizations of $X$ is zero ($P(X=x)=0$),
but this does \underline{not} mean that each value
is \underline{impossible}, otherwise it would be impossible
to get \underline{any} result;
\item
although all values $x$ have zero probability, one usually
assigns different
degrees of belief to them, quantified by the
{\bf probability density function} $f(x)$. Writing
$f(x_1) > f(x_2)$,
for example,
indicates that our degree of belief in
$x_1$ is greater than that in $x_2$.
\item
The probability that a random variable
lies inside a finite interval, for example
$P(a\leq X \leq b)$,
is instead finite.
If the distance between $a$ and $b$ becomes
infinitesimal, then the probability becomes infinitesimal too.
If all the values of $X$ have the same degree of belief
(and not only equal numerical
probability $P(x)=0$) the infinitesimal
probability is simply proportional to the infinitesimal
interval $dP=kdx$. In the general case the ratio between
two infinitesimal probabilities around two different points
will be equal to the ratio of the degrees of belief in the
points (this argument implies the continuity of $f(x)$
on either side of the values). It follows that $dP=f(x)dx$
and then
\begin{equation}
P(a \leq X \leq b)= \int_{a}^{b}f(x)dx\,;
\end{equation}
\item
$f(x)$ has a dimension inverse to that of the random variable.
\end{itemize}
After this short introduction, here is a list of
definitions, properties and notations:
\begin{description}
\item[Cumulative distribution function:]
\begin{equation}
F(x) = P(X\leq x) = \int_{-\infty}^{x}f(x^\prime)dx^\prime\,,
\end{equation}
or
\begin{equation}
f(x) = \frac{dF(x)}{dx}
\end{equation}
\item[Properties of $f(x)$ and $F(x)$:]\
\begin{itemize}
\item
$f(x) \geq 0\,\,$;
\item
$\int_{-\infty}^{+\infty} f(x)dx=1\,\,$;
\item
$0\leq F(x)\leq 1$;
\item
$P(a\leq X \leq b) = \int_a^bf(x)dx = \int_{-\infty}^bf(x)dx
-\int_{-\infty}^af(x)dx\\
\hspace{2.64cm} = F(b)-F(a)$;
\item
if $x_2 > x_1$ then
$F(x_2) \ge F(x_1)$\,.
\item
$ \lim_{x\rightarrow -\infty} F(x) = 0$\,;\\
$ \lim_{x\rightarrow +\infty} F(x) = 1$\,;
\end{itemize}
\item[Expected value:]
\begin{eqnarray}
E[X] & = & \int_{-\infty}^{+\infty}x f(x)dx\\
E[g(X)] & = & \int_{-\infty}^{+\infty}g(x) f(x)dx.
\end{eqnarray}
\item[Uniform distribution:]\!\footnote{
The symbols of the following distributions
have the parameters within parentheses to indicate that
the variables are continuous.}
$X \sim {\cal K}(a,b)$:
\begin{eqnarray}
f(x|{\cal K}(a,b)) & = & \frac{1}{b-a}
\hspace{0.6cm}(a\le x \le b)\\
F(x|{\cal K}(a,b)) & = &
\frac{x-a}{b-a}\, .
\end{eqnarray}
Expected value and standard deviation:
\begin{eqnarray}
\mu&=& \frac{a+b}{2} \\
\sigma &=&\frac{b-a}{\sqrt{12}}\,.
\end{eqnarray}
\item[Normal (gaussian) distribution:] $X\sim {\cal N}(\mu,\sigma)$:
\begin{equation}
f(x|{\cal N}(\mu,\sigma))
=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}
\hspace{1.0 cm}
\left\{ \begin{array}{l}
-\infty < \mu < +\infty\\
0 < \sigma < \infty\\
-\infty < x < +\infty
\end{array}\right.\,,
\end{equation}
where $\mu$ and $\sigma$ (both real) are the expected value and standard
deviation, respectively.
\item[Standard normal distribution:]
the particular normal distribution of mean 0 and standard
deviation 1, usually indicated by $Z$:
\begin{equation}
Z\sim {\cal N}(0,1)\,.
\end{equation}
\item[Exponential distribution:]
$T \sim {\cal E}(\tau)$:
\begin{eqnarray}
f(t|{\cal E}(\tau)) & = &
\frac{1}{\tau} e^{-t/\tau} \hspace{1.3 cm}
\left\{ \begin{array}{c}
0 \le \tau < \infty \\
0 \le t < \infty
\end{array}\right. \\
F(t|{\cal E}(\tau)) & = & 1-e^{-t/\tau}
\end{eqnarray}
We use the symbol $t$ instead of $x$ because this distribution
will be applied to the {\it time domain}.\\
{\it Survival probability}:
\begin{equation}
P(T>t) = 1- F(t|{\cal E}(\tau)) = e^{-t/\tau}
\end{equation}
Expected value and standard deviation:
\begin{eqnarray}
\mu & = & \tau\\
\sigma & = & \tau.
\end{eqnarray}
The real parameter $\tau$ has the physical meaning of {\it lifetime}.
\item[Poisson $\leftrightarrow$ Exponential:]
If $X$ (= ``number of counts during the time $\Delta t$'') is
Poisson distributed then $T$ (= ``interval of time one has to wait -
starting \underline{from any instant} - before the first count
is recorded'') is exponentially distributed:
\begin{equation}
X \sim f(x|{\cal P}_\lambda)
\Longleftrightarrow
T \sim f(t|{\cal E}(\tau))\,, \hspace{1.0cm}
\tau = \frac{\Delta t}{\lambda}\,.
\end{equation}
\end{description}
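This correspondence is easy to verify by simulation. The sketch below (a
toy check of mine; the parameter values are arbitrary) approximates a
Poisson process of rate $\lambda$ per unit time by Bernoulli trials on a
fine time grid, and compares the empirical survival probability of the
waiting time with $e^{-\lambda t}$:
\begin{verbatim}
import math, random

random.seed(1)
lam, step = 2.0, 1e-3          # rate per unit time; time-grid spacing

def first_count_time():
    """Waiting time until the first count (Bernoulli approximation)."""
    t = 0.0
    while random.random() >= lam * step:   # no count in this small step
        t += step
    return t

waits = [first_count_time() for _ in range(10_000)]
for t in (0.25, 0.5, 1.0):
    emp = sum(w > t for w in waits) / len(waits)
    print(t, round(emp, 3), round(math.exp(-lam * t), 3))  # close pairs
\end{verbatim}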
\subsection{Distribution of several random variables}
We only consider the case of two continuous variables ($X$ and $Y$).
The extension to more variables is straightforward.
The infinitesimal element of probability is
$dF(x,y) = f(x,y)dxdy$, and the probability
density function
\begin{equation}
f(x,y) = \frac{\partial^2F(x,y)}{\partial x\partial y}\,.
\end{equation}
The probability of finding the pair of variables inside a certain
region $A$ is
\begin{equation}
\iint\limits_A f(x,y)dx dy\,.
\end{equation}
\begin{description}
\item[Marginal distributions:]
\begin{eqnarray}
f_X(x) & = & \int_{-\infty}^{+\infty}f(x,y)dy \\
f_Y(y) & = & \int_{-\infty}^{+\infty}f(x,y)dx \,.
\end{eqnarray}
The subscripts $X$ and $Y$ indicate that
$f_X(x)$ and $f_Y(y)$
are functions only of
$X$ and $Y$, respectively (to avoid fooling around with a
different symbol for each generic function).
\item[Conditional distributions:]
\begin{eqnarray}
f_X(x|y) & = & \frac{f(x,y)}{f_Y(y)} = \frac{f(x,y)}{\int f(x,y)dx} \\
f_Y(y|x) & = & \frac{f(x,y)}{f_X(x)} \\
f(x,y) & = & f_X(x|y)f_Y(y) \\
& = & f_Y(y|x)f_X(x)\,.
\end{eqnarray}
\item[Independent random variables]
\begin{equation}
f(x,y) = f_X(x) f_Y(y)
\end{equation}
(it implies $f_X(x|y)=f_X(x)$ and $f_Y(y|x)=f_Y(y)$\,.)
\item[Bayes' theorem for continuous random variables]
\begin{equation}
\boxed{
f(h|e) = \frac{f(e|h)f_h(h)}
{\int f(e|h)f_h(h)dh}\, .
}
\label{eq:bayes_cont}
\end{equation}
\item[Expected value:]
\begin{eqnarray}
\mu_X=E[X] & = & \int\!\!\int_{-\infty}^{+\infty}
\!x f(x,y)dxdy \\
& = & \int_{-\infty}^{+\infty}\!x f_X(x) dx\,,
\end{eqnarray}
and analogously for $Y$. In general
\begin{equation}
E[g(X,Y)] = \int\!\!\int_{-\infty}^{+\infty}
\!g(x,y) f(x,y) dxdy\,.
\end{equation}
\item[Variance:]
\begin{equation}
\sigma_X^2=E[X^2]-E^2[X]\,,
\end{equation}
and analogously for $Y$.
\item[Covariance:]
\begin{eqnarray}
Cov(X,Y) & = & E\left[\left(X-E[X]\right) \left(Y-E[Y]\right)\right]\\
& = & E[X Y]-E[X] E[Y]\,.
\end{eqnarray}
If $X$ and $Y$ are independent, then $ E[XY]=E[X] E[Y]$
and hence $Cov(X,Y) =0$ (the converse holds only if $X$ and $Y$
are jointly normal, as in the bivariate normal below).
\item[Correlation coefficient:]
\begin{eqnarray}
\rho(X,Y)&=&\frac{Cov(X,Y)}{\sqrt{Var(X) Var(Y)}}\\
&=& \frac{Cov(X,Y)}{\sigma_X \sigma_Y}\, .
\end{eqnarray}
$$( -1 \le \rho \le 1)$$
\item[Linear combinations of random variables:]\ \\
If $Y=\sum_i c_iX_i$, with $c_i$ real, then:
\begin{eqnarray}
\mu_Y=E[Y] & = & \sum_ic_i E[X_i] = \sum_ic_i\mu_i
\label{eq:linc1} \\
\sigma_Y^2=Var(Y)& = &
\sum_i c_i^2Var(X_i) + 2\sum_{i< j}c_ic_jCov(X_i,X_j)
\label{eq:linc2} \\
& = &
\sum_i c_i^2Var(X_i) + \sum_{i\ne j}c_ic_jCov(X_i,X_j)
\label{eq:linc3} \\
& = & \sum_i c_i^2\sigma_i^2
+\sum_{i\ne j}\rho_{ij}c_ic_j\sigma_i\sigma_j
\label{eq:linc4} \\
& = & \sum_{ij}\rho_{ij}c_ic_j\sigma_i\sigma_j
\label{eq:linc5} \\
& = & \sum_{ij}c_ic_j\sigma_{ij}
\label{eq:linc6} \,.
\end{eqnarray}
$\sigma^2_Y$ has been written in different ways, with
increasing levels of compactness, that can be found
in the literature. In particular, (\ref{eq:linc6}) uses the convention
$\sigma_{ii}=\sigma^2_i$ and the fact that,
by definition, $\rho_{ii}=1$.
\item[Bivariate normal distribution:] joint probability density function
of $X$ and $Y$ with correlation coefficient $\rho$
(see Fig.~\ref{fig:bivar}):
\begin{figure}
\centering\epsfig{file=bivar.eps,width=\linewidth,clip=}
\caption{\sf Example of bivariate normal distribution.}
\label{fig:bivar}
\end{figure}
\begin{eqnarray}
f(x,y) &=&
\frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho^2}}\cdot \nonumber \\
&& \exp{\left\{
-\frac{1}{2(1-\rho^2)}
\left[ \frac{(x-\mu_x)^2}{\sigma_x^2}
- 2\rho\frac{(x-\mu_x)(y-\mu_y)}{\sigma_x\sigma_y}
+ \frac{(y-\mu_y)^2}{\sigma_y^2}
\right]
\right\}}\,.
\label{eq:bivar}
\end{eqnarray}
Marginal distributions:
\begin{eqnarray}
X &\sim & {\cal N}(\mu_x,\sigma_x) \\
Y &\sim & {\cal N}(\mu_y,\sigma_y) \,.
\end{eqnarray}
Conditional distribution:
\begin{equation}
f(y|x_\circ) = \frac{1}{\sqrt{2\pi}\sigma_y\sqrt{1-\rho^2}}
\exp{\left[
-\frac{\left(y-\left[\mu_y+\rho\frac{\sigma_y}{\sigma_x}
\left(x_\circ-\mu_x\right)\right]
\right)^2}
{2\sigma_y^2(1-\rho^2)}
\right]}\,,
\label{eq:y_cond}
\end{equation}
i.e.
\begin{equation}
Y_{|x_\circ}\sim {\cal N}\left( \mu_y+\rho\frac{\sigma_y}{\sigma_x}
\left(x_\circ-\mu_x\right),\,
\sigma_y\sqrt{1-\rho^2}\right):
\label{eq:y_cond1}
\end{equation}
the condition $X=x_\circ$ squeezes the standard deviation and shifts
the mean of $Y$.
\end{description}
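For readers who like to check formulae numerically, here is a minimal
sketch (in Python with {\tt numpy}, which is of course not part of
these notes; all numbers are purely illustrative) of the variance of a
linear combination, i.e. of (\ref{eq:linc6}) written in matrix form as
$\sigma^2_Y=\underline{c}^T\,V\,\underline{c}$, with $V_{ij}=\sigma_{ij}$:
\begin{verbatim}
# Sketch: variance of Y = sum_i c_i X_i from the covariance matrix.
# The coefficients, sigmas and correlations below are made up.
import numpy as np

c     = np.array([1.0, -2.0, 0.5])          # coefficients c_i
sigma = np.array([0.3,  0.1, 0.2])          # standard deviations sigma_i
rho   = np.array([[1.0,  0.6,  0.0],
                  [0.6,  1.0, -0.3],
                  [0.0, -0.3,  1.0]])       # correlation coefficients
V = rho * np.outer(sigma, sigma)            # covariance matrix sigma_ij

print("sigma_Y^2 =", c @ V @ c)             # the compact formula

# Monte Carlo cross-check: sample X and build Y directly.
rng = np.random.default_rng(1)
X = rng.multivariate_normal(np.zeros(3), V, size=200000)
print("MC check  =", (X @ c).var())
\end{verbatim}
The two printed numbers should agree within the Monte Carlo accuracy.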
\section{Central limit theorem}\label{sec:clim}
\subsection{Terms and role}
\begin{figure}
\centering\epsfig{file=cen_lim.eps,width=12.5cm,clip=}
\caption{\sf Central limit theorem at work: the sum of $n$
variables, for two different distribution, is shown. The values
of $n$ (top-down) are: 1,2,3,5,10,20,50.}
\label{fig:cen_lim}
\end{figure}
The well known central limit theorem plays
a crucial role in statistics
and justifies the enormous importance
that the normal distribution has in many practical applications
(this is the reason why it appears on 10 DM notes).
We have reminded ourselves in (\ref{eq:linc1}-\ref{eq:linc2})
of the expression of the mean and variance of a linear combination
of random variables
$$Y=\sum_{i=1}^n c_iX_i$$
in the most general case, which includes
correlated variables ($\rho_{ij}\ne0$). In the case of
independent variables the variance is
given by the simpler, and better known,
expression
\begin{equation}
\sigma_Y^2= \sum_{i=1}^n c_i^2\sigma_i^2 \hspace{1.0cm}
(\rho_{ij}=0,\ i\ne j) \,.
\end{equation}
This is a very general statement, valid for any
number and kind of variables
(with the obvious clause that all $\sigma_i$ must be finite) but
it does not give any information about the probability distribution
of $Y$. Even if all the $X_i$ follow the same distribution $f(x)$,
$f(y)$ is different from $f(x)$, with some exceptions,
one of these being the normal.
The central limit theorem states that
the distribution of a linear combination
$Y$ will be {\it approximately normal} if the variables $X_i$
are independent and $\sigma_Y^2$ is much larger than any
single component $c_i^2\sigma_i^2$ from a non-normally distributed
$X_i$. The last condition is just to guarantee that there is
no single random variable which dominates the fluctuations.
The accuracy of the approximation improves as the number of
variables $n$ increases (the theorem says ``when $n\rightarrow\infty$''):
\begin{equation}
n\rightarrow\infty \Longrightarrow Y \sim
{\cal N}\left(\sum_{i=1}^n c_i \mu_i,
\left(\sum_{i=1}^n c_i^2\sigma_i^2\right)^{\frac{1}{2}}\right)\,.
\end{equation}
The proof of the theorem
can be found in standard text books.
For practical purposes, and if one is not very interested
in the detailed behavior of the tails, $n$ equal to 2 or 3
may already give a satisfactory approximation, especially
if the $X_i$ exhibit a gaussian-like shape. Look for example
at Fig.~\ref{fig:cen_lim}, where samples of 10000 events have
been simulated starting from a uniform distribution and from a
crazy square wave distribution. The latter, depicting
a kind of ``worst practical case'', shows that, already
for $n=20$ the distribution of the sum is practically normal.
In the case of the uniform distribution $n=3$ already
gives an acceptable approximation as far as probability intervals of
one or two standard deviations
from the mean value are concerned. The figure also shows
that, starting from a triangular distribution (obtained
in the example from the sum of 2 uniformly distributed variables),
$n=2$ is already sufficient (the sum of 2 triangularly distributed
variables is equivalent to the sum of 4
uniformly distributed variables).
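A simulation in the spirit of Fig.~\ref{fig:cen_lim} takes only a few
lines (again a Python sketch with illustrative sample sizes, not part
of the original treatment):
\begin{verbatim}
# Central limit theorem at work: the sum of n uniform variables.
# E[Y] = n/2 and sigma_Y = sqrt(n/12); the shape becomes gaussian
# quickly as n grows (histogram y to see it).
import numpy as np

rng = np.random.default_rng(0)
for n in (1, 2, 3, 5, 10, 20, 50):
    y = rng.uniform(0.0, 1.0, size=(100000, n)).sum(axis=1)
    print(n, y.mean(), y.std(), np.sqrt(n / 12.0))
\end{verbatim}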
\subsection{Distribution of a sample average}\label{ss:media}
As a first application of the theorem let us remind ourselves that
a sample average $\overline{X}_n$ of $n$ \underline{independent} variables,
all with the same mean $\mu$ and standard deviation $\sigma$,
\begin{eqnarray}
\overline{X}_n &=& \sum_{i=1}^n\frac{1}{n}X_i,
\end{eqnarray}
is approximately normally distributed, since it is a linear combination
of $n$ variables $X_i$, with $c_i=1/n$. Then:
\begin{eqnarray}
\overline{X}_n & \sim & {\cal N}(\mu_{\overline{X}_n},
\sigma_{\overline{X}_n}) \\
\mu_{\overline{X}_n} &=& \sum_{i=1}^n\frac{1}{n}\mu = \mu \\
\sigma^2_{\overline{X}_n}& = & \sum_{i=1}^n
\left(\frac{1}{n}\right)^2\sigma^2 = \frac{\sigma^2}{n} \\
\sigma_{\overline{X}_n}& = & \frac{\sigma}{\sqrt{n}}\,.
\end{eqnarray}
This result, we repeat, is independent of the distribution
of $X$ and is already {\it approximately valid} for small values of $n$.
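The $\sigma/\sqrt{n}$ behaviour can be checked with a one-minute
simulation, starting from a distribution which is far from gaussian
(an exponential with $\sigma=1$; a sketch with made-up sample sizes):
\begin{verbatim}
# Standard deviation of the sample average: sigma/sqrt(n),
# whatever the parent distribution (here exponential, sigma = 1).
import numpy as np

rng = np.random.default_rng(0)
for n in (5, 20, 100):
    xbar = rng.exponential(1.0, size=(50000, n)).mean(axis=1)
    print(n, xbar.std(), 1.0 / np.sqrt(n))
\end{verbatim}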
\subsection{Normal approximation of the binomial and
of the Poisson distribution}
Another important application of the theorem is that the binomial
and the Poisson distribution can be approximated, for ``large numbers'',
by a normal distribution. This is a general result, valid for
all distributions which have the {\it reproductive property
under the sum}. Distributions of this kind are the binomial,
the Poisson and the $\chi^2$. Let us go into more detail:
\begin{description}
\item[\fbox{${\cal B}_{n,p} \rightarrow {\cal N}
\left(np, \sqrt{np(1-p)}\right)$}]
The reproductive property of the binomial states that if $X_1$,
$X_2$, $\ldots$, $X_m$ are $m$ independent variables,
each following a binomial distribution of parameter $n_i$ and $p$,
then their sum $Y=\sum_iX_i$ also follows a binomial distribution
with parameters $n=\sum_i n_i$ and $p$. It is easy to be convinced
of this property without
any mathematics: just think of what happens if one tosses bunches
of three, of five and of ten coins, and then one considers
the global result:
a binomial with a large $n$ can then always
be seen as a sum of many binomials with smaller $n_i$. The
application of the central limit theorem is straightforward,
apart from deciding when the convergence is acceptable:
the parameters on which one has to judge
are in this case $\mu=np$ and the
complementary quantity $\mu^c=n(1-p)=n-\mu$. If they are
\underline{both} $\gtrsim 10$ then the approximation starts to
be reasonable.
\item[\fbox{${\cal P}_{\lambda} \rightarrow
{\cal N}\left(\lambda, \sqrt{\lambda}\right)$}]
The same argument holds for the Poisson distribution.
In this case the approximation starts to be reasonable
when $\mu=\lambda \gtrsim 10$.
\end{description}
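The quality of the normal approximation is easy to inspect directly;
the following sketch (plain Python, needing version 3.8 or later for
{\tt math.comb}; illustrative values of $n$ and $p$ with $np=20$ and
$n(1-p)=80$) compares the binomial probabilities with the gaussian
density; the Poisson case is completely analogous:
\begin{verbatim}
# Compare B(n,p) with N(np, sqrt(np(1-p))) when np and n(1-p) >~ 10.
import math

n, p = 100, 0.2
mu, s = n * p, math.sqrt(n * p * (1 - p))
for x in (10, 15, 20, 25, 30):
    binom = math.comb(n, x) * p**x * (1 - p)**(n - x)
    gauss = math.exp(-0.5 * ((x - mu) / s)**2) / (s * math.sqrt(2 * math.pi))
    print(x, binom, gauss)
\end{verbatim}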
\subsection{Normal distribution of measurement errors}
The central limit theorem is also important to {\it justify}
why in many cases the distribution followed by
the measured values around their average is approximately normal.
Often, in fact, the random experimental error $e$,
which causes the fluctuations of the measured values
around the unknown true value of the physical quantity, can be seen
as an \underline{incoherent} sum of smaller contributions
\begin{equation}
e = \sum_i e_i\,,
\end{equation}
each contribution
having a distribution which satisfies
the conditions of the central limit theorem.
\subsection{Caution}
After this commercial in favour of the miraculous properties
of the central limit theorem, two remarks of caution:
\begin{itemize}
\item
sometimes the conditions of the theorem are not satisfied:
\begin{itemize}
\item
a single component dominates the fluctuation of the
sum:
a typical case is the well known Landau distribution;
also systematic errors may have the same effect on the global error;
\item
the condition of independence is lost if systematic
errors affect a set of measurements, or
if there is coherent noise;
\end{itemize}
\item
the
\underline{tails} of the distributions \underline{do exist}
and they are not always gaussian! Moreover,
realizations of a random variable several standard deviations
away from the mean are \underline{possible}. And they show up
without notice!
\end{itemize}
\section{Measurement errors and measurement
uncertainty}
One might assume that the concepts of error and uncertainty
are so well known that they are not worth discussing.
Nevertheless a
few comments are needed
(although for more details
the DIN\cite{DIN} and ISO\cite{ISO,ISOD} recommendations
should be consulted):
\begin{itemize}
\item
the first concerns the terminology. In fact the words
{\it error} and {\it uncertainty} are
currently used almost as synonyms:
\begin{itemize}
\item
``error'' to mean both error
and uncertainty (but nobody says ``Heisenberg
Error Principle'');
\item
``uncertainty'' only for the uncertainty.
\end{itemize}
``Usually'' we understand
what each other is talking about, but a more precise
use of these nouns would really help. This is strongly
called for
by the DIN\cite{DIN} and ISO\cite{ISO,ISOD} recommendations.
They state in fact that
\begin{itemize}
\item
\underline{error} is {\it ``the result of a measurement minus a
true value of the measurand''}: it follows that
the \underline{error} is usually
\underline{unknown};
\item
\underline{uncertainty} is a ``{\it parameter, associated with the result
of a measurement, that characterizes the dispersion of the values that could
reasonably be attributed to the measurand}'';
\end{itemize}
\item
Within the High Energy Physics community
there is an established
practice for reporting the final uncertainty of a measurement in the form
of \underline{standard deviation}.
This is also recommended by these norms.
However this should be done
at each step of the analysis, instead of estimating
``maximum error bounds'' and using
them as standard deviations in the
``error propagation'';
\item
the process of measurement is a complex one and it is difficult
to disentangle the different contributions which cause the total
error. In particular,
the active role of the experimentalist
is sometimes overlooked.
For this reason it is
often incorrect to quote the (``nominal'') uncertainty due to the
instrument as if it were \underline{the} uncertainty
of the measurement.
\end{itemize}
\section{Statistical Inference}\label{sec:inference}
\subsection{Bayesian inference}
\label{ss:bayes_inf}
In the Bayesian framework the inference
is performed calculating
the final distribution of the random variable
associated with the true
values of the physical quantities
from all available information.
Let us call
$\underline{x}=\{x_1, x_2, \ldots, x_n\} $
the {\it n-tuple} (``vector'') of observables,
$\underline{\mu}=\{\mu_1, \mu_2, \ldots, \mu_n\}$ the n-tuple
of the true
values of the physical quantities of interest,
and $\underline{h}=\{h_1, h_2, \ldots, h_n\}$
the n-tuple of all the
possible realizations of the {\it influence variables} $H_i$.
The term ``influence variable'' is used here with
an extended meaning, to indicate not only external factors which
could influence the result (temperature, atmospheric pressure,
and so on) but also any possible calibration constant and any
source of systematic errors.
In fact the distinction between $\underline{\mu}$ and
$\underline{h}$ is artificial, since they are all conditional
hypotheses. We separate them simply because at the end we will
``marginalize'' the final joint distribution functions
with respect to $\underline{\mu}$, integrating the joint distribution
with respect to the other hypotheses
considered as influence variables.
The likelihood of the {\it sample} $\underline{x}$ being
produced from $\underline{h}$ and $\underline{\mu}$ and the
initial probability are
$$ f(\underline{x}|\underline{\mu}, \underline{h}, H_\circ)$$
and
\begin{equation}
f_\circ(\underline{\mu}, \underline{h}) =
f(\underline{\mu}, \underline{h}| H_\circ)\,,
\end{equation}
respectively.
$H_\circ$ is intended to remind us, yet again, that
likelihoods and priors
- and hence conclusions - depend
on all explicit and implicit assumptions within the problem,
and in particular on the parametric functions used to
model priors and likelihoods.
To simplify the formulae, $H_\circ$
will no longer be written explicitly.
Using the Bayes formula for multidimensional continuous
distributions (an extension of (\ref{eq:bayes_cont}))
we obtain the most general formula
of inference
\begin{equation}
f(\underline{\mu}, \underline{h}|\underline{x}) =
\frac{f(\underline{x}|\underline{\mu}, \underline{h})
f_\circ(\underline{\mu}, \underline{h})}
{\int
f(\underline{x}|\underline{\mu}, \underline{h})
f_\circ(\underline{\mu}, \underline{h})
d\underline{\mu} d\underline{h}}\,,
\label{eq:ginf0}
\end{equation}
yielding the joint distribution of all conditional variables
$\underline{\mu}$ and $\underline{h}$ which are responsible
for the observed sample $\underline{x}$.
To obtain the final distribution of $\underline{\mu}$
one has to integrate (\ref{eq:ginf0})
over all possible values of $\underline{h}$,
obtaining
\begin{equation}
\boxed{
f(\underline{\mu}|\underline{x}) =
\frac{\int f(\underline{x}|\underline{\mu}, \underline{h})
f_\circ(\underline{\mu}, \underline{h})d\underline{h}}
{\int
f(\underline{x}|\underline{\mu}, \underline{h})
f_\circ(\underline{\mu}, \underline{h})
d\underline{\mu} d\underline{h}}\,.
}
\label{eq:ginf1}
\end{equation}
Apart from the technical problem of evaluating the integrals,
if need be
numerically or using Monte Carlo
methods\footnote{This is conceptually what experimentalists
do when they change all the parameters of the Monte Carlo simulation
in order to estimate the ``systematic error''.},
(\ref{eq:ginf1}) represents the most general form
of {\it hypothetical inductive inference}.
The word ``hypothetical''
reminds us of $H_\circ$.
When all the sources of influence are under control,
i.e. they can be assumed to take a precise value,
the initial distribution can be factorized by a
$f_\circ(\underline{\mu})$
and a Dirac $\delta(\underline{h}-\underline{h}_\circ)$,
obtaining the much simpler formula
\begin{eqnarray}
f(\underline{\mu}|\underline{x}) &=&
\frac{\int f(\underline{x}|\underline{\mu}, \underline{h})
f_\circ(\underline{\mu})
\delta(\underline{h}-\underline{h}_\circ)d\underline{h}}
{\int
f(\underline{x}|\underline{\mu}, \underline{h})
f_\circ(\underline{\mu})\delta(\underline{h}-\underline{h}_\circ)
d\underline{\mu} d\underline{h}}
\nonumber \\
& = &
\frac{f(\underline{x}|\underline{\mu}, \underline{h}_
\circ)f_\circ(\underline{\mu})}
{\int f(\underline{x}|\underline{\mu}, \underline{h}_\circ)
f_\circ(\underline{\mu}) d\underline{\mu}}\,.
\label{eq:ginf2}
\end{eqnarray}
Even if formulae (\ref{eq:ginf1}-\ref{eq:ginf2})
look complicated because of the
multidimensional integration and of the continuous nature
of $\underline{\mu}$, conceptually they are
identical to the example
of the $dE/dx$ measurement discussed in Sec.~\ref{ss:bayes_inf}.
The final probability density function provides the
most complete and detailed information about the
unknown quantities, but sometimes (almost always $\ldots$) one
is not interested in
full knowledge of $f(\underline{\mu})$, but just in a
few numbers which summarize at best the position and the width
of the distribution (for example when publishing the result
in a journal in the most compact way).
The most natural quantities for this purpose
are the expected value and the variance, or the standard deviation.
Then the Bayesian best estimate of a physical quantity
is:
\begin{eqnarray}
\widehat{\mu}_i = E[\mu_i] & = &
\int \mu_i f(\underline{\mu}|\underline{x}) d\underline{\mu}
\label{eq:best_mu1} \\
\sigma_{\mu_i}^2\equiv Var(\mu_i) & = & E[\mu_i^2] - E^2[\mu_i] \\
\sigma_{\mu_i} & \equiv & +\sqrt{\sigma_{\mu_i}^2}\,.
\end{eqnarray}
When many true values are inferred
from the same data
the numbers which synthesize the result are not
only the expected values and variances, but also the covariances,
which give \underline{at least} the (linear!)
correlation coefficients between the variables:
\begin{equation}
\rho_{ij}\equiv\rho(\mu_i,\mu_j) = \frac{Cov(\mu_i,\mu_j)}
{\sigma_{\mu_i}\sigma_{\mu_j}}\,.
\end{equation}
In the following sections we will deal in most cases
with only one value to infer:
\begin{equation}
f(\mu|\underline{x}) = \ldots \,.
\end{equation}
\subsection{Bayesian inference and maximum likelihood}
We have already said
that the dependence of the final probabilities
on the initial ones gets weaker as the amount of
experimental information increases. Without going into mathematical
complications (the proof of this statement can be found
for example in\cite{Jeffreys})
this simply means that, asymptotically,
whatever $f_\circ(\mu)$
one puts in (\ref{eq:ginf2}),
$f(\mu|\underline{x})$ is unaffected. This is ``equivalent'' to
dropping
$f_\circ(\mu)$ from
(\ref{eq:ginf2}). This results in
\begin{equation}
f(\mu|\underline{x}) \approx
\frac{f(\underline{x}|\mu, \underline{h}_\circ)}
{\int f(\underline{x}|\mu, \underline{h}_\circ) d\mu}\,.
\end{equation}
Since the denominator of the Bayes formula has the
technical role of properly normalizing the probability
density function,
the result can be written in the simple form
\begin{equation}
f(\mu|\underline{x}) \propto
f(\underline{x}|\mu, \underline{h}_\circ) \equiv
{\cal L}(\underline{x};\mu, \underline{h}_\circ)\,.
\end{equation}
Asymptotically the final probability is just the (normalized)
likelihood! The notation ${\cal L}$ is that used in the
maximum likelihood literature (note that, not only does $f$
become ${\cal L}$,
but also ``$|$'' has been replaced by ``;'':
${\cal L}$ has no probabilistic interpretation in conventional statistics.)
If the mean value of $f(\mu|\underline{x})$
coincides with the value for which $f(\mu|\underline{x})$
has a maximum, we obtain the
maximum likelihood method. This does not mean that the
Bayesian methods are ``blessed'' because
of this achievement, and that
they can be used only in those cases where they provide the same results.
It is the
other way round: the maximum likelihood method
gets justified \underline{when} all the
limiting conditions of the approach
($\rightarrow$ insensitivity of the result to the initial
probability $\rightarrow$ large number of events)
are satisfied.
Even if in this asymptotic limit the two approaches yield the same
numerical results, there are differences in their interpretation:
\begin{itemize}
\item
the likelihood, after proper normalization, has a probabilistic
meaning for Bayesians but not for
frequentists; so Bayesians can say that the probability
that $\mu$ is in a certain interval is, for example, $68\,\%$,
while this statement is blasphemous for a frequentist (``the
true value is a constant'' from his point of view);
\item
frequentists prefer to choose as estimator
$\widehat{\mu}_L$,
the value which maximizes the likelihood.
For Bayesians, on the other hand,
the expected value
$\widehat{\mu}_B=E[\mu]$ (also called the {\it prevision})
is more appropriate. This is justified by the fact that
taking $E[\mu]$ as the best estimate of $\mu$
minimizes the risk of a bet (always keep the bet in mind!).
For example, if the final distribution is exponential
with parameter $\tau$ (let us think for a moment of particle
decays) the maximum likelihood method would {\it recommend betting} on
the value $t=0$, whereas the Bayesian
approach suggests the value $t=\tau$. If the terms of the bet
are ``whoever gets \underline{closest} wins'' what is the best strategy?
And then, what is the best strategy if the terms are
``whoever gets the \underline{exact} value wins''?
And now think about the probability of getting the exact value,
and the probability of getting closest.
\end{itemize}
\subsection{The dog, the hunter and the biased Bayesian estimators}
One of the most important tests to judge
how good an estimator is,
is whether or not it is
{\it correct} (not biased).
Maximum likelihood estimators are
usually correct, while Bayesian estimators - analysed within
the maximum likelihood framework - often are not.
This could be considered a weak point - however the
Bayes estimators are simply
naturally consistent with the status
of information before new data
become available.
In the maximum
likelihood method, on the other hand, it is not clear what
the assumptions are.
Let us take an example which shows the logic of frequentistic
inference and why the use of reasonable prior distributions
yields results which
that framework classifies as distorted.
Imagine meeting a hunting dog in the country. Let us assume we
know that there is a $50\,\%$ probability
of finding the dog within a radius of 100 meters centered
on the position of the hunter (this is our likelihood).
Where is the hunter? He is with $50\,\%$ probability
within a radius of 100 meters around the position of the dog,
with equal probability in all directions. ``Obvious''.
This is exactly the
logic scheme used in the frequentistic approach to
build confidence regions from the estimator (the dog in this
example). This however assumes that the hunter can be anywhere
in the country. But now let us change the status of information:
``the dog is by a river''; ``the dog has collected a duck and
runs in a certain direction''; ``the dog is sleeping'';
``the dog is in a field surrounded by a fence through which he
can pass without problems, but the hunter cannot''. Given
any new condition the conclusion changes.
Some of the new conditions change our likelihood, but
some others only influence the initial distribution.
For example, the case of the dog in an enclosure
inaccessible to the hunter is exactly the problem encountered
when measuring a quantity close to the edge of its physical region,
which is quite common in frontier research.
\section{Choice of the initial probability density function}
\subsection{Difference with respect to the discrete case}
\begin{figure}
\centering\epsfig{file=var_tr.eps,width=\linewidth,clip=}
\caption{\sf Examples of variable changes.}
\label{fig:var_tr}
\end{figure}
The title of this section is similar to that of Sec.~\ref{sec:choice1}, but
the problem and the conclusions will be different. There we said that
the Indifference Principle (or, in its refined modern version, the
Maximum Entropy Principle) was a good choice. Here there are problems
with {\it infinities}
and with the fact that it is possible to map an infinite
number of points contained in a finite region onto an infinite
number of points contained in a larger or smaller
finite region. This changes the probability density
function. If, moreover, the transformation from one
set of variables to the
other is not linear (see, e.g., Fig.~\ref{fig:var_tr})
what is uniform in one variable ($X$)
is not uniform in another variable (e.g. $Y=X^2$). This problem
does not exist in the
case of discrete variables, since if $X=x_i$ has a probability
$f(x_i)$ then $Y=x_i^2$ has the same probability.
A different way of stating the problem is that the
{\it Jacobian} of the
transformation squeezes or stretches the metric, changing the
probability density function.
We will not enter into the open
discussion about the optimal choice
of the distribution. Essentially we shall use the uniform distribution,
being careful to employ the variable which ``seems'' most appropriate
for the problem, but \underline{You} may disagree
- surely with good reason - if You have a different
kind of experiment in mind.
The same problem is also present, but well hidden,
in the maximum likelihood method.
For example, it is possible to demonstrate
that, in the case of normally distributed likelihoods,
a uniform distribution of the mean $\mu$ is implicitly assumed
(see section \ref{sec:normal_results}).
There is nothing wrong with this, but one should be aware
of it.
\subsection{Bertrand paradox and angels' sex}
A good example to help understand the problems outlined
in the previous section
is the so-called Bertrand paradox:
\begin{description}
\item[Problem:]
Given a circle of radius $R$ and a chord drawn randomly on it,
what is the probability that the length $L$ of the chord
is smaller than
$R$?
\item[Solution 1:] Choose ``randomly'' two points on the circumference
and draw a chord between them: $\Rightarrow P(L<R)=1/3=0.33$\,.
\item[Solution 2:] Choose a straight line passing through
the center
of the circle; then draw a second line, orthogonal to the first,
and which intersects it inside the circle at a
``random'' distance from the center:
$\Rightarrow P(L<R)=1-\sqrt{3}/2 = 0.13$\,.
\item[Solution 3:] Choose ``randomly'' a point inside the circle and
draw a straight line orthogonal to the radius
that passes through
the chosen point $\Rightarrow P(L<R)=1/4 = 0.25$;
\item[Your solution:] $\ldots$ $\ldots$ $\ldots$?
\item[Question:] What is the origin of the paradox?
\item[Answer:] The problem does not specify how to ``randomly''
choose the chord. The three solutions take a
\underline{uniform} distribution:
along the circumference; along the radius; inside
the area. What is uniform in one variable is not uniform in the others!
\item[Question:] Which is the \underline{right} solution?
\end{description}
In principle you may imagine an infinite number of different solutions.
From a physicist's viewpoint
any attempt to answer this question is a waste of time.
The reason why the paradox
has been compared to the Byzantine discussions
about the sex of angels is that there are indeed people arguing
about it. For example, there is a school of thought which
insists that Solution 2 is the \underline{right} one.
In fact this kind of paradox, together with abuse of the Indifference
Principle for problems like ``what is the probability that the
sun will rise tomorrow morning'' threw a shadow over
Bayesian methods at the end of the 19th century. The maximum likelihood
method, which does not make explicit use of prior distributions,
was then seen as a valid solution to the problem. But
in reality
the ambiguity about the proper metric in which
the initial distribution is uniform has its equivalent
in the arbitrariness of the variable used in the likelihood function.
In the end, what was criticized
when it was stated explicitly in the Bayes formula \underline{is}
accepted passively when it is hidden in the maximum
likelihood method.
\section{Normally distributed observables}\label{sec:normal_results}
\subsection{Final distribution, prevision and credibility intervals of
the true value}\label{sec:normal_results1}
The first application of the Bayesian inference will be that
of a normally distributed quantity. Let us take
a data sample $\underline{q}$ of $n_1$ measurements, of which
we calculate the average $\overline{q}_{n_1}$. In our formalism
$\overline{q}_{n_1}$ is a realization of the random variable
$\overline{Q}_{n_1}$. Let us assume \underline{we know} the
standard deviation $\sigma$ of the variable $Q$, either
because $n_1$ is very large and it can be estimated
accurately from the sample or because it was known {\it a priori}
(we are not going to discuss in these notes the case
of small samples and unknown variance).
The property of the average (see Sec.~\ref{ss:media})
tells us that the
likelihood $f(\overline{Q}_{n_1}|\mu,\sigma)$ is gaussian:
\begin{equation}
\overline{Q}_{n_1} \sim {\cal N}(\mu, \sigma/\sqrt{n_1}).
\label{eq:lik_q}
\end{equation}
To simplify the following notation, let us call $x_1$
this average and $\sigma_1$ the standard deviation of the average:
\begin{eqnarray}
x_1 & = &\overline{q}_{n_1}\\
\sigma_1 & = & \sigma/\sqrt{n_1}\,.
\end{eqnarray}
We then apply (\ref{eq:ginf2}) and get
\begin{equation}
f(\mu|x_1,
{\cal N}(\cdot,\sigma_1)) =
\frac{\frac{1}{\sqrt{2\pi}\sigma_1}
e^{-\frac{(x_1-\mu)^2}{2\sigma_1^2}}f_\circ(\mu)}
{ \int \frac{1}{\sqrt{2\pi}\sigma_1}
e^{-\frac{(x_1-\mu)^2}{2\sigma_1^2}}f_\circ(\mu)d\mu}\, .
\label{eq:invg}
\end{equation}
At this point we have to make a choice for
$f_\circ(\mu)$. A reasonable choice
is to take, as a first guess,
a uniform distribution defined over a ``large''
interval which includes $x_1$. It is not really important
how large the interval is, since a few $\sigma_1$
away from $x_1$ the integrand in the denominator
vanishes because of the gaussian function. What is important
is that a constant $f_\circ(\mu)$ cancels
in (\ref{eq:invg}), obtaining
\begin{equation}
f(\mu|x_1,
{\cal N}(\cdot,\sigma_1)) =
\frac{\frac{1}{\sqrt{2\pi}\sigma_1}
e^{-\frac{(x_1-\mu)^2}{2\sigma_1^2}}}
{ \int_{-\infty}^{\infty}
\frac{1}{\sqrt{2\pi}\sigma_1}
e^{-\frac{(x_1-\mu)^2}{2\sigma_1^2}}d\mu}\, .
\label{eq:invg2}
\end{equation}
The integral in the denominator is equal to unity, since
integrating with
respect to $\mu$ is equivalent to integrating with respect to $x_1$.
The final result is then
\begin{equation}
f(\mu) =
f(\mu|x_1,{\cal N}(\cdot,\sigma_1)) =
\frac{1}{\sqrt{2\pi}\sigma_1}
e^{-\frac{(\mu-x_1)^2}{2\sigma_1^2}}\, :
\label{eq:invg3}
\end{equation}
\begin{itemize}
\item
the true value is normally distributed around $x_1$;
\item
its best estimate ({\it prevision}) is $E[\mu]=x_1$;
\item
its standard deviation is $\sigma_{\mu}=\sigma_1$;
\item
the ``confidence intervals'', or {\it credibility intervals},
in which there is a certain probability of finding the
true value are easily calculable:
\begin{center}
\vspace{0.5 cm}
\begin{tabular}{|c|rcl|} \hline
Probability level & \multicolumn{3}{|c|}{credibility interval} \\
(confidence level) & \multicolumn{3}{|c|}{(confidence interval)} \\
$(\%)$ & \multicolumn{3}{|c|}{} \\ \hline
68.3 & \hspace{0.4cm}$x_1$ & $\pm$ & $ \sigma_1$ \\
90.0 & $x_1$& $\pm$& $ 1.65\sigma_1$ \\
95.0 & $ x_1$& $\pm$& $ 1.96\sigma_1$ \\
99.0 & $ x_1$& $\pm$& $ 2.58\sigma_1$ \\
99.73 & $ x_1$& $\pm$& $ 3\sigma_1$ \\ \hline
\end{tabular}
\vspace{0.5 cm}
\end{center}
\end{itemize}
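For completeness, a few lines of code which reproduce the table above
(a Python sketch; $x_1$ and $\sigma_1$ are made-up numbers):
\begin{verbatim}
# Credibility intervals from f(mu) = N(x1, sigma1), flat prior.
x1, s1 = 10.0, 0.5
for k, level in ((1.0, 68.3), (1.65, 90.0), (1.96, 95.0),
                 (2.58, 99.0), (3.0, 99.73)):
    print("%6.2f%%   mu in [%.2f, %.2f]" % (level, x1 - k*s1, x1 + k*s1))
\end{verbatim}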
\subsection{Combination of several measurements}
Let us imagine making a second set of measurements of the physical
quantity, which \underline{we assume} unchanged from the previous
set of measurements. How will our knowledge of $\mu$ change after
this new information? Let us call $x_2 = \overline{q}_{n_2}$
and $\sigma_2 = \sigma^\prime/\sqrt{n_2}$ the new average and standard
deviation of the average
($\sigma^\prime$ may be different from $\sigma$ of the sample of
size $n_1$).
Applying
Bayes' theorem
a second time
we now have to use {\it as initial distribution
the final probability of the previous inference}:
\begin{equation}
f(\mu|x_1,\sigma_1, x_2, \sigma_2, {\cal N}) =
\frac{\frac{1}{\sqrt{2\pi}\sigma_2}
e^{-\frac{(x_2-\mu)^2}{2\sigma_2^2}}f(\mu|x_1,{\cal N}(\cdot,\sigma_1))}
{ \int \frac{1}{\sqrt{2\pi}\sigma_2}
e^{-\frac{(x_2-\mu)^2}{2\sigma_2^2}}f(\mu|x_1,{\cal N}(\cdot,\sigma_1))d\mu}\,.
\label{eq:recg}
\end{equation}
The integral is not as simple as the previous one, but still
feasible analytically. The final result is
\begin{equation}
f(\mu|x_1,\sigma_1, x_2, \sigma_2, {\cal N}) =
\frac{1}{\sqrt{2\pi}\sigma_A}
e^{-\frac{(\mu-x_A)^2}{2\sigma_A^2}}\, ,
\label{eq:waver}
\end{equation}
where
\begin{eqnarray}
x_A & = & \frac{x_1/\sigma_1^2 + x_2/\sigma_2^2}
{1/\sigma_1^2 + 1/\sigma_2^2}\, ,
\label{eq:waver1} \\
\frac{1}{\sigma_A^2} & = & \frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}\, .
\label{eq:waver2}
\end{eqnarray}
One recognizes the famous formula of the weighted
average with the inverse of the variances, usually obtained
from maximum likelihood.
Some remarks:
\begin{itemize}
\item
Bayes' theorem updates the knowledge about $\mu$
in an automatic and natural way;
\item
if $\sigma_1 \gg \sigma_2$ (and $x_1$ is not ``too far'' from
$x_2$) the final result is only determined by the second
sample of measurements.
This suggests that an alternative {\it vague} {\it a priori} distribution
can be, instead of the uniform, a gaussian with a
{\it large enough} variance
and a {\it reasonable} mean;
\item
the combination of the samples requires a subjective judgement
that the two samples are really coming from the same true
value $\mu$. We will not discuss this point in these notes, but
a hint on how to proceed is: take the inference on the
difference of two measurements, $D$, as explained at the end of
Section~\ref{sec:offset} and judge yourself if $D=0$ is
consistent with the probability density function of $D$.
\end{itemize}
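In code, (\ref{eq:waver1}-\ref{eq:waver2}) amount to very little
(a sketch; the input numbers are invented):
\begin{verbatim}
# Weighted average with the inverse of the variances,
# eqs. (waver1)-(waver2): returns (x_A, sigma_A).
def combine(x1, s1, x2, s2):
    w1, w2 = 1.0 / s1**2, 1.0 / s2**2
    return (w1 * x1 + w2 * x2) / (w1 + w2), (w1 + w2) ** -0.5

print(combine(10.2, 0.4, 9.7, 0.3))
\end{verbatim}
Note that applying {\tt combine} repeatedly, feeding the output back in
as the first measurement, is exactly the Bayesian updating described
above.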
\subsection{Measurements close to the edge
of the physical region}\label{sec:neutrino}
A case which has essentially no solution
in the maximum likelihood approach is when a measurement is performed
at the edge of the physical region and the measured
value comes out very close to it, or even in the
\underline{unphysical} region.
Let us take a numerical example:
\begin{description}
\item[Problem:]
An experiment is planned to measure the
(electron) neutrino mass. The
simulations show that the mass resolution is $3.3\,\mbox{eV}/c^2$,
largely independent of the mass value, and that the measured
mass is normally distributed around
the true mass\footnote{In reality, often
what is normally distributed is $m^2$ instead of $m$.
Holding this hypothesis the terms of the problem change
and a new solution should be worked out, following the
trace indicated in this example.}.
The mass value which results from the analysis,\footnote{
We consider detector and analysis machinery as a black box,
no matter how complicated it was, and treat the numerical
outcome as a result of a direct measurement\cite{DIN}.}
and corrected for all
known systematic effects, is $x=-5.41\,\mbox{eV}/c^2$. What have we learned
about the neutrino mass?
\item[Solution:]
Our {\it a priori} value of the mass is that it is \underline{positive}
and not too large (otherwise it would already have been measured
in other experiments). One
can take any vague distribution which assigns a probability
density function between 0 and 20 or 30 $\mbox{eV}/c^2$.
In fact, if an experiment having a resolution of $\sigma=3.3\,\mbox{eV}/c^2$
has been planned and financed by rational people, with
the {\it hope} of finding evidence of a non negligible mass,
it means that the mass was thought to be in that range.
If there is no reason to prefer one of the values in that interval
a uniform distribution can be used, for example
\begin{equation}
f_{\circ K}(m)=k=1/30\hspace{1.0cm} (0\le m \le 30)\,.
\end{equation}
Otherwise, if one thinks
there is a greater chance of the mass having
small rather than high values,
a prior which reflects
such an assumption could be chosen,
for example a half normal with $\sigma_\circ=10\,\mbox{eV}/c^2$
\begin{equation}
f_{\circ N}(m) =\frac{2}{\sqrt{2\pi}\sigma_\circ}
\exp{\left[-\frac{m^2}{2\sigma_\circ^2}\right]}
\hspace{1.0cm} (m \ge 0)\,,
\end{equation}
or a triangular distribution
\begin{equation}
f_{\circ T}(m) = \frac{1}{450}(30-m) \hspace{.6cm} (0\le m \le 30)\,.
\end{equation}
Let us consider for simplicity the uniform distribution
\begin{eqnarray}
f(m|x, f_{\circ K})
&=& \frac{
\frac{1}{\sqrt{2\pi}\sigma}
\exp{\left[-\frac{(m-x)^2}{2\sigma^2}\right]} k
}
{\int_0^{30}
\frac{1}{\sqrt{2\pi}\sigma}
\exp{\left[-\frac{(m-x)^2}{2\sigma^2}\right]}
k\, dm} \\
&= &
\frac{19.8}{\sqrt{2\pi}\sigma}\exp{\left[-\frac{(m-x)^2}{2\sigma^2}\right]}
\hspace{0.7 cm}(0 \le m \le 30)\,.
\end{eqnarray}
The value which has the highest degree of belief is $m=0$,
but $f(m)$ is non vanishing up to $30\,\mbox{eV}/c^2$ (even if very small).
We can define an interval, starting from $m=0$,
in which we believe that $m$ should have a certain
probability. For example
this level of probability can be $95\,\%$. One has to find the value
$m_\circ$ for which the cumulative function $F(m_\circ)$
equals 0.95.
This value of $m$ is called the {\it upper limit} (or {\it upper bound}).
The result is
\begin{equation}
m < 3.9\, \mbox{eV}/c^2\hspace{0.5 cm} at\ 95\,\%\ C.L. \,.
\end{equation}
If we had assumed the other initial distributions the
limit would have been in both cases
\begin{equation}
m < 3.7\, \mbox{eV}/c^2\hspace{0.5 cm} at\ 95\,\%\ C.L.\,,
\end{equation}
practically the same (especially if compared with the experimental
resolution of $3.3\, \mbox{eV}/c^2$).
\item[Comment:] Let us assume an {\it a priori} function
sharply peaked at zero and see what happens. For example it could be
of the kind
\begin{equation}
f_{\circ S}(m)\propto \frac{1}{m}\,.
\end{equation}
To avoid singularities in the integral,
let us take a power of $m$ a bit greater
than $-1$, for example $-0.99$, and let us limit its domain
to 30, getting
\begin{equation}
f_{\circ S}(m) = \frac{0.01\cdot 30^{-0.01}}{m^{0.99}}\,.
\end{equation}
The upper limit becomes
\begin{equation}
m < 0.006\, \mbox{eV}/c^2\hspace{0.5 cm} at\ 95\,\%\ C.L.\,.
\end{equation}
Any experienced physicist would find this result \underline{ridiculous}.
The upper limit is less than $0.2\,\%$ of the experimental resolution;
like expecting to resolve objects having dimensions smaller than
a micron with a design ruler!
Notice instead that in the previous examples the limit was always of the
\underline{order of magnitude} of the experimental resolution
$\sigma$.
As $f_{\circ S}(m)$ becomes more and more peaked at zero (exponent
of $m\rightarrow -1$) the limit gets smaller and smaller. This means
that, asymptotically, the degree of belief that $m=0$ is so high
that whatever you measure you will conclude that $m=0$: you could use
the measurement to calibrate the apparatus!
This means that this choice of initial distribution was unreasonable.
\end{description}
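The numbers quoted in the example can be reproduced with a short
script (a sketch in Python; only the uniform prior is implemented,
the other priors just require a different weight in the integrals):
\begin{verbatim}
# Upper limit for the neutrino-mass example: gaussian likelihood
# N(x, sigma) times a flat prior on [0, 30]; solve F(m0) = 0.95.
import math

def Phi(z):                      # standard normal cumulative function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

x, sigma = -5.41, 3.3
norm = Phi((30 - x) / sigma) - Phi((0 - x) / sigma)

def F(m):                        # cumulative of the truncated posterior
    return (Phi((m - x) / sigma) - Phi((0 - x) / sigma)) / norm

lo, hi = 0.0, 30.0               # bisection for F(m0) = 0.95
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if F(mid) < 0.95 else (lo, mid)
print(lo)                        # about 3.9 eV/c^2
\end{verbatim}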
\section{Counting experiments}
\subsection{Binomially distributed quantities}\label{ss:binom}
\begin{figure}
\centering\epsfig{file=beta.eps,clip=,width=\linewidth}
\caption{\sf Probability density function of the binomial parameter
$p$, having observed $x$ successes in $n$ trials.}
\label{fig:beta}
\end{figure}
Let us assume we have performed $n$ trials and obtained $x$
favorable events. What is the probability of the next event?
This situation happens frequently when measuring efficiencies,
branching ratios, etc. Stated more generally,
one tries to infer the ``constant
and unknown probability''\footnote{
This concept, which is very close to the physicist's mentality,
is not correct from the probabilistic - \underline{cognitive} -
point of view. According to the Bayesian scheme, in fact,
the probability changes with the new observations. The final
inference of $p$, however, does not depend on the particular sequence
yielding $x$ successes over $n$ trials. This can be seen in the next table
where $f_n(p)$ is given as a function of the number of trials $n$,
for the three sequences which give 2 successes (indicated by ``1'')
in three trials
(the use of (\ref{eq:inv_binom}) is anticipated):
\begin{center}
\begin{tabular}{|r|ccc}
\multicolumn{1}{c}{} & \multicolumn{3}{c}{Sequence} \\
n & 011 & 101 & 110 \\ \hline
0 & 1 & 1 & 1 \\
1 & $2(1-p)$ & $2p$ & $2p$ \\
2 & $6p(1-p)$ & $6p(1-p)$ & $3p^2$ \\
3 & $12p^2(1-p)$ & $12p^2(1-p)$ & $12p^2(1-p)$
\end{tabular}
\end{center}
This important result, related to the concept of
{\it interchangeability} (usually called {\it exchangeability}),
``allows'' a physicist who is
reluctant to give up the concept
``unknown constant probability'', to see the problem from his
point of view,
ensuring that the same numerical result is obtained.}
of an event occurring.
Where we can assume that the probability is constant
and the observed number of favorable events is binomially
distributed, the unknown quantity to be measured is the parameter
$p$ of the binomial. Using Bayes' theorem we get
\begin{eqnarray}
f(p|x,n,{\cal B}) & = & \frac{
f(x|{\cal B}_{n,p})f_\circ(p)
}{
\int_0^1 f(x|{\cal B}_{n,p})f_\circ(p)dp
}\nonumber \\
& = & \frac{
\frac{n!}{(n-x)!x!}p^x(1-p)^{n-x}f_\circ(p)
}{
\int_0^1
\frac{n!}{(n-x)!x!}p^x(1-p)^{n-x}f_\circ(p)dp
} \nonumber \\
& = & \frac{
p^x(1-p)^{n-x}
}{
\int_0^1
p^x(1-p)^{n-x} dp
}\, ,
\end{eqnarray}
where an initial uniform distribution has been assumed.
The final distribution is known to statisticians as the $\beta$ distribution,
since the integral in the denominator is the special
function called $\beta$, defined also for real values of $x$ and $n$
(technically this is a $\beta$ with parameters
$a=x+1$ and $b=n-x+1$). In our case
these two numbers are integers and the integral becomes
equal to $x!(n-x)!/(n+1)!$. We then get
\begin{equation}
f(p|x,n,{\cal B})
= \frac{(n+1)!}{x!(n-x)!}p^x(1-p)^{n-x}\,.
\label{eq:inv_binom}
\end{equation}
The expected value and the variance of this distribution
are:
\begin{eqnarray}
E[p] &=& \frac{x+1}{n+2}
\label{eq:infbinom1}\\
Var(p) &=& \frac{(x+1)(n-x+1)}{(n+3)(n+2)^2} \\
&=& \frac{x+1}{n+2}\left(\frac{n}{n+2} \nonumber
-\frac{x+1}{n+2}\right)\frac{1}{n+3} \\
&=& E[p]\left(\frac{n}{n+2} - E[p]\right)\frac{1}{n+3}
\label{eq:infbinom2}\,.
\end{eqnarray}
The value of $p$ for which $f(p)$ has the maximum is
instead $p_m=x/n$. The expression $E[p]$
gives the {\it prevision}
of the probability for the $(n+1)$-th event
occurring and is called the
``recursive Laplace formula'', or ``Laplace's rule of succession''.
When $x$ and $n$ become large, and $0 \ll x \ll n$,
$f(p)$ has the following asymptotic properties:
\begin{eqnarray}
E[p] &\approx & p_m=\frac{x}{n}\,; \\
Var(p) &\approx & \frac{x}{n}\left(1-\frac{x}{n}\right)\frac{1}{n}
= \frac{p_m(1-p_m)}{n}\,; \\
\sigma_p & \approx & \sqrt{\frac{p_m(1-p_m)}{n}}:\\
p &\sim & {\cal N}(p_m, \sigma_p)\,.
\end{eqnarray}
Under these conditions the frequentistic ``definition'' of probability
($x/n$)
is recovered.
Let us see two particular situations: when $x=0$ and $x=n$. In these
cases one gives the result as upper or lower limits, respectively.
Let us sketch the solutions:
\begin{itemize}
\item
\underline{x=n}:
\begin{eqnarray}
f(n|{\cal B}_{n,p}) & = & p^n ;\\
f(p|x=n,{\cal B}) & = & \frac{p^n}{\int_0^1p^ndp} = (n+1)\cdot p^n;\\
F(p|x=n,{\cal B}) & = & p^{n+1}\, .
\end{eqnarray}
To get the $95\,\%$ {\it \underline{lower} bound} ({\it limit}):
\begin{eqnarray}
F(p_\circ|x=n,{\cal B}) & = & 0.05\, , \nonumber \\
& & \nonumber \\
p_\circ & = & \sqrt[n+1]{0.05}\, .
\end{eqnarray}
An increasing number of trials $n$ constrains $p$ more and more
tightly around 1.
\item
\underline{x=0}:
\begin{eqnarray}
f(0|{\cal B}_{n,p}) & = & (1-p)^n ;\\
f(p|x=0,n,{\cal B}) & = & \frac{(1-p)^n}{\int_0^1(1-p)^ndp}
= (n+1)\cdot (1-p)^n;\\
F(p|x=0, n, {\cal B}) & = & 1 - (1-p)^{n+1}\, .
\end{eqnarray}
To get the $95\,\%$ {\it \underline{upper} bound (limit)}:
\begin{eqnarray}
F(p_\circ|x=0,n,{\cal B}) & = & 0.95; \nonumber \\
& & \nonumber \\
p_\circ & = & 1 - \sqrt[n+1]{0.05}\, .
\end{eqnarray}
\end{itemize}
The following table shows the $95\,\%$ C.L. limits as a function
of $n$.
The Poisson approximation, to be discussed
in the next section, is also shown.
\begin{center}
\vspace{0.5 cm}
\begin{tabular}{|r|c|c|c|}\hline
& \multicolumn{3}{c|}{Probability level = $95\,\%$} \\ \hline
$n$ & $x= n$ & \multicolumn{2}{c|}{$x=0$} \\ \hline
& binomial & binomial & Poisson approx. \\
& & & (\,$p_\circ=3/n$\,)\\ \hline
3 & $p\ge 0.47$ & $p\le 0.53$ & $p\le 1$ \\
5 & $p\ge 0.61$ & $p\le 0.39$ & $p\le 0.6$ \\
10 & $p\ge 0.76$ & $p\le 0.24$ & $p\le 0.3$ \\
50 & $p\ge 0.94$ & $p\le 0.057$ & $p\le 0.06$ \\
100& $p\ge 0.97$ & $p\le 0.029$ & $p\le 0.03$ \\
1000 & $p\ge 0.997$ & $p\le 0.003$ & $p\le 0.003$ \\ \hline
\end{tabular}
\vspace{0.5 cm}
\end{center}
To show in this simple case how $f(p)$ is updated by the new information,
let us imagine we have performed two experiments. The results
are $x_1=n_1$ and $x_2=n_2$, respectively. Obviously the global
information
is equivalent to $x=x_1+x_2$ and $n=n_1+n_2$, with $x=n$.
We then get
\begin{equation}
f(p|x = n,{\cal B}) = (n+1)p^n = (n_1+n_2+1)p^{n_1+n_2}\, .
\end{equation}
A different way of proceeding would have been to calculate the final
distribution from the information $x_1=n_1$
\begin{equation}
f(p|x_1 = n_1,{\cal B}) = (n_1+1)p^{n_1}\, ,
\end{equation}
and feed it as initial
distribution to the next inference:
\begin{eqnarray}
f(p|x_1 = n_1, x_2=n_2,{\cal B}) & = & \frac{p^{n_2}
f(p|x_1=n_1, {\cal B})}
{\int_{0}^{1}p^{n_2}f(p|x_1=n_1, {\cal B})dp} \\
& = & \frac{p^{n_2}(n_1+1)p^{n_1}}
{\int_{0}^{1}p^{n_2}(n_1+1)p^{n_1}dp} \\
& = & (n_1+n_2+1)p^{n_1+n_2}\, ,
\end{eqnarray}
getting the same result.
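The quantities derived in this section are easily computed; the sketch
below (Python, with arbitrary $x$ and $n$) gives the prevision, the mode
and the $x=0$/$x=n$ limits of the table:
\begin{verbatim}
# Inference of the binomial parameter p (posterior: Beta(x+1, n-x+1)).
def upper_limit_x0(n, cl=0.95):      # solve 1 - (1-p)^(n+1) = cl
    return 1.0 - (1.0 - cl) ** (1.0 / (n + 1))

def lower_limit_xn(n, cl=0.95):      # solve p^(n+1) = 1 - cl
    return (1.0 - cl) ** (1.0 / (n + 1))

x, n = 7, 10                          # made-up counts
print((x + 1.0) / (n + 2.0), x / n)   # E[p] and mode p_m
for n in (3, 5, 10, 50, 100, 1000):   # reproduces the table above
    print(n, lower_limit_xn(n), upper_limit_x0(n))
\end{verbatim}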
\subsection{Poisson distributed quantities}\label{ss:poisson}
As is well known, the typical application of the Poisson
distribution is in counting experiments
such as source activity,
cross sections, etc. The unknown parameter to be
inferred is $\lambda$. Applying Bayes formula
we get
\begin{eqnarray}
f(\lambda|x,{\cal P}) &=& \frac{\frac{\lambda^xe^{-\lambda}}{x!}
f_\circ(\lambda)}
{\int_0^\infty\frac{\lambda^xe^{-\lambda}}{x!}
f_\circ(\lambda) d\lambda}\, .
\end{eqnarray}
Assuming\footnote{There is a school
of thought according to which the most appropriate
function is $f(\lambda)\propto1/\lambda$.
If \underline{You} think that it is reasonable
for your problem, it may be a good prior.
Claiming that this is ``the Truth'' is one
more of the many
disputes about the
angels' sex. For didactical purposes
a uniform distribution is more than enough. Some comments
about the $1/\lambda$ prescription will be given
when discussing the particular case $x=0$.}
$f_\circ(\lambda)$ constant up to a certain
$\lambda_{max}\gg x$ and integrating by parts we obtain
\begin{eqnarray}
f(\lambda|x,{\cal P}) & = & \frac{\lambda^x e^{-\lambda}}{x!}
\label{eq:inv_poiss1} \\
F(\lambda|x,{\cal P}) & = &
1 - e^{-\lambda}\left(\sum_{n=0}^x \frac{\lambda^n}{n!}\right)\,,
\label{eq:inv_poiss2}
\end{eqnarray}
where the last result has been obtained by integrating
(\ref{eq:inv_poiss1}), again
by parts.
Fig.~\ref{fig:distr_lambda} shows how to build the
credibility intervals, given a certain measured
number of counts $x$.
\begin{figure}[t]
\centering\epsfig{file=dago7.eps,clip=}
\caption{\sf Poisson parameter $\lambda$ inferred from
an observed number $x$ of counts.}
\label{fig:distr_lambda}
\end{figure}
Fig.~\ref{fig:invpois} shows some numerical examples.
\begin{figure}
\centering\epsfig{file=invpois.eps,clip=}
\caption{\sf Examples of $f(\lambda|x_i)$.}
\label{fig:invpois}
\end{figure}
$f(\lambda)$ has the following properties:
\begin{itemize}
\item
the expected value, the variance and the value of maximum
probability are
\begin{eqnarray}
E[\lambda] & = & x+1 \\
Var(\lambda) & = & x+2 \\
\lambda_m &=& x \,;
\end{eqnarray}
the fact that the best estimate of $\lambda$ in the Bayesian sense
is not the intuitive value $x$ but $x+1$ should neither surprise
nor disappoint us: according to the
initial distribution used ``there are always more possible
values of $\lambda$ on the right side than on the left side of $x$'',
and they pull the distribution to their side; the full information
is always given by $f(\lambda)$ and the use of the mean is just a
rough approximation; the difference from the ``desired'' intuitive value
$x$ in units of the standard deviation goes as $1/\sqrt{x+2}$
and becomes immediately negligible;
\item
when $x$ becomes large we get:
\begin{eqnarray}
E[\lambda] &\approx& \lambda_{m} = x \,; \\
Var(\lambda) &\approx& \lambda_{m} = x \,; \\
\sigma_\lambda &\approx& \sqrt{x} \,; \label{eq:radice_l}\\
\lambda & \sim & {\cal N}(x, \sqrt{x})\,.
\end{eqnarray}
(\ref{eq:radice_l}) is one of the most familiar formulae
used by physicists to assess the uncertainty of a measurement,
although it is sometimes misused.
\end{itemize}
Let us conclude with a special case: $x=0$. As one might imagine,
the inference is highly sensitive
to the initial distribution.
Let us assume that {\it the experiment was planned with
the hope of \underline{observing} something}, i.e. that it could
detect a handful of events within its lifetime. With this hypothesis
one may use any vague prior function not strongly peaked
at zero. We have already come
across a similar case in
section~\ref{sec:neutrino},
concerning the upper limit of the neutrino mass. There it was shown
that reasonable hypotheses based on the \underline{positive attitude}
of the experimentalist are almost equivalent
and that they give results consistent with detector performances.
Let us then use
the uniform distribution
\begin{eqnarray}
f(\lambda|x=0,{\cal P}) & = & e^{-\lambda} \\
F(\lambda|x=0,{\cal P}) & = & 1-e^{-\lambda} \\
\lambda & < & 3 \ at\ 95\,\%\ C.L. \, .
\end{eqnarray}
\begin{figure}
\centering\epsfig{file=dago8.eps,clip=}
\caption{\sf Upper limit to $\lambda$ having observed 0 events.}
\label{fig:lim_lambda}
\end{figure}
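The same exercise in code, for any observed $x$ (a sketch; the
bisection bracket is generous but arbitrary):
\begin{verbatim}
# Upper limit on the Poisson lambda with x observed counts and a
# flat prior: solve F(l) = 1 - exp(-l) * sum_{n<=x} l^n/n! = 0.95,
# i.e. eq. (inv_poiss2).
import math

def F(l, x):
    return 1.0 - math.exp(-l) * sum(l**n / math.factorial(n)
                                    for n in range(x + 1))

def upper_limit(x, cl=0.95):
    lo, hi = 0.0, 10.0 * (x + 3)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid, x) < cl else (lo, mid)
    return lo

print(upper_limit(0))   # about 3.0, as stated above for x = 0
\end{verbatim}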
\section{Uncertainty due to unknown systematic errors}\label{sec:unknown}
\subsection{Example: uncertainty of the instrument scale
offset}\label{sec:offset}
In our scheme any influence quantity whose exact value
is unknown is a source of systematic error. It will change the final
distribution of $\mu$ and hence its uncertainty.
We have already discussed the most general case in
Sec.~\ref{ss:bayes_inf}. Let us make a simple
application, making a small variation to the example in
section~\ref{sec:normal_results1}: the ``zero'' of the instrument
is not known exactly, owing to calibration uncertainty.
This can be parametrized assuming that its true value
$Z$ is normally distributed around 0 (i.e. the calibration
was properly done!) with a standard deviation $\sigma_Z$.
Since, most probably, the true value of $\mu$ is independent of
the true value of $Z$, the initial joint probability density
function can be written as the product of the marginal ones:
\begin{equation}
f_\circ(\mu,z)=f_\circ(\mu)f_\circ(z)=
k\frac{1}{\sqrt{2\pi}\sigma_Z}
\exp{\left[-\frac{z^2}{2\sigma_Z^2}\right]}\,.
\end{equation}
The likelihood also changes with respect to
(\ref{eq:lik_q}):
\begin{equation}
f(x_1|\mu,z) = \frac{1}{\sqrt{2\pi}\sigma_1}
\exp{\left[-\frac{(x_1-\mu-z)^2}{2\sigma_1^2}\right]}\,.
\end{equation}
Putting all the pieces together and making use of
(\ref{eq:ginf1}) we finally get
\begin{equation}
f(\mu|x_1, \ldots,f_\circ(z))
=
\frac{
\int
\frac{1}{\sqrt{2\pi}\sigma_1}
\exp{\left[-\frac{(x_1-\mu-z)^2}{2\sigma_1^2}\right]}
\frac{1}{\sqrt{2\pi}\sigma_Z}
\exp{\left[-\frac{z^2}{2\sigma_Z^2}\right]}
dz
}
{
\int\!\!\int
\frac{1}{\sqrt{2\pi}\sigma_1}
\exp{\left[-\frac{(x_1-\mu-z)^2}{2\sigma_1^2}\right]}
\frac{1}{\sqrt{2\pi}\sigma_Z}
\exp{\left[-\frac{z^2}{2\sigma_Z^2}\right]}
d\mu dz
}\,.
\nonumber
\end{equation}
Integrating\footnote{It may help to know that
$$\int_{-\infty}^{+\infty}\exp{\left[bx-\frac{x^2}{a^2}\right]}dx
= \sqrt{a^2\pi}\exp{\left[\frac{a^2b^2}{4}\right]}\,.$$}
we get
\begin{equation}
f(\mu) = f(\mu|x_1, \ldots,f_\circ(z)) =
\frac{1}{\sqrt{2\pi}\sqrt{\sigma_1^2+\sigma_Z^2}}
\exp{\left[-\frac{(\mu-x_1)^2}{2(\sigma_1^2+\sigma_Z^2)}\right]}\,.
\end{equation}
The result is that $f(\mu)$ is still a gaussian, but with
a larger variance. The global standard uncertainty
is the quadratic combination of
that due to the statistical fluctuation of the data sample
and the uncertainty due to the imperfect knowledge of the
{\it systematic effect}:
\begin{equation}
\sigma_{tot}^2 = \sigma_1^2+\sigma_Z^2\,.
\end{equation}
This result is well known, although there are still
some ``old-fashioned'' recipes which require
different combinations
of the contributions to be performed.
One has to notice that in this framework it makes no sense
to speak of ``statistical'' and ``systematical'' uncertainties,
as if they were of a different nature.
They have the same \underline{probabilistic} nature:
$\overline{Q}_{n_1}$ is around $\mu$ with a standard deviation
$\sigma_1$, and $Z$ is around 0 with standard deviation $\sigma_Z$.
What distinguishes the two components
is how the knowledge of the uncertainty is gained: in one case
($\sigma_1$) from repeated measurements; in the second case ($\sigma_Z$)
the evaluation was done by somebody else (the manufacturer
of the instrument),
or in a previous experiment, or guessed from the knowledge of the
detector, or by simulation, etc. This is the reason why the ISO Guide
\cite{ISO} prefers the generic names {\it Type A} and {\it Type B}
for the two kinds of contribution to global
uncertainty. In particular
the name ``systematic uncertainty'' should be avoided, while
it is correct to speak about ``uncertainty due to a systematic effect''.
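A Monte Carlo check of the quadratic combination costs nothing
(a Python sketch; $\mu$, $\sigma_1$ and $\sigma_Z$ are invented):
\begin{verbatim}
# The observed value fluctuates around mu + z with sigma_1, while
# the offset z fluctuates around 0 with sigma_Z: the total spread
# is sqrt(sigma_1^2 + sigma_Z^2).
import numpy as np

rng = np.random.default_rng(0)
mu_true, s1, sZ = 5.0, 0.4, 0.3
z  = rng.normal(0.0, sZ, 100000)
x1 = rng.normal(mu_true + z, s1)
print(x1.std(), np.hypot(s1, sZ))
\end{verbatim}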
\subsection{Correction for known systematic errors}\label{ss:known_syst}
It is easy to be convinced that if our prior knowledge
about $Z$ was of the kind
\begin{equation}
Z\sim {\cal N}(z_\circ,\sigma_Z)
\end{equation}
the result would have been
\begin{equation}
\mu \sim {\cal N}\left(x_1-z_\circ,
\sqrt{\sigma_1^2+\sigma_Z^2}\right)\,,
\end{equation}
i.e. one has first to \underline{correct} the result
\underline{for} the best value of the \underline{systematic error}
and then include in the \underline{global uncertainty} a term due to
imperfect knowledge
about it. This is a well known and practised
procedure, although there are still people who
confuse $z_\circ$ with its uncertainty.
\subsection{Measuring two quantities with the same instrument
having an uncertainty of the scale offset}\label{sec:off_err}
Let us take an example which is a bit more complicated (at least from
the mathematical point of view) but conceptually very
simple and also very common in laboratory practice.
We measure two physical quantities with the same instrument,
assumed
to have an uncertainty on the ``zero'',
modeled with a normal distribution as in the
previous sections. For each of the
quantities we collect a sample of data \underline{under the same
conditions}, which means that the unknown offset error does not
change from one set of measurements to the other.
Calling $\mu_1$ and $\mu_2$ the true
values, $x_1$ and $x_2$ the sample averages, $\sigma_1$ and
$\sigma_2$
the standard deviations of the averages,
and $Z$ the true value of the ``zero'',
the initial probability density and the likelihood are
\begin{equation}
f_\circ(\mu_1,\mu_2,z)=f_\circ(\mu_1)f_\circ(\mu_2)f_\circ(z)=
k\frac{1}{\sqrt{2\pi}\sigma_Z}
\exp{\left[-\frac{z^2}{2\sigma_Z^2}\right]}
\end{equation}
and
\begin{eqnarray}
f(x_1,x_2|\mu_1,\mu_2,z) &=&
\frac{1}{\sqrt{2\pi}\sigma_1}
\exp{\left[-\frac{(x_1-\mu_1-z)^2}{2\sigma_1^2}\right]}
\frac{1}{\sqrt{2\pi}\sigma_2}
\exp{\left[-\frac{(x_2-\mu_2-z)^2}{2\sigma_2^2}\right]} \nonumber \\
&=& \frac{1}{2\pi\sigma_1\sigma_2}
\exp{\left[-\frac{1}{2}\left(
\frac{(x_1-\mu_1-z)^2}{\sigma_1^2} +
\frac{(x_2-\mu_2-z)^2}{\sigma_2^2}
\right)
\right]}\,,
\end{eqnarray}
respectively.
The result of the inference is now the joint probability density
function of $\mu_1$ and $\mu_2$:
\begin{eqnarray}
f(\mu_1,\mu_2|x_1,x_2,\sigma_1,\sigma_2,f_\circ(z))
&=& \frac{\int f(x_1,x_2|\mu_1,\mu_2,z)f_\circ(\mu_1,\mu_2,z)dz}
{\int\!\!\int\!\!\int
f(x_1,x_2|\mu_1,\mu_2,z)f_\circ(\mu_1,\mu_2,z)d\mu_1 d\mu_2 dz}\,,
\end{eqnarray}
where expansion of the functions has been omitted for the
sake of clarity.
Integrating we get
\begin{eqnarray}
f(\mu_1,\mu_2) &=&
\frac{1}
{2\pi\sqrt{\sigma_1^2+\sigma_Z^2}
\sqrt{\sigma_2^2+\sigma_Z^2}\sqrt{1-\rho^2}
}
\label{eq:bivarm}
\\
& & \exp{
\left\{
-\frac{1}{2(1-\rho^2)}
\left[ \frac{(\mu_1-x_1)^2}
{\sigma_1^2+\sigma_Z^2}
-2\rho\frac{(\mu_1-x_1) (\mu_2-x_2)}
{\sqrt{\sigma_1^2+\sigma_Z^2}
\sqrt{\sigma_2^2+\sigma_Z^2}}
+\frac{(\mu_2-x_2)^2}
{\sigma_2^2+\sigma_Z^2}
\right]
\right\}
}\,, \nonumber
\end{eqnarray}
where
\begin{equation}
\rho = \frac{\sigma_Z^2}{\sqrt{\sigma_1^2+\sigma_Z^2}
\sqrt{\sigma_2^2+\sigma_Z^2}}\,.
\label{eq:rho1}
\end{equation}
If $\sigma_Z$ vanishes then (\ref{eq:bivarm}) has the simpler expression
\begin{equation}
f(\mu_1,\mu_2) \stackrel{\sigma_Z\rightarrow 0}{\longrightarrow}
\frac{1}{\sqrt{2\pi}\sigma_1}
\exp{\left[-\frac{(\mu_1-x_1)^2}{2\sigma_1^2}\right]}
\frac{1}{\sqrt{2\pi}\sigma_2}
\exp{\left[-\frac{(\mu_2-x_2)^2}{2\sigma_2^2}\right]}\,,
\end{equation}
i.e. if there is no uncertainty on the offset calibration the
joint density function $f(\mu_1,\mu_2)$ is simply the
product of two \underline{independent}
normal functions, i.e. $\mu_1$ and $\mu_2$
are independent.
In the general case we have to conclude that:
\begin{itemize}
\item
the effect of the {\it common uncertainty} $\sigma_Z$ makes the two
values \underline{correlated}, since they are affected by a common
unknown
systematic error; the correlation coefficient is always non negative
($\rho \ge 0$), as intuitively expected from the definition
of systematic error;
\item
the joint density function is a {\it multinormal distribution}
of parameters
$x_1$, $\sigma_{\mu_1}=\sqrt{\sigma_1^2+\sigma_Z^2}$,
$x_2$, $\sigma_{\mu_2}=\sqrt{\sigma_2^2+\sigma_Z^2}$, and $\rho$
(see the example of Fig.~\ref{fig:bivar});
\item
the marginal distributions are still normal:
\begin{eqnarray}
\mu_1 &\sim& {\cal N}\left(x_1, \sqrt{\sigma_1^2+\sigma_Z^2}\right) \\
\mu_2 &\sim& {\cal N}\left(x_2, \sqrt{\sigma_2^2+\sigma_Z^2}\right)\,;
\end{eqnarray}
\item
the covariance between $\mu_1$ and $\mu_2$ is
\begin{eqnarray}
Cov(\mu_1,\mu_2) &=& \rho\sigma_{\mu_1}\sigma_{\mu_2} \nonumber \\
&=& \rho\sqrt{\sigma_1^2+\sigma_Z^2}
\sqrt{\sigma_2^2+\sigma_Z^2}
= \sigma_Z^2\,.
\label{eq:covm1m2}
\end{eqnarray}
\item
the distribution of any function $g(\mu_1,\mu_2)$ can be calculated
using the standard methods of probability theory. For example,
one can demonstrate that the sum $S=\mu_1+\mu_2$ and the difference
$D=\mu_1-\mu_2$ are also normally distributed (see also the
introductory discussion to the central limit theorem
and section~\ref{sec:cov} for the calculation of averages
and standard deviations):
\begin{eqnarray}
S & \sim & {\cal N}\left(x_1+x_2,
\sqrt{\sigma_1^2+\sigma_2^2+(2\sigma_Z)^2}\right)\\
D & \sim & {\cal N}\left(x_1-x_2,
\sqrt{\sigma_1^2+\sigma_2^2}\right)\,.
\end{eqnarray}
The result can be interpreted in the following way:
\begin{itemize}
\item
the uncertainty on the difference does not depend on the
common offset uncertainty: whatever the value of the true ``zero'' is,
it cancels in differences;
\item
in the sum, instead, the effect of the common
uncertainty is somewhat amplified since it enters ``in phase''
in the global uncertainty of each of the quantities.
\end{itemize}
\end{itemize}
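As a simple numerical cross-check of these statements, the following
Python fragment (a minimal sketch: the values of $\sigma_1$, $\sigma_2$
and $\sigma_Z$ are arbitrary choices) samples the joint distribution of
$\mu_1$ and $\mu_2$ by simulating a common offset, and compares the
sampled correlation coefficient and the variances of sum and difference
with the formulae above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
sigma1, sigma2, sigmaZ = 0.3, 0.5, 0.4   # arbitrary example values
n = 1_000_000

z   = rng.normal(0.0, sigmaZ, n)         # common ("systematic") offset
mu1 = rng.normal(0.0, sigma1, n) + z     # samples of mu_1 - x_1
mu2 = rng.normal(0.0, sigma2, n) + z     # samples of mu_2 - x_2

rho = sigmaZ**2 / np.sqrt((sigma1**2 + sigmaZ**2) *
                          (sigma2**2 + sigmaZ**2))
print(np.corrcoef(mu1, mu2)[0, 1], rho)  # sampled vs predicted rho
print(np.var(mu1 + mu2), sigma1**2 + sigma2**2 + (2*sigmaZ)**2)
print(np.var(mu1 - mu2), sigma1**2 + sigma2**2)
\end{verbatim}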
\subsection{Indirect calibration}
Let us use the result of the previous section to solve
another typical problem of measurements. Suppose that
after (or before, it doesn't matter) we have done the measurements
of $x_1$ and $x_2$ and we have the final result, summarized in
(\ref{eq:bivarm}), we know the ``exact'' value of $\mu_1$
(for example we perform the measurement on a reference).
Let us call it $\mu_1^\circ$.
Will this information provide a better knowledge of $\mu_2$?
In principle yes: the difference between $x_1$ and
$\mu_1^\circ$ defines the systematic error
(the true value of the ``zero'' $Z$). This error can
then be subtracted from $\mu_2$ to get a corrected value.
Also the overall uncertainty of $\mu_2$ should change, intuitively
it ``should'' decrease, since we are adding new information.
But its value doesn't seem to be obvious, since the
logical link between $\mu_1^\circ$ and $\mu_2$ is
$\mu_1^\circ\rightarrow Z \rightarrow \mu_2$.
The problem can be solved exactly using the concept of conditional
probability density function $f(\mu_2|\mu_1^\circ)$
(see (\ref{eq:y_cond}-\ref{eq:y_cond1})). We get
\begin{equation}
\mu_{2|\mu_1^\circ}
\sim {\cal N}\left(x_2+\frac{\sigma_Z^2}{\sigma_1^2+\sigma_Z^2}
(\mu_1^\circ-x_1),\ \sqrt{\sigma_2^2+\left(
\frac{1}{\sigma_1^2}+\frac{1}{\sigma_Z^2}
\right)^{-1}}\right)\,.
\label{eq:sigma2_1}
\end{equation}
The best value of $\mu_2$ is shifted by an amount $\Delta$,
with respect to the measured value $x_2$, which is
not exactly $\mu_1^\circ-x_1$, as
one might na\"\i vely guess,
and the uncertainty depends on $\sigma_2$, $\sigma_Z$
and $\sigma_1$. It is easy to convince oneself that the
exact result is more reasonable than this na\"\i ve first guess.
Let us rewrite $\Delta$ in two different ways:
\begin{eqnarray}
\Delta& = &\frac{\sigma_Z^2}{\sigma_1^2+\sigma_Z^2}(\mu_1^\circ-x_1)
\label{eq:delta1}\\
& = & \frac{1}{\frac{1}{\sigma_1^2}+\frac{1}{\sigma_Z^2}}
      \left[\frac{1}{\sigma_1^2}\cdot(\mu_1^\circ-x_1)
      + \frac{1}{\sigma_Z^2}\cdot 0
      \right]\,.
\label{eq:delta2}
\end{eqnarray}
\begin{itemize}
\item
Eq. (\ref{eq:delta1}) shows that one has to apply the
full correction $\mu_1^\circ-x_1$ only if $\sigma_1=0$. If instead
$\sigma_Z=0$ there is no correction to be applied, since the
instrument is perfectly calibrated. If $\sigma_1\approx \sigma_Z$
the correction is half of the measured difference between
$x_1$ and $\mu_1^\circ$;
\item
Eq. (\ref{eq:delta2}) shows explicitly what is going on and
why the result is consistent with the way we have modeled the uncertainties.
In fact we have performed two independent calibrations: one
of the offset and one of $\mu_1$. The best estimate of the
true value of the ``zero'' $Z$ is the weighted average of the
two measured offsets, $x_1-\mu_1^\circ$ and $0$, and $\Delta$ is
just this weighted average with the sign reversed, since the estimated
offset has to be subtracted from $x_2$;
\item
the new uncertainty of $\mu_2$ (see (\ref{eq:sigma2_1}))
is a combination of $\sigma_2$ and the uncertainty of the
weighted average of the two offsets. Its value is smaller than
what one would have with only one calibration and, obviously,
larger than that due to the sampling fluctuations alone:
\begin{equation}
\sigma_2 \le \sqrt{\sigma_2^2+\frac{\sigma_1^2\sigma_Z^2}
{\sigma_1^2+\sigma_Z^2}}
\le \sqrt{\sigma_2^2+\sigma_Z^2}\,.
\end{equation}
\end{itemize}
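A short numerical sketch of this correction procedure, with hypothetical
values for $x_1$, $x_2$, $\sigma_1$, $\sigma_2$, $\sigma_Z$ and
$\mu_1^\circ$, implements (\ref{eq:delta1}) and (\ref{eq:sigma2_1})
and verifies the bounds just stated:
\begin{verbatim}
import numpy as np

x1, sigma1 = 10.00, 0.10    # measurement of the reference quantity
x2, sigma2 = 12.30, 0.15    # measurement of interest
sigmaZ     = 0.20           # offset calibration uncertainty
mu1_ref    = 9.80           # "exact" value of mu_1 (hypothetical)

delta = sigmaZ**2 / (sigma1**2 + sigmaZ**2) * (mu1_ref - x1)
u_off = 1.0 / np.sqrt(1.0/sigma1**2 + 1.0/sigmaZ**2)  # offset average term
mu2_best  = x2 + delta
mu2_sigma = np.sqrt(sigma2**2 + u_off**2)
print(mu2_best, mu2_sigma)
assert sigma2 <= mu2_sigma <= np.sqrt(sigma2**2 + sigmaZ**2)
\end{verbatim}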
\subsection{Counting measurements in the presence of background}
As an example of a different kind of systematic effect, let
us think of counting experiments in the presence of background. For
example we are searching for a new particle, we make some
selection
cuts and count $x$ events. But we also expect
an average number of background events $\lambda_{B_\circ}\pm\sigma_B$,
where $\sigma_B$ is the standard uncertainty of $\lambda_{B_\circ}$,
\underline{not} to be confused with
$\sqrt{\lambda_{B_\circ}}$. What can we say about
$\lambda_S$, the true value of the average number associated with the
signal? First we will treat the case in which the expected number
of background events is well known
($\sigma_B/\lambda_{B_\circ}\ll 1$), and then the
general case:
\begin{description}
\item[\fbox{$\sigma_B/\lambda_{B_\circ}\ll 1$}:] the true value of the
sum of signal and background is $\lambda=\lambda_S+\lambda_{B_\circ}$.
The likelihood is
\begin{equation}
P(x|\lambda) =\frac{e^{-\lambda}\lambda^x}{x!}\,.
\end{equation}
Applying Bayes' theorem we have
\begin{eqnarray}
f(\lambda_S|x,\lambda_{B_\circ})
&=& \frac{
e^{-(\lambda_{B_\circ}+\lambda_S)}
(\lambda_{B_\circ}+\lambda_S)^x
f_\circ(\lambda_S)
}
{ \int_0^\infty
e^{-(\lambda_{B_\circ}+\lambda_S)}
(\lambda_{B_\circ}+\lambda_S)^x
f_\circ(\lambda_S)d\lambda_S
}\,.
\end{eqnarray}
Choosing again $f_\circ(\lambda_S)$ uniform (in a reasonable interval)
this gets simplified. The integral in the denominator
can be done easily by parts and the final result is:
\begin{eqnarray}
f(\lambda_S|x,\lambda_{B_\circ}) &=&
\frac{e^{-\lambda_S}(\lambda_{B_\circ}+\lambda_S)^x}
{x!\sum_{n=0}^x\frac{\lambda_{B_\circ}^n}{n!}}
\label{eq:inv_p_a}\\
F(\lambda_S|x,\lambda_{B_\circ})
&=& 1 -
\frac{e^{-\lambda_S} \sum_{n=0}^x\frac{(\lambda_{B_\circ}+\lambda_S)^n}{n!}}
{\sum_{n=0}^x\frac{\lambda_{B_\circ}^n}{n!}} \label{eq:inv_p_b}\,.
\end{eqnarray}
From (\ref{eq:inv_p_a}-\ref{eq:inv_p_b})
it is possible to calculate in the usual way the best estimate
and the
credibility intervals of $\lambda_S$.
Two particular cases are of interest:
\begin{itemize}
\item
if $\lambda_{B_\circ}=0$ then formulae
(\ref{eq:inv_poiss1}-\ref{eq:inv_poiss2}) are recovered.
In such a case one measured count is enough to claim a
signal (if somebody is willing to believe that
really $\lambda_{B_\circ}=0$ without any uncertainty$\ldots$);
\item
if $x=0$ then
\begin{equation}
f(\lambda_S|x,\lambda_{B_\circ}) = e^{-\lambda_S}\,,
\end{equation}
\underline{independently} of $\lambda_{B_\circ}$.
This result is not really obvious.
\end{itemize}
\item[\fbox{Any $g_\circ(\lambda_B)$}:]
In the general case, the true value of the average
number of background events $\lambda_B$ is unknown.
We only know that it is distributed around
$\lambda_{B_\circ}$ with standard deviation $\sigma_B$
and probability density function $g_\circ(\lambda_B)$,
not necessarily a Gaussian. What changes with respect to
the previous case is the initial distribution, now a joint
function
of $\lambda_S$ and of $\lambda_B$. Assuming
$\lambda_B$ and $\lambda_S$ independent the prior density function
is
\begin{equation}
f_\circ(\lambda_S,\lambda_B)=f_\circ(\lambda_S)g_\circ(\lambda_B)\,.
\end{equation}
We leave $f_\circ$ in the form of a joint distribution to
indicate that the result we shall get is the most general
for this kind of problem.
The likelihood, on the other hand, remains
the same as in the previous example.
The inference on $\lambda_S$ is done in the
usual way, applying Bayes' theorem and marginalizing with respect to
$\lambda_B$:
\begin{equation}
f(\lambda_S|x,g_\circ(\lambda_B))
=
\frac{\int e^{-(\lambda_B+\lambda_S)} (\lambda_B+\lambda_S)^x
f_\circ(\lambda_S,\lambda_B)d\lambda_B}
{\int\!\!\int e^{-(\lambda_B+\lambda_S)} (\lambda_B+\lambda_S)^x
f_\circ(\lambda_S,\lambda_B)d\lambda_Sd\lambda_B}\,.
\end{equation}
The previous case (formula (\ref{eq:inv_p_a}))
is recovered if the only value allowed
for $\lambda_B$ is $\lambda_{B_\circ}$ and $f_\circ(\lambda_S)$
is uniform:
\begin{equation}
f_\circ(\lambda_S,\lambda_B) = k\delta(\lambda_B-\lambda_{B_\circ})\,.
\end{equation}
\end{description}
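As a numerical illustration of the first case, the fragment below
(hypothetical values of $x$ and $\lambda_{B_\circ}$) evaluates
(\ref{eq:inv_p_a}) on a grid, checks its normalization, and extracts
the mode and a $95\,\%$ credibility upper limit:
\begin{verbatim}
import numpy as np
from math import factorial

x, lamB = 5, 3.0                       # hypothetical counts and background
lam  = np.linspace(0.0, 25.0, 25001)   # grid for lambda_S
dlam = lam[1] - lam[0]

norm = factorial(x) * sum(lamB**n / factorial(n) for n in range(x + 1))
f = np.exp(-lam) * (lamB + lam)**x / norm

print(np.sum(f) * dlam)                # ~1: the density is normalized
print(lam[np.argmax(f)])               # mode, here max(x - lamB, 0) = 2
F = np.cumsum(f) * dlam                # crude cumulative distribution
print(lam[np.searchsorted(F, 0.95)])   # 95% upper limit on lambda_S
\end{verbatim}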
\section{Approximate methods}
\subsection{Linearization}
We have seen in the above examples how to use the
general formula (\ref{eq:ginf1}) for practical applications.
Unfortunately,
when the problem becomes more complicated
one starts facing integration problems.
For this reason
approximate methods are generally used.
We will derive the approximation rules consistently
with the approach followed in these notes
and then the resulting formulae will
be compared with the ISO recommendations.
To do this, let us neglect for
a while all the influence quantities which could produce
unknown systematic errors. In this case (\ref{eq:ginf1})
can be replaced by (\ref{eq:ginf2}), which can be further simplified
if we remember that correlations between the results
originate from unknown systematic errors. In the absence of these,
the joint distribution of all quantities $\underline{\mu}$
is simply the product of
marginal ones:
\begin{equation}
f_{R}(\underline{\mu}) =
\prod_i f_{R_i}(\mu_i)\,,
\end{equation}
with
\begin{equation}
f_{R_i}(\mu_i) = f_{R_i}(\mu_i|x_i,\underline{h}_\circ) =
\frac{f(x_i|\mu_i, \underline{h}_\circ)f_\circ(\mu_i)}
{\int f(x_i|\mu_i, \underline{h}_\circ)
f_\circ(\mu_i) d\mu_i}\,.
\label{eq:ginf2a}
\end{equation}
The symbol $f_{R_i}(\mu_i)$ indicates that we are dealing with
\underline{\it raw values}\footnote{The choice of the adjective
``raw'' will become clearer later on.} evaluated at
$\underline{h}=\underline{h}_\circ$. Since for any variation
of $\underline{h}$ the inferred values of $\mu_i$ will change,
it is convenient to denote by the same subscript $R$ the
quantities obtained for $\underline{h}_\circ$:
\begin{equation}
f_{R_i}(\mu_i) \longrightarrow f_{R_i}(\mu_{R_i})\,.
\end{equation}
Let us indicate with $\widehat{\mu}_{R_i}$
and $\sigma_{R_i}$ the best estimates and the standard uncertainties
of the raw values:
\begin{eqnarray}
\widehat{\mu}_{R_i} &=& E[\mu_{R_i}]\\
\sigma_{R_i}^2 &=& Var(\mu_{R_i})\,.
\end{eqnarray}
For any possible configuration of conditioning
hypotheses $\underline{h}$, \underline{\it corrected} values $\mu_i$
are obtained:
\begin{equation}
\mu_i=\mu_{R_i} + g_i(\underline{h})\,.
\label{eq:correction}
\end{equation}
The function which relates the corrected value to the raw value
and to the systematic effects has been denoted by $g_i$ so that it
is not confused with a probability density function.
Expanding (\ref{eq:correction})
in series around $\underline{h}_\circ$
we finally arrive at the expression
which will allow us to make the approximated
evaluations of uncertainties:
\begin{equation}
\boxed{
\mu_i= \mu_{R_i}
+ \sum_l \frac{\partial g_i}{\partial h_l}
(h_l-h_{\circ_l}) + \ldots\,
}
\label{eq:linearizzazione}
\end{equation}
(All derivatives are \underline{evaluated at}
$\{\widehat{\mu}_{R_i},\underline{h}_\circ\}$. To simplify
the notation, a similar convention
will be used in the following formulae.)
Neglecting the terms of the expansion above the first order,
and taking the expected values, we get
\begin{eqnarray}
\widehat{\mu}_i &=& E[\mu_i] \nonumber \\
&\approx& \widehat{\mu}_{R_i}\,; \\
\sigma_{\mu_i}^2 & = & E\left[(\mu_i-E[\mu_i])^2\right] \nonumber \\
&\approx &
\sigma_{R_i}^2 +
\sum_l\left(\frac{\partial g_i}{\partial h_l}\right)^2
\!\sigma_{h_l}^2 \nonumber \\
& &\left\{ +
2\sum_{l< m} \left(\frac{\partial g_i}{\partial h_l}\right)
\left(\frac{\partial g_i}{\partial h_m}\right)
\rho_{lm}\sigma_{h_l}\sigma_{h_m}
\right\} \,; \label{eq:propag1} \\
Cov(\mu_i,\mu_j) &=& E\left[(\mu_i-E[\mu_i])(\mu_j-E[\mu_j])\right]
\nonumber \\
&\approx & \sum_l\left(\frac{\partial g_i}{\partial h_l}\right)
\left(\frac{\partial g_j}{\partial h_l}\right)\sigma_{h_l}^2
\nonumber \\
& & \left\{ +
2\sum_{l< m} \left(\frac{\partial g_i}{\partial h_l}\right)
\left(\frac{\partial g_j}{\partial h_m}\right)
\rho_{lm}\sigma_{h_l}\sigma_{h_m}
\right\}\,. \label{eq:propag2}
\end{eqnarray}
The terms included within $\{\cdot\}$ vanish if the unknown systematic
errors are uncorrelated, and the formulae become simpler.
Unfortunately, very often this is not the
case, as when several calibration constants
are simultaneously obtained from a fit (for example, in most linear
fits slope and intercept have a correlation coefficient close to $-0.9$).
Sometimes the expansion
(\ref{eq:linearizzazione}) is not performed around the best values
of $\underline{h}$ but around their \underline{nominal values}, in the
sense that the correction for the known value of the systematic errors
has not yet been applied
(see section \ref{ss:known_syst}). In this case (\ref{eq:linearizzazione})
should be replaced by
\begin{equation}
\mu_i=\mu_{R_i}
+ \sum_l \frac{\partial g_i}{\partial h_l}
(h_l-h_{N_l}) + \ldots\,
\label{eq:linearizzazione1}
\end{equation}
where the subscript $N$ stands for {\it nominal}. The best value of $\mu_i$
is then
\begin{eqnarray}
\widehat{\mu}_i &=& E[\mu_i] \nonumber \\
&\approx& \widehat{\mu}_{R_i}
+ E\left[\sum_l \frac{\partial g_i}{\partial h_l}(h_l-h_{N_l})\right]
\nonumber \\
&=& \widehat{\mu}_{R_i} + \sum_l\delta \mu_{i_l}
\label{eq:syst_corr}\,.
\end{eqnarray}
(\ref{eq:propag1}) and (\ref{eq:propag2}) instead remain valid,
with the condition that the derivative is calculated at
$\underline{h}_N$.
If \underline{$\rho_{lm}=0$} it is possible to
rewrite (\ref{eq:propag1}) and (\ref{eq:propag2})
in the following way, which is very convenient for practical applications:
\begin{eqnarray}
\sigma_{\mu_i}^2 &\approx &
\sigma_{R_i}^2 +
\sum_l\left(\frac{\partial g_i}{\partial h_l}\right)^2
\!\sigma_{h_l}^2 \\
&=& \sigma_{R_i}^2 + \sum_l u_{i_l}^2
\,; \label{eq:propag1a} \\
Cov(\mu_i,\mu_j)
&\approx & \sum_l\left(\frac{\partial g_i}{\partial h_l}\right)
\left(\frac{\partial g_j}{\partial h_l}\right)\sigma_{h_l}^2
\\
&=& \sum_l s_{ij_{l}}
\left|\frac{\partial g_i}{\partial h_l}\right|\sigma_{h_l}
\left|\frac{\partial g_j}{\partial h_l}\right|\sigma_{h_l}
\label{eq:propag2a} \\
&=& \sum_l s_{ij_{l}} u_{i_l}u_{j_l} \\
&=& \sum_l Cov_l(\mu_i,\mu_j)
\,. \label{eq:propag2b}
\end{eqnarray}
$u_{i_l}$ is the component of the standard uncertainty due to effect $h_l$.
$s_{ij_{l}}$ is equal to the product of the signs of the derivatives,
which takes
into account whether the uncertainties are positively or negatively
correlated.
To summarize, when systematic effects are not correlated
with each other,
the following quantities are needed
to evaluate the corrected result, the
combined uncertainties and the correlations:
\begin{itemize}
\item
the raw $\widehat{\mu}_{R_i}$ and $\sigma_{R_i}$;
\item
the best estimates of the corrections $\delta \mu_{i_l}$ for each
systematic effect $h_l$;
\item
the best estimate of the standard deviation $u_{i_l}$ due to the
imperfect knowledge of the systematic effect;
\item
for any pair $\{\mu_i,\mu_j\}$ the sign of the correlation
$s_{ij_{l}}$ due to the effect $h_l$.
\end{itemize}
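For two quantities and a few uncorrelated systematic effects this
bookkeeping reduces to a handful of array operations. The sketch below
(all numbers are invented) implements (\ref{eq:syst_corr}),
(\ref{eq:propag1a}) and (\ref{eq:propag2b}):
\begin{verbatim}
import numpy as np

mu_R  = np.array([1.50, 1.80])     # raw values
sig_R = np.array([0.07, 0.08])     # raw standard uncertainties
# corrections delta_mu_{il} and components u_{il}: one column per effect
delta = np.array([[+0.03, -0.02, 0.00],
                  [+0.04, +0.01, 0.00]])
u     = np.array([[0.05,  0.02,  0.04],
                  [0.06,  0.01,  0.00]])
s12   = np.array([+1, -1, +1])     # signs s_{12 l}

mu    = mu_R + delta.sum(axis=1)                 # corrected values
sigma = np.sqrt(sig_R**2 + (u**2).sum(axis=1))   # combined uncertainties
cov12 = np.sum(s12 * u[0] * u[1])                # covariance, effect by effect
print(mu, sigma, cov12)
\end{verbatim}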
In High Energy Physics applications it is frequently the case that
the derivatives appearing in
(\ref{eq:syst_corr}-\ref{eq:propag2a}) cannot be calculated directly,
as for example when $h_l$ are parameters of a simulation program,
or acceptance cuts. Then variations of $\underline{\mu}_i$ are
usually studied varying a particular $h_l$ within
a {\it reasonable} interval, holding the other influence
quantities at the nominal value.
$\delta \mu_{i_l}$ and $u_{i_l}$ are calculated from
the interval $\Delta_i^\pm$ of variation of the true value
for a given variation $\Delta_{h_l}^\pm$ of $h_l$,
and from the probabilistic meaning of the intervals (i.e.
from the assumed distribution of the true value).
This empirical procedure for determining
$\delta \mu_{i_l}$ and $u_{i_l}$ has the advantage that it
can take into account non linear effects, since it
directly measures the difference
$\widehat{\mu}_i - \widehat{\mu}_{R_i}$ for a given difference
$h_l-h_{N_l}$.
Some examples are given
in section \ref{ss:examples},
and two typical experimental applications
will be discussed in more detail
in section \ref{sec:cov}.
\subsection{BIPM and ISO recommendations}
In this section we compare the results obtained in the previous
section with the recommendations of the Bureau International des Poids
et Mesures (BIPM) and the International Organization for Standardization
(ISO) on {\sl ``the expression of experimental uncertainty''}.
\begin{enumerate}
\item
\begin{quote}
{\sl\small The uncertainty in the result of a measurement generally consists
of several components which may be grouped into two categories according
to the way in which their numerical value is estimated:
\begin{description}
\item[A:] those which are evaluated by statistical methods;
\item[B:] those which are evaluated by other means.
\end{description}
There is not always a simple correspondence between the classification into
categories A or B and the previously used classification into
``random'' and ``systematic'' uncertainties.
The term ``systematic uncertainty'' can be misleading
and should be avoided.
The detailed report of the uncertainty should
consist of a complete list of the components, specifying for each the
method used to obtain its numerical result.
}
\end{quote}
Essentially
the first recommendation states that all uncertainties
can be treated probabilistically. The distinction between types A and B
is subtle and can be misleading if one thinks of ``statistical
methods'' as synonymous with ``probabilistic methods'' - as currently
done in High Energy Physics.
Here ``statistical'' has the classical meaning of
repeated measurements.
\item
\begin{quote}
{\sl \small
The components in category A are characterized by the estimated
variances $s_i^2$ (or the estimated ``standard deviations'' $s_i$)
and the number of degrees of freedom $\nu_i$. Where appropriate,
the covariances should be given.
}
\end{quote}
The estimated variances correspond to $\sigma_{R_i}^2$ of the
previous section. The degrees of freedom are related to small
samples and to the {\it Student t} distribution. The problem of
small samples is not discussed in these notes,
but clearly this recommendation
is a relic of frequentistic methods. With the approach followed in
these notes
there is no need to talk about degrees of freedom,
since the Bayesian inference defines the final
probability function $f(\mu)$ completely.
\item
\begin{quote}
{\sl\small The components in category B should be characterized
by quantities $u_j^2$, which may be considered as approximations
to the corresponding variances, the existence of which is assumed. The
quantities $u_j^2$ may be treated like variances and the quantities
$u_j$ like standard deviations. Where appropriate,
the covariances should be treated in a similar way.
}
\end{quote}
Clearly, this
recommendation is meaningful only in a Bayesian framework.
\item
\begin{quote}
{\sl \small
The combined uncertainty should be characterized by the numerical
value obtained by applying the usual method for the combination
of variances. The combined uncertainty and its components should
be expressed in the form of ``standard deviations''.
}
\end{quote}
This is
what we have found in (\ref{eq:propag1}-\ref{eq:propag2}).
\item
\begin{quote}
{\sl \small
If, for particular applications, it is necessary to multiply
the combined uncertainty by a factor to obtain
an overall uncertainty, the multiplying factor used must always
be stated.
}
\end{quote}
This last recommendation states once more that the uncertainty is ``by
default'' the standard deviation
of the true value distribution. Any other quantity
calculated
to obtain a credibility interval with a certain
probability level should be clearly stated.
\end{enumerate}
To summarize, these are the basic ingredients of
the BIPM/ISO recommendations:
\begin{description}
\item[subjective definition of probability:] it allows variances to
be assigned conceptually to any physical quantity
which has an uncertain value;
\item[uncertainty as standard deviation]\
\begin{itemize}
\item
it is ``standard'';
\item
the rule of combination
(\ref{eq:linc2}-\ref{eq:linc5}) applies to standard deviations and not
to confidence intervals;
\end{itemize}
\item[combined standard uncertainty:] it is obtained by the usual formula
of ``error propagation'' and it makes use of variances,
covariances and first derivatives;
\item[central limit theorem:] it makes, under proper conditions,
the true value
normally distributed if one has several sources of uncertainty.
\end{description}
Consultation of the {\it Guide}\cite{ISO}
is recommended
for further explanations about the justification of the norms,
for the
description of evaluation procedures, as well as for examples.
I would just like to
end this section with some examples of the evaluation of
type B uncertainties and with some words of caution
concerning the use of approximations and of linearization.
\subsection{Evaluation of type B uncertainties}
The ISO {\it Guide} states that
\begin{quote}
{\sl \small
For estimate $x_i$ of an input quantity\footnote{
By ``input quantity'' the ISO {\it Guide} means
any of the contributions $h_l$
or $\mu_{R_i}$ which enter into (\ref{eq:propag1}-\ref{eq:propag2}).}
$X_i$ that has not been
obtained from repeated observations, the $\ldots$ standard
uncertainty $u_i$ is evaluated by scientific judgement based on all the
available information on the possible variability of $X_i$. The pool
of information may include
\begin{itemize}
\item
previous measurement data;
\item
experience with or general knowledge of the behaviour and properties of
relevant materials and instruments;
\item
manufacturer's specifications;
\item
data provided in calibration and other certificates;
\item
uncertainties assigned to reference data taken from handbooks.
\end{itemize}
}
\end{quote}
\subsection{Examples of type B uncertainties}\label{ss:examples}
\begin{enumerate}
\item
A manufacturer's calibration certificate states that the uncertainty,
defined as \underline{$k$ standard deviations},
is ``$\pm\Delta$'':
$$u=\frac{\Delta}{k}\,.$$
\item
A result
is reported
in a publication
as $\overline{x}\pm \Delta$,
stating that the average has been performed on 4 measurements
and the uncertainty is a $95\,\%$ confidence interval.
One has to conclude that the confidence interval has been calculated
using the \underline{{\it Student} $t$}:
$$u=\frac{\Delta}{3.18}\,.$$
\item
A manufacturer's specification states that the
error on a quantity should not exceed $\Delta$. With this
limited information one has to assume a \underline{uniform distribution}:
$$u=\frac{2\Delta}{\sqrt{12}}=\frac{\Delta}{\sqrt{3}}\,;$$
\item
A physical parameter of a Monte Carlo is believed to lie in the
interval of $\pm \Delta$ around its best value,
but not with uniform distribution:
the probability that the parameter is
at the center is higher than the probability that it is at the edges of the
interval. With this information a \underline{triangular distribution}
can reasonably be assumed:
$$u=\frac{\Delta}{\sqrt{6}}\,.$$
{\bf Note} that the coefficient in front of $\Delta$
changes from the $0.58$ of the
previous example to the $0.41$ of this. If the interval
$\pm\Delta$ were a $3\sigma$ interval then the coefficient
would have been
equal to $0.33$. These variations - to
be considered extreme - are smaller than
the statistical fluctuations of empirical standard
deviations estimated from $\approx 10$ measurements.
This shows that one should not be worried that the type B
uncertainties are less accurate than
type A, especially if one tries
to model
the distribution of the physical quantity
{\it honestly}.
\item
The absolute energy calibration of an electromagnetic
calorimeter module is not
exactly known and it is estimated to be between the nominal one
and $+10\,\%$. The ``statistical'' resolution is known by test beam
measurements to be $18\%/\sqrt{E/\mbox{GeV}}$. What is the uncertainty
on the energy measurement of an electron which has apparently released
30 GeV?
\begin{itemize}
\item
The energy has to be \underline{corrected} for the best estimate
of the calibration constant: $+5\,\%$:
$$E_R=31.5\pm 1.0\,\mbox{GeV}\,.$$
\begin{itemize}
\item
assuming a \underline{uniform}
distribution of the true calibration constant:
$u=31.5 \times 0.1/\sqrt{12} = 0.9\, \mbox{GeV}$:
$$E=31.5\pm 1.3\, \mbox{GeV}\,;$$
\item
assuming a \underline{triangular} distribution: $u=1.3\, \mbox{GeV}$:
$$E=31.5\pm 1.6\, \mbox{GeV}\,;$$
\end{itemize}
\item
interpreting the maximum deviation from the nominal calibration
as uncertainty
(see comment at the end of section \ref{ss:known_syst}):
$$E=30.0\pm 1.0\pm 3.0 \, \mbox{GeV} \rightarrow E=30.0\pm 3.2
\, \mbox{GeV}\,.$$
As already remarked earlier in these notes,
while reasonable assumptions (in this case
the first two) give consistent results, this is not true if one
makes inconsistent use of the information just for the sake
of giving ``safe'' uncertainties.
\end{itemize}
\item
As a more realistic and slightly more complicated example, let us
take the case of a measurement of two physical quantities performed
with the same apparatus. The result, before the correction
for systematic
effects and only with type {\it A} uncertainties is
$\mu_{R_1}=1.50\pm 0.07$ and
$\mu_{R_2}=1.80\pm 0.08$ (arbitrary units).
Let us assume that the measurements
depend on eight influence quantities $h_l$, and that most of them
influence both physical quantities. For simplicity we consider
the $h_l$ to be independent of each other.
{ \footnotesize
\begin{table}
\begin{center}
\begin{tabular}{|cc|ccc|ccc|c|} \hline
\multicolumn{2}{|c|}{$h_l$} &
\multicolumn{3}{|c|}{$\mu_{R_1}=1.50\pm 0.07$} &
\multicolumn{3}{|c|}{$\mu_{R_2}=1.80\pm 0.08$} &
correlation \\ \hline
$l$ & model &
$\Delta_{1_l}^\pm$ & $\delta \mu_{1_l}$ & $u_{1_l}$ &
$\Delta_{2_l}^\pm$ & $\delta \mu_{2_l}$ & $u_{2_l}$ &
$Cov_l(\mu_1,\mu_2)$ \\ \hline
1 & normal &
$\pm 0.05$ & 0 & 0.05 &
0 & 0 & 0 & 0 \\
2 & normal &
$ 0 $ & 0 & 0 &
$\pm 0.08$ & 0.00 & 0.08 & 0 \\
3 & normal &
$ \left\{^{+0.10}_{-0.04}\right. $ & $+0.03$ & 0.07 &
$ \left\{^{+0.12}_{-0.05}\right. $ & $+0.035$& 0.085 & $+0.0060$ \\
4 & uniform &
$ \left\{^{+0.00}_{-0.15}\right. $ & $-0.075$ & 0.04 &
$ \left\{^{+0.07}_{-0.00}\right. $ & $+0.035$ & 0.02 & $-0.0008$ \\
5 & triangular &
$\pm 0.10$ & 0.00 & 0.04 &
$\pm 0.02$ & 0.00 & 0.008 & $+0.0003$ \\
6 & triangular&
$ \left\{^{+0.02}_{-0.08}\right. $ & $-0.03$ & 0.02 &
$ \left\{^{+0.01}_{-0.03}\right. $ & $-0.010$ & 0.008 & $+0.0016$ \\
7 & uniform &
$ \left\{^{+0.10}_{-0.06}\right. $ & $+0.02$ & 0.05 &
$ \left\{^{+0.14}_{-0.06}\right. $ & $+0.04$ & 0.06 & $+0.0030$ \\
8 & triangular&
$ \left\{^{+0.03}_{-0.02}\right. $ & $+0.005$ & 0.010 &
$\pm 0.03$ & 0.000 & 0.012 & $+0.00012$ \\ \hline
``$\sum_{h_l}$'' & normal &
& $-0.05$ & 0.12 & & $+0.10$ & 0.13 & +0.010 \\ \hline
\multicolumn{2}{|c|}{} &
\multicolumn{3}{|c|}{$\mu_{1}=1.45\pm 0.14$} &
\multicolumn{3}{|c|}{$\mu_{2}=1.90\pm 0.15$} &
+0.010 \\
\multicolumn{2}{|c|}{} &
\multicolumn{3}{|c|}{$(\mu_{1}=1.45\pm 0.10 \pm 0.10$)} &
\multicolumn{3}{|c|}{$(\mu_{2}=1.90\pm 0.11 \pm 0.10$)} &
($\rho = +0.49$) \\ \hline
\multicolumn{2}{|c|}{} &
\multicolumn{6}{|c|}{$\mu_2+\mu_1 = 3.35\pm 0.25$} & \\ \hline
\multicolumn{2}{|c|}{} &
\multicolumn{6}{|c|}{$\mu_2-\mu_1 = 0.45\pm 0.15$} & \\ \hline
\multicolumn{2}{|c|}{} &
\multicolumn{6}{|c|}{$\overline{\mu} = 1.65\pm 0.12$} & \\ \hline
\end{tabular}
\end{center}
\caption{\sf Example of the results on two physical quantities,
corrected for several systematic effects (arbitrary units).}
\label{tab:syst}
\end{table}
}
Tab. \ref{tab:syst} gives
the details of the corrections for the systematic effects and
of the uncertainty evaluation,
performed using (\ref{eq:syst_corr}), (\ref{eq:propag1a})
and (\ref{eq:propag2a}).
To see the importance
of the correlations, the result of the sum and of the difference
of $\mu_1$ and $\mu_2$ is also reported.
In order to split the final result into ``individual'' and ``common''
uncertainty (in parentheses in Tab. \ref{tab:syst})
we have to remember that,
if the error is additive, the covariance between $\mu_1$
and $\mu_2$ is
given by the variance
of the unknown systematic error (see (\ref{eq:covm1m2})).
The average $\overline{\mu}$ between $\mu_1$ and $\mu_2$,
assuming it
has a physical meaning, can be evaluated either using
the results of section \ref{sec:cov}, or simply calculating the
average weighted with the inverse of the
variances due to the individual uncertainties,
and then adding quadratically
the common uncertainty at the end. Also $\overline{\mu}$ is reported in
Tab. \ref{tab:syst}.
\end{enumerate}
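The conversion rules used in the examples above can be collected in a
small helper function (a sketch; the model names are our own labels):
\begin{verbatim}
import numpy as np

def type_b_u(delta, model, k=1.0):
    """Half-width Delta -> standard uncertainty u for the models above."""
    if model == "k_sigma":      # Delta quoted as k standard deviations
        return delta / k
    if model == "uniform":      # flat distribution in +-Delta
        return delta / np.sqrt(3.0)
    if model == "triangular":   # triangular distribution in +-Delta
        return delta / np.sqrt(6.0)
    raise ValueError("unknown model: " + model)

for m in ("uniform", "triangular"):
    print(m, round(type_b_u(1.0, m), 2))   # 0.58 and 0.41, as quoted above
\end{verbatim}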
\subsection{Caveat concerning a blind use of approximate methods}
The mathematical apparatus of variances and covariances
of (\ref{eq:propag1}-\ref{eq:propag2})
is often seen as the most complete description of uncertainty
and in most cases used blindly in further uncertainty calculations.
It must be clear, however, that
this is just an approximation based on linearization. If the
function which relates the corrected value to the raw value and the
systematic effects is not linear then the linearization may cause
trouble.
An interesting case is discussed in section \ref{sec:cov}.
There is another problem which may arise from the simultaneous use
of Bayesian estimators \underline{and} approximate methods.
Let us introduce the problem with an example.
\begin{description}
\item[Example 1:] 1000 independent
measurements of the efficiency of a detector have been
performed (or 1000 measurements of branching ratio, if you
prefer).
Each measurement was
carried out on a base of 100 events and each time 10 favorable events
were observed (this is obviously strange - though not impossible -
but it simplifies the calculations). The result of each
measurement will be (see (\ref{eq:infbinom1}-\ref{eq:infbinom2})):
\begin{eqnarray}
\widehat{\epsilon}_i &=& \frac{10+1}{100+2} = 0.1078 \\
\sigma(\epsilon_i) & = & \sqrt{\frac{11\cdot 91}{103\cdot 102^2}} = 0.031\,.
\end{eqnarray}
Combining the 1000 results using
the standard weighted average
procedure gives
\begin{equation}
\epsilon = 0.1078 \pm 0.0010\,.
\end{equation}
Alternatively, taking the complete set of results to be equivalent to
100000 trials with 10000 favorable events, the combined result is
\begin{equation}
\epsilon^\prime = 0.10001\pm 0.0009\,.
\end{equation}
(the same as if one had used
Bayes' theorem
iteratively
to infer $f(\epsilon)$ from the 1000 partial results.)
The conclusions are in disagreement and the
first result is clearly mistaken.
\end{description}
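The numbers of Example 1 are easily reproduced from the moments of
the Beta posterior quoted above (formulae
(\ref{eq:infbinom1}-\ref{eq:infbinom2})):
\begin{verbatim}
from math import sqrt

N, x, n_meas = 100, 10, 1000

eps_i = (x + 1) / (N + 2)
sig_i = sqrt((x + 1) * (N - x + 1) / ((N + 3) * (N + 2)**2))
print(eps_i, sig_i / sqrt(n_meas))   # naive combination: 0.1078 +- 0.0010

Np, xp = n_meas * N, n_meas * x      # pooled: 10^5 trials, 10^4 successes
eps_p = (xp + 1) / (Np + 2)
sig_p = sqrt((xp + 1) * (Np - xp + 1) / ((Np + 3) * (Np + 2)**2))
print(eps_p, sig_p)                  # correct combination: 0.10001 +- 0.0009
\end{verbatim}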
The same problem arises in the case of inference of the
Poisson distribution
parameter $\lambda$ and, in general, whenever $f(\mu)$ is not symmetrical
around $E[\mu]$.
\begin{description}
\item[Example 2:] Imagine an experiment running
continuously for one year,
searching for monopoles and identifying none.
The consistency with zero can be stated either
quoting $E[\lambda]=1$ and $\sigma_\lambda=1$
(the final density is $f(\lambda)=e^{-\lambda}$), or
a $95\,\%$
upper limit $\lambda < 3$. In terms of rate (number of monopoles
per day) the result would be either $E[r]=2.7\cdot 10^{-3}$,
$\sigma(r)=2.7\cdot 10^{-3}$, or an upper limit $r<8.2\cdot 10^{-3}$.
It is easy to show that, if we take the 365 results for
each of the running days and combine them
using
the standard weighted average, we get
$r=1.0\pm 0.05$ monopoles/day! This absurdity is not
caused by
the Bayesian method, but by the standard rules for combining the
results (the weighted average formulae
(\ref{eq:waver1}) and (\ref{eq:waver2})
are derived from the normal distribution hypothesis).
Using Bayesian inference would have led to
a consistent and reasonable result no matter how the 365 days of running
had been subdivided for partial analysis.
\end{description}
This suggests that in some cases it could be preferable to
give the result in terms
of the value of $\mu$
which maximizes $f(\mu)$ ($p_m$ and $\lambda_m$ of
sections \ref{ss:binom} and \ref{ss:poisson}). This way of presenting
the results is similar to that suggested by the maximum likelihood
approach, with the difference that for $f(\mu)$ one should take
the final probability density function and not simply the likelihood.
Since it is practically impossible to summarize
the outcome of an inference
in only two
numbers (best value and uncertainty),
a description of the method
used to evaluate them should be provided, except when
$f(\mu)$ is approximately normally distributed
(fortunately this happens most of the time).
\section{Indirect measurements}
Conceptually this is a very simple task in the Bayesian framework,
whereas
the frequentistic one requires a lot of gymnastics,
going back and forth from the logical level of true values
to the logical level of estimators. If one accepts that
the true values are just random variables\footnote{
To make the formalism lighter, let us call
both the
random variable associated to the quantity and the quantity itself
by the same name $X_i$ (instead of $\mu_{x_i}$).},
then,
calling $Y$ a function of other quantities $X$,
each
having a probability density function $f(x)$,
the probability density function
$f(y)$ of $Y$ can be calculated with the
standard formulae
which follow from the rules of
probability.
Note that in the approach presented in these notes
uncertainties due to systematic effects
are treated in the same way as indirect measurements.
It is worth repeating that
there is no conceptual
distinction between various components
of the measurement uncertainty.
When approximations are sufficient,
formulae (\ref{eq:propag1}) and (\ref{eq:propag2}) can be used.
Let us take an example for which the linearization does not give
the right result:
\begin{description}
\item[Example:] The speed of a proton is measured with a time-of-flight
system. Find the $68$, $95$ and $99\,\%$ probability intervals
for the energy, knowing that $\beta=v/c=0.9971$,
and that distance and time have been measured with a $0.2\,\%$ accuracy.
The relation
$$E=\frac{mc^2}{\sqrt{1-\beta^2}}$$
is strongly non linear. The results given by the approximated method
and the correct one are, respectively:\\
\begin{center}
\begin{tabular}{|c|c|c|}\hline
C.L. & linearization & correct \\
(\%) & $E$ (GeV) & $E$ (GeV) \\ \hline
68 & $6.4 \le E \le 18$ & $8.8 \le E \le 64$ \\
95 & $0.7 \le E \le 24$ & $7.2 \le E < \infty$ \\
99 & $0. \le E \le 28$ & $6.6 \le E < \infty$ \\ \hline
\end{tabular}
\end{center}
\end{description}
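The ``correct'' column of the table is most easily obtained by Monte
Carlo. The following minimal sketch uses the quoted $0.2\,\%$ accuracies
and the proton mass; the nominal distance and time are arbitrary, since
only their ratio matters. Note that the upper percentiles are extremely
sensitive to the tails near $\beta=1$, where the energy diverges:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
m, n = 0.938, 1_000_000                  # proton mass (GeV), sample size

d = rng.normal(1.0, 0.002, n)            # distance, 0.2% accuracy
t = rng.normal(1.0, 0.002, n) / 0.9971   # time, 0.2% accuracy
beta = d / t                             # mean value 0.9971

with np.errstate(invalid="ignore"):
    E = m / np.sqrt(1.0 - beta**2)
E[~(beta < 1.0)] = np.inf                # beta >= 1: unbounded energy

print(np.percentile(E, [16.0, 84.0]))    # ~68% interval, cf. the table
print(np.percentile(E, [2.5, 97.5]))     # the upper edge diverges
\end{verbatim}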
\section{Covariance matrix of
experimental results}\label{sec:cov}
\subsection{Building the covariance matrix of experimental data}
In physics applications,
it is rarely the case that the covariance
between the best estimates
of two physical quantities\footnote{
In this section\cite{syst}
the symbol $X_i$ will indicate
the variable associated to the $i$-th physical quantity
and $X_{ik}$ its $k$-th direct measurement; $x_i$
the best estimate of
its value, obtained by an average over many direct measurements or
indirect measurements, $\sigma_i$ the
standard deviation, and $y_i$ the value corrected for the calibration
constants. The weighted average of several $x_i$ will be
denoted by $\overline{x}$.},
each given by the
arithmetic average of direct measurements
($x_i = \overline{X_i} = \frac{1}{n}\sum_{k=1}^n X_{ik}$),
can be
evaluated from the sample covariance of the two averages
\begin{equation}
Cov(x_i,x_j)
=\frac{1}{n(n-1)}\sum_{k=1}^{n}(X_{ik}-\overline{X}_i)(X_{jk}-\overline{X}_j)\,.
\label{eq:cov1}
\end{equation}
More frequent is the well understood
case in which the physical
quantities are obtained as a result of a $\chi^2$
minimization, and the terms of the inverse of the
covariance matrix are related to the curvature of $\chi^2$
at its minimum:
\begin{equation}
\left(V^{-1}\right)_{ij} = \frac{1}{2}\left.
\frac{\partial^2\chi^2}{\partial X_i\partial X_j}
\right|_{x_i,x_j}\, .
\label{eq:cov2}
\end{equation}
In most cases one determines independent values
of physical quantities
with the same
detector, and the correlation between them originates from
the detector calibration uncertainties.
Frequentistically, the use of (\ref{eq:cov1}) in this case
would correspond to having a ``sample
of detectors'', with each of which a
measurement of all the physical quantities is performed.
A way of building the covariance matrix from the direct measurements
is to consider the original measurements and the calibration
constants as a common set of independent and uncorrelated
measurements, and then to calculate corrected values that take into
account the calibration constants.
The variance/covariance propagation will automatically provide the full
covariance matrix of the set of results.
Let us derive it for two cases that happen frequently, and then
proceed to the general case.
\subsubsection{Offset uncertainty}
Let
$x_i\pm\sigma_i$ be the $i=1\ldots n$ results
of independent measurements
and ${\bf V}_X$ the (diagonal) covariance matrix.
Let us assume that they are all affected by the same calibration
constant $c$, having a standard uncertainty $\sigma_c$.
The corrected results are then $y_i = x_i + c$.
We can assume, for
simplicity, that the most probable value of $c$ is 0, i.e.
the detector is well calibrated.
One has to
consider the calibration constant as
the physical quantity $X_{n+1}$, whose best estimate is
$x_{n+1} = 0$.
A term $V_{X_{n+1,n+1}} = \sigma^2_c$ must be added to the
covariance matrix.
The covariance matrix of the corrected results is given by the
transformation:
\begin{equation}
{\bf V}_Y = {\bf M}{\bf V}_X{\bf M}^T\,,
\end{equation}
where $M_{ij}= \left.\frac{\partial Y_i}{\partial X_j}
\right|_{x_j}$.
The elements of ${\bf V}_Y$ are given by
\begin{equation}
V_{Y_{kl}} = \sum_{ij}
\left.
\frac{\partial Y_k}{\partial X_i}
\right|_{x_i}
\left.
\frac{\partial Y_l}{\partial X_j}
\right|_{x_j}
V_{X_{ij}}\, .
\end{equation}
In this case we get:
\begin{eqnarray}
\sigma^2(Y_i) & = & \sigma_i^2+\sigma_c^2 \\
Cov(Y_i,Y_j) & = & \sigma_c^2 \hspace{1.3 cm} (i\ne j)\\
\rho_{ij} & = & \frac{\sigma_c^2}
{\sqrt{\sigma_i^2+\sigma_c^2}
\sqrt{\sigma_j^2+\sigma_c^2}} \\
&=& \frac{1}
{\sqrt{1+\left(\frac{\sigma_i}{\sigma_c}\right)^2}
\sqrt{1+\left(\frac{\sigma_j}{\sigma_c}\right)^2}}\, ,
\end{eqnarray}
reobtaining the results of section \ref{sec:off_err}.
The total uncertainty on the single measurement is given by the
combination in quadrature of the individual and the common
standard uncertainties, and all the covariances are equal to $\sigma^2_c$.
To verify, in a simple case, that the result is reasonable,
let us consider only two independent quantities $X_1$ and $X_2$,
and a calibration constant $X_3 = c$, having
an expected value equal to zero. From these we can calculate
the correlated quantities $Y_1$ and $Y_2$ and finally their
sum ($S\equiv Z_1$) and difference ($D\equiv Z_2$). The results are:
\begin{eqnarray}
{\bf V}_Y &=&
\left( \begin{array}{cc}
\sigma_1^2+\sigma_c^2 & \sigma_c^2 \\
\sigma_c^2 & \sigma_2^2+\sigma_c^2
\end{array}
\right)\\
& & \\
& & \\
{\bf V}_Z &=&
\left( \begin{array}{cc}
\sigma_1^2 + \sigma_2^2+
4\cdot\sigma_c^2 &\ \sigma_1^2-\sigma_2^2 \\
\sigma_1^2-\sigma_2^2 & \ \sigma_1^2 + \sigma_2^2
\end{array}
\right) \, .
\end{eqnarray}
It follows that
\begin{eqnarray}
\sigma^2(S) & = & \sigma_1^2 +\sigma_2^2 +(2\cdot\sigma_c)^2 \\
\sigma^2(D) & = & \sigma_1^2 + \sigma_2^2\, ,
\end{eqnarray}
as intuitively expected.
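The same check can be carried out numerically with the matrix formalism
itself; a small sketch with arbitrary values of the uncertainties:
\begin{verbatim}
import numpy as np

sigma1, sigma2, sigmac = 0.3, 0.5, 0.4   # arbitrary example values
VX = np.diag([sigma1**2, sigma2**2, sigmac**2])

M = np.array([[1.0, 0.0, 1.0],           # Y_i = X_i + c
              [0.0, 1.0, 1.0]])
VY = M @ VX @ M.T                        # V_Y = M V_X M^T
print(VY)     # diagonal: sigma_i^2 + sigma_c^2; off-diagonal: sigma_c^2

K = np.array([[1.0,  1.0],               # S = Y_1 + Y_2
              [1.0, -1.0]])              # D = Y_1 - Y_2
print(K @ VY @ K.T)                      # Var(S) and Var(D) as above
\end{verbatim}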
\subsubsection{Normalization uncertainty}
Let us consider now the case where the calibration constant
is the scale factor $f$, known with a standard uncertainty $\sigma_f$.
Also in this case, for simplicity and without losing generality,
let us suppose that the most probable value of $f$ is 1.
Then
$ X_{n+1} = f$, i.e.
$ x_{n+1} = 1$, and
$V_{X_{n+1,n+1}} = \sigma^2_f$.
We get
\begin{eqnarray}
\sigma^2(Y_i) & = & \sigma_i^2 + \sigma_f^2 x_i^2 \\
Cov(Y_i,Y_j) & = & \sigma_f^2 x_i x_j
\hspace{1.3 cm}(i\ne j) \\
\rho_{ij} & = & \frac{x_i x_j}
{\sqrt{x_i^2+\frac{\sigma_i^2}{\sigma_f^2}}
\sqrt{x_j^2+\frac{\sigma_j^2}{\sigma_f^2}}} \\
|\rho_{ij}| & = & \frac{1}
{\sqrt{1+\left(\frac{\sigma_i}{\sigma_f x_i}\right)^2}
\sqrt{1+\left(\frac{\sigma_j}{\sigma_f x_j}\right)^2}
}\,.
\end{eqnarray}
To verify the results let us consider two independent measurements
$X_1$ and $X_2$, let us
calculate the correlated quantities $Y_1$ and $Y_2$, and finally their
product ($P\equiv Z_1$) and their ratio ($R\equiv Z_2$):
\begin{eqnarray}
{\bf V}_Y & = &
\left( \begin{array}{cc}
\sigma_1^2+\sigma_f^2\cdot x_1^2
& \sigma_f^2\cdot x_1\cdot x_2 \\
& \\
\sigma_f^2\cdot x_1\cdot x_2
& \sigma_2^2+\sigma_f^2\cdot x_2^2
\end{array}
\right)\\
& & \\
& & \\
{\bf V}_Z & = &
\left( \begin{array}{cc}
\sigma_1^2\cdot x_2^2 +
\sigma_2^2\cdot x_1^2 +
4\cdot\sigma_f^2\cdot x_1^2\cdot x_2^2 &
\ \sigma_1^2-\sigma_2^2\cdot\frac{x_1^2}{x_2^2} \\
& \\
\sigma_1^2 -
\sigma_2^2\cdot\frac{x_1^2}{x_2^2}
&
\ \frac{\sigma_1^2}{x_2^2} +
\sigma_2^2\cdot\frac{x_1^2}{x_2^4}
\end{array}
\right)\, .
\end{eqnarray}
It follows that:
\begin{eqnarray}
\sigma^2(P) & = & \sigma_1^2\cdot x_2^2 +
\sigma_2^2\cdot x_1^2 +
(2\cdot\sigma_f\cdot x_1\cdot x_2)^2 \\
\sigma^2(R) & = & \frac{\sigma_1^2}{x_2^2} +
\sigma_2^2\cdot\frac{x_1^2}{x_2^4} \, .
\end{eqnarray}
Just as an unknown common offset error cancels in differences
and is enhanced in sums, an unknown normalization error has
a similar effect
on the ratio and the product. It is also interesting to calculate
the standard uncertainty of a difference in case of a normalization error:
\begin{eqnarray}
\sigma^2(D) & = & \sigma_1^2+\sigma_2^2
+\sigma_f^2\cdot(x_1-x_2)^2\, .
\end{eqnarray}
The contribution from an unknown
normalization error vanishes if the two
values are equal.
\subsubsection{General case}
Let us assume there are $n$ independently
measured values $x_i$ and
$m$
{\it calibration constants} $c_j$
with their covariance matrix
${\bf V}_c$. The latter
can also be theoretical parameters influencing the data, and
moreover they may be
correlated, as usually
happens if, for example, they are parameters of a calibration fit.
We can then include the $c_j$ in the vector that contains the
measurements and ${\bf V}_c$ in the covariance matrix ${\bf V}_X$:
\begin{equation}
\underline{x} = \left( \begin{array}{c}
x_1 \\
\vdots \\
x_n \\
c_1 \\
\vdots \\
c_m
\end{array}
\right)\, , \ \ \ \ \ \ \
{\bf V}_X = \left( \begin{array}{cccc|c}
\sigma_1^2 & 0 & \cdots & 0 & \\
0 & \sigma_2^2 & \cdots & 0 & \\
\cdots & \cdots & \cdots & \cdots & {\bf 0} \\
0 & 0 & \cdots & \sigma_n^2 & \\
\hline
& & {\bf 0} & & {\bf V}_c
\end{array}
\right)\, .
\end{equation}
The corrected quantities are obtained from the most general
function
\begin{equation}
Y_i = Y_i(X_i,\underline{c}) \hspace{2.0 cm} (i=1,2, \ldots,
n)\, ,
\end{equation}
and the covariance matrix
${\bf V}_Y$ from the covariance propagation
${\bf V}_Y = {\bf M}{\bf V}_X{\bf M}^T$.
As a frequently encountered example, we can think of several
normalization constants, each affecting a subsample of the data -
as is
the case where each of several detectors
measures a set of physical quantities.
Let us consider just three quantities
($X_i$) and three
uncorrelated
normalization standard uncertainties ($\sigma_{f_j}$),
the first one common to
$X_1$ and $X_2$, the second to
$X_2$ and $X_3$ and the third to all three.
We get the following covariance matrix:
$$
\left( \begin{array}{ccc}
\sigma_1^2 +
\left(\sigma_{f_1}^2 +
\sigma_{f_3}^2 \right)\cdot x_1^2
& \left(\sigma_{f_1}^2 +
\sigma_{f_3}^2 \right)\cdot x_1\cdot x_2
& \sigma_{f_3}^2\cdot x_1\cdot x_3 \\
& & \\
\left(\sigma_{f_1}^2 +
\sigma_{f_3}^2\right) \cdot x_1\cdot x_2
& \sigma_2^2 +
\left(\sigma_{f_1}^2 + \sigma_{f_2}^2 +
\sigma_{f_3}^2 \right)\cdot x_2^2
& \left( \sigma_{f_2}^2 +
\sigma_{f_3}^2 \right)\cdot x_2\cdot x_3 \\
& & \\
\sigma_{f_3}^2 \cdot x_1 \cdot x_3
& \left( \sigma_{f_2}^2 +
\sigma_{f_3}^2\right) \cdot x_2\cdot x_3
& \sigma_3^2 +
\left( \sigma_{f_2}^2 +
\sigma_{f_3}^2\right) \cdot x_3^2
\end{array}
\right)\, .
$$
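This matrix need not be written out by hand: the sketch below
(hypothetical numbers) obtains it from the propagation
${\bf V}_Y={\bf M}{\bf V}_X{\bf M}^T$, using a 0/1 table that records
which normalization constant affects which quantity:
\begin{verbatim}
import numpy as np

x   = np.array([10.0, 20.0, 30.0])      # hypothetical measured values
sig = np.array([0.5, 0.7, 0.9])         # individual uncertainties
sf  = np.array([0.01, 0.02, 0.03])      # sigma_{f_1}, sigma_{f_2}, sigma_{f_3}

A = np.array([[1, 0, 1],                # f_1 and f_3 act on X_1
              [1, 1, 1],                # all three act on X_2
              [0, 1, 1]], dtype=float)  # f_2 and f_3 act on X_3

VX = np.diag(np.concatenate([sig**2, sf**2]))
M  = np.hstack([np.eye(3), A * x[:, None]])  # dY_i/dX_i=1, dY_i/df_j=x_i
VY = M @ VX @ M.T
print(VY)                               # reproduces the matrix above
\end{verbatim}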
\subsection{Use and misuse of the covariance matrix to fit correlated data}
\subsubsection{Best estimate of the true value from two correlated values}
Once the covariance matrix is built
one uses it in a $\chi^2$ fit to get the
parameters of a function.
The quantity to be minimized is
$\chi^2$, defined as
\begin{equation}
\chi^2 = \underline{\Delta}^T {\bf V}^{-1}\underline{\Delta}\, ,
\end{equation}
where $\underline{\Delta}$ is the vector of the differences
between the theoretical and the experimental values.
Let us
consider the simple case in which two results of the same physical quantity
are available, and the individual and the common
standard uncertainty are known.
The best estimate of the true value of the physical quantity
is then obtained by fitting the constant
$Y=k$ through the data points. In this simple case
the $\chi^2$ minimization can be performed easily.
We will consider
the two cases of offset and normalization uncertainty. As before,
we assume that the detector is well calibrated, i.e. the most
probable value of the calibration constant is, respectively
for the two cases, 0 and 1, and hence $y_i=x_i$.
\subsubsection{Offset uncertainty}
Let $x_1\pm\sigma_1$ and $x_2\pm\sigma_2$ be the two measured values,
and $\sigma_c$ the common standard uncertainty.
The $\chi^2$ is
\begin{eqnarray}
\chi^2 & = & \frac{1}{D} \left[
(x_1-k)^2\cdot (\sigma_2^2+\sigma_c^2)
+(x_2-k)^2\cdot (\sigma_1^2+\sigma_c^2)\right. \\
& & \hspace{0.7 cm} \left. -2\cdot (x_1-k)\cdot
(x_2-k)\cdot\sigma_c^2
\right]\, ,
\end{eqnarray}
where
$D=\sigma_1^2\cdot\sigma_2^2+ (\sigma_1^2+\sigma_2^2)\cdot\sigma_c^2$
is the determinant of the covariance matrix.
Minimizing $\chi^2$ and using the second derivative
calculated at the minimum we obtain the best value of $k$ and its
standard deviation:
\begin{eqnarray}
\widehat{k} &=& \frac{x_1\cdot\sigma_2^2+x_2\cdot\sigma_1^2}
{\sigma_1^2+\sigma_2^2}
\hspace{0.3 cm}(= \overline{x}) \\
& & \\
\sigma^2(\widehat{k})
&=& \frac{\sigma_1^2\cdot\sigma_2^2}
{\sigma_1^2+\sigma_2^2} + \sigma_c^2\, .
\end{eqnarray}
The most probable value of
the physical quantity is exactly what one obtains
from the
average $\overline{x}$
weighted with the inverse of the individual variances.
Its overall uncertainty is the quadratic sum of the standard
deviation of the weighted
average and the common one. The result coincides with the simple
expectation.
\subsubsection{Normalization uncertainty}
Let $x_1\pm\sigma_1$ and $x_2\pm\sigma_2$ be the two measured values,
and $\sigma_f$ the common standard uncertainty on the scale.
The $\chi^2$ is
\begin{eqnarray}
\chi^2 & = & \frac{1}{D} \left[
(x_1-k)^2\cdot (\sigma_2^2+x_2^2\cdot\sigma_f^2)
+(x_2-k)^2\cdot (\sigma_1^2+x_1^2\cdot\sigma_f^2)\right. \\
& & \hspace{0.7 cm} \left. -2\cdot (x_1-k)\cdot
(x_2-k)\cdot x_1\cdot x_2\cdot\sigma_f^2
\right]\, ,
\end{eqnarray}
where $D=\sigma_1^2\cdot\sigma_2^2 +
(x_1^2\cdot\sigma_2^2 +x_2^2\cdot\sigma_1^2)\cdot\sigma_f^2\,$\,.
We obtain in this case the following result:
\begin{eqnarray}
\widehat{k} &=& \frac{x_1\cdot\sigma_2^2+x_2\cdot\sigma_1^2}
{\sigma_1^2+\sigma_2^2+(x_1-x_2)^2\cdot\sigma_f^2} \\
& & \\
\sigma^2(\widehat{k})
&=& \frac{\sigma_1^2\cdot\sigma_2^2+
(x_1^2\cdot\sigma_2^2+x_2^2\cdot\sigma_1^2)\cdot\sigma_f^2}
{\sigma_1^2+\sigma_2^2 + (x_1-x_2)^2\cdot\sigma_f^2}\, .
\end{eqnarray}
With respect to the previous case, $\widehat{k}$
has a new term $(x_1-x_2)^2\cdot\sigma_f^2$ in the denominator. As long as
this is negligible with respect to the individual variances
we still get the weighted average $\overline{x}$,
otherwise a smaller value is obtained.
Calling $r$ the ratio between $\widehat{k}$ and
$\overline{x}$, we obtain
\begin{equation}
r = \frac{\widehat{k}}{\overline{x}} = \frac{1}
{1+\frac{(x_1-x_2)^2}
{\sigma_1^2+\sigma_2^2}\cdot\sigma_f^2 }\, .
\end{equation}
Written in this way, one can see that the deviation from the
simple average value depends on the compatibility of the two values
and on the normalization uncertainty.
This can be understood in the following way:
as soon as the two values are in some disagreement, the fit
starts to vary the normalization factor
- in a hidden way -
and to squeeze the scale
by an amount allowed by $\sigma_f$, in order to minimize the
$\chi^2$.
The reason the fit prefers
normalization factors smaller than 1
under these conditions
lies in the standard formalism of covariance propagation,
where only first derivatives are considered. This
implies that lowering the normalization factor does not rescale
the individual standard deviations, while it does bring the data
points closer together.
\begin{description}
\item[Example 1.] Consider
the results of two measurements,
$8.0\cdot (1\pm 2\,\%)$ and $8.5\cdot(1\pm 2\,\%)$, having
a $10\,\%$ common normalization error.
Assuming that the two measurements
refer to the same physical quantity,
the best estimate
of its true value can be obtained
by fitting the points to a constant.
Minimizing
$\chi^2$ with
${\bf V}$ estimated empirically by the data, as explained
in the previous section, one obtains a value of
$7.87\pm0.81$, which is surprising to say the least,
since the most
probable result is outside the interval determined by the two
measured values.
\item[Example 2.] A real-life
case of this strange effect, which occurred during
the global analysis of the $R$ ratio in $e^+e^-$
performed by the CELLO colla\-bo\-ration\cite{CELLO},
is shown in
Fig.~\ref{fig:cello}.
The data points represent the averages in
energy bins of the results of the PETRA and PEP experiments. They
are all correlated and
the error bars show the
total error
(see \cite{CELLO} for details). In particular, at the
intermediate stage of the analysis shown in the figure, an
overall $1\,\%$ systematic error due to theoretical uncertainties
was included in the covariance matrix.
The $R$ values above $36\,$GeV
show the first hint of the rise of the $e^+e^-$ cross section
due to the $Z^\circ$ pole.
At that time it was
very interesting to prove that the observation was not
just a statistical fluctuation.
In order to test this, the
$R$ measurements were fitted with
a theoretical function having \underline{no}
$Z^\circ$ contributions,
using only data below a certain energy.
It was expected that a fast increase of
$\chi^2$ per number of degrees of freedom $\nu$
would be observed
above $36\,$GeV,
indicating that a theoretical prediction without
$Z^\circ$ would be inadequate for describing the high energy data.
The surprising result
was a ``repulsion''
(see Fig.~\ref{fig:cello})
between the experimental data and the fit:
including the high energy points with larger $R$, a lower
curve was obtained,
while $\chi^2/\nu$ remained almost constant.
\end{description}
\begin{figure}
\centering\epsfig{file=ree.eps,width=9cm,clip=}
\caption{\sf {\it R} measurements
from PETRA and PEP experiments
with the best fits
of QED+QCD to all the data (full line) and only below
$36\,$GeV (dashed line). All data points are correlated (see text).}
\label{fig:cello}
\end{figure}
To see the source
of this effect more explicitly let us consider an alternative way
often used to take
the normalization uncertainty
into account.
A scale factor $f$, by which all
data points are multiplied, is introduced into the
expression of the $\chi^2$:
\begin{equation}
\chi^2_A = \frac{(f\cdot x_1 - k)^2}{(f\cdot\sigma_1)^2} +
\frac{(f\cdot x_2 - k)^2}{(f\cdot\sigma_2)^2} +
\frac{(f-1)^2}{\sigma_f^2}\, .
\label{eq:chi2_a}
\end{equation}
Let us also consider the same expression when the individual
standard deviations
are not rescaled:
\begin{equation}
\chi^2_B = \frac{(f\cdot x_1 - k)^2}{\sigma_1^2} +
\frac{(f\cdot x_2 - k)^2}{\sigma_2^2} +
\frac{(f-1)^2}{\sigma_f^2}\, .
\label{eq:chi2_b}
\end{equation}
The use of $\chi^2_A$ always gives the result $\widehat{k} = \overline{x}$,
because the term
$(f-1)^2/\sigma_f^2$ is harmless\footnote{
This can be seen
rewriting (\ref{eq:chi2_a}) as
\begin{equation}
\frac{(x_1 - k/f)^2}{\sigma_1^2} +
\frac{(x_2 - k/f)^2}{\sigma_2^2} +
\frac{(f-1)^2}{\sigma_f^2}\, .
\end{equation}
For any $f$,
the first two terms determine the value of $k$, and the
third one binds $f$ to 1.}
as far as the value of the minimum $\chi^2$
and the determination on $\widehat{k}$ are concerned.
Its only influence is
on $\sigma(\widehat{k})$, which turns out to be equal to the
quadratic combination of the
weighted average standard deviation with
$\sigma_f\cdot\overline{x}$,
the normalization uncertainty on the
average.
This result corresponds to the usual one
when the normalization factor
in the definition of $\chi^2$ is not included,
and the overall uncertainty is added
at the end.
Instead,
the use of $\chi^2_B$ is equivalent to the
covariance matrix: the same values of the minimum $\chi^2$,
of $\widehat{k}$ and of $\sigma(\widehat{k})$ are obtained, and
$\widehat{f}$ at the minimum turns out to be exactly the $r$ ratio
defined above. This demonstrates that the effect happens
when the data values are rescaled independently of their
standard uncertainties. The effect can become huge if the
data show mutual disagreement.
The equality of the results obtained with $\chi^2_B$
with those obtained with the covariance matrix allows us to
study, in a simpler way,
the behaviour of $r$ (= $\widehat{f}$)
when an arbitrary number of data points are analysed.
The fitted value of the normalization factor is
\begin{equation}
\widehat{f} = \frac{1}
{1+\sum_{i=1}^n\frac{(x_i-\overline{x})^2}{\sigma_i^2}\cdot\sigma_f^2}\,.
\end{equation}
If the values of $x_i$ are consistent with
a common
true value it can be shown
that the expected value of $f$ is
\begin{equation}
\langle f\rangle = \frac{1}{1+(n-1)\cdot\sigma_f^2}\,.
\end{equation}
Hence, there is a bias on the result when for a non-vanishing
$\sigma_f$ a large number of data points are fitted. In particular,
the fit on average produces a bias larger than the
normalization uncertainty itself if $\sigma_f > 1/(n-1)$.
One can also see that $\sigma^2(\widehat{k})$ and the
minimum of $\chi^2$
obtained with the
covariance matrix or with $\chi^2_B$ are smaller by the same factor
$r$
than those obtained with $\chi^2_A$.
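The closed formulae above make it easy to reproduce the numbers of
Example 1 and to exhibit the hidden scale factor (a minimal sketch):
\begin{verbatim}
import numpy as np

x  = np.array([8.0, 8.5])
s  = 0.02 * x                  # 2% individual uncertainties
sf = 0.10                      # 10% common normalization uncertainty

den   = s[0]**2 + s[1]**2 + (x[0] - x[1])**2 * sf**2
k_hat = (x[0]*s[1]**2 + x[1]*s[0]**2) / den
D     = s[0]**2*s[1]**2 + (x[0]**2*s[1]**2 + x[1]**2*s[0]**2) * sf**2
print(k_hat, np.sqrt(D / den))   # 7.87 +- 0.81, outside [8.0, 8.5]

xbar  = (x[0]*s[1]**2 + x[1]*s[0]**2) / (s[0]**2 + s[1]**2)
f_hat = 1.0 / (1.0 + np.sum((x - xbar)**2 / s**2) * sf**2)
print(f_hat, k_hat / xbar)       # f_hat equals r = k_hat / xbar
\end{verbatim}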
\subsubsection{Peelle's Pertinent Puzzle}
To summarize, when
there is an overall uncertainty
due to an unknown systematic error and
the covariance matrix is used to define
$\chi^2$, the behaviour of the fit
depends on whether
the uncertainty is on the offset or on the scale. In the first case
the best estimates of the function parameters are exactly
those obtained without overall uncertainty,
and only the parameters' standard deviations are affected.
In the case of unknown \underline{normalization}
errors, biased results can be obtained. The
size of the bias depends on the
fitted function, on the
magnitude of the overall uncertainty and on the number of data points.
It has also been shown that
this bias comes from the linearization performed in the
usual covariance propagation. This means that, even though the use
of the covariance matrix can be very useful in analyzing the
data in a compact way using available computer algorithms,
care is required
if there is one large normalization uncertainty
which affects all the data.
The effect discussed above has also been observed independently
by R.W. Peelle
and reported the year after the analysis
of the CELLO data\cite{CELLO}. The problem has been
extensively discussed among the community of
nuclear physicists, where it is presently
known as ``Peelle's
Pertinent Puzzle''\cite{CS}.
A recent case in High Energy Physics in which this effect has
been found to have biased the result is discussed in \cite{Morris}.
\section{Multi-effect multi-cause inference: unfolding}
\subsection{Problem and typical solutions}
In any experiment the distribution of the measured observables
differs from that of the corresponding
{\it true} physical quantities due to
physics and detector effects. For example, one may be interested
in measuring
the variables $x$ and $Q^2$
of Deep Inelastic Scattering events.
In such a case one is able to build
statistical estimators
which
in principle have a physical
meaning similar to the true quantities,
but which have a non-vanishing variance and are also distorted
due to QED and QCD radiative corrections, parton fragmentation,
particle decay and limited detector performances.
The aim of the experimentalist
is to {\it unfold}
the observed distribution from all these distortions
so as
to extract the true distribution
(see also \cite{Blobel} and \cite{Zech}).
This requires
a satisfactory knowledge of the overall effect of the
distortions on the true physical quantity.
When dealing with only one physical variable the usual method
for handling this problem is the so called
{\it bin-to-bin} correction: one evaluates
a {\it generalized efficiency} (it
may even be larger than unity)
by calculating, with a Monte Carlo simulation,
the ratio between the number of events falling in a certain
bin of the reconstructed variable and the number of events in the
\underline{same} bin of the true variable.
This efficiency is then used
to estimate the number of true events from the
number of events observed in that bin. Clearly this method requires the
same subdivision in bins of the true and the experimental variable
and hence it cannot take into account
large migrations of events from one
bin to the others.
Moreover it neglects the
unavoidable correlations between adjacent bins. This approximation is valid
only if the amount of migration is negligible
and if the standard deviation of the
smearing is smaller than the bin size.
An attempt to solve the problem
of migrations is sometimes
made building a matrix which connects the
number of events generated in one bin to
the number of events observed
in the other bins. This matrix is then inverted and applied to the measured
distribution. This immediately produces inversion problems
if the matrix is singular. On the other hand, there is no reason
from a probabilistic point of view why the inverse matrix
should exist. This can easily be seen by
taking the example of two bins of the
true quantity both of which have
the same probability of being observed
in each of the bins of the measured quantity.
It follows that treating probability distributions
as vectors in space is not correct, even
in principle. Moreover
the method is not able to handle large statistical fluctuations
even if the matrix can be inverted (if we have, for example,
a very large number of events with which
to estimate its elements and we choose
the binning in such a way as to make the matrix not singular).
The easiest way to see this is to think of the unavoidable negative
terms of the inverse of the matrix which in some extreme cases
may yield negative numbers of unfolded events.
Quite apart from these
theoretical reservations,
the actual experience of those who have used this method is
rather discouraging, the results being highly unstable.
\subsection{Bayes' theorem stated in terms of causes and effects}
Let us
state Bayes' theorem in terms of several independent
{\it causes} ($C_i,\ i=1, 2, \ldots, n_C$) which
can produce one {\it effect} ($E$).
For example, if we consider Deep Inelastic Scattering events,
the effect $E$ can be the observation of
an event in a cell of the measured quantities
$\{\Delta Q^2_{meas}, \Delta x_{meas}\}$.
The causes $C_i$ are then all the possible cells of the true values
$\{\Delta Q^2_{true}, \Delta x_{true}\}_i$.
Let us assume we know the
{\it initial probability} of the causes $P(C_i)$ and the conditional
probability that the $i$-th cause will produce the effect $P(E|C_i)$.
The Bayes formula is then
\begin{equation}
P(C_i|E) = \frac{P(E|C_i)\cdot P(C_i)}
{\sum_{l=1}^{n_C} P(E|C_l)\cdot P(C_l)}\, .
\label{eq:bayes_unf}
\end{equation}
$P(C_i|E)$ depends on the initial
probability of the causes.
If one has no better prejudice
concerning $P(C_i)$
the process of inference can be started
from a uniform distribution.
The final distribution depends also on $P(E|C_i)$. These probabilities must
be calculated or estimated with Monte Carlo methods. One
has to keep in mind
that, in contrast to $P(C_i)$, these probabilities are not updated
by the observations. So if there are ambiguities
concerning the choice of $P(E|C_i)$ one has to try
them all in order to evaluate
their {\it systematic effects} on the results.
\subsection{Unfolding an experimental distribution}
If one observes $n(E)$ events with effect $E$, the expected
number of events assignable to each of the causes is
\begin{eqnarray}
\widehat{n}(C_i) = n(E)\cdot P(C_i|E)\,.
\label{eq:nc}
\end{eqnarray}
As the outcome of a measurement one has several possible effects
$E_j$ ($j=1, 2, \ldots, n_E$) for a given cause $C_i$.
For each of them the
Bayes formula
(\ref{eq:bayes_unf})
holds, and
$P(C_i|E_j)$ can be evaluated.
Let us write (\ref{eq:bayes_unf})
again in the case of $n_E$ possible effects\footnote{The
broadening of the distribution due to the smearing suggests
a choice of $n_E$ larger than $n_C$. It is worth remarking
that there is no need to reject events where a measured quantity
has a value outside the range allowed for the physical quantity.
For example, in the case of
Deep Inelastic Scattering events, cells with
$x_{meas} > 1$ or $Q_{meas}^2 < 0$ give information
about the true distribution too.},
indicating the initial probability of the causes with
$P_\circ (C_i)$:
\begin{equation}
P(C_i|E_j) = \frac{P(E_j|C_i)\cdot P_\circ (C_i)}
{\sum_{l=1}^{n_C} P(E_j|C_l)\cdot P_\circ (C_l)}\, .
\label{eq:bays_unf}
\end{equation}
One should note that:
\begin{itemize}
\item
$\sum_{i=1}^{n_C} P_\circ (C_i) = 1$, as usual.
Notice that if the probability of a cause
is initially set to zero it can never change, i.e. if a cause
does not exist it cannot be invented;
\item
$\sum_{i=1}^{n_C} P(C_i|E_j) = 1$\, :
this normalization condition, mathematically
trivial since it comes directly from (\ref{eq:bays_unf}),
indicates that
each effect must come from one or more of the
causes under examination. This means that if the
observables also contain
a non negligible amount of background, this needs to be included
among the causes;
\item
$0 \le \epsilon_i \equiv \sum_{j=1}^{n_E} P(E_j|C_i) \le 1$\,:
there is no need for
each cause to produce at least
one of the effects.
$\epsilon_i$ gives the {\it efficiency} of finding the
cause $C_i$ in any of the possible effects.
\end{itemize}
After $N_{obs}$ experimental observations one obtains
a distribution of frequencies
$\underline{n}(E) \equiv \{n(E_1), n(E_2), \ldots , n(E_{n_E})\} $.
The expected number of events
to be assigned
to each of the causes (taking into account only the observed events)
can be calculated applying (\ref{eq:nc}) to each effect:
\begin{eqnarray}
\left.\widehat{n}(C_i)\right|_{obs} & =& \sum_{j=1}^{n_E}n(E_j)\cdot P(C_i|E_j)
\,.
\end{eqnarray}
When inefficiency\footnote{If $\epsilon_i=0$ then
$\widehat{n}(C_i)$ will
be set to zero, since the experiment is not sensitive to the cause $C_i$.}
is also brought into the picture,
the best estimate of the true
number of events becomes
\begin{eqnarray}
\widehat{n}(C_i)& =& \frac{1}{\epsilon_i}
\sum_{j=1}^{n_E}n(E_j)\cdot P(C_i|E_j)
\hspace{1. cm}\epsilon_i \ne 0\,.
\end{eqnarray}
From these unfolded events we can estimate
the true total number of events,
the final probabilities of the causes and the overall efficiency:
\begin{eqnarray}
\widehat{N}_{true}& =& \sum_{i=1}^{n_C} \widehat{n}(C_i)\nonumber \\
\widehat{P}(C_i) \equiv P(C_i|\underline{n}(E))
&=&\frac{\widehat{n}(C_i)}{\widehat{N}_{true}} \nonumber \\
\widehat{\epsilon} &=& \frac{N_{obs}}{\widehat{N}_{true}}\,. \nonumber
\end{eqnarray}
If the initial distribution $\underline{P_\circ} (C)$
is not consistent with the
data, it will not agree with the final distribution
$\underline{\widehat{P}}(C)$.
The closer the initial distribution is to
the true distribution, the better
the agreement is.
For simulated data one can
easily verify
that the distribution $\underline{\widehat{P}}(C)$
lies between $\underline{P_\circ} (C)$
and the true one. This suggests proceeding iteratively.
Fig.~\ref{fig:unf} shows an example of a two-dimensional distribution
unfolding.
More details about iteration strategy, evaluation of uncertainty,
etc. can be found in \cite{unfolding}.
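To make the scheme concrete, the following minimal Python sketch (added for illustration; the array names are ours, the smearing matrix is assumed to come from Monte Carlo, and every effect bin is assumed reachable from at least one cause) implements a single unfolding step:
\begin{verbatim}
import numpy as np

def unfold_step(P_EgC, nE, P0):
    """One unfolding step.  P_EgC[j, i] = P(E_j|C_i),
    nE[j] = observed counts, P0[i] = prior P_0(C_i); all arrays."""
    num = P_EgC * P0[np.newaxis, :]               # P(E_j|C_i) P_0(C_i)
    P_CgE = num / num.sum(axis=1, keepdims=True)  # Bayes: P(C_i|E_j)
    eps = P_EgC.sum(axis=0)                       # efficiencies eps_i
    nC = (nE[:, np.newaxis] * P_CgE).sum(axis=0)  # events assigned to C_i
    nC = np.where(eps > 0, nC / np.where(eps > 0, eps, 1.0), 0.0)
    N_true = nC.sum()
    return nC, nC / N_true, nE.sum() / N_true     # n(C), P(C), efficiency
\end{verbatim}
Feeding the returned $\widehat{P}(C)$ back in as the new prior and repeating reproduces the iterative strategy described above.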
\begin{figure}
\centering\epsfig{file=unfold.eps,width=\linewidth,clip=}
\caption{\sf Example of a two dimensional unfolding: true distribution
(a), smeared distribution (b)
and results after the first 4 steps ((c) to (f)).}
\label{fig:unf}
\end{figure}
I would just like to comment on an obvious criticism that may be made:
``{\it the iterative procedure is against the Bayesian spirit}, since
the same data are used many times
for the same inference''. In principle
the objection is valid, but in practice this
technique is a ``trick'' to give
the experimental data a weight (an importance) larger than
that of the priors. A more rigorous procedure which took into
account uncertainties and correlations of the initial distribution
would have been much more complicated.
An attempt of this kind can be found in \cite{Weise3}.
Examples of unfolding procedures performed with
non-Bayesian methods are described
in \cite{Blobel} and \cite{Zech}.
\newpage
\section{Conclusions}
These notes have shown how it is possible to build a
powerful theory of measurement uncertainty starting from
a definition of probability which seems
of little use at first sight,
and from a formula that - some say -
looks too trivial to be called a theorem.
The main advantages the Bayesian approach has
over the others are
(besides the non-negligible fact that it is
able to treat problems on
which the others fail):
\begin{itemize}
\item
the recovery of the intuitive idea of probability as a valid
concept for treating scientific problems;
\item
the simplicity and naturalness of the basic tool;
\item
the capability of combining prior prejudices and experimental
facts;
\item
the automatic updating property as soon as new
facts are observed;
\item
the transparency of the method which allows the
different assumptions on which the inference may depend
to be checked and changed;
\item
the high degree of awareness that it gives to its user.
\end{itemize}
When employed on the problem of measurement errors,
as a special application of conditional probabilities,
it allows all possible sources of uncertainties
to be treated in the most general way.
When the problems get complicated and the general method becomes
too heavy to handle, it is possible to use
approximate methods based on the linearization of
the final probability density function to calculate
the first and second moments of the distribution. Although the
formulae are exactly those of the standard ``error propagation'',
the interpretation of the true value as a random variable simplifies
the picture and allows easy inclusion of
uncertainties due to systematic effects.
Nevertheless there are some cases in which the linearization
may cause severe problems, as shown in Section 14. In such
cases one needs to go back to the general method or to apply
other kinds of approximations which are not just blind use
of the covariance matrix.
The problem of unfolding dealt with in the last section should
be considered a side remark with respect to the
mainstream of the notes. It is in fact a mixture of genuine Bayesian
method (the basic formula), approximations (covariance matrix evaluation)
and {\it ad hoc} prescriptions of iteration and smoothing,
used to sidestep the formidable problem
of seeking the general solution.
\newpage
I would like to conclude with three quotations:
\begin{itemize}
\item
\begin{quote}
{\sl \small
``($\ldots$) the best evaluation of the uncertainty ($\ldots$) must be given
($\ldots$) \\
The method stands, therefore, in contrast to certain older methods
that have the following two ideas in common:
\begin{itemize}
\item
The first idea is that the uncertainty reported should be 'safe'
or 'conservative' ($\ldots$) In fact, because the evaluation
of the uncertainty of a measurement result is problematic,
it was often made deliberately large.
\item
The second idea is that the influences that give rise to
uncertainty were always recognizable as either 'random'
or 'systematic' with the two being of different nature; ($\ldots$)''
\end{itemize}
}(ISO {\it Guide}\cite{ISO})
\end{quote}
\item
\begin{quote}
{\sl \small
``Well, QED is very nice and impressive, but when everything
is so neatly wrapped up in blue bows, with all
experiments in exact agreement with each other and with
the theory - that is when one is learning
{\bf absolutely nothing}.''
``On the other hand, when experiments are in hopeless conflict
- or when the observations do not make sense according to
conventional ideas, or when none of the new models seems
to work, in short when the situation is an unholy mess -
{\bf that} is when one is really making hidden progress
and a breakthrough is just around the corner!''}\\
(R. Feynman, 1973 Hawaii Summer Institute,
cited by D. Perkins at 1995 EPS Conference, Brussels)
\end{quote}
\item
\begin{quote}
{\sl \small
``Although this {\it Guide} provides a framework for assessing
uncertainty, it cannot substitute for critical
thinking, intellectual honesty, and professional skill. The evaluation
of uncertainty is neither a routine task nor a
purely mathematical one; it depends on detailed knowledge
of the nature of the measurand and of the measurement.
The quality and utility of the uncertainty quoted for the result of a
measurement therefore ultimately depend on the understanding,
critical analysis, and integrity of those who contribute to
the assignment of its value''.
} (ISO {\it Guide}\cite{ISO})
\end{quote}
\end{itemize}
\newpage
\section*{Acknowledgements}
It was a great pleasure to give the lectures on which these
notes are based. I thank all the students for the active interests
shown and for questions and comments. Any further criticism
on the text is welcome.
I have benefitted a lot from discussions
on this subject
with my Rome and DESY colleagues,
especially those of the ZEUS Collaboration. Special
acknowledgements go to
Dr. Giovanna Jona-Lasinio and Prof. Romano Scozzafava of
`'La Sapienza'', Dr. Fritz Fr\"ohner of FZK Karlsruhe (Germany),
Prof. Klaus Weise of the PTB Braunschweig (Germany),
and Prof. G\"unter Zech of Siegen University (Germany)
for clarifications on several aspects of probability
and metrology. Finally, I would like to thank
Dr. Fritz Fr\"ohner,
Dr. Gerd Hartner of Toronto University (Canada),
Dr. Jos\`e Repond of Argonne National Laboratory (USA)
and Prof. Albrecht Wagner of DESY (Germany) for
critical comments
on the manuscript.
\section*{Bibliographic note}
The state of the art of Bayesian theory is summarized
in \cite{Bernardo}, where many references can be found.
A concise presentation of the basic principles can instead
be found in \cite{nature}.
Text books that I have consulted are \cite{Winkler}
and \cite{Press}. They contain many references too.
As an introduction to subjective probability
de Finetti's {\it ``Theory of
probability''}\cite{Definetti3}
is
a {\it must}. I have found the reading of \cite{Definetti1}
particularly stimulating and that of \cite{Scozzafava}
very convincing (thanks especially to the many examples and exercises).
Unfortunately these two books are only available
in Italian for the moment.
Sections \ref{sec:variables} and \ref{sec:clim} can be reviewed
in standard text books. I recommend the ones with which you are already familiar.
The applied part of these notes, i.e. after section \ref{sec:inference},
is, in a sense, ``original'', as it has been derived
autonomously and, in many cases, without knowing that the
results had been known to experts for two centuries. Some of
the examples of section \ref{sec:unknown} were worked out for these
lectures.
The references in the applied part are given at the appropriate
place in the text - only those actually used have been indicated.
Of particular interest is the Weise and W\"oger theory
of uncertainty\cite{Weise2}, which differs from that of these
notes because of the additional use of the Maximum Entropy Principle.
A consultation of the ISO {\it Guide}\cite{ISO} is advised.
Presently the BIPM recommendations are also followed
by the American National Institute of Standards and Technology
(NIST), whose {\it Guidelines}\cite{nist} have the advantage,
with respect to the ISO {\it Guide}, of also being available on the web.
\section{Introduction}
Let $\triangle A_{1}A_{2}A_{3}$ be a geodesic triangle on a $C^{2}$ complete surface $M.$
We denote by $w_{i}$ a positive real number (weight) corresponding to the vertex $A_{i},$ and by $l_{A_{i}}(F)$ the geodesic distance from the vertex $A_{i}$ to the point $F,$ for $i=1,2,3.$
The weighted Fermat-Torricelli problem on a $C^{2}$ complete surface $M$ states that:
\begin{problem}
Find a point $F\in M,$
such that:
\begin{displaymath}
f(F)=w_{1}l_{A_{1}}(F)+w_{2}l_{A_{2}}(F)+w_{3}l_{A_{3}}(F)\to min.
\end{displaymath}
\end{problem}
The inverse weighted Fermat-Torricelli problem on $M$ states that:
\begin{problem}
Given a point $F$ which belongs to the interior of $\triangle A_{1}A_{2}A_{3}$ on $M,$
does there exist a unique set of positive weights $\{w_{1}, w_{2},w_{3}\},$ such
that
\begin{displaymath}
w_{1}+w_{2}+w_{3}= c =const,
\end{displaymath}
for which $F$ minimizes
\begin{displaymath}
f(F)=w_{1}l_{A_{1}}(F)+w_{2}l_{A_{2}}(F)+w_{3}l_{A_{3}}(F).
\end{displaymath}
\end{problem}
The solutions of the weighted Fermat-Torricelli problem and of the inverse weighted Fermat-Torricelli problem for a $C^{2}$ complete surface with Gaussian curvature $0<K<c$ or $K<0$ have been given in \cite{Zachos/Cots:10}, \cite{Cots/Zach:11}.
We recall the necessary and sufficient conditions to locate the weighted Fermat-Torricelli point in the interior of $\triangle A_{1}A_{2}A_{3}$ on a $C^{2}$ complete surface with Gaussian curvature $0<K<c$ or $K<0:$
\begin{proposition}[Floating Case]\cite{Zachos/Cots:10},\cite{Cots/Zach:11}
If $P$, $Q$ $\in\{A_{1},A_{2},A_{3}\}$ and $\vec{U}_{PQ}$ is the unit tangent
vector of the geodesic arc $PQ$ at P and D is the domain of M
bounded by $\triangle A_{1}A_{2}A_{3},$
then the following (I), (II), (III) conditions are equivalent:\\
(I) All the following inequalities are satisfied simultaneously:
\begin{equation}\label{cond120n}
\left\| w_{2}\vec{U}_{A_{1}A_{2}}+w_{3}\vec{U}_{A_{1}A_{3}}\right\|> w_{1},
\end{equation}
\begin{equation}\label{cond1202n}
\left\| w_{1}\vec{U}_{A_{2}A_{1}}+w_{3}\vec{U}_{A_{2}A_{3}}\right\|> w_{2},
\end{equation}
\begin{equation}\label{cond1203n}
\left\| w_{1}\vec{U}_{A_{3}A_{1}}+w_{2}\vec{U}_{A_{3}A_{2}}\right\|> w_{3},
\end{equation}
(II) The point $F$ is an interior point of the triangle
$\triangle A_{1}A_{2}A_{3}$ and does not belong to the geodesic arcs
$\gamma_{A_{1}A_{2}},$ $\gamma_{A_{2}A_{3}}$
and $\gamma_{A_{3}A_{1}},$\\
(III) $w_{1}\vec{U}_{FA_{1}}+w_{2}\vec{U}_{FA_{2}}+w_{3}\vec{U}_{FA_{3}}=\vec{0}.$\\
\end{proposition}
The solution of the weighted Fermat-Torricelli problem is a weighted tree (weighted Fermat-Torricelli tree or weighted Steiner tree). The derivative of the weighted length of weighted Fermat-Torricelli trees and weighted Steiner trees on a connected complete Riemannian manifold is calculated in \cite{IvanovTuzhilin:01b}, which is a generalization of the first variation formula for the length of geodesics with respect to arc length (\cite{VToponogov:05}).
The weighted Fermat-Torricelli problem or weighted Steiner problem on a Riemannian manifold is a special case of a
one-dimensional variational problem in which branching extremals are introduced in \cite{IvanovTuzhilin:01}.
In this paper, we provide a new variational method to solve the weighted Fermat-Torricelli problem by assigning a positive number (a ``subconscious'') to the weighted Fermat-Torricelli point (g.FT point)
for infinitesimal geodesic triangles on a $C^{2}$ complete surface $M$ with variable Gaussian curvature
$a<K<b,$ for $a, b\in \mathbb{R}.$
This variational method is based on the unified cosine law of Berg-Nikolaev given in \cite{BNik:07} for the K-plane (sphere $S^{2}_{k}$, hyperbolic plane $H^{2}_{k}$ and Euclidean plane $\mathbb{R}^{2}$) and on the assertion that the generalized Fermat-Torricelli point lies in three spherical regions, three hyperbolic regions, three planar regions, or a combination of spherical, hyperbolic and planar regions with different constant curvatures.
Thus, we may obtain a generalized Fermat-Torricelli tree on a torus or a surface of revolution in $\mathbb{R}^{3},$
having elliptic points ($K>0$), hyperbolic points ($K<0$) and parabolic points ($K=0$).
\section{The generalized Fermat-Torricelli (g.FT) problem on a $C^{2}$ complete surface $M$ with $a<K<b$}
We denote by $\triangle ABC$ an infinitesimal geodesic triangle on a surface $M,$ by $w_{R}$ a positive real number (weight) which corresponds to the vertex $R,$ for $R\in \{A,B,C\},$ and by $w_{S}$ a positive real number (weight)
which corresponds to an interior point $F$ of $\triangle ABC.$
The generalized Fermat-Torricelli problem with one node that has acquired a subconscious (s.FT problem) states the following.
Assume that the weights $w_{A},$ $w_{B},$ $w_{C}$ are selected such that the g.FT point is located in the interior of $\triangle ABC.$
\begin{problem}
Find the point $F\in M,$ that has acquired a subconscious $w_{S}$
such that:
\begin{equation}\label{minimum}
f(F)=w_{A}l_{A}(F)+w_{B}l_{B}(F)+w_{C}l_{C}(F)\to min.
\end{equation}
\end{problem}
We denote by $\varphi_{Q},$ the angle between the geodesic arcs
$\gamma_{RF}$ and $\gamma_{SF}$ for $Q,R,S\in\{A,B,C\}$
and $Q\ne R\ne S$.
\begin{theorem}\label{floatsol}
If the g.F-T point $F$ is an interior point of the infinitesimal geodesic triangle
$\triangle ABC$ (see figure 1), then each angle $\varphi_{Q},$
$Q\in\{A,B,C\}$ can be expressed as a function of $w_{A},$ $w_{B}$
and $w_{C}:$
\begin{equation}\label{eq:arr}
\cos\varphi_{Q}=\frac{w_{Q}^{2}-w_{R}^{2}-w_{S}^{2}}{2w_{R}w_{S}},
\end{equation}
for every $Q, R, S\in\{A,B,C\},$ $Q\ne R\ne S.$
\end{theorem}
\begin{proof}
Assume that $\triangle ABF,$ $\triangle BFC,$ $\triangle AFC$ belong to spherical, hyperbolic or planar regions
of constant Gaussian curvature $k_{3},$ $k_{1},$ $k_{2},$ respectively, for $k_{i}\in \mathbb{R},$ $i=1,2,3.$
We set $l_{A}(B^{\prime})\equiv l_{A}(F)+dl_{A}$
and $l_{B^{\prime}}(F)=dl_{B},$ where $B^{\prime}$ is a variation point at infinitesimal geodesic distance $dl_{B}$ from $F.$
We denote by
\begin{displaymath}
\kappa_{i} = \left\{ \begin{array}{ll}
\sqrt{k_{i}} & \textrm{if $k_{i}>0$,}\\
i\sqrt{-k_{i}} & \textrm{if $k_{i}<0$.}\\
\end{array} \right.
\end{displaymath}
The unified cosine law for $\triangle AB^{\prime}F$ is given by:
\begin{equation} \label{eqvar1}
\cos(\kappa_{3} (l_{A}(F)+dl_{A}))=\cos(\kappa_{3} l_{A}(F))\cos(\kappa_{3}
dl_{B})+\sin(\kappa_{3} l_{A}(F))\sin(\kappa_{3}
dl_{B})\cos(\varphi_{C}),
\end{equation}
or
\begin{eqnarray} \label{eqvar1b}
\cos(\kappa_{3}l_{A}(F))\cos(\kappa_{3}dl_{A})-\sin(\kappa_{3}l_{A}(F))\sin(\kappa_{3}dl_{A})=\\\nonumber\cos(\kappa_{3} l_{A}(F))\cos(\kappa_{3}
dl_{B})+\sin(\kappa_{3} l_{A}(F))\sin(\kappa_{3}
dl_{B})\cos(\varphi_{C}),
\end{eqnarray}
By applying Taylor's formula, we obtain:
\begin{equation}\label{eqvar2}
\cos(\kappa_{3} dl_{A})=1+O((\kappa_{3}dl_{A})^{2}),
\end{equation}
\begin{equation}\label{eqvar3}
\sin(\kappa_{3} dl_{A})=\kappa_{3}dl_{A}+O((\kappa_{3}dl_{A})^{3}),
\end{equation}
\begin{equation}\label{eqvar4}
\cos(\kappa_{3} dl_{B})=1+O((\kappa_{3}dl_{B})^{2}),
\end{equation}
and
\begin{equation}\label{eqvar5}
\sin(\kappa_{3} dl_{B})=\kappa_{3}dl_{B}+O((\kappa_{3}dl_{B})^{3}).
\end{equation}
By replacing (\ref{eqvar2}), (\ref{eqvar3}),(\ref{eqvar4}),(\ref{eqvar5}) in (\ref{eqvar1b}) and neglecting second order terms, we derive that:
\begin{equation}\label{varphiC}
\frac{dl_{A}}{dl_{B}}=\cos(\pi-\varphi_{C}).
\end{equation}
The unified cosine law for $\triangle CB^{\prime}F$ is given by:
\begin{equation} \label{eqvar1c}
\cos(\kappa_{1} (l_{C}(F)+dl_{C}))=\cos(\kappa_{1} l_{C}(F))\cos(\kappa_{1}
dl_{B})+\sin(\kappa_{1} l_{C}(F))\sin(\kappa_{1}
dl_{B})\cos(\varphi_{A}),
\end{equation}
By applying Taylor's formula, we obtain:
\begin{equation}\label{eqvar2c}
\cos(\kappa_{1} dl_{C})=1+O((\kappa_{1}dl_{C})^{2}),
\end{equation}
\begin{equation}\label{eqvar3c}
\sin(\kappa_{1} dl_{C})=\kappa_{1}dl_{C}+O((\kappa_{1}dl_{C})^{3}),
\end{equation}
\begin{equation}\label{eqvar4c}
\cos(\kappa_{1} dl_{B})=1+O((\kappa_{1}dl_{B})^{2}),
\end{equation}
and
\begin{equation}\label{eqvar5c}
\sin(\kappa_{1} dl_{B})=\kappa_{1}dl_{B}+O((\kappa_{1}dl_{B})^{3}).
\end{equation}
Similarly, by replacing (\ref{eqvar2c}), (\ref{eqvar3c}),(\ref{eqvar4c}),(\ref{eqvar5c}) in (\ref{eqvar1c}) and neglecting second order terms, we derive that:
\begin{equation}\label{varphiA}
\frac{dl_{C}}{dl_{B}}=\cos(\pi-\varphi_{A}).
\end{equation}
By differentiating the objective function (\ref{minimum}) with respect to a parameter $s,$ we get:
\begin{equation}\label{derobj1}
\frac{df}{ds}=w_{A}\frac{dl_{A}}{ds}+w_{B}\frac{dl_{B}}{ds}+w_{C}\frac{dl_{C}}{ds}.
\end{equation}
By setting $s=-l_{B}$ and by replacing (\ref{varphiC}) and (\ref{varphiA}) in (\ref{derobj1}), we have:
\begin{equation}\label{equation1var}
w_{A}+w_{B}\cos(\varphi_{C})+w_{C}\cos(\varphi_{B})=0.
\end{equation}
Similarly, by working cyclically and setting the parametrization $s=-l_{C}$ and $s=-l_{A},$
we derive:
\begin{equation}\label{equation2var}
w_{A}\cos\varphi_{C}+w_{B}+w_{C}\cos\varphi_{A}=0,
\end{equation}
\begin{equation}\label{equation3var}
w_{A}\cos\varphi_{B}+w_{B}\cos\varphi_{A}+w_{C}=0,
\end{equation}
and
\[\varphi_{A}+\varphi_{B}+\varphi_{C}=2\pi.\]
The solution of (\ref{equation1var}),
(\ref{equation2var}) and (\ref{equation3var}) with respect to $\cos\varphi_{Q}$ yields
(\ref{eq:arr}).
\end{proof}
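As a quick numerical check of (\ref{eq:arr}) (a sketch added here, with arbitrary weights satisfying the triangle inequalities $w_{Q}<w_{R}+w_{S}$), the three angles computed from the weights sum to $2\pi$:
\begin{verbatim}
import math

w = {'A': 1.0, 'B': 1.2, 'C': 1.5}   # example weights, w_Q < w_R + w_S

def phi(Q, R, S):
    # Eq. (eq:arr)
    return math.acos((w[Q]**2 - w[R]**2 - w[S]**2) / (2 * w[R] * w[S]))

angles = [phi(*t) for t in (('A','B','C'), ('B','C','A'), ('C','A','B'))]
print(sum(angles), 2 * math.pi)      # the two values agree
\end{verbatim}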
Suppose that $w_{A},$ $w_{B},$ $w_{C}$ are variables and $\varphi_{A},$ $\varphi_{B},$ $\varphi_{C},$
are given.
The solution of (\ref{equation1var}),
(\ref{equation2var}) and (\ref{equation3var}) with respect to $w_{A}, w_{B}, w_{C}$ yields
a positive answer with respect to the inverse weighted Fermat-Torricelli problem on $M:$
\begin{proposition}\label{propo5}
The solution of the inverse weighted Fermat-Torricelli problem on a surface $M$ is given by:
\begin{equation}\label{inverse111}
w_{Q}=\frac{c}{1+\frac{\sin{\varphi_{R}}}{\sin{\varphi_{Q}}}+\frac{\sin{\varphi_{S}}}{\sin{\varphi_{Q}}}},
\end{equation}
for $Q,R,S\in \{A,B,C\},$ where $c=w_{A}+w_{B}+w_{C}$ is the prescribed constant.
\end{proposition}
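Conversely, a short sketch of the inverse map (\ref{inverse111}) (arbitrary angles summing to $2\pi$, normalization $c=1$):
\begin{verbatim}
import math

c = 1.0
phi = {'A': 2.0, 'B': 2.2}
phi['C'] = 2 * math.pi - phi['A'] - phi['B']   # angles sum to 2*pi
s = {Q: math.sin(v) for Q, v in phi.items()}
# Eq. (inverse111); equivalently w_Q = c*sin(phi_Q)/(sum of the sines)
w = {Q: c / (1 + (sum(s.values()) - s[Q]) / s[Q]) for Q in phi}
print(sum(w.values()))                         # equals c
\end{verbatim}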
\begin{remark}
The solution of the inverse weighted Fermat-Torricelli problem on a $C^{2}$ complete surface with Gaussian curvature
$0<K<a$ or $K<0,$ has been derived in \cite{Zachos/Cots:10}, \cite{Cots/Zach:11}.
\end{remark}
The idea of assigning a residual weight (subconscious) at a weighted Fermat-Torricelli point (generalized Fermat-Torricelli point) is given in \cite{Zachos:20}, by assuming that a weighted Fermat-Torricelli tree is a
two-way communication network and the weights $w_{A},$ $w_{B},$ $w_{C}$ are three small masses that may move through the branches of the weighted Fermat-Torricelli tree. By assuming mass flow continuity of this network, we obtain the generalized inverse weighted Fermat-Torricelli problem (inverse s.FT problem).
The inverse s.FT problem is the inverse weighted Fermat-Torricelli problem in which the weighted Fermat-Torricelli point has acquired a subconscious $w_{S}.$
We denote by $w_{R}$ a mass flow which is transferred from $R$
to $F$ for $R\in \{A,B\},$ by $w_{S}$ a residual weight which
remains at $F$ and by $w_{C}$ a mass flow which is transferred
from $F$ to $C,$ by $\tilde{w_{R}}$ a mass flow which is transferred from
$F$ to $R,$ $R\in \{A,B\},$ and by $\tilde{w_{S}}$ a residual
weight which remains at $F$ and by $\tilde{w_{C}}$ a mass flow
which is transferred from $C$ to $F.$
The following equations are derived by this mass flow along the infinitesimal geodesic arcs $AF,$ $BF,$ $CF:$
\begin{equation}\label{weight1outflow}
w_{A}+w_{B}=w_{C}+w_{S}
\end{equation}
and
\begin{equation}\label{weight2inflow}
\tilde{w_{A}}+\tilde{w_{B}}+\tilde{w_{S}}=\tilde{w_{C}}.
\end{equation}
By taking into account (\ref{weight1outflow}) and (\ref{weight2inflow}) and by
setting $\bar{w_{R}}=w_{R}+\tilde{w_{R}}$ for $R\in\{A,B,C\}$ and $\bar{w_{S}}=w_{S}-\tilde{w_{S}},$ we get:
\begin{equation}\label{weight12inoutflow}
\bar{w_{A}}+\bar{w_{B}}=\bar{w_{C}}+\bar{w_{S}}
\end{equation}
such that:
\begin{equation}\label{weight12inflowsum}
\bar{w_{A}}+\bar{w_{B}}+\bar{w_{C}}=c>0,
\end{equation}
\begin{problem}\label{mixinv5triangle}
Given a point $F$ which belongs to the interior of the infinitesimal geodesic triangle $\triangle
ABC$ on $M$, does there exist a unique
set of positive weights $\bar{w_{R}},$ such that
\begin{equation}\label{isoptriangle}
\bar{w_{A}}+\bar{w_{B}}+\bar{w_{C}} = c =const,
\end{equation}
for which $F$ minimizes
\begin{displaymath}
f(F)=w_{A}l_{A}(F)+w_{B}l_{B}(F)+w_{C}l_{C}(F),
\end{displaymath}
\begin{displaymath}
f(F)=\tilde{w}_{A}l_{A}(F)+\tilde{w}_{B}l_{B}(F)+\tilde{w}_{C}l_{C}(F),
\end{displaymath}
\begin{displaymath}
f(F)=\bar{w}_{A}l_{A}(F)+\bar{w}_{B}l_{B}(F)+\bar{w}_{C}l_{C}(F),
\end{displaymath}
with
\begin{equation}\label{imp1mixtr}
w_{R}+\tilde{w_{R}}=\bar{w_{R}},
\end{equation}
under the condition for the weights:
\begin{equation}\label{cond3mixtr}
\bar{w_{i}}+\bar{w_{j}}=\bar{w_{S}}+\bar{w_{k}}
\end{equation}
for $i,j,k\in \{A,B,C\}$ and $i\ne j\ne k.$
\end{problem}
\begin{theorem}\label{propomix4triangle}
Given the g.FT point $F$ as an
interior point of the triangle $\triangle ABC,$ whose
vertices lie on the three geodesic arcs that meet at $F,$ and
given the two values of $\varphi_{B},$ $\varphi_{C},$ the
positive real weights $\bar{w_{R}}$ given by the formulas
\begin{equation}\label{inversemix42tr}
\bar{w_{A}}=-\left(\frac{\sin(\varphi_{B}+\varphi_{C})}{\sin\varphi_{C}}\right)\frac{c-\bar{w_{S}}}{2},
\end{equation}
\begin{equation}\label{inversemix43tr}
\bar{w_{B}}=\left(\frac{\sin\varphi_{B}}{\sin\varphi_{C}}\right)\frac{c-\bar{w_{S}}}{2},
\end{equation}
and
\begin{equation}\label{inversemix41tr}
\bar{w_{C}}=\frac{c-\bar{w_{S}}}{2}
\end{equation}
give a negative answer with respect to the inverse s.FT problem on $M.$
\end{theorem}
\begin{remark}
Theorem~2 is proved in \cite{Zachos:20} for the case of $\mathbb{R}^{2}.$
\end{remark}
We conclude with an evolutionary scheme of infinitesimal geodesic triangles, which connects the subconscious
of a weighted Fermat-Torricelli tree with the Aleksandrov curvature of a geodesic triangle (\cite{Alexandrov:96}, \cite{BNik:07}).
Phase~1: At time zero, we assume that a point $F$ in $\mathbb{R}^{3}$ tends to split in three directions.
It acquires a subconscious which equals the absolute value of the Gaussian curvature, $\|K(F)\|,$ and predetermines the surface with Gaussian curvature $K$ on which these three geodesic arcs will move (weighted Fermat-Torricelli tree).
Phase~2: After time $t,$ the subconscious quantity increases until it reaches the value of the Aleksandrov curvature of the infinitesimal geodesic triangle $T$:
\[\bar{w_{S}}=\|K(T)\|=\|\angle A+\angle B+\angle C-\pi\|.\]
The following equations determine the values of $\bar{w}_{A},$ $\bar{w}_{B}$ and $\bar{w}_{C}:$
\[\bar{w_{A}}=-\left(\frac{\sin(\varphi_{B}+\varphi_{C})}{\sin\varphi_{C}}\right)\frac{1-\bar{w_{S}}}{2},\]
\[\bar{w_{B}}=\left(\frac{\sin\varphi_{B}}{\sin\varphi_{C}}\right)\frac{1-\bar{w_{S}}}{2},\]
and
\[\bar{w_{C}}=\frac{1-\bar{w_{S}}}{2}.\]
Phase~2 gives a plasticity solution of the inverse s.FT problem in which the g.FT point has acquired the subconscious $\bar{w_{S}}=\|K(\triangle ABC)\|.$
\section{Introduction}
In recent decades, general theories of phase transitions and critical phenomena
have been developed, unifying our understanding of equilibrium phase transitions
in liquid-vapor, magnetic, liquid crystals and other systems. By contrast, the
study of nonequilibrium critical phenomena is still in development. Since the
transition rates in such systems do not satisfy detailed balance, the
steady-state probability distribution in these systems is not known a priori,
and the analysis must be based upon the dynamics.
Starting from the basic contact process \cite{harris}, many particle
systems have been studied in efforts to characterize scaling properties at nonequilibrium
phase transitions \cite{marro,henkel,odor}. These models, which involve creation and annihilation of
particles on a lattice,
typically exhibit a phase transition to an {\it absorbing state}
(one allowing no escape), and so violate the detailed balance principle.
An issue that has attracted some interest is the combined effect of multiparticle rules
and diffusion (hopping), which tends to spread particles uniformly over the system.
In this work we revisit the pair annihilation
model (PAM) \cite{pam1,pam2}.
In this model particles diffuse on a lattice at rate $D$, nearest-neighbor pairs of
particles are annihilated at rate $(1-D)/(1+\lambda)$, and particles attempt to create new particles at rate
$(1-D)\lambda/(1+\lambda)$.
Double occupancy of sites is forbidden.
The model exhibits active and absorbing phases,
separated by a continuous phase transition at $\lambda_c(D)$.
Using cluster approximations and Monte Carlo simulation, we determine the phase boundary
in one, two, and three dimensions.
The pair approximation predicts that for a diffusion rate
greater than a certain value, $D^*$, the critical parameter $\lambda_c=0$.
(That is, for $D>D^*$, an arbitrarily small creation rate is sufficient to
maintain a nonzero particle density.)
This prediction is known to be wrong in dimensions $d \leq 2$: Katori and Konno \cite{katori} proved
that $\lambda_c > 0$ for any diffusion probability $D<1$, in one and two dimensions.
Their theorem is based on a relation between the PAM and the branching annihilating
random walk of Bramson and Gray \cite{gray}.
Existence of a $D^* < 1$ is not ruled out in $d \geq 3$ dimensions. The difference
is connected with the nonrecurrence of random walks in $d \geq 3$ \cite{katori}.
How $\lambda_c$ tends to zero as $D \to 1$ is, however, unknown.
Moreover the question of whether, in three or more dimensions, $D^*$ is in fact less than unity,
has not, to our knowledge, been studied.
The principal motivation for the present work is
to determine $\lambda_c(D)$ via numerical simulation.
We also verify that the model belongs to the directed percolation (DP) universality class,
as expected on the basis of symmetry considerations \cite{jans81,gras82}.
Our simulation results, while consistent
with the Katori-Konno theorem, show that in the two-dimensional case, $\lambda_c$ becomes extremely
small as $D$ approaches unity, possibly giving the (incorrect)
impression that the critical value actually vanishes
at some finite diffusion rate.
The remainder of this paper is organized as follows. In the following section (II) we define the
model and discuss its limiting behaviors in the $\lambda$-$D$ plane. Then in Sec. III we present,
for completeness, the one- and two-site cluster approximations. Simulation results are discussed
in Sec. IV, followed by a brief discussion in Sec. V.
\section{The model}
The PAM is defined on a lattice,
in which sites can be either occupied by a particle or vacant \cite{marro,pam1,pam2};
we denote these states by $\sigma_x = 1$ (site $x$ occupied) and $\sigma_x = 0$ (site $x$ vacant).
There are three kinds of transition: nearest-neighbor (NN) hopping (``diffusion"), creation,
and pairwise annihilation, with associated rates
$D$, $(1-D)\lambda/(1+\lambda)$, and $(1-D)/(1+\lambda)$, respectively.
(Since the rates are parameterized so as to sum to unity, we are free to refer to $D$ as the
diffusion {\it probability}.)
At each step of the evolution, the next attempted transition is taken as diffusion, creation, or
annihilation, with probabilities
$D$, $(1-D)\lambda/(1+\lambda)$, and $(1-D)/(1+\lambda)$, respectively.
In a hopping transition,
a site $x$ is chosen at random, along with a nearest-neighbor site $y$ of $x$.
Then if $\sigma_x \neq \sigma_y$, the states are exchanged. In a creation event, a site $x$ is
chosen. If $\sigma_x =1$, a nearest-neighbor $y$ is chosen, and if $\sigma_y = 0$ this variable
is set to one. Finally, in an annihilation event, a pair of nearest-neighbor sites $x$ and $y$
are chosen at random, and if $\sigma_x = \sigma_y = 1$, both variables are set to zero. Each
transition corresponds to a time increment $\Delta t = 1/N_{site}$, where $N_{site}$ is the
number of lattice sites.
To improve efficiency, in simulations the
site $x$ is chosen from a list of occupied sites;
then the time increment is $\Delta t = 1/N_p$, with $N_p$ the number of {\it particles} in the
system, immediately prior to the transition. In this implementation, the rate of annihilation
of a given NN particle pair is
\begin{equation}
R_{an} = \frac{1}{\Delta t} \frac{1-D}{1+\lambda} \frac{2}{N_p} \frac{1}{2d}
= \frac{1}{d} \frac{1-D}{1+\lambda}
\label{effannrate}
\end{equation}
where the factor $2/N_p$ arises because either particle in the pair can be selected from the
list of $N_p$ particles.
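For definiteness, the following minimal Python sketch (illustrative only, not the production code used for the results reported below; a plain list is used where a more efficient data structure would normally be employed) implements this update scheme in one dimension:
\begin{verbatim}
import random

def pam_step(occ, particles, lam, D, L):
    """One PAM update on a ring of L sites.  occ[x] = 0/1 states,
    particles = list of occupied sites.  Returns dt = 1/N_p; the
    caller must stop when particles is empty (absorbing state)."""
    Np = len(particles)
    x = random.choice(particles)            # occupied site, as in the text
    y = (x + random.choice((-1, 1))) % L    # random nearest neighbor
    r = random.random()
    if r < D:                               # hopping, probability D
        if occ[y] == 0:
            occ[x], occ[y] = 0, 1
            particles[particles.index(x)] = y
    elif r < D + (1.0 - D) / (1.0 + lam):   # pair annihilation
        if occ[y] == 1:
            occ[x] = occ[y] = 0
            particles.remove(x)
            particles.remove(y)
    else:                                   # creation, (1-D)*lam/(1+lam)
        if occ[y] == 0:
            occ[y] = 1
            particles.append(y)
    return 1.0 / Np
\end{verbatim}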
The particle-free configuration is absorbing. By analogy with the contact process \cite{marro,harris},
we expect that in the infinite-size limit the system undergoes a phase transition between
an active state (with nonzero stationary particle density) and an absorbing one, as one crosses
the critical line $\lambda_c(D)$ in the $\lambda$-$D$ plane. As creation depends upon a
single particle, the order parameter is the
stationary density of particles, $\rho$.
When a new particle is created, it
always forms a pair with the ``parent" particle, making these two particles susceptible to annihilation.
In the active stationary state, increasing $D$ at fixed $\lambda$ tends to reduce the fraction of
nearest-neighbor particle pairs toward its random mixing value, $\rho^2$. Thus we should expect
$\lambda_c$ to be a decreasing (or at least, nonincreasing) function of $D$.
In the simplest mean-field theory analysis, the annihilation rate is proportional to $\rho^2$, so that
for small $\rho$, one has $\dot{\rho} \propto \lambda \rho - \mbox{const.} \times \rho^2 $, which admits a
stationary solution $\rho \propto \lambda$ for {\it any} nonzero creation rate. In the limit $D \to 1$
we expect mean-field theory to hold, so that $\lambda_c \to 0$ in this limit. This raises the question
of whether $\lambda_c$ vanishes at some diffusion
probability $D^*$ strictly less than unity. While the two-site approximation predicts $D^* < 1$ in any
dimension, the
results of Katori and Konno \cite{katori} imply that $D^* = 1$ in dimensions $d \leq 2$.
The phase diagram of the PAM is expected to have the form shown in Fig. 1. For $D < D^*$
the behavior along the critical line $\lambda_c (D)$ should be that of DP, given that such
behavior is generic for absorbing-state phase transitions without special symmetries or conserved
quantities \cite{jans81,gras82}. If $D^* < 1$, then we expect mean-field
like critical behavior as $\lambda \to \lambda_c = 0$ at fixed $D > D^*$. Within the absorbing phase, for
$0 < \lambda < \lambda_c (D)$, an isolated particle can produce an offspring, leading to annihilation
of both the original and the new particle. On the line $\lambda = 0$, this channel to
annihilation is not available, and isolated particles cannot disappear. Thus the dynamics at long
times, for $D>0$, will be that of the diffusive annihilation process $A + A \to 0$, for which the particle
density $\rho(t)$ decays $\sim 1/\sqrt{t}$ in $d=1$,
$\sim (\ln t)/t$ in two dimensions,
and $\sim 1/t$ in $d \geq 3$
\cite{torney,benav}. Finally, at the point $\lambda = D = 0$, starting from
all sites occupied, pairs are annihilated successively until only isolated particles remain. This
is equivalent to the random sequential adsorption (RSA) of dimers. (In the present case, of course,
dimers are removed not adsorbed, so the final particle density is $1 - 2\theta_\infty$, where
$\theta_\infty$ is the final coverage in RSA.)
On the line (i.e., in one dimension), the final density
of isolated particles is $e^{-2} = 0.135335...$ \cite{flory}, while in two dimensions one has
$\rho_\infty \simeq 0.093108(8)$ \cite{dimerRSA}. One may anticipate interesting crossover behaviors in the
vicinity of one or another limit. In the present work, however, we focus on determining
the function $\lambda_c (D)$ using Monte Carlo simulation.
\begin{figure}[!h]
\epsfysize=9cm
\epsfxsize=10cm
\centerline{\epsfbox{phase1.eps}}
\caption{\sf Schematic phase diagram of the PAM in the $\lambda$-$D$ plane.
The results of \cite{katori} imply that $D^*=1$ in one and two dimensions.}
\label{fig:phase}
\end{figure}
\section{Cluster approximations}
In this section we study the PAM through mean-field -- site and pair approximations \cite{ben}.
In general, mean-field results provide a good qualitative description of the phase
diagram and give an order-of-magnitude estimate of the critical point.
$n$-site approximations are a natural way to improve the mean-field approach. The method
consists of treating the transitions inside clusters of $n$ sites exactly, while
transitions involving sites outside the cluster are treated in an approximate manner.
\vspace{0.5cm}
\subsection{One-site approximation}
\vspace{0.5cm}
Let $\rho = \mbox{Prob}(\sigma_x = 1)$ denote the density of particles. The density is governed by,
\begin{eqnarray}
\frac{d\rho}{dt} &=& \frac{1}{2d}(1-D)\frac{\lambda}{1+\lambda}
\sum_{\hat{e}} P(\sigma_x=0, \sigma_{x+\hat{e}}=1)
\nonumber \\
&-& \frac{1}{d} \frac{1-D}{1+\lambda} \sum_{\hat{e}} P(\sigma_x=1, \sigma_{x+\hat{e}}=1)
\nonumber \\
&-&D\sum_{\hat{e}} P(\sigma_{x}=1, \sigma_{x+\hat{e}}=0)
\nonumber \\
&+&D\sum_{\hat{e}} P(\sigma_{x}=0, \sigma_{x+\hat{e}}=1),
\label{eq:ee}
\end{eqnarray}
where the sums are over the $2d$ nearest-neighbors of site $x$, and
$P(\sigma_{x},\sigma_{x+\hat{e}})$ is a two-site joint probability.
Equation~(\ref{eq:ee}) couples the one-site probability $\rho$ to the
two-site probabilities, which in turn
depend on the three-site probabilities, and so forth, leading to an infinite
hierarchy of equations for the $n$-site probabilities. The site-approximation
consists in truncating this hierarchy at $n = 1$, so that the two-site
probabilities are replaced by a product of two one-site
probabilities. Assuming spatial homogeneity and isotropy we obtain the following
equation for $\rho$,
\begin{eqnarray}
{d\rho \over dt} = \frac{1-D}{1+\lambda} \left[ \lambda\rho -
(2+\lambda)\rho^2 \right]
\label{eq:sa}
\end{eqnarray}
The stationary solutions are $\overline\rho=0$ (unstable for $\lambda>0$) and
$\overline\rho=\lambda/(2 + \lambda)$. Thus, in this approximation the critical parameter $\lambda_c$ is
zero in any dimension. Notice that in this approximation the diffusion rate has no
influence on the stationary solution.
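A few lines of Python (a sketch with arbitrary parameter values) confirm numerically that Eq.~(\ref{eq:sa}) relaxes to this stationary density:
\begin{verbatim}
# Forward-Euler integration of Eq. (sa); illustrative parameters.
lam, D, rho, dt = 0.5, 0.3, 1.0, 0.01
for _ in range(20000):
    rho += dt * (1 - D) / (1 + lam) * (lam * rho - (2 + lam) * rho**2)
print(rho, lam / (2 + lam))   # both ~0.2: rho -> lam/(2+lam)
\end{verbatim}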
\vspace{0.5cm}
\subsection{Pair approximation}
\vspace{0.5cm}
To derive the pair approximation equations, we consider the changes in the configuration of a
NN pair of sites (the {\it central} pair), given the states of the surrounding sites.
Using the symbols $\circ$ and $\bullet$ to represent, respectively, vacant and occupied sites,
the states of a pair are $\circ\circ$, $\bullet\bullet$, $\bullet\circ$, and $\circ\bullet$;
the latter two have the same probability and may be treated as a single class using appropriate
symmetry factors. It is convenient to use $(\bullet\bullet)$ to denote the fraction of
$\bullet\bullet$ pairs, and so on. Then we have for the site fractions $(\bullet) = \rho$ and
$(\circ) = 1 -\rho$:
\begin{eqnarray}
(\bullet) &=& (\bullet\bullet) + (\bullet\circ),\\
(\circ) &=& (\circ\circ) + (\bullet\circ).
\label{eq:concentration}
\end{eqnarray}
\noindent The pair fractions satisfy $(\circ\circ) + 2 (\circ\bullet) + (\bullet\bullet) = 1$.
The pair approximation consists in writing the joint probability of a set of three neighboring
sites in the form $(abc) = (ab)(bc)/(b)$.
There are five possible transitions between the pair states. Consider for example
the transition $\circ\circ \to \circ\bullet$. This can occur via creation or via
hopping, if and only if the rightmost site of the central pair has an occupied
NN. Since its NN {\it within} the central pair is vacant,
at least one of its $2d-1$ NNs {\it outside} the central pair
must be occupied. The rate of transitions via creation is
\begin{equation}
R_{1,c} = (1-D) \tilde{\lambda} \frac{2d-1}{2d} \frac{(\circ\circ)(\circ\bullet)}{(\circ)}
\label{r1c}
\end{equation}
where we introduced $\tilde{\lambda} = \lambda/(1+\lambda)$. Adding the contribution due to diffusion,
we obtain the total rate for this transition,
\begin{equation}
R_{1} = \frac{2d-1}{2d} \frac{(\circ\circ)(\circ\bullet)}{(\circ)} [D + (1-D) \tilde{\lambda}]
\label{r1}
\end{equation}
Note that the contribution to the loss term for $(\circ\circ)$ associated with this process is $2R_1$,
due to the mirror transition $\circ\circ \to \bullet\circ$.
The rates for the other transitions are:
$\circ\bullet \to \circ\circ$:
\begin{equation}
R_2 = \frac{2d-1}{2d} \frac{(\circ\bullet)}{(\bullet)} [D (\circ\bullet)
+ 2 (1-D)(1- \tilde{\lambda}) (\bullet\bullet)]
\end{equation}
$\circ\bullet \to \bullet\bullet$:
\begin{equation}
R_3 = \frac{2d-1}{2d} \frac{(\circ\bullet)^2}{(\circ)} [D
+ (1-D)\tilde{\lambda}] + \frac{1}{2d} (1-D) \tilde{\lambda} (\circ\bullet)
\end{equation}
$\bullet\bullet \to \circ\circ$:
\begin{equation}
R_4 = \frac{1}{d} (1-D) (1- \tilde{\lambda}) (\bullet\bullet)
\end{equation}
$\bullet\bullet \to \circ\bullet$:
\begin{equation}
R_5 = \frac{2d-1}{2d} \frac{(\bullet\bullet)}{(\bullet)} \left[2 (1- D) (1- \tilde{\lambda}) (\bullet\bullet)
+ D (\circ\bullet) \right]
\end{equation}
The equations of motion for the pair probabilities are then
\begin{equation}
\frac{d}{dt} (\circ\circ) = 2R_2 + R_4 - 2R_1
\end{equation}
\begin{equation}
\frac{d}{dt} (\circ\bullet) = R_1 + R_5 -R_2 -R_3
\end{equation}
and
\begin{equation}
\frac{d}{dt} (\bullet\bullet) = 2R_3 - R_4 - 2 R_5
\end{equation}
\vspace{1em}
\noindent The active stationary solution of the above equations is
\begin{equation}
\overline{\rho} = \frac{\lambda[(4d-3+D)\lambda - 2(1-2dD)]}
{(4d-3+D)\lambda^2 + 2[2d(D+2) -3]\lambda + 4(2d-1)D},
\end{equation}
and
\begin{equation}
\overline{(\bullet\bullet)} = \frac{\lambda}{\lambda +2} \, \overline{\rho} \,.
\end{equation}
\vspace{1em}
\noindent For $ \lambda < 2(1-2dD)/(4d-3+D)$, only the trivial solution ($\overline{\rho} =0$) exists.
If $D \geq D^* = 1/2d$, however, the active solution exists for any $\lambda > 0$.
The phase transition occurs at
\begin{equation}
\lambda_c = \left\{ \begin{array} {cc} \frac{2(1-2dD)}{4d-3+D}, \;\;\;\;\;\;
D < D^* = \frac{1}{2d} \\
\\
0, \;\;\;\;\;\; D > D^*
\end{array} \right.
\end{equation}
\noindent Thus the pair approximation predicts a nonzero critical creation rate
only for diffusion probabilities $D < D^* = 1/(2d)$; for larger values of $D$, there is
a nonzero particle density for any $\lambda > 0$, as in the one-site approximation.
For $D=0$, we have $\lambda_c = 2$, 2/5, and 2/9 in one, two and three
dimensions, respectively; the corresponding values from simulation are $\lambda_c = 5.368(1)$,
1.0156(1), and 0.475(1).
[We note that the pair approximation results derived above differ slightly from those
given in \cite{marro} since in the latter case the annihilation rate for a NN
particle pair is taken as $(1-D)/(1+\lambda)$, i.e., $d$ times the rate given in
Eq. (\ref{effannrate}).]
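The pair-approximation phase boundary is easily tabulated; a short sketch:
\begin{verbatim}
def lambda_c_pair(D, d):
    """Pair-approximation critical point in d dimensions."""
    if D >= 1.0 / (2 * d):                 # beyond D* = 1/(2d)
        return 0.0
    return 2 * (1 - 2 * d * D) / (4 * d - 3 + D)

for d in (1, 2, 3):
    print(d, lambda_c_pair(0.0, d))        # 2, 2/5, 2/9 as quoted above
\end{verbatim}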
Katori and Konno \cite{katori} proved that the prediction of $D^* < 1$, furnished
by the pair approximation, is wrong for $d\leq 2$. That is,
in one and two dimensions, $\lambda_c > 0$ for any $D<1$.
In the following section we investigate
how $\lambda_c$ tends to zero as $D \to 1$ in one and two dimensions, and determine
$D^*$ in the three-dimensional case.
\section{Simulations}
We use Monte Carlo simulations to obtain accurate values of
the critical creation rate $\lambda_c(D)$ and the critical exponents of the PAM in one, two,
and three dimensions.
\subsection{One dimension}
\subsubsection{Spreading behavior}
A well established method for determining the critical point and certain critical exponents
is through the study of propagation of activity, starting from a localized seed, as
proposed long ago by Grassberger and de la Torre \cite{grassberger}.
One studies the activity in a large set
of trials, all starting from a configuration very close to the absorbing
state. Here the initial configuration is that of a single pair of particles at the two central sites,
in an otherwise empty lattice. Each trial ends when it reaches
the absorbing state, or at a
maximum time, $t_{max}$, chosen such that the activity never reaches the edges
of the system (in any trial) for $t \leq t_{max}$.
For $\lambda>\lambda_c$
there is a nonzero probability that the process survives as $t\rightarrow
\infty$; for $\lambda \leq \lambda_c$ the process dies with probability 1. Of
primary interest are $P(t)$, the probability of surviving until time $t$ or greater, $n(t)$,
the mean number of particles (averaged over all trials), and $R^2(t)$, the
mean-square distance of particles from the origin. At the critical point
these quantities follow asymptotic power laws,
\begin{eqnarray}
P(t)&\propto& t^{-\delta} \\
n(t)&\propto& t^\eta \\
R^2(t)&\propto& t^{z_{sp}} .
\end{eqnarray}
The exponents $\delta$, $\eta$, and $z_{sp}$ satisfy the hyperscaling relation $4\delta+2\eta=dz_{sp}$, in
$d\leq 4$ dimensions \cite{grassberger}. (We note that $z_{sp}$ is related to the usual dynamic
exponent $z$ via $z_{sp} = 2/z$.)
We study activity spreading in one dimension using samples
of from $10^6$ or $2 \times 10^6$ trials for each $\lambda$ value of interest.
The maximum time $t_{max}$ = 15$\,$000 for $D \leq 0.7$, 30$\,$000 for $D=0.8$ and 0.9, and
$60\,000$ for $D=0.95$. (As $D$ increases, the asymptotic power-law behavior
occurs at ever later times.) To ensure that activity never reaches the borders, we use
a lattice size of $L=50\,000$ for $t_{max}$ = 15$\,$000, and $L=80\,000$ for the
longer studies. A study performed at a given value of $\lambda$ is used to generate
results for nearby values using sample reweighting \cite{reweighting}.
To locate the critical point, we use the criterion of power-law behavior of $n(t)$;
Fig.~\ref{fig:spread} illustrates the analysis for $D=0.3$. The main graph is a log-log plot of
$n(t)$ showing an apparent power law for $\lambda=3.4687$. The curves for nearby values
(specifically, $\lambda$ = 3.4681, 3.4684, 3.4690, and 3.4693, obtained via reweighting),
cannot be distinguished on the scale of this graph, but if we plot $n^* \equiv n/t^\eta$, the curves for
different $\lambda$ values fan out (upper inset), with upward curvature
indicating a supercritical value of $\lambda$ and vice-versa.
The exponent $\eta$
is estimated via analysis of the
local slope, $\eta(t)$,
defined as the inclination of a least-square linear fit to the data (on logarithmic scales),
on the interval $[t/a, \,at]$. (The choice of the factor $a$ represents a compromise between
high resolution, for smaller $a$, and insensitivity to fluctuations, for larger values;
here we use $a = 2.59$.) Plotting $\eta(t)$ versus $1/t$ (lower inset of
Fig.~\ref{fig:spread}) allows one to estimate $\lambda_c$ (the curves for $\lambda>\lambda_c$
veer upward, and vice-versa), and to estimate the critical exponent $\eta$ by
extrapolating $\eta(t)$ to $1/t\rightarrow 0$.
The main source of uncertainty in the exponent estimates is the uncertainty in $\lambda_c$
itself.
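The local-slope construction can be summarized in a few lines of Python (a sketch; \texttt{t} and \texttt{n} stand for the simulation output, assumed densely sampled, and the same function applies to $P(t)$ and $R^2(t)$):
\begin{verbatim}
import numpy as np

def local_slope(t, n, a=2.59):
    """Slope of a least-squares linear fit of log n versus log t
    restricted to the window [t/a, a*t], for each time in t."""
    lt, ln = np.log(t), np.log(n)
    out = np.empty(len(t))
    for k, tk in enumerate(t):
        m = (t >= tk / a) & (t <= a * tk)
        out[k] = np.polyfit(lt[m], ln[m], 1)[0]
    return out   # plot against 1/t and extrapolate to 1/t -> 0
\end{verbatim}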
An analogous procedure is used to estimate
exponents $\delta$ and $z_{sp}$.
In Table~\ref{tb:pamsim1d} we list the critical parameters and spreading exponents
found via spreading simulations combined with local-slopes analysis.
\begin{figure}
\epsfysize=12cm
\epsfxsize=14cm
\centerline{\epsfbox{lnu3new.eps}}
\caption{\sf Main graph: $n(t)$ on log scales for the one-dimensional PAM
with $D=0.3$ and $\lambda=3.4687$. Upper inset: $n^* = n/t^\eta$ on log scales, for
(lower to upper) $\lambda=$ 3.4681, 3.4684, 3.4687, 3.4690, and 3.4693. Lower inset:
local slopes $\eta(t)$ for the same set of $\lambda$ values.}
\label{fig:spread}
\end{figure}
\begin{table}[!h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline
$D$ & $\lambda_c$ & $\delta$ & $\eta$ & $z_{sp}$ \\ \hline\hline
0.0 & 5.3720(5) & 0.159(1) & 0.315(1) & 1.266(2) \\
0.1 & 4.6709(2) & 0.161(1) & 0.314(2) & 1.264(1) \\
0.2 & 4.0417(2) & 0.160(1) & 0.314(1) & 1.266(1) \\
0.3 & 3.4687(2) & 0.162(1) & 0.312(1) & 1.268(2) \\
0.4 & 2.9411(2) & 0.160(2) & 0.314(2) & 1.264(2) \\
0.5 & 2.4473(1) & 0.159(1) & 0.315(2) & 1.267(3) \\
0.6 & 1.9778(2) & 0.159(1) & 0.315(2) & 1.266(1) \\
0.7 & 1.5231(2) & 0.158(2) & 0.315(1) & 1.265(2) \\
0.8 & 1.0684(2) & 0.159(2) & 0.315(2) & 1.265(2) \\
0.9 & 0.5891(1) & 0.161(1) & 0.315(1) & 1.267(3) \\
0.95& 0.3214(1) & 0.159(2) & 0.318(3) & 1.266(4) \\ \hline
\end{tabular}
\end{center}
\caption{\sf Results of spreading simulations for the PAM in one dimension.}
\label{tb:pamsim1d}
\end{table}
For $D=0$, the critical parameter for the PAM,
$\lambda_c (0)$=5.368(1), is considerably larger than that of the contact process
($\lambda_c$=3.29785(2)), as expected since here each annihilation event removes
two particles. (The fact that $\lambda_c$ is {\it less than twice} the corresponding
value in the CP may be attributed to the tendency for particles to cluster: removing two
particles may eliminate additional pairs, thus reducing the effective rate of
annihilation.)
For all diffusion probabilities studied, our estimates for the critical exponents are
in good accord with the DP values $\delta=0.15947(3)$,
$\eta=0.31368(4)$, and $z_{sp}=1.26523(3)$ \cite{marro}.
A plot of the phase boundary in the $\lambda$-$D$ plane (see Fig.~\ref{fig:sim1d}) suggests that
$\lambda_c \to 0$ as $D \to 1$, so that $D^* = 1$ in agreement with the
Katori-Konno theorem. Extrapolation of $D$ versus $\lambda_c$, using a fourth-order polynomial
fit to the data for $D \geq 0.6$, yields $D = 1.0005$ for $\lambda_c=0$, consistent to high
precision with $\lambda_c > 0$ for all $D < 1$.
\begin{figure}[h]
\epsfysize=9cm \epsfxsize=12cm \centerline{ \epsfbox{lc1dnew.eps}}
\vspace{-3em}
\caption{\sf Points along the critical line in the $\lambda$-$D$ plane in one dimension,
as determined via simulation. Error bars are smaller than symbols.
The solid line is a quartic fit to the six points with largest $D$.
}
\label{fig:sim1d}
\end{figure}
$\;$
\vspace{2em}
\subsection{Two dimensions}
\subsubsection{Steady-state behavior}
We encounter rather large uncertainties in studies of spreading behavior
of the two-dimensional PAM, and so turn to the steady-state approach to investigate
this system (on the square lattice).
In these studies we initialize the system with all sites occupied, and
allow it to evolve until it attains a quasistationary (QS) regime,
in which bulk properties such as the particle density $\rho$, averaged
over surviving realizations, are time-independent.
According to the finite-size scaling hypothesis \cite{fisher,barber},
the QS properties depend on system size $L$ through the ratio $L / \xi$, or equivalently
through the scaling variable $\Delta L^{1/\nu_\bot}$, where $\Delta\equiv \lambda-\lambda_c $.
Expressing the order parameter as a function of $\Delta$ and $L$, we have
\begin{eqnarray}
\rho (\Delta,L)\propto
L^{-\beta/\nu_\bot}
f(\Delta L^{1/\nu_\bot}).
\label{eq:densdeltaL}
\end{eqnarray}
with $f(x)\propto x^\beta$ as $x\rightarrow\infty$.
At the critical point, $\Delta=0$,
\begin{eqnarray}
\rho (0,L)\propto L^{-\beta/\nu_\bot}.
\label{eq:densdeltaLcp}
\end{eqnarray}
Thus an asymptotic power-law dependence of $\rho$ on $L$ is
a useful criterion for criticality.
We study the QS density as a function of system size
to locate the critical point, using sizes $L=25$, 50, 100,...,800.
The relaxation time varies from
$\tau=800$ for the smallest size, to $\tau=200\,000$ for the largest; the
number of realizations varies from 500 to 10\,000.
Using the power-law criterion, we obtain the estimates for $\lambda_c$ listed in
Table~\ref{tb:pamss2d}.
It is worth mentioning that the values of $\lambda_c$
for $D=0$ and $D=0.1$ are in
good agreement with those obtained in preliminary spreading-behavior studies.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline
$D$ & $\lambda_c$ \\ \hline\hline
0.0 & 1.0156(1) \\
0.1 & 0.7877(1) \\
0.2 & 0.5890(5) \\
0.3 & 0.4166(1) \\
0.4 & 0.2685(5) \\
0.5 & 0.1462(2) \\
0.6 & 0.056(1) \\ \hline
\end{tabular}
\end{center}
\caption{\sf Critical parameters obtained through steady-state simulations
in two dimensions.}
\label{tb:pamss2d}
\end{table}
In Fig.~\ref{fig:datacol} we verify the scaling collapse of the order parameter,
plotting $y \equiv L^{\beta/\nu_\perp} \rho$ versus $x \equiv \Delta L^{1/\nu_\perp}$, for system sizes
$L$=16, 32, 64, 128, and 256. A good collapse is obtained
using the DP values $\nu_\perp$ = 0.733 and $\beta/\nu_\perp = 0.795$
\cite{marro}.
The data are consistent with the scaling law $\rho \propto \Delta^\beta$,
using the DP value $\beta = 0.583(4)$ \cite{marro}.
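Operationally, the collapse amounts to the rescaling below (a sketch; \texttt{lams} and \texttt{rho} denote the measured creation rates and QS densities for a given size \texttt{L}):
\begin{verbatim}
import numpy as np

# DP exponents and the D=0 critical point quoted in the text
beta_over_nu, nu_perp, lam_c = 0.795, 0.733, 1.0156

def collapse(L, lams, rho):
    """Return x = Delta*L^(1/nu_perp) and y = rho*L^(beta/nu_perp);
    at a DP transition the curves for different L superpose."""
    x = (np.asarray(lams) - lam_c) * L**(1.0 / nu_perp)
    y = np.asarray(rho) * L**beta_over_nu
    return x, y
\end{verbatim}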
\begin{figure}[!h]
\epsfysize=10cm
\epsfxsize=13cm
\centerline{
\epsfbox{col2d.eps}}
\vspace{-2em}
\caption {\sf (Color online)
Scaling plot of the stationary density in the two-dimensional PAM with $D=0$.
System sizes $L$=16 ($+$); 32 ($\times$); 64 (diamonds); 128 ($\bullet$) and 256 (squares).
The slope of the straight line is 0.583.
}
\label{fig:datacol}
\end{figure}
\subsubsection{Quasistationary simulations}
As $D$ approaches 0.7 the critical value $\lambda_c$ becomes very small. We require an efficient
simulation method to obtain precise estimates for the critical value for larger diffusion rates,
in particular, to determine how $\lambda_c$ tends to zero as $D$ increases. For this purpose
we use quasistationary (QS) simulations, which sample directly the QS probability distribution,
that is, the long-time distribution conditioned on survival. The details of the method are
explained in Ref. \cite{qssim}. To obtain these results we use lattice sizes $L=100$,
200, 400 and 800 for
$D=0.7$, and include studies of larger systems for higher diffusion rates
(up to $L=6400$, for $D \geq 0.78$). The critical point is determined
via the criteria of power-law scaling of the density and mean lifetime with system size, and
convergence of the moment ratio $m = \langle \rho^2 \rangle / \langle \rho \rangle^2$ to a finite limiting value
as $L \to \infty$, as discussed in \cite{moments}. (The lifetime $\tau$ is expected
to follow $\tau \sim L^z$.) Using this method we obtain the values listed in
Table~\ref{tb:qss2d}. We note that our results for $\beta/\nu_\perp$, $z$, and the limiting moment
ratio $m_c$ are consistent with the known DP values of 0.795(10), 1.7674(6), and 1.3257(5),
respectively \cite{marro,reweighting,moments}.
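For completeness, the core of the quasistationary method of Ref.~\cite{qssim} is the replacement of an absorbed configuration by one drawn from a running sample of the process history; a schematic sketch, in which \texttt{evolve} and \texttt{is\_absorbing} stand for the model-specific update and absorbing-state test (both assumed given):
\begin{verbatim}
import random

def qs_step(config, saved, p_save=1e-3):
    # one Monte Carlo step of a quasistationary simulation;
    # 'saved' is a nonempty list of stored configurations,
    # seeded with the initial state
    config = evolve(config)              # model update (assumed given)
    if random.random() < p_save:         # slowly refresh the QS sample
        saved[random.randrange(len(saved))] = config.copy()
    if is_absorbing(config):             # e.g. no particles left
        config = random.choice(saved).copy()   # resample from history
    return config
\end{verbatim}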
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|} \hline
$D$ & $\lambda_c$ \\ \hline\hline
0.60 & 0.05632(3) \\
0.70 & 0.00940(5) \\
0.73 & 0.003957(3) \\
0.78 & 0.0004815(7) \\
0.80 & 0.00015(2) \\ \hline
\end{tabular}
\end{center}
\caption{\sf Critical parameters obtained through quasistationary simulations
in two dimensions.}
\label{tb:qss2d}
\end{table}
\noindent For $D=0.8$, $\lambda_c$ is of order $10^{-4}$, and a precise determination becomes very
difficult due to the small number of particles present in the system. Reliable
determination of $\lambda_c$ for larger diffusion rates would therefore require studies
of even larger systems, which was deemed impractical.
We find that $\lambda_c(D)$ can be fit quite well using an expression of the form
\begin{equation}
\lambda_c = A \exp \left[ - \frac{C}{(1-D)^\gamma}\right]\,.
\label{fit2d}
\end{equation}
\noindent Applied to the data for $D \geq 0.4$, a least-squares procedure yields $\gamma = 1.41(2)$,
$C = 0.984(2)$, and $A = 2.02(2)$. The good quality of the fit is evident in the inset
of Fig. \ref{lc2da}. Thus, while a plot of the data on a linear scale might suggest
that $\lambda_c \to 0$ at some diffusion rate between 0.8 and 1 (see Fig. \ref{lc2da}, main
graph), our results are in fact consistent with $\lambda_c$ nonzero, though very small,
for diffusion rates between 0.7 and unity.
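The fit can be reproduced with a standard nonlinear least-squares routine applied to the data of Tables~\ref{tb:pamss2d} and \ref{tb:qss2d}; a sketch (we fit $\ln \lambda_c$, so that the smallest values are not swamped by the largest):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

D = np.array([0.4, 0.5, 0.6, 0.7, 0.73, 0.78, 0.8])
lam = np.array([0.2685, 0.1462, 0.05632, 0.00940,
                0.003957, 0.0004815, 0.00015])

def log_model(D, A, C, gamma):
    # ln lambda_c = ln A - C / (1-D)^gamma
    return np.log(A) - C / (1.0 - D)**gamma

(A, C, gamma), _ = curve_fit(log_model, D, np.log(lam),
                             p0=(2.0, 1.0, 1.4))
print(A, C, gamma)  # compare A = 2.02(2), C = 0.984(2), gamma = 1.41(2)
\end{verbatim}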
\begin{figure}[!h]
\epsfysize=10cm
\epsfxsize=12cm
\centerline{
\epsfbox{lc2da.eps}}
\caption {\sf Critical line of the two-dimensional PAM. Inset:
the same data plotted as $\ln \lambda_c$ versus $1/(1-D)^{1.41}$.}
\label{lc2da}
\end{figure}
\subsection{Three dimensions}
We employed quasistationary simulations to determine $\lambda_c (D)$ for the PAM on the simple cubic lattice.
For relatively small diffusion rates good results are obtained using lattice sizes $L=8$, 16, 24, 36,
and 54. For diffusion rates greater than about 0.25, however, there are substantial finite-size
effects, and to observe clear signs of DP-like scaling we need to study larger systems ($L=80$ and 120
in addition to the sizes mentioned above).
The results (see Table~\ref{tb:pamss3d} and Fig.~\ref{lc3d}) show that in this case
$\lambda_c$ does fall to zero at a diffusion rate considerably less than unity; extrapolation of
the data to $\lambda = 0$ yields $D^* = 0.333(3)$. The critical exponents determined via finite-size scaling
analysis, $\beta/\nu_\perp = 1.40(1)$ and $z = 1.94(2)$, are once again in good agreement with literature
values of 1.39(3) and 1.919(4), respectively.
Our study yields the moment ratio value $m = 1.47(1)$ for the three-dimensional models
in the DP universality class; to our knowledge this quantity has not been determined previously.
For $D > D^*$, the particle density is expected to tend to zero linearly with $\lambda$, as the
reproduction rate approaches zero. We have verified this behavior (down to $\lambda = 10^{-4}$) for
$D=0.8$.
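The extrapolation can be reproduced directly from the data of Table~\ref{tb:pamss3d}; a sketch, assuming (as Fig.~\ref{lc3d} indicates) that the fitted cubic crosses zero slightly above the last data point:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

D = np.array([0.0, 0.1, 0.2, 0.25, 0.28, 0.31, 0.32])
lam = np.array([0.47390, 0.2943, 0.1420, 0.07790,
                0.04487, 0.01762, 0.0103])

cubic = np.poly1d(np.polyfit(D, lam, 3))  # cubic fit to lambda_c(D)
D_star = brentq(cubic, 0.32, 0.40)        # root of the fit gives D*
print(D_star)                             # text quotes D* = 0.333(3)
\end{verbatim}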
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|} \hline
$D$ & $\lambda_c$ \\ \hline\hline
0.0 & 0.47390(5) \\
0.1 & 0.2943(1) \\
0.2 & 0.1420(1) \\
0.25 & 0.07790(5) \\
0.28 & 0.04487(3) \\
0.31 & 0.01762(2) \\
0.32 & 0.0103(1) \\ \hline
\end{tabular}
\end{center}
\caption{\sf Critical parameters obtained through quasistationary simulations
in three dimensions.}
\label{tb:pamss3d}
\end{table}
\begin{figure}[!h]
\epsfysize=10cm
\epsfxsize=12cm
\centerline{
\epsfbox{lc3da.eps}}
\caption {\sf Critical line of the PAM in three dimensions; error bars are smaller than
symbols. The solid line is a cubic fit to the
data, yielding $D^* = 0.333(3)$.}
\label{lc3d}
\end{figure}
\section{Discussion}
We study the phase boundary of the pair annihilation model in the
reproduction rate - diffusion probability ($\lambda$ - $D$) plane. Our simulation results
are consistent with the theorem proven some time ago by Katori and Konno \cite{katori},
namely that in one and two dimensions, $\lambda_c > 0$ for any $D<1$.
The pair approximation is in conflict with this result, as it predicts that in any number
of dimensions, there is
a diffusion probability $D^* < 1$, above which $\lambda_c = 0$.
In one dimension
the behavior (in simulations) is straightforward, as $\lambda_c \propto 1-D$ for $D \simeq 1$. In two
dimensions, however, it is quite subtle, as $\lambda_c$ becomes exponentially small as
$D \to 1$, and a cursory analysis could well give the impression that $\lambda_c$ is
actually zero at some value of $D$ between 0.8 and unity.
Finally in three dimensions
the pair approximation prediction is verified qualitatively; we find $D^* = 0.333(3)$
in this case, while the PA yields $D^* = 1/6$.
Intuitively, the unusual behavior of $\lambda_c(D)$ in two dimensions can be understood
as a consequence of $d=2$ being the critical dimension for the recurrence of random walks.
Our simulation results for critical exponents and the moment ratio $m$ are consistent
with the directed percolation values, as expected.
Given the qualitative failure of the pair approximation in one and two dimensions,
it is natural to ask whether approximations using larger clusters would predict
the phase diagram correctly. This strikes us as unlikely, since cluster
approximations have been found to be insensitive to subtle effects involving diffusion and/or
multiparticle rules in other cases \cite{ben,fontanari,trpcr2009}.
\vspace{1em}
\noindent {\bf Acknowledgments}
\vspace{1em}
This work was supported by CNPq, Brazil.
\bibliographystyle{apsrev}
\section{Completing the path-coloring}
Let $S$ denote a set of strange $2$-cycles of $C_1$.
We show that there exist so-called {\bf \em exchange sets} $E_1$ and $F_1$ with the following properties.
\begin{lemma}
There exist sets $E_1, F_1 \subset E$ and an assignment $f: E_1 \rightarrow F_1$ satisfying the following conditions.
\begin{enumerate}
\item Let $c$ be any strange $2$-cycle of $C_1$. Then $E_1$ contains one edge $e_1$ of $c$ and $f(e_1)$ is an edge of $C_{max}$ incident to $c$ (but not contained in $c$).
\item $F_1=f(E_1)$ is a matching, i.e., no two edges of $F_1$ share a vertex.
\item $4w(E_1) \leq 6w(F_1)$.
\end{enumerate}
\end{lemma}
\noindent{\bf Proof.~}
For each strange $2$-cycle $c$, the set $E_1$ contains an edge of $c$ with minimum weight.
Let $d$ be a cycle of $C_{max}$ with length greater than $3$ such that at least one strange $2$-cycle of $C_1$ shares an edge with $d$.
Consider a $2$-cycle $c=(u,v)$ belonging to $S$ such that the edge $(u,v)$ belongs to $d$. Let $(u',u), (v,v')$ be the edges of $d$ adjacent to $c$. We call $(u',u)$ an {\em incoming neighbour of $c$} and $(v,v')$ an {\em outgoing neighbour of $c$}. If $c$ is not incorrigible, then $\min\{w(u,v), w(v,u)\} \leq \frac 34(w(u',u)+w(v,v'))$. If $d$ shares an edge only with strange $2$-cycles which are not incorrigible, we set $F_1$ as either the set of all incoming neighbours or the set of all outgoing neighbours, choosing the one with maximum weight. Since strange $2$-cycles are vertex-disjoint, the obtained set $F_1$ is a matching.
If $c$ is incorrigible, then $w(u,v) \leq \max\{w(u',u), w(v,v')\}$ and $F_1$ contains the neighbour of $c$ with maximum weight. If that neighbouring edge $e$ is also adjacent to another cycle $c'$ of $S$, then $6$ copies of $e$ are sufficient for removing $4$ copies of $(u,v)$ and $4$ copies of an edge of $c'$.
\hfill $\Box$\\[.1ex]
Let $R'$ denote the set of all tricky triangles of $C_1$. They correspond to a matching $N'$ of $H$. Notice that $N \cap N' =\emptyset$, because no tricky triangle of $R$ (corresponding to $N$) can occur in $C_1$. Thus $N \cup N'$ forms a set of alternating paths and cycles. Since $N$ is a maximum matching of $H$, each alternating path $P$ that contains at least one edge of $N'$ has even length - thus the number of edges of $N'$ on $P$ equals the number of edges of $N$.
For each alternating cycle and each alternating path of even length we replace some edges of triangles of $R'$ with edges belonging to triangles represented by edges of $N$ belonging to the same path or cycle. More precisely, suppose that an alternating path $P$ or cycle $C$ consists of a sequence of edges $e_1, f_1, \ldots,
e_i, f_i, \ldots, e_k, f_k$ such that for $1 \leq i \leq k$ it holds that $e_i \in N', f_i \in N$ and edges $e_i, f_i$ have a common vertex in $V(H)$. Then we replace some edges of each tricky triangle $t_i$ of $C_1$ corresponding to edge $e_i$ with some edges of a tricky triangle (not occurring in $C_1$) corresponding to edge $f_i$.
We now describe the exact procedure of replacement.
Let $t_i=(p,q,r)$ be a tricky triangle of $C_1$ with a t-cycle $c_i=(q,r)$. Recall that $\Delta(c)=w(r,q)-1.5w(q,r)$. In $G_1$ we take $14$ copies
of $(q,r)$, $10$ copies of each of $(p,q), (r,p)$ and $3$ copies of $(r,q)$. This means that we are lacking only one copy of $(r,q)$:
\begin{fact}
The weight of the induced subgraph $G_1(t_i)$ of $G_1$ on vertices $p,q,r$ satisfies: \\
$w(G_1(t_i)) = 4w(c_i)+10w(t_i) - w(r,q)$.
\end{fact}
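Indeed, counting the copies edge by edge,
\[
14w(q,r) + 10w(p,q) + 10w(r,p) + 3w(r,q) = 4\bigl(w(q,r)+w(r,q)\bigr) + 10\bigl(w(p,q)+w(q,r)+w(r,p)\bigr) - w(r,q),
\]
which is exactly $4w(c_i)+10w(t_i)-w(r,q)$.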
Consider alternating paths and cycles of $N \cup N'$. Each one of them consists of some sequence of edges $e_1, f_1, \ldots,
e_i, f_i, \ldots, e_k, f_k$. For any alternating cycle $C$, we can additionally arrange the edges on $C$ so that a common vertex of any two edges $e_i$ and $f_i$ on $C$ in $H$ corresponds to a $2$-cycle $c_i$. Let $(e_i, f_i)$ be any pair of edges from such an alternating cycle or path and suppose that a tricky triangle $t_i$ of $C_1$ corresponding to $e_i$ has the form $t_i=(p,q,r)$. If the common vertex of $e_i$ and $f_i$ in $H$ corresponds to a $2$-cycle $c_i$, then a tricky triangle represented by $f_i$ has the form $t'_i=(p', q,r)$. We add either $(p',q)$ or $(r,p')$ to $F_2$ (and also $3$ copies of the edge added to $F_2$ to $G_1$). If $F_1$ contains an edge incoming to $p'$, we choose $(p',q)$, otherwise - $(r,p')$. If, on the other hand, the common vertex of $e_i$ and $f_i$ in $H$ corresponds to the vertex $p$, then a tricky triangle represented by $f_i$ has the form $t'_i=(p, q',r')$. In this case we add either $(p,q')$ or $(r',p)$ to $F_2$ (and also $3$ copies of the chosen edge to $G_1$). If $F_1$ contains an edge incoming to $p$, we choose $(p,q')$, otherwise - $(r',p)$.
We call a tricky triangle $t'_i$ of $R$ corresponding to the edge $f_i$ a {\bf \em rescuer} of $t_i$.
In the next lemma we are going to prove that the total weight of edges added to $G_1$ makes up for the deficiencies in the weights of the subgraphs induced by vertices of tricky triangles of $C_1$.
\begin{lemma}
Let $N'_2$ denote the set of all t-cycles of tricky triangles of $C_1$. If $c$ is a t-cycle of a tricky triangle, then $\alpha(c)$ denotes
the weight of the lighter edge of $c$. We have: \\
$\sum_{c \in N'_2} \bigl(1.5\alpha(c)+ \Delta(c)\bigr) \leq 3 w(F_2)$
\end{lemma}
To prove it we show the following lemma.
\begin{lemma}
Let $c_1, c_2$ be two $2$-cycles such that $w'(c_1)=w'(c_2)$
and let $\mu(c_i)$ denote the minimum weight edge of a tricky triangle
incident to $c_i$. Then $3 \mu(c_1) \geq 1.5 \alpha(c_2)+\Delta(c_2)$.
\end{lemma}
\noindent{\bf Proof.~}
Suppose that $\alpha(c_2)=\alpha(c_1)+\epsilon, \Delta(c_2)=\Delta(c_1)-\epsilon$.
We know that $\mu(c_1)> 0.6\Delta(c_1)+\frac{\alpha(c_1)}{2}$. We show that $3(0.6\Delta(c_1)+\frac{\alpha(c_1)}{2}) \geq 1.5 \alpha(c_2)+\Delta(c_2)$. This is equivalent to $0.8 \Delta(c_1) \geq \frac{\epsilon}{2}$, which is true because $\Delta(c_1)\geq \epsilon$.
Next we show that $3(\frac{3}{5}(\Delta(c_1)-\epsilon)+\frac{1}{2}(\alpha(c_1)+\epsilon)) \geq \frac{3}{2}\alpha(c_1)+\Delta(c_1)$.
This is equivalent to $\frac{4}{5}\Delta(c_1) \geq \frac{3}{10}\epsilon$, which holds because $\Delta(c_1)\geq \epsilon$.
\hfill $\Box$\\[.1ex]
Next we show that we are able to extend the current path-coloring of $G_1$ to the subgraphs containing strange $2$-cycles and tricky $2$-triangles of $C_1$.
We start with subgraphs containing tricky triangles of $C_1$. We proceed in the order dictated by directed paths and cycles of a graph $H^{dir}$, which is a compressed
and directed version of the graph $H$. $H^{dir}$ is obtained from $H$ as follows. For each tricky triangle $t$ of $C_1$ we identify four vertices as a single vertex $v_t$: the three vertices of $t$ as well as the t-cycle $c$ of $t$.
Let $e=(u,v)$ be any edge of $N$. It then corresponds to a tricky triangle $t'$ of $R$. If $t'$ is a rescuer of a tricky triangle $t$ of $C_1$, we direct the counterpart of $e$ in $H^{dir}$ from $v_t$.
We first deal with directed cycles of $H^{dir}$.
\begin{lemma}
Let $c_H$ be any directed cycle of $H^{dir}$. We are able to extend the partial coloring of $G_1$ to the edges of tricky triangles covered by $c_H$ and the edges of $F_2$ of their rescuers.
\end{lemma}
\noindent{\bf Proof.~} Let $t_1, \ldots, t_k$ be the order of tricky triangles of $C_1$, in which they (or more precisely, the vertices representing them) occur on $c_H$. Assume that each $t_i$ has the form $t_i=(q_i, r_i, p_i)$, where $(q_i, r_i)$ is a t-cycle of $t_i$. This means that
for each $1 \leq i\leq k$ a rescuer $t'_i$ of $t_i$ has the form $t'_i=(q_i, r_i, p'_i)$, where $p'_i$ lies on $t_{i+1}$ (indices are taken modulo $k$). The vertex $p_i$ is incident to two edges $e_i=(s_i, p_i), e'_i=(p_i, s'_i)$ belonging to $C_{max}$, which are already colored.
We can assume that $e_i$ and $e'_i$ are diverse. Thus we use either $8$ or $9$ colors of ${\cal K}$ on $e_i$ and $e'_i$. We consider each $t_i$ in turn. We assign $3$ colors either to $f_i=(r_i, p'_i)$ or to $f'_i=(p'_i, q_i)$. We do it in such a way that:
\begin{itemize}
\item A color assigned to $f_i$ or $f'_i$ does not occur on any of $e_{i+1}, e'_{i+1}, f_{i-1}, f'_{i-1}$.
\item A color occurring on $e_i$ may be assigned to $f_i$ but not $f'_i$. Similarly, a color occurring on $e'_i$ may be assigned to $f'_i$ but not $f_i$.
\end{itemize}
We now show that we are able to assign colors to each $f_i$ or $f'_i$ to satisfy the above. Suppose that we consider $t_i$. At most $9$ colors of ${\cal K}$ are used on $e_i, e'_i$. Possibly, $t_{i-1}$ was considered before and thus one of $f_{i-1}, f'_{i-1}$ is already colored
with $3$ colors. Let $Z_1=col(e_i) \cup col(e'_i) \cup col(f_{i-1}) \cup col(f'_{i-1}), \ Z_2=col(e_{i+1}) \cup col(e'_{i+1}) \cup col(f_{i+1}) \cup col(f'_{i+1})$ and $d=\max\{|col(e_i) \setminus Z_2|, |col(e'_i) \setminus Z_2|\}$. We use $\min\{3,d\}$ colors of either $col(e_i)$ or $col(e'_i)$ on, correspondingly, either $f_i$ or $f'_i$. If we have applied $3$ colors, we are done.
Note that $|Z_1| \leq 12$ and $|Z_2| \leq 12$. We have $|Z_1 \cup Z_2|=|Z_1|+|Z_2|-|Z_1 \cap Z_2|\leq mult(e_i)+mult(e'_i)+15 -|Z_1 \cap Z_2|$. Also, $|Z_1 \cap Z_2| \geq mult(e_i)+mult(e'_i)-2d$. Therefore, $|Z_1 \cup Z_2| \leq 15+2d$. This means that, if $d<3$, there are at least
$3-d$ colors of ${\cal K}$, none of which belongs to either $Z_1$ or $Z_2$ and then we use $3-d$ such colors.
\hfill $\Box$\\[.1ex]
\begin{lemma}
Let $p_H$ be any directed path of $H^{dir}$. We are able to extend the partial coloring of $G_1$ to the edges of tricky triangles covered by $p_H$ and the edges of $F_2$ of their rescuers.
\end{lemma}
\noindent{\bf Proof.~}
Let $t_1, \ldots, t_k$ be the order of tricky triangles of $C_1$, in which they (or more precisely, the vertices representing them) occur on $p_H$. Assume that each $t_i$ has the form $t_i=(q_i, r_i, p_i)$, where $(q_i, r_i)$ is a t-cycle of $t_i$. This means that
for each $1 \leq i\leq k$ a rescuer $t'_i$ of $t_i$ has the form $t'_i=(q_i, r_i, p'_i)$, where for each $i >1$, $p'_i$ lies on $t_{i-1}$. The vertex $p_i$ is incident to two edges $e_i=(s_i, p_i), e'_i=(p_i, s'_i)$ belonging to $C_{max}$, which are already colored.
Before coloring $G_1$, whenever possible, we replace the edges $e_i$ and $e'_i$ with one edge $e''_i=(s_i, s'_i)$. The only cases when we do not perform such a replacement are when (i) $s_i=s'_i$, and then $(s_i,p_i)$ is a $2$-cycle of $C_{max}$, (ii) $(s_i, s'_i)$ is a $2$-cycle of $C_1$, (iii) there is a triangle of $C_1$ containing $s_i$ and $s'_i$. Thus, apart from the three cases described above, edges $e_i$ and $e'_i$ are colored with the same $4$ colors of ${\cal K}$.
Suppose that $f_i=(r_i, p'_i) \in F_2$. There are $6$ colors forming set $Z_i$ available for coloring it and we have to choose $3$. There exists such set $Z_i$, because $G_1$ contains one edge of $C_{max}$ incoming to $p'_i$, colored with $4$ colors and one edge of $C_1$ incoming to $p'_i$, colored with $10$ colors and $f_i$ has to be diverse with both of them. Let $d_i=(s_i, x_i), d'_i=(x'_i,s'_i)$ be two edges of $C_1$. Each of them is colored with $10$ colors. It is possible that there exists one or two edges of $F_2$ of the form $\tilde{f}_i=(s_i, y_i), \tilde{f'}_i=(y'_i, s'_i)$, which are already colored. Any edge of $F_2$ is colored with $3$ colors. If we want to recolor $e_i$, we have to ensure that $e_i$ is diverse with both $d_i$ and $\tilde{f}_i$. It means that we have at least $7$ colors (set $Z^1_i$) at our disposal for coloring $e_i$. By the same token, we have at least $7$ colors (set $Z^2_i$) available for coloring $e'_i$.
Let us first consider the case when $e_i$ and $e'_i$ are colored with the same $4$ colors. If $|col(e_i) \cap Z_i|\leq 3$ we color $f_i$ with $3$ colors of $Z_i \setminus col(e_i)$. In the other case, we recolor $e_i$ and $e'_i$ by replacing one fixed color $k\in Z_i \cap col(e_i)$ with $k_1$ on $e_i$ and with $k_2$ on $e'_i$. Colors $k_1, k_2$ are such that $k_1 \neq k_2$ and $k_j \in Z^j_i \setminus col(e_i)$ for $j \in \{1,2\}$. We then use $k$ and two other colors of $Z_i \setminus col(e_i)$ on $f_i$.
Let us now deal with the three cases when $e_i$ and $e'_i$ are not replaced with one edge.
If $s_i=s'_i$, then $F_2$ contains at most one of the edges $\tilde{f}_i=(s_i, y_i), \tilde{f'}_i=(y'_i, s_i)$. It means that at least one of the sets $Z^1_i, Z^2_i$ contains $10$ colors. We can notice that if some color $k$ belongs to $Z^1_i \cap Z^2_i$, then if we use $k$ on exactly one of $e_i, e'_i$ then that edge will be safe with respect to $k$, because neither $d_i$ nor $d'_i$ is colored with $k$. We need to assign $mult(e_i)+mult(e'_i) \leq 9$ different colors
to $e_i$ and $e'_i$. Note that color $z$ assigned to $f_i$ can be assigned only to $e_i$ and not to $e'_i$ but we need to ensure that it will not belong to a monochromatic cycle. We recolor $e_i, e'_i$ as follows:
\begin{itemize}
\item If $|(Z^1_i \cup Z^2_i) \setminus Z_i|\geq 6$, then we color $e_i, e'_i$ using at most $3$ colors of $Z_i$ in total and assign the remaining colors (unassigned to either $e_i$ or $e'_i$) to $f_i$.
\item If $|(Z^1_i \cup Z^2_i) \setminus Z_i|=4+x$, where $x \in \{0,1\}$, then $|Z^1_i \cap Z^2_i| \geq 7-x$, because $|Z^1_i \cup Z^2_i|=|Z^1_i|+|Z^2_i|-|Z^1_i \cap Z^2_i| \geq 17-|Z^1_i \cap Z^2_i|$. This means that $Z^1_i \cap Z^2_i$ contains at least $2-x$ elements of $Z_i$.
We assign $2-x$ colors of $Z^1_i \cap Z^2_i \cap Z_i$ to $e_i$ and also $f_i$, $1+x$ other colors of $Z_i$ to $f_i$, and the remaining $3$ colors of $Z_i$ can be assigned to either $e_i$ or $e'_i$. We also use $9-(5-x)=4+x$ colors of $(Z^1_i \cup Z^2_i) \setminus Z_i$ to complete the coloring of $e_i$ and $e'_i$.
\end{itemize} \hfill $\Box$\\[.1ex]
\subsection{Full coloring of b-edges and s-edges}
During the processing of paths and cycles of $C_1$ we do not assign colors $col'(e)$ to any edge $e$ that is either a secondary b-edge or an s-edge. As a result some b-edges cannot be assigned their inherited colors $col''(e)$. This happens for any b-edge whose ally is an s-edge or a secondary b-edge.
To be able to complete the process of coloring all b-edges and s-edges, we introduce a directed graph $D=(V_D, E_D)$, which shows the dependencies between halfy triangles containing such edges. The vertex set of $D$ consists of all halfy triangles. For a halfy triangle $t$, we denote by $v_t$ a vertex representing it in $D$. The edge set $E_D$ contains an edge $(v_t, v_{t'})$ iff the ally of the main b-edge of $t'$
is either an s-edge of $t$ or a secondary b-edge of $t$. The direction of an edge $(v_t, v_{t'})$ reflects the fact that $t$ needs to be fully colored to be able to complete the coloring of $t'$. Note that each vertex of $V_D$ has at most one incoming edge and at most two outgoing edges. To {\bf \em D-process} a directed path or cycle $s$ of $D$ means to complete the coloring of each halfy triangle $t$ corresponding to any vertex on $s$ in such a way that $G_1$ remains unblocked.
Notice that any two cycles of $D$ are vertex-disjoint.
\begin{lemma}\label{sedge1}
It is possible to color an s-edge of a tricky $2$-triangle $t$ in such a way that
it does not prevent any halfy triangle from being colored in a cooperative manner.
\end{lemma}
\noindent{\bf Proof.~}
If the b-edge $e(t)$ of $t$ is already fully colored, then we color the s-edge $s(t)$ in a standard manner described earlier.
Assume now that $e(t)$ is not fully colored. It means that the ally $al(e(t))$ of $e(t)$ is not precolored, i.e., its own colors are not yet assigned. Therefore, $al(e(t))$ is an s-edge or a secondary b-edge of some tricky triangle $t_0$. Let $a(t)$ denote the antenna of $t$ and $e_{01}$ an edge of $C_1$ coincident with both $e(t)$ and $al(e(t))$.
If $a(t)$ is already precolored and no color $k \in col'(a(t))$ is forbidden on $s(t)$, then we claim that we can assign colors of $col'(a(t))$ to $s(t)$. To see this, suppose first that $al(e(t))$ is an s-edge of $t_0$. Then $e(t_0)$ and $e_{01}$ form a directed path $P$ of length $2$.
If $e(t)$ is an antenna of $t_0$ (when $t_0$ is a $2$-triangle) or an ally of $al(e(t))$ (when $t_0$ is a $3$-triangle), then the standard procedure of shadowing $al(e(t))$ would involve assigning colors of $col'(e(t))$ to $al(e(t))$ and forbidding colors of $col''(e(t_0))$ on $e_{01}$ by assigning colors of $col''(e(t_0))$ to $col''(e(t))$. We still do this if $col''(e(t_0)) \cap col(s(t))=\emptyset$. On the other hand, if some color $k \in col''(e(t_0)) \cap col(s(t))$, then we can notice that $e(t_0)$ is already safe w.r.t. $k$, because $e(t_0)$ cannot belong to a monochromatic cycle of color $k$, since such a cycle would have to contain $s(t)$, which is impossible. This means that we do not have to do shadowing for any such color $k$. If $e(t)$ is neither an antenna of $t_0$ (when $t_0$ is a $2$-triangle) nor an ally of $al(e(t))$ (when $t_0$ is a $3$-triangle), then the reasoning is similar, as follows. Let $al(al(e(t)))$ denote the ally of $al(e(t))$.
Instead of doing the shadowing with the aid of $al(al(e(t)))$, we can use $e(t)$, and then the proof of the claim for the case when $al(e(t))$ is an s-edge of $t_0$ goes through. Suppose next that $al(e(t))$ is a secondary b-edge of $t_0$; then again $e(t_0)$ and $e_{01}$ form a directed path $P$ of length $2$ and for any $k \in col''(e(t_0)) \cap col(s(t))$, the edge $e(t_0)$ is safe w.r.t. $k$. Thus we do not assign it to $al(e(t))$ and hence also do not add it to $col''(e(t))$. This finishes the proof of the claim.
Suppose next that $a(t)$ is already precolored but some colors of $col'(a(t))$ are forbidden on $s(t)$. It means that $s(t)$ is an antenna
of some tricky triangle $t_2$. Let $e_{12}$ denote an edge of $C_1$ coincident with $s(t)$ and incident to $e_{01}$ - the edges $e_{01}, e_{12}$ form a directed path $P_1$. We can notice that then $e(t_0)$ is safe w.r.t. any color $k \notin col'(e(t_2))$, because $P_1$ together with $e(t_2)$ form a directed path of length $3$ and no color $k' \in col''(e(t_0))$ can occur on $e_{12}$. Therefore, we do not shadow $s(t_0)$ - instead we assign $col'(e(t_2))$ to $e(t)$, which ensures that $e(t_0)$ is safe w.r.t. any color. Also we assign $s(t)$ any $5$ colors not forbidden on it, i.e. disjoint with $col'(e(t)) \cup col'(e(t_2))$.
Finally, we consider the case when $a(t)$ is not yet precolored. If $a(t)$ is a secondary b-edge of a $3$-triangle $t_3$, we can assign to $s(t)$ and $a(t)$ the same $5$-element set of colors disjoint with $col'(e(t))$. Note that we do not have to do the shadowing for inherited colors of $e(t)$, because $col'(e(t))=col'(e(t_3))$ and hence $e(t)$ is safe w.r.t. any color $k \in col''(e(t))$ (because such a color either does not occur on $e(t_3)$ or on an edge of $C_1$ connecting $t$ and $t_3$). If $a(t)$ is an s-edge, then for similar reasons we can assign to $s(t)$ and $a(t)$ the same $5$-element set of colors disjoint with $col'(e(t))$.
\hfill $\Box$\\[.1ex]
\begin{lemma}\label{sedge2}
It is possible to color an s-edge $s(t)$ of a tricky $3$-triangle $t$ in such a way that
it does not prevent any halfy triangle from being colored in a cooperative manner. Moreover, if $s(t)$ is an ally of a main b-edge $e(t')$ of some halfy triangle $t'$, then it is possible to fully color $e(t')$.
\end{lemma}
\noindent{\bf Proof.~} We proceed in the manner described in the proof of the lemma above, where the ally of the s-edge of $t$ plays the same
role as the outer antenna of a tricky $2$-triangle in Lemma \ref{sedge1}. The only difference is that the ally $a$ of the s-edge of $t$
may be colored in the same way as the main b-edge of $t$. Then we can still assign colors of $col'(a)$ to $s(t)$, because the secondary b-edge of $t$ is diverse with $e(t)$, i.e., $col''(e'(t)) \cap col'(e(t))=\emptyset$.
If $s(t)=(u,v)$, then $C_1$ contains edges $(u,u_1), (v_1,v)$ coincident with $s(t)$ and $C_{max}$ contains edges $a=(u_2,u_1), a'=(v_1, v_2)$.
If the ally $a$ of the s-edge of $t$ is the main b-edge of a halfy triangle $t'$, then instead of assigning colors of $col'(a)$ to $s(t)$, we may assign colors of $col'(a')$ to it, and then $a'$ plays the same role as the outer antenna of a tricky $2$-triangle in Lemma \ref{sedge1}.
If $col'(a)=col'(a')$ (more precisely, if $col'(a)\cap col'(a')\neq \emptyset$), then it still means that $e(t')$ is not colored fully.
In such a case, we assign to $col''(e(t'))$ any $5$-color set $Z$ disjoint with $col(e(t))$. If later on, $col''(e(t))$ turns out to be different from $Z$, we can shadow $s(t)$ using $a'$, because $col(s(t))=col'(a')$.
\hfill $\Box$\\[.1ex]
At this point the only not fully colored edges are either the main or the secondary b-edges of halfy triangles. Moreover, the main b-edge
$e(t)$ of a halfy triangle $t$ is not fully colored only if (i) its ally is the secondary b-edge of some halfy triangle $t'$ or (ii) its ally is an s-edge $s(t')$ of some halfy triangle $t'$ and $e(t)$ is the ally of $s(t')$, which also means that if $t'$ is a tricky $2$-triangle, then $e(t)$ is the antenna of $t'$.
\begin{lemma}
It is possible to D-process each cycle of $D$.
\end{lemma}
\noindent{\bf Proof.~}
Let $c$ be any cycle of $D$. By saying that a triangle $t$ is on $c$, we mean that $v_t \in c$. Similarly, by $(t',t) \in c$ we mean $(v_{t'}, v_{t}) \in c$.
Let $(t',t)$ be any edge of $c$. Let $Z_1$ denote $col'(e(t'))$. If $t'$ is a $3$-triangle, then $Z_2$ denotes $col''(e'(t'))$. Otherwise,
if $t'$ is a $2$-triangle, then $Z_2=col(s(t'))$. Notice that $Z_1$ and $Z_2$ are disjoint (because the outer antenna of $t'$ is equal to
$e(t)$ if $t'$ is a $2$-triangle). We assign any $5$-color set $Z$ disjoint with both $Z_1$ and $Z_2$ to $col''(e(t'))$.
Next we color fully $t'$ and all succeeding tricky triangles using only colors of $Z_3=Z \cup Z_1 \cup Z_2$ on main b-edges.
We now argue that this is possible for each triangle $t_1$ on $c$.
Suppose first that $t_1$ is a $2$-triangle and we have just fully colored its main b-edge so that $col''(e(t_1)) \subset Z_3$. Let $t_2$ denote the tricky triangle succeeding $t_1$ on $c$. The ally of $e(t_2)$ is $s(t_1)$, and the antenna of $t_1$ is $e(t_2)$. We want to shadow $s(t_1)$ w.r.t. colors of $col''(e(t_1))$. We recall that if some color $k \in col''(e(t_1)) \cap col(s(t_2))$, then we do not shadow $s(t_1)$ w.r.t. $k$, because $e(t_1)$ is already safe w.r.t. $k$. We assign all colors of $col''(e(t_1)) \setminus col(s(t_2))$ to $col''(e(t_2))$. If there are fewer than $5$ of them, we add some colors of $Z_3$.
Suppose next that $t_1$ is a $3$-triangle and we have just fully colored its main b-edge $e(t_1)$ by adding $5$ colors of $Z_3$, i.e. $col''(e(t_1)) \subset Z_3$. We assign $col''(e(t_1)) \setminus (col''(e'(t_1)) \cup col(s(t_1)))$ to $col'(e'(t_1))$. Recall that we shadow $s(t_1)$ only w.r.t. colors of $col''(e(t_1)) \cap col(e'(t_1))$. If the successor $t_2$ of $t_1$ on $c$ is such that the ally of $e(t_2)$ is $s(t_1)$, then we assign also $col''(e(t_1)) \setminus (col''(e'(t_1)) \cup col(s(t_1)))$ to $col''(e(t_2))$, and if there are fewer than $5$ such colors, we complete the set with an appropriate number of colors of $Z_3$. Note that we also independently add some colors to $col'(e'(t_1))$, if there are already fewer than $5$. These colors do not have to belong to $Z_3$. If the ally of $e(t_2)$ is $e'(t_1)$, we also complete the set $col'(e'(t_1))$ with colors of $Z_3$. Since we want to avoid at most $|col'(e(t_1)) \cup col''(e'(t_1))| \leq 10$ colors and $|Z_3|=15$, we can always do that.
This way, when we return to the triangle $t'$, we will not have to recolor it much. In any case, we will be able to do it in such a way that the coloring of none of the remaining triangles on $c$ will have to be changed. More precisely, consider the edge $(t'',t')$. Since $t''$ is now fully colored,
$e(t')$ inherits some colors from one of the edges of $t''$. The inherited colors are a subset of $Z \cup Z_1 \cup Z_2$. If the inherited colors are $Z_1$ or $Z$, then we assign $Z$ to $e(t')$ and $Z_2$ to $s(t')$. If the inherited colors are $Z_2$, then we leave the coloring as it is. In the case that the inherited colors $I$ contain $i$ colors of $Z \cup Z_1$ and $5-i$ colors of $Z_2$, we proceed analogously, i.e. we assign to $s(t')$: $i$ colors of $Z_2 \setminus I$ and $5-i$ colors of $Z \setminus I$.
\hfill $\Box$\\[.1ex]
\begin{lemma}
Let $t$ be a tricky triangle with exactly two incident edges $e_1, e_2$ of $C_1$ and such that either both these edges are incoming or both are outgoing. Then it is possible to color the edges of $t$ and $e_1, e_2$ in such a way that $G'_1$ is unblocked.
\end{lemma}
\noindent{\bf Proof.~} Suppose that $t=(p,q,r)$ and $e_1=(p', p), e_2=(q',q)$. Assume also that a $2$-cycle $(q,r)$ belongs to $C_{max}$ and that $C'_{max}$
contains edges $(p',p''), (p,p'''), (p_4,p), (q', q'')$. W.l.o.g. suppose that $col(p',p'')=\{6,7,8,9,10\}, col(q',q'')=\{16,17,18,19,20\}$.
Since $(p,p''')$ is weakly diverse with $(p',p'')$, at least $2$ colors of $(p,p''')$ do not occur on $(p',p'')$. This means that there exist $3$ colors of $col(p',p'')$ which occur neither on $e_1$ nor on $(p, p''')$ nor on $(p_4,p)$. Suppose that these are colors $8,9,10$. We assign them to $(r,p)$.
\hfill $\Box$\\[.1ex]
\section{Introduction}
In the maximum asymmetric traveling salesman problem (Max ATSP) we are given a complete directed graph $G=(V,E)$ with nonnegative weights on the edges and we wish to compute a traveling salesman tour of maximum weight.
The problem is known to be APX-hard \cite{PY} and the current best approximation algorithms for it are due to Kaplan, Lewenstein, Shafrir, Sviridenko \cite{KLSS} obtained in 2003 and Elbassioni, Paluch, van Zuylen \cite{PEZ} published in 2012.
Both of them achieve the approximation ratio of $\frac 23$, the former is based on linear programming and the other is combinatorial and simpler.
Besides being an interesting problem in itself, Max ATSP is also of particular interest because of its applications to a number of related problems. For example, an $\alpha$-approximation algorithm for Max ATSP implies a $(2+\frac{11(1-\alpha)}{9-2\alpha})$-approximation algorithm for the shortest superstring problem (SSP), as shown by Mucha \cite{Mucha}. SSP is defined as follows. We are given $n$ strings $s_1, s_2, \ldots, s_n$ over a given alphabet $\Sigma$ and we want to find a shortest string $s$ such that each $s_i$, $1 \leq i \leq n$, is a substring of $s$. SSP arises in DNA sequencing and data compression.
Currently the best approximation algorithm for SSP is due to Mucha \cite{Mucha} and achieves an approximation factor of $2 \frac{11}{23}$. For a long time the best approximation algorithm for SSP was the one given by Sweedyk \cite{sweedyk} in 1999 with an approximation factor of $2 \frac{1}{2}$.
Any $\alpha$-approximation algorithm for Max ATSP implies also an algorithm with the same guarantee for the maximal compression problem defined by Tarhio and Ukkonen \cite{TU}.
We devise a combinatorial $\frac{7}{10}$-approximation algorithm for Max ATSP, thus proving
\begin{theorem}
There exists a $\frac{7}{10}$-approximation algorithm for the maximum asymmetric traveling salesman problem.
\end{theorem}
Using the result of Mucha \cite{Mucha}, we obtain
\begin{corollary}
There exists a $2 \frac{33}{76}$-approximation algorithm for the shortest superstring problem.
\end{corollary}
The presented results are a simpler and weaker version of \cite{34max}.
The approach we have adopted is as follows. We start by computing a maximum weight {\it cycle cover} $C_{max}$ of $G$, where
a cycle cover $C$ of graph $G$ is defined as a set of directed cycles of $G$ such that each vertex of $G$ belongs to exactly one cycle of $C$. A maximum weight cycle cover of $G$ can be found in polynomial time by a reduction to maximum weight matching.
Let $opt$ denote the weight of a traveling salesman tour of $G$ of maximum weight.
The weight of an edge $e$ will be denoted as $w(e)$ and for any subset $E'$ of edges $E$ by $w(E')$ we will mean $\sum_{e \in E'} w(e)$.
Since a traveling salesman tour is a cycle cover of $G$ (consisting of just one cycle), we know that $w(C_{max}) \geq opt$. By removing the lightest edge from each cycle of $C_{max}$, we obtain a collection of vertex-disjoint
paths, which can be arbitrarily patched to form a tour. Removing the lightest edge from cycle $c$ of length $k$ results in a path of weight at least $\frac{k-1}{k} w(c)$. Since $C_{max}$ may contain cycles of length two ($2$-cycles),
in the worst case the obtained tour may have weight equal to $\frac 12 w(C_{max})$. If we could find a maximum weight cycle cover of $G$ without cycles of length two ($2$-cycles) or three ($3$-cycles or {\em triangles}), then we would achieve a $
\frac 34$-approximation, but, unfortunately, finding a maximum weight cycle cover without $2$-cycles is APX-hard \cite{BM}.
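For concreteness, the reduction to maximum weight matching can be sketched as follows: a perfect matching between ``out-copies'' and ``in-copies'' of the vertices is exactly a cycle cover, with each vertex matched to its successor on its cycle. A minimal illustration (the weight matrix is an arbitrary example):
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def max_weight_cycle_cover(w):
    # w[u][v] = weight of edge (u, v); u is matched to its successor
    w = np.array(w, dtype=float)
    np.fill_diagonal(w, -1e9)       # forbid self-loops
    rows, cols = linear_sum_assignment(w, maximize=True)
    return dict(zip(rows, cols))    # successor function of the cover

w = [[0, 5, 1, 2],
     [4, 0, 6, 1],
     [2, 7, 0, 3],
     [8, 1, 2, 0]]
print(max_weight_cycle_cover(w))
# {0: 3, 1: 2, 2: 1, 3: 0}: the optimum here consists of two 2-cycles,
# illustrating precisely the difficulty discussed above
\end{verbatim}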
{\bf Eliminating and diluting problematic subgraphs with the aid of half-edges}
Since $2$- and $3$-cycles in a maximum weight cycle cover are an obstacle to getting a $\frac{7}{10}$-approximation, we would like to somehow get rid of them. To this end we use a technique of eliminating problematic subgraphs with the aid of {\bf \em half-edges} - a half-edge of edge $(u,v)$ is informally speaking
``either a head or a tail of $(u,v)$''. Half-edges have already been introduced in \cite{PEZ}. They have also been employed in \cite{12tsp},\cite{maxtsp}, \cite{01tsp}. Here we further develop this approach and show how to eliminate even more complex subgraphs. We already know that computing a maximum weight cycle cover without $2$- and $3$-cycles is hard. What we propose instead is to find a cycle cover $C'$ {\em improving on $C_{max}$} in the sense it does not contain certain $2$- and $3$-cycles from $C_{max}$ as well as some other difficult subgraphs but possibly contains half-edges and has weight at least $opt$. Let us note that it is the requirement that the weight of $C'$ is an upper bound on $opt$ that makes the task difficult. Without it finding new cycle covers avoiding prescribed configurations is easy and we would not even have to resort to using half-edges. We believe that the method utilizing half-edges provides a handy and relatively easy way
of obtaining new cycle covers (or sometimes matchings) improving on previous ones in a certain manner and having weight upper
or lower bounding $opt$, respectively. Additionally, half-edges in such cycle covers can be either completely discarded or extended to full edges, yielding regular cycle covers. Such an approach is often substantially easier
than extracting a good cycle cover from the fractional solution of an appropriate linear program.
For example, note that the method of obtaining two cycle covers of weight at least $2opt$ and without any common $2$-cycle in \cite{KLSS}
is very complicated.
We deal with problematic subgraphs by either {\em eliminating} or {\em diluting} them. If $C_{max}$ contains at least one $2$-cycle or triangle, we compute
a cycle cover of $G$ that does not contain any $2$-cycle or triangle that already belongs to $C_{max}$ but may contain $2$-cycles or triangles that are not in $C_{max}$ or half-edges. Such a cycle cover $C_1$ is going to be called a
{\em relaxed cycle cover $\mathbf \mathit C_{1}$ improving $\mathbf \mathit C_{max}$}. Also we will ensure that a computed $C_1$ has weight at least $opt$.
In some cases $C_1$ would suffice to build a traveling salesman tour of weight at least $\frac{7}{10} opt$. To (try to) extract such a tour
from $C_1$ and $C_{max}$ we build a multigraph $G_1$ consisting of $4$ copies of $C_{max}$ and $10$ copies of $C_1$. Each occurrence of an edge $e$ in $C_{max}$ contributes $4$ copies of $e$ to $G_1$ and each occurrence of $e$ in $C_1$ contributes $10$ copies of $e$ to $G_1$. If $C_1$ contains only one half-edge of a certain edge $e$, then $C_1$ contributes $5$ copies of $e$ to $G_1$. The number of copies of edge $e$ in $G_1$ may be equal to up to $14$.
The total weight of edges of $G_1$ is at least $14 opt$. We would like to divide edges of $G_1$ into $20$ sets $Z_1, \ldots, Z_{20}$
in such a way that each $Z_i$ ($1 \leq i \leq 20$) is a collection of
vertex-disjoint paths. One of the sets $Z_1, \ldots, Z_{20}$ would then have to have weight at least $\frac{7}{10} opt$ and by patching it to a tour, we would obtain the desired solution. Dividing edges of $G_1$ into $20$ sets
can be viewed as coloring them with $20$ colors so that each color class contains vertex-disjoint paths. Such coloring will also be called a {\em path-$20$-coloring} of $G_1$. We can see that we are not able to path-$20$-color $G_1$ if $C_1$ contains a {\em tricky} triangle $t$, which is a triangle that shares an edge with a $2$-cycle of $C_{max}$. This is because a subgraph of $G_1$ induced on the vertices of $t$ contains $38$ edges, $4$ of which belong to an edge oppositely oriented to an edge of $t$. Therefore we would need $21$ colors to path-color it.
In the paper we show that if $C_1$ does not contain a tricky triangle, then we are able
to color $G_1$ as required.
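The counting behind the $\frac{7}{10}$ threshold is a simple averaging argument:
\[
\max_{1 \leq i \leq 20} w(Z_i) \;\geq\; \frac{w(G_1)}{20} \;=\; \frac{4w(C_{max})+10w(C_1)}{20} \;\geq\; \frac{14\, opt}{20} \;=\; \frac{7}{10}\, opt.
\]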
To safeguard against tricky triangles in $C_1$, we introduce a technique of {\em diluting}, one of the main new techniques of the paper.
It consists in allowing a tricky triangle $t$ to occur in $C_1$, but in a {\em diluted} form, by which we mean that although it contains all
its edges, its weight is seemingly appropriately decreased, which enables its path-coloring. In other words, this technique succeeds (in a way) in altering the weights of edges in an unalterable (fixed) graph! \\
{\bf Methods of edge coloring} \
For coloring $G_1$ we present a method, which we think is interesting in its own right.
One of the surprisingly simple ideas on which this method is based is as follows: let $S$ be a subset of $V$ and $e=(u,v)$ an edge going into $S$ (i.e. $u \notin S$ and $v \in S$), which is colored with a color $k$. Then if there exists no edge $e'=(u',v')$ outgoing from $S$ (i.e. such that $u' \in S$ and $v' \notin S$) which is colored $k$, then $e$ does not belong to any cycle all of whose edges are colored $k$. Using this idea in an inductive way is very helpful in the process of coloring.
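To make this invariant concrete, the sketch below (with a color class represented, hypothetically, as a plain list of arcs) checks that a class is a collection of vertex-disjoint paths, i.e. contains no monochromatic cycle:
\begin{verbatim}
def is_path_collection(arcs):
    # arcs: the (u, v) pairs of one color class
    succ, has_pred = {}, set()
    for u, v in arcs:
        if u in succ or v in has_pred:
            return False            # in- or out-degree exceeds 1
        succ[u] = v
        has_pred.add(v)
    # with all degrees <= 1 the class splits into paths and cycles;
    # every arc of a path is reached from a start (no predecessor)
    reached = 0
    for s in succ:
        if s in has_pred:
            continue
        u = s
        while u in succ:
            reached += 1
            u = succ[u]
    return reached == len(arcs)     # unreached arcs lie on cycles

assert is_path_collection([(1, 2), (2, 3)])
assert not is_path_collection([(1, 2), (2, 1)])   # a 2-cycle
\end{verbatim}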
Coloring of multigraphs considered in this paper is also related to {\em the linear arboricity conjecture}, which asserts that every $k$-regular digraph can be path-$(k+1)$-colored (\cite{Naka}, \cite{alon}). This relationship is more visible while path-$3$-coloring a $2$-regular digraph or path-$4$-coloring a $3$-regular digraph. Path-$3$-coloring a $2$-regular digraph is a special, very short case of our method of path-coloring, and we obtain a method of path-$4$-coloring $3$-regular digraphs obtained from $2$ copies of one cycle cover $C_{o}$ and
$1$ copy of another one $C'_{o}$.
We are convinced that the presented techniques will find many other applications, not only in the context of traveling salesman problems.
{\bf Previous and related results}
The history of approximating the problems of maximum asymmetric traveling salesman and shortest superstring is quite long, as is shown by the following lists of papers \cite{Li}, \cite{Blum}, \cite{Teng}, \cite{Czumaj}, \cite{KPS}, \cite{armen95}, \cite{BJJ}, \cite{sweedyk}, \cite{KLSS}, \cite{PEZ}, \cite{Mucha} and \cite{FNW}, \cite{KPS}, \cite{B1}, \cite{LS}, \cite{KLSS},
\cite{PEZ}.
Other variants of the maximum traveling salesman problem that have been considered are among others: the maximum symmetric traveling salesman problem (MAX TSP), in which the underlying graph is undirected - currently the best known approximation ratio is $\frac 45$ \cite{maxtsp}, the maximum symmetric traveling salesman problem, in which the edge weights satisfy the triangle inequality - the best approximation factor is $\frac 78$ \cite{KM2},
the maximum asymmetric traveling salesman problem with a triangle inequality - the best approximation ratio is $\frac{35}{44}$ \cite{KM1}.
\section{Missing proofs} \label{miss}
We start with two auxiliary lemmas.
\begin{lemma} \label{est}
Let $t$ be a tricky $3$-triangle and $a$ any of its edges. Then $\frac{9}{31}w(t) < w(a) <\frac{2}{5} w(t)$.
Let $p$ be any two-edge path contained in $opp(t)$. Then $w(p) < \frac{28}{37}w(t)$.
\end{lemma}
\noindent{\bf Proof.~}
We show that if at least one of these statements does not hold, then there exists an amenable
subgraph on $t$ of weight at least $14w(t)$, which contradicts the fact that $t$ is tricky.
Suppose that $w(a) \geq \frac{2}{5} w(t)$ for some edge $a$ of $t$. Then the other two edges have weights $w(b)=\frac{3}{10}w(t)+\delta, w(c)=\frac{3}{10}w(t)-\delta$ for some $\delta \geq 0$. By taking $20$ copies of $a$, $17$ of $b$ and $3$ of $c$, we obtain an amenable
subgraph on $t$ of weight at least $14w(t)$.
Suppose now that $\frac{9}{31}w(t) \geq w(a)$ for some edge $a$ of $t$. Then the other two edges have weights $w(b)=\frac{11}{31}w(t)+\delta, w(c)=\frac{11}{31}w(t)-\delta$ for some $\delta \geq 0$. By taking $3$ copies of $a$, $20$ of $b$ and $17$ of $c$, we obtain an amenable
subgraph on $t$ of weight at least $14w(t)$.
Assume now that there exists a two-edge path $p$ contained in $opp(t)$ having weight $w(p) \geq \frac{28}{37}w(t)$. Therefore its edges
$a$ and $b$ have weights $w(a)=\frac{14}{37}w(t)+\delta, w(b)=\frac{14}{37}w(t)-\delta$ for some $\delta \geq 0$. By taking $20$ copies of $a$ and $17$ of $b$, we obtain an amenable
subgraph on $t$ of weight at least $14w(t)$. \hfill $\Box$\\[.1ex]
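The extremal cases of the three bounds can be checked mechanically with exact rational arithmetic (normalizing $w(t)=1$ and setting $\delta=0$); a small sanity check:
\begin{verbatim}
from fractions import Fraction as F

# w(a) = 2/5: 20 copies of a, 17 of b, 3 of c
assert 20*F(2,5) + 17*F(3,10) + 3*F(3,10) == 14
# w(a) = 9/31: 3 copies of a, 20 of b, 17 of c
assert 3*F(9,31) + 20*F(11,31) + 17*F(11,31) == 14
# w(p) = 28/37: 20 copies of a, 17 of b
assert 20*F(14,37) + 17*F(14,37) == 14
print("all three extremal configurations attain weight 14*w(t)")
\end{verbatim}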
The corollary of this lemma is the following:
\begin{lemma} \label{est1}
Let $t$ be a tricky $3$-triangle and $a$ any of its edges. Let $a', b',c'$ denote edges of $opp(t)$ such that $w(a')\geq w(b') \geq w(c')$.
Then $w(b') < \frac{14}{37}w(t)$.
\end{lemma}
{\bf Proof of Lemma \ref{relax}}
First we show that any perfect matching $M$ of $G'$ yields a relaxed cycle cover. For any edge $(u_{out}, e^1_{uv}) \in M$, we add a half-edge $(u, x_{(uv)})$ to $\tilde C$ and for any edge $(u_{in}, e^2_{vu}) \in M$, we add a half-edge $(x_{(vu)},u)$ to $\tilde C$.
Since $M$ is a perfect matching of $G'$ each vertex $u_{in}$ and each vertex $u_{out}$ has an incident edge in $M$. Hence, each vertex in $V$
has exactly one outgoing and one incoming half-edge in $\tilde C$.
If an edge $(u,v) \in E$ does not belong to any tricky $2$-cycle, $3$-triangle or $2$-triangle of $R$, then it is replaced with an edge $(u_{out}, v_{in})$ in $E'$. Thus, if
$\tilde C$ contains only one half-edge of some edge $e$, then $e$ must belong to one of the mentioned tricky cycles.
For every tricky $3$-triangle $t$, $G'$ contains eight additional vertices $\gamma^{t+}_p, \gamma^{t-}_p, \ldots, \gamma^c_r, \gamma^c_q$, each of which excludes exactly one half-edge within $t \cup opp(t)$ from $\tilde C$. This exclusion follows from the fact that each of these vertices is matched to some vertex $e^1_{(uv)}$ or $e^2_{(uv)}$, such that $(u,v)\in t \cup opp(t)$. If one of these additional vertices is matched to a vertex $e^1_{(uv)}$, then a half-edge $(u, x_{(uv)})$ does not belong to $\tilde C$. Similarly, if it is matched to a vertex $e^2_{(uv)}$, then a half-edge $(x_{(uv)}, v)$ does not belong to $\tilde C$. Therefore, for each tricky triangle $t$, at least eight half-edges within $t \cup opp(t)$ do not belong to $\tilde C$, which means that $\tilde C$ contains at most four half-edges within $t \cup opp(t)$, hence contains neither $t$ nor $opp(t)$.
Next, we deal with the properties stated in the current lemma.
The proof of the first property is very similar to the proof of Lemma 2 in \cite{PEZ}.
Let $t=(p,q,r)$ be a tricky $3$-triangle such that $(r,q)$ is the chosen edge with maximum weight among edges of $opp(t)$. Let us notice that we may assume that $\tilde C$ is integral on $\{(q,p), (r,p)\}$ because if $M$ contains one half-edge of $(q,p)$, then it also contains one half-edge of $(r,p)$ and these half-edges are crossing. Also, the half-edges incoming to $p$ have the same weight. Therefore such two half-edges in $M$ can be replaced in $\tilde C$ by that one of the edges $\{(q,p), (r,p)\}$, whose half-edge incident to $q$ or $r$ is contained in $M$.
Applying the same kind of reasoning we can prove that:
\begin{claim}
Let $c$ denote the $2$-cycle $(q,r)$.
If vertices $\gamma^c_q$ and $\gamma^c_r$ are matched in $M$ to the subdivision vertices of the same edge of $c$, then $M$ yields
a relaxed cycle cover $\tilde C$, which is integral on $t$.
\end{claim}
We denote the half-edges $(u,x_{(u,v)})$ and $(x_{(u,v)},v)$ as $u^\rightarrow_v$ and $v^\leftarrow_u$.
Assume first that $\gamma^c_q$ and $\gamma^c_r$ are matched in $M$ to $e^2_{q,r}$ and $e^2_{r,q}$. This means that $\gamma^{t-}_q$ must be matched to
$e^2_{p,q}$ and $\gamma^{t-}_r$ to $e^2_{p,r}$. Vertices $\gamma^{t+}_q, \gamma^{t+}_r$ may be matched in $M$ in one of the following ways:
\begin{enumerate}
\item $e^1_{p,q}$ and $e^1_{p,r}$. Thus, $e^1_{q,r}$ and $e^1_{r,q}$ must be matched to $q_{out}$ and $r_{out}$, which in turn means
that neither can $e^1_{q,p}$ be matched to $q_{out}$ nor $e^1_{r,p}$ to $r_{out}$.
Therefore, $\tilde C$ contains two outgoing half-edges within the $2$-cycle $(q,r)$ and no half-edges within $\{(p,q), (p,r), (q,p), (r,p)\}$.
\item $e^1_{r,q}$ and $e^1_{q,r}$. This would mean that both $e^1_{p,q}$ and $e^1_{p,r}$ must be matched to $p_{out}$, which is impossible.
Therefore this case cannot occur.
\item $e^1_{r,q}$ and $e^1_{p,r}$.
$\tilde C$ contains half-edges $ q^\rightarrow_r, p^\rightarrow_q, $ and either the edge $(q,p)$ or no half-edge within $\{(r,p), (q,p)\}$.
\item $e^1_{p,q}$ and $e^1_{q,r}$. $\tilde C$ contains half-edges $p^\rightarrow_r, r^\rightarrow_q$ and either the edge $(r,p)$ or no half-edge within $\{(r,p), (q,p)\}$.
\end{enumerate}
Assume next that $\gamma^c_q$ and $\gamma^c_r$ are matched in $M$ to $e^1_{q,r}$ and $e^1_{r,q}$. This means that $\gamma^{t+}_q$ must be matched to
$e^1_{p,q}$ and $\gamma^{t+}_r$ to $e^1_{p,r}$. Vertices $\gamma^{t-}_q, \gamma^{t-}_r$ may be matched in $M$ in one of the following ways:
\begin{enumerate}
\item $e^2_{p,q}$ and $e^2_{p,r}$. $\tilde C$ contains two incoming half-edges within the $2$-cycle $(q,r)$.
\item $e^2_{r,q}$ and $e^2_{q,r}$. This means that $e^2_{p,q}$ must be matched to $q_{in}$ and $e^2_{p,r}$ to $r_{in}$. Therefore, $\tilde C$
contains $q^\leftarrow_p, r^\leftarrow_p$ and no half-edge within $(q,r)$.
\item $e^2_{r,q}$ and $e^2_{p,r}$.
$\tilde C$ contains half-edges $q^\leftarrow_p, r^\leftarrow_q$.
\item $e^2_{p,q}$ and $e^2_{q,r}$. $\tilde C$ contains half-edges $q^\leftarrow_r, r^\leftarrow_p$.
\end{enumerate}
In all the above four cases $\tilde C$ may contain additionally one of the edges $(q,p), (r,p)$.
We now want to show that $w(\tilde C)_t$ satisfies the conditions described in the definition of a harmonious triangle $t$.
Suppose first that $\tilde C$ is not integral and contains two half-edges within $t \cup opp(t)$. By the observation above we know that none of these half-edges belongs to
$(q,p)$ or $(r,p)$. If $\tilde C$ does not contain a half-edge of $(r,q)$ (the heaviest edge of $opp(t)$), then to prove that $t$ is harmonious in this case it suffices to show that $5w(a')+5w(c')+4w(t) \leq 10w(t)-5w(a)$ holds for any two different edges $a',c'$ of $\{(p,q), (p,r), (q,r)\}$ and for any edge $a$ of $t$. This is equivalent to showing $5w(a')+5w(c')+5w(a) \leq 6w(t)$.
Suppose to the contrary that $5w(a')+5w(c')+5w(a) > 6w(t)$. But then if $a',c' \in opp(t)$, by Lemma \ref{est} $5(w(a')+w(c')) < 5 \cdot \frac{28}{37}w(t)$ and $w(a)< \frac{2}{5}w(t)$, which means that $5w(a')+5w(c')+5w(a) < 6w(t)$ - a contradiction. If only one of $a',c'$ belongs to $opp(t)$, we have that $5w(a')+5w(c')+5w(a) < 10 \cdot \frac{2}{5}w(t) + 5 \cdot \frac{14}{37}w(t)<6w(t)$ - again a contradiction.
If both $a'$ and $c'$ belong to $t$, then $w(a')+w(c')< \frac{22}{31}w(t)$ and hence $5w(a')+5w(c')+5w(a)<6w(t)$. The proof holds also when $\tilde C$ contains two half-edges within $opp(t)$, one of which may be a half-edge of $(r,q)$, because then $5(w(a')+w(c')) < 5 \cdot \frac{28}{37}w(t)$.
Assume now that $\tilde C$ contains two half-edges, one of which is a half-edge of $(r,q)$ and the other is within $t$. We notice that then $\tilde C$ must contain
one half-edge of $(q,r)$, because if $M$ contains one half-edge of $(r,q)$ and one half-edge of $(p,q)$, then these half-edges are crossing and can be replaced by one edge - either $(r,q)$ or $(p,q)$. Now we prove that $5w(r,q)+5w(q,r)+4w(t) \leq \max\{10w(t)-5w(a), 10(w(r,q)+w(q,r))\}$ holds for any edge $a$ of $t$ different from $(q,r)$.
Let $b=(q,r)$ and $b'=(r,q)$. Suppose that $5w(b)+5w(b')+4w(t)>10w(t)-5w(a)$.
This means that $w(b')+w(b)> \frac{6}{5}w(t)-w(a)$. If $5w(b)+5w(b')+4w(t)$ is also greater than $10(w(b)+w(b'))$, then $w(b)+w(b')< \frac{4}{5}w(t)$. However, by Lemma \ref{est} $\frac{6}{5}w(t)-w(a)>\frac 45 w(t)$ - a contradiction.
Let us now consider the cases when $\tilde C$ is not integral and contains four half-edges within $t \cup opp(t)$. Hence, $\tilde C$
contains the whole edge $(r,p)$ or the whole edge $(q,p)$, because $\tilde C$ can contain at most two half-edges within the remaining edges
within $t \cup opp(t)$. We can notice that if $\tilde C$ contains $(r,p)$, then we have an identical proof as above for the case of two half-edges within $t \cup opp(t)$, because we then add $10w(a)$ to both sides of the inequality.
We now analyze the cases when $\tilde C$ contains $(q,p)$:
\begin{enumerate}
\item all four half-edges are within $opp(t)$. We show that $5w(a')+5w(b')+10w(c')\leq 6w(t)+5w(a)$ if $a',b', c' \in opp(t), a\in t$ and
$c' \neq (r,q)$. Since $5w(a')+5w(b')+10w(c')=5w(opp(t))+5w(c') \leq 5w(t)+ \frac{70}{37}w(t)< 7w(t)$ and $6w(t)+5w(a)> 7\frac{14}{31}w(t)$,
the inequality indeed holds.
\item $\tilde C$ contains also two incoming half-edges within $(q,r)$. We notice that it cannot happen that $5w(b)+5w(b')+4w(t)+10w(c') > 10w(t)+5w(b)$, because it would mean that $w(b')+2w(c')>\frac{6}{5}w(t)$. However, by Lemma \ref{est} $w(c')+w(b')< \frac{28}{37}w(t)$ and $w(c')< \frac{14}{37}w(t)$, which means that $w(b')+2w(c')< \frac{42}{37}w(t)< \frac{6}{5}w(t)$.
\item $\tilde C$ contains $q^\leftarrow_p, r^\leftarrow_p$. Then we show that $5w(c)+5w(a')+10w(c')+4 w(t) \leq 10w(t)+5w(b)$, which is equivalent to $w(c)-w(b)+w(a')+2w(c') \leq \frac{6}{5}w(t)$. $a', c'$ are two edges of $opp(t)$ which do not contain a maximum weight edge $b'$. Hence $w(a')+2w(c')\leq w(opp(t)) \leq w(t)$. Also, $w(c)-w(b)< \frac{2}{5}w(t) -\frac{9}{31}w(t)<\frac{1}{5}w(t)$.
\item $\tilde C$ contains half-edges $q^\leftarrow_p, r^\leftarrow_q$. We show that $5w(b)+5w(c)+10w(c')+4 w(t) \leq 10w(t)+5w(b)$, which is equivalent to $w(c)+2w(c') \leq \frac{6}{5}w(t)$. We know that $2w(c')<\frac{28}{37}w(t)<\frac{4}{5}w(t)$. On the other hand, $w(c)< \frac{2}{5}w(t)$.
\end{enumerate}
{\bf Proof of Lemma \ref{uzup}}
Let $e=(u,v)$ be any edge of $c$ and $e_1=(u,v'), e_2=(u',v)$ edges of $C'_{max}$ coincident with it. Then $e$ has to be colored with colors
of ${\cal K} \setminus (col(e_1) \cup col(e_2))$, i.e., it cannot be colored with any color assigned to the edge coincident with it.
Suppose first that $c$ does not contain any chords. If each color $k \in {\cal K}$ is assigned to some ray of $c$, then we are already done. By coloring each edge $e$ of $c$ with any $mult(e)$ colors of ${\cal K}$ that are not assigned to any edges of $C_{max}$ coincident with $e$, we achieve that each color $k \in {\cal K}$ is {\em not} assigned to some edge of $c$, thus $c$ is not monochromatic with respect to any color of ${\cal K}$. Otherwise, if $kol(c) <20$ and thus not every color $k \in {\cal K}$ is assigned to some ray of $c$, we still have that $\chi(c) = kol(c)+flex(c) \geq 20$. Therefore, $flex(c)=\sum_{e \in c} flex(e) \geq 20-kol(c)$. This means that for each edge $e$ of $c$ with $flex(e)>0$ we can choose $flex(e)$ colors of ${\cal K}$ which will {\em not} appear on $e$. Recall that $e$ is colored with $mult(e)$ colors and $flex(e)=10-|col(e_1) \cup col(e_2)|$, where $e_1, e_2$ are edges coincident with $e$. This way we can distribute all colors of ${\cal K}$ not assigned to any rays of $c$ among the edges of $c$ and ensure that each color of ${\cal K}$ that is not assigned to any ray of $c$ does not appear on some edge of $c$. Hence, $c$ will again not be monochromatic w.r.t. any color of ${\cal K}$.
Let us also remark that during processing of cycles we keep an invariant that if $c$ has some rays, then some two of them are diverse.
Assume now that $c$ contains some chords. Observe that if $c$ has no incident bows (edges of multiplicity $5$), then the flexibility of each edge $e$ of $c$ satisfies $flex^0(e) \geq 2 + \min\{0, 8-kol(c)\}$.
Thus for the flexibility of the whole cycle $c$ it holds: $flex^0(c) \geq 2\lambda(c) + \min\{0, 8- kol(c)\}$. This means that if $c$ has length
at least $6$, we can color the chords of $c$ however we like and $c$ will not become blocked - $\chi(c)\geq 20$. Using the same argument as above we can then always color the edges of $c$ in such a way that each color of ${\cal K}$ does not appear on some edge of $c$.
There remains the question of how to ensure that each chord is safe. If a chord $e$ of $c$ is contained in a directed path $P$ consisting of edges of $C_{max}$ that contains some ray $r$ of $c$, then to guarantee that $e$ is safe, we can simply color all edges of $P$ between $e$ and the closest ray $r'$ of $c$ (together with $e$) with the same set of colors as $r'$. Notice that if $c$ has length smaller than $6$ and
has no uncolored chords after this procedure, then $flex^+(c) \geq 4$, thus $kol(c)+flex(c) \geq 20$.
Assume then now that we have already colored all such chords of $c$.
Assume that $c$ has the form $(v_1, v_2, \ldots, v_{\lambda(c)})$. We say that a sequence $R(i,j)=(v_i, v_{i+1}, \ldots, v_j)$ is {\bf \em a row of subcycles} if (i) for each $k, i \leq k \leq j$ there exists an uncolored subcycle $c_k$ of $c$ going through $v_k$ and for any two $i_1, i_2$ such that $i \leq i_1 <i_2 \leq j$ subcycles $c_{i_1}$ and $c_{i_2}$ are different and (ii) it is maximal, i.e. it cannot be extended.
A row $R(i,j)$ begins (resp. ends) in one of three ways:
\begin{enumerate}
\item a marker - when edges of $C_{max}$ incident to $v_{i-1}$ (resp. $v_{j+1}$) are already colored,
\item a twist - if a subcycle $c'$ of $c$ going through $v_{i-1}$ goes also through one of the vertices $v_{i+1}, \ldots, v_{j}$ (resp. $c'$ going through $v_{j+1}$ goes also through one of the vertices $v_{i}, \ldots, v_{j-1}$),
\item a broad subcycle - if a subcycle $c_i$ goes also through $v_{i-1}$ (resp. $c_j$ goes also through $v_{j+1}$).
\end{enumerate}
\end{enumerate}
To color a row $R(i,j)$ of subcycles in a {\bf \em zebra manner} means to (i) choose two disjoint four-element sets of colors $B$ and $W$ and (ii) color all edges of $C_{max}$ outgoing from $v_i, \ldots, v_j$ using all colors of $B$ and the remaining edges of the subcycles $c_i, c_{i+1}, \ldots, c_j$ with $4$ colors of $W$.
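As an illustration, the following minimal Python sketch colors a row in a zebra manner; the representation of edges, the map \texttt{col} and the helper arguments are assumptions made for the sketch only and do not come from the algorithm itself.
\begin{verbatim}
def zebra_color(row_vertices, outgoing_cmax_edge, subcycle_edges, B, W, col):
    # B and W are the two disjoint four-element sets of colors.
    assert len(B) == 4 and len(W) == 4 and not (B & W)
    striped = {outgoing_cmax_edge[v] for v in row_vertices}
    for e in striped:              # edges of C_max outgoing from v_i,...,v_j
        col[e] = set(B)
    for e in subcycle_edges:       # the remaining edges of c_i,...,c_j
        if e not in striped:
            col[e] = set(W)
    return col
\end{verbatim}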
\begin{claim}
Suppose that a row $R(i,j)$ of subcycles is colored in a zebra manner and $e=(v', v_{i-1})$ is an edge of $C_{max}$. Then
\begin{enumerate}
\item If $R(i,j)$ begins with a marker and $B = col(e)$, then all edges of subcycles $c_i, c_{i+1}, \ldots, c_j$ are safe except for possibly
the edge outgoing from $v_j$, namely some edge $(v_j, v'')$ of $c_j$.
\item If $R(i,j)$ begins with a twist, then all edges of subcycles $c_i, c_{i+1}, \ldots, c_j$ are safe except for possibly
the edge $(v_j, v'')$ of $c_j$ and the edge $e'=(v_{i-1}, v_i)$ of $c$ has $flex(e') \geq 4$.
\item If $R(i,j)$ begins with a broad cycle, then all edges of subcycles $c_i, c_{i+1}, \ldots, c_j$ are safe except for possibly
the edges: $(v_j, v'')$ of $c_j$ and $(v''', v_i)$ of $c_i$; the edge $e'=(v_{i-1}, v_i)$ of $c$ has $flex^+(e') \geq 4$ or $(v''', v_i)$ is safe.
\end{enumerate}
\end{claim}
Suppose first that $kol(c)+flex^0(c) \geq 20$ and no chord is a bow (an edge of multiplicity $5$). We then only have to color the chords and the edges of the cycle $c$ so that each one of them is safe. We consider each row $R(i,j)$ of subcycles separately. Suppose that it begins with a marker. We then choose for $B$ some $4$ colors contained in $col(e_1) \cup col(e_2)$ and color $R(i,j)$ in a zebra manner. If $R(i,j)$ ends with a twist, then all edges contained in the subcycles of $R(i,j)$ are safe as a result. Otherwise,
the only edge that may not be safe is $(v_j, v'')$. If $R(i,j)$ ends with a broad cycle, then $flex^+(v_j, v_{j+1}) \geq 4$ and we can forbid $4$ colors of $W$ on $(v_j, v_{j+1})$, thus ensuring that $(v_j, v'')$ is safe. If $R(i,j)$ ends with a marker, then let $e_3, e_4$ denote the edges of $C_{max}$ incident to $v_{j+1}$. Let $W'$ consist of $\min\{4, |(col(e_3) \cup col(e_4)) \setminus B|\}$ colors of $(col(e_3) \cup col(e_4)) \setminus B$. Note that $(v_j, v'')$ is safe w.r.t. each such color. If $W'$ has fewer than $4$ such colors, we notice that
$flex^+(v_j, v_{j+1}) \geq 4-|W'|$, because $4-|W'|$ colors of $B$ must be also present on $(v''', v_{j+1})$. Therefore we can forbid all the remaining colors of $W\setminus W'$ on $(v_j, v_{j+1})$ and thus ensure that $(v_j, v'')$ is safe. If $R(i,j)$ begins and ends with a twist,
then by coloring it in a zebra manner, we guarantee that all edges of its subcycles are safe. Similarly, if $R(i,j)$ begins and ends with a broad cycle, then $flex^+(v_{i-1}, v_i) \geq 4, flex^+(v_j, v_{j+1}) \geq 4$ and we can forbid colors of $B$ on $(v_{i-1}, v_i)$ and colors of $W$ on $(v_j, v_{j+1})$.
If any of the subcycles $c'$ in a row $R(i,j)$ contains a bow, then $c'$ cannot be a broad cycle (because a broad cycle contains an edge of multiplicity at least $14$ and a $2$-cycle containing a bow has edges with multiplicities $4$ and $5$).
If $kol(c) \geq 20$, or $kol(c)+flex^0(c)$ is at least $20$ plus the number of chords that are bows, then we color any bow occurring in a row $R(i,j)$ with a color $k' \notin W \cup B$ and forbid $k'$ on one edge of $c$, thus using one unit of $flex^0(c)$.
We now deal with cycles with chords such that $flex^0(c)+kol(c)<20$ and without incident bows. Note that such $c$ has length $4$ or $5$.
In this case we additionally need to get an extra $flex^+(c)\geq 4$. Except for the case when $R(i,j)$ begins and ends with a broad cycle, we either get it for free by the above claim or can ensure it by choosing $B$ in such a way that $flex^+(v_{i-1}, v_i) \geq 4$.
If a row $R(i,j)$ begins and ends with a broad cycle, then we do not color it in a zebra manner but in an {\bf \em alternate zebra manner}:
we use the same $4$ colors of $B$ on the edges $(v_i, v'_i), (v'_{i+1}, v_{i+1}), (v_{i+2}, v'_{i+2}), \ldots$ and the same $4$ colors of $W$
on $(v'_i, v_i), (v_{i+1}, v'_{i+1}), (v'_{i+2}, v_{i+2}), \ldots$, thus increasing $flex^+(c)$ sufficiently.
Finally, we deal with cycles with chords such that $flex^0(c)+kol(c)<20$ and having incident bows. The outline of the proof in this case is the following. Let $c'$ be an uncolored subcycle. It is contained in two rows $R(i,j)$ and $R(i',j')$. Sometimes we need to get an extra $flex^+(c) \geq 7$ (for example, if $c$ is a square). We are then able to get an extra $flex^+(c)$ of $4$ from each of the rows.
\hfill $\Box$\\[.1ex]
\section{Construction of $G_1$ in the presence of tricky triangles and tricky $2$-cycles} \label{multi}
We show how to modify the multigraph $G_1$ built in the previous section, when $G$ contains tricky triangles or $C_1$ contains strange $2$-cycles. We say that a tricky triangle $t=(p,q,r)$ is {\bf \em halfy} if $C_1$ contains exactly one half-edge of some edge of $t$ or of $opp(t)$.
The main new features are going to be the following $3$ types of subgraphs, shown in Figures \ref{htrgs} and \ref{bow}, arising on tricky triangles:
\begin{enumerate}
\item a subgraph on $p,q,r$ such that $C_{max}$ contains a halfy $3$-triangle $t=(p,q,r)$ and $C_1$ contains exactly four edges incident to $t$: either three incoming edges and one outgoing of $t$ or three outgoing and one incoming. W.l.o.g. assume that $C_1$ contains
edges $(p',p), (r',r), (q,q_1), (q_2, q)$. We proceed as indicated in the definition of a harmonious triangle. $G_1$ then contains either (i) $10$ copies of each of $(r,p),(q,r)$ and $5$ copies of $(p,q)$ or (ii) $10$ copies of each of $(p,r),(r,p)$ and $0$ copies of $(p,q), (q,r)$. We choose the option of maximum weight. If option (ii) is maximum, then we treat $C_{max}$ as though it contained the $2$-cycle $(p,r)$ and not the triangle $(p,q,r)$ and in coloring $G_1$ we do not treat $(p,q,r)$ as a tricky triangle.
Any edge $e \in C_{max}$ such that $mult(e)=10$ is called a {\bf \em b-edge}. A subgraph of $G_1$ on $p,q,r$ thus contains two b-edges.
\item a subgraph on $p, q,r$ such that $C_{max}$ contains a halfy $2$-triangle $t=(p,q,r)$ with a t-cycle $c=(q,r)$ and $C_1$ contains exactly four edges incident to $t$, three of which are incident to $c$. W.l.o.g. assume that two edges of $C_1$ are incident to $q$ and that $C_1$ contains an edge $(r_1, r)$. Then $mult(r,q)=10$ and $mult(q,r)=5$, hence $(r,q)$ is a b-edge.
\item a subgraph on $q,r$ such that $C_{max}$ contains a $2$-cycle $c=(q,r)$, which is a t-cycle of a tricky $2$-triangle $t$ and $C_1$ contains a loop $e'_t$. Then $C_1$ contains four edges incident to $c$ and
$mult(q,r)=5, mult(r,q)=4$. We call the edge $(q,r)$ a {\bf \em bow}.
\end{enumerate}
\begin{figure}[h]
\centering{\includegraphics[scale=0.6]{halfytrgs.pdf}}
\caption{{\scriptsize Halfy triangles with $3$ incoming and $1$ outgoing edges of $C_1$: a tricky $3$- and $2$-triangle $t=(p,q,r)$.}
} \label{htrgs}
\end{figure}
\begin{figure}[h]
\centering{\includegraphics[scale=0.6]{bow.pdf}}
\caption{{\scriptsize A bow.}
} \label{bow}
\end{figure}
The following two types of subgraphs can be treated in a way very similar to a subgraph surrounding a halfy $2$-cycle (Figure \ref{htrgs2}):
\begin{enumerate}
\item a subgraph on $p,q,r$ such that $C_{max}$ contains a halfy $3$-triangle $t=(p,q,r)$ and $C_1$ contains exactly two edges incident to $t$: either two incoming or two outgoing. W.l.o.g. assume that $C_1$ contains
edges $(p',p)$ and $(q',q)$. We again proceed as indicated in the definition of a harmonious triangle. $G_1$ then contains either (i) $10$ copies of each of $(p,q), (r,p)$ and $15$ copies
of $(q,r)$ or (ii) $10$ copies of each of $(p,r), (r,q), (q,r)$ or (iii) $10$ copies of each of $(q,r), (r,p), (p,r)$. We choose the option with maximum weight. If option (ii) or (iii) is maximum, then we treat $C_{max}$ as though it contained the $2$-cycle $(r,q)$ or $(r,p)$ and not the triangle $(p,q,r)$ and in coloring $G_1$ we do not treat $(p,q,r)$ as a tricky triangle. We call the edges $(p', p''), (q', q'')$ of $C_{max}$ {\bf \em antennas} of $t$ and require that they are diverse.
\item a subgraph on $p,q,r$ such that $G$ contains a halfy $2$-triangle $t=(p,q,r)$, where $c=(q,r)$ is its t-cycle and $C_1$
contains exactly two edges incident to $t$: either two incoming or two outgoing. W.l.o.g. assume that
$C_1$ contains
edges $(p',p)$ and $(q',q)$. $G_1$ then contains $5$ copies of each of $(p,q), (r,q)$, $4$ copies of $(r,p)$ and $14$ copies
of $(q,r)$. We call the edges $(p', p''), (q', q'')$ of $C_{max}$ {\bf \em antennas} of $t$ and require that they are diverse.
We call the edge $(p,p''')$ of $C_{max}$ a {\bf \em weak antenna} of $t$ and require that it is {\bf \em weakly diverse} with $(p', p'')$, by which we mean that $|col(p, p''') \setminus col(p',p'')| \geq 2$.
\end{enumerate}
\begin{figure}[h]
\centering{\includegraphics[scale=0.6]{halfytrgs2.pdf}}
\caption{{\scriptsize Halfy triangles with two incoming edges of $C_1$: a tricky $3$- and $2$-triangle $t=(p,q,r)$.}
} \label{htrgs2}
\end{figure}
If $C_1$ contains some strange $2$-cycle or tricky triangles, then the multigraph $G_1$ contains non-path-$20$-colorable subgraphs.
We deal with such non-colorable subgraphs at the end by finding exchange sets $E_1, F_1$ and extending the partial path-$20$-coloring.
If $C_1$ contains a $2$-cycle or triangle of $C_{max}$, then such a cycle is not tricky and we can replace it
with other edges so as to obtain an amenable subgraph.
In all other aspects the obtained multigraph $G_1$ has almost the same properties as in previous sections, i.e., each vertex has at most two incoming and two outgoing edges (in each direction, one in $C_{max}$ and one in $C_1$) and thus, counting multiplicities, indegree and outdegree at most $14$.
Below we give a detailed description of the construction of $G_1$.
\subsection{Tricky $2$-cycles}
A $2$-cycle $c=(u,v)$ of $G$ is {\bf \em strange} if exactly one of the edges of $c$ belongs to $C_{max}$. Let $c=(u,v)$ be a strange $2$-cycle such that $(u,v) \in C_{max}$
and $(u',u), (v,v')$ are its incident edges of $C_{max}$. If $c$ is not a subcycle of a triangle of $C_{max}$ and both (i) $w(u,v) > \frac 34 (w(u',u)+w(v,v'))$ and (ii) $w(v,u) > \frac 34(w(u',u)+w(v,v'))$, then it is said to be {\bf \em incorrigible}.
\begin{lemma} \label{2incor}
Let $c=(u,v)$ be an incorrigible $2$-cycle. Then at most one of the vertices $u,v$ is part of an incorrigible $2$-cycle $c'\neq c$.
If $c=(u,v)$ and $c'=(u,v')$ are two incorrigible $2$-cycles such that $(u,v), (u,v'), (v',v'') \in C_{max}$ and $w(u,v)\geq w(u,v')$,
then $w(v',v'') < \frac{w(u,v')}{3}$.
\end{lemma}
\noindent{\bf Proof.~} Suppose to the contrary that both $u$ and $v$ are part of incorrigible $2$-cycles $c_1=(u',u)$ and $c_2=(v,v')$ different from $c$. Assume that edges $(u,v), (u',u), (v,v')$ belong to $C_{max}$ and let $(u'',u'), (v',v'')$ be edges of $C_{max}$ adjacent to $u'$ and $v'$.
Since the $2$-cycles $c_1$ and $c_2$ are incorrigible, we have that $w(u',u)> \frac 34 (w(u,v)+w(u'',u'))$ and $w(v,v')> \frac 34 (w(u,v)+w(v',v''))$, which implies that $w(u',u)> \frac 34 w(u,v)$ and $w(v,v')> \frac 34 w(u,v)$. It means that $\frac 34 (w(u',u)+w(v,v'))>
\frac 98 w(u,v) \geq w(u,v)$. Therefore $c=(u,v)$ is not incorrigible - a contradiction.
Let $c=(u,v)$ and $c'=(u,v')$ be two incorrigible $2$-cycles such that $(u,v), (u,v'), (v',v'') \in C_{max}$ and $w(u,v)\geq w(u,v')$.
Then, because $c'$ is incorrigible, $w(u,v')> \frac 34 (w(u,v)+w(v',v''))$. Hence, $w(v',v'')< \frac 43w(u,v') - w(u,v)$. Since $w(u,v)\geq w(u,v')$, we get that $w(v',v'') < \frac 13 w(u,v')$. \hfill $\Box$\\[.1ex]
For $2$-cycles that are not subcycles of triangles of $C_{max}$, we in fact use a different definition of a tricky cycle than the one presented earlier.
A $2$-cycle $c=(u,v)$ of $G$, which is not a subcycle of any triangle of $C_{max}$ is tricky
if it satisfies one of the following:
\begin{enumerate}
\item $c$ is hard (by definition it then belongs to $C_{max}$).
\item $c=(u,v)$ is an incorrigible $2$-cycle that is either vertex-disjoint from every other incorrigible $2$-cycle, or there exists an incorrigible
$2$-cycle $c'=(u,v')$ such that $(u,v), (u,v') \in C_{max}$ and $w(u,v)>w(u,v')$.
\end{enumerate}
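The incorrigibility condition translates directly into code. The following Python sketch assumes a weight map \texttt{w} on directed edges and a precomputed flag saying whether $c$ is a subcycle of a triangle of $C_{max}$; it is an illustration of the definition, not part of the algorithm.
\begin{verbatim}
def is_incorrigible(u, v, u_prev, v_next, w, in_cmax_triangle):
    # c = (u, v) with (u, v) in C_max; (u_prev, u) and (v, v_next) are the
    # incident edges of C_max; w maps directed edges to weights.
    if in_cmax_triangle:       # subcycles of C_max triangles are excluded
        return False
    s = w[(u_prev, u)] + w[(v, v_next)]
    return w[(u, v)] > 0.75 * s and w[(v, u)] > 0.75 * s
\end{verbatim}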
It easily follows that any two tricky $2$-cycles are vertex-disjoint.
\subsection{Strange halfy $2$-cycles}
If $C_1$ contains exactly one half-edge of each edge of a $2$-cycle $c$ of $G$, then $c$ is called a {\bf \em halfy} $2$-cycle of $C_1$.
Additionally, if a halfy $2$-cycle $c$ of $C_1$ is such that exactly one of the edges of $c$ belongs to $C_{max}$, then it is said to be
a {\bf \em strange halfy 2-cycle of $C_1$}.
In the way shown below we deal with each strange halfy $2$-cycle that is not a subcycle of a tricky $3$-triangle.
To facilitate the subsequent coloring of $G_1$, we modify it as follows. We are also going to modify $C_{max}$. To avoid confusion, we denote the modified $C_{max}$ as $C'_{max}$. Let $c=(u,v)$ be any strange halfy 2-cycle of $C_1$. Suppose that $(u,v) \in C_{max}$. Let $(u',u), (v,v') \in C_{max}$ be the two edges of $C_{max}$ incident to $(u,v)$.
We remove all copies of $(u',u), (v,v')$ ($8$ in total) from $G_1$ and replace them with $1$ additional copy of $(u,v)$ and $4$ additional copies of $(v,u)$; as a result $mult(u,v)=mult(v,u)=10$ and $mult(u',u)=mult(v,v')=0$. Thus, $(u,v)$ and $(v,u)$ are b-edges. Also, in $C'_{max}$, we replace the edges $(u',u), (v,v')$ with one edge $(v,u)$. As a consequence of this modification, we obtain a multigraph $G_1$ in which each halfy $2$-cycle $c=(u,v)$ of $C_1$ (not only a strange one but also one with both edges in $C_{max}$) has no incident edges of $C_{max}$ apart from those already belonging to $c$, and both edges of $c$ are b-edges.
\begin{lemma}
The weight of the thus modified $G_1$ is at least $4w(C_{max})+10w(C_1)$.
\end{lemma}
\noindent{\bf Proof.~} This follows from the fact that each halfy and strange $2$-cycle $(u,v)$ of $C_1$ is incorrigible. Assuming that it is $(u,v)$ that belongs to $C_{max}$ and $(u,v)$ is incorrigible, by the definition we get that $w(u,v) > \frac 34 (w(u',u)+w(v,v'))$ and $w(v,u) > \frac 34(w(u',u)+w(v,v'))$, which means that $w(u,v)+5w(v,u)\geq 4(w(u',u)+ w(v,v'))$.
\hfill $\Box$\\[.1ex]
\subsection{Tricky $3$-triangles}
For any tricky $3$-triangle, which is halfy in $C_1$, we proceed as described at the beginning of this section.
This is justified by Lemma \ref{relax}, which says that any such triangle is harmonious in $C_1$.
\subsection{Tricky $2$-triangles}
\begin{lemma} \label{nontricky}
Let $t=(p,q,r)$ be a tricky $2$-triangle of $C_1$ with a t-cycle $c=(q,r)$. Then
\begin{enumerate}
\item $w(r,q)> \max\{\frac{w(t)}{2}, \frac{3}{2}w(q,r)\}$.
\item Let $\Delta = w(r,q) - \frac{3}{2}w(q,r)$. Then $\min\{w(p,q), w(r,p)\} > \frac{3}{5}\Delta +\frac{w(q,r)}{2}$.
\item Let $\epsilon= w(r,q) - \frac{w(t)}{2}$. Then $w(q,r) \geq \frac{w(t)}{3}-\epsilon$.
\end{enumerate}
\end{lemma}
\noindent{\bf Proof.~}
If point $1$ did not hold, we could replace $4$ copies of $(r,q)$ with two copies of $t$ and obtain a path-$20$-colorable subgraph, or replace $4$ copies of $(r,q)$ with $6$ copies of $(q,r)$.
Let $a=(q,r), d=(r,q), b_1=(p,q), b_2=(r,p)$.
We now prove point $2$. We have that $w(d)=\frac{3}{2}w(a)+\Delta$.
We notice that in order for
$t$ to be tricky it has to hold that $4w(c)+10w(t) > 10w(a)+10w(b_2)+10w(d)$, because the subgraph consisting of $10$ copies of each of $a, b_2,
d$ is path-$20$-colorable. This means that $20w(a)+10w(b_1)+10w(b_2)+4\Delta > 25w(a)+10w(b_2)+10 \Delta$, which implies that $w(b_1)>\frac{3}{5}\Delta + \frac{w(a)}{2}$.
We obtain the same estimation for $w(b_2)$ if we consider the subgraph consisting of $10$ copies of each of $a, b_1,
d$. Then it must hold that $4w(c)+10w(t) > 10w(a)+10w(b_1)+10w(d)$.
To prove point $3$, suppose to the contrary that $w(a)<\frac{w(t)}{3}-\epsilon$. We will show that in such a case $t$ is not tricky. Notice that $10w(t)+4w(c)<10w(t)+\frac{10}{3}w(t)=\frac{40}{3}w(t)$.
Since $w(a)<\frac{w(t)}{3}-\epsilon$, we have $w(b_1)+w(b_2)> \frac{2w(t)}{3}+\epsilon$.
Suppose that $w(b_1) \geq w(b_2)$. Let $w(b_1)=\frac{w(t)}{3}+ \epsilon/2+\delta$. Then $w(b_2)> \frac{w(t)}{3}+ \epsilon/2-\delta$.
If $b_1$ is coincident with an edge of $C_{max}$ with multiplicity $5$ (a bow), then
we take $15$ copies of $b_1$, $13$ copies of $b_2$ and $12$ copies of $a$.
Hence, the whole subgraph has weight greater than
$5w(t)+7.5\epsilon+15\delta+\frac{13}{3}w(t)+6.5\epsilon-13\delta+4w(t)-12\epsilon>\frac{40}{3}w(t)$.
If $b_1$ is not coincident with an edge of $C_{max}$ with multiplicity $5$, then
we take $16$ copies of $b_1$, $12$ copies of $b_2$ and $12$ copies of $a$.
Hence, the whole subgraph has also weight greater than
$\frac{40}{3}w(t)$.
\hfill $\Box$\\[.1ex]
Let $t=(p,q,r)$ be a tricky triangle such that $c=(r,q)$ is a $2$-cycle of $C_{max}$.
Let $a=(q,r), d=(r,q), b_1=(p,q), b_2=(r,p)$.
\begin{enumerate}
\item $C_1$ contains two crossing half-edges within $b_1, b_2$ and no half-edges within $c$.
Then $C_1$ contains either $(r_1,r)$ and two edges incident to $q$ or $(q_1,q)$ and two edges incident to $r$. W.l.o.g. assume that the first case holds.
$G_1$ then contains $10$ copies of $d$ and $5$ copies of $a$.
To prove that $10w(d)+5w(a) \geq 5w(b_1)+5w(b_2) + 4w(c)$, we use Lemma \ref{nontricky} point $3$.
Let $\epsilon= w(d) - \frac{w(t)}{2}$ and suppose that $10w(d)+5w(a) < 5w(b_1)+5w(b_2) + 4w(c)$. Then $10w(d)+5w(a) < 5w(b_1)+5w(b_2) + 4w(a)+4w(d)$, which implies that $6w(d)+w(a)<5w(b_1)+5w(b_2)$. Hence $3w(t)+6\epsilon+w(a)<5(w(t)-w(a))$. Thus $w(a)<\frac{w(t)}{3} - \epsilon$, which contradicts Lemma \ref{nontricky} point $3$.
\item $C_1$ contains two crossing half-edges within $b_1, b_2$ and two crossing half-edges within $c$.
Then $C_1$ contains either (i) $(p_1,p)$ and $(q,q_1)$ or (ii) $(r_1,r)$ and $(p,p_1)$. W.l.o.g. assume that
case (i) holds. Then $G_1$ contains $15$ copies of $d$ and depending on which is more convenient either (i)
$5$ copies of $a$ and $5$ copies of that edge from $b_1, b_2$ which has greater weight or (ii) $5$ copies of each of
$b_1, b_2$. To facilitate the coloring of $G_1$, we modify $G'_1$ in such a way that we replace edges $(p,q), (q,q_1)$ with one edge
$(p,q_1)$. The edge $(p,q_1)$ has multiplicity $10$ in $G'_1$. $G'_1$ does not contain any of the remaining edges of $t$ or $c$. The restriction regarding this modification is such that if $G'_1$ contains $(q_1,p)$ and $mult(q_1,p)=14$, then we do not perform it and instead remove from $G'_1$ all edges of $t$ and $c$ as well as $(q,q_1)$.
Note that the required weight is equal to $W=4w(c)+5w(c)+5w(b_1)+5w(b_2)=9w(a)+9w(d)+5w(b_1)+5w(b_2)$.
We need to prove that $15w(d)+5w(a)+5\max\{w(b_1), w(b_2)\} \geq W$ and $15w(d)+5w(b_1) + 5w(b_2) \geq W$.
Let us prove the first one. Notice that $\max\{w(b_1), w(b_2)\} \geq \frac{w(t)-w(a)}{2}$. Suppose that $15w(d)+5w(a)+5\max\{w(b_1), w(b_2)\} < W$. This means that $4w(a)+ 5\max\{w(b_1), w(b_2)\} > 6w(d)$. Using $w(d)=\frac{w(t)}{2}+\epsilon$, we get that
$1.5 w(a)+2.5w(t) > 3w(t)+6\epsilon$. Hence $w(a)>\frac{w(t)}{3}+4\epsilon$. But then $w(d) < \frac{3}{2}w(a)$, which contradicts Lemma \ref{nontricky} point $1$.
\item $C_1$ contains two crossing half-edges within $(p,q), (r,p)$ and one whole edge within $c$.
Then $C_1$ contains either (i) $(p_1,p)$ and $(q_1,q)$ or (ii) $(p,p_1)$ and $(q,q_1)$. The case is similar to the case for the tricky
$3$-triangle. We do not modify anything in $G_1$.
W.l.o.g. assume that case (i) holds. Edges $(p_1, p_2), (q_1, q_2)$ of $C'_{max}$ are called the {\bf \em antennas} of $t$ and required to be diverse.
\item $C_1$ does not contain any half-edges within $c$ but contains a loop $e'_t$.
Since $10w(e'_t)=w(r,q)$, $G_1$ contains $5$ copies of $(r,q)$ (and not $4$ as usual). Such an edge $(r,q)$ is called a {\bf \em bow}.
\item $C_1$ contains all edges of $t$ and a loop $e_t$.
Then $10w(t)+4w(c)+10w(e_t)= 14w(q,r)+10(w(p,q)+w(r,p))+3w(r,q)$ and $G_1$ indeed contains $14$ copies of $(q,r)$, $10$ copies of each of
$(p,q), (r,p)$ and $3$ copies of $(r,q)$, which is a path-$20$-colorable subgraph.
\end{enumerate}
\subsection{Strange $2$-cycles}
Finally, we show what we do about strange $2$-cycles of $C_1$. Let $c=(u,v)$ be a strange $2$-cycle of $C_1$ and suppose $C_{max}$ contains $(u,v), (u',u), (v,v')$.
Since $c$ belongs to $C_1$, it means that $4w(v,u) \leq 6 w(v,v')$ or $4w(v,u) \leq 6 w(u',u)$. Suppose that the first case holds. Then $G_1$ will contain $6$ copies of $(v,u)$ (instead of $10$) and additionally $6$ copies of $(v,v')$. To facilitate the path-coloring of $G_1$, $G'_1$ is modified as follows. If $v'=u'$, then $G'_1$ contains a $2$-cycle $(u', u'')$, where $u''$ is a new vertex, the number of copies of $(u',u'')$ and $(u'',u')$ is equal to respectively $10$ and $5$, and $G'_1$ does not contain the vertices $u,v$ or their adjacent edges. If $v' \neq u'$, then $G'_1$ does not contain $u$ or $v$ or their adjacent edges and contains instead $4$ copies of the edge $(u',v')$, which is treated as though it belonged to $C'_{max}$.
\begin{theorem}
$w(G_1) \geq 10w(C_1)+4w(C_{max})$
\end{theorem}
The proof follows from the above discussion.
\section{Outline of algorithm}
Suppose we have computed a maximum weight cycle cover $C_{max}$ of a given complete directed graph $G=(V,E)$.
We will say that a cycle $c$ is {\bf \em hard} if it belongs to $C_{max}$ and each edge $e$ of $c$ satisfies $w(e)>\frac{3}{10} w(c)$. We are going to call cycles of length $i$, i.e. consisting of $i$ edges, {\bf \em $i$-cycles}. Also, $3$-cycles will be called {\bf \em triangles}.
Let us notice that only $2$-cycles and triangles can be hard. By $c=(v_1, v_2, \ldots, v_i)$ we denote an $i$-cycle consisting of edges $(v_1, v_2), \ldots, (v_{i-1},v_i), (v_i, v_1)$.
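For concreteness, the hardness test can be phrased as the short Python sketch below; the representation of a cycle as a list of directed edges and the weight map \texttt{w} are assumptions made for illustration.
\begin{verbatim}
def is_hard(cycle_edges, w):
    # A cycle of C_max is hard iff every edge weighs more than 3/10 of the
    # whole cycle; for a cycle of length >= 4 this is impossible, since
    # 4 * (3/10) > 1, so only 2-cycles and triangles can pass the test.
    total = sum(w[e] for e in cycle_edges)
    return all(w[e] > 0.3 * total for e in cycle_edges)
\end{verbatim}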
If $C_{max}$ does not contain a hard cycle, then we can easily build a traveling salesman tour of weight at least $\frac{7}{10} w(C_{max}) \geq \frac{7}{10} opt$.
If $C_{max}$ contains at least one hard cycle, we would like to obtain another cycle cover $C_1$, which does not contain any hard cycle from $C_{max}$ (i.e. for each hard cycle $c$ of $C_{max}$, not all edges of $c$ are contained in $C_1$), has weight at least $opt$ and enables us to build a tour of weight at least $\frac{7}{10} opt$. Let us remark here that computing a cycle cover of weight at least $opt$ and not containing any hard cycle is hard. For comparison, note that computing a maximum weight cycle cover without any $2$-cycles is NP-hard \cite{BM}. For this reason, we are going to relax the notion of a cycle cover and allow it to contain {\bf \em half-edges} - a half-edge of edge $(u,v)$ is informally speaking
``half of the edge $(u,v)$ that contains either a head or a tail of $(u,v)$''. We formally define half-edges and cycle covers allowing half-edges later. For now one may think of $C_1$ as a standard cycle cover.
To extract a tour of weight at least $\frac{7}{10} opt$ from $C_{max}$ and $C_1$, we are going to build a multigraph $G_1$ consisting of $4$ copies of $C_{max}$ and $10$ copies of $C_1$. More precisely, $G_1$ contains $4$ copies of each edge $e \in C_{max}\setminus C_1$, $10$ copies of each $e \in C_1\setminus C_{max}$ and $14$ copies of each $e \in C_1 \cap C_{max}$ (a sketch of this construction is given after the list below). We would like to color each edge of $G_1$ with one of $20$ colors so that edges of the same color form a collection of disjoint paths or, in other words, we would like to {\bf \em path-$20$-color} $G_1$ or {\bf \em path-color it with $20$ colors}. We may notice that this is not possible if
$C_1$ contains one of the following:
\begin{enumerate}
\item a $2$-cycle or triangle of $C_{max}$.
\item a triangle oppositely oriented to a triangle of $C_{max}$.
\item a $2$-cycle $c=(u,v)$ such that one of its edges belongs to $C_{max}$. This is because $G_1$ contains in this case $24$ edges connecting
$u$ and $v$ ($14$ in one direction and $10$ in the other) and thus we would need $24$ colors to path-color $G_1$.
\item a triangle $t=(p,q,r)$ such that a $2$-cycle $(q,r)$ belongs to $C_{max}$. In this case $G_1$ contains a subgraph consisting of $14$ copies of $(q,r)$, $10$ copies of each of $(p,q), (r,p)$ and $4$ copies of $(r,q)$, which is clearly non-path-$20$-colorable.
\end{enumerate}
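As announced above, the multiplicity map of $G_1$ can be assembled as in the following Python sketch, in which the two cycle covers are assumed to be given as sets of directed edges.
\begin{verbatim}
from collections import Counter

def build_G1(cmax_edges, c1_edges):
    # Multiplicities: 4 for edges only in C_max, 10 for edges only in C_1,
    # and 4 + 10 = 14 for edges lying in both cycle covers.
    mult = Counter()
    for e in set(cmax_edges):
        mult[e] += 4
    for e in set(c1_edges):
        mult[e] += 10
    return mult
\end{verbatim}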
We later show that if $C_1$ does not contain any of the above cycles, then $G_1$ built from $C_{max}$ and $C_1$ in the manner described above is always path-$20$-colorable. Ideally, we would like the enumerated cycles not to
occur in $C_1$ at all. However, not all of them are bad for our purposes, because sometimes it is easy to replace some edges of these cycles
with other ones, so that we obtain a path-$20$-colorable multigraph. For example, if $C_1$ contains a triangle $t=(p,q,r)$ such that a $2$-cycle $(q,r)$ belongs to $C_{max}$ and $w(r,q) \leq \frac{3}{2}w(q,r)$, we can replace $4$ copies of $(r,q)$ with $6$ copies of $(q,r)$ and
make the subgraph on $p,q,r$ path-$20$-colorable.
Also, this way we do not diminish the overall weight of the subgraph.
Below we define a set of cycles that are {\bf \em tricky}. The occurrence of any such cycle $c$ in $C_1$ means that $G_1$ is non-path-$20$-colorable and we cannot remedy this by local replacements of edges.
A cycle of $G$ oppositely oriented to $c$ is denoted as $opp(c)$. A cycle $c'$ is said to be a {\bf \em subcycle} of $c$ if every vertex of $c'$ belongs to $c$. For any multisubgraph $G'$ of $G$, by $mult_{G'}(e)$ we denote the number of copies of $e$ occurring in $G'$
and for any subset $V'$ of vertices of $G'$ by $E_{G'}(V')$ we denote the set of edges of $G'$ connecting any two vertices of $V'$.
Let $S=(V_S, E_S)$ be a multisubgraph of $G$. For any $v \in V_S$,
by $indeg_S(v), outdeg_S(v)$ we denote, respectively, the indegree and outdegree of $v$ in $S$. Let $G_1/S$ denote the multigraph $(G_1\setminus E_{G_1}(V_S)) \cup E_S$.
We say that a multisubgraph $S$ of $G$ is {\bf \em amenable} if (i) any path-$20$-coloring of $G_1\setminus S$ can be extended to path-$20$-coloring of $G_1/S$ and (ii) every vertex $v\in V_S$
satisfies $indeg_{G_1/S}(v) \leq 17$ or $outdeg_{G_1/S}(v)\leq 17$.
(The degrees are required to satisfy this condition, because we want to leave the possibility of adding $3$ copies of some edge $e \in G$ incident to any vertex $v$ to the multigraph $G_1/S$.)
We define a vertex/edge surrounding of $c$ (with respect to $C_{max}$) denoted $sur^v(c)$ and $sur^e(c)$, respectively.
For a triangle $t=(p,q,r)\in C_{max}$ we have $sur^v(t)=\{p,q,r\}$ and $sur^e(t)=\{(p,q), (q,r), (r,p)\}$. For a triangle $t=(p,q,r)$ such that the $2$-cycle $(q,r)$ belongs to $C_{max}$, we have $sur^v(t)=\{p,q,r\}$ and $sur^e(t)=\{(q,r), (r,q)\}$.
Let $c$ be a $2$-cycle $(u,v)$ and $(u',u), (v,v')$ two edges of $C_{max}$. Then $sur^v(c)=\{u',u,v,v'\}$ and $sur^e(c)=\{(u',u), (u,v), (v,v')\}$.
A cycle $c$ is {\bf \em tricky} if it belongs to type $1,3$ or $4$ enumerated above and no amenable subgraph on $sur^v(c)$ has weight at least $10w(c)+4w(sur^e(c))$. A triangle $t$ that belongs to $C_{max}$ is called a {\bf \em $3$-triangle} and a triangle of type $4$ - {\bf \em a $2$-triangle}.
We later prove that any two tricky $2$-cycles are vertex-disjoint. We observe also that for any tricky $3$-triangle $t$ it holds that if $t$ is not vertex-disjoint
with some other tricky cycle $c$, then $c$ is a sub-$2$-cycle of $t$. On the other hand, tricky $2$-triangles do not even have to be edge-disjoint with other tricky $2$-triangles. We require that $C_1$ does not contain any tricky $2$-cycle or tricky $3$-triangle or a triangle oppositely oriented to a tricky $3$-triangle. As for tricky $2$-triangles, we are going to forbid only a subset of them in $C_1$.
Let $t$ be a tricky $2$-triangle $t=(p,q,r)$ such that $c=(q,r)$ is a $2$-cycle of $C_{max}$. We call $p$ its {\bf \em t-point} and the $2$-cycle $(q,r)$ its {\bf \em t-cycle}. We have already observed that $w(r,q)> \frac{3}{2}w(q,r)$, because otherwise we could take $4$ copies of $c$, $10$ copies of $t$ and replace in it $4$ copies of $(r,q)$ with $6$ copies of $(q,r)$, obtaining thus an amenable subgraph. Let $\Delta(c)=w(r,q)-\frac{3}{2}w(q,r)$. To the $2$-cycle $c$ we assign weight $w'(c)=w(q,r)+\Delta(c)$. Also, by $\kappa(t)$ we denote $\frac{w(r,q)}{10}$. Among the set of all tricky $2$-triangles we are going to distinguish a set $R$ of its representatives and require that $C_1$ does not contain any tricky triangle from $R$ or if it does, then every such triangle $t$ is {\bf \em diluted}, by which we mean that in $C_1$ it has weight equal to $w(t)-\kappa(t)$. We explain below how this is possible.
To identify the set $R$, we construct a bipartite graph $H=(C \cup P, E_t)$, where $C$ contains all t-cycles and $P$ all t-points. An edge $(c,p)$ belongs to $E_t$ iff there exists a tricky triangle $t$ such that $c$ is its t-cycle and $p$ its t-point. Let $T_1 \cup T_2 \cup \ldots \cup T_k$ be a partition of the set of vertices of $C$ such that t-cycles belonging to the same $T_i$ have equal weight $w'$ and for each $i<j$ and any $c_i \in T_i, c_j \in T_j$, it holds that $w'(c_i)>w'(c_j)$. We assign {\em ranks} to the edges of $H$ in the following manner: any edge of $E_t$ incident to a vertex of $T_i$ has rank $i$.
We are going to compute a {\bf \em rank-maximal} matching $N$ of $H$, which is a matching of $H$ containing a maximum number of rank one edges and subject to this condition a maximum number of rank two edges and so on. A rank-maximal matching can be computed in polynomial time
\cite{rank}.
One can observe that
\begin{fact}
Any rank-maximal matching of $H$ is a maximum matching of $H$.
\end{fact}
As the set $R$ representing tricky $2$-triangles we set tricky triangles corresponding to the edges of $N$.
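For illustration, the following Python sketch computes a rank-maximal matching of $H$ by the folklore reduction to a maximum-weight matching with geometrically scaled edge weights; this is not the (faster) combinatorial algorithm of \cite{rank}, and the use of \texttt{networkx} is an assumption made only for the sketch.
\begin{verbatim}
import networkx as nx

def rank_maximal_matching(triples):
    # triples: (t_cycle, t_point, rank) with rank >= 1.  A rank-i edge gets
    # weight n^(r_max - i), so a single edge of rank i outweighs any
    # matching's worth of edges of strictly worse ranks; a maximum-weight
    # matching is then rank-maximal.
    H = nx.Graph()
    for c, p, rank in triples:
        H.add_edge(('C', c), ('P', p), rank=rank)
    n = H.number_of_nodes()
    r_max = max(d['rank'] for _, _, d in H.edges(data=True))
    for _, _, d in H.edges(data=True):
        d['weight'] = n ** (r_max - d['rank'])
    return nx.max_weight_matching(H)
\end{verbatim}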
Let $t=(p,q,r)$ be a tricky $2$-triangle with a t-cycle $c=(q,r)$. Observe that if $C_1$ contained a diluted $t$, i.e. with weight in $C_1$ decreased by $\kappa(t)$, then $10(w(t)-\kappa(t))+4w(c)$ has the same weight as $14$ copies of $(q,r)$, $10$ copies of each of $(p,q), (r,p)$ and $3$ copies of $(r,q)$, which forms a path-$20$-colorable subgraph on $p,q,r$. Hence, a diluted triangle of $R$ can be allowed in $C_1$.
To be able to compute a cycle cover $C_1$ of weight at least $opt$ and which does not contain any problematic cycle,
we are going to allow it to contain {\em half-edges}, defined as follows. Let $\tilde G =(\tilde V, \tilde E)$ be a graph obtained from $G$ by splitting
each edge $(u,v)\in E$ with a vertex $x_{(u,v)}$ into two edges $(u,x_{(u,v)})$ and $(x_{(u,v)},v)$ having weights such that $w(u,x_{(u,v)})+ w(x_{(u,v)},v)=w(u,v)$. Each of the edges $(u, x_{(u,v)}), (x_{(u,v)},v)$ is called
{\bf \em a half-edge (of $(u,v)$)}. By saying that an edge $(u,v)$ of $G$ belongs to a subset $\tilde C \subseteq \tilde E$, we will mean that both half-edges of $(u,v)$ belong to $\tilde{C}$.
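The construction of $\tilde G$ can be sketched as follows; splitting each weight evenly is just one admissible choice, since the definition only requires the two halves to sum to $w(u,v)$.
\begin{verbatim}
def split_into_half_edges(w):
    # w maps each directed edge (u, v) of G to its weight.  Each edge is
    # split by a fresh vertex x_{(u,v)} into two half-edges.
    w_tilde = {}
    for (u, v), weight in w.items():
        x = ('x', u, v)
        w_tilde[(u, x)] = weight / 2.0            # half containing the tail
        w_tilde[(x, v)] = weight - weight / 2.0   # half containing the head
    return w_tilde
\end{verbatim}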
We say that $\tilde C \subseteq \tilde E$ does not contain a cycle $c$ of $G$, if $\tilde C$ does not contain all edges of $c$, i.e., there exists at least one edge $e$ of $c$ such that at least one half-edge of $e$ does not belong to $\tilde C$.
To deal with tricky triangles from the set $R$, we need to further extend the graph $\tilde G$.
For each tricky triangle $t \in R$, we add two new vertices $v_t, v'_t$ and two loops: $e_t$ incident to $v_t$ and $e'_t$ incident to $v'_t$
with weights $w(e_t)=-\kappa(t), w(e'_t)=\kappa(t)$.
We call this graph $\hat G=(\hat V, \hat E)$. (Note that this is a supergraph of $\tilde G$.)
The idea behind these new loops is as follows. For each tricky triangle $t =(p,q,r) \in R$, $C_1$ either does not contain $t$ or it does contain $t$ and also a loop $e_t$. This implies that the weight of such $t$ in $C_1$ can be viewed as though it were equal to $w(t)- \kappa(t)$, i.e., it means that $t$ is diluted, which enables the coloring of the subgraph on $p,q,r$. By saying that $C_1$ contains a diluted $t$, we mean that $C_1$ contains $t$ and also a loop $e_t$.
\begin{definition}\label{rel2}
A {\bf \em relaxed cycle cover improving $C_{max}$} is a subset $\hat C\subseteq \hat E$ such that
\begin{itemize}
\item[(i)]
each vertex in $V$ has exactly one outgoing and one incoming half-edge in $\hat C$;
\item[(ii)] for any tricky $2$-cycle or $3$-triangle $c$, $\hat C$ does not contain $c$ or $opp(c)$.
\item [(iii)] for any tricky $2$-triangle $t\in R$, $\hat C$ either does not contain $t$ or contains a diluted $t$.
\item[(iv)] if $\hat C$ contains only one half-edge of edge $(u,v)$, then $(u,v)$ belongs to a tricky $2$-cycle, $3$-triangle or $2$-triangle of $R$ or to a triangle oppositely oriented to a tricky $3$-triangle.
\end{itemize}
\end{definition}
A relaxed cycle cover $C$ improving $C_{max}$, or a relaxed cycle cover $C$ for short, consists of directed cycles and/or directed paths. A directed cycle of $C$ corresponds to a directed cycle of the original graph $G$
and a directed path ends and begins with a vertex in $\tilde V \setminus V$.
The outline of a $\frac{7}{10}$-approximation algorithm for Max ATSP is as follows.
\begin{enumerate}
\item Compute a maximum weight cycle cover $C_{max}$ of $G$.
\item If $C_{max}$ does not contain a hard cycle, extract from $C_{max}$ a set ${\cal P}$ of vertex-disjoint paths of weight at least $\frac{7}{10}w(C_{max})$ and go to Step \ref{step}.
\item Compute a relaxed cycle cover $C_1$ improving $C_{max}$ with weight $w(C_1) \geq opt$.
\item Compute a multigraph $G_1$ with weight $w(G_1) \geq 4w(C_{max})+10 w(C_1)$ and path-$20$-color $G_1$ omitting non-path-$20$-colorable subgraphs. If the whole $G_1$ is path-colored, go to Step \ref{step}.
\item Compute exchange sets $E_1, F_1$ such that $G_2=G_1 \setminus E_1 \cup F_1$ is path-$20$-colorable. Extend the existing coloring of $G_1$ to that of $G_2$.
\item \label{step} Extend a set ${\cal P}$ of vertex-disjoint paths of weight at least $\frac{7}{10}opt$ to a tour of $G$.
\end{enumerate}
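A schematic rendering of these six steps in Python is given below; every subroutine name is a placeholder for the corresponding step and does not denote actual code.
\begin{verbatim}
def max_atsp_7_10(G):
    C_max = maximum_weight_cycle_cover(G)                  # step 1
    if not any(is_hard(c, G.w) for c in C_max.cycles()):
        paths = extract_paths(C_max)                       # step 2
    else:
        C_1 = relaxed_cycle_cover_improving(G, C_max)      # step 3
        G_1 = build_multigraph(C_max, C_1)                 # step 4
        coloring, uncolored = path_20_color(G_1)
        if uncolored:                                      # step 5
            E_1, F_1 = exchange_sets(G_1, uncolored)
            coloring = extend_coloring(G_1, E_1, F_1, coloring)
        # the heaviest of the 20 color classes is a set of disjoint paths
        # of weight >= w(G_1)/20 >= (4 + 10) * opt / 20 = (7/10) * opt
        paths = heaviest_color_class(coloring)
    return patch_into_tour(G, paths)                       # step 6
\end{verbatim}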
\section{Path-coloring in the presence of tricky triangles} \label{pathcolmul}
As previously, i.e., in Section \ref{pathcol}, we would like to take advantage of Observation \ref{obs} and color rays in portions by coloring all rays of one cycle or path in one step. Here, however, the situation is somewhat more complicated because of b-edges. Consider an edge $e=(u,v)$ belonging to some path or cycle $s$ of $C_1$. Suppose that it is coincident with two rays
$r=(u,u'), r'=(v',v)$ of this path or cycle, thus $r$ is an outray and $r'$ an inray of $s$. If $r$ is a b-edge, meaning that $mult(r)=10$, then it is impossible to color $r$ and $r'$ with disjoint sets of colors of ${\cal K}$. This is because both $r$ and $r'$ have to be diverse with $e$. However, $mult(e)=10$, which implies that after the coloring it must hold that $col(e) \cup col(r)={\cal K}$, hence $col(r')$ must be a subset of $col(r)$. To deal efficiently with the coloring of b-edges, we divide the set of colors $col(r)$ assigned to any b-edge
$r$ into $r$'s {\bf \em own colors}, denoted $col'(r)$ and {\bf \em colors inherited} by $r$, denoted $col''(r)$. If in the above example,
$r'$ is not a b-edge, then colors inherited by $r$ are such that $col''(r)=col(r')$ and $r'$ is called an {\bf \em ally} of $r$.
Below we define {\em allies} for all b-edges. They help to control which colors are inherited by which edges. For every b-edge, its inherited colors come from its ally.
The division is such that for any $r, r' \in C_{max}$ coincident with an edge $e$ of $C_1$ such that $r$ is a b-edge, it holds that
(i) $col''(r) \supset col(r')$, if $r'$ is not a b-edge, (ii) $col''(r) \supset col'(r')$, otherwise and (iii) $|col'(r)|=|col''(r)|=5$ (which holds after $r$ is fully colored). For example, if $e$ is coincident with two b-edges $r$ and $r'$, then it holds that
$col(r)=col(r')$ and $col'(r)=col''(r'), \ col'(r')=col''(r)$, thus half of the set $col(r)$ are $r$'s own colors and half are inherited from $r'$.
Now, we define allies. Let $r=(u,v)$ be a b-edge. It is coincident with one edge of $C_1$.
Suppose that $r$ is coincident with an edge $e_1=(v_1,v)$ of $C_1$. Then there exists an edge $r'_1=(v_1, v'_1)$ belonging to $C'_{max}$. We call $r'_1$ an {\bf \em ally} of $r$ and denote as $al(r)=r'_1$.
(If such an edge $r'_1$ does not exist, then we can add an artificial edge of this form. In reality it means that we have more flexibility in coloring $r$.) The situation is symmetric if $r$ is coincident with an edge $e_2=(u,u_1)$ of $C_1$. Then the edge $r'_2=(u_1, u'_1)\in C'_{max}$
is an {\bf \em ally} of $r$ and denoted as $al(r)=r'_2$.
Let us now examine what methods we can use to ensure that any b-edge is safe with respect to inherited colors. To this end, we appropriately define what it means for a halfy triangle to be {\bf \em blocked} and {\bf \em cooperative}. As for colors owned by b-edges we can apply Observation \ref{obs} to ensure their safety.
We say that two antennas $a_1, a_2$ of a halfy cycle $c$ of $C_1$ are {\bf \em diverse} if the sets of its own colors are disjoint, i.e., if $col'(a_1) \cap col'(a_2)=\emptyset$.
\subsection{Halfy triangles}
Let $t=(p,q,r)$ be a halfy triangle consisting of edges $a=(p,q), b=(q,r), c=(r,p)$ and $(p_1,p), (q_1,q), (q,q_2), (r_1,r)$ edges of $C_1$ and $(r_1, r'_1), (q''_2,q_2), (q_1, q'_1), (p_1, p'_1)$ edges of $C'_{max}$.
Suppose that $G'_1$ contains $10$ copies of $d=(r,q)$ and $5$ copies of $b$ and $C_1$ contains edges $(q_1,q), (q,q_2), (r_1, r)$. We call $b$ an {\bf \em s-edge} of $t$. If $b'=(r_1, r'_1) \in C'_{max}$ does not belong to a halfy $2$-cycle of $C_1$, then $b'$ is said to be an {\bf \em outer antenna} of $t$ and $d$ an {\bf \em inner antenna} of $t$. Since $d$ is a b-edge, it has an ally $d'$ and $5$ colors assigned to it are inherited, i.e., $col''(d) \supset col'(d')$.
To be able to guarantee that $d$ is safe with respect to each inherited color $k \in col''(d)$, we require that the antennas of $t$ are diverse (i.e. that $col'(d)$ and $col'(b')$ are disjoint) and we say that $t$ is {\bf \em blocked} if this condition is not satisfied.
(The situation is symmetric if $t$ contains $b=(r,q)$, $G'_1$ contains $10$ copies of $d=(q,r)$ and $5$ copies of $b$ and $C_1$ contains edges $(q_1,q), (q,q_2), (r,r_1)$.)
Let $Z(t)$ denote a subset of $col''(d) \setminus col'(b')$ such that $k \in Z(t)$ if $d$ is not already safe w.r.t. $k$ at the moment of
coloring $d$. $Z(t)$ may of course contain all colors of $col''(d) \setminus col'(b')$.
Notice that if we color $b$ in such a way that we assign $|Z(t)|$ colors of $col'(b') \setminus col''(d)$ to $b$ (i.e. $|(col'(b') \setminus col''(d) )\cap col(b)| \geq |Z(t)|$), then $flex^+(r_1,r)\geq |Z(t)|$, which means
that we can forbid any color of $Z(t)$ on $(r_1, r)$ and hence are able to color $(r_1,r)$ so that no color of $Z(t)$ occurs on it (no color of $col'(b')$ occurs on $(r_1,r)$ anyway). This is how we are going to ensure that $d$ is safe with respect to inherited colors. Observe that since the outer antenna $b'$ is required to be diverse with the inner antenna $d$ (i.e. $col'(d) \cap col'(b')=\emptyset$), there is no risk of assigning to $b$ any color already assigned to $d$ and hence creating a monochromatic $2$-cycle.
To summarise, we say that edge $b$ is {\bf \em shadowed} or that we {\bf \em shadow} $b$ if (i) it is colored in such a way that $|(col'(b') \setminus col''(d) )\cap col(b)| \geq |Z(t)|$
and if (ii) no color of $Z(t)$ occurs on $(r_1, r)$. Also, if $b'$ is a b-edge, then it views the edge $b$ as though it were colored with
colors of $Z(t)$ and treats them as $col'(b)$. Thus, if $b'$ is a b-edge, then $col''(b') \supseteq Z(t)$.
For example, suppose that $d'$ and $b'$ are already colored and $col(d')=\{1,2,3,4,5\}$ and $col(b')=\{6,7,8,9,10\}$ and we want to color $d$ and $b$ (because we are processing a path or cycle of $C_1$ containing edge $(q_1,q), (q, q_2)$). Because $d$ is a b-edge and $d'$ is its ally, the colors $col''(d)$ inherited by $d$ are $\{1,2,3,4,5\}$. We assign own colors of $d$ to $d$ so that they are disjoint with $col(b')$, for example, $col'(d)=\{11,12,13,14,15\}$. Therefore, $col(d)=col'(d) \cup col''(d)$. Next, we shadow $b$. Since, $Z(t)=\{1,2,3,4,5\}$, we assign all colors of $b'$ to $b$, thus
$col(b)=\{6,7,8,9,10\}$. Also, we assign the colors of ${\cal K} \setminus (Z(t) \cup col(b))$ to $(r_1,r)$. Hence, $col(r_1,r)=\{11, 12, \ldots, 20\}$.
This means that $d$ is safe w.r.t. each color of $col''(d)$.
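The arithmetic of this example can be checked mechanically, as in the following Python snippet, where colors are represented as sets and all concrete values are the ones fixed above.
\begin{verbatim}
K = set(range(1, 21))                    # the palette of 20 colors
col_d_ally = {1, 2, 3, 4, 5}             # col(d'), the ally of d
col_b_outer = {6, 7, 8, 9, 10}           # col(b'), the outer antenna

col_d = {11, 12, 13, 14, 15} | col_d_ally  # own colors + inherited colors
Z_t = col_d_ally                           # colors d must be guarded against
col_b = set(col_b_outer)                   # shadowing: b takes the colors of b'
col_r1_r = K - (Z_t | col_b)               # (r_1, r) avoids all of Z(t)

assert col_r1_r == set(range(11, 21))
assert not (Z_t & col_r1_r) and not (col_d & col_b)
\end{verbatim}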
We say that such a halfy triangle $t$ is {\bf \em cooperative} if
\begin{itemize}
\item $t$ is not blocked,
\item its s-edge is shadowed,
\item $b$ and $d$ are diverse.
\end{itemize}
Suppose next that $t=(p,q,r)$ is a halfy triangle such that $G'_1$ contains $10$ copies of each of $a=(p,q),c=(r,p)$, $5$ copies of $b=(q,r)$ and $C_1$ contains edges $(p_1, p), (q_1,q), (q,q_2), (r_1, r)$. (The situation is symmetric when $C_1$ contains $(p, p_1), (q_1,q), (q,q_2), (r, r_1)$.) We call $b$ an {\bf \em s-edge} of $t$, $a$ the {\bf \em main b-edge} of $t$ and $c$ the {\bf \em secondary b-edge} of $t$. Both $a$ and $c$ are b-edges and their allies are denoted as $a'$ and $c'$, respectively. An edge $b'=(r_1, r_2) \in C'_{max}$ is said to be an {\bf \em ally} of the s-edge $b$.
If $c'$ does not belong to a halfy $2$-cycle of $C_1$, then $c'$ is said to be an {\bf \em outer antenna} of $t$ and $a$ an {\bf \em inner antenna} of $t$. A triangle $t$ is said to be {\bf \em blocked} if its antennas are not diverse. Notice that since $c'$ is an ally
of $c$, the fact that antennas of $t$ are diverse implies that $col''(c) \cap col'(a) =\emptyset$.
Whenever possible, we color $c$ in such a way that each color $k$ inherited by $a$ ($k \in col''(a)$)
is assigned to $c$. More precisely,
for any inherited color $k$ assigned to $a$, if $k$ is not already assigned to $c$ ($k \in col''(a) \setminus col''(c)$),
we assign it to $col'(c)$ unless it is forbidden on $c$, because $c$ is an antenna of some halfy cycle of $C_1$. (It can be proved that
if a color $k$ is forbidden on $c$, then $a$ is safe w.r.t. $k$.)
Additionally, if $col'(c) \neq col''(a)$, then we ensure that $|col(a) \cap col(c)|\leq 5$. (If $col'(c) = col''(a)$, then $|col(a) \cap col(c)|\leq 5$ always holds.)
For each color $k \in col(a) \cap col(c)$, we are going to guarantee that both $a$ and $c$ are safe w.r.t. $k$ by {\em shadowing} $b$ similarly as in the case above and as explained below.
Let $Z(t)=\{k\in (col(a) \cap col(c)) \setminus col'(b'): k$ is such that $a$ is not safe w.r.t. $k$ at the moment of coloring $a\}$.
We want to ensure that $a$ and $c$ are safe with respect to each color $k \in Z(t)$. To this end, we color $b$ in such a way that $b$
is assigned at least $|Z(t)|$ colors of $col'(b')$, i.e., $|col'(b') \cap col(b)| \geq |Z(t)|$. Then we can color $(r,r_1)$ so that no color of $Z(t)$ occurs on it, i.e., $b$ is {\bf \em shadowed}.
To sum up, we say that a halfy triangle $t$ is {\bf \em cooperative} if
\begin{itemize}
\item $t$ is not blocked and $|col(a) \cap col(c)| \leq 5$,
\item its s-edge is shadowed,
\item no color $k$ occurs on every edge of $t$.
\end{itemize}
Let $e(t)$ be a b-edge of a halfy triangle $t$; the s-edge contained in $t$ is then said to be {\bf \em associated} with $e(t)$.
\subsection{Algorithm}
When coloring rays of a cycle or path $s$ of $C_1$, we may not be able to color b-edges and s-edges incident to $s$ fully, because their allies have not been colored yet. For this reason, we introduce the notion of precoloring. To {\bf \em precolor} an edge $r$ means to:
\begin{itemize}
\item color $r$, if $r$ is neither a b-edge nor an s-edge,
\item color $r$ with $5$ colors denoted as $col'(r)$, if $r$ is a b-edge but not a secondary b-edge,
\item leave $r$ uncolored, if $r$ is a secondary b-edge or an s-edge.
\end{itemize}
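In code, the precoloring dispatch could look as follows; the edge attributes and the color-picking helper are assumed, illustrative interfaces.
\begin{verbatim}
def precolor(r, pick_colors, col):
    # pick_colors(r, k) is assumed to return k admissible colors for r.
    if r.is_s_edge or r.is_secondary_b_edge:
        return                           # left uncolored for now
    if r.is_b_edge:
        col[r] = pick_colors(r, 5)       # only the 5 own colors col'(r)
    else:
        col[r] = pick_colors(r, r.mult)  # colored fully right away
\end{verbatim}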
Below we show that we can guarantee that each ray of a given cycle $c$ is safe by using a similar approach as previously, where we colored
inrays and outrays with disjoint sets of colors. The modification consists in the fact that for b-edges, we only require that colors owned by them, i.e., sets $col'(r)$ obey this partition.
\begin{lemma} \label{basekol1}
Let $c$ be a cycle of $C_1$ such that each of its incident rays is uncolored or safe. Then we are able to precolor the uncolored rays of $c$ in such a way that, provided each halfy triangle incident to $c$ is cooperative, each ray of $c$ is safe.
\end{lemma}
\noindent{\bf Proof.~}
We partition ${\cal K}$ into $Z^-(c)$ and $Z^+(c)$. We would like to color each uncolored inray of $c$ with colors of $Z^-(c)$ and each uncolored outray of $c$ with colors of $Z^+(c)$. For every uncolored ray $r$ such that $mult(r) \leq 5$ and which is not an s-edge, this is indeed how we proceed.
For every ray $r$ of $c$, which is a b-edge, we assign $5$ colors of either $Z^+(c)$ or $Z^-(c)$ to $col'(r)$, depending on whether $r$ is an inray or an outray. Note that a ray of a cycle can never be a secondary b-edge, because both endpoints of a secondary b-edge belong to paths of $C_1$. The other colors assigned to $r$ are inherited from the ally of $r$.
As for any s-edge $e$, we have already observed that it can only belong to a monochromatic cycle $c$ that is a (sub)cycle of a halfy triangle $t$ containing $e$. If $t$ is cooperative, then this is guaranteed not to happen.
Let $k$ be a color assigned to some ray $r$ of $c$, also possibly at some later point after the precoloring of $c$. If $r$ is an s-edge, then we have already shown above that under the condition the halfy triangle $t$ containing $r$ is cooperative, $r$ is safe. Assume next that $r$ is not an s-edge. If $r$ is an inray and $k \in col'(r)$, then if $r$ was precolored before processing $c$, it is safe by the assumption. If $r$ was precolored during processing $c$, then $k \in Z^-(c)$. Any potential monochromatic cycle $c'$ containing $r$ must contain some outray $r'$ of $c$. If any outray $r'$ of $c$ is colored $k$, then $r'$ was either colored $k$ before we started precoloring rays of $c$ or $r'$ is an s-edge or $k$ belongs to colors inherited by $r'$, i.e., $k \in col''(r')$. In all these three cases, however, $r'$ is guaranteed to be safe w.r.t. $k$ under the condition that a halfy triangle containing $r'$ is cooperative. This means that $r$ is safe
under the condition that each halfy triangle incident to $c$ is cooperative. \hfill $\Box$\\[.1ex]
We say that an edge is {\bf \em conditionally safe} if it is guaranteed not to belong to a monochromatic cycle under the condition
that all halfy triangles are cooperative.
We say that $G'_1$ is blocked if there exists a cycle, a halfy $2$-cycle or tricky triangle of $C_1$ that is blocked. Otherwise, $G'_1$ is unblocked.
To {\em \bf process} a cycle or path $s$ of $C_1$ means to precolor all its rays in such a way that all of them are conditionally safe
and $G'_1$ is unblocked, assuming that before starting to process this cycle or path, $G'_1$ is safe and unblocked.
{\scriptsize
\noindent Algorithm Color7 \\
\vspace{0cm} {\bf while} there exists an unprocessed cycle of $C_1$ without any incident b-edge\\
\vspace{-0.2cm} \hspace{2cm} $c \leftarrow$ an unprocessed cycle of $C_1$ without any incident b-edge with a minimal number of uncolored rays;\\
\vspace{-0.2cm}\hspace{2cm} process $c$;\\
\vspace{0cm} process all cycles of $C_1$ with an incident b-edge; \\
\vspace{0cm} {\bf while} there exists an unprocessed path of $C_1$ \\
\vspace{-0.2cm} \hspace{2cm} $p \leftarrow$ any unprocessed path of $C_1$;\\
\vspace{-0.2cm}\hspace{2cm} process $p$;\\
\vspace{0cm} color fully all b-edges and s-edges; \\
\vspace{-0.2cm} color the remaining uncolored edges in such a way that each of them is safe and $G'_1$ does not become blocked.\\
}
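The control flow of Algorithm Color7 can be summarized by the following Python sketch, in which the predicates and \texttt{process} stand for the processing steps whose feasibility is established in the lemmas below.
\begin{verbatim}
def color7(C_1, G1_prime):
    plain = [c for c in C_1.cycles() if not has_incident_b_edge(c)]
    while any(not c.processed for c in plain):
        c = min((c for c in plain if not c.processed),
                key=num_uncolored_rays)
        process(c)                        # Lemmas "min1" and "min2"
    for c in C_1.cycles():
        if has_incident_b_edge(c):
            process(c)                    # Lemma "bcycle"
    for p in C_1.paths():
        process(p)
    fully_color_b_and_s_edges(G1_prime)
    color_remaining_edges_safely(G1_prime)
\end{verbatim}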
By $\beta(c)$ we denote the number of bows incident to $c$.
\begin{lemma} \label{blocked2}
Let $c$ be a cycle of $C_1$. Then $\chi(c)=kol(c)+flex^0(c)+flex^+(c) +blank(c) \geq kol(c)+2\lambda(c) -\beta(c)+flex^+(c)$.
If the number of uncolored rays and chords of $c$ is equal to $r$, then $\chi(c)\geq kol(c)+2\lambda(c)-\beta(c)+4r +flex^+(c)$.
As a consequence:
\begin{enumerate}
\item A cycle of length greater than $2$ with one uncolored chord or two uncolored rays cannot be blocked.
\item A $2$-cycle is blocked only if some two of its non-complementary rays $r_1, r_2$ are not diverse.
\end{enumerate}
\end{lemma}
\noindent{\bf Proof.~}
Let $e$ be an edge of $c$. If none of its coincident edges of $C'_{max}$ is colored, then $flex(e)=10$. If exactly one of its coincident edges
of $C'_{max}$ is colored, then $flex(e)=6$. Otherwise, $flex(e) \geq \max\{10-kol(c),2\}$.
\hfill $\Box$\\[.1ex]
We now present an extended version of Lemma \ref{poss}. The main difference comes from the fact that $G'_1$ may contain bows, which occurs in point $5$. Recall that a bow is an edge
of multiplicity $5$ contained in a $2$-cycle of $C'_{max}$.
\begin{lemma} \label{poss1}
Suppose that at step $S$ we want to color a set $U$ of uncolored edges, where $U$ consists of either (i) a subset of uncolored rays of a cycle $c$ of $C_1$ or (ii) an antenna of a halfy cycle $c$ of $C_1$.
Then, assuming that $G'_1$ is unblocked, there always exists a number $\Delta'(c)$ and a set $Z \subseteq {\cal K}$ such that by using $\Delta'(c)$ different colors of $Z$ on $U$, we guarantee that $c$ does not become blocked. Depending on additional conditions, $\Delta'(c)$ and $|Z|$ can be expressed as the following functions of a certain $\Delta(c)\leq \Delta'(c)$:
\begin{enumerate}
\item [0]. If $c$ has at least two chords or one chord and $\lambda(c)>3$, then $\Delta'(c)=0$. In the remaining points we assume that $c$ has no chords or one chord and $\lambda(c)=3$.
\item If $c$ is a $2$-cycle with $r$ colored rays, then $\Delta'(c)=mult(U)$ and $|Z| =20-4r+\rho(c)$.
\item If $c$ has one uncolored ray, no chords and $\lambda(c)>2$, then $\Delta'(c)=4-\Delta(c) \geq 0$, where $\Delta(c)=flex^+(c)+kol(c)-10$ and $|Z|\geq 12-\Delta(c)$.
\item Assume that $c$ has exactly two uncolored incident edges of $C_{max}$ and $\lambda(c)>2$. Then $|Z|\geq 12-\Delta(c)+\rho(c)$, where $\Delta(c)=flex^+(c)+kol(c)-8$. If we color only one ray of $c$, then $\Delta'(c)=2-\Delta(c)$, otherwise $\Delta'(c)=6-\Delta(c)$.
\item Assume that $c$ has at least $u \geq 3$ uncolored rays and $\lambda(c)>2$. Then $|Z|\geq 20-flex^+(c)-kol(c)+\rho(c)$.
If we color $u-2$ rays of $c$, then $\Delta'(c)=0$; if $u-1$, then $\Delta'(c)=\max\{10- flex^+(c)-kol(c), 0\}$; if we color all $u$ rays of $c$, then $\Delta'(c)=\max\{14- flex^+(c)-kol(c), 0\}$.
\item If $U$ consists of an antenna of $c$, then $\Delta'(c)=4$ and $|Z|\geq 15$.
\end{enumerate}
\end{lemma}
\noindent{\bf Proof.~}
If $c$ has no chords and $r$ uncolored rays, then $\chi(c)\geq kol(c)+2\lambda(c)-\beta(c)+4r +flex^+(c)$.
{\em Case: $c$ has exactly one uncolored ray and no chords.} \\
\noindent We can notice that if $c$ has exactly one uncolored ray $e$ incident to a vertex $v \in c$, then $e$ cannot belong to a $2$-cycle $c'$ of $C'_{max}$, because both edges of any $2$-cycle of $C'_{max}$
are colored during the same step. Thus at any point of the execution of Algorithm Color7 either both edges of such a $2$-cycle are uncolored or both are colored.
This means that $e$ is not a bow and also that $v$ has no incident bow. Therefore, $\beta(c) \leq \lambda(c)-1$ and $mult(e)=4$.
Thus $2\lambda(c)-\beta(c)$ is minimum when $\beta(c) = \lambda(c)-1$ and $\lambda(c)=3$ and amounts to $4$.
By Lemma \ref{ost} there exists a set $Z$ of colors such that using any color of $Z$
increases $flex^+(c)+kol(c)$.
To guarantee that $c$ does not become blocked it suffices to use $mult(e)-\min\{0, \chi(c)-20\}$ colors of $Z$. Let us estimate $\chi(c)-20$.
We have $\chi(c)-20= kol(c)+flex^+(c)+2\lambda(c)-\beta(c)+4-20 \geq kol(c)+flex^+(c)-12$. Let $\Delta(c)=\min\{0,kol(c)+flex^+(c)-12\}$.
By Lemma \ref{ost} the size of $Z$ is at least $24-kol(c) \geq 12+12-kol(c)-flex^+(c)=12 -\Delta(c)$.
{\em Case: $c$ has exactly two uncolored incident edges $e, f$ of $C'_{max}$.} \\
Suppose first that the currently colored rays of $c$ do not contain a bow. Then $kol(c)+flex^+(c)\geq 8$ and $flex^0(c)+blank(c)\geq 14$, because either $e,f$ do not contain a bow, and then $blank(c)=8$ and $flex^0(c)\geq 6$, or $e,f$ contain a bow, and then $blank(c)=9$ and $flex^0(c)\geq 5$. Hence $\chi(c) \geq 22$, because $\chi(c)=kol(c) +flex^+(c)+flex^0(c)+blank(c)$.
Thus, to guarantee that $c$ does not become blocked, it suffices to use $mult(U)-2-\min\{0, \chi(c)-22\}$ colors of $Z$. We define $\Delta(c)$ as follows: $\chi(c)-22 \geq kol(c)+flex^+(c)+flex^0(c)+blank(c)-22 \geq kol(c)+flex^+(c)-8=\Delta(c)$.
Suppose next that the currently colored rays of $c$ contain a bow. Then $kol(c)+flex^+(c)\geq 13$ (because we keep the invariant that if there exists an edge $e$ of $c$ with both rays colored and an incident bow, then $kol(c)+flex^+(c)\geq 13$ holds already among the three rays of $c$ incident to $e$)
and $flex^0(c)+blank(c)\geq 12+x$,
where $x=1$ if $e,f$ are coincident with the same edge of $c$ and $x=0$, otherwise.
Hence, $\chi(c) \geq 25+x$.
For the case when $|U|=1$, we can color the ray of $U$ in any way, since currently $\chi(c)\geq 25$, hence $\Delta'(c)=0$. For the case when $|U|=2$, it suffices to use $mult(U)-5-x - \min\{0, \chi(c)-25\}$ colors of $Z$. We define $\Delta(c)$ as follows: $\chi(c)-25 =kol(c)+flex^+(c)+flex^0(c)+blank(c)-25 \geq kol(c)+flex^+(c)-12-x=\Delta(c)$.
By Lemma \ref{ost} the size of $Z$ is at least $20-kol(c)$, when $e$ and $f$ are coincident with the same edge of $c$.
If the currently colored rays of $c$ do not contain a bow, then $20-kol(c) \geq 12+8-kol(c)-flex^+(c)=12 -\Delta(c)$. Otherwise,
$20-kol(c) \geq 7+13-kol(c)-flex^+(c)=7 -\Delta(c)$.
When $e$ and $f$ are not coincident with the same edge of $c$, then the size of $Z$
is at least $24-kol(c)$, which is greater than or equal to $16 -\Delta(c)$ if $c$ has no incident colored bow, and greater than or equal to $12-\Delta(c)$
otherwise. \hfill $\Box$\\[.1ex]
\begin{lemma} \label{min1}
Let $c$ be an unprocessed cycle of $C_1$ that at some step of Algorithm Color7 has a minimal number of uncolored rays, no incident b-edges and $\lambda(c)>2$. Then it is always possible to process $c$.
\end{lemma}
\noindent{\bf Proof.~}
If $c$ has exactly one uncolored ray $r$ or its already colored rays do not contain a bow, then the proof is the same as in Lemma \ref{min}.
Assume then now that $c$ already has a colored ray, which is a bow. If $c$ has two uncolored incident edges $e,f$ of $C'_{max}$ forming a set $U$,
then we only have to use $mult(U)-5-x - \Delta(c)$ colors of the set $Z$, which has size at least $12 -\Delta(c)$ if $e,f$ are not coincident with the same edge or $7- \Delta(c)$, otherwise. The first case is analogous to those considered in Lemma \ref{min}. In the second one, $e,f$ cannot contain a bow and thus we only have to use $8-5-1=2$ colors of $Z$, which is easily achieved.
If $c$ has at least $3$ uncolored rays, then the proof is almost the same as the proof of Lemma \ref{min}.
\hfill $\Box$\\[.1ex]
\begin{lemma} \label{min2}
Let $c$ be an unprocessed $2$-cycle of $C_1$ that at some step of Algorithm Color7 has a minimal number of uncolored rays and no incident b-edges. Then it is always possible to process $c$.
\end{lemma}
\noindent{\bf Proof.~}
If $c=(v,u)$ has no incident bow, then the proof is almost the same as that of Lemma \ref{2cycle}.
Suppose now that $c$ has a bow $r$ incident to vertex $u$ of $c$. It means that $r$ belongs to a $2$-cycle $c'=(u,u')$ of $C_{max}$ and $u'$
does not lie on $c$. If $u'$ also lies on a $2$-cycle $c_1=(u',v')$ of $C_1$, we also process $c_1$, i.e., we process $c$ and $c_1$ (and possibly some other $2$-cycles) during the same step. W.l.o.g. we may assume that rays $r_1,r_2$ of $c$ incident to $v$ and rays $r_3,r_4$ of $c_1$ incident to $v'$ are not bows. We then treat these two $2$-cycles $c, c_1$ as though they were one $2$-cycle of $C_1$ with four rays $r_1, r_2, r_3, r_4$. We then complete the coloring on bows accordingly, i.e., if $r_1$ is an inray of $c$ and $r_3$ an outray of $c_1$, we assign any colors of $col(r_1) \cup col(r_3)$ to the edge $(u,u')$. Similarly, we assign any colors of $col(r_2) \cup col(r_4)$ to the edge $(u',u)$. \hfill $\Box$\\[.1ex]
\begin{lemma} \label{bcycle}
Let $c$ be an unprocessed cycle of $C_1$ that has an incident b-ray. Then it is always possible to process $c$.
\end{lemma}
\noindent{\bf Proof.~} We show that we can always ensure that $c$ is not blocked by coloring the rays of $c$ in such a way that $kol(c)+flex^+(c)+flex^0(c) \geq 20$.
To process $c$, we need to color all rays of $c$ which are not s-edges. We partition ${\cal K}$ into two disjoint sets $Z^+(c)$ and $Z^-(c)$. If we precolor each uncolored outray with colors of $Z^+(c)$ and each uncolored inray with colors of $Z^-(c)$, then each such newly colored ray becomes safe. As for s-edges, we have more freedom in coloring them and do not have to observe this partition. Recall that each s-edge $e$ is guaranteed to be safe (as long as it is not assigned the same color $k$ as the other b-edge(s) of the same halfy triangle).
While coloring the rays of $c$, we also have to ensure that no other cycle or halfy cycle of $C_1$ becomes blocked. We do not need to concern ourselves with blocking cycles of $C_1$ different from $c$, because cycles of $C_1$ with no incident b-rays are already processed and by the current lemma we are always capable of processing a cycle of $C_1$ with an incident b-ray. Thus we only have to take care of halfy cycles.
Since each ray of $c$ is an antenna of at most one halfy cycle, every ray of $c$ has to be diverse with at most one edge.
We make the following useful observation.
\begin{claim} \label{cbray}
Let $r$ be a b-ray of $c$, $r'$ an s-edge associated with $r$ and $k$ a color not occurring on any ray of $c$. Then we can always assign $k$
to at least one of $r,r'$ and be able to color the halfy triangle containing $r$ in a cooperative manner.
\end{claim}
\noindent{\bf Proof.~}
Let $t$ be a halfy triangle containing $r$.
If $k$ is not forbidden on $r$, then we are done. Suppose then that $k$ is forbidden on $r$, which means that $k$ is assigned to the outer antenna $a$ of $t$ - we have a requirement that $r$ has to be diverse with $a$. If $t$ is a tricky $2$-triangle, then it means that while shadowing $r'$, we will assign colors of $col'(a)$ to $col'(r')$ and hence we will assign $k$ to $r'$. More precisely, we will assign $k$ to $r'$ as long as $k \notin col''(r)$. However, if $k \in col''(r)$, then $k$ is assigned to $r$. Either way $k$ can appear on $r$ or $r'$.
If $t$ is a tricky $3$-triangle and $k$ is assigned to $a$, then we do not want to assign $k$ to $r$, because we then would have to shadow $r'$ w.r.t. $k$ and possibly additional $5$ inherited colors of $r$. But we can notice that if $k$ does not occur on any ray of $c$, we do not have to shadow $r'$ w.r.t. $k$, because $r$ is safe w.r.t. it. Even if $k$ is assigned to inherited colors of some b-ray $r_1$ of $c$, then we make sure that $r_1$ is safe w.r.t. all inherited colors. This completes the proof. \hfill $\Box$\\[.1ex]
In view of the above claim, we notice that if the number of uncolored rays of $c$ that are not s-edges is at least $4$, then we can easily
guarantee that $kol(c)+flex^0(c) \geq 20$ by assigning different colors to $col'(r)$ of each uncolored ray or by assigning them to an appropriate s-edge. (For example, suppose that $c$ has one b-ray $r$, one s-edge $r'$ and $3$ uncolored rays and for each of the rays the same $5$ colors of $Z'$ are forbidden. Then by Claim \ref{cbray} we can assign $Z'$ to $r'$ and use colors of ${\cal K} \setminus Z'$ on the uncolored rays.)
The situation is analogous if $c$ already has one colored ray and the number of uncolored rays of $c$ that are not s-edges is at least $3$,
or generally if $kol(c)+flex^+(c)+flex^0(c)+blank(c)-5\,|\{\mbox{s-edges incident to } c\}| \geq 20$. In all these cases we are simply able to use $20-kol(c)-flex^+(c)-flex^0(c)$ new colors (not already included in $col(c)$) on the uncolored rays or s-edges.
Let us next observe that in the situation when all uncolored rays of $c$ that are not s-edges are either all outrays or all
inrays, all rays of $c$ are safe under the condition that each b-ray of $c$ is diverse with an associated s-edge. Thus we do not have to shadow s-edges incident to $c$ and can use new colors on them. This means that in such situations we are always able to use
$20-kol(c)-flex^+(c)-flex^0(c)$ new colors on uncolored rays of $c$, because $\chi(c)=kol(c)+flex^+(c)+ flex^0(c)+ blank(c)\geq 20$.
We can use the same argument for any $2$-cycle with exactly one incident b-ray, even if it has two uncolored inrays or two uncolored outrays.
To illustrate the above reasoning consider the following example. Suppose that $c$ has only two uncolored rays: an incoming b-ray $r$ and an s-edge $r'$ associated with it. In this case $c$ already has at least $4$ colored rays (or it has some number of colored rays and chords). Thus it already holds that $kol(c)+flex^+(c)+flex^0(c) \geq 10$. Also, the ally $al$ of $r$ is already colored. Let $Z'=col'(al)$. We of course have to assign $Z'$ to $r$. Since $c$ does not have an uncolored outray that is not an s-edge, $r$ is safe w.r.t. every color of $Z'$. Hence we do not have to shadow $r'$. Next we assign $5$ new colors to $r$ and $5$ different new colors to $r'$. This way we increase $kol(c)$ by $10$. As a result $kol(c)+flex^+(c)+flex^0(c) \geq 20$.
We are thus left with the following three cases: $c$ has one b-inray and one b-outray and either (i) one more b-ray and no other chords or rays
or (ii) two rays incident to the same vertex and $kol(c)+flex^+(c)+flex^0(c)<10$ or (iii) $c$ is a $2$-cycle.
In the first two cases it means that $c$ is a triangle. Therefore, in all these three cases one of the edges of $c$ is coincident both with a b-outray $r_1$ and a b-inray $r_2$.
Let us note that $r_1, r_2$ will be colored with the same $10$ colors belonging to the set $Z$ and none of the s-edges $s_1,s_2$ associated with, respectively, $r_1$ and $r_2$ will be colored with any element of $Z$. Hence, $r_1, r_2$ are going to contribute $10$ new colors to $col(c)$ and $s_1, s_2$ are going to either contribute at least another $5$ to $kol(c)$ or increase $flex^+(c)$. Either way, $s_1, s_2$ will increase $kol(c)+flex^+(c)$ by $5$. If $c$ already has some colored rays or is a $2$-cycle, then it means that we have already guaranteed that $kol(c)+flex^+(c)+flex^0(c)$ reaches at least $20$. If we have the first case (i), then let $r_3$ be the third b-ray of $c$. Let us observe that $r_3$'s ally
is either $s_1$ or $s_2$. It suffices if we precolor $r_3$ with colors disjoint with $Z$. In this way the contribution of $r_1, r_2, r_3$
to $col(c)$ amounts to $15$ colors and the edges $s_1, s_2$ will increase $kol(c)+flex^+(c)$ by another $5$. \hfill $\Box$\\[.1ex]
\begin{fact}
\begin{enumerate}
\item Each edge of $C'_{max}$ is an antenna of at most two halfy cycles of $C_1$.
\item If an edge $e \in C'_{max}$ is an antenna of two halfy cycles, then it is not incident to a cycle of $C_1$.
\end{enumerate}
\end{fact}
\begin{lemma} \label{path1}
It is always possible to process a path of $C_1$.
\end{lemma}
\noindent{\bf Proof.~}
Since paths are processed after cycles of $C_1$, only halfy cycles of $C_1$ can become blocked. To prevent this, we have to ensure that antennas of the same halfy cycle are diverse or weakly diverse. The path $p$ has two outer antennas $a_1, a_2$ and at most one of them is an inray and at most one an outray of $p$. (Each one of them may also be a chord of $p$.) Assume that $a_1$ is an inray and $a_2$ an outray. Each $a_i$ may have to be diverse with two different antennas. Additionally, each $a_i$ may be accompanied by either an inner antenna of the same halfy cycle or a weak antenna. In each case we call it $a'_i$. Observe that any inner antenna $a'_i$ may also have to be diverse with two other antennas, one of which is always $a_i$. A weak antenna $a'_i$ only needs to be weakly diverse with $a_i$, but may have to be diverse with some other antenna $b_j$. The case is most difficult when all four antennas exist and all are bilateral. Note that no other ray of $p$ is a bilateral antenna.
Let $Z_i$ and $Z'_i$ denote the set of colors forbidden on $a_i$ and $a'_i$, respectively. Let us note that if $a_i$ is uncolored, then $|Z'_i| \leq 5$. If $a'_i$ is an uncolored inner antenna, then $|Z_i| \leq 5$. If $a'_i$ is a weak antenna (or if $p$ has no antenna $a'_i$), then $|Z_i|$ may be equal to $10$.
Assume that none of the four antennas is already colored, as the other cases are contained in this one.
Suppose first that $a'_1$ and $a'_2$ are weak antennas.
We partition ${\cal K}$ into
two $10$-element sets $Z^-(p)$ and $Z^+(p)$ so that $|Z^-(p) \setminus Z_1 | \geq 5,\ |Z^-(p) \setminus (Z_1 \cup Z'_1) | \geq 7 $ and $|Z^+(p) \setminus Z_2 | \geq 5,\ |Z^+(p) \setminus (Z_2 \cup Z'_2) | \geq 7 $. To this end, we divide $Z={\cal K} \setminus (Z_1 \cap Z_2)$ (almost) equally between $Z^-(p)$ and $Z^+(p)$ in such a way that $Z\setminus Z_1$ goes to $Z^-(p)$ and $Z \setminus Z_2$ to $Z^+(p)$. Since $|Z_1 \cap Z_2| \leq 10$, and hence $|Z|\geq 10$, each of $Z^-(p)\setminus Z_1, Z^+(p)\setminus Z_2$ contains at least $5$ colors. Additionally,
we divide $Z'=(Z_1 \cap Z_2) \setminus (Z'_1 \cap Z'_2)$ (almost) equally between $Z^-(p)$ and $Z^+(p)$ in such a way that $Z'\setminus Z'_1$ goes to $Z^-(p)$ and $Z' \setminus Z'_2$ to $Z^+(p)$. Since $|Z'_1 \cap Z'_2|\leq 5$, we get that $\frac{1}{2}(|Z|+|Z'|) \geq \frac{1}{2}(20-|Z_1 \cap Z_2|+ |Z_1 \cap Z_2|- |Z'_1 \cap Z'_2|) \geq 7$, which means that each of the sets $Z^-(p) \setminus (Z_1 \cup Z'_1), Z^+(p) \setminus (Z_2 \cup Z'_2)$ contains at least $7$ elements. To finish the partition of ${\cal K}$, we divide the set $Z'_1 \cap Z'_2$ in such a way that each of the sets
$Z^-(p)$ and $Z^+(p)$ has exactly $10$ elements.
We can check that having such sets $Z^-(p)$ and $Z^+(p)$ enables us to color each ray of $p$ so that each antenna is (weakly) diverse with the required antennas. Any ray $r$ that is not one of the four antennas $a_1, a_2, a'_1, a'_2$ has at most $5$ colors forbidden on it,
and since each of the sets $Z^-(p), Z^+(p)$ contains $10$ elements, we are able to color $r$ in the required manner. To color $a_1$ and $a'_1$,
we assign to them colors of $Z^-(p) \setminus (Z_1 \cup Z'_1)$ (of which there are at least $7$) so that $|col'(a_1) \cap col'(a'_1)| \leq 3$.
Suppose next that $a'_1, a'_2$ are inner antennas. Then we want to partition ${\cal K}$ into
two $10$-element sets $Z^-(p)$ and $Z^+(p)$ so that $|Z^-(p) \setminus (Z_1 \cup Z'_1)| \geq 10$ and $|Z^+(p) \setminus (Z_2 \cup Z'_2)| \geq 10$.
Observe that it is not possible if $Z_1 \cap Z_2 \cap Z'_1 \cap Z'_2 \neq \emptyset$. However, this can be avoided by requiring, for example, that antennas
$b_1, b'_1$ are diverse (thus $b_1$ has to be diverse with $c_1$ and $b'_1$ instead of $c_1$ and $a_1$ at the moment of coloring $b_1$), where $b_1, b'_1$ are the antennas with which $a_1, a'_1$ have to be diverse. The case when one of $a'_1, a'_2$ is a weak antenna and the other an inner one is very similar.
After computing such a partition, we are able to color each ray of $p$ so as to ensure that each antenna is (weakly) diverse with the required antennas. \hfill $\Box$\\[.1ex]
\section{Path-coloring} \label{pathcol}
Once we have computed a maximum weight cycle cover $C_{max}$ and a relaxed cycle cover $C_1$, our next task is to construct and color a multigraph $G_1$. The constructed multigraph is required to have weight at least $4 w(C_{max})+10 w(C_1)$ and be path-$20$-colorable.
On the high level, to satisfy the first requirement, we build $G_1$ by taking $4$ copies of $C_{max}$ and $10$ copies of $C_1$, obtaining possibly a multigraph with non-path-$20$-colorable subgraphs. To remedy the multigraph, we replace certain edges of such non-path-$20$-colorable subgraphs with other ones in a way
that preserves the required weight and makes the multigraph $G_1$ path-$20$-colorable or facilitates the coloring. The precise construction is described below.
\subsection{Preprocessing via alternating cycles}
The preprocessing described below is needed for coloring $2$-cycles of $C_1$ (the proof of Lemma \ref{2cycle}) and it can be skipped during the first reading.
Before building the multigraph $G_1$ we modify $C_1$ so that it differs from $C_{max}$ in a minimal way. An {\bf \em alternating cycle} in $C_{max} \oplus C_1$ is a sequence of edges of the form
\newline $(v_1, v_2), (v_3, v_2), (v_3, v_4), (v_5, v_4), \ldots, (v_{k-1}, v_k), (v_1,v_k)$,
in which edges belong alternately to $C_{max}\setminus C_1$ and $C_1 \setminus C_{max}$. By applying an alternating cycle $C_{alt}$ to $C_1$
we mean the operation, whose result is $C_1 \oplus C_{alt}$, in which we treat $C_{alt}$ as a set of edges. An alternating cycle $C_{alt}$ is {\bf \em good} if $C'_1=C_1 \oplus C_{alt}$ does not contain a problematic cycle or a tricky triangle of $R$, i.e., $C'_1$ is a relaxed cycle cover improving $C_{max}$.
\begin{fact}
Let $C_{alt}$ be a good alternating cycle and $C'_1=C_1 \oplus C_{alt}$. Then $w(C'_1)\geq w(C_1)$.
\end{fact}
\noindent{\bf Proof.~}
Since $C_{max}$ is a maximum weight cycle cover of $G$, $w(C_{max} \oplus C_{alt}) \leq w(C_{max})$. Therefore, $w(C'_1) \geq w(C_1)$.
\hfill $\Box$\\[.1ex]
We apply good alternating cycles to $C_1$ until it is no longer possible. We continue to call the resulting relaxed cycle cover $C_1$.
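Operationally, this preprocessing is a simple loop over symmetric differences. The following Python sketch is purely illustrative; the oracle \texttt{find\_good}, which returns a good alternating cycle or \texttt{None}, is our own device and not part of the formal construction.
\begin{verbatim}
def apply_alternating_cycle(C1, C_alt):
    # C1 and C_alt are sets of directed edges; applying C_alt to C1
    # is the symmetric difference C1 (+) C_alt.
    return C1 ^ C_alt

def preprocess(C1, find_good):
    # Repeat while a good alternating cycle exists; by the fact
    # above, each application cannot decrease w(C1).
    while True:
        C_alt = find_good(C1)
        if C_alt is None:
            return C1
        C1 = apply_alternating_cycle(C1, C_alt)
\end{verbatim}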
\begin{lemma} \label{alt}
After preprocessing, it holds that no alternating cycle is good.
\end{lemma}
\subsection{Construction of $G_1$}
In this section we assume that $G$ does not contain any tricky triangles and that $C_1$ does not contain
any {\bf \em strange $2$-cycles} - a $2$-cycle is said to be strange if exactly one of its edges belongs to $C_{max}$. This in particular means that each half-edge of $C_1$ belongs to a tricky $2$-cycle. If $C_1$ contains exactly one half-edge of each edge of a $2$-cycle $c$ of $G$, then $c$ is called a {\bf \em halfy} $2$-cycle of $C_1$. In this section we also assume that each halfy $2$-cycle $c$ of $C_1$ belongs to $C_{max}$.
We start the construction of $G_1$ by taking $4$ copies of $C_{max}$ and $10$ copies of $C_1$, by which we mean the following. Let $mult(e)$
denote the number of copies of edge $e \in G$ contained in $G_1$. At the beginning for each edge $e \in G$, we set $mult(e)=0$.
Next, for each $e\in C_{max}$, we increase $mult(e)$ by $4$ and further, for each $e \in C_1$ (note that $e\in C_1$ means that the whole edge $e$ belongs to $C_1$), we increase $mult(e)$ by $10$. Subsequently, for each $e$ such that $C_1$ contains only a half-edge of $e$,
we increase $mult(e)$ by $5$. Clearly, the thus obtained $G_1$ has weight equal to exactly $4 w(C_{max})+10 w(C_1)$.
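A minimal sketch of this multiplicity assignment follows (the names are ours: \texttt{C1\_full} denotes the edges wholly contained in $C_1$ and \texttt{C1\_half} the edges of which $C_1$ contains exactly one half-edge).
\begin{verbatim}
def build_multiplicities(E, C_max, C1_full, C1_half):
    # mult(e) after taking 4 copies of C_max and 10 copies of C_1.
    mult = {e: 0 for e in E}
    for e in C_max:
        mult[e] += 4           # 4 copies of C_max
    for e in C1_full:
        mult[e] += 10          # 10 copies of C_1
    for e in C1_half:
        mult[e] += 5           # a half-edge contributes 5 copies
    return mult
\end{verbatim}
Summing $w(e)\cdot mult(e)$ over all edges then reproduces the weight $4 w(C_{max})+10 w(C_1)$.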
\subsection{Path-coloring}
Let ${\cal K}$ denote $\{i \in {\mathbb N}: 1 \leq i \leq 20\}$.
To {\bf \em path-20-color} a multigraph $G_1$, or to {\bf \em path-color} it, means to assign a color of ${\cal K}$ to each edge of $G_1$ in such a way that each color class consists of vertex-disjoint paths. Equivalently, we will be interested in path-coloring the underlying simple graph $G_1$, in which to each edge $e$ of $G_1$ we will assign a subset $col(e)$ of colors of ${\cal K}$ such that the size of $col(e)$ equals the number of copies of $e$ in the multigraph $G_1$, i.e., $|col(e)|=mult(e)$ (and each color class consists of vertex-disjoint paths).
A path-coloring of $G_1$ will be carried out gradually. In the process each edge $e$ of $G_1$ can be either {\bf \em colored} - when it has $mult(e)$ colors assigned to it, or {\bf \em uncolored} - when it is assigned no color.
A cycle $c$ is called {\bf \em monochromatic} if there exists a color $i$ of ${\cal K}$ such that each edge of $c$ is colored with $i$ - $c$ is then a monochromatic cycle of color $i$. Of course, a (partially) path-colored $G_1$ cannot contain a monochromatic cycle.
We will say that an edge $e$ is {\bf \em safe} if no matter how we color the so far uncolored edges, it is guaranteed not to belong to any monochromatic cycle. For example, suppose that $u$ has three incident edges in $G_1$ - $e_1=(u,v), e_2=(z,u), e_3=(z',u)$ such that $mult(e_1)=mult(e_2)=4, \ mult(e_3)=10$ and $col(e_1)=\{1,2,3,4\}$, $col(e_2)=\{5,6,7,8\}$ and $col(e_3)=\{11, \ldots, 20\}$. Also, $u$ has no other outgoing edge in $G_1$. Then, clearly, $e_1$ is safe. By saying that an edge $e$ is $k$-safe we will mean that $e$ is guaranteed not to belong to a monochromatic cycle of color $k$.
If $S$ denotes any subset of vertices of $G_1$, then $S^+$ denotes a set of edges $\{(u,v) \in G_1: u \in S, v \notin S\}$ and analogously,
$S^-=\{(u,v) \in G_1: u \notin S, v \in S\}$.
In path-coloring $G_1$ we are going to heavily use the following very helpful observation:
\begin{observation} \label{obs}
Suppose that edge $e\in S^-$ is colored with $k$ and no edge of $S^+$ is uncolored or colored with $k$. Then $e$ is $k$-safe.
Analogously, if edge $e\in S^+$ is colored with $k$ and no edge of $S^-$ is uncolored or colored with $k$, then $e$ is $k$-safe.
\end{observation}
Recall that $C_1$ consists of cycles and paths. Any path $p$ of $C_1$ ends with a half-edge of some edge $e$. Such edge $e$ is called a {\bf \em border} of $p$. All paths of $C_1$ occurring in this section end with borders belonging to tricky $2$-cycles of $C_{max}$. Notice that each halfy $2$-cycle of $C_1$ either has two incoming paths of $C_1$ or two outgoing paths of $C_1$.
Apart from borders, we distinguish two other types of edges of $C_{max}$. An edge $e=(u,v) \in C_{max}$ that is not a border is called a {\bf \em ray} if $u$ and $v$ belong to two different cycles of $C_1$ or two different paths of $C_1$
or one of them belongs to a path of $C_1$ and the other to a cycle of $C_1$. Otherwise, it is called a {\bf \em chord}. Note that a chord $e$ may also belong to $C_1$. A ray $r=(u,v)$ incident to a vertex on a cycle $c$ or path $p$ of $C_1$ is said to be a ray of $c$ or correspondingly $p$. If vertex $v$ lies on $c$ (or $p$), then $r$ is said to be an {\bf \em inray} of $c$ (or $p$). Otherwise, it is called its {\bf \em outray}.
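To fix the terminology, the classification might be coded as below (a sketch; the helper \texttt{component}, mapping a vertex to the cycle or path of $C_1$ containing it, is our own device).
\begin{verbatim}
def classify(e, component, borders):
    # Classify an edge e = (u, v) of C_max.
    u, v = e
    if e in borders:
        return "border"
    return "ray" if component[u] != component[v] else "chord"

def is_inray(e, c, component):
    # A ray e = (u, v) is an inray of c if its head v lies on c,
    # and an outray of c otherwise.
    return component[e[1]] == c
\end{verbatim}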
Using Observation \ref{obs} we can apply the following simple method of coloring rays of $C_1$.
\begin{lemma} \label{basekol}
Let $c$ be a cycle of $C_1$ such that each of its incident rays is uncolored or safe. Then we are able to color all uncolored rays of $c$ in such a way that each one of them is safe.
\end{lemma}
\noindent{\bf Proof.~} It is easy to guarantee that each newly colored ray is safe - it suffices if we color inrays and outrays of $c$ with disjoint sets of colors, i.e., we partition ${\cal K}$ into $Z^-(c)$ and $Z^+(c)$ and each uncolored inray of $c$ is colored with colors of $Z^-(c)$ and each uncolored outray of $c$ with colors of $Z^+(c)$.
Then by Observation \ref{obs} and the fact that each previously colored ray is already safe, each ray of $c$ is safe. \hfill $\Box$\\[.1ex]
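The coloring rule of this proof can be written down directly (a sketch; it deliberately ignores the additional diversity constraints discussed later, and the even split of ${\cal K}$ is one possible choice).
\begin{verbatim}
K = sorted(range(1, 21))              # the palette {1,...,20}

def color_rays(inrays, outrays, mult, col):
    # Inrays only receive colors of Z_minus and outrays only colors
    # of Z_plus, so by Observation (obs) every newly colored ray is
    # safe.
    Z_minus, Z_plus = K[:10], K[10:]
    for r in inrays:
        if not col[r]:                # uncolored ray
            col[r] = set(Z_minus[:mult[r]])
    for r in outrays:
        if not col[r]:
            col[r] = set(Z_plus[:mult[r]])
    return col
\end{verbatim}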
For paths of $C_1$ we can in fact apply the same method:
\begin{lemma} \label{basekolpath}
Let $p$ be a path of $C_1$ such that each of its incident rays is uncolored or safe. Then we are able to color all uncolored rays of $p$ in such a way that each one of them is safe.
\end{lemma}
\noindent{\bf Proof.~} We use the same method as in the lemma above. Rays are not the only outgoing/incoming edges of $p$. There are also borders.
However, the only cycle any border belongs to is a halfy $2$-cycle and any $2$-cycle consists of two borders (of two different paths of $C_1$). Thus by this observation, Observation \ref{obs} and the fact that each previously colored ray is already safe, each ray of $p$ is safe. \hfill $\Box$\\[.1ex]
Coloring rays so that they are safe does not mean, however, that there always exists a possibility of coloring the remaining edges of $G_1$
so that we do not create a monochromatic cycle. Let us consider a few examples.
If $c=(p,q,r,s)$ is a $4$-cycle of $C_1$ with $4$ inrays, each colored with $\{1,2,3,4\}$ and $4$ outrays, each colored with $\{5,6,7,8\}$, then
the only colors we can use on any edge of $c$ are those belonging to $Z={\cal K} \setminus \{1,2, \ldots, 8\}$. Any color of $Z$ can be used on at most three edges of $c$ and each edge of $c$ has to be assigned $10$ different colors. Thus we would need at least $40/3>13$ different colors, but have only $12$. Therefore, it is not possible to path-color $c$.
We can notice that, if instead of a $4$-cycle we had a $6$-cycle $c$ with $6$ inrays, each colored with $\{1,2,3,4\}$ and $6$ outrays, each colored with $\{5,6,7,8\}$, then we would be able to path-color $c$. Suppose now that we have a $4$-cycle $c=(p,q,r,s)$
the same as above except for the fact that one of its outrays is colored with $\{1,2,3,4\}$. We of course assume that all rays are safe.
It turns out that in this case we can path-color $c$, because one edge of $c$ can be colored with colors of $\{5,6,7,8\}$ and then we need $(40-4)/3 =12$ colors for the rest.
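These counting arguments can be checked mechanically; the following toy computation (not part of the argument) reproduces the three bounds.
\begin{verbatim}
import math

# 4-cycle: 4 edges x 10 colors each; every color of the 12-element
# set Z fits on at most 3 of the 4 edges (else a monochromatic cycle).
print(math.ceil(4 * 10 / 3))        # 14 > 12: not path-colorable

# 6-cycle: each of the 12 colors fits on at most 5 of the 6 edges.
print(math.ceil(6 * 10 / 5))        # 12 <= 12: path-colorable

# modified 4-cycle: one edge takes {5,6,7,8}, the rest need
print(math.ceil((4 * 10 - 4) / 3))  # 12 <= 12: path-colorable
\end{verbatim}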
Below we define {\em blocked} cycles of $C_1$ and prove that any cycle of $C_1$ that is not blocked can be path-$20$-colored.
Two edges $e_1, e_2$ are said to be {\bf \em coincident}
if there exists vertex $v$ such that either $e_1=(v,v_1), e_2=(v,v_2)$ or $e_1=(v_1,v), e_2=(v_2,v)$.
We say that two edges $e_1, e_2$ are {\bf \em diverse} if $col(e_1) \cap col(e_2)=\emptyset$. Let us note that coincident edges must be diverse.
We define the flexibility of $e=(u,v) \in C_1$, denoted $flex(e)$, as follows. Let $e_1=(u,v'), e_2=(u',v)$ be edges of $C_{max}$ coincident with $e$ (it is possible that $e_1=e_2$, or that one or both of $e_1, e_2$ do not exist because they have been removed during the modification of $G_1$). Then $flex(e)=10- |col(e_1) \cup col(e_2)|$. Thus, if $e_1$ and $e_2$ are colored with two non-empty disjoint sets of colors, then $flex(e)=2$.
For each cycle $c$ of $C_1$ we define its flexibility $flex(c)$ and colorfulness $kol(c)$. The flexibility of $c$ is defined as $flex(c)=\sum_{e \in c} flex(e)$. Colorfulness $kol(c)$ denotes the number of colors of ${\cal K}$ used so far for coloring the edges of $G_1$ incident to $c$. For a subset $E'$ of edges of $E$ by $mult(E')$ we denote $\sum_{e \in E'} mult(e)$. By $\lambda(c)$ we denote the length of a cycle $c$. For a cycle $c$ with at least one chord, $chor(c)=4$; a cycle $c$ with no chords has $chor(c)=0$.
Using the above notions we define the characteristic $\chi(c)$ of a cycle $c$ of $C_1$ as follows. If $c$ has (i) at least two chords or
(ii) one chord and $\lambda(c)>2$, then $\chi(c)=20$. Otherwise, $\chi(c)=flex(c)+kol(c)-chor(c)$.
A cycle $c$ of $C_1$ is said to be {\bf \em blocked} if $\chi(c)<20$ and {\bf \em unblocked} otherwise.
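In pseudocode, the blocking test reads as follows (a sketch over the quantities just defined; all arguments are numbers computed from the current partial coloring).
\begin{verbatim}
def chi(flex_c, kol_c, chor_c, n_chords, length):
    # Characteristic of a cycle c of C_1.
    if n_chords >= 2 or (n_chords == 1 and length > 2):
        return 20
    return flex_c + kol_c - chor_c

def is_blocked(flex_c, kol_c, chor_c, n_chords, length):
    return chi(flex_c, kol_c, chor_c, n_chords, length) < 20
\end{verbatim}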
\begin{lemma} \label{uzup}
Let $c$ be a cycle of $C_1$ that is not blocked and such that each of its incident rays is colored and safe. Then we are able to color all edges and chords of $c$ in such a way that each one of them is safe.
\end{lemma}
Similarly, as cycles of $C_1$ may be blocked, paths of $C_1$ can become non-path-$20$-colorable too. Or, more precisely, halfy $2$-cycles
of $C_1$ can become blocked. Let $p_1=(u_1, \ldots, u_k)$ denote a path of $C_1$. Then both $u_1$ and $u_k$ belong to two different halfy $2$-cycles $c_1=(u_1,v_1)$ and $c_2=(u_k, w_l)$ of $C_1$. Thus $C_1$ contains also paths $p_2=(v_1, \ldots, v_{k'})$ and $p_3=(w_1, \ldots, w_l)$, though it may happen that $p_2=p_3$. If $p_1$ consists of more than one edge, then $C_{max}$
contains edges $a_1=(u'_2,u_2), a'=(u_{k-1},u'')$, none of which is a border. Each of these edges is called an {\bf \em antenna} (of $p_1$).
$a_1$ is also said to be an antenna of $c_1$ and $a'$ of $c_2$.
\begin{figure}[h]
\centering{\includegraphics[scale=0.8]{anteny.pdf}}
\caption{{\scriptsize Antennas $a_1,a_2$ of a halfy $2$-cycle $(u_1, v_1)$.}
} \label{antennas}
\end{figure}
\begin{fact} \label{anteny}
Let $c=(u_1,v_1)$ be a halfy $2$-cycle of $C_1$ with two antennas $a_1, a_2$. Then, in any path-coloring of $G_1$ the antennas $a_1$ and $a_2$ have to be diverse.
\end{fact}
\noindent{\bf Proof.~} Suppose that $C_1$ contains paths $p_1=(u_1,u_2, \ldots, u_k)$ and $p_2=(v_1,v_2, \ldots, v_l)$. Then the antennas $a_1$ and $a_2$
have the form $a_1=(u'_2,u_2)$ and $a_2=(v'_2,v_2)$. We know that $mult(a_1)=mult(a_2)=4, \ mult(u_1,u_2)=mult(v_1,v_2)=mult(u_1,v_1)=mult(v_1,u_1)=10$. The situation is depicted in Figure \ref{antennas}.
Since $c$ is a $2$-cycle, its edges have to be diverse. Also, we may notice that $(u_1,u_2)$ has to be colored in the same way as $(v_1,u_1)$
and $(v_1,v_2)$ in the same way as $(u_1,v_1)$. Therefore $(u_1,u_2)$ and $(v_1,v_2)$ have to be diverse. Also $col(u_1,u_2) \cup col(v_1,v_2)={\cal K}$.
Since $a_1$ and $(u_1,u_2)$ have to be diverse and so do $a_2$ and $(v_1,v_2)$, $a_1$ and $a_2$ have to be diverse as well.\hfill $\Box$\\[.1ex]
If $a_1, a_2$ are two antennas of a $2$-cycle $c$ and $a_1$ is already colored but $a_2$ not, then we say that colors of $col(a_1)$
are {\bf \em forbidden} on $a_2$. A halfy $2$-cycle $c$ of $C_1$ is said to be {\bf \em blocked}, if it has two antennas and they are not diverse. The multigraph $G_1$ is {\bf \em blocked} if at least one cycle or halfy $2$-cycle of $C_1$ is blocked. The multigraph $G_1$ is {\bf \em safe} if each of its colored edges is safe.
We say that a cycle or path of $C_1$ is {\bf \em unprocessed} if at least one of its rays is uncolored.
To {\bf \em process} a cycle/path of $C_1$ means to color its rays so that each of them is safe and $G_1$ is not blocked, assuming that before starting to process this cycle or path, $G_1$ is safe and unblocked.
We are now ready to state the algorithm for path-$20$-coloring $G_1$.
\vspace{0.5cm}
{\scriptsize
\noindent Algorithm Color7 \\
\vspace{0cm} {\bf while} there exists an unprocessed cycle of $C_1$\\
\vspace{-0.2cm} \hspace{2cm} $c \leftarrow$ an unprocessed cycle of $C_1$ with a minimal number of uncolored rays;\\
\vspace{-0.2cm}\hspace{2cm} process $c$;\\
\vspace{0cm} {\bf while} there exists an unprocessed path of $C_1$\\
\vspace{-0.2cm} \hspace{2cm} $p \leftarrow$ any unprocessed path of $C_1$;\\
\vspace{-0.2cm}\hspace{2cm} process $p$;\\
\vspace{-0.2cm} color the remaining uncolored edges in such a way that each of them is safe;\\
}
In what follows we prove the correctness of Algorithm Color7.
Let $B$ denote the set of uncolored edges of $G_1$.
We divide the flexibility of each edge $e$ having coincident edges $e_1, e_2 \in C_{max}$ into three components $flex^0(e)= 10 - mult(e_1)- mult(e_2)$, $flex^+(e)= |col(e_1) \cap col(e_2)|$ and $blank(e)= mult(e_1 \cap B)+ mult(e_2 \cap B)$. Thus $flex(e)=flex^0(e)+flex^+(e)+blank(e)$. As a result the flexibility of each cycle $c$ of $C_1$ consists of three components as well - $flex(c)=flex^0(c)+flex^+(c)+blank(c)$.
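This decomposition translates directly into code (a sketch; we assume \texttt{col} maps a colored edge to its color set and an uncolored edge to the empty set).
\begin{verbatim}
def flex_components(e1, e2, col, mult, B):
    # flex(e) = flex0 + flexplus + blank for an edge e of C_1 whose
    # coincident C_max-edges are e1, e2 (either may be None if it
    # has been removed); B is the set of uncolored edges.
    es = [x for x in (e1, e2) if x is not None]
    flex0 = 10 - sum(mult[x] for x in es)
    flexplus = len(col[e1] & col[e2]) if len(es) == 2 else 0
    blank = sum(mult[x] for x in es if x in B)
    return flex0, flexplus, blank
\end{verbatim}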
We say that two rays $r_1 =(u, u'), r_2=(v',v)$ of a $2$-cycle $c$ are {\bf \em complementary (on $c$)} if either $(u,v)$ or $(v',u')$ is an edge of $c$.
\begin{lemma} \label{blocked}
Let $c$ be a cycle of $C_1$ with $u$ incident uncolored edges of $C_{max}$. Assume that $c$ has (i) no chords or (ii) one chord and $\lambda(c)=3$. Then $\chi(c)=kol(c)+flex^0(c)+flex^+(c)+blank(c) -chor(c) \geq kol(c)+2\lambda(c)+4u+flex^+(c)$.
As a consequence:
\begin{enumerate}
\item A cycle $c$ of length $\lambda(c)>2$ can be blocked only if it has at most one uncolored ray.
\item A $2$-cycle is blocked only if some two of its non-complementary rays $r_1, r_2$ are not diverse.
\end{enumerate}
\end{lemma}
\noindent{\bf Proof.~} Any chord $e$ of $c$ contributes $2 mult(e)$ to $blank(c)$. Therefore, $blank(c)-chor(c)=4u$.
For any edge $e$ of $c$, it holds $flex^0(e)=2$. The claim follows.
\hfill $\Box$\\[.1ex]
\begin{lemma} \label{ost}
Let $r_1, r_2$ be two edges of $C_{max}$ coincident with an edge $e$ belonging to a cycle $c$ of $C_1$. Suppose also that $r_1$ is uncolored. There exists a set $Z \subseteq {\cal K}$ of colors, the application of any color of which on $r_1$ increases $kol(c) +flex^+(c)$ by one, i.e., $kol(c) +flex^+(c)$ increases by $|col(r_1) \cap Z|$ after coloring $r_1$. If $r_2$ is uncolored, then $Z$ has $20-kol(c)$ elements. Otherwise, $Z$ is of size $20-kol(c)+mult(r_2)$.
\end{lemma}
\noindent{\bf Proof.~}
By coloring $r_1$ with a color not occurring yet on the rays of $c$, we increase $kol(c)$ by $1$. There are $20-kol(c)$ such colors.
Additionally, if $r_2$ is already colored, then by coloring $r_1$ with any color assigned to $r_2$, we increase $flex^+(e)$ and thus also
$flex^+(c)$. \hfill $\Box$\\[.1ex]
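Explicitly, the set $Z$ of the lemma is as follows (a sketch; \texttt{col\_c} stands for the set of colors already used on edges incident to $c$, so $|\texttt{col\_c}|=kol(c)$).
\begin{verbatim}
K = set(range(1, 21))

def useful_colors(col_c, r2_colors=None):
    # Fresh colors raise kol(c); if r2 is already colored, reusing
    # any of its colors on r1 raises flex^+(c) instead.
    Z = K - col_c                    # 20 - kol(c) fresh colors
    if r2_colors is not None:        # r2 already colored
        Z |= set(r2_colors)          # mult(r2) further colors
    return Z
\end{verbatim}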
For a cycle $c$ of $C_1$, $\rho(c)$ indicates the maximum multiplicity of $r_2$ from Lemma \ref{ost}, i.e., the maximum multiplicity of a colored ray $r_2$ of $c$ incident to $e \in c$ such that the other edge $r_1$ of $C_{max}$ incident to $e$ is an uncolored ray of $c$.
\begin{lemma} \label{poss}
Suppose that at step $S$ we want to color a set $U$ of uncolored edges, where $U$ consists of either (i) a subset of uncolored rays of a cycle $c$ of $C_1$ or (ii) an antenna of a halfy $2$-cycle $c$ of $C_1$.
Then, assuming that $G_1$ is unblocked, there always exists a number $\Delta'(c)$ and a set $Z \subseteq {\cal K}$ such that by using $\Delta'(c)$ different colors of $Z$ on $U$, we guarantee that $c$ does not become blocked. Depending on additional conditions, $\Delta'(c)$ and $|Z|$ can be expressed as the following functions of a certain $\Delta(c)\leq \Delta'(c)$:
\begin{enumerate}
\item [0]. If $c$ has at least two chords or one chord and $\lambda(c)>3$, then $\Delta'(c)=0$. In the remaining points we assume that $c$ has no chords or one chord and $\lambda(c)=3$.
\item If $c$ is a $2$-cycle with $r$ colored rays, then $\Delta'(c)=mult(U)$ and $|Z| =20-4r+\rho(c)$.
\item If $c$ has one uncolored ray, no chords and $\lambda(c)>2$, then $\Delta'(c)=4-\Delta(c) \geq 0$, where $\Delta(c)=flex^+(c)+kol(c)-10$ and $|Z|\geq 14-\Delta(c)$.
\item Assume that $c$ has exactly two uncolored incident edges of $C_{max}$ and $\lambda(c)>2$. Then $|Z|\geq 12-\Delta(c)+\rho(c)$, where $\Delta(c)=flex^+(c)+kol(c)-8$. If we color only one ray of $c$, then $\Delta'(c)=2-\Delta(c)$, otherwise $\Delta'(c)=6-\Delta(c)$.
\item Assume that $c$ has at least $u \geq 3$ uncolored rays and $\lambda(c)>2$. Then $|Z|\geq 20-flex^+(c)-kol(c)+\rho(c)$.
If we color $u-2$ rays of $c$, then $\Delta'(c)=0$; if $u-1$, then $\Delta'(c)=\max\{10- flex^+(c)-kol(c), 0\}$; if we color all $u$ rays of $c$, then $\Delta'(c)=\max\{14- flex^+(c)-kol(c), 0\}$.
\item If $U$ consists of an antenna of $c$, then $\Delta'(c)=4$ and $|Z|\geq 16$.
\end{enumerate}
\end{lemma}
The proof of this lemma is contained in the proof of Lemma \ref{poss1}.
\begin{fact} \label{anteny2}
\begin{enumerate}
\item Each edge of $C_{max}$ is an antenna of at most two different halfy $2$-cycles of $C_1$.
\item If an edge $e \in C_{max}$ is an antenna of two halfy $2$-cycles, then it is not incident to a cycle of $C_1$.
\end{enumerate}
\end{fact}
\begin{lemma} \label{min}
Let $c$ be an unprocessed cycle of $C_1$ with $\lambda(c)>2$ that at some step of Algorithm Color7 has a minimal number of uncolored rays. Then it is always possible to process $c$.
\end{lemma}
\noindent{\bf Proof.~} We divide the set of colors ${\cal K}$ into two sets $Z^+(c)$ and $Z^-(c)$. Next, we color each uncolored inray of $c$ with one of the colors of $Z^-(c)$ and each uncolored outray of $c$ with one of the colors of $Z^+(c)$. This way each newly colored ray is safe - by Observation \ref{obs} and the assumption that all previously colored rays are safe.
Now, we prove that we can carry out the above in such a way that no cycle or halfy $2$-cycle of $C_1$ becomes blocked.
Suppose first that $c$ has exactly one uncolored ray $r$. By Lemma \ref{poss} there exists $\Delta(c) \leq 4$
and a $(12-\Delta(c))$-element set $Z \subseteq {\cal K}$ such that by coloring $r$ with $4-\Delta(c)$ colors of $Z$ we guarantee that $c$ does not become blocked. If $r$ is incident to another cycle of $C_1$ or is an antenna of a halfy $2$-cycle of $C_1$, then we may also have to ensure
that this (halfy) cycle denoted as $c'$ does not become blocked. Regardless of whether $c'$ is a cycle or a halfy $2$-cycle of $C_1$, by Lemma
\ref{poss} we know that there exists an analogous number $\Delta(c') \leq 4$
and an at least $(12-\Delta(c'))$-element set $Z' \subseteq {\cal K}$ such that coloring $r$ with $4-\Delta(c')$ colors of $Z'$ guarantees that $c'$ does not become blocked. Because $20 \geq |Z \cup Z'|=|Z|+|Z'|-|Z \cap Z'| \geq 24 -\Delta(c)-\Delta(c')-|Z \cap Z'|$, we obtain that
$|Z \cap Z'|\geq 4 -\Delta(c)-\Delta(c')$. If $\Delta(c)+\Delta(c') \geq 4$, then $(4-\Delta(c))+ (4-\Delta(c')) \leq 4$ and we can simply
use $4-\Delta(c)$ colors of $Z$ and $4-\Delta(c')$ colors of $Z'$ to color $r$. Otherwise, we use $4 -\Delta(c)-\Delta(c')$ colors of $Z \cap Z'$, $\Delta(c')$ colors of $Z\setminus Z'$ and $\Delta(c)$ colors of $Z'\setminus Z$. This way neither $c$ nor $c'$ will become blocked.
Suppose now that $c$ has exactly two uncolored rays $r_1, r_2$. By Lemma \ref{poss} it is enough to color $r_1, r_2$ with
$6-\Delta(c)$ colors of an at least $(12-\Delta(c))$-element set $Z \subseteq {\cal K}$. If $r_1$ is a ray or an antenna of a (halfy) cycle $c'$ of $C_1$, then by the argument above, we can color $r_1$ so that at least $4-\Delta(c)$ colors belong to $Z$ and at least $4-\Delta(c')$ to
$Z'$. This means that we have already used at least $4 - \Delta(c)$ colors of $Z$. To guarantee that $c$ is not blocked, it suffices to use at most $2$ additional (not already used on $r_1$) colors of $Z$. If $r_1$ and $r_2$ are also the last two uncolored rays of $c'$, then we have to use $6-\Delta(c')$ colors of $Z'$, which means that it suffices to color $r_2$ with two additional colors of $Z'$. If $r_2$ is a ray of a different cycle $c''$ of $C_1$, then by Lemma \ref{poss} it is enough to color $r_2$ with
$2-\Delta(c'') \geq 2$ colors of an at least $(12 -\Delta(c''))$-element set $Z''$. Thus both these cases are easy to handle - since $r_2$ has to be colored with $4$ colors, we use $2$ colors of $Z$ and two colors of either $Z'$ or $Z''$, depending on whether $r_2$ is incident to the same cycle $c'$ or not.
If $c$ has exactly $3$ uncolored rays, then on the one hand $kol(c)+flex^+(c) \geq 8$ and on the other it suffices to color the uncolored rays of $c$ with $14-kol(c)-flex^+(c) \leq 6$ colors that increase $kol(c)$. If the uncolored rays of $c$ are also the last $3$ uncolored rays
of some different cycle $c'$, then we may also need to use $6$ colors of ${\cal K} \setminus col(c')$. This can be easily achieved as we may use up to $12$ different colors for coloring $3$ rays.
\hfill $\Box$\\[.1ex]
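The inclusion-exclusion step used for a single uncolored ray can be made concrete (a sketch; it returns four colors meeting both quotas under the size bounds stated in the proof, with $d, d'$ standing for $\Delta(c), \Delta(c')$).
\begin{verbatim}
def pick_colors(Z, Zp, d, dp, K=range(1, 21)):
    # Return 4 colors, at least 4-d of which lie in Z and at least
    # 4-dp in Zp; feasible since |Z| >= 12-d, |Zp| >= 12-dp and
    # |K| = 20 give |Z & Zp| >= 4-d-dp.
    c = []
    if d + dp >= 4:                 # the two quotas fit side by side
        c += sorted(Z)[:max(4 - d, 0)]
        c += [x for x in sorted(Zp) if x not in c][:max(4 - dp, 0)]
    else:                           # colors of Z & Zp count twice
        c += sorted(Z & Zp)[:4 - d - dp]
        c += [x for x in sorted(Z) if x not in c][:dp]
        c += [x for x in sorted(Zp) if x not in c][:d]
    c += [x for x in sorted(K) if x not in c]   # pad up to 4
    return c[:4]
\end{verbatim}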
\begin{lemma} \label{2cycle}
Let $c$ be an unprocessed cycle of $C_1$ with $\lambda(c)=2$ that at some step of Algorithm Color7 has a minimal number of uncolored rays. Then it is always possible to process $c$.
\end{lemma}
\noindent{\bf Proof.~}
If $c$ has only one uncolored ray, then the proof is the same as in Lemma \ref{min}.
Let $O_2$ denote a set of cycles of $C_1$ such that each cycle $c$ of $O_2$ has no chords and has exactly two uncolored rays coincident with the same edge $e$ of $c$.
\begin{claim}
It never happens that each of the uncolored rays of $O_2$ is incident to another cycle of $O_2$.
\end{claim}
\noindent{\bf Proof.~}
Suppose to the contrary that each of the uncolored rays of $O_2$ is incident to another cycle of $O_2$. Then the rays of these cycles, together with the edges of these cycles with which the uncolored rays are coincident, form a good alternating cycle, contradicting Lemma \ref{alt}.
\hfill $\Box$\\[.1ex]
By this claim, we can always choose for processing a $2$-cycle that either does not belong to $O_2$ or belongs to $O_2$ but one of its rays is not incident to a cycle of $O_2$.
Suppose that $c$ belongs to $O_2$ and has two uncolored rays $r_1, r_2$. Rays $r_1, r_2$ have to be diverse with the already colored rays of $c$. Thus, there exists an at least $12$-element set $Z$ such that we have to color $r_1, r_2$ with $8$ colors of $Z$. If $r_1, r_2$ are also rays of another cycle $c'$, then by the above claim, $c' \notin O_2$. If $r_1, r_2$ are also the last uncolored rays of $c'$, then there exists a number $\Delta(c') \leq 4$
and an at least $(12-\Delta(c')+ \rho(c'))$-element set $Z' \subseteq {\cal K}$ such that coloring $r_1, r_2$ with $8-\Delta(c')$ colors of $Z'$ guarantees that $c'$ does not become blocked. Since $\rho(c')=4$, we get that $|Z \cap Z'| \geq 8-\Delta(c')$, which means that we have enough colors at our disposal. If $r_1, r_2$ are not the last uncolored rays of $c'$, then the task is even easier, because
either $Z'$ has more colors or we have to use fewer than $8-\Delta(c')$ colors of $Z'$.
Consider now the case when $r_i, i \in \{1,2\}$ is a ray or antenna of $c_i$ and $c_1 \neq c_2$. Then the coloring of $r_1, r_2$ is the most difficult when both $r_1$ and $r_2$ are antennas. In such a case by Lemma \ref{poss} there exist sets $Z_1, Z_2$, each of size at least $16$ such that coloring
$r_i$ with $4$ colors of $Z_i$ guarantees that $c_i$ is not blocked. Since we have that $|Z\cap (Z_1 \cup Z_2)| \geq 8$, it is also possible to color $r_1, r_2$ so that none of the cycles $c, c_1, c_2$ is blocked.
If $c$ has more than two uncolored rays, then processing $c$ is easy, because we can use all colors of ${\cal K}$ so as not to block $c$ and even if each of the uncolored rays $r_i$ is an antenna, then there exists an at least $16$-element set $Z_i$, which can be used for coloring $r_i$.
\hfill $\Box$\\[.1ex]
If $a$ is an antenna of two different halfy $2$-cycles, then it is said to be a {\bf \em bilateral antenna}.
\begin{lemma} \label{path}
Let $p$ be an unprocessed path of $C_1$. Then it is possible to process it.
\end{lemma}
\noindent{\bf Proof.~} The proof is similar to the one above. Since paths are processed after cycles of $C_1$, the only thing we have to take care of
is that antennas of the same halfy $2$-cycle are diverse. The path $p$ has at most two incident bilateral antennas $a_1, a_2$ and if it does, then at most one of them is an inray and at most one an outray of $p$. Assume that $a_1$ is an inray and $a_2$ an outray. (They may also be chords.) Each $a_i$ may have to be diverse with two different edges. Thus for each $a_i$ it may happen that up to $8$ colors are forbidden on it. Let $Z_i$ denote the set of colors forbidden on $a_i$. We partition ${\cal K}$ into
two $10$-element sets $Z^-(p)$ and $Z^+(p)$ so that $|Z^-(p) \setminus Z_1| \geq 5$ and $|Z^+(p) \setminus Z_2| \geq 5$.
To achieve this we divide ${\cal K} \setminus (Z_1 \cap Z_2)$ (almost) equally between $Z^-(p)$ and $Z^+(p)$. Since $|Z_1 \cap Z_2| \leq 10$, it is always possible.
Then we are able to color each ray of $p$ so as to ensure that each antenna is diverse with required antennas.
\hfill $\Box$\\[.1ex]
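The partition used in this proof admits a simple greedy construction (a sketch; it relies only on $|Z_1|, |Z_2| \leq 8$, which implies the condition $|Z_1 \cap Z_2| \leq 10$ of the proof).
\begin{verbatim}
def split_for_path(Z1, Z2, K=set(range(1, 21))):
    # Partition K into 10-element sets Z_minus, Z_plus such that
    # |Z_minus - Z1| >= 5 and |Z_plus - Z2| >= 5.
    good_minus = sorted(K - Z1)           # usable on inrays
    Z_minus = set(good_minus[:5])         # secure the first quota
    good_plus = sorted(K - Z2 - Z_minus)  # usable on outrays
    Z_plus = set(good_plus[:5])           # secure the second quota
    rest = sorted(K - Z_minus - Z_plus)
    Z_minus |= set(rest[:5])              # pad both sides to 10
    Z_plus |= set(rest[5:])
    return Z_minus, Z_plus
\end{verbatim}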
\section{Computation of a relaxed cycle cover $C_1$}
To compute a relaxed cycle cover $C_1$ improving $C_{max}$ we construct the following undirected graph $G'=(V',E')$.
For each vertex $v$ of $G$ we add two vertices $v_{in}, v_{out}$ to $V'$. For each edge $(u,v)$ that belongs to a tricky $2$-cycle, $3$-triangle or $2$-triangle of $R$
we add vertices $e^1_{uv}, e^2_{uv}$, called {\bf \em subdivision vertices} of $(u,v)$, an edge $(e^1_{uv}, e^2_{uv})$ of weight $0$ and edges $(u_{out}, e^1_{uv}), (v_{in}, e^2_{uv})$ having weights such that $w(u_{out}, e^1_{uv}) +w(v_{in}, e^2_{uv})= w(u,v)$. Edges $(u_{out}, e^1_{uv}), (v_{in}, e^2_{uv})$ are also called {\bf \em half-edges} of $(u,v)$. If $(u,v)$ does not belong to $t$ or $opp(t)$ such that $t$ is a tricky $3$-triangle, then each of the half-edges of $(u,v)$ gets weight $\frac{1}{2}w(u,v)$.
For every other edge $(u,v) \in E$ we add an edge $(u_{out}, v_{in})$ of weight $w(u,v)$.
Next we build so-called gadgets.
For each tricky $2$-cycle $c$ on vertices $u$ and $v$, we add vertices $\gamma^c_u$ and $\gamma^c_v$
and edges $(\gamma^c_{u}, e^1_{uv}), (\gamma^c_{u}, e^2_{vu}), (\gamma^c_{v}, e^1_{vu}), (\gamma^c_{v}, e^2_{uv})$ with weight $0$.
The gadget is shown in Figure \ref{2cykl}. If $c$ is not a subcycle of a tricky $3$-triangle or $2$-triangle of $R$, each of the half-edges
of $(u,v)$ gets weight $\frac{1}{2}w(u,v)$ and each of the half-edges
of $(v,u)$ gets weight $\frac{1}{2}w(v,u)$.
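For orientation, the edge splitting and the $2$-cycle gadget might be coded as follows (a sketch with ad-hoc vertex names of our own; the weights passed in stand for $w(u,v)$ and $w(v,u)$, and \texttt{G} is a list of weighted undirected edges).
\begin{verbatim}
def add_split_edge(G, u, v, w_uv, w_in_half=None):
    # Split edge (u,v) into two half-edges joined by a zero-weight
    # middle edge; w_in_half overrides the default equal split.
    e1, e2 = ("e1", u, v), ("e2", u, v)      # subdivision vertices
    w_in = w_uv / 2 if w_in_half is None else w_in_half
    G.append(((u, "out"), e1, w_uv - w_in))  # outgoing half-edge
    G.append((e1, e2, 0.0))
    G.append((e2, (v, "in"), w_in))          # incoming half-edge
    return e1, e2

def add_2cycle_gadget(G, u, v, w_uv, w_vu):
    # Gamma-vertices of the gadget for a tricky 2-cycle (u,v).
    e1_uv, e2_uv = add_split_edge(G, u, v, w_uv)
    e1_vu, e2_vu = add_split_edge(G, v, u, w_vu)
    for g, x, y in (((u, "gamma"), e1_uv, e2_vu),
                    ((v, "gamma"), e1_vu, e2_uv)):
        G.append((g, x, 0.0))
        G.append((g, y, 0.0))
\end{verbatim}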
\begin{figure}[h]
\centering{\includegraphics[scale=0.8]{gadzet2cykl.pdf}}
\caption{{\scriptsize A gadget for a $2$-cycle $(q,r)$ .}
} \label{2cykl}
\end{figure}
Let $t$ be any tricky $3$-triangle $t=(p,q,r)$. Among edges of $opp(t)$ we choose one with maximum weight. Suppose that the chosen edge is $(r,q)$. For each such $t$, we build the following gadget.
We add vertices $\gamma^{t-}_p, \gamma^{t+}_p$ and connect them to vertices $e^2_{qp}, e^2_{rp}$ and $e^1_{qp}, e^1_{rp}$, respectively,
via edges of weight $0$. Each of the edges $(e^2_{qp}, p_{in}), (e^2_{rp}, p_{in})$ gets weight $\frac{1}{2}\max\{w(q,p), w(r,p)\}$. Thus,
$w(q_{out}, e^1_{qp})= w(q,p) - \frac{1}{2}\max\{w(q,p), w(r,p)\}, \ w(r_{out}, e^1_{rp})= w(r,p) - \frac{1}{2}\max\{w(q,p), w(r,p)\}$.
We proceed analogously for pairs of edges $(r,q), (p,q)$ and $(p,r), (q,r)$.
Thus, we add vertices $\gamma^{t-}_q, \gamma^{t+}_q$ and connect them to vertices $e^2_{pq}, e^2_{rq}$ and $e^1_{pq}, e^1_{rq}$, respectively,
via edges of weight $0$, and we add vertices $\gamma^{t-}_r, \gamma^{t+}_r$ and connect them to vertices $e^2_{pr}, e^2_{qr}$ and $e^1_{pr}, e^1_{qr}$, respectively,
via edges of weight $0$. Each of the edges $(e^2_{pq}, q_{in}), (e^2_{rq}, q_{in})$ gets weight $\frac{1}{2}\max\{w(p,q), w(r,q)\}$ and each of the edges $(e^2_{pr}, r_{in}), (e^2_{qr}, r_{in})$ gets weight $\frac{1}{2}\max\{w(p,r), w(q,r)\}$.
Additionally, if the $2$-cycle $c=(r,q)$ is not tricky, we add a gadget for $c$, which is the same as the gadget for a tricky $2$-cycle.
The gadget is depicted in Figure \ref{gtriangle}.
We say that a half-edge is {\bf \em incoming} if it is a half-edge of some edge $(u,v)$ incident to $v$. A half-edge of $(u,v)$ incident to $u$ is called {\bf \em outgoing}.
Let $e_1, e_2$ denote two different edges of $G$ incident with the same vertex $v$.
Assume that a relaxed cycle cover $\tilde C$ contains exactly one half-edge of each of $e_1, e_2$.
We say that these two half-edges are {\bf \em crossing} if exactly one of them is incident to $v$ and
{\bf \em non-crossing} otherwise.
A {\em quasi relaxed cycle cover} denotes a relaxed cycle cover that does not satisfy point $(iii)$ of Definition \ref{rel2}.
We say that a (quasi) relaxed cycle cover $\tilde C$ is {\bf \em non-integral} on a set $S$ of edges if there exists some edge $e \in S$ such
that $\tilde C$ contains only one half-edge of $e$.
We say that a half-edge $e_h$ of $\tilde C$ is within a set of edges $F\subseteq E$ if $e_h$ is a half-edge of some edge in $F$.
Let $w(\tilde C)_t$ denote the total weight of half-edges of $\tilde C$ within $t \cup opp(t)$.
\begin{definition}
A tricky triangle $t=(p,q,r)$ is said to be {\bf \em harmonious} in a relaxed cycle cover $\tilde C$ if $\tilde C$ satisfies the following:
\begin{enumerate}
\item The difference between the numbers of edges of $\tilde C$ incoming to $t$ and outgoing of $t$, denoted $dif(t)$, is either zero or two.
\item If $dif(t)=0$, then $\tilde C$ is integral on $t$.
\item If $dif(t)=2$, then depending on the configuration of edges incoming to and outgoing of $t$, $10w(\tilde C)_t +4w(t)$ is upper bounded
by:
\begin{itemize}
\item
$\max\{10w(t) -5w(r,p),
10(w(p,q)+w(q,p))\}$, if $\tilde C$ contains one edge outgoing of $t$, incident to $r$, and three edges incoming to $t$,
\item $ \max\{10w(t) -5w(q,r),
10(w(p,q)+w(q,p))\}$, if $\tilde C$ contains one edge incoming to $t$, incident to $r$, and three edges outgoing of $t$,
\item
$\max\{10w(t) +5w(r,p), 10( w(p,r)+w(p,q)+w(q,p)), 10(w(p,q)+w(p,r)+w(r,p))\} $, if $\tilde C$ contains two edges outgoing of $t$, incident to $p$ and $q$ and no edge incoming to $t$,
\item $\max\{10w(t) +5w(q,r), 10( w(q,r)+w(r,p)+w(p,r)), 10( w(p,r)+w(r,q)+w(q,r))\}$, if $\tilde C$ contains two edges incoming to $t$ incident to $p$ and $q$ and no edge outgoing of $t$.
\end{itemize}
\end{enumerate}
\end{definition}
\begin{lemma}\label{relax}
Any perfect matching of $G'$ yields a quasi relaxed cycle cover $\tilde C$ with the following properties:
\begin{enumerate}
\item[(i)]
for each problematic $2$-cycle $(u,v)$, if $\tilde C$ contains two half-edges from
\newline $\{(u, x_{(u,v)}), (x_{(u,v)}, v), (v, x_{(v,u)}), (x_{(v,u)}, u)\}$, then they either belong to the same edge or are crossing - thus one of them is incident with $u$ and the other with $v$, and they are either both incoming or both outgoing.
\item [(ii)] for each tricky $3$-triangle $t=(p,q,r)$, $t$ is harmonious in $\tilde C$.
\end{enumerate}
\end{lemma}
The proof is in Section \ref{miss}.
We construct an undirected graph $G''$ by extending and modifying $G'$ as follows. For each tricky triangle $t=(p,q,r)$ such that $(q,r)$ is a $2$-cycle of $C_{max}$, we add the following gadget. We add vertices $a_{\{p,q,r\}}, b_{\{p,q,r\}}$ and connect them to vertices $e^1_{pq}, e^2_{rp}$ and $e^2_{pq}, e^1_{rp}$, respectively,
via edges of weight $\frac{\kappa(t)}{2}$ and connect $a_{\{p,q,r\}}$ and $b_{\{p,q,r\}}$ via an edge of weight $0$. We also decrease the weight of each of the edges $(r_{out}, e^1_{rq}), (e^2_{rq}, q_{in}), (q_{out}, e^1_{qr}), (e^2_{qr}, r_{in})$ by $\frac{\kappa(t)}{2}$.
\begin{figure}
\centering{\includegraphics[scale=0.9]{triangle2new.pdf}}
\caption{{\scriptsize A gadget for a triangle $t= (p,q,r)$ .}
} \label{gtriangle}
\end{figure}
\begin{lemma}\label{tricky}
Any perfect matching of $G''$ yields a relaxed cycle cover $\hat C$ with the following properties.
Let $t=(p,q,r)$ be a tricky triangle of $R$, where $c=(q,r)$ is its t-cycle.
\begin{enumerate}
\item[(i)] If $\hat C$ is non-integral on $t \cup c$, then it contains
(i) crossing half-edges within $c$ or (ii) crossing half-edges within $\{(r,p), (p,q)\}$.
\item[(ii)] $\hat C$ contains all edges of $t$ if and only if it also contains a loop incident to $v_t$.
\item[(iii)] If $\hat C$ contains a loop incident to $v'_t$, then it contains no half-edges within $c$.
\end{enumerate}
\end{lemma}
\noindent{\bf Proof.~} Let $M$ be a perfect matching of $G''$. If $M$ does not contain the edge $(a_{\{p,q,r\}}, b_{\{p,q,r\}})$ (which means that these vertices are matched via edges of weight $\frac{\kappa(t)}{2}$) and does not contain any edge corresponding to a half-edge of an edge of $c$, then $\hat C$ contains the loop $e'_t$
with weight $\kappa(t)$. If $M$ contains edges corresponding to all half-edges within $t$, then $\hat C$ contains all edges of $t$ as well as
the loop $e_t$ with weight $-\kappa(t)$. In other respects the proof is similar to that of Lemma \ref{relax}.\hfill $\Box$\\[.1ex]
\begin{theorem}
Any perfect matching of $G''$ yields a relaxed cycle cover $C_1$ improving $C_{max}$.
A maximum weight perfect matching of $G''$ yields a relaxed cycle cover $C_1$ improving $C_{max}$ such that $w(C_1) \geq opt$.
\end{theorem}
\noindent{\bf Proof.~} The first statement follows from the preceding two lemmas.
The second statement follows from the fact that a traveling salesman tour is also a cycle cover that does not contain any $2$-cycles or triangles unless the whole graph has two or three vertices.
\hfill $\Box$\\[.1ex]
\section{Return from $G'_1$ to $G_1$}
While building the multigraph $G_1$, we have modified it in two types of places (tricky $2$-triangles and strange $2$-cycles) creating the multigraph $G'_1$.
\begin{lemma}
Given a path-$20$-coloring of $G'_1$, we can obtain a path-$20$-coloring of $G_1$.
\end{lemma}
\noindent{\bf Proof.~} Let $t=(p,q,r)$ be a tricky triangle of $C_1$ such that in $G'_1$ we have replaced the edges $(p,q), (q,q_1)$ with one edge.
We color the edges as follows. Edge $(q,q_1)$ is colored in the same way as $(p,q_1)$ in $G'_1$.
Suppose first that $w(r,p) \geq w(p,q)$.
Edges $(p_2, p)\in C'_{max}$ and $(p_1, p)$ are colored with at most $15$ colors of ${\cal K}$. We color $(r,p)$ with $5$ colors of ${\cal K} \setminus (col(p_2,p) \cup col(p_1,p))$. For each color $k \in col(r,p)$, if $k \in col(p,p_3)$, then we assign $k$ to $(q,r)$; otherwise we assign $k$
to $(p,q)$. Note that $k$ cannot be assigned to $(q,q_1)$, because $(p,q_1)$ is coincident with $(p,p_3)$ in $G'_1$. Hence $(p,q_1)$
and $(p,p_3)$ have to be diverse in $G'_1$. Next we assign all $15$ colors of ${\cal K} \setminus col(r,p)$ to $(r,q)$.
We easily notice that each of the edges of $t$ as well as the edge $(r,q)$ are safe.
Suppose next that $w(r,p) < w(p,q)$. This case is, in fact, easier than the one above. We choose $5$ colors from the set ${\cal K} \setminus (col(p,p_3) \cup col(q,q_1))$ and assign them to both $(p,q)$ and $(q,r)$. Next, we assign all $15$ colors of ${\cal K} \setminus col(p,r)$ to $(r,q)$.
\koniec
\section{Introduction}
Quantum chromodynamics (QCD), the theory of the strong interaction, requires a non-perturbative approach at hadronic energy scales. The various form factors that can be extracted through the QCD factorization of the cross-section of physical processes give access to properties describing the structure of hadrons. The scalar form factors, for instance, can be used to explore the interplay between the emergent hadronic mass (EHM), a mechanism proposed to describe the large mass of hadrons~\cite{Cui:2020dlm,Roberts:2021nhw}, and the Higgs boson interaction, which increases the masses of the Goldstone bosons. The vector form factors provide insight into electromagnetic properties, and the tensor form factors are useful for beyond-the-standard-model studies. The importance of the pion and kaon to understanding the long-range dynamics of QCD is well-established~\cite{Hagler:2009ni} through extensive experimental investigation of the pion since the 1970s.
In this work we calculate the scalar, vector, and tensor form factors of the pion and kaon. We consider only the connected contributions, as we expect the disconnected contributions to be small, at least for the vector and tensor form factors, for larger-than-physical pion masses. We present the $Q^2$ dependence of the form factors and derived quantities, such as the radii and the tensor anomalous magnetic moment.
\section{Theory and Lattice Setup}
We obtain the form factors for a particular flavor $f$ from the matrix elements of ultra-local operators
\begin{equation}
\langle M({p}') | {\cal O}^f_\Gamma | M({p}) \rangle \,,
\end{equation}
where the operator structure ${\cal O}^f_\Gamma$ for spin-0 mesons is the scalar, ${\cal O}^f_S=\bar{\psi}\hat{1}\psi$, vector, ${\cal O}^f_V=\bar{\psi}\gamma^\mu\psi$, and tensor, ${\cal O}^f_T=\bar{\psi}\sigma^{\mu\nu}\psi$ with $\sigma^{\mu\nu}=\frac{1}{2}[\gamma^\mu,\gamma^\nu]$. We extract the 4-vector momentum transfer, $t\equiv-Q^2$, dependence of the form factors from the off-forward matrix element, where the momentum transfer between the initial (${p}$) and final (${p'}$) state is ${Q}={p'} - {p}$. The decomposition of each matrix element for the general frame in Euclidean space is~\cite{Hagler:2009ni}
\begin{align}
\langle M({p}') | {\cal O}^f_S | M({p}) \rangle &= \frac{1}{\sqrt{4 E(p) E(p')}} A^{M^f}_{S10}\,, \\[1ex]
\langle M({p}') | {\cal O}^f_{V^\mu} | M({p}) \rangle &= -i\, \frac{2\, P^\mu}{\sqrt{4 E(p) E(p')}} \, A^{M^f}_{10}\,, \\[1ex]
\langle M({p}') | {\cal O}^f_{T^{\mu\nu}} | M({p}) \rangle &= i\, \frac{(P^\mu \Delta^\nu - P^\nu \Delta^\mu)}{m_M \sqrt{4 E(p) E(p')}} \,B^{M^f}_{T10}\,.
\label{eq:tensor_decomp2}
\end{align}
$P^\mu$ is the average momentum, $P \equiv (p'+p)/2$, and $\Delta$ is the momentum difference, $\Delta \equiv p'-p$. The mass of meson $M$ is indicated by $m_M$, and its energy at momentum $\vec{p}$ is $E(p){=}\sqrt{m_M^2 + \vec{p}\,^2}$. We omit the index $M$ from the energy to simplify the notation. Here we use the notation $F^{M,f}_S \equiv A^{M^f}_{S10}$, $F^{M,f}_V \equiv A^{M^f}_{10}$, $F^{M,f}_T \equiv B^{M^f}_{T10}$.
We use an ensemble of twisted-mass clover fermions and Iwasaki improved gluons with pion mass 265 MeV and $a=0.09471(39)$ fm. The ensemble contains the two light mass-degenerate quarks as well as the strange and charm quarks in the sea ($N_f=2+1+1$). Additionally, the volume ($L^3\times T$) is $32^3\times64$, $L m_\pi=4$, and $L=3.0$ fm. These gauge configurations have been produced by the Extended Twisted Mass Collaboration (ETMC)~\cite{Alexandrou:2018egz}.
We extract matrix elements in both the rest and boosted frames. For the latter, we choose a final-state momentum of the form $\mathbf{p'}=2\pi \mathbf{n'}/L$ with $\mathbf{n'}=(\pm1,\pm1,\pm1)$. This choice is such that one can extract matrix elements of operators with up to three covariant derivatives~\cite{Alexandrou:2021mmi}, avoiding any mixing under renormalization. The fact that we have eight combinations of the momentum boost increases the computational cost by a factor of eight. We note that the lattice data for these combinations can be averaged in the forward limit, as we have done for $\langle x \rangle$ - $\langle x^3 \rangle$~\cite{Alexandrou:2020gxs,Alexandrou:2021mmi}. However, this does not apply to the form factors because the various $\mathbf{p'}$ do not correspond to the same value of $Q^2$ in the boosted frame.
The statistics for the rest frame is the same for all values of the source-sink time separations ($t_s/a=12,14,16,18,20,24$) and equal to 1,952. For the boosted frame we have a statistics of 46,848 for $t_s/a=12$, and 101,504 for $t_s/a=14,16,18$. More details can be found in Ref.~\cite{Alexandrou:2021ztx}.
We extract the meson matrix elements using the optimized ratio
\begin{equation}
\label{eq:ratio}
R^M_\Gamma(t_s,t;\mathbf{p}', \mathbf{q} = \mathbf{p}' - \mathbf{p}) = \frac{ C^M_\Gamma(t_s,t;\mathbf{p}',\mathbf{q}) }{ C_M(t;\mathbf{p}'^2) }
\sqrt{ \frac{ C_M(t_s-t;\mathbf{p}^2) C_M(t;\mathbf{p}'^2) C_M(t_s;\mathbf{p}'^2) }{ C_M(t_s-t;\mathbf{p}'^2) C_M(t;\mathbf{p}^2) C_M(t_s;\mathbf{p}^2) } } \,,
\end{equation}
which cancels the time dependence in the exponentials, as well as the overlaps between the interpolating field and the meson state. As the ratio in Eq.~\eqref{eq:ratio} is written for a general frame, we use $C_M(t;\mathbf{p}^2)=c_0 e^{-E_0(\mathbf{p}^2) t}$ for the two-point functions, where $c_0$ is calculated from the two-state fit on the two-point functions, and $E_0=\sqrt{m^2 + \mathbf{p}^2}$ is calculated from the plateau fit on the effective mass. At insertion times far enough from the source and sink positions, the ratio becomes independent of insertion time,
\begin{equation}
R^M_\Gamma(t_s,t;\mathbf{p}', \mathbf{q}) \xlongrightarrow[\text{$Et\gg1$}]{\text{$\Delta E(t_s-t)\gg1$}} \Pi^M_\Gamma(t_s; \mathbf{p}', \mathbf{q}) \,.
\label{eq:ratio2}
\end{equation}
We use two methods to calculate $\Pi^M_\Gamma$: \textbf{(a)} by fitting the plateau region of the data to a constant value; \textbf{(b)} by performing a two-state fit on the three-point functions. Combining these methods, we can study and eliminate excited-states contamination. Representative results are shown in the next section.
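A minimal numerical sketch of the ratio of Eq.~\eqref{eq:ratio} follows (the layout is hypothetical: \texttt{C3} is a two-dimensional array of the three-point function over $(t_s,t)$, and \texttt{C2} a fitted single-state model of the two-point function with illustrative parameter values).
\begin{verbatim}
import numpy as np

def C2(t, psq, c0=1.0, m=0.12):
    # Single-state two-point function c0*exp(-E0*t), E0 = sqrt(m^2+p^2).
    return c0 * np.exp(-np.sqrt(m**2 + psq) * t)

def ratio(C3, ts, t, psq, ppsq):
    # Optimized ratio R(ts, t): the exponentials and overlaps cancel.
    num = C2(ts - t, psq) * C2(t, ppsq) * C2(ts, ppsq)
    den = C2(ts - t, ppsq) * C2(t, psq) * C2(ts, psq)
    return C3[ts, t] / C2(t, ppsq) * np.sqrt(num / den)
\end{verbatim}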
\section{Results on Form Factor}
Due to space limitations, we only show selected results on the kaon form factors. The complete set of results can be found in Ref.~\cite{Alexandrou:2021ztx}. In Fig.~\ref{fig:FK_v_all} we show a comparison between the rest and boosted frame for the vector form factor, and in Fig.~\ref{fig:FK_s_t_all} for the up and strange contributions to the scalar and tensor ones. We only include $t_s/a=12,14,16,18$ for better visibility, as well as the two-state fits. We find that the results for the vector form factor become fully compatible between the two frames at $t_s$ values where excited-state effects are eliminated. There is some tension in the slope between the two frames for the up-quark scalar and tensor form factors, with the rest-frame results being slightly lower. The strange-quark scalar and tensor form factors are compatible between the two frames.
\begin{figure}[h!]
\begin{minipage}{6cm}
\includegraphics[scale=0.20]{FK_psq.pdf}
\end{minipage}
\hfill\begin{minipage}{6cm}
\caption{Comparison of the vector form factor of the kaon between the rest (open symbols) and boosted frame (filled symbols). The two-state fit as applied on these data is shown with purple stars. Blue, red, green, and magenta points correspond to $t_s/a=12,\,14,\,16,\,18$. Statistical errors are included, but are too small to be visible.}
\label{fig:FK_v_all}
\end{minipage}
\end{figure}
\vspace*{-0.40cm}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.21]{FFs_kaon_psq.pdf}
\caption{Comparison of the scalar and tensor form factor of the kaon between the rest and boosted frame. The notation is the same as Fig.~\ref{fig:FK_v_all}. Statistical errors are included, but are too small to be visible. }
\label{fig:FK_s_t_all}
\end{figure}
\section{Parametrization of Form Factors}
We parameterize the $Q^2$ dependence of the form factors using the monopole ansatz motivated by the Vector Meson Dominance (VMD) model~\cite{OConnell:1995fwv},
\begin{equation}
\label{eq:fit}
F_\Gamma(Q^2) = \frac{F_\Gamma(0)}{1 + \frac{Q^2}{{\cal M}_\Gamma^2}} \,,
\qquad\qquad
\langle r^2 \rangle_\Gamma = -\frac{6}{F_\Gamma(0)} \frac{\partial F_\Gamma (Q^2)}{\partial Q^2}\Bigg{|}_{Q^2=0} = \frac{6}{{\cal M}^2_\Gamma}\,.
\end{equation}
$F_\Gamma(0)$ is the forward limit of the form factor, and ${\cal M}_\Gamma$ is the monopole mass. For the scalar and vector form factors, we also employ a one-parameter fit by fixing $F_\Gamma(0)$ to the value obtained from our lattice data. The radius, $\langle r^2 \rangle_\Gamma$, is an interesting quantity that is defined as the slope of the form factor at $Q^2=0$ and can be obtained from ${\cal M}_\Gamma$ as shown above.
For the parameterization, we utilize the results from the two-state fits to ensure that excited states are eliminated. We apply the fit of Eq.~(\ref{eq:fit}) to the results of the rest frame, the boosted frame, and a combination of both frames. For the pion, we test $Q^2$ ranges up to 0.55, 1, and 2.5 GeV$^2$. For the kaon, we test $Q^2$ up to 1 and 3 GeV$^2$. For both mesons, we quote the values of $F_\Gamma(0)$ and ${\cal M}_\Gamma$ from the combined fit over the entire $Q^2$ range.
In Fig.~\ref{fig:Pion_fit} we plot $F_\Gamma(Q^2)$ using the two-state fit data in the rest and boosted frames for the pion. We compare these against the fitted form factors for the cases described above. There is a small difference between the fits of the rest and boosted data sets and the two-state fit data for the case of $Q_{\rm max}^2=0.55$ GeV$^2$. The fits of the combined data sets fall between those of the individual rest and boosted fits, as expected. We also find agreement between the bands of $F_S$ and $F_V$ and the corresponding value at $Q^2=0$. The discrepancy for $\kappa_T$ discussed above is due to the change in the slope for different data sets.
\begin{figure}[h!]
\begin{minipage}{7cm}
\includegraphics[scale=0.25]{FFs_pion_monopoleFit_psq.pdf}
\end{minipage}
\hfill\begin{minipage}{5cm}
\caption{From top to bottom: the scalar, vector and tensor form factors of the pion using the two-state fit in the rest (blue points) and boosted (red points) frame. The two-parameter fitted form factors are shown with bands for the cases of the rest frame R (blue), boosted frame B (red), and combined data R$\&$B (green). The length of each band indicates the $Q^2$ interval used for the fit. Statistical errors are included, but are too small to be visible.}
\label{fig:Pion_fit}
\end{minipage}
\end{figure}
For the kaon, we plot the two-state fit data compared to fitted form factors for the vector in Fig.~\ref{fig:kaon_fit_v} and the scalar and tensor in Fig.~\ref{fig:kaon_fit_s_t} as done for the pion above. We find the fitted $F_\Gamma(0)$ to be independent of the fit range and the included data sets. There is some tension between the estimates of $M_S^{K^u}$ extracted from the rest frame and that of the boosted and combined frames. Similar behavior is observed in the tensor plot for the up-quark.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.25]{FK_monopoleFit_psq.pdf}
\caption{Parametrization of the vector form factor for the kaon. The notation is the same as Fig.~\ref{fig:Pion_fit}.}
\label{fig:kaon_fit_v}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.23]{FFs_kaon_monopoleFit_psq.pdf}
\caption{Parametrization of the scalar and tensor form factor for the up (left) and strange (right) quark components of the kaon. The notation is the same as Fig.~\ref{fig:Pion_fit}. }
\label{fig:kaon_fit_s_t}
\end{figure}
As stated above, we choose the full $Q^2$ range and the combined frame for the fit parameters of each meson. However, for the radii calculations for the pion, we give results by constraining the fit up to $Q^2=0.55$ GeV$^2$. We also do not use the entire range for the kaon, constraining the fit up to $Q^2=1$ GeV$^2$. We report differences between radii extracted from differently constrained fits as systematic error. Table~\ref{tab:pion_radii} contains the fit parameters for the selected data sets, as well as the radii. Our results for $\langle r^2 \rangle^{\pi^u}_S$ are compatible with the ones obtained in Ref.~\cite{Gulpers:2013uca} from the connected contributions on an $N_f=2$ ${\cal O}(a)$-improved Wilson fermions ensemble at a pion mass of 280 MeV. Similar values are also obtained from a 310 MeV pion mass ensemble of $N_f=2+1$ overlap fermions~\cite{Kaneko:2010ru}. A sizeable logarithmic behavior in the pion mass is found in chiral perturbation theory~\cite{Gasser:1990bv,Bijnens:1998fm} which causes a rise in the radii. Therefore, at this stage, we do not attempt any comparison with the PDG value of $\langle r^2 \rangle^{\pi}_{V}$, as the ensemble we used is not at the physical value of the pion mass. Additionally, we observe that the extraction of the tensor radius is more sensitive to the fit range. We note that our results for $\langle r^2 \rangle^{K}_V$ are compatible with the ones of Ref.~\cite{Kaneko:2010ru} obtained from an $N_f=2+1$ ensemble of overlap fermions producing a pion mass of 310 MeV.
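For concreteness, the conversion from a monopole mass in GeV to a radius in fm$^2$, and the fit-range systematic quoted in Tab.~\ref{tab:pion_radii}, amount to the following short computation (the input masses are placeholders, not our fit values):
\begin{verbatim}
HBARC = 0.1973  # GeV fm

def radius_fm2(M_GeV):
    # <r^2> = 6 (hbar c)^2 / M^2, converted from GeV^-2 to fm^2
    return 6.0 * HBARC**2 / M_GeV**2

# Placeholder monopole masses from the restricted and full Q^2 ranges (GeV)
M_restricted, M_full = 0.90, 0.83
r2 = radius_fm2(M_restricted)
sys_err = abs(radius_fm2(M_full) - r2)
print(f"<r^2> = {r2:.3f} fm^2 (fit-range systematic: {sys_err:.3f} fm^2)")
\end{verbatim}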
\begin{table}[h!]
\centering
\renewcommand{\arraystretch}{1.2}
\renewcommand{\tabcolsep}{3pt}
\resizebox{\textwidth}{!}{
\begin{tabular}{l c c c c c c c c c}
\\ [-3ex]
& $F_{S}(0)$ & $\,F_V(0)$ & $\,\kappa_T$ & $\,M_S$ & $\,M_V$ & $\,M_T$ & $\,\langle r^2 \rangle_{S}$ & $\,\langle r^2 \rangle_{V}$ & $\,\langle r^2 \rangle_{T}$\\%[0.75ex]
\hline
$\pi^u$ & $1.165(6)(4)$ & $1.017(6)(6)$ & $0.376(5)(6)$ & $1.221(36)(60)$ & $0.832(8)(14)$ & $0.800(12)(29)$ & $0.232(22)(54)$ & $0.291(6)(36)$ & $0.461(44)(121)$\\%[0.5ex]
\hline
$K^u$ & $1.093(8)(10)$ & $1.016(5)(11)$ & $0.844(9)(61)$ & $1.291(15)(40)$ & $0.822(5)(19)$ & $0.724(5)(59)$ & $0.149(3)(10)$ & $0.289(3)(13)$ & $0.382(4)(45)$\\
\hline
$K^s$ & $1.158(7)(8)$ & $1.017(4)(11)$ & $0.717(5)(17)$ & $1.552(17)(46)$ & $1.000(6)(22)$ & $0.930(6)(37)$ & $0.103(2)(6)$ & $0.289(3)(13)$ & $0.250(3)(20)$\\
\hline
\end{tabular}
}
\caption{The fit parameters for the pion and kaon form factors. The monopole masses are given in GeV and the radii in fm$^2$. The number in the first parentheses is the statistical uncertainty. The number in the second parentheses is the systematic error related to the fit range, namely the difference from the values obtained using $Q^2_{\rm max}=1$ GeV$^2$ for the pion and $Q^2_{\rm max}=3$ GeV$^2$ for the kaon.}
\label{tab:pion_radii}
\vspace*{0.2cm}
\end{table}
\section{SU(3) flavor symmetry breaking}
The pion and kaon form factors are useful for studying SU(3) flavor symmetry breaking effects, which have been observed in nature in the charge radii of $\pi^{\pm}$ and $K^\pm$, as well as in $\pi^{0}$ and $K^0$. We examine the ratios $F^{\pi^u}/F^{K^u}$, $F^{\pi^u}/F^{K^s}$, and $F^{K^u}/F^{K^s}$ for the form factors to draw conclusions on these effects. Here, we only show results for the vector case. Since the value of $Q^2$ depends on the mass of the meson, we use the fitted values of the form factors in these ratios. We are also interested in the effects of excited-state contamination on the ratios, so we use the parameterizations on individual plateau values as well as the two-state fit.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.23]{SU3_ratio_GE.pdf}
\caption{The ratio $F_V^{\pi^u}/F_V^{K^u}$ (top), $F_V^{\pi^u}/F_V^{K^s}$ (center), and $F_V^{K^u}/F_V^{K^s}$ (bottom) for the vector form factor as a function of $Q^2$ using the results obtained from both frames. The results for $t_s/a=14,\,18$ and the two-state fit are shown with blue, red and green bands, respectively.}
\label{fig:SU3_vector}
\end{figure}
In Fig.~\ref{fig:SU3_vector} we show the ratios described above, and find that the excited-state contamination is much more suppressed than in the individual form factors. Notably, the ratio $F^{\pi^u}/F^{K^u}$ has little $Q^2$ dependence and is nearly 1. Due to the similarity of the up-quark components of the pion and the kaon, we find the up-quark contribution to be about $80\%$ of that of the strange quark for the ratios $F^{\pi^u}/F^{K^s}$ and $F^{K^u}/F^{K^s}$.
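A sketch of how such ratios and their uncertainties can be built from the monopole parameterizations is given below, using the vector-channel central values and statistical errors of Tab.~\ref{tab:pion_radii} and neglecting parameter correlations for simplicity:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def monopole(Q2, F0, M):
    return F0 / (1.0 + Q2 / M**2)

# Vector-channel central values and statistical errors from the table
F0_pi, M_pi = (1.017, 0.006), (0.832, 0.008)
F0_K,  M_K  = (1.016, 0.005), (0.822, 0.005)

Q2 = np.linspace(0.0, 2.0, 41)
# Gaussian resampling of the fit parameters (correlations neglected)
draws = np.array([monopole(Q2, rng.normal(*F0_pi), rng.normal(*M_pi))
                  / monopole(Q2, rng.normal(*F0_K), rng.normal(*M_K))
                  for _ in range(2000)])
ratio, ratio_err = draws.mean(axis=0), draws.std(axis=0)
\end{verbatim}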
\section{Summary}
We present a calculation in lattice QCD of the scalar, vector, and tensor form factors for the pion and kaon obtained on an $N_f=2+1+1$ ensemble of twisted mass fermions with clover improvement that corresponds to 265 MeV pion mass and 530 MeV kaon mass. We renormalize the scalar and tensor form factors non-perturbatively and give the results in the $\overline{\rm MS}$ scheme at a scale of $2$ GeV. The vector form factor does not need renormalization as we use the conserved vector operator.
We utilize two kinematic setups to obtain the form factors: the rest frame and a momentum-boosted frame of 0.72 GeV ($\mathbf{p'}=\frac{2\pi}{L}(\pm1,\pm1,\pm1)$). We use a factor of 50 more statistics in the boosted frame than in the rest frame in order to control statistical uncertainties. We extract the form factors up to $Q^2=2.5$ GeV$^2$ for the pion and up to $Q^2=3$ GeV$^2$ for the kaon. Due to frame independence, we are able to combine the data of the rest and boosted frames. We find excellent agreement between the two frames for the vector form factors of both mesons, as well as for the strange-quark contributions to the scalar and tensor form factors of the kaon. We find good agreement in the small-$Q^2$ region for the up-quark part of the pion and kaon scalar and tensor form factors, with deviations in the slope from $0.25-0.5$ GeV$^2$ for the pion and $0.35-1$ GeV$^2$ for the kaon. This indicates systematic uncertainties, such as cutoff effects.
We give final results for the form factors using the two-state fits and parameterize their $Q^2$ dependence using a monopole fit. This leads to the scalar, vector, and tensor monopole masses and the corresponding radii. We also extract the tensor anomalous magnetic moment, $\kappa_T$, which can only be obtained from fits on the tensor form factor data. In the study of the sensitivity of the extracted parameters to the fit range of $Q^2$ and the included frames, we find some tension in the scalar and tensor monopole masses and radii depending on the data sets included in the fit. For the pion radii we use all data up to $Q^2=0.55$ GeV$^2$, and up to $Q^2=1$ GeV$^2$ for the kaon. We provide a systematic error by varying the fit range.
We address SU(3) flavor symmetry breaking effects by comparing the parameterized form factors for the pion and kaon. We find that excited states are suppressed in these ratios. Additionally, we find mild $Q^2$ dependence in the $F^{\pi^u}/F^{K^u}$ ratio for all operators. For the $F^{\pi^u}/F^{K^s}$ and $F^{K^u}/F^{K^s}$ cases we find SU(3) flavor symmetry breaking effects of up to $20\%$.
\section{Acknowledgements}
We would like to thank all members of ETMC for a very constructive and enjoyable collaboration. M.C. thanks Martin Hoferichter for interesting discussions on Ref.~\cite{Hoferichter:2018zwu}.
M.C. and J.D. acknowledge financial support by the U.S. Department of Energy Early Career Award under Grant No.\ DE-SC0020405.
K.H. is financially supported by the Cyprus Research Promotion Foundation under contract numbers POST-DOC/0718/0100 and CULTURE/AWARD-YR/0220, and by the EuroCC project funded by the Deputy Ministry of Research, Innovation and Digital Policy, the Cyprus Research and Innovation Foundation and
the European High-Performance Computing Joint Undertaking (JU) under grant agreement No. 951732. The JU received
support from the European Union’s Horizon 2020 research and innovation programme.
S.B. is supported by the H2020 project PRACE 6-IP (grant agreement No 82376) and the EuroCC project (grant agreement No. 951732).
C.L. is supported by the Argonne National Laboratory with a research subcontract with Temple University.
A.V. is supported by the U.S. National Science Foundation under Grants No. PHY17-19626 and PHY20-13064.
This work was in part supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, contract no.~DE-AC02-06CH11357 and the European Joint Doctorate program STIMULATE funded from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 765048.
This work used computational resources from Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number TG-PHY170022.
It also includes calculations carried out on the HPC resources of Temple University, supported in part by the National Science Foundation through major research instrumentation grant number 1625061 and by the US Army Research Laboratory under contract number W911NF-16-2-0189.
\bibliographystyle{ieeetr}
\section{Introduction}
Galaxy clusters are the most massive collapsed objects in the Universe. Their total mass is dominated by dark matter ($\sim80\%$), which shapes deep potential wells where the baryons ($\sim20\%$) virialize. The majority of the baryonic matter in clusters is in the form of the intra-cluster medium (ICM), a hot ($kT \sim 2 - 10$ \kev) and tenuous ($n \sim 10^{-3} - 10^{-4}$ cm$^{-3}$) plasma emitting via thermal bremsstrahlung in the X-rays. In the past two decades, \chandra\ and \xmm\ established a dichotomy between cool-core (CC) and non cool-core (NCC) clusters \citep[\eg][]{molendi01}, depending on whether their core region shows a drop in the temperature profile or not. This drop is a natural consequence of the strongly peaked X-ray emissivity of relaxed systems, which leads to gas cooling in this denser environment, counterbalanced by active galactic nucleus (AGN) feedback \citep[\eg][for a review]{peterson06rev}. On the other hand, disturbed systems exhibit shallower X-ray emissivity, hence lower cooling rates. For this reason there is a connection between the properties of the cluster core and its dynamical state: relaxed (\ie\ in equilibrium) systems naturally tend to form a CC, while NCCs are typically found in unrelaxed objects \citep[\eg][]{leccardi10}, where energetic events such as mergers have a tremendous impact on the core, leading either to its direct disruption \citep[\eg][]{russell12, rossetti13, wang16} or to its mixing with the surrounding hot gas \citep{zuhone10}. \\
\indent
In the hierarchical process of large-scale structure formation, the cluster mass is assembled through the aggregation and infall of smaller structures \citep[\eg][for a review]{kravtsov12rev}. Mergers between galaxy clusters are the most energetic phenomena in the Universe, with a total kinetic energy in the range $10^{63-64}$ erg dissipated during the collision within a crossing time-scale ($\sim$ Gyr). At this stage shock waves, cold fronts, hydrodynamic instabilities, turbulence, and non-thermal components are generated in the ICM, making merging clusters unique probes of several aspects of plasma astrophysics. \\
\indent
The sub-arcsec resolution of \chandra\ in particular allowed detailed studies of previously unseen edges, \ie\ shocks and cold fronts \citep[see][for a review]{markevitch07rev}. Both are sharp surface brightness (SB) discontinuities that differ in the sign of the temperature jump across the front. Shocks mark pressure discontinuities where the gas is heated in the downstream (\ie\ post-shock) region, showing higher temperature values with respect to the upstream (\ie\ pre-shock) region. In cold fronts, instead, this jump is inverted and the pressure across the edge is almost continuous. \\
\indent
Shocks and cold fronts have been observed in several galaxy clusters that are clearly undergoing significant merging activity \citep[\eg][for some collections]{markevitch07rev, owers09sample, ghizzardi10, markevitch10arx}. The most remarkable example is probably the Bullet Cluster \citep{markevitch02bullet}, where an infalling subcluster (the ``Bullet'') creates a contact discontinuity between its dense and low-entropy core and the surrounding hot gas. Ahead of this cold front another drop in SB, but with a reversed temperature jump, \ie\ a shock front, is also detected. The observation of this kind of front requires that the collision occurs almost in the plane of the sky, as projection effects can hide the SB and temperature jumps. Since shocks move quickly into the cluster outskirts, where the thermal brightness is fainter, they are more difficult to observe than cold fronts. \\
\indent
To be thorough, we mention that morphologically relaxed clusters can also exhibit SB discontinuities. However, in this case their origin is different: shocks can be associated with the outbursts of the central AGN \citep[\eg][]{forman05, mcnamara05, nulsen05} and cold fronts can be produced by sloshing motions of the cluster core that are likely induced by off-axis minor mergers \citep[\eg][]{ascasibar06, roediger11, roediger12}. \\
\indent
The observation of shocks and cold fronts allows one to investigate relevant physical processes in the ICM. Shocks (and turbulence) are able to (re)-accelerate particles and amplify magnetic fields, leading to the formation of cluster-scale diffuse radio emission known as radio relics (and radio halos; \eg\ \citealt{brunetti14rev}, for a review). In the presence of a strong shock it is also possible to investigate the electron-ion equilibration timescale in the ICM \citep{markevitch06}. Cold fronts are complementary probes of the ICM microphysics \citep[see][for a recent review]{zuhone16rev}. The absence of Rayleigh-Taylor or Kelvin-Helmholtz instabilities at these sharp discontinuities indeed gives information on the suppression of transport mechanisms in the ICM (\eg\ \citealt{ettori00, vikhlinin01cold, vikhlinin01magnetic}; however, see \citealt{ichinohe17}). The cold fronts generated by the infall of groups and galaxies into clusters \citep[\eg][for recent works]{eckert14, eckert17a2142, ichinohe15, degrandi16, su17ngc1404} enable the study of other physical processes, such as magnetic draping and ram pressure stripping, providing information on the plasma mixing in the ICM \citep[\eg][and references therein]{dursi08}. \\
\indent
Currently, the number of detected edges in galaxy clusters is modest owing to observational limitations. This is reflected in the handful of merger shocks that have been confirmed using both X-ray imaging and spectral analysis. In this work we aim to search in an objective way for new merger-induced shocks and cold fronts in massive NCC galaxy clusters. The reason is to look for elusive features that can be followed up in the radio band. In practice, we analyzed 15 clusters that were essentially selected because adequate X-ray data exist for them in the \chandra\ archive. The \chandra\ satellite is the best instrument to resolve these sharp edges thanks to its excellent spatial resolution. We applied different techniques for spatial and spectral analysis, including the application of an edge detection algorithm to the cluster images, the extraction and fitting of SB profiles, the spectral modeling of the X-ray (astrophysical and instrumental) background, and the production of maps of the ICM thermodynamical quantities. This analysis is designed to properly characterize sharp edges, distinguishing shocks from cold fronts. \\
\indent
The paper is organized as follows. In Section~\ref{sec:sample} we present the cluster sample. In Section~\ref{sec:analysis} we outline the edge-detection procedure and provide details about the X-ray data reduction (see also Appendices~\ref{app:absorption} and \ref{app:nxb}). In Section~\ref{sec:search} we describe how shocks and cold fronts were characterized in the analysis and in Section~\ref{sec:results} we present our results. Finally, in Section~\ref{sec:conclusions} we summarize and discuss our work. \\
\indent
Throughout the paper, we assume a \lcdm\ cosmology with $\omegal = 0.7$, $\omegam = 0.3$, and $\hzero = 70$ \kmsmpc. Statistical errors are provided at the $1\sigma$ confidence level, unless stated otherwise.
\section{Cluster sample}\label{sec:sample}
\begin{table*}
\centering
\caption{The galaxy clusters analyzed in this work (\textit{top}) and the ones that have been excluded as the presence of a shock/cold front (or both) has been already claimed (\textit{bottom}). Reported values of \mfive\ and $K_0$ are taken from \citet{planck14xxix} and \citet{cavagnolo09}, respectively.}
\label{tab:sample}
\begin{tabular}{lccccccc}
\hline
Cluster name & RA$_{\rm{J}2000}$ & DEC$_{\rm{J}2000}$ & \mfive\ & $z$ & $K_0$ & Shock & Cold front\\
& (h,m,s) & (\deg,\arcmin,\arcsec) & ($10^{14}$ M$_\odot$) & & (keV cm$^2$) & (ref.) & (ref.) \\
\hline
A2813 & 00 43 24 & $-$20 37 17 & 9.16 & 0.292 & $268\pm44$ & $\ldots$ & $\ldots$ \\
A370 & 02 39 50 & $-$01 35 08 & 7.63 & 0.375 & $322\pm91$ & \multicolumn{2}{c}{$^{\tilde{1}}$} \\
A399 & 02 57 56 & +13 00 59 & 5.29 & 0.072 & $153\pm19$ & $\ldots$ & $^1$ \\
A401 & 02 58 57 & +13 34 46 & 6.84 & 0.074 & $167\pm8$ & $\ldots$ & $^1$ \\
MACS J0417.5-1154 & 04 17 35 & $-$11 54 34 & 11.7 & 0.440 & $27\pm7$ & $\ldots$ & $^1$ \\
RXC J0528.9-3927 & 05 28 53 & $-$39 28 18 & 7.31 & 0.284 & $73\pm14$ & $\ldots$ & $^1$ \\
MACS J0553.4-3342 & 05 53 27 & $-$33 42 53 & 9.39 & 0.407 & $\ldots$ & $^1$ & $^1$ \\
AS592 & 06 38 46 & $-$53 58 45 & 6.71 & 0.222 & $59\pm14$ & $^1$ & $\ldots$ \\
A1413 & 11 55 19 & +23 24 31 & 5.98 & 0.143 & $164\pm8$ & $\ldots$ & $\ldots$ \\
A1689 & 13 11 29 & $-$01 20 17 & 8.86 & 0.183 & $78\pm8$ & $\ldots$ & $\ldots$ \\
A1914 & 14 26 02 & +37 49 38 & 6.97 & 0.171 & $107\pm18$ & $^1$ & $^1$ \\
A2104 & 15 40 07 & $-$03 18 29 & 5.91 & 0.153 & $161\pm42$ & $^1$ & $\ldots$ \\
A2218 & 16 35 52 & +66 12 52 & 6.41 & 0.176 & $289\pm20$ & $^1$ & $\ldots$ \\
Triangulum Australis & 16 38 20 & $-$64 30 59 & 7.91 & 0.051 & $\ldots$ & \multicolumn{2}{c}{$^{\tilde{1}}$} \\
A3827 & 22 01 56 & $-$59 56 58 & 5.93 & 0.098 & $165\pm12$ & $\ldots$ & $\ldots$ \\
\hline
A2744 & 00 14 19 & $-$30 23 22 & 9.56 & 0.308 & $438\pm59$ & $^2$ & $^3$ \\
A115 & 00 55 60 & +26 22 41 & 7.20 & 0.197 & $\ldots$ & $^4$ & $\ldots$ \\
El Gordo & 01 02 53 & $-$49 15 19 & 8.80 & 0.870 & $\ldots$ & $^5$ & $\ldots$ \\
3C438 & 01 55 52 & +38 00 30 & 7.35 & 0.290 & $\ldots$ & $^6$ & $^6$ \\
A520 & 04 54 19 & +02 56 49 & 7.06 & 0.199 & $325\pm29$ & $^7$ & $\ldots$ \\
A521 & 04 54 09 & $-$10 14 19 & 6.90 & 0.253 & $260\pm36$ & $^8$ & $^8$ \\
Toothbrush Cluster & 06 03 13 & +42 12 31 & 11.1 & 0.225 & $\ldots$ & $^{9,10}$ & $^{10}$ \\
Bullet Cluster & 06 58 31 & $-$55 56 49 & 12.4 & 0.296 & $307\pm19$ & $^{11,12}$ & $^{11}$ \\
MACS J0717.5+3745 & 07 17 31 & +37 45 30 & 11.2 & 0.546 & $220\pm96$ & $\ldots$ & $^{13}$ \\
A665 & 08 30 45 & +65 52 55 & 8.23 & 0.182 & $135\pm23$ & $^{14}$ & $^{14}$ \\
A3411 & 08 41 55 & $-$17 29 05 & 6.48 & 0.169 & $270\pm5$ & $^{15}$ & $\ldots$ \\
A754 & 09 09 08 & $-$09 39 58 & 6.68 & 0.054 & $270\pm24$ & $^{16}$ & $^{17}$ \\
MACS J1149.5+2223 & 11 49 35 & +22 24 11 & 8.55 & 0.544 & $281\pm39$ & $\ldots$ & $^{18}$ \\
Coma Cluster & 12 59 49 & +27 58 50 & 5.29 & 0.023 & $\ldots$ & $^{19,20}$ & $\ldots$ \\
A1758 & 13 32 32 & +50 30 37 & 7.99 & 0.279 & $231\pm37$ & $\ldots$ & $^{21}$ \\
A2142 & 15 58 21 & +27 13 37 & 8.81 & 0.091 & $68\pm3$ & $\ldots$ & $^{22}$ \\
A2219 & 16 40 21 & +46 42 21 & 11.0 & 0.226 & $412\pm43$ & $^{23}$ & $\ldots$ \\
A2256 & 17 03 43 & +78 43 03 & 6.34 & 0.058 & $350\pm12$ & $^{24}$ & $^{25}$ \\
A2255 & 17 12 31 & +64 05 33 & 5.18 & 0.081 & $529\pm28$ & $^{26}$ & $\ldots$ \\
A2319 & 19 21 09 & +43 57 30 & 8.59 & 0.056 & $270\pm5$ & $\ldots$ & $^{27}$ \\
A3667 & 20 12 30 & $-$56 49 55 & 5.77 & 0.056 & $160\pm15$ & $^{28,29}$ & $^{30}$ \\
AC114 & 22 58 52 & $-$34 46 55 & 7.78 & 0.312 & $200\pm28$ & $\ldots$ & $^{31}$ \\
\hline
\multicolumn{8}{{p{.8\textwidth}}}{\textit{Notes.} References: $^1$ this work (if a tilde is superimposed the edge nature is uncertain); $^2$ \citet{eckert16}; $^3$ \citet{owers11}; $^4$ \citet{botteon16a115}; $^5$ \citet{botteon16gordo}; $^6$ \citet{emery17}; $^7$ \citet{markevitch05}; $^8$ \citet{bourdin13}; $^9$ \citet{ogrean13toothbrush}; $^{10}$ \citet{vanweeren16toothbrush}; $^{11}$ \citet{markevitch02bullet}; $^{12}$ \citet{shimwell15}; $^{13}$ \citet{vanweeren17macs0717}; $^{14}$ \citet{dasadia16a665}; $^{15}$ \citet{vanweeren17a3411}; $^{16}$ \citet{macario11}; $^{17}$ \citet{ghizzardi10}; $^{18}$ \citet{ogrean16} $^{19}$ \citet{akamatsu13coma}; $^{20}$ \citet{ogrean13coma}; $^{21}$ \citet{david04}; $^{22}$ \citet{markevitch00}; $^{23}$ \citet{canning17}; $^{24}$ \citet{trasatti15}; $^{25}$ \citet{sun02}; $^{26}$ \citet{akamatsu17a2255}; $^{27}$ \citet{ohara04}; $^{28}$ \citet{finoguenov10}; $^{29}$ \citet{storm17arx}; $^{30}$ \citet{vikhlinin01cold}; $^{31}$ \citet{defilippis04}.}
\end{tabular}
\end{table*}
\begin{table*}
\centering
\caption{Summary of the \chandra\ observations analyzed in this work. The net exposure time is after the flare filtering. The averaged values of $N_{\rm H_{I}}$ \citep{kalberla05} and $N_{\rm H,tot}$ \citep{willingale13} measured in the direction of the clusters are also reported; these are compared in Fig.~\ref{fig:nh-vs-nh}.}
\label{tab:chandra_obs}
\begin{tabular}{lccccccc}
\hline
Cluster name & & Observation & Detector & Exposure & Total exposure & $N_{\rm H_{I}}$ & $N_{\rm H,tot}$ \\
& & ID & (ACIS) & (ks) & (net ks) & $10^{20}$ cm$^{-2}$ & $10^{20}$ cm$^{-2}$ \\
\hline
\multirow{2}*{A2813} & \ldelim\{{2}{1mm} & 9409, 16278, 16366 & I, I, S & 20, 8, 37 &
\multirow{2}*{114} & \multirow{2}*{1.83} & \multirow{2}*{1.93} \\
& & 16491, 16513 & S, S & 37, 30 \\
A370 & & 515$^\dagger$, 7715 & S$^\ast$, I & 90, 7 & 64 & 3.01 & 3.32 \\
A399 & & 3230 & I & 50 & 42 & 10.6 & 17.1 \\
\multirow{2}*{A401} & \ldelim\{{2}{1mm} & 518$^\dagger$, 2309, 10416, 10417 & I$^\ast$, I$^\ast$, I, I & 18, 12, 5, 5 & \multirow{2}*{176} & \multirow{2}*{9.88} & \multirow{2}*{15.2} \\
& & 10418, 10419, 14024 & I, I, I & 5, 5, 140 \\
MACS J0417.5-1154 & & 3270, 11759, 12010 & I, I, I & 12, 54, 26 & 87 & 3.31 & 3.87 \\
RXC J0528.9-3927 & & 4994, 15177, 15658 & I, I, I & 25, 17, 73 & 96 & 2.12 & 2.26 \\
MACS J0553.4-3342 & & 5813, 12244 & I, I & 10, 75 & 77 & 3.32 & 3.79 \\
\multirow{2}*{AS592} & \ldelim\{{2}{1mm} & 9420, 15176 & I, I & 20, 20 & \multirow{2}*{98} &
\multirow{2}*{6.07} & \multirow{2}*{8.30}\\
& & 16572, 16598 & I, I & 46, 24 \\
\multirow{2}*{A1413} & \ldelim\{{2}{1mm} & 537, 1661, 5002 & I, I, I & 10, 10, 40 & \multirow{2}*{128} &
\multirow{2}*{1.84} & \multirow{2}*{1.97} \\
& & 5003, 7696 & I, I & 75, 5 \\
\multirow{2}*{A1689} & \ldelim\{{2}{1mm} & 540, 1663, 5004 & I$^\ast$, I$^\ast$, I & 10, 10, 20 & \multirow{2}*{185} &
\multirow{2}*{1.83} & \multirow{2}*{1.98}\\
& & 6930, 7289, 7701 & I, I, I & 80, 80, 5 \\
A1914 & & 542$^\dagger$, 3593 & I, I & 10, 20 & 23 & 1.06 & 1.10 \\
A2104 & & 895 & S$^\ast$ & 50 & 48 & 8.37 & 14.5 \\
\multirow{2}*{A2218} & \ldelim\{{2}{1mm} & 553$^\dagger$, 1454$^\dagger$ & I$^\ast$, I$^\ast$ & 7, 13 & \multirow{2}*{47} &
\multirow{2}*{2.60} & \multirow{2}*{2.83} \\
& & 1666, 7698 & I, I & 50, 5 \\
Triangulum Australis & & 17481 & I & 50 & 49 & 11.5 & 17.0 \\
A3827 & & 3290 & S & 50 & 45 & 2.65 & 2.96 \\
\hline
\multicolumn{8}{{p{.95\textwidth}}}{\textit{Notes.} \obsid s marked with $^\dagger$ were excluded from the spectral analysis as the focal plane temperature was warmer than in the standard $-119.7$ \deg C observations and no Charge Transfer Inefficiency correction is available to apply to these data, leading to uncertainties in their spectral analysis. All the observations were taken in \vfaint\ mode except the ones marked by $^\ast$, which were instead taken in \faint\ mode.}
\end{tabular}
\end{table*}
We selected a number of galaxy clusters in which merger-induced discontinuities are likely to be detected, searching for (i) massive systems in a dynamically disturbed state and (ii) systems with adequate X-ray count statistics, based on the observations currently available in the \chandra\ archive. In particular, we proceeded as follows.
\begin{enumerate}
\item Using the \planck\ Sunyaev-Zel'dovich (SZ) catalog (PSZ1; \citealt{planck14xxix}) we selected clusters with mass\footnote{\mfive\ is the mass within the radius that encloses a mean overdensity of 500 with respect to the critical density at the cluster redshift.}, as inferred from the SZ signal, $\mfive > 5 \times 10^{14}$ \msun. Searching for diffuse radio emission connected with shocks (radio relics and edges of radio halos) is a natural follow-up of our study, hence this high mass threshold has been set mainly because non-thermal emission is more easily detectable in massive merging systems \citep[\eg][]{cassano13, degasperin14, cuciti15}. As a second step, we selected only dynamically active systems, excluding all the CC clusters. To do that we used the Archive of \chandra\ Cluster Entropy Profile Tables (ACCEPT; \citealt{cavagnolo09}) and the recent compilation by \citet{giacintucci17} to look for the so-called core entropy value $K_0$ \citep[see Eq.~4 in][]{cavagnolo09}, which is a good proxy to identify NCC systems \citep[\eg][]{mccarthy07}: clusters with $K_0 < 30 - 50$ \kevcmsq\ exhibit all the properties of a CC and were hence excluded from our analysis.
\item Detecting shocks and cold fronts requires adequate X-ray count statistics, as in particular shocks are found in cluster outskirts, where the X-ray brightness is faint. For this reason, among the systems found in the \chandra\ data archive\footnote{http://cda.harvard.edu/chaser/} satisfying (i), we excluded clusters with $\lesssim 4-5 \times 10^4$ counts in the \chandra\ broad-band $0.5-7.0$ \kev\ with the exposure available at the time of writing. We did that by converting the \rosat\ flux in the $0.1-2.4$ \kev\ band reported in the main X-ray galaxy cluster catalogs [Brightest Cluster Sample (BCS), \citealt{ebeling98}; extended Brightest Cluster Sample (eBCS), \citealt{ebeling00}; Northern ROSAT All-Sky (NORAS), \citealt{bohringer00}; \rosat-ESO Flux Limited X-ray (REFLEX), \citealt{bohringer04}; MAssive Cluster Survey (MACS), \citealt{ebeling07, ebeling10}] into a \chandra\ count rate using the PIMMS software\footnote{http://heasarc.gsfc.nasa.gov/Tools/w3pimms.html} and assuming a thermal emission model. Clusters without a reported \rosat\ flux in the catalogs were individually checked by measuring the counts within a circle enclosing the cluster emission out to the radius where its SB profile drops below the background level, and were rejected adopting the same count threshold.
\end{enumerate}
We ended up with 37 massive, NCC cluster candidates for our study (Tab.~\ref{tab:sample}). In 22 of these systems (bottom of Tab.~\ref{tab:sample}) shocks/cold fronts (or both) have already been discovered, and consequently we focused on the analysis of the remaining 15 clusters (top of Tab.~\ref{tab:sample}). We anticipate that the results on the detection of shocks and cold fronts in these clusters are summarized in Section~\ref{sec:summary}.
\section{Methods and data analysis}\label{sec:analysis}
To firmly claim the presence of a shock or a cold front in the ICM, both imaging and spectral analysis are required. Our aim is to search for SB and temperature discontinuities in the most objective way possible, without being too biased by prior assumptions about the merger geometry or by the presence of features at other wavelengths (\eg\ a radio relic). To do so, we did the following.
\begin{enumerate}
\item We applied an edge-detection filter to pinpoint possible edges, which were also searched for visually in the X-ray images for comparison.
\item We selected the clearest features, three times above the root mean square noise level of the filtered images, following a coherent arc-shaped structure extending for $>100$ kpc in length.
\item We investigated the pre-selected edges more deeply by extracting and fitting SB profiles.
\item We performed the spectral analysis in dedicated spectral regions to confirm the nature of the jumps.
\end{enumerate}
\noindent
In addition, we produced maps of the ICM thermodynamical quantities to help in the interpretation of the features found with the above-mentioned procedure. \\
\indent
In the following sections we describe in detail the X-ray data analysis performed in this work.
\subsection{X-ray data preparation}
In Tab.~\ref{tab:chandra_obs} we report all the \chandra\ Advanced CCD Imaging Spectrometer I-array (\acisi) and Advanced CCD Imaging Spectrometer S-array (\aciss) observations of our cluster sample. Data were reprocessed with \ciao\ v4.9 and \chandra\ \caldb\ v4.7.3 starting from the \texttt{level=1} event files. Observation periods affected by soft proton flares were excluded using the \texttt{deflare} task after inspection of the light curves extracted in the $0.5-7.0$ \kev\ band. For \acisi, these were extracted from the front-illuminated S2 chip, if it was kept on during the observation, or from one front-illuminated \acisi\ chip, avoiding the cluster diffuse emission, if S2 was turned off. In \aciss\ observations the target is imaged on the back-illuminated S3 chip, hence light curves were extracted from S1, which is also back-illuminated\footnote{In the \aciss\ \obsid\ 515 the light curve was extracted in S2 as S1 was turned off.}. \\
Cluster images were created in the $0.5-2.0$ \kev\ band and combined with the corresponding monochromatic exposure maps (given the restricted energy range) in order to produce exposure-corrected images binned to have a pixel size of 0.984 arcsec. The datasets of clusters observed multiple times (11 out of 15) were merged with \texttt{merge\_obs} before this step. \\
The \texttt{mkpsfmap} script was used to create a point spread function (PSF) map at $1.5$ \kev\ matched to the corresponding exposure map for every \obsid. For clusters with multiple \obsid s we created a single exposure-corrected PSF map with minimum size. Point sources were then detected with the \texttt{wavdetect} task, confirmed by eye, and excised in the subsequent analysis.
\subsection{Edge detection filter}
In practice, the visual inspection of X-ray images allows one to identify candidate discontinuities \citep{markevitch07rev}. We complement this approach with the visual inspection of filtered images. \citet{sanders16ggm} presented a Gaussian gradient magnitude (GGM) filter that aims to highlight the SB gradients in an image, similarly to the Sobel filter (but assuming Gaussian derivatives); indeed, it has been shown that these GGM images are particularly useful to identify candidate sharp edges, such as shocks and cold fronts \citep[\eg][]{walker16}. The choice of the Gaussian width $\sigma$ over which the gradient is computed depends on the physical scale of interest, the magnitude of the jump, and the data quality: edges become more visible with increasing jump size and count rate; this requires images filtered on multiple scales to best identify candidate discontinuities \citep[\eg][]{sanders16ggm, sanders16centaurus}. In this respect, we applied the GGM filter adopting $\sigma = 1, 2, 4$, and $8$ pixels (a pixel corresponds to 0.984 arcsec) to the exposure-corrected images of the clusters in our sample. We noticed that small filter lengths (1 and 2 pixels) are generally ineffective in detecting discontinuities in cluster outskirts due to the low counts in these peripheral regions \citep[see also][]{sanders16ggm}. Gaussian widths of $\sigma = 4$ and $8$ pixels instead better highlight the SB gradients without overly saturating the ICM emission (as would result from filters with scales of $\sigma = 16$ and $32$ pixels). For this reason, here we will report GGM filtered images at these two scales.
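For reference, the same quantity is computed by the \texttt{gaussian\_gradient\_magnitude} function of SciPy; a minimal sketch on a toy cluster image with a sharp edge (all numbers are illustrative):
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

# Toy cluster: beta-model-like brightness with a sharp edge at r = 40 pix
y, x = np.mgrid[-128:128, -128:128]
r = np.hypot(x, y)
img = (1.0 + (r / 20.0)**2) ** -1.5
img[r > 40] *= 0.5                    # surface-brightness jump

# GGM-filtered images on the two scales reported in this work
ggm4 = gaussian_gradient_magnitude(img, sigma=4)
ggm8 = gaussian_gradient_magnitude(img, sigma=8)
\end{verbatim}
On real (Poisson) count images the small-$\sigma$ maps are noise dominated, consistent with what we find for the 1 and 2 pixel scales.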
\subsection{Surface brightness profiles}
After looking at X-ray and GGM images, we extracted and fitted SB profiles of the candidate discontinuities on the $0.5-2.0$ \kev\ exposure-corrected images of the clusters using \proffit\ v1.4 \citep{eckert11}. A background image was produced by matching (with \texttt{reproject\_event}) the background templates to the corresponding event files for every \obsid. This was normalized by counts in the $9.5-12.0$ \kev\ band and subtracted in the SB analysis. Corrections were typically within 10\% except for the S3 chip in \faint\ mode (\obsid s 515 and 895) where the correction was $\sim45\%$. For clusters observed multiple times, all the \obsid s were used in the fits. In the profiles, data were grouped to reach a minimum signal-to-noise ratio threshold per bin of 7.
\subsection{Spectra}
\begin{figure}
\centering
\includegraphics[width=\hsize]{figure/nH.pdf}
\caption{Comparison between the H$_{\rm I}$ density column from \citet{kalberla05} and the total (H$_{\rm I}$+H$_{\rm 2}$) density column from \citet{willingale13}. The dashed line indicates the linear correlation as a reference.}
\label{fig:nh-vs-nh}
\end{figure}
The scientific scope of our work requires a careful treatment of the background of X-ray spectra, as in particular shock fronts are typically observed in the cluster outskirts, where the source counts are low. We modeled the background by extracting spectra in source-free regions at the edge of the field-of-view. This was not possible for \acisi\ observations of nearby objects and for clusters observed with \aciss, as the ICM emission covers all the chip area. In this respect, we used observations within 3\deg\ of the target pointing (\ie\ \obsid\ 15068 for A399 and A401, \obsid\ 3142 for A2104, \obsid\ 2365 for Triangulum Australis and \obsid\ 17881 for A3827) to model the components due to the cosmic X-ray background (CXB) and to the Galactic local foreground. The former is due to the superposition of the unresolved emission from distant point sources and can be modeled as a power-law with photon index $\Gamma_{\rm cxb} = 1.42$ \citep[\eg][]{lumb02}. The latter can be decomposed into two thermal emission components \citep{kuntz00} due to the Galactic Halo (GH) emission and the Local Hot Bubble (LHB), with temperatures $kT_{\rm gh} = 0.25$ \kev\ and $kT_{\rm lhb} = 0.14$ \kev\ and solar metallicity. Galactic absorption for GH and CXB was taken into account using the averaged values measured in the direction of the clusters from the Leiden/Argentine/Bonn (LAB) Survey of Galactic H$_{\rm I}$ \citep{kalberla05}. However, it has to be noticed that the total hydrogen column density is formally $N_{\rm H,tot} = N_{\rm H_{I}} + 2N_{\rm H_{2}}$, where $N_{\rm H_{2}}$ accounts for molecular hydrogen, whose contribution is negligible only for low column densities. In Tab.~\ref{tab:chandra_obs} we report the values of $N_{\rm H_{I}}$ \citep{kalberla05} and $N_{\rm H,tot}$ \citep{willingale13} in the direction of the clusters in our sample, while in Fig.~\ref{fig:nh-vs-nh} we compare them. In Appendix~\ref{app:absorption} we discuss the five clusters (A399, A401, AS592, A2104 and Triangulum Australis) that do not lie on the linear correlation of Fig.~\ref{fig:nh-vs-nh}. \\
\indent
In addition to the astrophysical CXB, GH and LHB emission, an instrumental non-X-ray background (NXB) component due to the interaction of high-energy particles with the satellite and its electronics was considered. Overall, the background model we used can be summarized as
\begin{equation}\label{eq:bgk_x}
apec_{\rm lhb} + phabs * (apec_{\rm gh} + powerlaw_{\rm cxb}) + bkg_{\rm nxb}
\end{equation}
\noindent
where the $bkg_{\rm nxb}$ was modeled with
\begin{equation}\label{eq:bkg_model}
\begin{array}{ll}
expdec + power + \sum gaussian, & \mbox{for\quad \acisi} \\
\\
expdec + bknpower + \sum gaussian, & \mbox{for \quad \aciss} \\
\end{array}
\end{equation}
\noindent
where a number of Gaussian fluorescence emission lines were superimposed on to the continua. For more details on the NXB modeling the reader is referred to Appendix~\ref{app:nxb}. \\
\indent
The ICM emission was described with a thermal model taking into account the Galactic absorption in the direction of the clusters (\cf\ Tab.~\ref{tab:chandra_obs} and Appendix~\ref{app:absorption})
\begin{equation}\label{eq:source_icm}
phabs*apec_{\rm icm}\:,
\end{equation}
\noindent
the metallicity of the ICM was set to $0.3$ \zsun\ \citep[\eg][]{werner13}. \\
\indent
Spectra were simultaneously fitted (using all the \obsid s available for each cluster, unless stated otherwise) in the $0.5-11.0$ \kev\ energy band for \acisi\ and in the $0.7-10.0$ \kev\ band for \aciss, using the package \xspec\ v12.9.0o with \citet{anders89} abundances table. Since the counts in cluster outskirts are poor, Cash statistics \citep{cash79} was adopted.
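As a schematic example, the astrophysical part of the model in Eq.~(\ref{eq:bgk_x}) can be set up with PyXspec, the Python interface to \xspec. This is only a sketch: the NXB terms of Eq.~(\ref{eq:bkg_model}), the simultaneous multi-\obsid\ fitting and the response handling are omitted, and the parameter values below are placeholders rather than those used in our analysis:
\begin{verbatim}
import xspec

xspec.Xset.abund = "angr"        # Anders & Grevesse (1989) abundance table
xspec.Fit.statMethod = "cstat"   # Cash statistics for the low-count regime

# Astrophysical background: apec(LHB) + phabs*(apec(GH) + powerlaw(CXB))
m = xspec.Model("apec + phabs*(apec + powerlaw)")
m(1).values = 0.14    # kT_lhb (keV)
m(2).values = 1.0     # LHB abundance (solar)
m(5).values = 0.02    # nH (10^22 cm^-2): placeholder value
m(6).values = 0.25    # kT_gh (keV)
m(7).values = 1.0     # GH abundance (solar)
m(10).values = 1.42   # CXB photon index
for i in (1, 2, 5, 6, 7, 10):
    m(i).frozen = True           # only the normalizations are left free

# Spectra would then be loaded with xspec.Spectrum, restricted to the
# fitting band via .ignore(), and fitted with xspec.Fit.perform().
\end{verbatim}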
\subsubsection{Contour binning maps}
We used \contbin\ v1.4 \citep{sanders06contbin} to produce projected maps of temperature, pressure and entropy for all the clusters of our sample. The clusters were divided into regions varying the geometric constraint value \citep[see][for details]{sanders06contbin} according to the morphology of each individual object, to better follow the SB contours of the ICM. We required $\sim 2500$ background-subtracted counts per bin in the $0.5-2.0$ \kev\ band. Spectra were extracted and fitted as described in the previous section. \\
\indent
While the temperature is a direct result of the spectral fitting, pressure and entropy are derived through the normalization of the thermal model, \ie
\begin{equation}\label{eq:norm-xspec}
\mathcal{N} = \frac{10^{-14}}{4\pi[D_{A}(1+z)]^2} \int n_e n_H\:dV\quad({\rm cm^{-5}})
\end{equation}
\noindent
where $D_A$ is the angular size distance to the source (cm) whereas $n_e$ and $n_H$ are the electron and hydrogen density (cm$^{-3}$), respectively. The projected emission measure is
\begin{equation}\label{eq:pseudo-em}
\textrm{EM} = \mathcal{N}/A\quad({\rm cm^{-5}\,arcsec^{-2}})
\end{equation}
\noindent
with $A$ the area of each bin, and it is proportional to the square of the electron density integrated along the line of sight. Using Eq.~\ref{eq:pseudo-em} we can compute the pseudo-pressure
\begin{equation}\label{eq:pseudo-pressure}
P=kT(\textrm{EM})^{1/2}\quad({\rm keV\,cm^{-5/2}\,arcsec^{-1}})
\end{equation}
\noindent
and pseudo-entropy
\begin{equation}\label{eq:pseudo-entropy}
K=kT(\textrm{EM})^{-1/3}\quad({\rm keV\,cm^{5/3}\,arcsec^{-2/3}})
\end{equation}
\noindent
values for each spectral bin. The prefix pseudo- underlines that these quantities are projected along the line of sight \citep[\eg][]{mazzotta04}.
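In practice, once $kT$ and $\mathcal{N}$ are obtained from the fit of each bin, the pseudo-quantities follow directly from Eqs.~\ref{eq:pseudo-em}-\ref{eq:pseudo-entropy}; a minimal sketch with placeholder numbers:
\begin{verbatim}
def pseudo_quantities(norm, kT, area_arcsec2):
    # EM = N/A, P = kT * EM**(1/2), K = kT * EM**(-1/3)
    EM = norm / area_arcsec2
    return EM, kT * EM**0.5, kT * EM**(-1.0 / 3.0)

# Placeholder fit output for a single spectral bin
EM, P, K = pseudo_quantities(norm=3.2e-4, kT=7.1, area_arcsec2=450.0)
print(EM, P, K)
\end{verbatim}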
\section{Characterization of the edges}\label{sec:search}
The inspection of the cluster X-ray and GGM filtered images provides the first indication of putative discontinuities in the ICM. These need to be characterized with standard imaging and spectral analysis techniques before they can be firmly claimed as edges. \\
\indent
The SB profiles of the candidate shocks and cold fronts were modeled assuming that the underlying density profile follows a broken power-law \citep[\eg][and references therein]{markevitch07rev}. In the case of spherical symmetry, the downstream and upstream (subscripts $d$ and $u$) densities differ by a factor $\compr \equiv n_d/n_u$ at the distance of the jump $r_j$
\begin{equation}\label{eq:bknpow}
\begin{array}{ll}
n_d (r) = \compr n_0 \left( \frac{r}{r_j} \right)^{a_1}, & \mbox{if} \quad r \leq r_j \\
\\
n_u (r) = n_0 \left( \frac{r}{r_j} \right)^{a_2}, & \mbox{if} \quad r > r_j
\end{array}
\end{equation}
\noindent
where $a_1$ and $a_2$ are the power-law indices, $n_0$ is a normalization factor and $r$ denotes the radius from the center of the sector. In the fitting procedure all these quantities were free to vary. We stress that the values of \compr\ reported throughout the paper have been deprojected along the line of sight under the spherical assumption by \proffit\ \citep{eckert11}. \\
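A minimal numerical sketch of this model is given below: the broken power-law density of Eq.~\ref{eq:bknpow} is squared and integrated along the line of sight to produce the projected SB profile. The parameter values are arbitrary and only meant to show the break at $r_j$; the actual fitting and the deprojection are handled by \proffit:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def density(r, C, n0, rj, a1, a2):
    # spherical broken power-law density
    return np.where(r <= rj, C * n0 * (r / rj)**a1, n0 * (r / rj)**a2)

def sb(b, C=2.0, n0=1.0, rj=1.0, a1=-1.0, a2=-1.5, lmax=20.0):
    # projected SB (arbitrary units) at projected radius b: the
    # emissivity ~ n^2 is integrated along the line of sight l,
    # with r = sqrt(b^2 + l^2)
    f = lambda l: float(density(np.hypot(b, l), C, n0, rj, a1, a2))**2
    pts = [np.sqrt(rj**2 - b**2)] if b < rj else None  # edge crossing
    return 2.0 * quad(f, 0.0, lmax * rj, points=pts)[0]

b_grid = np.linspace(0.2, 2.0, 40)
profile = np.array([sb(b) for b in b_grid])  # break visible at b = rj
\end{verbatim}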
\indent
A careful choice of the sector where the SB profile is extracted is needed to properly describe a sharp edge due to a shock or a cold front. In this respect, the GGM filtered images give a good starting point to delineate that region. During the analysis, we adopted different apertures, radial ranges and positions for the extraction sectors, and then used the ones that maximize the jump with the best fit statistics. The errors reported for \compr, however, do not account for the systematics due to the sector choice. \\
\indent
Spectral fitting is necessary to discriminate the nature of a discontinuity, as the temperature ratio $\rat \equiv T_{d}/T_{u}$ is $>1$ in the case of a shock and $<1$ in the case of a cold front \citep[\eg][]{markevitch02bullet}. The temperature map can already provide an indication of the sign of the jump. However, once the edge position is well identified by the SB profile analysis, we can use a sector with the same aperture and center as the one maximizing the SB jump to extract spectra in dedicated regions covering the downstream and upstream sides. In this way we carry out a self-consistent analysis and avoid possible contamination from large spectral bins that might contain plasma at different temperatures unrelated to the shock/cold front. \\
\indent
In the case of a shock, the Mach number \mach\ can be determined by using the Rankine-Hugoniot jump conditions \citep[\eg][]{landau59} for the density
\begin{equation}\label{eq:mach-from-dens}
\compr \equiv \frac{n_d}{n_u} = \frac{4\mach_{\rm{SB}}^2}{\mach_{\rm{SB}}^2 + 3}
\end{equation}
\noindent
and temperature
\begin{equation}\label{eq:mach-from-temp}
\rat \equiv \frac{T_d}{T_u} = \frac{5\mach_{\rm kT}^4 + 14\mach_{\rm kT}^2 -3}{16\mach_{\rm kT}^2}
\end{equation}
\noindent
here reported for a monatomic gas.
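Both jump conditions can be inverted in closed form to obtain \mach\ from the measured jumps (Eq.~\ref{eq:mach-from-dens} is valid for $1 < \compr < 4$, the latter being the strong-shock limit); a short sketch:
\begin{verbatim}
import numpy as np

def mach_from_density(C):
    # invert C = 4 M^2 / (M^2 + 3); valid for 1 <= C < 4
    return np.sqrt(3.0 * C / (4.0 - C))

def mach_from_temperature(T):
    # invert T = (5 M^4 + 14 M^2 - 3) / (16 M^2) for T = Td/Tu >= 1
    M2 = (16.0 * T - 14.0 + np.sqrt((14.0 - 16.0 * T)**2 + 60.0)) / 10.0
    return np.sqrt(M2)

# sanity check: a vanishing jump corresponds to M = 1 in both cases
assert np.isclose(mach_from_density(1.0), 1.0)
assert np.isclose(mach_from_temperature(1.0), 1.0)
\end{verbatim}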
\section{Results}\label{sec:results}
We find 29 arc-shaped features three times above the root mean square noise level in the GGM filtered images; 22 of them were found to trace edges in the SB profiles. In \Cref{fig:a2813,fig:a370,fig:a399,fig:a401,fig:macsj0417,fig:rxcj0528,fig:macsj0553,fig:as592,fig:a1413,fig:a1689,fig:a1914,fig:a2104,fig:a2218,fig:triangulum,fig:a3827} we show a \chandra\ image in the $0.5-2.0$ \kev\ energy band, the products of the GGM filters, the maps of the ICM thermodynamical quantities, and the SB profiles for each cluster of the sample. The \cstatdof\ and the temperature fractional error for each spectral region are reported in Appendix~\ref{app:errors}. The edges are highlighted in the \chandra\ images in white for shocks and in green for cold fronts. Discontinuities whose spectral analysis does not firmly allow this distinction are reported in yellow. The temperature values obtained by fitting spectra in dedicated upstream and downstream regions are reported in shaded boxes (whose lengths cover the radial extent of the spectral region) in the panels showing the SB profiles. If the jump is also detected in temperature, the box is colored in red for the hot gas and in blue for the cold gas; conversely, if the upstream and downstream temperatures are consistent (within $1\sigma$), the box is displayed in yellow. As a general approach, in the case of weak discontinuities we also compare results with the best fit obtained with a single power-law model. \\
\indent
In the following we discuss the individual cases. In particular, in Sections~\ref{sec:detection} and \ref{sec:non-detection} we report the clusters with and without detected edges, respectively. The results of our detections are summarized in Section~\ref{sec:summary} and in Tab.~\ref{tab:results}. In Appendix~\ref{app:null} we show the seven arc-like features selected by the GGM filtered images that do not present a discontinuity in the SB profile fitting.
\subsection{Detections}\label{sec:detection}
\subparagraph{A370.} This is the most distant object in the Abell catalog \citep{abell89}, at a redshift of $z=0.375$. It is famous for being one of the first galaxy clusters where a gravitational lens was observed \citep{soucail87, kneib93}. The X-ray emission is elongated in the N-S direction (Fig.~\ref{fig:a370}a); the bright source to the north is a nearby ($z=0.044$) elliptical galaxy not associated with the cluster. \\
A370 was observed twice with \chandra. The longer observation (\obsid\ 515) was performed in an early epoch after the \chandra\ launch, for which an accurate modeling of the ACIS background is not possible, making the spectral analysis of this dataset unfeasible (see notes in Tab.~\ref{tab:chandra_obs} for more details). The other observation of A370 (\obsid\ 7715) is instead very short. For this reason we performed only a spatial analysis for this target. \\
The GGM images in Fig.~\ref{fig:a370}b,c suggest the presence of a rapid SB variation in both the W and E directions. The SB profiles taken across these directions are well modeled by the fits in Fig.~\ref{fig:a370}d,e, revealing jumps of similar amplitude ($\compr\ \sim 1.5$). The impossibility of performing a spectral analysis for this cluster leaves their origin unknown.
An additional SB gradient suggested by the GGM images toward the S direction was not confirmed as an edge by the SB profile fitting (Fig.~\ref{fig:a370_noedge}).
\begin{figure*}
\centering
\begin{tabular}{cc}
\multirow{2}{*}{\subfloat{\subfigimgwhitebig[width=.6\textwidth]{\quad a)}{figure/02_a370/sb.png}}} & \\
& \vspace{0.15cm}\hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad b)}{figure/02_a370/ggm4.png}} \\
& \hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad c)}{figure/02_a370/ggm8.png}}
\end{tabular}
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{d)}{figure/02_a370/edge_E_conf_label.png}}
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{e)}{figure/02_a370/edge_W_conf_label.png}}
\caption{A370. \chandra\ $0.5-2.0$ \kev\ image (\textit{a}), GGM filtered images on scales of 4 (\textit{b}) and 8 (\textit{c}) pixels, and best-fitting broken power-law (solid blue) and single power-law (dashed red) models (residuals at the bottom refer to the former) of the extracted SB profiles (\textit{d,e}). The sectors where the SB profiles were fitted and the positions of the corresponding edges are marked in the \chandra\ image in yellow.}
\label{fig:a370}
\end{figure*}
\begin{figure*}
\centering
\begin{tabular}{cc}
\multirow{2}{*}{\subfloat{\subfigimgwhitebig[width=.6\textwidth]{\quad a)}{figure/03_a399/sb.png}}} & \\
& \vspace{0.15cm}\hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad b)}{figure/03_a399/ggm4.png}} \\
& \hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad c)}{figure/03_a399/ggm8.png}}
\end{tabular}
\subfloat{\subfigimgblack[width=.3\textwidth]{\enspace d)}{figure/03_a399/kT.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\enspace e)}{figure/03_a399/Plog10.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\enspace f)}{figure/03_a399/S.png}}\\
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{g)}{figure/03_a399/edge_SE_inner_conf_label.png}}
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{h)}{figure/03_a399/edge_SE_outer_conf_label.png}}
\caption{A399. \chandra\ $0.5-2.0$ \kev\ image (\textit{a}), GGM filtered images on scales of 4 (\textit{b}) and 8 (\textit{c}) pixels, projected maps of temperature (\textit{d}), pressure (\textit{e}), entropy (\textit{f}), and best-fitting broken power-law (solid blue) and single power-law (dashed red) models (residuals at the bottom refer to the former) of the extracted SB profiles (\textit{g,h}). The goodness of the fits is reported in Fig.~\ref{fig:a399_errors}. The sectors where the SB profiles were fitted and the positions of the corresponding edges are marked in the \chandra\ image in green (cold front) and yellow. The dashed arcs show the radial limits used for measuring the temperature downstream and upstream of the front, whose values (in \kev) are reported in the shaded boxes in the SB profiles. Note that in the GGM filtered images the straight and perpendicular features are artifacts due to chip gaps.}
\label{fig:a399}
\end{figure*}
\begin{figure*}
\centering
\begin{tabular}{cc}
\multirow{2}{*}{\subfloat{\subfigimgwhitebig[width=.6\textwidth]{\quad a)}{figure/04_a401/sb.png}}} & \\
& \vspace{0.15cm}\hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad b)}{figure/04_a401/ggm4.png}} \\
& \hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad c)}{figure/04_a401/ggm8.png}}
\end{tabular}
\subfloat{\subfigimgblack[width=.3\textwidth]{d)}{figure/04_a401/kT.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{e)}{figure/04_a401/Plog10.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{f)}{figure/04_a401/S.png}}\\
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{g)}{figure/04_a401/edge_SE_conf_label.png}}
\caption{A401. The same as for Fig.~\ref{fig:a399}. The goodness of the fits is reported in Fig.~\ref{fig:a401_errors}. The position of the edge is marked in the \chandra\ image in green (cold front).}
\label{fig:a401}
\end{figure*}
\begin{figure*}
\centering
\begin{tabular}{cc}
\multirow{2}{*}{\subfloat{\subfigimgwhitebig[width=.6\textwidth]{\quad a)}{figure/05_macsj0417/sb.png}}} & \\
& \vspace{0.15cm}\hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad b)}{figure/05_macsj0417/ggm4.png}} \\
& \hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad c)}{figure/05_macsj0417/ggm8.png}}
\end{tabular}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad d)}{figure/05_macsj0417/kT.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad e)}{figure/05_macsj0417/Plog10.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad f)}{figure/05_macsj0417/S.png}}\\
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{g)}{figure/05_macsj0417/edge_SE_conf_label.png}}
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{h)}{figure/05_macsj0417/edge_NW_conf_label.png}}
\caption{MACSJ0417. The same as for Fig.~\ref{fig:a399}. The goodness of the fits is reported in Fig.~\ref{fig:macsj0417_errors}. The positions of the edges are marked in the \chandra\ image in green (cold fronts).}
\label{fig:macsj0417}
\end{figure*}
\begin{figure*}
\centering
\begin{tabular}{cc}
\multirow{2}{*}{\subfloat{\subfigimgwhitebig[width=.6\textwidth]{\quad a)}{figure/06_rxcj0528/sb.png}}} & \\
& \vspace{0.15cm}\hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad b)}{figure/06_rxcj0528/ggm4.png}} \\
& \hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad c)}{figure/06_rxcj0528/ggm8.png}}
\end{tabular}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad d)}{figure/06_rxcj0528/kT.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad e)}{figure/06_rxcj0528/Plog10.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad f)}{figure/06_rxcj0528/S.png}}\\
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{g)}{figure/06_rxcj0528/edge_W_conf_label.png}}
\caption{RXCJ0528. The same as for Fig.~\ref{fig:a399}. The goodness of the fits is reported in Fig.~\ref{fig:rxcj0528_errors}. The position of the edge is marked in the \chandra\ image in green (cold front).}
\label{fig:rxcj0528}
\end{figure*}
\begin{figure*}
\centering
\begin{tabular}{cc}
\multirow{2}{*}{\subfloat{\subfigimgwhitebig[width=.6\textwidth]{\quad a)}{figure/07_macsj0553/sb.png}}} & \\
& \vspace{0.15cm}\hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad b)}{figure/07_macsj0553/ggm4.png}} \\
& \hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad c)}{figure/07_macsj0553/ggm8.png}}
\end{tabular}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad d)}{figure/07_macsj0553/kT.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad e)}{figure/07_macsj0553/Plog10.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad f)}{figure/07_macsj0553/S.png}}\\
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{g)}{figure/07_macsj0553/edge_E_inner_conf_label.png}}
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{h)}{figure/07_macsj0553/edge_E_outer_narrow_conf_label.png}}
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{i)}{figure/07_macsj0553/edge_W_conf_label.png}}
\caption{MACS J0553. The same as for Fig.~\ref{fig:a399}. The goodness of the fits is reported in Fig.~\ref{fig:macsj0553_errors}. The positions of the edges are marked in the \chandra\ image in green (cold fronts) and white (shock).}
\label{fig:macsj0553}
\end{figure*}
\begin{figure*}
\centering
\begin{tabular}{cc}
\multirow{2}{*}{\subfloat{\subfigimgwhitebig[width=.6\textwidth]{\quad a)}{figure/08_as592/sb.png}}} & \\
& \vspace{0.15cm}\hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad b)}{figure/08_as592/ggm4.png}} \\
& \hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad c)}{figure/08_as592/ggm8.png}}
\end{tabular}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad d)}{figure/08_as592/kT.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad e)}{figure/08_as592/Plog10.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad f)}{figure/08_as592/S.png}}\\
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{g)}{figure/08_as592/edge_SW_conf_label.png}}
\caption{AS592. The same as for Fig.~\ref{fig:a399}. The goodness of the fits is reported in Fig.~\ref{fig:as592_errors}. The position of the edge is marked in the \chandra\ image in white (shock).}
\label{fig:as592}
\end{figure*}
\begin{figure*}
\centering
\begin{tabular}{cc}
\multirow{2}{*}{\subfloat{\subfigimgwhitebig[width=.6\textwidth]{\quad a)}{figure/11_a1914/sb.png}}} & \\
& \vspace{0.15cm}\hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad b)}{figure/11_a1914/ggm4.png}} \\
& \hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad c)}{figure/11_a1914/ggm8.png}}
\end{tabular}
\subfloat{\subfigimgblack[width=.3\textwidth]{\enspace d)}{figure/11_a1914/kT.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\enspace e)}{figure/11_a1914/Plog10.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\enspace f)}{figure/11_a1914/S.png}}\\
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{g)}{figure/11_a1914/edge_E_upper_conf_label.png}}
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{h)}{figure/11_a1914/edge_E_lower_conf_label.png}}
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{i)}{figure/11_a1914/edge_W_conf_label.png}}
\caption{A1914. The same as for Fig.~\ref{fig:a399}. The goodness of the fits is reported in Fig.~\ref{fig:a1914_errors}. The positions of the edges are marked in the \chandra\ image in green (cold front) and white (shock).}
\label{fig:a1914}
\end{figure*}
\begin{figure*}
\centering
\begin{tabular}{cc}
\multirow{2}{*}{\subfloat{\subfigimgwhitebig[width=.6\textwidth]{\quad a)}{figure/12_a2104/sb.png}}} & \\
& \vspace{0.15cm}\hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad b)}{figure/12_a2104/ggm4.png}} \\
& \hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad c)}{figure/12_a2104/ggm8.png}}
\end{tabular}
\subfloat{\subfigimgblack[width=.3\textwidth]{\enspace d)}{figure/12_a2104/kT.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\enspace e)}{figure/12_a2104/Plog10.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\enspace f)}{figure/12_a2104/S.png}}\\
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{g)}{figure/12_a2104/edge_SE_inner_conf_label.png}}
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{h)}{figure/12_a2104/edge_SE_outer_conf_label.png}}
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{i)}{figure/12_a2104/edge_SW_conf_label.png}}
\caption{A2104. The same as for Fig.~\ref{fig:a399}. The goodness of the fits is reported in Fig.~\ref{fig:a2104_errors}. The positions of the edges are marked in the \chandra\ image in white (shock) and in yellow.}
\label{fig:a2104}
\end{figure*}
\begin{figure*}
\centering
\begin{tabular}{cc}
\multirow{2}{*}{\subfloat{\subfigimgwhitebig[width=.6\textwidth]{\quad a)}{figure/13_a2218/sb.png}}} & \\
& \vspace{0.15cm}\hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad b)}{figure/13_a2218/ggm4.png}} \\
& \hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad c)}{figure/13_a2218/ggm8.png}}
\end{tabular}
\subfloat{\subfigimgblack[width=.3\textwidth]{\enspace d)}{figure/13_a2218/kT.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\enspace e)}{figure/13_a2218/Plog10.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\enspace f)}{figure/13_a2218/S.png}}\\
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{g)}{figure/13_a2218/edge_N_conf_label.png}}
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{h)}{figure/13_a2218/edge_SE_inner_conf_label.png}}
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{i)}{figure/13_a2218/edge_SE_outer_conf_label.png}}\\
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{j)}{figure/13_a2218/edge_SW_conf_label.png}}
\caption{A2218. The same as for Fig.~\ref{fig:a399}. The goodness of the fits is reported in Fig.~\ref{fig:a2218_errors}. The positions of the edges are marked in the \chandra\ image in green (cold front), white (shocks) and yellow.}
\label{fig:a2218}
\end{figure*}
\begin{figure*}
\centering
\begin{tabular}{cc}
\multirow{2}{*}{\subfloat{\subfigimgwhitebig[width=.6\textwidth]{\quad a)}{figure/14_triangulum/sb.png}}} & \\
& \vspace{0.15cm}\hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad b)}{figure/14_triangulum/ggm4.png}} \\
& \hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad c)}{figure/14_triangulum/ggm8.png}}
\end{tabular}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad d)}{figure/14_triangulum/kT.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad e)}{figure/14_triangulum/Plog10.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad f)}{figure/14_triangulum/S.png}}\\
\subfloat{\subfigimgsb[width=.3\textwidth,trim={0cm 0cm 4cm 0cm},clip]{g)}{figure/14_triangulum/edge_E_conf_label.png}}
\caption{Triangulum Australis. The same as for Fig.~\ref{fig:a399}. The goodness of the fits is reported in Fig.~\ref{fig:triangulum_errors}. The position of the edge is marked in the \chandra\ image in yellow. Note that in the GGM filtered images the straight and perpendicular features are artifacts due to chip gaps.}
\label{fig:triangulum}
\end{figure*}
\subparagraph{A399 and A401.} These two objects constitute a close system ($z=0.072$ and $z=0.074$, respectively) of two interacting galaxy clusters \citep[\eg][]{fujita96}. Their X-ray morphology is disturbed (Fig.~\ref{fig:a399}a and \ref{fig:a401}a) and the ICM temperature distribution irregular \citep{bourdin08}, revealing the unrelaxed state of the clusters. Recently, \citet{akamatsu17filament} claimed the presence of an accretion shock between the two using \suzaku\ data. This cluster pair hosts two radio halos \citep{murgia10}. The boundary of the halo in A399 is coincident with an X-ray edge, as already suggested by \xmm\ observations \citep{sakelliou04}. \\
Only one \chandra\ observation is available for A399, whereas several observations were performed on A401. Despite this, we only used \obsid\ 14024 (which constitutes 74\% of the total observing time) to produce the maps shown in Fig.~\ref{fig:a401}d,e,f, as the remaining \obsid s are snapshots that only partially cover the cluster emission. This is also the only case where we required $\sim5000$ counts in each spectral bin, given the combination of high brightness and long exposure on A401. \\
The temperature maps in Fig.~\ref{fig:a399}d and \ref{fig:a401}d indicate an overall hot ICM and the presence of some hot substructures, in agreement with previous studies \citep{sakelliou04, bourdin08}. \\
The GGM images of A399 reveal a SB gradient toward the SE direction. The SB profile across this region and its temperature jump reported in Fig.~\ref{fig:a399}g show that this ``inner'' edge is a cold front with $\rat = 0.74^{+0.14}_{-0.12}$ and $\compr = 1.72^{+0.13}_{-0.12}$. Ahead of that, the X-ray SB rapidly fades away, as does the radio emission of the halo \citep{murgia10}. The ``outer'' SB profile in this direction indeed shows another discontinuity with $\compr = 1.45^{+0.10}_{-0.10}$ (Fig.~\ref{fig:a399}h). The broken power-law model provides a better description of the data ($\chisqdof = 68.6/72$) compared to a single power-law fit ($\chisqdof = 122.6/74$), corresponding to a null-hypothesis probability of $8 \times 10^{-10}$ ($6.1\sigma$ level) with the F-test. In this case, however, the temperatures across the edge are consistent, preventing us from firmly claiming the nature of the SB jump. We mention that the presence of a shock would be in agreement with the fact that cold fronts sometimes follow shocks \citep[\eg][]{markevitch02bullet} and that shocks might (re)accelerate cosmic rays producing the synchrotron emission at the boundary of some radio halos \citep[\eg][]{shimwell14}. \\
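For reference, the significance quoted above follows from a standard nested-model F-test; a minimal sketch of the computation (in Python with \texttt{scipy}; the two-sided conversion of the null-hypothesis probability into a Gaussian-equivalent level is our assumption here) is:
\begin{verbatim}
from scipy import stats

def ftest_sigma(chi2_pl, dof_pl, chi2_bkn, dof_bkn):
    # F statistic for nested models: the broken power-law adds
    # (dof_pl - dof_bkn) free parameters to the single power-law
    F = ((chi2_pl - chi2_bkn) / (dof_pl - dof_bkn)) / (chi2_bkn / dof_bkn)
    p = stats.f.sf(F, dof_pl - dof_bkn, dof_bkn)  # null-hypothesis probability
    return p, stats.norm.isf(p / 2.0)             # Gaussian-equivalent level

# chi-square values quoted above for the A399 SE "outer" profile
print(ftest_sigma(122.6, 74, 68.6, 72))           # ~8e-10, ~6.1 sigma
\end{verbatim}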
A401 has a more elliptical X-ray morphology and a higher average temperature than A399. The hottest part of the ICM is found in the E direction. Indeed, the GGM image with $\sigma=8$ pixels in Fig.~\ref{fig:a401}c highlights a kind of spiral structure in SB on this side of the cluster, with maximum contrast toward the SE. The SB profile in this sector is well described by a broken power-law with compression factor $\compr = 1.39^{+0.04}_{-0.04}$ (Fig.~\ref{fig:a401}g). The higher temperature in the upstream region ($\ktu = 10.4^{+0.8}_{-0.6}$ \kev\ against $\ktd = 8.1^{+0.4}_{-0.4}$ \kev) confirms that this is a cold front. This could be part of a bigger spiral-shaped structure generated by a sloshing motion.
\subparagraph{MACS J0417.5-1154.} It is the most massive ($\mfive = 1.2\times10^{15}$ \msun) and most distant ($z=0.440$) cluster of our sample. Its extremely elongated X-ray morphology (Fig.~\ref{fig:macsj0417}a) suggests that this cluster is undergoing a high speed merger \citep{ebeling10, mann12}. Despite this, the value of $K_0 = 27\pm7$ \kevcmsq\ indicates that its compact core has not been disrupted yet, acting as a ``bullet'' in the ICM \citep[\eg][for a similar case]{markevitch02bullet}. Radio observations show the presence of a giant radio halo that remarkably follows the ICM thermal emission \citep{dwarakanath11, parekh17six}. \\
The most striking feature of MACS J0417.5-1154 is certainly its prominent cold front in the SE generated by an infalling cold and low-entropy structure, as highlighted by our maps in Fig.~\ref{fig:macsj0417}d,e,f. The SB across this region abruptly drops ($\compr = 2.44^{+0.31}_{-0.25}$) in the upstream region (Fig.~\ref{fig:macsj0417}g), for which spectral analysis provided a clear jump in temperature of $\rat = 0.44^{+0.17}_{-0.10}$, leading us to confirm the cold front nature of the discontinuity. The high temperature of $\ktu = 16.9^{+6.1}_{-3.3}$ \kev\ found upstream is an indication of a shock-heated region; a shock is indeed expected in front of the CC, similarly to other clusters observed in an analogous state \citep[\eg][]{markevitch02bullet, russell12, botteon16gordo}, and is also suggested by our temperature and pseudo-pressure maps. Nonetheless, we were not able to characterize the SB jump of this potential feature. On the opposite side, GGM images pinpoint another edge toward the NW direction, representing again a huge jump ($\compr = 2.50^{+0.29}_{-0.25}$) in the SB profile (Fig.~\ref{fig:macsj0417}h). The spectral analysis in a dedicated region upstream of this feature only allowed us to set a lower limit of $\ktu > 12.7$ \kev, suggesting the presence of a hot plasma, in agreement with our temperature map and the one reported in \citet{parekh17six}. The pressure across this edge is almost continuous (Fig.~\ref{fig:macsj0417}e), as expected for a cold front.
\subparagraph{RXC J0528.9-3927.} No dedicated studies exist on this cluster located at $z=0.284$. The ICM emission is peaked on the cluster core, the coldest region in the cluster \citep{finoguenov05}, and fades in the outskirts where the emission is faint and diffuse (Fig.~\ref{fig:rxcj0528}a). \\
Our maps of the ICM thermodynamical quantities in Fig.~\ref{fig:rxcj0528}d,e,f are rather affected by large spectral bins due to the low number of counts of the cluster. The X-ray emission is peaked on the central low-entropy region, which is surrounded by hot gas. An edge on the W is suggested both from the GGM images and from the above mentioned maps. The SB profile in Fig.~\ref{fig:rxcj0528}g is well fitted with a broken power-law with $\compr = 1.51^{+0.10}_{-0.09}$, and the dedicated spectral analysis confirms the values reported in the temperature map ($\ktu = 10.5^{+3.6}_{-1.8}$ \kev\ and $\ktd = 7.2^{+0.9}_{-0.7}$ \kev), indicating the presence of a cold front. Two more SB gradients pinpointed in the GGM images in the E and W directions did not reveal any edge in the SB profile fitting (Fig.~\ref{fig:rxcj0528_noedge}).
\subparagraph{MACS J0553.4-3342.} It is a distant cluster ($z=0.407$) in a disturbed dynamical state, as shown by both optical and X-ray observations \citep{ebeling10, mann12}. The X-ray morphology (Fig.~\ref{fig:macsj0553}a) suggests that a binary head-on merger is occurring approximately in the plane of the sky \citep{mann12}. No value of the central entropy $K_0$ is reported either in \citet{cavagnolo09} or in \citet{giacintucci17}. A radio halo that follows the ICM emission has been detected in this system \citep{bonafede12}. At the time of this writing, two more papers on MACS J0553.4-3342, both containing a joint analysis of \hstE\ and \chandra\ observations, were published \citep{ebeling17, pandge17}. \\
The maps of the ICM thermodynamical quantities shown in Fig.~\ref{fig:macsj0553}d,e,f further support the scenario of a head-on merger in the E-W direction for MACS J0553.4-3342, in which a low-entropy structure is moving toward the E, where GGM images highlight a steep SB gradient. This is confirmed by the SB profile fit (Fig.~\ref{fig:macsj0553}g) that leads to a compression factor of $\compr = 2.49^{+0.32}_{-0.26}$, while the temperature jump found by spectral analysis of $\rat = 0.62^{+0.33}_{-0.18}$ indicates that this discontinuity is a cold front \citep[see also][]{ebeling17, pandge17}. The high value of $\ktu=13.7^{+6.9}_{-3.7}$ \kev\ suggests a shock-heated region to the E of the cold front; indeed the ``outer'' SB profile of Fig.~\ref{fig:macsj0553}h indicates the presence of an edge in the cluster outskirts. For the characterization of the SB profile we used a sector of aperture $133\deg-193\deg$ (where the angles are measured in an anticlockwise direction from W), whereas we used a wider sector ($133\deg-245\deg$), as depicted in Fig.~\ref{fig:macsj0553}a, to extract the spectra, in order to ensure a better determination of the downstream and upstream temperatures, whose ratio of $\rat = 2.00^{+1.14}_{-0.63}$ confirms the presence of a shock with Mach number $\machsb = 1.58^{+0.30}_{-0.22}$ and $\machkt = 1.94^{+0.77}_{-0.56}$, derived from the SB and temperature jumps, respectively. This edge is spatially connected with the boundary of the radio halo found by \citet{bonafede12}. On the opposite side of the cluster, another roundish SB gradient is suggested from the inspection of the GGM images (Fig.~\ref{fig:macsj0553}b,c). The W edge is well described by our fit (Fig.~\ref{fig:macsj0553}i) that leads to $\compr = 1.70^{+0.12}_{-0.11}$, while spectral analysis provides $\rat = 0.33^{+0.22}_{-0.12}$, consistent with the presence of another cold front. Even though the upstream temperature is poorly constrained, the spectral fit suggests high temperature values, also noticed in \citet{ebeling17}, possibly indicating another shock-heated region ahead of this cold front; however, the presence of a possible discontinuity associated with this shock cannot be claimed with current data. The symmetry of the edges strongly supports the scenario of a head-on merger in the plane of the sky. However, the serious challenges to this simple interpretation described in \citet{ebeling17}, in terms of the relative positions of the brightest central galaxies, X-ray peaks, and dark matter distributions, need to be reconsidered in view of the presence and morphology of the extended X-ray tail discussed in \citet{pandge17} and clearly highlighted by the GGM image (see Fig.~\ref{fig:macsj0553}c).
\subparagraph{AS592.} Also known by the alternative name RXC J0638.7-5358, this cluster located at $z=0.222$ is one of those listed in the supplementary table of southern objects of \citet{abell89}. The ICM has an overall high temperature \citep{menanteau10, mantz10scaling} and is clearly unrelaxed (Fig.~\ref{fig:as592}a), despite the fact that AS592 has one of the lowest $K_0$ values of our sample (\cf\ Tab.~\ref{tab:sample}). \\
The maps in Fig.~\ref{fig:as592}d,e,f highlight the presence of two low-entropy and low-temperature CCs surrounded by an overall hot ICM. In the SW, a feature in SB is suggested from the GGM image with $\sigma=8$ pixels. The analysis of the X-ray profile and spectra across it results in a SB discontinuity with compression factor $\compr = 1.99^{+0.17}_{-0.15}$ and temperature ratio $\rat = 1.61^{+0.66}_{-0.43}$ (Fig.~\ref{fig:as592}g), leading us to claim the presence of a shock front with Mach number derived from the SB jump of $\machsb = 1.72^{+0.15}_{-0.12}$, in agreement with that derived from the temperature jump, $\machkt = 1.61^{+0.54}_{-0.42}$. The SB variation indicated by the GGM images toward the NE direction did not correspond to a discontinuity in the SB profile fitting (Fig.~\ref{fig:as592_noedge}).
\subparagraph{A1914.} It is a system at $z=0.171$ in a complex merger state \citep[\eg][]{barrena13}, whose geometry is still not well understood \citep{mann12}. In particular, the irregular mass distribution inferred from weak lensing data \citep{okabe08} is puzzling if compared to the near-spherical X-ray emission of the ICM on larger scales (Fig.~\ref{fig:a1914}a). Previous \chandra\ studies highlighted the presence of a heated ICM with a temperature peak in the cluster center \citep{govoni04chandra, baldi07}. At low frequency, a bright steep-spectrum source 4C~38.39 \citep{roland85} and a radio halo \citep{kempner01} are detected. \\
Of the two \chandra\ observations on A1914 retrieved from the archive we had to discard \obsid\ 542 since it took place in an early epoch of the \chandra\ mission, as described above for the case of A370 (see also notes in Tab.~\ref{tab:chandra_obs}). We mention that four other datasets (\obsid s 12197, 12892, 12893, 12894) can be found in the \chandra\ archive for A1914. However, these are 5~ks snapshots pointed at four peripheral regions of the cluster that are not useful for our edge search; for this reason, they were excluded from our analysis. \\
Our maps of the ICM thermodynamical quantities in Fig.~\ref{fig:a1914}d,e,f indicate the presence of a bright low-entropy region close to the cluster center, with a lower temperature with respect to an overall hot ICM. The adjacent spectral bin toward the E suggests the presence of high-temperature gas, while GGM images indicate a rapid SB variation. This feature is quite sharp, recalling the shape of a tip, and cannot be described under a spherical assumption. For this reason two different, almost perpendicular, sectors were chosen to extract the SB profiles to the E, one in an ``upper'' (toward the NE) and one in a ``lower'' (toward the SE) direction of the tip. Their fits in Fig.~\ref{fig:a1914}g,h both indicate a similar drop in SB ($\compr \sim 1.5$). Spectra were instead fitted in joint regions downstream and upstream of the two SB sectors, leading to a single value for \ktu\ and \ktd. The temperature jump is consistent with a cold front ($\rat = 0.40^{+0.21}_{-0.12}$). Despite the large uncertainties, spectral analysis provides indication of a high upstream temperature, likely suggesting the presence of a shock-heated region. This scenario is similar to the Bullet Cluster \citep{markevitch02bullet} and to the above-mentioned MACS J0417.5-1154. A shock moving into the outskirts cannot be claimed with the current data, but it is already suggested in Fig.~\ref{fig:a1914}g,h by the hint of a slope change in the upstream power-law in correspondence with the outer edge of the region that we used to extract the upstream spectrum. Another SB feature in the W direction is highlighted by the GGM images and confirmed by the profile shown in Fig.~\ref{fig:a1914}i. Its compression factor of $\compr = 1.33^{+0.08}_{-0.07}$ and temperature ratio achieved from spectral analysis of $\rat = 1.27^{+0.26}_{-0.21}$ allow us to claim the presence of a weak shock with Mach number consistently derived from the SB and temperature jumps, \ie\ $\machsb = 1.22^{+0.06}_{-0.05}$ and $\machkt = 1.28^{+0.26}_{-0.21}$ respectively. This underlines the striking similarity of A1914 with other head-on mergers where a counter-shock (\ie\ a shock in the opposite direction of the infalling subcluster) has been detected, such as the Bullet cluster \citep{shimwell15} and El Gordo \citep{botteon16gordo}, with which it also shares a similar double-tail X-ray morphology.
\subparagraph{A2104.} This is a rich cluster at $z=0.153$. Few studies exist in the literature on A2104. \citet{pierre94} first revealed with \rosat\ that this system is very luminous in the X-rays and has a hot ICM. This result was confirmed more recently with \chandra\ \citep{gu09}, which also probed a slight elongation of the ICM in the NE-SW direction (Fig.~\ref{fig:a2104}a), and a temperature profile declining toward the cluster center \citep{baldi07}. \\
The maps of the ICM thermodynamical quantities (Fig.~\ref{fig:a2104}d,e,f) and GGM filtered images (Fig.~\ref{fig:a2104}b,c) of A2104 confirm an overall high temperature of the system as well as some SB contrasts in the ICM. We extracted SB profiles across two sectors toward the SE and one toward the SW. The most evident density jump ($\compr = 1.54^{+0.16}_{-0.14}$) is detected for the SE ``outer'' sector shown in Fig.~\ref{fig:a2104}h, while the others show only the hint of a discontinuity (Fig.~\ref{fig:a2104}g,i). However, the fit statistics of the broken power-law and single power-law models indicate that the jump model is in better agreement with the data in both cases, being respectively $\chisqdof=17.2/16$ and $\chisqdof=37.4/18$ for the SE ``inner'' sector ($3.1\sigma$ significance, F-test analysis) and $\chisqdof=64.5/63$ and $\chisqdof=122.5/65$ for the SW sector ($6.0\sigma$ significance, F-test analysis). Spectral analysis only allowed us to find a clear temperature jump for the SE ``inner'' edge, leaving the nature of the other two SB jumps more ambiguous. The temperature ratio across the SE ``inner'' sector is $\rat = 1.33^{+0.27}_{-0.19}$, leading us to claim a shock with Mach number $\machkt = 1.34^{+0.26}_{-0.20}$, comparable to the one computed from the upper limit on the compression factor ($\compr < 1.47$) of the SB jump, \ie\ $\machsb < 1.32$.
\subparagraph{A2218.} Located at $z=0.176$, this cluster is one of the most spectacular gravitational lenses known \citep{kneib96}. The system is in a dynamically unrelaxed state, as revealed by its irregular X-ray emission (Fig.~\ref{fig:a2218}a; \citealt{machacek02}) and by the substructures observed in the optical \citep{girardi97}. Detailed spectral analysis already provided indication of a hot ICM in the cluster center \citep{govoni04chandra, pratt05, baldi07}. A small and faint radio halo has also been detected in this system \citep{giovannini00}. \\
Four \chandra\ observations exist on A2218. Unfortunately, two of these (\obsid s 553 and 1454) cannot be used for the spectral analysis because, as mentioned above for A370 and A1914, they are early \chandra\ observations for which the ACIS background modeling is not possible (see notes in Tab.~\ref{tab:chandra_obs} for more details); hence we only used the remaining two \obsid s to produce the maps shown in Fig.~\ref{fig:a2218}. \\
The low counts on A2218 result in maps of the ICM thermodynamical quantities with large bins, as shown in Fig.~\ref{fig:a2218}d,e,f. The ICM temperature is peaked toward the cluster center, in agreement with previous studies \citep{pratt05, baldi07}. The analysis of GGM images highlights the presence of rapid SB variations in more than one direction. The SB profile toward the N shows the greatest of these jumps, corresponding to $\compr = 1.47^{+0.21}_{-0.18}$ (Fig.~\ref{fig:a2218}g). From the spectral analysis we obtain a temperature ratio $\rat = 1.38^{+0.40}_{-0.28}$ across the edge, indicating the presence of a shock with consistent Mach numbers derived from the SB jump, \ie\ $\machsb = 1.32^{+0.15}_{-0.13}$, and from the temperature jump, \ie\ $\machkt = 1.39^{+0.37}_{-0.29}$. The presence of a shock in this cluster region is consistent with the temperature map variations reported in \citet{govoni04chandra}. In the SE direction, there is indication of two discontinuities from the SB profile analysis (Fig.~\ref{fig:a2218}h,i): spectra suggest that the ``inner'' discontinuity is possibly a cold front (although the temperature jump is not clearly detected, \ie\ $\rat = 0.84^{+0.35}_{-0.17}$), while the ``outer'' one is consistent with a shock ($\rat = 1.44^{+0.48}_{-0.33}$) and might be connected with the SE edge of the radio halo. The shock Mach numbers derived from the SB and temperature jumps are $\machsb = 1.17^{+0.10}_{-0.09}$ and $\machkt = 1.45^{+0.43}_{-0.33}$, respectively. The SB profile taken in the SW region shows the hint of a kink (Fig.~\ref{fig:a2218}j); in this case the broken power-law model ($\chisqdof = 7.0/15$) yields an improvement over a single power-law fit ($\chisqdof = 15.0/17$), which according to the F-test corresponds to a null-hypothesis probability of $3 \times 10^{-3}$ ($3.0\sigma$ level). Spectral analysis leaves the nature of this feature uncertain.
\subparagraph{Triangulum Australis.} It is the closest ($z=0.051$) cluster of our sample. Despite its proximity, it has been overlooked in the literature due to its low Galactic latitude. \citet{markevitch96triangulum} performed the most detailed X-ray analysis to date on this object using \asca\ and \rosat\ and revealed an overall hot temperature ($\sim10$ \kev) in its elongated ICM (Fig.~\ref{fig:triangulum}a). Neither \xmm\ nor \chandra\ dedicated studies have been published on this system. Its $K_0$ value is reported neither in \citet{cavagnolo09} nor in \citet{giacintucci17}; nonetheless, a low-entropy core was excluded by \citet{rossetti10}. Recently, a diffuse radio emission classified as a halo has been detected \citep{scaife15, bernardi16}. \\
Three observations of Triangulum Australis can be found in the \chandra\ data archive. However, the oldest two (\obsid s 1227 and 1281) are calibration observations from the commissioning phase and took place less than two weeks after \chandra\ first light, when the calibration products had very large uncertainties. For this reason, we only used \obsid\ 17481 in our analysis. \\
From the maps of the ICM thermodynamical quantities in Fig.~\ref{fig:triangulum}d,e,f, one can infer the complex dynamical state of Triangulum Australis. The GGM image filtered on the larger scale gives a hint of a straight SB structure in the E direction, which is described by our broken power-law fit ($\compr \sim 1.3$) in Fig.~\ref{fig:triangulum}g. However, no temperature jump is detected across the edge, giving no clue about the origin of this SB feature. We mention that this region was also highlighted by \citet{markevitch96triangulum} with \asca\ and \rosat\ as a direct proof of recent or ongoing heating of the ICM in this cluster.
\subsubsection{Summary of the detected edges}\label{sec:summary}
\begin{table*}
\centering
\caption{Properties of the detected jumps. Upper and lower bound errors on $\rat$ and $\mathcal{P}$ were computed by adding separately the negative error bounds and the positive error bounds in quadrature. Mach numbers from the SB and temperature jumps are reported for shocks (S); for discontinuities whose nature is still uncertain (U), only the Mach number derived from the SB is displayed, while for spectroscopically confirmed cold fronts (CF) the Mach number determination is not applicable (n.a.).}
\label{tab:results}
\begin{tabular}{lcccccccc}
\hline
Cluster name & & Position & \compr\ & $\rat$ & $\mathcal{P}$ & \machsb\ & \machkt\ & Nature \\
\hline
\multirow{2}*{A370} & \ldelim\{{2}{1mm} & E & $1.48^{+0.11}_{-0.10}$ & $\ldots$ & $\ldots$ & $1.33^{+0.08}_{-0.07}$ & $\ldots$ & U \\
& & W & $1.56^{+0.13}_{-0.12}$ & $\ldots$ & $\ldots$ & $1.38^{+0.10}_{-0.09}$ & $\ldots$ & U \\
\multirow{2}*{A399} & \ldelim\{{2}{1mm} & SE inner & $1.72^{+0.13}_{-0.12}$ & $0.74^{+0.14}_{-0.12}$ & $1.27^{+0.26}_{-0.22}$ & n.a. & n.a. & CF \\
& & SE outer & $1.45^{+0.10}_{-0.10}$ & $1.20^{+0.39}_{-0.26}$ & $1.74^{+0.58}_{-0.40}$ & $1.31^{+0.07}_{-0.07}$ & $\ldots$ & U \\
A401 & & SE & $1.39^{+0.04}_{-0.04}$ & $0.78^{+0.07}_{-0.06}$ & $1.08^{+0.10}_{-0.09}$ & n.a. & n.a. & CF \\
\multirow{2}*{MACS J0417.5-1154} & \ldelim\{{2}{1mm} & NW & $2.50^{+0.29}_{-0.25}$ & $<0.59$ & $<1.64$ & n.a. & n.a. & CF \\
& & SE & $2.44^{+0.31}_{-0.25}$ & $0.44^{+0.17}_{-0.10}$ & $1.07^{+0.44}_{-0.27}$ & n.a. & n.a. & CF \\
RXC J0528.9-3927 & & W & $1.51^{+0.10}_{-0.09}$ & $0.73^{+0.25}_{-0.14}$ & $1.10^{+0.38}_{-0.22}$ & n.a. & n.a. & CF \\
\multirow{3}*{MACS J0553.4-3342} & \ldelim\{{3}{1mm} & E inner & $2.49^{+0.32}_{-0.26}$ & $0.62^{+0.33}_{-0.18}$ & $1.54^{+0.85}_{-0.48}$ & n.a. & n.a. & CF \\
& & E outer & $1.82^{+0.35}_{-0.29}$ & $2.00^{+1.14}_{-0.63}$ & $3.64^{+2.19}_{-1.28}$ & $1.58^{+0.30}_{-0.22}$ & $1.94^{+0.77}_{-0.56}$ & S \\
& & W & $1.70^{+0.12}_{-0.11}$ & $0.33^{+0.22}_{-0.12}$ & $0.56^{+0.38}_{-0.21}$ & n.a. & n.a. & CF \\
AS592 & & SW & $1.99^{+0.17}_{-0.15}$ & $1.61^{+0.66}_{-0.43}$ & $3.20^{+1.34}_{-0.89}$ & $1.72^{+0.15}_{-0.12}$ & $1.61^{+0.54}_{-0.42}$ & S \\
\multirow{3}*{A1914} & \ldelim\{{3}{1mm} & E upper & $1.48^{+0.11}_{-0.12}$ & \multirow{2}*{$0.40^{+0.21}_{-0.12}$} & $0.59^{+0.31}_{-0.18}$ & \multirow{2}*{n.a.} & \multirow{2}*{n.a.} & \multirow{2}*{CF} \\
& & E lower & $1.64^{+0.13}_{-0.12}$ & & $0.66^{+0.35}_{-0.20}$ & & \\
& & W & $1.33^{+0.08}_{-0.07}$ & $1.27^{+0.26}_{-0.21}$ & $1.69^{+0.36}_{-0.29}$ & $1.22^{+0.06}_{-0.05}$ & $1.28^{+0.26}_{-0.21}$ & S \\
\multirow{3}*{A2104} & \ldelim\{{3}{1mm} & SE inner & $<1.47$ & $1.33^{+0.27}_{-0.19}$ & $<2.36$ & $<1.32$ & $1.34^{+0.26}_{-0.20}$ & S \\
& & SE outer & $1.54^{+0.16}_{-0.14}$ & $0.77^{+0.30}_{-0.21}$ & $1.19^{+0.48}_{-0.34}$ & $1.37^{+0.12}_{-0.10}$ & $\ldots$ & U \\
& & SW & $1.27^{+0.07}_{-0.06}$ & $0.85^{+0.20}_{-0.15}$ & $1.08^{+0.26}_{-0.20}$ & $1.18^{+0.05}_{-0.04}$ & $\ldots$ & U \\
\multirow{4}*{A2218} & \ldelim\{{4}{1mm} & N & $1.47^{+0.21}_{-0.18}$ & $1.38^{+0.40}_{-0.28}$ & $2.03^{+0.66}_{-0.48}$ & $1.32^{+0.15}_{-0.13}$ & $1.39^{+0.37}_{-0.29}$ & S \\
& & SE inner & $1.38^{+0.14}_{-0.11}$ & $0.84^{+0.35}_{-0.17}$ & $1.16^{+0.50}_{-0.25}$ & $1.26^{+0.10}_{-0.08}$ & $\ldots$ & U \\
& & SE outer & $1.26^{+0.14}_{-0.14}$ & $1.44^{+0.48}_{-0.33}$ & $1.81^{+0.64}_{-0.46}$ & $1.17^{+0.10}_{-0.09}$ & $1.45^{+0.43}_{-0.33}$ & S \\
& & SW & $1.41^{+0.23}_{-0.21}$ & $1.41^{+0.83}_{-0.49}$ & $1.99^{+1.21}_{-0.75}$ & $1.28^{+0.17}_{-0.14}$ & $\ldots$ & U \\
Triangulum Australis & & E & $1.34^{+0.04}_{-0.04}$ & $1.00^{+0.15}_{-0.10}$ & $1.34^{+0.20}_{-0.14}$ & $1.23^{+0.03}_{-0.03}$ & $\ldots$ & U \\
\hline
\end{tabular}
\end{table*}
Overall, we found six shocks, eight cold fronts and eight more discontinuities with uncertain origin due to the poorly constrained temperature jump. The properties of the detected edges are summarized in Tab.~\ref{tab:results}, while the distributions of \compr\ and $\rat$ are displayed in Fig.~\ref{fig:histogram}. Although we are not carrying out a statistical analysis of shocks and cold fronts in galaxy clusters, we notice that the majority of the reported jumps are associated with weak discontinuities with $\compr < 1.7$ and $0.5<\rat<1.5$. This may indicate that the GGM filters make it possible to pick up even small SB jumps that are usually missed in a visual inspection of unsmoothed cluster images. \\
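For reference, the GGM filtering used throughout this work to highlight such gradients simply computes the gradient magnitude of the image after convolution with a Gaussian kernel; a minimal sketch (our own, in Python; the array \texttt{img} stands in for an exposure-corrected count image) is:
\begin{verbatim}
import numpy as np
from scipy import ndimage

img = np.random.poisson(5.0, size=(512, 512)).astype(float)  # placeholder image

# Gaussian gradient magnitude on the two scales used in panels b) and c);
# sigma is in pixels, a larger sigma emphasizes broader SB gradients
ggm4 = ndimage.gaussian_gradient_magnitude(img, sigma=4)
ggm8 = ndimage.gaussian_gradient_magnitude(img, sigma=8)
\end{verbatim}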
\indent
We mention that in the case of a shock the SB and temperature jumps provide two independent constraints on the Mach number (Eq.~\ref{eq:mach-from-dens}, \ref{eq:mach-from-temp}). However, so far, only a few shocks reported in the literature have Mach numbers consistently derived from both jumps (\eg\ A520, \citealt{markevitch05}; A665, \citealt{dasadia16a665}; A115, \citealt{botteon16a115}). Instead, in our analysis there is a general agreement between these two quantities, further supporting the robustness of the results. \\
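For completeness, the following minimal sketch (our own) inverts the standard Rankine--Hugoniot jump conditions for a monatomic gas ($\gamma=5/3$), which reproduce the values of \machsb\ and \machkt\ in Tab.~\ref{tab:results}; we assume here that these are the conditions behind Eq.~\ref{eq:mach-from-dens} and \ref{eq:mach-from-temp}:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

G = 5.0 / 3.0  # adiabatic index of a monatomic gas

def mach_from_compression(C):
    # invert C = (G+1) M^2 / ((G-1) M^2 + 2)
    return np.sqrt(2.0 * C / ((G + 1.0) - (G - 1.0) * C))

def mach_from_temperature(rat):
    # invert Td/Tu = (2 G M^2 - (G-1)) ((G-1) M^2 + 2) / ((G+1)^2 M^2);
    # valid for rat > 1, i.e. for shocks
    f = lambda M: ((2*G*M**2 - (G-1)) * ((G-1)*M**2 + 2)
                   / ((G+1)**2 * M**2)) - rat
    return brentq(f, 1.0 + 1e-9, 100.0)

# AS592 SW edge: C = 1.99 and Td/Tu = 1.61 as quoted in the text
print(mach_from_compression(1.99))   # ~1.72
print(mach_from_temperature(1.61))   # ~1.61
\end{verbatim}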
\indent
One could argue that the nature of the weakest discontinuities claimed is constrained at a level of slightly more than $1\sigma$ from the temperature ratio. This is a consequence of the small temperature jump implied by these fronts and the large errors associated with the spectral analysis (despite the careful background treatment performed). However, we can check the presence of pressure discontinuities at these edges by combining the density and temperature jumps obtained from the SB and spectral analysis. The values of $\mathcal{P} \equiv P_d / P_u = \compr \times \rat$ computed for all the discontinuities are reported in Tab.~\ref{tab:results} and show at higher confidence levels the presence of a pressure discontinuity in the shocks and the absence of a pressure jump in the cold fronts, strengthening our claims. Although this procedure combines a deprojected density jump with a temperature evaluated along the line of sight, we verified that, given the uncertainties on the temperature determination and the errors introduced by a deprojection analysis, the projected and deprojected values of the temperature and pressure ratios are statistically consistent even in the cases of the innermost edges (\ie\ those more affected by projection effects). \\
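To make the error budget explicit, the asymmetric uncertainties on $\mathcal{P}$ quoted in Tab.~\ref{tab:results} follow from combining the relative upper (and, separately, lower) error bounds of \compr\ and $\rat$ in quadrature, as in the following minimal sketch (our own):
\begin{verbatim}
import numpy as np

def pressure_ratio(C, C_up, C_lo, rat, rat_up, rat_lo):
    P = C * rat
    # relative upper and lower error bounds added separately in quadrature
    up = P * np.hypot(C_up / C, rat_up / rat)
    lo = P * np.hypot(C_lo / C, rat_lo / rat)
    return P, up, lo

# AS592 SW edge: C = 1.99 (+0.17/-0.15) and rat = 1.61 (+0.66/-0.43)
print(pressure_ratio(1.99, 0.17, 0.15, 1.61, 0.66, 0.43))  # ~3.20 +1.34 -0.89
\end{verbatim}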
\indent
With the present work, we increased the number of known shocks and cold fronts in galaxy clusters. The detected shocks all have $\mach < 2$, likely due to the combination of the fact that shocks crossing the central Mpc regions of galaxy clusters are weak \citep[\eg][and references therein]{vazza12why} and that fast-moving shocks would be present for a short time in the ICM. \\
\indent
The distinction between shocks and cold fronts for the eight discontinuities with uncertain origin can tentatively be inferred from the current values of \rat\ and $\mathcal{P}$ reported in Tab.~\ref{tab:results}. In this respect, deeper observations of these edges will definitely shed light on their nature.
\begin{figure}
\centering
\includegraphics[width=\hsize,trim={0.1cm 13.8cm 0.5cm 0},clip]{figure/comp-isto.pdf}
\includegraphics[width=\hsize,trim={0.1cm 13.8cm 0.5cm 0},clip]{figure/temp-isto.pdf}
\caption{Distribution of the central values of \compr\ (\textit{top}) and $\rat$ (\textit{bottom}) reported in Tab.~\ref{tab:results}.}
\label{fig:histogram}
\end{figure}
\subsection{Non-detections}\label{sec:non-detection}
Our analysis did not allow us to detect any edge in the following objects: A2813 ($z=0.292$), A1413 ($z=0.143$), A1689 ($z=0.183$) and A3827 ($z=0.098$). All these systems seem to have a more regular X-ray morphology (\cref{fig:a2813,fig:a1413,fig:a1689,fig:a3827}) with respect to the other clusters of the sample.
\subparagraph{A2813.} This cluster has a roundish ICM morphology (Fig.~\ref{fig:a2813}a); nonetheless its value of $K_0=268\pm44$ \kevcmsq\ is among the highest in our sample (\cf\ Tab.~\ref{tab:sample}). The core is slightly elongated in the NE-SW direction and has a temperature $\sim7.7$ \kev, consistent with the \xmm\ value reported by \citet{finoguenov05}. The maps shown in Fig.~\ref{fig:a2813} were produced using all the \obsid s listed in Tab.~\ref{tab:chandra_obs}. We mention that the original target of the \aciss\ datasets (\obsid s 16366, 16491, 16513) is XMMUJ0044.0-2033; however, A2813 is found to lie entirely on an \acisi\ chip that was kept on during the observations. These data provide the largest amount ($\sim80\%$) of the total exposure time on A2813 and were used in our analysis despite the unavoidable degradation of the instrument spatial resolution due to the \acisi\ chip being off-axis in this observing configuration.
\begin{figure*}
\centering
\begin{tabular}{cc}
\multirow{2}{*}{\subfloat{\subfigimgwhitebig[width=.6\textwidth]{\quad a)}{figure/01_a2813/sb.png}}} & \\
& \vspace{0.15cm}\hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad b)}{figure/01_a2813/ggm4.png}} \\
& \hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad c)}{figure/01_a2813/ggm8.png}}
\end{tabular}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad d)}{figure/01_a2813/kT.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad e)}{figure/01_a2813/Plog10.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad f)}{figure/01_a2813/S.png}}
\caption{A2813. The same as for Fig.~\ref{fig:a399}. The goodness of the fits is reported in Fig.~\ref{fig:a2813_errors}.}
\label{fig:a2813}
\end{figure*}
\subparagraph{A1413.} It has a borderline value of $K_0$ with respect to the threshold set in this work (\cf\ Tab.~\ref{tab:sample}). The distribution of cluster gas is somewhat elliptical, elongated in the N-S direction (Fig.~\ref{fig:a1413}a). Our analysis and previous \chandra\ temperature profiles \citep{vikhlinin05, baldi07} are in contrast with \xmm, which does not provide evidence of a CC \citep{pratt02}. This discrepancy is probably due to the poorer PSF of the latter instrument. A radio mini-halo covering the CC region is also found by \citet{govoni09}. The region in the NW direction with a possible discontinuity suggested by the GGM filtered images did not show evidence for an edge in the SB profile fitting (Fig.~\ref{fig:a1413_noedge}).
\begin{figure*}
\centering
\begin{tabular}{cc}
\multirow{2}{*}{\subfloat{\subfigimgwhitebig[width=.6\textwidth]{\quad a)}{figure/09_a1413/sb.png}}} & \\
& \vspace{0.15cm}\hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad b)}{figure/09_a1413/ggm4.png}} \\
& \hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad c)}{figure/09_a1413/ggm8.png}}
\end{tabular}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad d)}{figure/09_a1413/kT.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad e)}{figure/09_a1413/Plog10.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad f)}{figure/09_a1413/S.png}}
\caption{A1413. The same as for Fig.~\ref{fig:a399}. The goodness of the fits is reported in Fig.~\ref{fig:a1413_errors}.}
\label{fig:a1413}
\end{figure*}
\subparagraph{A1689.} It represents a massive galaxy cluster deeply studied in the optical band because of its weak and strong gravitational lensing \citep[\eg][]{broadhurst05, limousin07}. The X-ray emission is quasi-spherical and centrally peaked (Fig.~\ref{fig:a1689}a), features that apparently indicate a CC. Nevertheless, optical \citep{girardi97} and \xmm\ observations \citep{andersson04} both suggest that the system is undergoing a head-on merger seen along the line of sight, based either on the presence of optical substructures or on the asymmetric temperature of the ICM, which is hotter in the N. Our results confirm the presence of an asymmetry in the temperature distribution (Fig.~\ref{fig:a1689}d). The fact that a radio halo is also detected \citep{vacca11} fits with the dynamically unrelaxed nature of the system.
\begin{figure*}
\centering
\begin{tabular}{cc}
\multirow{2}{*}{\subfloat{\subfigimgwhitebig[width=.6\textwidth]{\quad a)}{figure/10_a1689/sb.png}}} & \\
& \vspace{0.15cm}\hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad b)}{figure/10_a1689/ggm4.png}} \\
& \hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad c)}{figure/10_a1689/ggm8.png}}
\end{tabular}
\subfloat{\subfigimgblack[width=.3\textwidth]{\enspace d)}{figure/10_a1689/kT.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\enspace e)}{figure/10_a1689/Plog10.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\enspace f)}{figure/10_a1689/S.png}}
\caption{A1689. The same as for Fig.~\ref{fig:a399}. The goodness of the fits is reported in Fig.~\ref{fig:a1689_errors}.}
\label{fig:a1689}
\end{figure*}
\subparagraph{A3827.} It constitutes another cluster studied in detail mainly for its optical properties. Its central galaxy is one of the most massive known in a cluster center and exhibits strong lensing features \citep{carrasco10}. Gravitational lensing also indicates a separation between the stars and the center of mass of the dark matter in the central galaxies \citep{massey15}, making A3827 a good candidate to investigate dark matter self-interactions \citep{kahlhoefer15}. On the X-ray side, the cluster emission is roughly spherical (Fig.~\ref{fig:a3827}a), with an irregular temperature distribution (Fig.~\ref{fig:a3827}d) and a mean value of $\sim7$ \kev\ \citep{leccardi08}. Two regions in the E and W directions suggested by the GGM images did not show any discontinuity in the SB profile fitting (Fig.~\ref{fig:a3827_noedge}).
\begin{figure*}
\centering
\begin{tabular}{cc}
\multirow{2}{*}{\subfloat{\subfigimgwhitebig[width=.6\textwidth]{\quad a)}{figure/15_a3827/sb.png}}} & \\
& \vspace{0.15cm}\hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad b)}{figure/15_a3827/ggm4.png}} \\
& \hspace{-0.3cm}\subfloat{\subfigimgwhiteggm[width=.28\textwidth]{\quad c)}{figure/15_a3827/ggm8.png}}
\end{tabular}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad d)}{figure/15_a3827/kT.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad e)}{figure/15_a3827/Plog10.png}}
\subfloat{\subfigimgblack[width=.3\textwidth]{\quad f)}{figure/15_a3827/S.png}}
\caption{A3827. The same as for Fig.~\ref{fig:a399}. The goodness of the fits is reported in Fig.~\ref{fig:a3827_errors}.}
\label{fig:a3827}
\end{figure*}
\section{Conclusions}\label{sec:conclusions}
Shocks and cold fronts produced in a collision between galaxy clusters give information on the dynamics of the merger and can be used to probe the microphysics of the ICM. Nonetheless, their detection is challenged by the low number of X-ray counts in cluster outskirts and by possible projection effects that can hide these sharp edges. For this reason, only a few of them have been successfully detected both in SB and in temperature jumps. \\
\indent
In this work we explored a combination of different analysis approaches of X-ray observations to firmly detect and characterize edges in NCC massive galaxy clusters. Starting from GGM filtered images on different scales and the maps of the ICM thermodynamical quantities of the cluster, one can pinpoint ICM regions displaying significant SB and/or temperature variations. These can thus be investigated by fitting SB profiles, whose extraction sectors have to be carefully chosen in order to properly describe the putative shock or cold front. Once the edge is well located, spectral analysis on dedicated upstream and downstream regions can also be performed in an optimized way. The discontinuity is firmly detected if the jump is observed both in images and in spectra. \\
\indent
In this paper we selected 37 massive NCC clusters with adequate X-ray data in the \chandra\ archive to search for new discontinuities driven in the ICM by merger activity. In particular, we looked at 15 of these systems for which no claim of edges was published. We were able to characterize at least one SB jump in 11 out of these 15 clusters of the sample. The performed SB analysis relies on the spherical assumption. Among the detected edges, we also constrained the temperature jump for 14 discontinuities (six shocks and eight cold fronts), while for eight edges the classification is still uncertain. As a further check, we also computed the pressure ratios across the edges and verified the presence of a pressure discontinuity in the shocks and the absence of a pressure jump in the cold fronts. \\
\indent
Our work provides a significant contribution to the search for shocks and cold fronts in merging galaxy clusters, demonstrating the strength of combining diverse techniques aimed at identifying edges in the ICM. Indeed, many shocks and cold fronts reported in the literature have been discovered because either they were evident in the unsmoothed cluster images or there were priors suggesting their existence (\eg\ merger geometry and/or presence of a radio relic). The use of edge detection algorithms (such as the GGM filter) in particular helps to highlight even small SB gradients, which can then be investigated with SB profile and spectral fitting. Among the small jumps detected we found low Mach number ($\mach < 2$) shocks; this is a possible consequence of the fact that the central regions of the ICM are crossed by weak shocks while the strongest ones quickly fade in the cluster outskirts, making their observation more difficult (see also the discussion in \citealt{vazza12why} on the occurrence of radio relics in clusters as a function of radius). \\
\indent
Many shocks in the literature were found thanks to the presence of previously observed radio relics (or edges of radio halos). As a consequence, the radio follow-up of the shocks detected in this paper will be useful to study the connection between weak shocks and non-thermal phenomena in the ICM.
\section*{Acknowledgments}
We thank the anonymous referee for useful comments. AB and GB acknowledge partial support from PRIN-INAF 2014 grant. AB thanks V.~Cuciti and A.~Ignesti for probing the legibility of the images. This research has made use of the SZ-Cluster Database operated by the Integrated Data and Operation Center (IDOC) at the Institut d'Astrophysique Spatiale (IAS) under contract with CNES and CNRS. The scientific results reported in this paper are based on observations made by the \chandra\ X-ray Observatory. This research made use of the NASA/IPAC Extragalactic Database (NED), operated by the Jet Propulsion Laboratory (California Institute of Technology), under contract with the National Aeronautics and Space Administration. This research made use of APLpy, an open-source plotting package for Python hosted at http://aplpy.github.com.
\bibliographystyle{mn2e}
In counting complexity we explore the computational complexity of functions that count the number of solutions of a decision problem. In general, the counting versions of problems are computationally more difficult than their decision versions. For example, given a DNF formula, it is easy to determine if it is satisfiable, but it seems hard to count the number of its satisfying assignments. Another example is counting independent sets of all sizes for a given graph. It is obviously easy to tell if there is some independent set of any size, since a single node is always an independent set.
However, counting, or even approximating, the number of all independent sets is one of the hardest counting problems.
In this paper we consider the class of all self-reducible problems with easy decision version. A problem here is called self-reducible if the computation on a given instance can be reduced to a polynomial number of sub-instances of the same problem, and the height of the corresponding self-reducibility tree is polynomial in the size of the input. It is proven in \cite{PZ06} that the Karp closure of self-reducible problems with easy decision version is exactly the class TotP, which is the class of all functions $f$ for which there exists a non-deterministic polynomial-time Turing machine such that the number of all computation paths on input $x$ equals $f(x)+1$. A great number of problems of interest in the literature are self-reducible, and many of them have easy decision version.
Examples of such problems, from many different scientific areas are $\#$DNF-Sat, $\#$Monotone-2-Sat, $\#$Non-Cliques, $\#$NonIndependent Sets, NonNegative Permanent, Ranking,
Graph reliability,
$\#$matchings, computing the determinant of a matrix, computing the partition function of several models from statistical physics, like the Ising and the hard-core model, counting colorings of a graph with a number of colors bigger than the maximum degree, counting bases of a matroid, $\#$independent sets of all sizes \cite{SinclairNotesMC}, and many more. (Definitions of the above and references can be found in \cite{PZ06,AB09, SinclairNotesMC}).
Since computing counting problems exactly seems hard (unless P=NP), a first question to ask about them is their approximability status. Concerning multiplicative error, it is proven in \cite{JS89} that for self-reducible problems, even a polynomial multiplicative-error deterministic (or randomized) algorithm can be transformed into an FPTAS (respectively an FPRAS), which means that we can approximate the function within any factor $1+\epsilon$, $\epsilon>0$, in time polynomial in $n$ and $1/\epsilon$. So, for a self-reducible problem either there exists an FPTAS (respectively an FPRAS), or it is not approximable within any polynomial factor unless P=NP (respectively P=RP).
The same holds even for problems with easy decision version. For example there is an FPRAS for $\#$DNF SAT (satisfying assignments of a DNF formula), but it is proved \cite{DGGJ03} that $\#$SAT can be reduced to $\#$IS (independent sets of all sizes), under an approximation-preserving reduction, so since $\#$SAT is inapproximable unless NP=RP, the same holds for $\#$IS.
So a second question to ask for such problems, especially if they are inapproximable within a multiplicative error, is whether we can achieve an additive error for the fraction of accepting solutions over the space of solutions (e.g. the number of independent sets over the number of all subsets of nodes, or the number of satisfying assignments over $2^n$). Even for problems that admit multiplicative approximation, such an additive approximation algorithm is not comparable to a multiplicative one, in the sense that it can give either a better or a worse result, depending on the input. It is better when the number of solutions is big, and worse if the number of solutions is very small.
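As a simple illustration: an additive error $\epsilon$ on $p=f(x)/2^n$ corresponds to an absolute error $\epsilon 2^n$ on $f(x)$, so if $f(x)=2^{n-2}$ this amounts to a multiplicative error of only $4\epsilon$, whereas if $f(x)=2^{n/2}$ then even $\epsilon=2^{-n/4}$ allows the estimate of $f(x)$ to be off by a factor of order $2^{n/4}$.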
We investigate this question and we give a randomized polynomial time algorithm with additive error $\epsilon, \forall\epsilon>0$, for all problems/functions in TotP.
Another question of interest is the exact (probably exponential) deterministic time of computing or approximating such problems. We show, among other things, that we can have a randomized approximation scheme (i.e. a multiplicative error as small as we want) in time $O(\epsilon^{-2} poly(n)2^{2n/3})$, which is strictly smaller than that of the exhaustive-search solution.
Finally we have the following connection to derandomization and circuit lower bounds. There is a well-studied problem, the Circuit Acceptance Probability Problem (CAPP): given a circuit $C$ with $n$ input gates, estimate the proportion of satisfying assignments, i.e. if $p=\frac{\#\textrm{satisfying assignments}}{2^n}=\Pr_x[C(x)=1]$, find $\hat{p}=p\pm \epsilon$, for e.g. $\epsilon=1/6$. This is connected to derandomization, and to circuit lower bounds. In particular it is shown by Williams in \cite{Williams10} that if CAPP can be solved, even non-deterministically, and even in time of order $2^{\delta n}poly(n)$ for some $\delta<1$, then NEXP$\nsubseteq$P/poly.
We show that our algorithm can be used to solve the Circuit Acceptance Probability Problem (CAPP) in polynomial time and with high probability, for the family of all circuits for which the problems of either (a) counting the number of satisfying assignments, or (b) counting the number of unsatisfying assignments, belong to TotP. For example, CNF formulas belong to this class, as well as other kinds of circuits that we mention. We believe that this fact, together with some sharpening and combinations of the proofs in some references that we will mention (in the related work section), will yield interesting, non-trivial circuit lower bounds. We have left the latter for further research.
\subsection{Our Contribution}
Until now, problems in this class have been treated individually, and algorithms have been designed based on the specific characteristics of each problem. We instead explore what can be done if we exploit their two basic, common structural characteristics, which are self-reducibility and easy decision version.
Based on these properties, we present a randomized algorithm which achieves, for every $\epsilon>0$, an additive $\epsilon$-approximation to the quantity $p=f(x)/2^{n'}$, with running time polynomial in the size of the input and in $1/\epsilon$ (precisely $O(\epsilon^{-2})$), where $n'$ is the amount of non-determinism for that problem. This is the best we could expect, in the sense that any multiplicative approximation would imply NP=RP. We also show that for many interesting problems $n'=n+c$ for some constant $c$, and so all results hold with $n'$ substituted with $n$ (i.e. the size of the input).
Our algorithm relies on non-uniform sampling via a Markov chain that moves back and forth among the internal nodes of the computation tree of the corresponding NPTM on the given input (see below for details on the basic ideas).
We also show that for any function $f$ in this class we can decide deterministically if $f(x)\leq g(x)$ in time $g(x)\cdot poly(n)$, where $n$ is the size of input $x$.
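A minimal sketch of this threshold test (our own illustration, not the formal construction): assuming a hypothetical successor oracle \texttt{children(node)}, computable in $poly(n)$ time per node, which returns the non-deterministic successors of a node in the computation tree of the TotP machine, a depth-first traversal can abort as soon as more than $g(x)$ internal nodes have been counted, so at most $O(g(x))$ nodes are ever visited:
\begin{verbatim}
def f_at_most(root, g, children):
    """Decide whether f(x) <= g, where f(x) is the number of internal
    nodes of the (binary) computation tree rooted at `root`.
    `children` is a hypothetical successor oracle: it returns the list
    of non-deterministic successors of a node (empty for a leaf)."""
    stack, internal = [root], 0
    while stack:
        node = stack.pop()
        succ = children(node)
        if succ:                # internal node of the computation tree
            internal += 1
            if internal > g:    # early abort: f(x) > g
                return False
            stack.extend(succ)
    return True                 # whole tree visited: f(x) = internal <= g

# toy computation tree: node -> list of successors (3 internal nodes)
toy = {"r": ["a", "b"], "a": ["l1", "l2"], "b": ["l3", "l4"],
       "l1": [], "l2": [], "l3": [], "l4": []}
print(f_at_most("r", 3, lambda v: toy[v]))  # True:  3 <= 3
print(f_at_most("r", 2, lambda v: toy[v]))  # False: 3 >  2
\end{verbatim}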
We also show the following results, concerning exponential time approximability.
Our algorithm can be viewed as computing $f(x)$ with an absolute error $\epsilon\cdot 2^n$ for every $\epsilon>0$, so by setting $\epsilon$ accordingly, we get an absolute error of order $2^{(\beta+1)n/2}$ in time of order $2^{(1-\beta) n}poly(n)$, for every $\beta\in(0,1)$. We also show that we can have an approximation scheme (i.e. we can get a multiplicative error $\epsilon$, for every $\epsilon>0$) in time of order $(2^{(\beta+1)n/2}+2^{(1-\beta) n})poly(n)$ for every $\beta\in(0,1)$, and polynomial in $\epsilon^{-2}$. All these running times are better than that of the exhaustive-search solution.
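For instance, balancing the two exponential terms by setting $(\beta+1)/2=1-\beta$, i.e. $\beta=1/3$, recovers the $O(\epsilon^{-2} poly(n)2^{2n/3})$ bound stated in the introduction.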
Then we show how our algorithm can solve with high probability, in polynomial time, and for every $\epsilon$, the Circuit Acceptance Probability Problem, for all polynomial-size circuits for which the problems of either (a) counting the number of satisfying assignments, or (b) counting the number of unsatisfying assignments, belong to TotP, e.g. DNF formulas, CNF formulas, Monotone circuits, Tree-monotone circuits, etc.
Concerning improvements and extensions, we have that for TotP this algorithm is the best we can achieve unless NP=RP, and also that this kind of approximation is impossible to extend to $\#$P unless NEXP$\nsubseteq$P/poly. If any of these conjectures holds, our results can be viewed as a possible step towards proving it.
\subsection{The basic ideas}
A key element in our proof relies on a fact proved in \cite{PZ06}, that the Karp closure of self-reducible problems with easy decision version coincides with the counting class TotP. This is the class of functions $f$ in $\#$P for which there exists a polynomial-time non-deterministic Turing machine $M$ such that, for every input $x$, $f(x)$ equals the total number of leaves of the computation tree of $M$, minus $1$. Moreover, $M$ can always be chosen so that its computation tree is binary (although not full). So $f(x)$ also equals the number of internal nodes of this tree.
Our first idea is the following. Instead of trying to count the number of accepting paths/solutions for the input, we try to count the number of internal nodes of the computation tree of the corresponding NPTM $M$. This approach doesn't take into account any special characteristics of the problem at hand, but only the structural properties we already mentioned. So it can be applied to any problem in this class.
It is worth noting that in this way we reduce a problem whose set of solutions might be of some unknown structure, difficult to determine and understand, to a problem whose ``set of solutions'' (the internal nodes of the tree) has the very particular structure of some binary tree of height polynomial in $n=|x|$.
Our second idea is the following. In order to estimate the number of internal nodes of the computation tree, we could try to perform uniform sampling, e.g. with a random walk. However, a random walk on a tree in general needs time exponential in the height of the tree (polynomial in the number of nodes), and besides that, it can be proved that uniform sampling is impossible unless NP=RP. So instead, we design a Markov chain converging in polynomial time by construction, but whose stationary distribution, although not uniform, gives us all the information needed to estimate the number of nodes, and thus the value of the function.
\subsection{Related work-Comparisons- Open Questions}
We will give some related work, comparisons to our results, and open questions.
\subsubsection{On Counting Complexity}
Counting complexity was initiated by Valiant in \cite{Valiant79}, where he defined $\#$P and showed that computing the Permanent is $\#$P-complete under Cook reductions. For a survey on counting complexity see chapter 17 in \cite{AB09}.
As shown by Zachos et al. in \cite{KPZ99}, Cook reductions blur structural differences between counting classes, so several classes inside $\#$P have been defined and studied. $\#$PE was defined in \cite{Pagourtzis01} by Pagourtzis as the class of problems in $\#$P with decision version in P, TotP was introduced in \cite{KPSZ98} (see the definition in the preliminaries section), and in \cite{PZ06} it was shown that TotP coincides with the Karp closure of self-reducible problems in $\#$PE. Other classes related to TotP, with properties, relations, and completeness results, were studied e.g. in \cite{KPSZ98, Pagourtzis01, PZ06, KPSZ2001, BGPT, FFS91, SST95, HHKW05}.
Concerning the approximability of problems in TotP, no unified approach existed until now. Problems are studied individually, and for some of them it is shown that an FPRAS exists (i.e. multiplicative approximation within any factor, in time polynomial in the size of the input and in the inverse of the error), e.g. for counting satisfying assignments of DNF formulas \cite{KLM89}, and for counting perfect matchings \cite{JS96perm}, while for others it is proved that they are inapproximable unless P=NP (or NP=RP for the randomized case), e.g. $\#$IS: counting independent sets of all sizes in a graph \cite{DFJ02,Wei06}. Collections of relevant results, proofs and references can be found e.g. in \cite{Goldreich, AB09, SinclairNotesMC, Vazirani, Snotes}.
Two significant papers are related to our work. Firstly, Sinclair et al.\ in \cite{JS89} showed that for self-reducible problems, FPRAS is equivalent to uniform sampling, and that a polynomial-factor approximation implies an FPRAS. So problems in TotP either have an FPRAS or are inapproximable within a polynomial factor. Secondly, in \cite{DGGJ03} Goldberg et al.\ defined approximation-preserving reductions, and classified problems according to their (multiplicative) approximability. They also showed (among other things) that $\#$IS, which is in TotP, is interreducible under approximation-preserving reductions with \#SAT, which is considered inapproximable, since its decision version is NP-complete. Also, Bordewich showed in \cite{B11} that there exists an infinite number of approximability levels between the polynomial factor and the approximability of \#SAT, if NP$\neq$RP.
So for TotP, polynomial-factor multiplicative error in polynomial time is impossible unless NP=RP. Our results show that we can have (a) time strictly smaller than brute force for a RAS, and (b) polynomial time for additive error.
We have to note here that the result about the RAS does not extend to \#SAT through the reduction of Goldberg et al.\ in \cite{DGGJ03} that we mentioned earlier, because the reduction maps a formula on $n$ variables to a graph on $n^2$ vertices. However, the additive error results extend to \#SAT for CNF formulas, through a reduction to DNF that preserves the number of variables.
As for the question of whether we can have such an additive approximation for the whole of $\#$P, we cannot rule out this possibility, but as we will see in the discussion on the connections with circuit lower bounds, we know that this is impossible unless NEXP $\nsubseteq$ P/poly.
Another interesting open question is to find inside TotP a structural characterization of the class of problems that admit an FPRAS, i.e. to find what is the significant common property that makes them approximable.
\subsubsection{On Exponential Time Complexity}
The study of the exponential time complexity of problems in NP was initiated by Impagliazzo et al.\ in \cite{IPZ01}, where they showed exponential hardness results. For $\#$k-SAT there have been several algorithms in the literature.
In \cite{St85} Stockmeyer proved that approximate counting can be done by polynomial-time randomized algorithms with access to a $\Sigma_2$P oracle.
In \cite{Tr16} Traxler gave randomized constant-factor approximation algorithms, with running time exponential but smaller than $2^n\cdot poly(m,n)$, where $n$ is the number of variables and $m$ the number of clauses of the input, provided that k-SAT (the decision version) is solvable in $O(2^{cn} m^d)$ for some $0<c<1$ and $d\geq 1$ (i.e.\ provided that the ETH conjecture is false).
In \cite{T12} Thurley gives an approximation scheme, i.e.\ for all $\epsilon>0$ it achieves multiplicative error $\epsilon$ in time $O^*(\epsilon^{-2}c_k^n)$ for some $c_k<2$.
In \cite{IMR12} Impagliazzo et al.\ give a randomized exact counting algorithm running in time $O^*(2^{(1-\frac{1}{30k})n})$.
There is also an algorithm without theoretical guarantees, given by Gomes et al.\ in \cite{GSS06}, that implements Stockmeyer's idea with a SAT solver, with outstanding performance.
Since $\#3$-SAT is $\#$P-complete, all of TotP can be reduced to it, and so we can achieve such approximations too, using the above algorithms.
However since counting is in general harder than decision, it is meaningful to explore the exponential time complexity of counting even for problems with decision in P, since as we saw already, they might be inapproximable in polynomial time.
Our algorithm for TotP is better than all of the above, in the sense that we can have an approximation scheme in $O(\epsilon^{-2}2^{\gamma n})$ time for all $\gamma\in(2/3,1)$, without calls to any oracle and without any unproven assumptions. Impagliazzo's algorithm is better than ours in the sense that it gives exact counting, and it is worse in running time. Note of course that through a reduction to 3-SAT, if the number of variables of the resulting formula is more than $n$+constant (where $n$ is the size of the input of the original problem), the above algorithms perform even worse w.r.t.\ $n$.
We note again that a deterministic (resp.\ randomized) polynomial-time approximation scheme for TotP does not exist, unless P=NP (resp.\ NP=RP). It is an open problem whether we can have something in superpolynomial but subexponential time, like $n^{\log n}$.
\subsubsection{On Circuit Lower Bounds}
Excellent surveys on circuit complexity, covering the state of the art up to 2009, can be found in \cite{BS90,AB09}.
Afterwards, progress was made by Williams in \cite{Wil11}, where he proved ACC circuit lower bounds for NEXP and E$^{NP}$, by finding improved algorithms for the circuit class ACC\@. His work was based on ideas first presented in \cite{Williams10}, where he proved connections between circuit lower bounds and improved algorithms for Circuit-SAT\@.
There he also proved connections between solving the Circuit Acceptance Probability Problem (CAPP) and circuit lower bounds: if CAPP can be solved for all circuits of polynomial size, even non-deterministically, and even in time of order $2^{\delta n}poly(n)$ for some $\delta<1$, then NEXP$\nsubseteq$P/poly.
He also proved in \cite{Wil11, Will13} that for any circuit family ${\cal C}$ closed under composition of circuits, improved SAT algorithms imply $E^{NP} \nsubseteq {\cal C}$.
The CAPP problem was first defined and studied in relation to derandomization and circuit lower bounds in \cite{Bar02,For01, IKW02,KC99, KRC00}. In particular, in \cite{IKW02} it was shown that solving CAPP in subexponential nondeterministic time infinitely often implies NEXP$\nsubseteq$P/poly.
We solve CAPP in polynomial time, with high probability, for the family of all polynomial-size circuits for which the problem of (a) counting the number of satisfying assignments, or (b) counting the number of unsatisfying assignments, belongs to TotP (e.g.\ for CNF formulas). We believe that this result, together with suitable combinations of the proofs in the above references, can yield non-trivial lower bounds.
\section{Preliminaries}
We assume the reader is familiar with basic notions from computational complexity, like a non-deterministic Turing Machine, a boolean circuit, a CNF formula (formula in conjunctive normal form), a DNF (disjunctive normal form) formula, and the classes NP, P, RP, NEXP, EXP$^{NP}$, P/poly, $\#$P, FP. For definitions see e.g. \cite{AB09}. We also assume familiarity with some basics on Markov chains, e.g. the notion of mixing time, and the stationary distribution. See e.g. \cite{Peres}.
We also keep the following conventions regarding the kinds of error for a value $f$: absolute error $a$ means $f\pm a$; additive error $a$ means $\frac{f}{2^n}\pm a$; multiplicative error $a$ means $(1\pm a)f$.
\begin{definition}
$\#$P is the class of functions $f:\{0,1\}^*\rightarrow \mathbb{N}$ for which there exists a non deterministic polynomial time Turing machine (NPTM) $M_f$ s.t. the number of accepting paths of $M_f$ on input $x$ equals $f(x)$.
$\#$PE is the class of functions $f$ in $\#$P for which the decision version, i.e. the problem of deciding if $f(x)>0$, is in P.
TotP is the class of functions $f:\{0,1\}^*\rightarrow \mathbb{N}$ for which there exists a non deterministic polynomial time Turing machine (NPTM) $M_f$ s.t. the number of all computation paths of $M_f$ on input $x$ equals $f(x)+1$.
\end{definition}
Note that in the definition of TotP we take into account all computation paths, not only accepting paths as in $\#$P; $M_f$ does not need to return yes or no, it can return anything, or just halt.
\paragraph{Important Observation}
It is proved in \cite{KPSZ2001} that if for some function there exists an NPTM of the kind described in the above definition of TotP, then for the same function there exists another NPTM with the same properties, with the additional property that the number of non-deterministic choices at each (non-deterministic) step is exactly $2$. We will call such an NPTM 'binary'. Observe that in this case the computation tree has $f(x)$ internal nodes, or 'branchings', since it is binary. This fact is crucial for our proofs.
TotP is a subclass of $\#$P. For a relation/problem $A$ in NP we will call 'decision version' the problem of deciding if there exists an accepting computation of some NPTM deciding problem $A$, and we will call 'counting version' the problem of counting accepting computations. For problems/functions $f$ in $\#$P, or in TotP, we will call 'decision version' the problem of deciding if $f(x)\neq 0$.
It is proved in \cite{PZ06} that TotP is exactly the Karp-closure of self reducible problems in $\#$PE, under the following notion of self reducibility.
\begin{definition}
A function $f : \Sigma^*\rightarrow \mathbb{N}$ is called poly-time self-reducible if there exist polynomials
$r$ and $q$, and polynomial time computable functions $h : \Sigma^*\times\mathbb{N} \rightarrow \Sigma^*$,
$g :\Sigma^*\times\mathbb{N} \rightarrow \mathbb{N}$, and $t :\Sigma^*\rightarrow\mathbb{N}$ such that for all
$x\in\Sigma ^*$:\\
(a) $f(x) = t(x) +\sum_{i=0}^{r(|x|)} g(x,i)f(h(x,i))$, that is, $f$ can be processed recursively by reducing $x$ to $h(x,i)$ ($0 \le i\le r(|x|)$), \\
(b) the recursion terminates after at most polynomial depth (that is, $f\big(h(...h(h(x,i_1),i_2)...,i_{q(|x|)})\big)$ can be computed in polynomial time).\\
(c) $|h(...h(h(x,i_1),i_2)...,i_{q(|x|)})|\in\mathcal{O}\big(poly(|x|)\big)$.
\end{definition}
Intuitively a function $f$ is self reducible if $f(x)$ can be efficiently reduced to computing $f(x_i)$ for some other instances $x_i$, with the condition that if we continue the same procedure recursively, the resulting recursion tree (whose nodes are the respective instances) will be of polynomial height.
Note that we will refer to this recursion tree as the 'self reducibility tree'.
For example, circuit satisfiability problems are self reducible under this notion: the number of solutions (i.e.\ satisfying assignments) of a circuit $C$ equals the number of solutions of $C_1$, which is $C$ with its first input gate fixed to $1$, plus the number of solutions of $C_0$, which is $C$ with its first input gate fixed to $0$.
Of course circuit satisfiability is not in P (unless NP=P), so its counting version is not in TotP.
To understand the definitions better, we will give another example of a problem in TotP, show that it is self reducible, and give the corresponding NPTM (whose number of paths on input $x$ equals $f(x)+1$).
The problem is $\#$IS: given a graph $G$ on $n$ nodes, $f(G)$ is the number of independent sets of all sizes. Clearly $f$ is in TotP, as a single node is always an independent set, and the self-reducibility tree can be defined as follows. $f(G)$ equals the number of independent sets containing node $1$ plus the number of those not containing node $1$, so $f(G)$ is reduced to $f(G_0)+f(G_1)$, where $G_0$ is $G$ with node $1$ and its neighbourhood removed, and $G_1$ is $G$ with node $1$ removed. We do this recursively for all sub-instances that occur, so the height of the self-reducibility tree is $n$. The corresponding NPTM proceeds as follows. In each step $i$ it checks whether $f(G^i_0)$ and $f(G^i_1)$ are both non-zero for the corresponding sub-instances, and if so it branches (i.e.\ it proceeds non-deterministically); otherwise it proceeds deterministically to the sub-instance $G^i_b$ for which $f(G^i_b)>0$, if such exists, else it halts. Finally, in order to have in total $f(G)+1$ leaves (or, equivalently, computation paths), at the end of the whole computation it makes one more branching in the rightmost path (the one that has no ``left'' choice at any level).
Note that in this case the computation tree is exactly the same as the self-reducibility tree, with one more branching at the right end, and clearly the number of non-deterministic bits used by the NPTM is at most the height of the self-reducibility tree plus one. This is because $f(x)$ results from a simple addition of $f$ on two sub-instances. This is not always the case, as the definition of self-reducibility is more general; on the other hand, it is the case for many problems defined on graphs and circuits, like counting satisfying assignments of monotone circuits and of DNF formulas.
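For concreteness, the following Python sketch (our illustration only; the helper name \texttt{count\_is} and the adjacency-dictionary representation are ours, and the recursion counts the empty set as well) implements the self-reducibility recursion just described; the recursion tree it unfolds is exactly the self-reducibility tree of $\#$IS.
\begin{verbatim}
# A minimal sketch of the #IS self-reducibility recursion.
# adj: dict mapping each vertex to the set of its neighbours.
def count_is(adj):
    if not adj:                  # empty graph: only the empty set
        return 1
    v = next(iter(adj))
    # G1: independent sets avoiding v -> delete v
    g1 = {u: nb - {v} for u, nb in adj.items() if u != v}
    # G0: independent sets containing v -> delete v and its neighbourhood
    banned = adj[v] | {v}
    g0 = {u: nb - banned for u, nb in adj.items() if u not in banned}
    return count_is(g0) + count_is(g1)

# Example: a triangle has 4 independent sets: {}, {1}, {2}, {3}.
triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
assert count_is(triangle) == 4
\end{verbatim}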
\section{Approximability of TotP}
As we saw in the preliminaries section, the Karp closure of self-reducible problems in $\#$PE equals the class TotP. Since the number of all paths of a (not necessarily full) binary tree, minus one, equals the number of internal nodes of that tree, to compute a function in TotP it suffices to count the branchings of the computation tree of the corresponding NPTM.
For a problem $f$ in TotP, on input $x$, it is easy to check whether a state of some computation of the corresponding NPTM is a branching, as follows. We associate each internal state with the string of non-deterministic choices made to reach that state.
Given such a string, we simulate the NPTM $M$ with these non-deterministic choices, until $M$ either has to make another non-deterministic choice or halts. In the first case we consider the state a 'branching', in the second a 'leaf'.
Thus, the problem of counting branchings of such an NPTM in time polynomial in $|x|$ reduces to the problem of counting nodes of a subtree $S$ of the full binary tree $T$ of height $n$, containing the root of $T$ (if $S$ is not empty), in time polynomial in $n$, where $S$ is given implicitly by some oracle or poly-time predicate that tells us, for every node of $T$, whether it belongs to $S$.
\begin{lemma} For any $f\in$TotP, on input $x$, computing $f(x)$ in time $poly(|x|)$ is reduced to counting nodes of a subtree $S$ of the full binary tree $T$ of height $n=poly(|x|)$, containing the root of $T$ (if S is not empty), in time polynomial in $n$, where $S$ is given implicitly by some oracle or poly-time predicate, that tells us for every node of $T$ if it belongs to $S$.
\end{lemma}
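To fix notation, the following Python sketch (ours; the predicate interface \texttt{in\_S} is an assumption matching the lemma) makes the abstraction concrete: nodes of $T$ are addressed by their strings of non-deterministic choices, and a brute-force count queries the predicate on every node. The algorithm below replaces this exponential enumeration with sampling.
\begin{verbatim}
from itertools import product

# Brute-force baseline for the lemma's abstraction (exponential time).
# A node of the full binary tree T of height n is a 0/1 string of
# length <= n ('' is the root); in_S(node) is the poly-time predicate.
def count_subtree(in_S, n):
    total = 0
    for depth in range(n + 1):
        for bits in product('01', repeat=depth):
            if in_S(''.join(bits)):
                total += 1
    return total

# Example subtree: the leftmost path of T, which has n + 1 nodes.
n = 5
in_S = lambda node: set(node) <= {'0'}
assert count_subtree(in_S, n) == n + 1
\end{verbatim}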
We are going to give a probabilistic algorithm that given such a predicate for some subtree $S$, approximates the size of $S$ in time $poly(n)$. It is based on a rapidly mixing Markov chain on the nodes of $S$. We will first present the Markov chain and prove its mixing time and its stationary distribution. Then we will show how we can approximate the size of $S$, using the Markov chain for sampling from its stationary distribution.
\subsection{The Markov Chain}
We define a Markov chain, having as states the nodes of a subtree of the full binary tree.
\begin{definition}\label{the_chain} Let $S$ be a subtree of the full binary tree $T$ of height $n$, containing the root of $T$. We define the Markov chain $P$ over the nodes of $S$, with the following transition probabilities. \\$p(i,j)=1/2$ if $j$ is the parent of $i$, \\$p(i,j)=1/4$ if $j$ is a child of $i$, \\$p(i,j)=0$ for every other $j\neq i$, and \\$p(i,i)=1-\sum_{j\neq i}p(i,j)$.
\end{definition}
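A single transition of this chain is easy to implement given the membership predicate. The Python sketch below (ours) makes the self-loop rule explicit: any proposed move to a missing neighbour is rejected, which realizes $p(i,i)=1-\sum_{j\neq i}p(i,j)$.
\begin{verbatim}
import random

# One step of the chain of Definition (the_chain).  `node` is a 0/1
# string addressing a node of S (root = ''); in_S must reject strings
# longer than n, so the walk never leaves T.
def chain_step(node, in_S):
    r = random.random()
    if r < 0.5:                            # propose: move to the parent
        return node[:-1] if node else node # the root stays put
    child = node + ('0' if r < 0.75 else '1')  # each child w.p. 1/4
    return child if in_S(child) else node  # missing child: self-loop
\end{verbatim}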
\begin{proposition}\label{the_stationary}
The stationary distribution of the above Markov chain $P$ is as follows. If $d_i$ is the depth of node $i$, i.e. its distance from the root, and $n$ the height of the tree, $\forall i, \pi(i)= \alpha 2^{n-d_i}$, where $\alpha$ is a normalizing factor, so that $\sum_{i}\pi(i)=1.$
\end{proposition}
\begin{proof}
It is straightforward to verify the detailed balance condition $\pi(i)p(i,j)=\pi(j)p(j,i)$: if $j$ is a child of $i$, then $\pi(i)p(i,j)=\alpha 2^{n-d_i}\cdot\frac{1}{4}=\alpha 2^{n-d_i-1}\cdot\frac{1}{2}=\pi(j)p(j,i)$. Summing over $i$ then gives $\sum_{i}\pi(i)p(i,j)=\pi(j)$.
\end{proof}
Now we will prove that $P$ is rapidly mixing, i.e.\ its mixing time is polynomial in the height of the tree $S$. The intuition is the following. The simple random walk on a tree needs time polynomial in the size of the tree, which in the worst case of a full binary tree is exponential in the height of the tree. The reason is that it is difficult to go from a leaf to the root, since the probability of going downwards the levels of the tree is double the probability of going upwards. So we designed a walk such that, on the full binary tree, the probability of going upwards equals the probability of going downwards. It is then easy to see that the mixing time equals the time of convergence to the uniform distribution over the levels of the tree, thus polynomial in the height of the tree. (Of course what we lose is that the new walk, as we saw, does not converge to the uniform distribution over the nodes of $S$, as is the case for the simple random walk, and this is the reason we cannot get an FPRAS with this approach.)
It turns out that this Markov chain converges quickly even in the general case.
There are many ways to prove the mixing time formally, and we present one of them. We will use the following lemma from \cite{JS89}.
Let $\{X_t\}_{t\geq 0}$ be a Markov chain over a finite state space $\cal{X}$ with transition probabilities $p_{ij}$, $p_x^{(t)}$ be the distribution of $X_t$ when starting from state $x$, $\pi$ be the stationary distribution, $\tau_x(\epsilon)=\min\{t:||p_x^{(t)}-\pi ||\leq\epsilon\}$ be the mixing time when starting from state $x$. An ergodic Markov chain is called time reversible if $\forall i,j\in{\cal X}, p_{ij}\pi_i=p_{ji}\pi_j $. Let $H$ be the underlying graph of the chain for which we have an edge with weight $w_{ij}=p_{ij}\pi_i=p_{ji}\pi_j$ for each $i,j\in\cal{X}$. A Markov chain is called lazy if $\forall i\in{\cal X},p_{ii}\geq \frac{1}{2} .$ In \cite{JS89} the conductance of a time reversible Markov chain is defined, as follows $\Phi(H)=\min\frac{\sum_{i\in Y,j\notin Y}w_{ij}}{\sum_{i \in Y}\pi_{i}}$, where the minimum is taken over all $Y\subseteq {\cal X}$ s.t. $0<\sum_{i\in Y}\pi_i\leq\frac{1}{2}.$
\begin{lemma}
\cite{JS89} For any lazy, time reversible Markov chain \[\tau_{x}(\epsilon)\leq const \times \left[ \frac{1}{\Phi(H)^2}(\log\pi_x^{-1}+\log\epsilon^{-1}) \right].\]
\end{lemma}
\begin{proposition}
The mixing time of $P$, when starting from the root, is polynomial in the height of the tree $n$.
\end{proposition}
\begin{proof} First of all, we will consider the lazy version of the Markov chain, i.e. in every step, with probability $1/2$ we do nothing, and with probability $1/2$ we follow the rules as in definition \ref{the_chain}. The mixing time of $P$ is bounded by the mixing time of its lazy version. The stationary distribution is the same. The Markov chain is time reversible, and the underlying graph is a tree with edge weights $w_{uv}=\pi_u p_{uv}=2^i\alpha\times \frac{1}{8}=2^{i-3}\alpha,$ if we suppose that $u$ is the father of $v$ and $2^i\alpha$ is the probability $\pi_u$.
Now it suffices to show that $1/\Phi(H)$ is polynomial in $n$.
Let ${\cal X}$ be the set of nodes of $S$, i.e.\ the state space of the Markov chain $P$. We will consider all possible $Y\subseteq {\cal X}$ with $0< \pi(Y)\leq 1/2$, and bound the quantity $\frac{\sum_{i\in Y,j \notin Y}w_{ij}}{\sum_{i \in Y}\pi_{i}}.$
If $Y$ is connected and does not contain the root of $S$, then it is a subtree of $S$, with root, say, $u$, and $\pi_u=\alpha 2^k$ for some $k\in \mathbb{N}.$ We have
\[\sum_{i\in Y, j\notin Y} w_{ij}\geq w_{u,father(u)}=2^{k-2}\alpha.\]
Now let $Y'$ be the full binary tree with root $u$ and height the same as $Y$, i.e. $k$. We have
\[\sum_{i\in Y} \pi_i \leq \sum_{i\in Y'}\pi_i= \sum_{j=0}^{k}2^{k-j}\alpha\times 2^j=2^k(k+1)\alpha\leq 2^k(n+1)\alpha, \] which follows by summing over the levels of the tree $Y'$.
So it holds
\[\frac{\sum w_{ij}}{\sum \pi_i}\geq \frac{2^{k-2}\alpha}{2^k(n+1)\alpha}=\frac{1}{4(n+1)}\]
If $Y$ is the union of two subtrees of $S$, not containing the root of $S$, such that the root of the first is an ancestor of the second's root, then the same arguments hold, where now we take as $u$ the root of the first subtree.
If $Y$ is the union of $\lambda$ subtrees not containing the root of $S$, such that no subtree's root is an ancestor of any other's root, then we can prove the same bound as follows. Let $Y_1,...,Y_{\lambda}$ be the subtrees, and let $2^{k_1}\alpha,2^{k_2}\alpha,...,2^{k_{\lambda}}\alpha$ be the respective stationary probabilities of their roots. Then as before
\[\sum w_{ij}\geq 2^{k_1-2}\alpha+2^{k_2-2}\alpha+...+2^{k_{\lambda}-2}\alpha\] and
\[\sum_{i\in Y} \pi_{i}=\sum_{j=1...\lambda}\sum_{i\in Y_j} \pi_i\leq \sum_{j=1...\lambda}2^{k_j}(n+1)\alpha,\] thus
\[\frac{\sum w_{ij}}{\sum \pi_i}\geq \frac{\alpha\sum_{j=1...\lambda} 2^{k_j-2}}{(n+1)\alpha \sum_{j=1...\lambda} 2^{k_j}}=\frac{1}{4(n+1)}.\]
If $Y$ is a subtree of $S$ containing the root of $S$, then the complement of $Y$, i.e. $S\setminus Y$ is the union of $\lambda$ subtrees of the previous form. So if we let $Y_i,k_i$ be as before, then
\[\sum w_{ij}=\alpha\sum_{j=1...\lambda} 2^{k_j-2}\] and since by hypothesis $\pi(Y)\leq 1/2$, we have
\[\sum_{i\in Y}\pi_i\leq\sum_{i\in S\setminus Y} \pi_i\leq (n+1)\alpha\sum_{j=1...\lambda} 2^{k_j}\]
thus the same bound holds again.
Finally, similar arguments imply the same bound when $Y$ is an arbitrary subset of $S$ i.e. an arbitrary union of subtrees of $S$.
In total we have
$1/\Phi(H)\leq 4(n+1).$
\end{proof}
Note that this result implies a mixing time roughly quadratic in the height of the tree (up to logarithmic factors), which agrees with the intuition for the full binary tree that it should be comparable to the mixing time of a simple random walk over the levels of the tree, i.e.\ over a chain of length $n$.
Before going on with the approximation algorithm, we will prove two properties of this Markov chain, useful for the proofs that will follow.
\begin{lemma}
Let $R$ be a binary tree of height $n$, and let $\alpha_R$ be the normalizing factor of the stationary distribution $\pi_{R}$ of the above Markov chain. It holds $\alpha_R^{-1}\leq (n+1)2^n,$ and $\pi_R(root)\geq \frac{1}{n+1}$
\label{propertiesOfP}
\end{lemma}
\begin{proof}
Let $r_i$ be the number of nodes in depth $i$.
\[1=\sum_{u\in R}\pi_R(u)=\sum_{i=0}^n\sum_{u\in\,\text{level}\,i}\pi_R(u)=\sum_{i=0}^n r_i\alpha_R\cdot 2^{n-i}
\Rightarrow \frac{1}{\alpha_R}=\sum_{i=0}^n r_i\cdot 2^{n-i},\] which is maximized when the $r_i$'s are maximized, i.e.\ when the tree is the full binary tree, in which case $r_i=2^i$ and $\alpha_R^{-1}=(n+1)2^n.$ This also implies that for the root of $R$ it holds that $\pi_R(root)=\alpha_R\cdot 2^n\geq \frac{1}{n+1}.$
\end{proof}
\subsection{The approximation algorithm}
Let $S,T$ be as before.
We will prove that we can approximate the number of nodes of $S$ using the previous Markov chain. The key idea is that much of the information we need is in the normalizing factor $\alpha$: although the stationary distribution is far from uniform over the nodes of $S$, $\alpha$ is fully determined by the probability of the root, which we proved to be $2^n\alpha$, where $n$ is the height of $S$.
Let $\pi_S$ denote the probability distribution over the nodes of $S$, as defined in proposition \ref{the_stationary}, and let $\alpha_S$ denote the associated normalizing factor.
First we show how to compute the number of nodes of $S$ exactly, if we could somehow (e.g.\ with an oracle or an algorithm) know the normalizing factor $\alpha_R$ for any subtree $R$ of $T$ containing $T$'s root.
Then we give an approximation algorithm that relies on approximating all these factors: we sample from the stationary distribution of the Markov chain described before, estimate the probability of the root, and from that obtain the corresponding $\alpha_{S_i}$.
Finally we give the total error of our algorithm.
\begin{proposition} \label{sizeOfS}
Let $S$ be a binary tree of height $n$, and $\forall i=0...n,$ let $S_i$ be the subtree of $S$ that contains all nodes up to depth $i$, and let $\alpha_{S_i}$ be the factors defined as above. Then
\[|S|=\frac{1}{\alpha_{S_n}}-\sum_{k=0}^{n-1}\frac{1}{\alpha_{S_k}}\]
\end{proposition}
\begin{proof}
For $i=0,...,n$ let $r_i$ be the number of nodes at depth $i$. So
$|S|=r_0+...+r_n.$
Obviously if $S$ is not empty,
\begin{equation}
r_0=1=\frac{1}{\alpha_{S_0}}.\label{r0}
\end{equation}
We will prove that $\forall k=1...n$
\begin{equation}\label{rk} r_k=\frac{1}{\alpha_{S_k}}-2\frac{1}{\alpha_{S_{k-1}}},
\end{equation}
so then $|S|=\frac{1}{\alpha_{S_0}}+\sum_{k=1}^n(\frac{1}{\alpha_{S_k}}-2\frac{1}{\alpha_{S_{k-1}}})=\frac{1}{\alpha_{S_n}}-\sum_{k=0}^{n-1}\frac{1}{\alpha_{S_k}}.$
We will prove claim (\ref{rk}) by induction.
For $k=1$ we have
\[\sum_{u\in S_{1}}\pi_{S_{1}}(u)=1\Rightarrow
\alpha_{S_1}\cdot r_1+2 \alpha_{S_1}\cdot r_0=1\Rightarrow
r_1=\frac{1}{\alpha_{S_1}}-2 r_0=\frac{1}{\alpha_{S_1}}-2\frac{1}{\alpha_{S_0}}.\]
Suppose claim (\ref{rk}) holds for $k<i\leq n.$ We will prove it holds for $k=i.$
\[\sum_{u\in S_{i}}\pi_{S_{i}}(u)=1\Rightarrow
\sum_{k=0}^i 2^{i-k} \alpha_{S_i}\cdot r_k =1 \Rightarrow
r_i=\frac{1}{\alpha_{S_i}}-\sum_{k=0}^{i-1}2^{i-k} r_k\]
and substituting $r_k$ for $k=0,...,i-1$ by (\ref{r0}) and (\ref{rk}), we get
$r_i=\frac{1}{\alpha_{S_i}}-2\frac{1}{\alpha_{S_{i-1}}}.$
\end{proof}
\begin{corollary}
If we have an oracle, or a poly($n$) predicate that for any subtree $R$ gives the factor $\alpha_R$ defined as above, then we can compute exactly the number of nodes of any tree $S$ of height $n$ in poly($n$) time.
\end{corollary}
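As a sanity check, the identity of proposition \ref{sizeOfS} and the normalization $\alpha_{S_i}^{-1}=\sum_{j\leq i} r_j 2^{i-j}$ used in the proofs above can be put side by side in a few lines of Python (ours; exact arithmetic, no sampling yet):
\begin{verbatim}
# |S| = 1/alpha_{S_n} - sum_{k<n} 1/alpha_{S_k}   (prop. sizeOfS)
def size_from_alphas(alphas):
    inv = [1.0 / a for a in alphas]
    return inv[-1] - sum(inv[:-1])

# alpha_{S_i}^{-1} = sum_{j<=i} r_j * 2^(i-j), r_j = nodes at depth j.
def exact_alphas(level_counts):
    return [1.0 / sum(r * 2 ** (i - j)
                      for j, r in enumerate(level_counts[:i + 1]))
            for i in range(len(level_counts))]

# Example: the full binary tree of height 2 has 1 + 2 + 4 = 7 nodes.
assert round(size_from_alphas(exact_alphas([1, 2, 4]))) == 7
\end{verbatim}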
We can estimate $\alpha_R$ for any tree $R$ of height $n$, within $(1\pm\zeta)$ for any $\zeta>0$, with high probability and in polynomial time, using the Markov chain over the nodes of $R$ given in Definition \ref{the_chain}.
\begin{proposition}\label{estimOfaR}
For any binary tree $R$ of height $n$ we can estimate $\alpha_R$, within $(1\pm\zeta)$ for any $\zeta>0$, with probability $1-\delta$ for any $\delta>0$, in time $poly(n,\zeta^{-1},\log\delta^{-1})$.
\end{proposition}
\begin{proof}
Let $R$ be a binary tree of height $n$. We can estimate $\alpha_R$ as follows.
As we saw, $\pi_R(root)=2^n \alpha_R$, and we observe that this is always $\geq\frac{1}{n+1}$ (with equality when $R$ is the full binary tree). So we can estimate $\pi_R(root)$ within $(1\pm\zeta)$ for any $\zeta>0$, by sampling $m$ nodes of $R$ according to $\pi_R$ and taking as estimate the fraction $\hat{p}=\frac{1}{m}\sum_{i=1}^m X_{i}$, where $X_i=1$ if the $i$-th sampled node was the root, else $X_i=0.$
It is known by standard variance analysis arguments that $m=O(\pi_R(root)^{-1}\cdot \zeta^{-2})=poly(n,\zeta^{-1})$ samples suffice to get \[\Pr [(1-\zeta)\pi_R(root)\leq \hat{p}\leq (1+\zeta)\pi_R(root)]\geq\frac{3}{4}.\]
We can boost up this probability to $1-\delta$ for any $\delta>0$, by repeating the above sampling procedure $t=O(\log\delta^{-1})$ times, and taking as final estimate the median of the $t$ estimates computed each time.
(Proofs for the above arguments are elementary in courses on probabilistic algorithms or statistics, see e.g. in \cite{Snotes} the unbiased estimator theorem and the median trick, for detailed proofs.)
The random sampling according to $\pi_R$ can be performed by running the Markov chain defined earlier on the nodes of $R$. Observe that the deviation $\epsilon$ from the stationary distribution can be made negligible and absorbed into $\zeta$, with only a polynomial increase in the running time of the Markov chain.
Finally, the estimate for $\alpha_R$ is $\hat{\alpha_R}=2^{-n}\hat{p}$, and it holds
\[\Pr[(1-\zeta)\alpha_R\leq \hat{\alpha_R}\leq (1+\zeta)\alpha_R]\geq 1-\delta.\]
\end{proof}
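The following Python sketch (ours) spells out this estimator, reusing \texttt{chain\_step} from above; the constants 16 and 8 and the \texttt{mix\_steps} parameter are illustrative placeholders for the $poly(n)$ bounds of the proof, not tuned values.
\begin{verbatim}
import math
from statistics import median

# Estimate alpha_R = pi_R(root) / 2**n by sampling (prop. estimOfaR),
# assuming chain_step from the sketch above.
def estimate_alpha(in_S, n, zeta, delta, mix_steps):
    m = max(1, round(16 * (n + 1) / zeta ** 2))  # O(1/(pi(root) zeta^2))
    t = max(1, round(8 * math.log(1 / delta)))   # median-trick repeats
    estimates = []
    for _ in range(t):
        hits = 0
        for _ in range(m):
            node = ''                            # start at the root
            for _ in range(mix_steps):           # approach stationarity
                node = chain_step(node, in_S)
            hits += (node == '')
        estimates.append(hits / m)               # estimate of pi_R(root)
    return median(estimates) / 2 ** n
\end{verbatim}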
The final algorithm for estimating $|S|$ is as follows: we estimate $\alpha_{S_i}$ for every truncated subtree $S_i$ of $S$, and we get an estimate of the size of $S$ using proposition \ref{sizeOfS}.
\begin{proposition}\label{main}
For all $\xi>0,\delta>0$ we can get an estimate $|\hat{S}|$ of $|S|$ in time $poly(n,\xi^{-1},\log\delta^{-1})$ s.t. \[\Pr[|S|-\xi 2^n\leq |\hat{S}|\leq |S|+\xi 2^n]\geq1-\delta\]
\end{proposition}
\begin{proof}
Let $\zeta=\frac{\xi}{2(n+1)}$ and $\epsilon=\frac{\zeta}{1+\zeta}$, thus
$poly(\epsilon^{-1})=poly(\zeta^{-1})=poly(n,\xi^{-1})$.
So according to proposition \ref{estimOfaR}, in time $poly(n,\xi^{-1},\log\delta^{-1})$ we have estimates, for all $i=1,...,n$,
\begin{equation}\label{est-aSi} (1-\epsilon)\alpha_{S_i} \leq\hat{\alpha}_{S_i} \leq (1+\epsilon) \alpha_{S_i}.
\end{equation}
We will use proposition \ref{sizeOfS}. Let $A=\frac{1}{\alpha_{S_n}}$ and $B=\sum_{k=0}^{n-1}\frac{1}{\alpha_{S_k}},$ so $|S|=A-B$, and clearly $B\leq A.$
From (\ref{est-aSi}) we have
$\frac{1}{1+\epsilon}A\leq \hat{A} \leq \frac{1}{1-\epsilon}A \Leftrightarrow$
$(1-\zeta)A\leq \hat{A} \leq (1+\zeta) A$ and similarly
$(1-\zeta)B\leq \hat{B} \leq (1+\zeta) B.$
Thus
$(1-\zeta)A-(1+\zeta)B\leq \hat{A}-\hat{B} \leq (1+\zeta)A-(1-\zeta)B \Leftrightarrow$
$A-B-\zeta (A+B) \leq \hat{A}-\hat{B} \leq A-B+\zeta (A+B),$ and since $A\geq B$, we have
$|S|-2\zeta A\leq |\hat{S}| \leq |S|+2 \zeta A.$ And since from lemma \ref{propertiesOfP} the maximum $A$ is $2^n(n+1)$, we have
$|S|-2\zeta(n+1)2^n \leq |\hat{S}| \leq|S|+ 2\zeta (n+1) 2^n \Leftrightarrow$
$|S|-\xi \cdot 2^n \leq |\hat{S}|\leq |S|+ \xi \cdot 2^n.$
\end{proof}
\begin{corollary}\label{pr-estimation}
Let $p=\frac{|S|}{2^n}.$ For all $\xi>0,\delta>0$ we can get an estimation $\hat{p}$ in time $poly(n,\xi^{-1},\log\delta^{-1})$ s.t.
\[\Pr[p-\xi\leq\hat{p}\leq p+\xi]\geq1-\delta\]
\end{corollary}
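Putting the pieces together, a complete (hypothetical) implementation of the additive approximation is only a few lines on top of the helpers sketched above; the only new ingredients are restricting the predicate to depth $\leq i$ to obtain the truncated trees $S_i$, and splitting the failure probability over the $n+1$ estimates.
\begin{verbatim}
# Additive estimate |S| +- xi * 2**n with probability >= 1 - delta,
# combining estimate_alpha and size_from_alphas from above.
def estimate_size(in_S, n, xi, delta, mix_steps):
    zeta = xi / (2 * (n + 1))
    alphas = []
    for i in range(n + 1):
        in_S_i = lambda node, i=i: len(node) <= i and in_S(node)
        alphas.append(estimate_alpha(in_S_i, i, zeta,
                                     delta / (n + 1), mix_steps))
    return size_from_alphas(alphas)
\end{verbatim}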
So, since every problem in TotP reduces to the above problem of counting the nodes of a tree, as we already discussed, we have proved the following theorem.
\begin{theorem}\label{main-theorem}
For any problem $f\in$ TotP, with $M_f$ being its corresponding NPTM (whose total number of computation paths on input $x$ is $f(x)+1$), and with $n'$ being the number of non-deterministic bits used by $M_f$ on input $x$: $\forall \xi>0$, $\forall x\in\{0,1\}^n$, we can have with high probability, in time $O(\xi^{-2}\cdot poly(n))$, an estimate $\hat{f}(x)=f(x)\pm \xi\cdot 2^{n'}.$
Also, corollary \ref{pr-estimation} holds for $p=f(x)/2^{n'},$ i.e.\ we can have $\hat{p}=p\pm \xi$, $\forall \xi>0.$
\end{theorem}
The above theorem holds with $n$ in place of $n'$ if $n'=n+constant$, as is the case for many problems like counting non-cliques of a graph, counting independent sets of all sizes of a graph, counting non-independent sets of size $k$, counting satisfying assignments of DNF formulas, counting satisfying assignments of monotone circuits, etc.
\section{Implications for exponential time complexity}
In what follows, let $f$ be a function in TotP, and let $M$ be the corresponding NPTM for which, for all $x$, ($\#$branchings of $M(x)$) $=f(x).$ Let also $n$ be the size of the input, or some complexity parameter that we care about (e.g.\ the number of variables in a boolean formula or circuit), and $n'$ be the number of non-deterministic bits, that is, the height of the computation tree of $M(x)$ (where the internal nodes are the branchings, i.e.\ the positions where $M$ makes a non-deterministic choice). Of course $n'$ is polynomial in $n$. Note that $n'$ here is denoted $n$ in proposition \ref{main}, as it is the height of the tree. For the results to have some meaning, we consider functions $s:\mathbb{N} \rightarrow \mathbb{N}$ that are positive, as small as we want, but at most $O(2^n)$.
We give corollaries of the main result.
\begin{corollary}\label{general-corollary}
For all $f\in TotP$, $\forall s:\mathbb{N} \rightarrow \mathbb{N}$, $\forall x\in \{0,1\}^*$, $\forall \delta \in (0,1)$, with probability $1-\delta$, in time $\frac{2^{n'}}{s(n')}poly(n,\log \delta^{-1})$, where $n'$ is as before, we can achieve an estimate $\hat{f}(x)=f(x) \pm 2^{n'/2}s(n')^{1/2}.$ For any $\beta \in (0,1)$, in time $2^{(1-\beta)n'}poly(n,\log \delta^{-1})$, we can achieve $\hat{f}(x)=f(x)\pm 2^{n'(1+\beta)/2}.$
\end{corollary}
\begin{proof}
From the proof of proposition \ref{main}, and in particular from the variance analysis arguments in proposition \ref{estimOfaR}, we can see that the actual dependence of the running time on $\xi$ is proportional to $\xi^{-2}$. So we get the first estimate by setting $\xi=\sqrt{\frac{s(n')}{2^{n'}}}$, and the second by setting $s(n')=2^{\beta n'}.$
\end{proof}
For the subsequent corollaries, we will need the following useful fact.
\begin{theorem}\label{exact-deterministic}
For all $f\in TotP$, $x$, $s$ as before, we can decide deterministically in time $O(s(n) \cdot poly(n))$ whether $f(x)\leq s(n).$
\end{theorem}
\begin{proof}
We perform a BFS or DFS on the computation tree of $M(x)$ (i.e.\ we perform exhaustive search by trying all non-deterministic choices) until we encounter at most $s(n)+1$ branchings. If the tree is exhausted before that, then obviously $f(x)\leq s(n)$; otherwise $f(x)>s(n).$
\end{proof}
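In the tree abstraction, this exhaustive search reads as follows (Python sketch, ours); it touches at most $s+1$ nodes of $S$ plus their immediate absent children, hence the $O(s(n)\cdot poly(n))$ bound.
\begin{verbatim}
# Decide deterministically whether |S| <= s: DFS the tree, aborting
# as soon as s + 1 nodes of S have been seen.
def at_most(in_S, n, s):
    seen, stack = 0, ['']        # '' addresses the root
    while stack:
        node = stack.pop()
        if not in_S(node):
            continue
        seen += 1
        if seen > s:
            return False         # S not exhausted within the budget
        if len(node) < n:
            stack += [node + '0', node + '1']
    return True
\end{verbatim}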
The next corollary shows that we can have a RAS (randomized approximation scheme) for every problem in TotP in time strictly smaller than that of exhaustive search. Note that we cannot have that in polynomial time, unless NP=RP.
\begin{corollary}
For all $f$, $x$, $s$, $\delta$, $n$, $n'$ as before, and for all $k\in \mathbb{R}$, with probability $1-\delta$ and in time $poly(k,n,\log\delta^{-1})(\frac{2^{n'}}{s(n')}+2^{n'/2}s(n')^{1/2}),$ we can achieve approximation $\hat{f}(x)=f(x)(1\pm \frac{1}{k}).$
For every $\beta\in(0,1)$, we can have a RAS in time $poly(k,n,\log \delta^{-1})(2^{(1- \beta)n'}+2^{(1+\beta)n'/2})$.
We can also have uniform sampling in the same amount of time.
\label{ras}\end{corollary}
\begin{proof}
First we check deterministically whether $f(x)\leq k\, 2^{n'/2}s(n')^{1/2}$, in which case we get the exact value of $f(x)$. Otherwise, $f(x) > k\, 2^{n'/2}s(n')^{1/2}$, i.e.\ $2^{n'/2}s(n')^{1/2}< \frac{1}{k}f(x)$, and we apply the initial algorithm to get $\hat{f}=f(x)\pm 2^{n'/2}s(n')^{1/2}$, which lies within $f(x)(1\pm \frac{1}{k}).$
The running time is a result from theorem \ref{exact-deterministic} and corollary \ref{general-corollary}.
We can also have uniform sampling, since it is proved in \cite{JS89} that a randomized approximation scheme can be used for uniform sampling with a polynomial overhead in the running time.
\end{proof}
Note that in many cases, like problems on graphs, formulas, circuits etc., $n'$ equals $n+constant$. One example is the problem $\#$IS, as we discussed in detail in the preliminaries section.
Similar simple arguments hold for other problems too, so for these problems, since $n'=n+constant$, all the above corollaries hold with $n'$ substituted with $n$.
\begin{corollary}
For problems in TotP for which $n'=n+constant$, like $\#IS$, and $\#SAT$ for DNF formulas, monotone circuits etc., all the above corollaries hold with $n'$ substituted with $n$.
\end{corollary}
We can explore whether we can extend corollary \ref{ras} to problems in $\#$P. One possible way is to find a (possibly exponential-time) approximation-preserving reduction from a problem in $\#$P to a problem in TotP s.t.\ the number of non-deterministic bits needed for the first does not increase too much with the reduction.
Precisely, if $f$ is in $\#$P with $M_f$ being its corresponding NPTM (whose number of accepting computation paths on input $x$ is $f(x)$), using $n$ non-deterministic bits, and $g$ is in TotP with $M_g$ its corresponding NPTM (whose total number of computation paths on input $x$ equals $g(x)+1$), using $n'$ non-deterministic bits, then we have the following.
\begin{corollary}If there exists an approximation preserving reduction from a problem $f\in\#P$ to a problem $g\in TotP$, s.t.\ $n'< (3-\gamma)n/2$ for some $\gamma\in(0,1)$, then for all $x\in\{0,1\}^n$, $\delta\in (0,1)$, $k\in \mathbb{R}$, with probability $1-\delta$ and in time $t = poly(k,|x|,\log\delta^{-1})(2^{(1-\gamma) n}+2^{(1+\gamma)n/2}),$ we can achieve approximation $\hat{f}(x)=f(x)(1\pm \frac{1}{k}).$ The reduction suffices to be of time $O(t)$, and not necessarily polynomial.
\end{corollary}
\begin{proof}
Apply corollary \ref{ras} on $g$ with $\beta\geq 3-\frac{n}{n'}(3-\gamma).$
\end{proof}
Note that we took $n, n'$ to be the numbers of non-deterministic bits, and not the sizes of the inputs, because we want to compare with the running time of the brute-force solutions.
\section{Towards circuit lower bounds}
There are two problems related to our results that are also related to derandomization and circuit lower bounds. The first one is the Circuit Acceptance Probability Problem (CAPP), in which, given a boolean circuit with $n$ input gates and size $n^c$ for some $c$, one is asked to approximate the probability $p=\Pr_x[C(x)=1]$ within some $\epsilon > 0$, that is, to find a $\hat{p}=p\pm \epsilon$. (In fact $\epsilon=1/6$ suffices for the results that follow.) The second is the problem where, given a circuit that has either $0$ or $>2^{n-1}$ satisfying assignments, one is asked whether it is satisfiable. We will call it GapCSAT (gap circuit satisfiability).
Their relationship with circuit lower bounds was proved in \cite{IKW02, Williams10}. The CAPP and its relation to derandomization is studied in \cite{Bar02,For01, IKW02,KC99, KRC00}.
\begin{theorem}
\cite{Williams10} Suppose there is a superpolynomial $s(n)$ s.t.\ for all $c$ there is an $O(2^n \cdot poly(n^c))/s(n)$ nondeterministic algorithm for CAPP on $n$ variables and $n^c$ gates. Then NEXP $\nsubseteq$ P/poly. The proof holds even if we replace CAPP with GapCSAT.
\end{theorem}
Since a randomized algorithm can be considered as a nondeterministic algorithm, our algorithm yields a solution to CAPP for subclasses of polynomial-size circuits for which the counting version is in TotP, e.g.\ monotone circuits, DNF formulas, and tree-monotone circuits. (The latter are circuits monotone w.r.t.\ a partial order whose graph is a tree, and their counting version is the basic TotP-complete problem under parsimonious reductions, as shown in \cite{BCPPZ}.) The same holds for circuits that can be reduced to circuits in TotP under additive-approximation preserving reductions, like CNF formulas.
\begin{corollary}
CAPP can be solved with high probability (and thus non-deterministically) in time $poly(n,\epsilon^{-1})$, $\forall \epsilon >0$, for circuits with $n$ input gates whose counting version is in TotP and for which the height of the corresponding self-reducibility tree is $n+constant$.
\end{corollary}
\begin{proof}
This is a consequence of corollary \ref{pr-estimation}, where $n$ essentially denotes the height of the self-reducibility tree, as we already discussed in the previous subsection.
\end{proof}
\begin{corollary}
CAPP can be solved with high probability in $poly(n,\epsilon^{-1})$, $\forall \epsilon >0$, for DNF formulas, monotone circuits, tree-monotone circuits, and CNF formulas of $poly(n)$ size and $n$ input gates.
\end{corollary}
\begin{proof}
The problem of counting satisfying assignments of DNF formulas, monotone circuits, and tree-monotone circuits belongs to TotP.
To see that the corresponding self-reducibility tree is of height $n+constant$, observe that the number of satisfying assignments of a DNF formula equals the sum of the satisfying assignments of the two DNF subformulas that result when we set the first variable to 0 and 1, respectively.
The same holds for monotone circuits. For tree-monotone circuits the proof is more complicated, see \cite{BCPPZ}.
To see that the result holds for CNF formulas too, observe that if $\phi$ is a CNF formula, then its negation $\bar{\phi}$ can easily be transformed into a DNF formula $\psi$ with the same number of variables, using De Morgan's laws.
So if $p=\Pr_x[\phi(x)=1]$ and $q=\Pr_x[\psi(x)=1],$ then $p=1-q$,
and if $\hat{q}=q\pm \epsilon$ then $\hat{p}=1-\hat{q}=p\pm \epsilon.$
\end{proof}
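The variable-preserving reduction used in this proof is just De Morgan's law; the two-line Python sketch below (ours, with a DIMACS-style list-of-clauses encoding as an assumed representation) makes it concrete.
\begin{verbatim}
# De Morgan: the negation of a CNF is a DNF over the same variables.
# A formula is a list of clauses/terms; a literal is a signed int,
# e.g. -3 stands for "not x3" (DIMACS-style).
def negate_cnf(cnf):
    return [[-lit for lit in clause] for clause in cnf]

# phi = (x1 or x2) and (not x1 or x3); its negation is the DNF
# (not x1 and not x2) or (x1 and not x3), so Pr[phi=1] = 1 - Pr[psi=1].
psi = negate_cnf([[1, 2], [-1, 3]])   # [[-1, -2], [1, -3]]
\end{verbatim}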
As for the GapCSAT problem: for circuits whose counting version is in TotP, it is solved in P by definition (of TotP). By our algorithm, it is also solved in randomized polynomial time for circuits for which the problem of counting non-satisfying assignments is in TotP, like CNFs; in particular, it can be solved for any gap $\rho$ (i.e.\ when the number of solutions is either $0$ or $>\rho 2^n$).
\begin{corollary}
The GapCSAT problem, for any gap $\rho$, is in randomized polynomial time for circuits s.t.\ (a) counting the number of solutions is in TotP, or (b) counting the number of non-solutions is in TotP (e.g.\ DNF, CNF).
\end{corollary}
\begin{proof}
If the number of solutions is either $0$ or $>\rho 2^n,$ then the number of non-solutions is either $2^n$ or $<(1-\rho) 2^n$, so it suffices to apply our algorithm (theorem \ref{main-theorem}) with $\xi=\rho/2.$
\end{proof}
These results, combined with the proofs in the references given above, could yield lower bounds for circuit classes such as the above. Also, an additive-approximation reduction, even non-deterministic, and even in subexponential time, from Circuit-SAT to a problem in TotP would give lower bounds for P/poly.
\subsection*{Acknowledgements}I want to thank Manolis Zampetakis for mentioning to me the relationship between these results and circuit lower bounds.
\section{Introduction}
\label{sec:introduction}
The study of photon scattering plays an important role in quantum
optics. Various significant phenomena in quantum optics are related to
scattering, such as electromagnetically induced transparency (EIT) and
resonance fluorescence~\cite{r1,master}. Photon scattering in models
of single or multiple atoms coupled to an optical
cavity~\cite{r6,xray1,xray2,xray2t} or a
waveguide~\cite{r2,r3,r7,1,r8,Fratini14,Dai15} has long been an
active research area, both in theory and in experiment. In recent years, a
theoretical model based on the coupled cavity array (CCA) was proposed
and quickly attracted a great deal of attention~\cite{r9}. The CCA is a perfect
platform for scattering studies; important achievements have been made
both on the one-dimensional~\cite{r9,r10,r11,r12} and the
two-dimensional~\cite{peng1} CCA platform.
In nature, the spontaneous emission of the atoms is inevitable due
to their coupling with the surrounding electromagnetic
environment~\cite{r4,r5}, and atomic decay has long been included
in scattering research through the master equation~\cite{1,2}. However,
apart from intuitive results such as the reduction and broadening of the
reflection (transmission) peaks, no qualitative difference has been
found in past research on atoms coupled to a waveguide or a
CCA\@.
In this paper, we revisit the photon scattering problem for two
two-level atoms coupled to a super cavity~\cite{3}, but include
atomic decay. The super cavity (SC) was first presented in our former work
to help study photon scattering for a two-level atom coupled to
a multi-mode cavity on the CCA platform~\cite{zhou1}. In the two-atom
case, whether or not atomic decay is included leads to a fundamental
difference in the scattering results. Without atomic decay, the
scattering results for the two atoms are very similar to the one-atom
case, and the configuration of the atoms has no relevant effect~\cite{3}. When
atomic decay is included, a dip appears in the reflection around the
resonant energy ($\Delta=0$), but only for atoms arranged in the
node-antinode configuration corresponding to the resonant mode. A very
similar phenomenon has been observed in Ref.~\cite{xray2}, where a
complicated theory attributing it to EIT is given. In this paper,
we reveal that the physical mechanism behind this phenomenon is much
simpler and has nothing to do with EIT\@.
The rest of the paper is organized as follows. In Sec.~\ref{sec:2}, we
introduce our model and briefly give the results for the case
without atomic decay. In Sec.~\ref{sec:3}, we use the master equation
to handle the photon scattering problem with atomic decay, and we reveal
the physical mechanism behind the configuration-dependent phenomenon. In
Sec.~\ref{sec:4}, the single-mode approximation is introduced to support
the analysis given above. Finally, a brief conclusion is given.
\section{Model: two two-level atoms in a super-cavity \label{sec:2}}
The system we consider consists of three parts, see Fig.~\ref{fig:1}.
The central part contains an SC with two two-level atoms
embedded in it, where the SC is formed by a 1D single-mode
cavity array with $N$ cavities, and the two atoms interact with two
cavities of the SC, respectively. The second (third) part is
the left (right) photon channel, formed by a semi-infinite 1D cavity
array connected to our central part from its left (right) side.
We will study the single-photon scattering problem in this system. One
photon with wave vector $k$ from the left photon channel is scattered
by the SC system. Our aim is to figure out how the
reflection and transmission coefficients depend on the positions of the
two atoms in the SC\@. More precisely, we will study the cases
where one atom is at a node of the resonant mode of the SC
while the other is at an antinode. In particular, we are interested
in whether the position order of the two atoms, i.e., the
node-antinode configuration versus the antinode-node configuration for the
resonant mode, is relevant.
\begin{figure}[htbp]
\includegraphics[width=8cm]{fig0}
\caption{(Color online). Schematic setup of the single-photon
scattering problem for the 1D CCA model. One photon (filled red
circle) with the wave vector $k$ injects from the left side of the
SC composed of $N$ cavities. The SC is formed by a
relatively small coupling strength $\eta$ with the outside
cavities. Two two-level atoms (filled blue circle) are in the
$n_{1}$-th and $n_{2}$-th cavities of the SC respectively.\@ Here
we take $N=7$, $n_{1}=3$ and $n_{2}=5$.\label{fig:1}}
\end{figure}
Under the rotating wave approximation, the Hamiltonian of our system
is given by
\begin{equation}
H = H_S + H_L + H_R + H_{SL} + H_{SR},
\end{equation}
where
\begin{align}
H_{S} & = \sum^{N}_{j=1} \omega_{c}a^{\dag}_{j}a_{j}
- \sum^{N}_{j=2} \xi (a^{\dag}_{j-1} a_{j} +
a^{\dag}_{j} a_{j-1}) \nonumber\\
& \quad {} + \sum_{i=1}^2 \left[\omega_{a} |e\rangle_{i}
\langle e| + \Omega (a^{\dag}_{n_i} \sigma^{-}_{i} +
\mathrm{H.c.})\right], \label{h0} \\
H_{L} & = \sum^{0}_{j=-\infty} \left[ \omega_{c}a^{\dag}_{j}a_{j}
- \xi (a^{\dag}_{j-1}a_{j} + a^{\dag}_{j}a_{j-1}) \right], \\
H_{R} & = \sum^{\infty}_{j=N+1} \left[ \omega_{c}a^{\dag}_{j}a_{j}
- \xi(a^{\dag}_{j}a_{j+1}+a^{\dag}_{j+1}a_{j}) \right], \\
H_{SL} & = -\eta(a^{\dag}_{0}a_{1} + \mathrm{H.c.}),\\
H_{SR} & = -\eta(a^{\dag}_{N}a_{N+1}+ \mathrm{H.c.}).\label{hi}
\end{align}
Here $H_{S}$ is the free Hamiltonian for the central part, and $H_{L}$
($H_{R}$) is the Hamiltonian for the left (right) photon channel,
$a_{j}$ ($a_{j}^{\dagger}$) is the annihilation (creation) operator of
the photon in the $j$-th cavity, $|e_{i}\rangle$ ($i=1,2$) is the
excited state of the atom $i$, $n_{i}$ is the label of the cavity that
the atom $i$ interacts with, $\Omega$ the coupling strength between
each atom and its cavity, $\omega_c$ is the mode frequency for each
cavity, $\omega_a$ is the eigen-energy of the atomic excited state,
$\xi$ is the hopping strength between nearest neighbor cavities within
the three parts, and $\eta$ is the hopping strength between nearest
neighbor cavities between the SC system and the left or right photon
channel.
Without atomic decay, since the excitation number is conserved in
this model, the scattering state can be expanded as
\begin{equation}
|\psi_{k}^{(+)}\rangle = \sum_{l} C_{l} |1_{l};g_{1},g_{2}\rangle + \alpha_{1}
|\text{vac};e_{1},g_{2}\rangle + \alpha_{2}|\text{vac};g_{1},e_{2}\rangle,\label{psik}
\end{equation}
with
\begin{equation}
C_{l} = \left\{
\begin{array}{cc}
e^{ikl}+re^{-ikl}, & l<0,\\
c_{1}e^{ikl}+d_{1}e^{-ikl}, & 0< l< n_{1},\\
c_{2}e^{ikl}+d_{2}e^{-ikl}, & n_{1}< l< n_{2},\\
c_{3}e^{ikl}+d_{3}e^{-ikl}, & n_{2}<l< N, \\
te^{ikl}, & l> N.
\end{array}
\right.\label{cl}
\end{equation}
Here the parameters $t$ and $r$ are the single-photon transmission and
reflection amplitudes, respectively. Meanwhile the wave function
must be continuous at nodes $0$, $n_1$, $n_2$ and $N$. According to
the scattering theory \cite{taylor}, the scattering state
$|\psi_{k}^{(+)}\rangle$ is an eigen-state of the Hamiltonian $H$ with
eigen-energy $E_{k}$, i.e., we have
\begin{equation}
H|\psi_{k}^{(+)}\rangle=E_{k}|\psi_{k}^{(+)}\rangle.\label{eigen}
\end{equation}
Numerically solving Eq.~(\ref{eigen}), we can obtain the transmission
($T=|t|^{2}$) and reflection ($R=|r|^{2}$) coefficients. The case
without decay has been thoroughly investigated in our previous work~\cite{3}.
As Fig.~\ref{fig:5} shows, no qualitative difference appears in the
transmission or reflection between the atoms arranged in the
node-antinode and the antinode-node configurations. Actually, the result is
quite similar to the one-atom situation~\cite{zhou1}.
\begin{figure}[htbp]
\includegraphics[width=1\columnwidth]{fig5}
\caption{(Color online). (a) Single-photon reflection for the system
without decay. (b) Single-photon transmission for the system without
decay. The blue solid line represents the atoms in the node-antinode
(8-12) configuration, while the green solid line is for the
antinode-node (12-16) configuration. Here, $N=31$, $\eta=0.01$ and
$\Omega=0.1$.}
\label{fig:5}
\end{figure}
\section{Scattering with decay \label{sec:3}}
Now we consider the situation with atomic decay. Since the two
atoms are located in two distant cavities, it is reasonable to assume
that each atom couples to an independent reservoir. Then the
dynamics of our system is governed by the master
equation~\cite{master}
\begin{align}
\label{eq:4}
\dv{\rho(t)}{t} & = -i \comm{H}{\rho(t)} + \frac{\gamma}{2} \sum_{l=1}^{2}
\nonumber\\
& \quad \pqty{2 \sigma_{l}^{-} \rho(t)
\sigma_{l}^{+} - \op{e_{l}}
\rho(t) - \rho(t) \op{e_{l}}},
\end{align}
where $\gamma$ is the spontaneous decay rate of the atomic excited
state. Here we assume the decay rate for each atom is the same.
The steady state for our scattering problem is
\begin{equation}
\label{eq:5}
\rho = \op{\Psi} + \kappa \op{G},
\end{equation}
with $\ket{G}=\ket{\mathrm{vac};g_{1},g_{2}}$ being the ground state of
our system, and $\ket{\Psi}$ being the scattering state
\begin{equation}
\label{eq:2}
\ket{\Psi} = \ket{k}_{L} + r \ket{-k}_{L} + \sum_{j=1}^{N} c_{j}
\ket{j} + d_{1} \ket{e_{1}} + d_{2} \ket{e_{2}} + t \ket{k}_{R},
\end{equation}
where
\begin{align}
\label{eq:3}
\ket{k}_{L} & = \sum_{j=-\infty}^{0} e^{i k j} \ket{j}, \\
\ket{k}_{R} & = \sum_{j=N+1}^{\infty} e^{i k j} \ket{j},
\end{align}
with $\ket{j}=a_{j}^{\dagger}\ket{G}$ and
$\ket{e_{l}}=\sigma_{l}^{+}\ket{G}$. Then the time independent master
equation implies that the scattering state $\ket{\Psi}$ satisfies
\begin{equation}
\label{eq:1}
\pqty{H - i\frac{\gamma}{2} \sum_{l=1}^{2} \op{e_{l}}} \ket{\Psi} =
E_{k} \ket{\Psi},
\end{equation}
where $E_{k}=\omega_{c}-2\xi \cos k$. So we can use the effective
Hamiltonian $H_{eff}=H - i\frac{\gamma}{2} \sum_{l=1}^{2} \op{e_{l}}$
to describe this decaying system~\cite{1,2}. Note that Eq.~\eqref{eq:1}
is sufficient to determine the transmission coefficient
$T=\abs{t}^{2}$ and the reflection coefficient $R=\abs{r}^{2}$.
In Fig.~\ref{fig:3} we give the numerical results for the single-photon
transmission and reflection coefficients as functions of $\Delta$. Here,
$\Delta=E_{k}-E_{n}$ is the energy difference between the incoming photon
and the resonant mode of the SC\@. For the transmission, we see two peaks
in the selected region of $\Delta$, and their spacing is
configuration-dependent. This result is quite similar to the case
without atomic decay (see Fig.~\ref{fig:5}), except that the transmission
declines dramatically due to the atomic decay. As for the reflection,
besides the peaks expected at the same positions as in the transmission,
one more feature emerges: the reflection experiences a sudden drop at
$\Delta=0$, but only for the node-antinode configuration. This is a major
difference compared with the situation without atomic decay. A
qualitatively similar phenomenon has been observed in Ref.~\cite{xray2},
where it is explained with a much more complicated model. Moreover, the
influence of the atomic decay on the reflection is much smaller. Next we
will figure out the physical mechanism underlying this interesting
configuration-dependent phenomenon.
\begin{figure}[htbp]
\includegraphics[width=1\columnwidth]{fig3}
\caption{(Color online). Transmission and reflection spectra for
the system with decay. (a) Single-photon transmission for the two
atoms in different configurations with decay. The blue solid line
represents the node-antinode (8-12) configuration, while the green
solid line is for the antinode-node (12-16) configuration. (b)
Single-photon reflection. Here, $\gamma=10^{-5}$ and the other
parameters are the same as in Fig.~\ref{fig:5}.}
\label{fig:3}
\end{figure}
Similarly to what was discussed in Ref.~\cite{zhou1}, the reflection
coefficient $R$ at $E_{k}$ for our setting is essentially determined
by the eigen-modes of $H_{S}$ nearly resonant with $E_{k}$. The obvious
qualitative difference in $R$ between the two cases, the two atoms
located in the node-antinode configuration or in the antinode-node
configuration, occurs at $\Delta=0$, i.e., for an input photon with energy
resonant with the resonant eigen-mode of the empty SC\@. In our model
the scatterer consists of the SC and two atoms resonantly coupled to the SC's
$n$-th eigen-mode (with eigenvalue $E_{n}$), and one can prove that $E_{n}$
remains an eigen-value of the scatterer ($H_s$) if either atom is
located at a node of the mode. This condition is met for both of the
above configurations, so $E_{n}$ is still an eigen-value of
the scatterer in both situations. But no peak appears at $\Delta=0$
in the reflection or transmission without atomic decay (see
Fig.~\ref{fig:5}), which violates the resonant-tunneling intuition.
Therefore we need to analyze the eigen-mode of the SC system at
$\Delta=0$.
To resolve this puzzle, Fig.~\ref{fig:4}(a) shows this
special mode for the atoms arranged in the two configurations. This mode is
localized, and its localization pattern is configuration-dependent:
for the node-antinode configuration the mode is localized between the
left wall of the SC and the atom at the node, while for the antinode-node
configuration it is localized between the atom at the node and the right
wall. The appearance of this localized mode is due to the coupling between
the node atom and the non-resonant modes~\cite{zhou1}; the
antinode atom is not excited and there are no photons around
it.
Because the mode is localized, no photon can be transported through the
SC at this incoming energy. So the reflection (transmission) shows
no peak at $\Delta=0$ without atomic decay, regardless of the
configuration of the atoms. When the atoms couple to the reservoirs, a new photon
leakage channel is introduced by spontaneous emission. Thus modes
localized in different ways can lead to a fundamental difference. For the
antinode-node configuration, the mode is localized at the output
(right) side of the SC, so an incoming photon from the left is
totally reflected by the left wall. Then including the atomic decay or not makes
no difference, since no photon enters the SC at all, so $R=1$ ($T=0$) at
$\Delta=0$. For the node-antinode configuration, the photon has a
probability to enter the SC, as the mode is localized at the input
(left) side of the SC\@. The incoming photon will excite the atom at the node
and then leak into the reservoir through spontaneous emission. This
leads to the sudden drop of the reflection at
$\Delta=0$.
Next we calculate the photon flow inside the SC at $\Delta=0$. The
photon flow for the $l$-th cavity is defined as
\begin{equation}
J_l=-i[C_l^{*}(C_{l+1}-C_{l})-C_l(C^{*}_{l+1}-C^{*}_{l})].\label{flow}
\end{equation}
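For completeness, the flow of Eq.~(\ref{flow}) is straightforward to evaluate numerically from the cavity amplitudes; the short Python sketch below (ours, using NumPy) reproduces $J_i=2\sin k$ for an incoming plane wave.
\begin{verbatim}
import numpy as np

# Photon flow J_l = -i (C_l^* C_{l+1} - C_l C_{l+1}^*), Eq. (flow).
def photon_flow(C):
    C = np.asarray(C, dtype=complex)
    return (-1j * (np.conj(C[:-1]) * C[1:]
                   - C[:-1] * np.conj(C[1:]))).real

# A right-moving plane wave C_l = e^{ikl} carries a uniform flow 2 sin k.
k = 0.3
C = np.exp(1j * k * np.arange(10))
assert np.allclose(photon_flow(C), 2 * np.sin(k))
\end{verbatim}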
Fig.~\ref{fig:4}(b) shows no photon flow within the SC for the atoms
arranged in the antinode-node configuration, while for the
node-antinode configuration there is a steady photon flow, denoted $J_s$,
before the node atom. This confirms the analysis given above.
Further, the steady incoming photon flow is $J_i=2\sin{k}$, and the leakage
rate $L$ of the photon into the reservoir is defined as $L=J_s/J_i$.
We then numerically obtain $L+R+T=1$, as expected. Another major
consequence of introducing the decay is the decline of the
transmission. For the above parameter choice
$\gamma=10^{-5}$, the transmission is so small that it can be ignored
(see Fig.~\ref{fig:3}). Thus closing the transmission channel by
setting $\eta=0$ for the right wall of the SC would cause little change
to the above results, while making our model much more similar to the
experimental conditions~\cite{xray2}.
\begin{figure}[htbp]
\includegraphics[width=1\columnwidth]{fig4}
\caption{(Color online). (a) Eigen-mode of the scatterer ($H_s$)
with $E=-2\cos{\frac{4\pi}{N+1}}$ for the atoms in the two
configurations. The blue and green solid lines stand for the
node-antinode and antinode-node configurations, respectively. (b) Photon flow for
each cavity in the SC at $\Delta=0$. Blue crosses (green dots) are for
the node-antinode (antinode-node) configuration. Here, the
parameters are the same as in Fig.~\ref{fig:3}.}
\label{fig:4}
\end{figure}
The localized eigen-mode can be solved analytically. When the wave
vector of the injected photon is $k=\frac{l\pi}{N+1}$, the atom
$n_{i}$ being at a node implies that $\sin(k n_{i})=0$, while being at
an antinode implies that $\vert\sin(k n_{i})\vert=1$. For example,
$\sin(k n_{1})=0$ and $\vert\sin(k n_{2})\vert=1$ in the case of the
node-antinode configuration. In this case, we find an analytical
solution of the eigen-mode of $H_{s}$ with eigen-value
$E_{l}=\omega_{c}-2\xi\cos k$:
\begin{equation}
|\psi_{l}\rangle = \sum_{j=1}^{N} b_{j} |1_{j}; g_{1}, g_{2}\rangle
+ \alpha|\mathrm{\text{vac}}; e_{1}, g_{2}\rangle,
\end{equation}
where
\begin{align}
b_j & =
\begin{cases}
\sin(kj) \Big/ \sqrt{\frac{n_{1}}{2}+\frac{\sin^2{k}}{g^2}}, &
1\leq j\leq n_{1}, \\
0, & n_{1}+1\leq j\leq N,
\end{cases} \\
\alpha & = - \frac{b_{n_{1}-1}}{g}.
\end{align}
The localized mode for the antinode-node configuration is almost the
same, except that the state is localized to the right.
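As a cross-check, this localized mode can also be obtained
numerically by diagonalizing $H_{s}$ in the single-excitation sector.
The following Python sketch assumes unit hopping, $\omega_{c}=0$ and
resonant atoms (conventions consistent with
$E_{l}=\omega_{c}-2\cos k$); the parameter values and index
conventions are our illustrative assumptions:
\begin{verbatim}
import numpy as np

N, g, l = 39, 0.2, 4                 # cavities, coupling, mode index
k = l * np.pi / (N + 1)
n1, n2 = 10, 15                      # node and antinode sites for this k

H = np.zeros((N + 2, N + 2))         # N cavities + 2 atoms
for j in range(N - 1):               # nearest-neighbour hopping -1
    H[j, j + 1] = H[j + 1, j] = -1.0
H[n1 - 1, N] = H[N, n1 - 1] = g      # atom 1 at the node site
H[n2 - 1, N + 1] = H[N + 1, n2 - 1] = g  # atom 2 at the antinode site

E, V = np.linalg.eigh(H)
idx = np.argmin(np.abs(E + 2 * np.cos(k)))
mode = V[:, idx]
print(E[idx], -2 * np.cos(k))        # eigen-value E_l
print(np.abs(mode[n1 - 1:N]).max())  # ~0: no weight from the node onward
print(np.abs(mode[N + 1]))           # ~0: antinode atom not excited
\end{verbatim}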
\section{Single Mode Approximation \label{sec:4}}
For $\Delta=0$, we believe only the localized mode is important, and
this is the starting point of our analysis above. To prove the
validity of this assumption, we introduce the single-mode
approximation Hamiltonian of the scatterer
\begin{equation}
H_{S}=E_l|\psi_{l}\rangle\langle\psi_{l}|
\end{equation}
while keeping $H_{L}$,$H_{R}$,$H_{SL}$ and $H_{SR}$ the same. Now the
scattering state can be expanded as
\begin{equation}
|\Psi_{k}^{(+)}\rangle = |\varphi_{k}\rangle + r |\varphi_{-k}\rangle
+ \mu |\psi_{l}\rangle + t |\phi_k\rangle
\end{equation}
with
\begin{equation}
\begin{cases}
|\varphi_{k}\rangle=\sum_{j=-\infty}^{0}e^{ikj}|j\rangle, \\
|\phi_k\rangle=\sum_{j=N+1}^{\infty}e^{ikj}|j\rangle.
\end{cases}
\end{equation}
Without the atomic decay, through Eq.~(\ref{eigen}) we have
\begin{eqnarray}
\Delta^{'}\mu+\eta b_1(1+r)+t\eta b_Ne^{ik(N+1)}=0, \\
\eta b_1\mu-(e^{ik}+re^{-ik})=0, \\
\eta b_N\mu-te^{ikN}=0
\end{eqnarray}
with $\Delta^{'}=E_k-E_l$. For the antinode-node configuration, $b_1=0$
leads to $R=|r|^2=1$. For the node-antinode case, $b_N=0$ leads
to $t=0$ and we can analytically solve for $r$:
\begin{equation}
r=-\frac{\Delta^{'}e^{ik}+b_1^2\eta^2}{b_1^2\eta^2+\Delta^{'}e^{-ik}},
\end{equation}
whose numerator is minus the complex conjugate of the denominator
(for real $\Delta^{'}$, $b_1$ and $\eta$), again leading to $R=|r|^2=1$.
For the case with atomic decay, we use the standard master-equation
approach. The master equation for the steady state is
\begin{equation}
-i[H,\rho] - \frac{\Gamma}{2} (\rho \sigma_{+}^{1} \sigma_{-}^{1} +
\sigma_{+}^{1} \sigma_{-}^{1} \rho) + \Gamma \sigma_{-}^{1} \rho
\sigma_{+}^{1} = 0
\end{equation}
with
\begin{equation}
\rho = \kappa |\psi_{l}\rangle \langle\psi_{l}| + (1 - \kappa)
|\widetilde{\text{vac}}\rangle \langle\widetilde{\text{vac}}|.
\end{equation}
Projecting the master equation onto the various basis states, we
obtain the following independent equations
\begin{align}
& (\Delta^{'}+i\Gamma|\alpha|^2/2)\mu \nonumber\\
& \quad {}+\eta b_1(1+r)+t\eta b_N e^{ik(N+1)}=0, \\
& \eta b_1\mu-(e^{ik}+re^{-ik})=0, \\
& \eta b_N\mu-te^{ikN}=0,
\end{align}
which can be analytically solved:
\begin{equation}
r=\frac{e^{ik}-\eta b_1\beta}{\eta b_1\beta-e^{-ik}}, \label{single-mode}
\end{equation}
where $\beta=\frac{i\eta b_1}{\Gamma|\alpha|^2/2-i\Delta^{'}}$. For
the antinode-node configuration, $b_1=0$ and we obtain the same
result $R=1$ as in the case without atomic decay. Meanwhile, for the
node-antinode configuration, Eq.~(\ref{single-mode}) perfectly
reproduces the reflection coefficient around $\Delta=0$, as shown in
Fig.~\ref{fig:6}. Thus, compared with the exact model, this
single-mode approximation gives the same results near $\Delta=0$,
both without and with atomic decay.
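For completeness, Eq.~(\ref{single-mode}) is straightforward to
evaluate numerically; a minimal Python sketch (with illustrative,
assumed parameter values) is:
\begin{verbatim}
import numpy as np

def R_single_mode(Dp, k, eta, b1, alpha, Gamma):
    # reflection of Eq. (single-mode), Dp = Delta' = E_k - E_l
    beta = 1j * eta * b1 / (Gamma * abs(alpha) ** 2 / 2 - 1j * Dp)
    r = (np.exp(1j * k) - eta * b1 * beta) \
        / (eta * b1 * beta - np.exp(-1j * k))
    return np.abs(r) ** 2

# without decay (Gamma = 0) this returns R = 1 for any detuning
print(R_single_mode(0.01, np.pi / 10, 0.1, 0.3, 0.2, 0.0))
\end{verbatim}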
\begin{figure}[htbp]
\includegraphics[width=8cm]{fig6}
\caption{(Color online). Reflection vs $\Delta$ around $\Delta=0$.
The blue solid line represents the exact numerical result, the
yellow dashed line is obtained through Eq.~(\ref{single-mode}).
Here, the parameters are the same as in Fig.~\ref{fig:3}.}
\label{fig:6}
\end{figure}
\section{Conclusion}
In conclusion, based on the 1D CCA platform we investigate the
single-photon scattering problem for a SC coupled to two atoms in the
presence of decay. Compared with the case without atomic decay, the
reflection with decay shows a significant difference: it drops
suddenly at $\Delta=0$, but only for the node-antinode configuration.
We propose that this EIT-like phenomenon is actually not governed by
the EIT mechanism. It is due to the special eigen-mode of the
scatterer at $\Delta=0$. This eigen-mode is localized, and its
localization condition is configuration dependent, so photons cannot
be transported through this mode. This explains why, without atomic
decay, $R=1$ ($T=0$) at $\Delta=0$ in both configurations. With
atomic decay, the eigen-mode for the node-antinode configuration can
transport photons into the SC, and the photon can leak into the
reservoir by exciting the atom at the node. So the reflection shows a
sudden drop at $\Delta=0$. Meanwhile, no photon can be transported
into the SC through the eigen-mode for the antinode-node
configuration, which leads to $R=1$ at $\Delta=0$. We calculate the
photon flow and use the results of the single-mode approximation to
support our analysis. We hope our analysis can help in understanding
the experimental results and can stimulate the use of 1D CCAs to
simulate real experimental settings.
\begin{acknowledgements}
The authors thank Y.~Li and C.P.~Sun for helpful discussions. This
work has been supported by National Natural Science Foundation of
China under Grants Nos. 11475254, 11222430, 11434011, and NKBRSF of
China under Grants No. 2014CB921202.
\end{acknowledgements}
\section{Introduction}
In this concept paper the authors focus on the challenges posed by the Covid-19 pandemic and present a multidisciplinary and quantitative approach to respond to the sanitary and economic crisis. The proposed model, named AIRSENSE-TO-ACT, describes a project submitted to the 2020 FISR Call of MIUR (Italian Ministry of University and Research), issued to collect solutions related to the diffusion of the Covid-19 pandemic, able to contain its effects and offering a novel way to manage the reorganization of activities and processes.
The model is based on the fusion of heterogeneous data coming from different sensors: on board of satellites and/or positioned on ground platforms, both mobile and fixed, in addition with other public data extracted from databases, all jointly processed through the application of Machine Learning (ML) algorithms, and the employment and comparison of macro- and micro-systems of analysis.
Some data can be indeed extracted from public databases (i.e. epidemiological information, number of infected, etc.), or retrieved from satellite images freely downloadable, such as those of the European Space Agency (ESA) Copernicus mission, since public databases have further increased in number and type of data, following the COVID-19 emergency.
Yet, to improve the model and the analyses carried out, we foresee the possibility of running data collection campaigns, for instance through ground-based networked sensors, to validate the developed model, in particular in the Italian areas where the emergency has shown the most critical issues, such as the Po Valley, and specifically the Emilia-Romagna and Lombardy regions. Local and punctual measurements with better spatial and temporal resolution will increase the effectiveness of the fight against the spread of the contagion, allowing the analysis to move from macro-areas covered by satellite to micro-areas, in order to increase the "granularity" of the data and to reduce the reaction time on the decisions to be taken.
It is necessary to underline that a crucial aspect will concern the creation of the datasets with which the artificial network underlying the DSS needs to be trained, since based on previous experience, this activity may take up to 3/4 months.
However, it is also worth pointing out that, although developed to combat the COVID-19 pandemic in Italy, this model can be applied to other emergency situations, where air quality has health implications, above all in other countries with similar economic-infrastructural characteristics.
The proposal falls mainly within the scope of risk prevention, developing solutions to counteract and contain the effects of COVID-19 and any future pandemics. However, thanks to the versatility of the proposed tool, it is also of immediate use for the response to other emergencies, to develop tailored solutions and for the management of the organization of activities and processes, relating to the phase of overcoming the phenomenon in safety conditions.
The idea was born because the actual measures of lockdown taken by Governments and local Institutions (Regions and Municipalities) are always \textit{a-posteriori} decisions, where increasing levels of lockdown are activated, based on the number of infected, hospitalized and dead people, up to a generalized lockdown, like the one imposed in Italy from the beginning of March until almost the end of June, and which seems about to return in Italy in the coming weeks of October. These measures imitated what was decided in the Wuhan area, in China, to contain COVID-19, lockdown having proved to have a positive impact on the number of infections \cite{Lau2020}, \cite{Wang2020}.
Yet, to protect human health while at the same time safeguarding socio-economic aspects, these methods of intervention are not appropriate, given their negative implications \cite{Nicola2020}. Interventions must learn from the past in order to take \textit{a priori} decisions, which have a minimal impact on commercial activities but are effective in limiting the spread of the virus.
An interesting analysis of responses to COVID, carried out in Indonesia during the first months of the 2020, is presented in \cite{Djalante2020}, where five recommendations are given to face efficiently the pandemic situation. Micro actions are highlighted and interventions step-by-step analysed. The only limit is related to the methodology, based on traditional tools.
Artificial Intelligence algorithms instead can lend a hand in this sense, managing to capture the hidden interactions between data, and providing the possibility of their corresponding use for a micro- and macro-analysis of the phenomenon, allowing localized and targeted interventions.
As already mentioned above, the proposed model aims to combine satellite data, and data acquired through ground platforms, both mobile and fixed, related to the concentration of some pollutants such as NOx, PM10, PM2.5, meteorological data, air mass displacement, chemical-physical parameters such as temperature and humidity, with other reference data such as population mobility, epidemiological data (number of infected), number of places still available in intensive care (global values or per-hospitalization points, distributed throughout the regions, nationally, etc.), the concentration of residents per $km^2$, the degree of implemented lockdown, and so on.
Several studies analyze the impact and correlation of the mentioned factors on COVID-19
\cite{Nicola2020, Djalante2020, Tosepu2020, Lau2020a, Bashir2020, Kraemer2020, Bashir2020a, Kim2020}.
The idea is to create a Decision Support System (DSS) able to produce as output the level of risk, based on the correlation between the selected data. The choice to base the proposed model on Artificial Intelligence algorithms stems from the enormous amount of data which must be taken into account, and the need to analyze the correlations, sometimes hidden, between this information. In fact, completely different causality structures emerge from the literature when the various factors are considered in combination. For example, the rise in temperature seems to lead to a reduction in the diffusion of COVID-19, but it depends also on the humidity values, since it has been found that high temperature values with high humidity values do not stop the spread of COVID, but on the contrary facilitate it.
In \cite{aircovid} the Covid Risk Weather (CRW) parameter is introduced, an index to evaluate the relative COVID-19 risk due both to weather and air pollution. The CRW can be used to compare the relative changes in reproductive number for the disease due to the weather factors (average and diurnal temperature, ultraviolet (UV) index, humidity, pressure, precipitation) and air pollutants (SO2 and Ozone).
The authors highlighted in \cite{aircovid} that warmer temperature and moderate outdoor UV exposure may offer a modest reduction in reproductive number. However, UV exposure can not fully contain the transmission of COVID-19. If on one hand both high temperature and solar radiation are able to speed up the inactivation rate, on the other high relative humidity may promote the diffusion rate \cite{aircovid, aereosol}.
Still from the literature, in the diffusion of respiratory diseases, the interaction between multiple elements: pollution, high population density, overcrowding, is analyzed and recent studies have shown a correlation between high concentrations of fine particulate matter and COVID-19 diffusion \cite{Li2020, Jordan2020}. However, the subject is still much debated, and further multidisciplinary investigations are in progress. Some additional considerations will be given ahead in this work.\\
Based on all the above considerations, the proposed approach aims to develop a novel model for the cooperative fusion of extremely heterogeneous data, both in terms of nature and source (epidemiological data, environmental data, and data related to the human activities), and in terms of spatial (km to m) and temporal (days to seconds) sampling, to capture the extremely complex dynamics, difficult to identify with other traditional methodological tools. As highlighted before, the algorithms of AI are able to extract the features related to the hidden correlation between several elements, such as for instance the concentrations of atmospheric particulate matter, the meteorological trends and the virus spreading, to estimate the level of risk.\\
\noindent
The designed model is expected to operate on two levels of analysis:
\begin{itemize}
\item \textbf{Macro analysis}: mainly through satellites (not limited to) with the data collection on wide areas
\item \textbf{Micro analysis}: through the use of fixed or dynamic networks for local-focus data collection
\end{itemize}
and initially it has been developed with particular attention to the most critical areas of Italy.
To the best of our knowledge, this proposal represents a significant progress compared to the state of the art. The only studies in progress, related to similar issues, concern the creation of the ESA RACE dashboard \cite{esarace}, published on the 5th of June 2020, which allows the use of satellite data to support the monitoring of commercial and productive activities, and on the other hand, the development of national and international projects such as PULVIRUS \cite{ispra1}, EpiCovAir and RESCOP \cite{rescop}, on the study of the correlation between suspended atmospheric particulate matter concentration and COVID-19. Our proposal integrates the above, and goes beyond their aims, by proposing a Decision Support System capable of producing the level of risk, but also useful for simulations, tuning the input values to establish which inputs to vary, and how, in order to obtain a lower level of risk.
Based on recent events and the studies of the researchers working on COVID-19 diffusion after the contagion took place, it emerged that there is a time lag between the contagion moment and the manifestation of COVID-19 symptoms in the infected person (and their positive test results), a period estimated at between 14 and 21 days. Similarly, the closure of road traffic and the lockdown restrictions may have an effect on the reduction of pollution, but this happens only after some days, and the elapsed time is different in different areas. The situation is particularly bad in some parts of Italy, such as the Po Valley, which suffers from weather and orographic conditions that make it very problematic to shuffle air masses. This latter point has intrigued us, and some simulations and data analysis have been carried out to identify the number of days which best represents the elapsed time for specific areas and critical regions.
The authors started working on two case studies, which will be presented later in this work, where trends of NO2 have been analyzed in the Lombardy region (Italy) and the Hubei region (China), where Wuhan is located, after the activation of the lockdown measures. Sentinel-5P data have been used to measure some pollutants and plot the NO2 values over time, and some analysis of the corresponding COVID-19 daily evolution has also been carried out, with considerations on the correlation between NO2 and COVID-19 evolution.
To take into consideration the presence of a delay in the cause-effect phenomena, the neural network chosen for our model should be able to handle time series deferred in time. Therefore, a particular network architecture called Long Short-Term Memory (LSTM) has been chosen for the creation of the DSS, as described in the following.
The final goal is using historical data series to predict not only the level of risk at time T, but also in other subsequent instants, so that after being trained the network can also be used as a transfer function to calculate the output (the expected risk level) by using new input data.
The complementarity of the three research groups, which cover satellite monitoring and the development of data fusion and crowd monitoring models (University of Sannio), miniaturization and networking of high-resolution sensors also on drones (Politecnico di Milano) and biochemical, health and environmental issues (University of Bologna), demonstrates the strong interdisciplinary nature of the project and represents a guarantee of success for the creation, and subsequent validation, of the proposed system.
\section{The Involved Factors in the DSS Design}
\noindent
Each country and its Government has a Department of Civil Protection managing situations in which hazardous events happen. For instance, in Italy the Department of Civil Protection \cite{protezcivile} activates specific interventions when seismic, volcanic, health, environmental, fire, and other risks occur.
In the specific case of COVID-19 pandemic, the problem and its solutions have involved three main acting factors:
\begin{itemize}
\item \textbf{Pollution$\&$Population}: pollution, overpopulation, anomalies in climate conditions can lead to the disease
\item \textbf{Spread of diseases}: the spread of diseases involves issues related to population and some geo-physical changes, but it can be mitigated by taking correct decisions
\item \textbf{Decisions taken}: decisions are taken by humans based on previous experience, but to take correct decisions the right information and robust decision support systems are needed
\end{itemize}
Usually decisions are taken by humans, based on past experience and selected data, but to take a correct decision an objective and scientific method is needed. We have learned that, under the Covid pandemic, behaviors and/or needs such as gatherings, mobility, sharing of work spaces, etc. can heavily affect the number of infected people, and that to reduce the pandemic all countries have activated \textit{a posteriori} interventions, based on the number of infected and dead, through several attempts at consecutive lockdowns, each more restrictive than the previous one, not always effective, and in any case with dramatic consequences on the social and economic framework.
\\
Hence the idea of making a tool able to support Decision Makers when multiple parameters are involved, realizing an Artificial Intelligence (AI) Decision Support System (DSS), receiving information from multiple sources, and able to produce as output the degree of risk. The proposed system aims to provide an \textit{a priori} tool to take the right decisions, and the general scheme is represented in Fig. \ref{fig:data_sources}, where the human activity contribution and the economic operators' activity are highlighted, both interacting under factors such as climate conditions, pollution density, mobility data, etc., resulting in the number of infected and other effects, whose mutual interactions can be captured through AI-based paradigms.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.5]{images/concept.png}
\caption{Data sources}
\label{fig:data_sources}
\end{figure}
Clearly, to be effective such a system must rely on the availability of data almost in real time, or with a low revisit time (e.g. Sentinel-5P and others), in order to realize a continuous monitoring system, able to produce the right alerts to jointly manage multiple risks such as, in the case discussed, the sanitary, environmental, and economic ones.
\section{Proposed architecture}
The proposed paradigm is based on a Long Short-Term Memory (LSTM) network, which allows the model to learn the temporal features from the training data. This type of network was invented to solve the problems of vanishing and exploding gradients from which Recurrent Neural Networks (RNNs), the precursor of LSTMs, suffer \cite{LSTM_medium}.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.38]{images/block_diagram.png}
\caption{Block diagram of the adopted solution}
\label{fig:block_diagram}
\end{figure}
The general block diagram of the adopted solution is presented in Fig. \ref{fig:block_diagram}, while the elementary building block of the LSTM network is shown in Fig. \ref{fig:lstm_bb}.
\begin{figure}[!ht]
\centering
\includegraphics[scale=.25]{images/lstm.png}
\caption{Elementary building block of the LSTM network}
\label{fig:lstm_bb}
\end{figure}
As can be seen from Fig.~\ref{fig:lstm_bb}, each elementary building block of the LSTM network receives three inputs: the input at the current time step, the output of the previous unit and the “memory” of the previous unit. Each single building block of the LSTM network makes a decision knowing the current input, the previous output and the previous memory, and it generates a new output and updates its memory.
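For reference, one standard formulation of the LSTM cell update (a textbook form, which may differ in minor details from the variant actually implemented) is, with $x_t$ the current input, $h_{t-1}$ the previous output, $c_{t-1}$ the previous memory, $\sigma$ the logistic sigmoid and $\odot$ the element-wise product:
\begin{align*}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f), \\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i), \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o), \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c), \\
h_t &= o_t \odot \tanh(c_t),
\end{align*}
where the forget, input and output gates $f_t$, $i_t$, $o_t$ control what is discarded from, added to, and exposed from the memory $c_t$.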
The functioning of the proposed model can be split, as commonly happens for neural networks, into two main phases: training and prediction. During the training phase the model is fed with input data and output data, both contained in the training set. That means that the model will be trained with a \textit{Supervised Learning} approach, which requires both input and output data. Defining $X \in R^n$ as a vector of inputs and $Y \in R^m$ as a vector of outputs, the model has to be trained in order to find a correlation between $X$ and $Y$, as shown by equation \ref{eqn:supervised}.
\begin{equation}
F \rightarrow Y = F(X)
\label{eqn:supervised}
\end{equation}
For the model under analysis, we suppose that the input is a matrix $X \in R^{n, m}$, where $n$ represents the number of the different data sources and $m$ represents the size of the acquisitions in the time dimension. The same considerations can be made for the output: indeed $Y \in R^{p, q}$, where $p$ represents the dimensionality of the output at a fixed instant of time and $q$ represents the time dimension of the output. In other words, both the input and the output are matrices; the inputs contain sensed data for a fixed time interval and the output contains the prediction of the model for a fixed time interval.
The training process can be done with the well-known Back Propagation and Gradient Descent techniques \cite{hecht1992theory, lecun1990handwritten, ruder2016overview}. In a few words, for each training input, the model is updated by using the error and its derivative. The error is calculated through an \textit{a priori} defined Loss Function, and the model is trained in order to minimize this error. From this last statement, it can be better understood why both inputs and outputs are required in the training phase. For example, given a model $F$ and a set of data (X, Y) with $X \in R^{n, m}$ and $Y \in R^{p, q}$, during the training, for simplicity, a sample of the dataset is selected. Denoting the sample by $X_i$ and $Y_i$, the training phase can be roughly represented by the equations:
\begin{equation}
y_{pred} = F(X_i)
\label{eqn:mod1}
\end{equation}
\begin{equation}
e = loss\_ function(y_{pred}, Y_i)
\label{eqn:error}
\end{equation}
\begin{equation}
\dot{e} = \partial_X loss\_ function(y_{pred}, Y_i)
\label{eqn:der}
\end{equation}
Using the error and the derivative of the error, respectively given by equations \ref{eqn:error} and \ref{eqn:der}, the model weights are updated, until reaching the minimum. After the training phase, the model should be able to generalize, and given new inputs it can generate the new outputs, using the internal state $F$.
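As an illustration, a minimal sketch of this supervised training loop in Python with PyTorch is reported below. The network dimensions, the data loader \texttt{loader} and all names are purely illustrative assumptions, not the final architecture of the project:
\begin{verbatim}
import torch
import torch.nn as nn

class RiskLSTM(nn.Module):
    def __init__(self, n_sources, hidden, p_out):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_sources,
                            hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, p_out)  # risk level per step

    def forward(self, x):        # x: (batch, m, n_sources)
        h, _ = self.lstm(x)
        return self.head(h)      # (batch, m, p_out)

model = RiskLSTM(n_sources=8, hidden=64, p_out=1)
loss_fn = nn.MSELoss()           # the a priori defined Loss Function
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

for X_i, Y_i in loader:          # samples (X_i, Y_i) of the training set
    y_pred = model(X_i)          # y_pred = F(X_i)
    e = loss_fn(y_pred, Y_i)     # e = loss_function(y_pred, Y_i)
    opt.zero_grad()
    e.backward()                 # back-propagated derivative of the error
    opt.step()                   # gradient-descent weight update
\end{verbatim}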
The final output of the model will be a time series of risk maps, as shown in Fig. \ref{fig:block_diagram}, with a resolution that can vary from low resolution, intended as the risk for a country, to high resolution, intended as the risk for a city or a smaller area. An example of a single risk map at low resolution is proposed in Fig. \ref{fig:low_res_riskmap}.
\begin{figure}[!ht]
\centering
\includegraphics[scale=1]{images/riskmap.png}
\caption{Example of risk map at low resolution}
\label{fig:low_res_riskmap}
\end{figure}
\section{State of the work}
As previously discussed, our proposal was born in response to a MIUR call on COVID and aims to create a multiscale analysis system based on AI algorithms, to provide an \textit{a priori} tool to decision-makers. This system is based on the combination of multi-source data with measurements obtained from fixed and mobile network sensors, from satellite information and proximity surveys, if necessary. Such a proposal required the integration of multi-disciplinary skills ranging from the development of sensor networks and satellite data processing to particle detectors, biochemical analysis of particulate matter, innovative strategies in the environmental field, and sophisticated computational technologies for the treatment, analysis, and interpretation of data using Big Data technologies.
This paper is intended as a "Concept" paper, and much of the work must still be done, at least with reference to the realization of the neural network and to the creation of all the necessary datasets.
Yet, some progress has been already done, and in the next sections the idea is to present the state of the work and give useful considerations and reasoning behind it.
\section{Measuring pollutants, air conditions and pandemic data}
Since the core of the idea involves the use of pollutants measurements and the infection data, beyond the involvement of other information, as already highlighted, the authors first selected a list of possible pollutants, such as nitrogen dioxide (NO2), sulphur dioxide (SO2), formaldehyde (HCHO), ozone (O3), which can be retrieved through the use of Sentinel-5P, and other data (for instance, PM2.5, PM10, etc.) which can be retrieved by using other sources, at different multi-scale levels.
The purpose of the measurement of pollutants is twofold. As will be discussed ahead, some pollutants favor the transmission of respiratory diseases, and therefore measuring them and feeding the model with their values contributes to the final risk level. In addition, some pollutants give us information on the degree of lockdown. Again, the model is able to trace the effectiveness of the lockdown by using certain types of pollutants as input, and to capture the inter-relationships among the data.
What is important to understand is that the analysis highlights two different levels of vision: a macro high-level vision (i.e. through the use of satellites, for instance, or data at a wide geographical level, national, etc.), and a micro low-level vision where a local understanding of the phenomenon is carried out.
The proposed tool based on AI and the specific neural network can be used at each level, once trained, and give important insights. An interesting analysis has been done in \cite{Laryetal}, which discusses how ML algorithms can help characterizing airborne particulates for applications related to remote sensing.
\\
An example of data retrieved both through ground and satellite platforms is shown in Fig. \ref{fig:air_quality} and Fig. \ref{fig:s5p_data}, where an air quality map at European level is presented (retrieved through the European Environmental Agency website \cite{europeanmap}) together with some pollutants' levels obtained through the Sentinel-5P satellite, while regarding the COVID-19, the use of the public Johns Hopkins Dashboard (see Fig. \ref{fig:covid_dashboard}) has been proposed, that allows to download the contagion data organized per state, region, cities, and with much more information, such as the daily new cases, daily new deaths and so on, in all the world \cite{coviddashboard}.
\begin{figure}[!ht]
\centering
\includegraphics[width=1\linewidth]{images/Picture2.png}
\caption{Air quality map retrieved from the\\ European Environment Agency website \cite{europeanmap}}
\label{fig:air_quality}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=1\linewidth]{images/Picture1.png}
\caption{Sample of Sentinel-5P pollution data}
\label{fig:s5p_data}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.6]{images/covid_dashboard.png}
\caption{COVID-19 Johns Hopkins Dashboard \cite{coviddashboard}.}
\label{fig:covid_dashboard}
\end{figure}
\subsection{Characterizing airborne particulates}
About the relationship between pollution and COVID diffusion, there are several schools of thought. Among these, some consider the causal relationship between pollution and COVID confirmed, believing it plausible that the coronavirus can be transported in the air, so that contagion may happen even beyond a 2-meter distance. Other schools of thought believe that the respiratory system is already put in great difficulty by pollution in some areas, so when COVID arrives it is more likely to affect the population severely and kill. Moreover, there are two sub-classes of thought: those who believe that there can be a chronic factor (i.e. those who are used to being in polluted air are more exposed to serious effects of the virus), and those who instead think that the problem is contextual (i.e. during strong episodes of pollution, the organism is acutely exposed to severe stress, which makes it more vulnerable to the virus).
At the moment none of these hypotheses has been incontrovertibly demonstrated, so they could still be all valid, and therefore various factors could be responsible for the spread of the infection.
In any case, we considered it very important to analyze the presence of NOx in the air, and we follow the idea that particles can carry the virus, stabilizing it over time and spreading it in space. For this reason, some work has been done to find a correlation between the virus and NOx, since there is a virus/dust correlation (dust may act as a carrier for the virus), and NOx is among the elements captured and transported by dust.
\subsection{Particulate Matter (PM)-virus correlation}
More than 240 scientists recently signed a Commentary, appealing to the medical community and highlighting the need to deal with the airborne transmission of SARS-CoV-2 \cite{morawska2020time}. Much research has proven that the previously indicated “safe distance” of 6 feet cannot be considered sufficient, since different ways of diffusion can occur, in indoor and outdoor environments \cite{setti2020airborne}.
Since the beginning of the COVID-19 pandemic, several studies have been carried out to investigate the reasons for the uneven distribution of infections and fatalities within the different Countries, and positive correlations with air pollution, particularly with suspended fine particles, have been found \cite{frontera2020regional, setti2020potential, magazzino2020relationship, yongjian2020association, sanita2020novel}. Some researchers explain these results considering acute and chronic effects on the respiratory system, which could make it more susceptible to pathogen infection, while others suggest that different biotic and abiotic factors could be inhaled adhering to the already suspended fine particles \cite{ma2020understanding}.\\
Even though the modes of COVID-19 transmission are still under discussion \cite{al2020sars, farhangrazi2020airborne, tung2020particulate}, the possibility of considering the presence of SARS-CoV-2 RNA on PM10 in outdoor environments has also been suggested \cite{setti2020searching}. Many previous studies already demonstrated that different varieties of microorganisms are present on the suspended particulate \cite{Cao2014inhalable}, and recently early warnings were given about the possibility that they could play a role of carrier for the coronavirus \cite{qu2020imperative}, supported by the finding of SARS-CoV-2 or viral nucleic acid on particulate samples in outdoor environments \cite{liu2020aerodynamic}, \cite{setti2020sars}.\\
According to the cited researches, both PM10 and PM2.5 could be relevant in improving viral infectivity. However, it has been hypothesised that the effect is not linear under all conditions, but that a prolonged high concentration of fine particulate (for instance, above the daily limit of 50 $\mu$g/m$^3$ of PM10) could trigger a 'boost' effect on the spread of the virus \cite{setti2020potential}.\\
\subsection{Wireless Sensors Networks for PM Mapping}
Given the above considerations, and taking into account that not all the pollutants can be retrieved through satellite analysis, and that in any case different levels of investigation may become necessary, it is important to discuss the characteristics that the sensor networks must have in order to guarantee the collection of the pollutant values in a way that is useful for the intended purposes.
With regard to the sensors used at the ground level, the main challenge, and a significant element of innovation, concerns exactly the identification of dust sensors, in particular for PM2.5, of adequate cost, consumption, and footprint for a widespread installation in both fixed and mobile (for example with drones) wireless networks, to investigate geographical areas of particular interest, while maintaining peculiar characteristics such as selectivity and sensitivity in the measurement.\\
Traditionally, PM is measured by means of the laser scattering technique. Particles are hydrodynamically focused to flow in a single stream on which a laser beam is focused. The presence and size of each single particle can be determined from the intensity of the scattered light pulse, assuming a spherical shape and average optical properties. The granulometric distribution is thus obtained with reasonable accuracy, and the smallest detectable diameter is diffraction-limited, in the order of the light wavelength, i.e. hundreds of nm. The laser scattering technique represents the state of the art for particles in the 0.3-10 $\mu$m size range. However, due to their cost and bulkiness these instruments are not suitable for massive deployment in the environment. They are typically installed in a few fixed monitoring stations controlled by local environmental protection agencies.
Given the relevance of air pollution for human health and the need for better spatio-temporal resolution in mapping complex phenomena, such as dust generation, concentration and transport, several efforts were carried out in the last decade, to develop compact and affordable devices for measuring PM in a distributed, pervasive way. Recent implementations of wireless sensors networks at city scale have demonstrated the feasibility of this paradigm \cite{cityscale}, and specific technologies have been leveraged to address several challenges, in particular to preserve the environment and human health, spanning from monitoring air, water \cite{water} and the surrounding environment to control natural disasters or preserving landscape resources \cite{prospects}.
The employment of miniaturized devices has spread, with different characteristics, mainly grouped into two classes. Micro-machined silicon-based sensors represent the ultimate degree of miniaturization and leverage micro-fabrication capabilities to enhance the detection sensitivity \cite{emerging}. Two main approaches have been proposed for solid-state detection: the use of mechanical resonance in oscillating micro- and nano-weighting scales and high-resolution capacitance measurements of the single-particle impedance on chip \cite{zepto}. Despite very promising preliminary results and the potential for nanoparticle detection as well as ubiquitous integration in handheld devices, such as smartphones, thanks to their millimetric size, they still appear far from commercial maturity. A critical aspect, in addition to clogging, cleaning, lifetime and power dissipation, when shrinking down the sensor size, remains the active fluidics necessary to capture and collect dust particles in the chip.
Another, more consolidated class of PM sensors is that of low-cost optical sensors, named Optical Particle Counters (OPC). They represent an evolution of smoke detectors in which a photodetector measures the amount of light scattered by the dust when illuminated by a LED. Several sensors of this class are available in the market from companies such as Alphasense, Honeywell, Plantower, Sharp and Shinyei. The latter are very compact (credit-card sized, Fig. \ref{fig:particolato}, (a) and (b)) and consolidated, enabling new versatile scenarios (Fig. \ref{fig:particolato}, (c)) in the pervasive monitoring of dust concentration, also capable of coping with emergency situations such as the acute phases of pandemics.
Different works have compared their performance with reference instrumentation, both in the lab \cite{PMLab1, PMLab2} and in the field \cite{fieldtest}, finding good agreement. In the majority of conditions, they can quantify the concentration of particles with a size range similar to that of laser scattering, with a full scale of about 1000 $\mu g/m^3$, and a sensitivity of a few tens of $\mu g/m^3$, well matched with the regulatory limit of 50 $\mu g/m^3$. Furthermore, the response time is on the order of seconds, fast enough to capture the dynamics of human and air transport. Thus, they are suitable for the purpose of this project, enabling the deployment of thousands of sensing nodes capable of quickly detecting exceedances of PM10 and PM2.5 concentration limits. Indeed, Artificial Intelligence can leverage the high level of redundancy and overlap in spatial mapping to compensate for the intrinsic limits of this class of devices.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.5]{images/particolato.png}
\caption{Examples of low-cost credit-card sized PM sensors from Sharp (a) and Shinyei (b) enabling new versatile scenarios (c) in pervasive monitoring of dust concentration, also capable of coping with emergency situations such as the acute phases of pandemics.}
\label{fig:particolato}
\end{figure}
~\newpage
\subsection{Case studies: correlation between NO2 data and COVID-19 related data}
As highlighted before, it is extremely important to change the level of analysis, from a macro to a micro-level, to better understand and monitor the evolution of the phenomenon, and to get the right input values for the DSS in such a way that the initial diffusion of the virus can be detected and stopped with targeted levels of lockdown.
In the first part of the analysis, a study has been carried out to evaluate the correlation between one of the pollutants, NO2, and COVID-19.
The data relating to the NO2 concentration have been acquired through the Google Earth Engine (GEE), recorded through the use of the Sentinel-5P satellite, and processed, averaged and plotted using Python scripts. In Fig. \ref{fig:s5p1_1} and Fig. \ref{fig:s5p1_2} the concentration of NO2 is shown for Italy and China respectively, over two similar periods. It is possible to see that in the worst period of the pandemic for both countries, the highest concentration of NO2 lay in the regions where COVID-19 had its highest values.
\begin{figure}[!ht]
\centering
\includegraphics[width=1\linewidth]{images/Picture3.png}
\caption{$NO_2$ concentration in Italy (from Jan. 1st to Jan. 5th 2020)}
\label{fig:s5p1_1}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=1\linewidth]{images/Picture4.png}
\caption{$NO_2$ concentration in China (from Jan. 1st to Jan. 8th 2020)}
\label{fig:s5p1_2}
\end{figure}
For this reason, rather than analyzing the data relating to entire nations, we have chosen to focus on the regions of Italy and China where the spread of the virus has been most rapid and emblematic (Lombardy and the Wuhan area).
Sentinel-5P covers wide areas; therefore, starting from the national concentrations of nitrogen dioxide, a subset has been created to obtain the concentrations related to the chosen case studies. Some statistics on the subset have been calculated, including, for instance, the maximum, the minimum and the standard deviation, and the maximum values of the pollutant have been collected in Table 1 and Table 2 for further discussion. In the two tables, to manage the absence of values in Sentinel-5P data, the average values of the NO2 concentrations over five days have been calculated, starting from January 1st until June 30th, 2020.
The correlation between the peaks of the average NO2 concentration and the number of new COVID-19 positives has been analyzed. To match the satellite and epidemiological data, it was necessary to make an average over 5 days for the number of new infections. A scatter plot was constructed using these data, positioning the number of infected people on the abscissa axis and the $NO_2$ concentration on the ordinates.
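A minimal Python sketch of this 5-day averaging used to align the two data series (with \texttt{nanmean} handling missing Sentinel-5P values; the names and the NaN convention are our assumptions) is:
\begin{verbatim}
import numpy as np

def five_day_average(daily, block=5):
    # average over non-overlapping 5-day blocks, ignoring
    # missing values (NaN); trailing days that do not fill
    # a whole block are dropped
    x = np.asarray(daily, dtype=float)
    n = (len(x) // block) * block
    return np.nanmean(x[:n].reshape(-1, block), axis=1)
\end{verbatim}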
In Fig. \ref{lombardiadelaymax1} and Fig. \ref{wuhandelaymax1}, the scatter plots present a time sequence of data, where the time is defined by the number near each dot. This representation helps to graphically catch a correlation between NO2 and COVID-19 and also to identify emblematic cases like the Wuhan one. In fact, in this latter case the trend, at a certain moment, starts oscillating. This phenomenon can be better explained by analyzing Fig. \ref{lombardiadelaymax2} and Fig. \ref{wuhandelaymax2}, where it is evident that for the Lombardy region both NO2 and COVID-19 have a decreasing trend in time (from a high number of infected and a high concentration of pollutant to a low number of infected and a low concentration of pollutant), while for the Wuhan region the situation is quite different and unexpected, since after a certain period the NO2 starts increasing again while the new infections are stable and close to a very low value.
To take into account the delay between the instant when the infection occurs and the actual evidence of the transmitted contagion, a delay analysis has been carried out: the COVID-19 data were moved forward in time, obtaining scatter plots with different delays. The correlation was calculated using the Pearson correlation coefficient (PCC), a statistic that measures the linear correlation between two variables X and Y, with a value between -1 and +1. A value of +1 indicates total positive linear correlation, 0 indicates no linear correlation, and -1 indicates total negative linear correlation.
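This delay analysis can be sketched in Python as follows, shifting the COVID-19 series forward by $d$ delay units (one unit = one 5-day average) and computing the PCC at each shift; the code assumes two aligned series of equal length and is illustrative only:
\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr

def lagged_pcc(no2, cases, max_delay=15):
    # pair NO2 at time t with new cases at time t + d
    pcc = []
    for d in range(max_delay + 1):
        x = no2[:len(no2) - d] if d > 0 else no2
        y = cases[d:]
        pcc.append(pearsonr(x, y)[0])
    return pcc
\end{verbatim}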
\begin{table}[!ht]
\centering
\begin{tabular}{|c|c|c|}
\hline
Delay unit & PCC Lombardia & PCC Wuhan \\
\hline\hline
0 & 0.0770 & -0.2474\\
1 & 0.2773 & -0.2514\\
2 & 0.4983 & -0.2496\\
3 & 0.6629 & -0.1775\\
4 & 0.7180 & 0.0271\\
5 & 0.7918 & 0.1934\\
6 & 0.8774 & 0.3842\\
7 & 0.8092 & \textbf{0.4857}\\
8 & 0.7532 & 0.3905\\
9 & \textbf{0.8969} & 0.2969\\
10 & 0.8334 & 0.2905\\
11 & 0.8403 & 0.1813\\
12 & 0.8736 & -0.0652\\
13 & 0.7830 & -0.2000\\
14 & 0.8302 & 0.0291\\
15 & 0.8854 & 0.1762\\
\hline
\end{tabular}
\caption{Pearson Correlation Coefficient for Lombardia and Wuhan}
\label{correlazioneTable2}
\end{table}
From Table \ref{correlazioneTable2} it can be seen that the maximum correlation (positive) was recorded with a delay of 9 units (Fig. \ref{lombardiadelaymax1} and Fig. \ref{lombardiadelaymax2}) in the case of Lombardy region.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.7] {images/Picture11.png}
\caption{New daily cases VS average concentration of $ NO_2 $ in Lombardy}
\label{lombardiadelaymax1}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.8] {images/Picture13.png}
\caption{New daily cases VS average concentration of $ NO_2 $ in Lombardy}
\label{lombardiadelaymax2}
\end{figure}
Similarly, from Table \ref{correlazioneTable2} it can be seen that the maximum correlation (positive) was recorded with a delay of 7 units (Fig. \ref{wuhandelaymax1} and Fig. \ref{wuhandelaymax2}) in the case of Wuhan.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.7] {images/Picture12.png}
\caption{New daily cases VS average concentration of $ NO_2 $ in Wuhan.}
\label{wuhandelaymax1}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.8] {images/Picture14.png}
\caption{New daily cases VS average concentration of $ NO_2 $ in Wuhan}
\label{wuhandelaymax2}
\end{figure}
Several considerations can be made.
First of all, if the delay units are converted back to a number of days, the maximum value of the correlation between NO2 and COVID-19 is found after 35 days for the Wuhan region, and after 45 days for the Lombardy region. Somehow, it seems that restrictions in Wuhan led to better results in terms of COVID-19 reduction than in Italy. Moreover, it is clear that the \textit{a-posteriori} measures of lockdown cannot be an efficient mode of intervention, because more than a month, even a month and a half, must pass before the effects on infection reduction become significant. Lastly, if the graphs related to Wuhan are analyzed (as underlined before), an anomaly appears evident. In Fig. \ref{wuhandelaymax2}, when the NO2 values begin to increase (since the lockdown has been removed), the number of newly infected remains very low, and this trend raises some doubts about the veracity of the data on the number of infected communicated by Wuhan in this second phase. Clearly a further analysis might give additional insights.
\section{Conclusions}
In this paper we have presented the cross-disciplinary AIRSENSE-TO-ACT project, aiming at the creation of a Decision Support System for the timely and effective activation of targeted countermeasures during virus pandemics, based on a model merging data from very heterogeneous sources, including ground wireless networks of low-cost dust monitors and satellite data, spanning from meteorological and pollution data to crowd sensing. The correlation between virus diffusion and the concentration of NO2 has been analyzed, and the further analysis of the correlation between virus diffusion and PM in the air will be the main focus of future investigations. Moreover, work is in progress on the integration of mobility data \cite{mobility} inside the model.
The "second wave" currently spreading in Europe demonstrates the urgent need for such a tool as proposed in this paper, in order to limit the economical damage of generalized lockdowns and restrictions.
\section{Acknowledgments}
Authors want to thank Maria Pia Del Rosso and Chiara Zarro for their useful insights and support.
\authorcontributions{All the authors contributed equally to this work}
\conflictsofinterest{The authors declare no conflict of interest.}
\reftitle{References}
\externalbibliography{yes}
\section*{Introduction}
Grover's iterative quantum search algorithm \cite{grov} can be used in such problems as cryptography, AI, pattern matching and database search, and is the most efficient quantum search algorithm to date. Its algorithmic complexity is $O(\sqrt{N})$, where $N$ is the size of the search space. Grover's quantum iterator has the following components: (1) oracle for selected state inversion, (2) Hadamard transformation, (3) conditional phase shift to all the states except $|0\rangle$, and (4) Hadamard transformation. Our purpose here is to provide efficient simulation of the above-mentioned quantum operations on a classical Turing Machine (TM).
In general, simple-minded classical simulation of quantum algorithms has an obvious problem: parallel quantum operations on superposed quantum states must be serialized on a TM or parallelized on multiple TMs. The former leads to execution time growing exponentially, while the latter leads to physical resources growing exponentially. Classical simulation therefore quickly becomes infeasible. In this report, we outline a classical implementation method that significantly reduces the time-space resource requirements for simulating Grover's algorithm on a classical TM.
Conventional quantum simulators \cite{quest},\cite{libquantum} focus on simulating quantum circuits using universal gates, which can be executed on a quantum computer. For any circuit the number of gates increases with the number of qubits, and the simulation of every unitary transformation requires a loop of size $N = 2^n$, where $n$ is the number of qubits. Grover's search algorithm, due to its iterative nature, is therefore the most compute-intensive, and the run-time degrades with the number of qubits in the system, i.e. with the search space. We will show that the run-time can be improved significantly by implementing the equivalent mathematical operators, without affecting the outcomes. The objective is to make the simulation computationally cheaper and time-efficient by minimizing the number of transformations, and therefore loops, thus making it independent of the search space.
\section{Background}
The Grover search algorithm consists of the following steps:
\begin{enumerate}
\item Apply Hadamard transform $H^{\otimes n}$ to create equal superposition of all the states.
\item Apply Oracle $O$ to flip the solution state(s).
\item Apply Hadamard transform $H^{\otimes n}$.
\item Perform Conditional Phase Shift $2|0\rangle \langle 0| - I$ to flip all the states except $|0\rangle^{\otimes n}$.
\item Apply Hadamard transform $H^{\otimes n}$.
\item Perform Measurement.
\end{enumerate}
Steps $3-5$ constitute the inversion about the mean, or amplitude amplification, operator $2|\psi\rangle \langle \psi| - I$, and steps $2-5$ the Grover iterator $G=(2|\psi\rangle \langle \psi| - I)O$.
The following subsections outline the reference implementation of the Grover's search on a conventional quantum simulator, as a basis to compare the proposed method defined in the subsequent section.
\subsection{Quantum State}
A {\em quantum state} can be realized using a linear array of complex numbers, where the indices represent the basis states and the values map to the associated probability amplitudes. The representation of a generic $n$-qubit system state $\psi = \sum_{i=0}^{N-1} c_{i} |i\rangle$ is shown below,
\begin{table}[!ht]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{$\psi[N]$} & Basis State $\rightarrow$ & 0 & 1 & 2 & 3 & $\cdots$ & N-2 & N-1 \\
\cline{2-9}
& Probability Amplitude $\rightarrow$ & $c_{0}$ & $c_{1}$ & $c_{2}$ & $c_{3}$ & $\cdots$ & $c_{N-2}$ & $c_{N-1}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
where $c_{i}$ represents the probability amplitude of the basis state $i$, $\sum_{i} |c_{i}|^{2} = 1$ and $N=2^{n}$.
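As an illustration, in Python/NumPy this dense representation reduces to a complex array (a minimal sketch):
\begin{verbatim}
import numpy as np

n = 3                        # number of qubits
N = 2 ** n                   # size of the search space
psi = np.zeros(N, dtype=complex)
psi[0] = 1.0                 # initial state |00...0>
\end{verbatim}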
\subsection{Unitary Gate}
A {\em single qubit unitary gate} can be represented by a $2X2$ unitary matrix, $U = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\end{pmatrix}$, with $UU^{\dag} = I$. Equation \ref{eqn:unitary} describes the transformation applied to a generic qubit state.
\begin{equation}
\label{eqn:unitary}
\begin{split}
U_{1}|0\rangle &= \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} 1 \\ 0\end{pmatrix} = \begin{pmatrix} a_{11} \\ a_{21} \end{pmatrix} = (a_{11}|0\rangle + a_{21}|1\rangle)\\
U_{1}|1\rangle &= \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} 0 \\ 1\end{pmatrix} = \begin{pmatrix} a_{12} \\ a_{22} \end{pmatrix} = (a_{12}|0\rangle + a_{22}|1\rangle)\\
U_{1}(\alpha |0\rangle + \beta |1\rangle) &= (\alpha U_{1}|0\rangle + \beta U_{1}|1\rangle) \\
&= \begin{pmatrix} \alpha * a_{11} \\ \alpha * a_{21} \end{pmatrix} + \begin{pmatrix} \beta * a_{12} \\ \beta * a_{22} \end{pmatrix} = \begin{pmatrix} \alpha * a_{11} + \beta * a_{12} \\ \alpha * a_{21} + \beta * a_{22} \end{pmatrix}\\
&= (\alpha * a_{11} + \beta * a_{12})|0\rangle + (\alpha * a_{21} + \beta * a_{22})|1\rangle
\end{split}
\end{equation}
For an $n$-qubit arbitrary state, the operation of $U$ on the $i^{th}$ target qubit can be simulated as follows: apply the above transformation to the $i^{th}$ qubit of all the $N=2^{n}$ basis states. It is evident that the unitary operation on the $i^{th}$ target qubit of a given basis state, say $|x_{n-1}x_{n-2}\dotsb x_{i} \dotsb x_{1} x_{0}\rangle$, will affect the probability amplitude of the basis state $|x_{n-1}x_{n-2} \dotsb \bar{x_{i}} \dotsb x_{1} x_{0}\rangle$, where $\bar{x_{i}} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}x_{i}$. Algorithm \ref{alg:unitary} outlines a possible implementation.
\begin{algorithm}[H]
\caption{1-qubit Unitary Operation}
\label{alg:unitary}
\SetAlgoLined
\SetKwFunction{FMain}{U}
\SetKwProg{Fn}{Function}{:}{end}
\Fn{\FMain{state: $\psi[N]$, target qubit: $i$}}{
\KwData{$U = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\end{pmatrix}$}
\For {$basis\_state=0;\ basis\_state < N;\ basis\_state++$}{
\tcp{process each pair of states differing only in qubit $i$ once}
\If {$!(basis\_state \& (1 << i))$}{
$zero\_target\_state = basis\_state$ \tcp*{state with the target qubit $|0\rangle$}
$one\_target\_state = basis\_state | (1 << i)$ \tcp*{state with the target qubit $|1\rangle$}
$c_{0} = \psi[zero\_target\_state]$\;
$c_{1} = \psi[one\_target\_state]$\;
\tcp{apply $U$ to the amplitude pair $(c_{0}, c_{1})^{T}$}
$\psi[zero\_target\_state] = a_{11} * c_{0} + a_{12} * c_{1}$\;
$\psi[one\_target\_state] = a_{21} * c_{0} + a_{22} * c_{1}$\;
}
}
return 0\;
}
\end{algorithm}
The run-time complexity of this method is therefore O($N$). Similarly, an $n$-qubit transformation will require $n$ iterations of the above function $U$, with a complexity of O($nN$).
\subsection{Oracle} \label{oracle}
The {\em Oracle}, $O$, is required to mark the solution state. Its operation can be summarized as $O[\frac{1}{\sqrt{N}} \sum_{x=0}^{N-1} |x\rangle] \Rightarrow \frac{1}{\sqrt{N}} \sum_{x=0}^{N-1} (-1)^{f(x)}|x\rangle$, where $f(x)=1$, if $|x\rangle$ is the solution and $0$ otherwise.
A possible $2$-qubit quantum oracle construction, for single solution, is captured in Fig. \ref{fig:oracle1}.
\begin{figure}[ht!]
\centering
\begin{tikzpicture}
\draw (.5,3.25) node {$|\psi\rangle$};
\draw (.4,2.4) node {$\frac{|0\rangle - |1\rangle}{\sqrt{2}}$};
\draw (.4,1.7) node {Solution state:};
\draw (1,3.5) -- (1.5,3.5);
\draw (1,3) -- (1.5,3);
\draw (1,2.4) -- (3.8,2.4);
\draw (1.5,2.1) rectangle (3.5,3.8) node [midway=center] {};
\draw (1.6,3.3) rectangle (2,3.7) node [midway=center] {$X$};
\draw (1.6,2.8) rectangle (2,3.2) node [midway=center] {$X$};
\draw (2.5,3.5) node [shape=circle,draw,fill=black,inner sep=0pt,minimum size=2mm]{} -- (2.5,3.5);
\draw (2.5,3) node [shape=circle,draw,fill=black,inner sep=0pt,minimum size=2mm]{} -- (2.5,3);
\draw (2.5,2.4) node [shape=circle,draw,fill=none,inner sep=0pt,minimum size=3mm]{} -- (2.5,2.4);
\draw (2.5,3.5) -- (2.5,2.25);
\draw (3,3.3) rectangle (3.4,3.7) node [midway=center] {$X$};
\draw (3,2.8) rectangle (3.4,3.2) node [midway=center] {$X$};
\draw (2,3.5) -- (3,3.5);
\draw (2,3) -- (3,3);
\draw (3.5,3.5) -- (3.8,3.5);
\draw (3.5,3) -- (3.8,3);
\draw (2.5,1.7) node {$|00\rangle$};
\draw (4,3.5) -- (4.5,3.5);
\draw (4,3) -- (6.8,3);
\draw (4,2.4) -- (6.8,2.4);
\draw (4.5,2.1) rectangle (6.5,3.8) node [midway=center] {};
\draw (4.6,3.3) rectangle (5,3.7) node [midway=center] {$X$};
\draw (5.5,3.5) node [shape=circle,draw,fill=black,inner sep=0pt,minimum size=2mm]{} -- (5.5,3.5);
\draw (5.5,3) node [shape=circle,draw,fill=black,inner sep=0pt,minimum size=2mm]{} -- (5.5,3);
\draw (5.5,2.4) node [shape=circle,draw,fill=none,inner sep=0pt,minimum size=3mm]{} -- (5.5,2.4);
\draw (5.5,3.5) -- (5.5,2.25);
\draw (6,3.3) rectangle (6.4,3.7) node [midway=center] {$X$};
\draw (5,3.5) -- (6,3.5);
\draw (5,3) -- (6,3);
\draw (6.5,3.5) -- (6.8,3.5);
\draw (5.5,1.7) node {$|01\rangle$};
\draw (7,3.5) -- (9.8,3.5);
\draw (7,3) -- (7.5,3);
\draw (7,2.4) -- (9.8,2.4);
\draw (7.5,2.1) rectangle (9.5,3.8) node [midway=center] {};
\draw (7.6,2.8) rectangle (8,3.2) node [midway=center] {$X$};
\draw (8.5,3.5) node [shape=circle,draw,fill=black,inner sep=0pt,minimum size=2mm]{} -- (8.5,3.5);
\draw (8.5,3) node [shape=circle,draw,fill=black,inner sep=0pt,minimum size=2mm]{} -- (8.5,3);
\draw (8.5,2.4) node [shape=circle,draw,fill=none,inner sep=0pt,minimum size=3mm]{} -- (8.5,2.4);
\draw (8.5,3.5) -- (8.5,2.25);
\draw (9,2.8) rectangle (9.4,3.2) node [midway=center] {$X$};
\draw (8,3) -- (9,3);
\draw (9.5,3.5) -- (9.8,3.5);
\draw (9.5,3) -- (9.8,3);
\draw (8.5,1.7) node {$|10\rangle$};
\draw (10,3.5) -- (12.8,3.5);
\draw (10,3) -- (12.8,3);
\draw (10,2.4) -- (12.8,2.4);
\draw (10.5,2.1) rectangle (12.5,3.8) node [midway=center] {};
\draw (11.5,3.5) node [shape=circle,draw,fill=black,inner sep=0pt,minimum size=2mm]{} -- (11.5,3.5);
\draw (11.5,3) node [shape=circle,draw,fill=black,inner sep=0pt,minimum size=2mm]{} -- (11.5,3);
\draw (11.5,2.4) node [shape=circle,draw,fill=none,inner sep=0pt,minimum size=3mm]{} -- (11.5,2.4);
\draw (11.5,3.5) -- (11.5,2.25);
\draw (11,3.5) -- (12,3.5);
\draw (11,3) -- (12,3);
\draw (11.5,1.7) node {$|11\rangle$};
\end{tikzpicture}
\caption{2-qubit conventional quantum Oracles} \label{fig:oracle1}
\end{figure}
The run-time in this case is given by O($f_{o}(n)N$), where $f_{o}(n)$ is the number of universal gates required to build an $n$-qubit Oracle.
\subsection{Conditional Phase Shift}
Similar to the Oracle implementation, the number of gates required to build the conditional phase shift is $f_{c}(n)$, giving a complexity of O($f_{c}(n)N$).
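For reference, the conditional phase shift acts diagonally on the computational basis:
\begin{equation*}
(2|0\rangle\langle 0| - I)\,|x\rangle =
\begin{cases}
\phantom{-}|x\rangle, & x = 0,\\
-|x\rangle, & x \neq 0,
\end{cases}
\end{equation*}
which, up to an unobservable global phase, is the same as flipping the sign of $|0\rangle$ alone, as done in Algorithm \ref{alg:grover1}.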
\subsection{Simulation of Grover's Search}
\label{sec:qsim}
The overall Grover search simulation is outlined in Algorithm \ref{alg:grover1} below.
\begin{algorithm}[H]
\caption{$n$ qubit Grover's Search}
\label{alg:grover1}
\SetAlgoLined
\SetKwFunction{FMain}{grover\_search}
\SetKwProg{Fn}{Function}{:}{end}
\Fn{\FMain{search state: $|i\rangle$}}{
$\psi[N] = |0\rangle^{n}$\;
$H^{\otimes n}(\psi[N])$ \tcp*{prepare superposition state}
\tcp{Apply $\frac{\pi * \sqrt{N}}{4}$ iterations of Grover iterator}
\For {$iter=0;\ iter < \frac{\pi * \sqrt{N}}{4};\ iter++$}{
$oracle(\psi[N],i)$ \tcp*{Oracle: Flip the phase of solution state $|i\rangle$}
$H^{\otimes n}(\psi[N])$ \tcp*{Apply Hadamard transformation}
$conditional\_phase\_shift(\psi[N],0)$ \tcp*{Flip the phase of state $|0\rangle$}
$H^{\otimes n}(\psi[N])$ \tcp*{Apply Hadamard transformation}
}
return 0\;
}
\end{algorithm}
The overall time complexity is approximately O($\sqrt{N}*[f_{o}(n)N+nN+f_{c}(n)N+ nN]$) = O($[f_{o}(n)+f_{c}(n)+2n]N^{\frac{3}{2}}$).
\section{Proposed Simulation Method}
The Grover operator $G=(2|\psi\rangle \langle \psi| - I)O$ consists of two sub-operators, the Oracle ($O$) and the inversion about the mean operator ($2|\psi\rangle \langle \psi| - I$).
Instead of implementing the circuit using the universal gates as the building blocks, we propose to implement the equivalent mathematical operators that can be efficiently simulated on classical computers.
\subsection{Oracle}
A typical classical realization of the Oracle construct, outlined in the previous section, is to flip the relative phase of the solution state. Since the Oracle is assumed to know the solution, this can be achieved in O($N$) instead of O($f_{o}(n)N$).
\begin{algorithm}[H]
\caption{n-qubit Oracle}
\label{alg:oracle}
\SetAlgoLined
\SetKwFunction{FMain}{oracle}
\SetKwProg{Fn}{Function}{:}{end}
\Fn{\FMain{state: $\psi[N]$, solution state(s): $k[]$}}{
\For {$i=0;\ i < sizeof(k);\ i++$}{
$\psi[k[i]] *= -1$\;
}
return 0\;
}
\end{algorithm}
\subsection{Inversion About Mean Operator}
The {\em inversion about mean operator}, also known as the {\em Diffusion operator}, for a given state $|\psi\rangle$ is given by $(2|\psi\rangle \langle \psi| - I)$ or $[H^{\otimes n}(2|0\rangle \langle 0| - I)H^{\otimes n}]$, and is mathematically equivalent to
\begin{equation}
\begin{split}
\Delta_{i,j} &= \frac{2}{N} - 1; \forall i = j\\
\Delta_{i,j} &= \frac{2}{N}; i \neq j
\end{split}
\end{equation}
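Written out component-wise, this matrix makes the name ``inversion about the mean'' explicit: for an amplitude vector $c$,
\begin{equation*}
(\Delta c)_{i} = \sum_{j=0}^{N-1}\Delta_{i,j}c_{j}
= \frac{2}{N}\sum_{j=0}^{N-1}c_{j} - c_{i}
= 2\langle c\rangle - c_{i},
\qquad
\langle c\rangle = \frac{1}{N}\sum_{j=0}^{N-1}c_{j}.
\end{equation*}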
Notice that the inversion about mean operator has structural symmetry: a mere $2\times2$ matrix, $\Delta_{reduced} = \begin{pmatrix} \frac{2}{N} - 1 & \frac{2}{N} \\ \frac{2}{N} & \frac{2}{N} - 1\end{pmatrix}$, is sufficient to represent it when operating on a quantum state. Here, each diagonal element carries the matrix element $\frac{2}{N}-1$ and each off-diagonal element $\frac{2}{N}$, where $N=2^{n}$ is the dimension. As $\Delta_{reduced}$ operates on each element of the state vector previously acted on by the oracle, only the marked state amplitude becomes prominent with the progress of the Grover iterator.
The {\em Diffusion operator}, when applied to an arbitrary state $|\psi\rangle = \sum_{x=0}^{N-1}c_{x}|x\rangle$, results in $\sum_{x=0}^{N-1}[-c_{x} + 2\langle c\rangle]|x\rangle$, where $\langle c\rangle = \frac{1}{N}\sum_{x=0}^{N-1}c_{x}$ is the mean amplitude. Algorithm \ref{alg:invmean} outlines the implementation.
\begin{algorithm}[H]
\caption{n-qubit Inversion about Mean}
\label{alg:invmean}
\SetAlgoLined
\SetKwFunction{FMain}{inversion\_mean}
\SetKwProg{Fn}{Function}{:}{end}
\Fn{\FMain{state: $\psi[N]$}}{
$real\_sum = imag\_sum = 0$\;
\For {$basis\_state=0;\ basis\_state < N;\ basis\_state++$}{
$real\_sum += \psi[basis\_state].real$\;
$imag\_sum += \psi[basis\_state].imag$\;
}
$real\_sum = (real\_sum * 2) / N$ \tcp*{real part of $2\langle c\rangle$}
$imag\_sum = (imag\_sum * 2) / N$ \tcp*{imaginary part of $2\langle c\rangle$}
\For {$basis\_state=0;\ basis\_state < N;\ basis\_state++$}{
$\psi[basis\_state].real = real\_sum - \psi[basis\_state].real$\;
$\psi[basis\_state].imag = imag\_sum - \psi[basis\_state].imag$\;
}
return 0\;
}
\end{algorithm}
The run-time in this case is again O($N$), compared to O($[2n+f_{c}(n)]N$) in conventional Grover simulator.
\subsection{Simulation of Grover's Search}
The complete Grover search simulation using the proposed approach is outlined in Algorithm \ref{alg:grover2}.
\begin{algorithm}[H]
\caption{$n$ qubit Grover's Search}
\label{alg:grover2}
\SetAlgoLined
\SetKwFunction{FMain}{grover\_search}
\SetKwProg{Fn}{Function}{:}{end}
\Fn{\FMain{search state: $|i\rangle$}}{
$\psi[N] = |0\rangle^{n}$\;
$H^{\otimes n}(\psi[N])$ \tcp*{prepare superposition state}
\For {$iter=0;\ iter < \frac{\pi * \sqrt{N}}{4};\ iter++$}{
$oracle(\psi[N],i)$ \tcp*{phase inversion operator on state $|i\rangle$}
$inversion\_mean(\psi[N])$ \tcp*{apply Inversion about mean operator on state $|\psi\rangle$}
}
return 0\;
}
\end{algorithm}
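For concreteness, a hedged end-to-end C sketch of this loop is shown below for a single marked index; it mirrors Algorithms \ref{alg:oracle} and \ref{alg:invmean}, and the function and variable names are illustrative only.
\begin{verbatim}
#include <complex.h>
#include <math.h>
#include <stddef.h>

typedef double complex cnum;

/* Grover search for a single marked index k among N = 1 << n states. */
void grover_search(cnum *psi, int n, size_t k)
{
    const double PI = 3.14159265358979323846;
    size_t N = (size_t)1 << n;

    for (size_t b = 0; b < N; b++)          /* H^(x)n |0...0>:       */
        psi[b] = 1.0 / sqrt((double)N);     /* uniform superposition */

    size_t iters = (size_t)(PI * sqrt((double)N) / 4.0);
    for (size_t it = 0; it < iters; it++) {
        psi[k] = -psi[k];                   /* oracle: phase flip    */
        cnum mean = 0;                      /* inversion about mean  */
        for (size_t b = 0; b < N; b++) mean += psi[b];
        mean /= (double)N;
        for (size_t b = 0; b < N; b++) psi[b] = 2.0 * mean - psi[b];
    }
    /* |psi[k]|^2 is now close to 1, so measurement yields k w.h.p. */
}
\end{verbatim}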
The overall time complexity is $O(N^{\frac{1}{2}}N) = O(N^{\frac{3}{2}})$, compared to O($[f_{o}(n)+f_{c}(n)+2n]N^{\frac{3}{2}}$) for a conventional quantum simulator. There is a significant improvement in time complexity due to the optimized computation, which is crucial for general-purpose classical computers with limited CPU resources. The reduction in time complexity by the factor $f_{o}(n)+f_{c}(n)+2n$ is observed to be significant for simulations involving more than $10$ qubits.
\section{Simulation Results}
\subsection{Measurement outcome}
The measurement statistics for a $3$-qubit search problem generated by the proposed simulation, along with those obtained from the IBM Q Experience \cite{ibmbluemix} simulation for reference, are captured in Fig. \ref{grsrch} below.
\begin{figure}[!ht]
\centering
\begin{minipage}[b]{.45\textwidth}
\begin{adjustbox}{width=\linewidth}
\begin{tikzpicture}
\begin{axis}[
ybar,
ylabel={Probabilities (\%)},
xlabel={State},
enlargelimits=0.10,
symbolic x coords={000,001,010,011,100,101,110,111},
xtick=data,
nodes near coords,
nodes near coords align={vertical},
]
\addplot coordinates { (000, .781) (001, .781) (010, .781) (011, .781) (100, .781) (101, 94.531) (110, .781) (111, .781) };
\end{axis}
\end{tikzpicture}
\end{adjustbox}
\caption{Three-qubit Grover search outcome: solution state $|101\rangle$. (1) Approach 1, (2) Approach 2, (3) IBM Q Simulator} \label{grsrch}
\end{minipage}\hfill
\begin{minipage}[b]{.45\textwidth}
\begin{adjustbox}{width=\linewidth}
\begin{tikzpicture}
\begin{axis}[
ybar,
ylabel={Probabilities (\%)},
xlabel={State},
enlargelimits=0.1,
symbolic x coords={000,001,010,011,100,101,110,111},
xtick=data,
nodes near coords,
nodes near coords align={vertical},
]
\addplot coordinates { (000, .781) (001, .781) (010, .781) (011, .781) (100, .781) (101, 94.531) (110, .781) (111, .781) };
\end{axis}
\end{tikzpicture}
\end{adjustbox}
\end{minipage}\hfill
\begin{minipage}[b]{1.0\textwidth}
\centering
\begin{adjustbox}{width=\linewidth}
\includegraphics[scale=.10]{gsearch_circuit_3qbits_ibmq.png}
\end{adjustbox}
\end{minipage}\hfill
\begin{minipage}[b]{1.0\textwidth}
\centering
\begin{adjustbox}{width=\linewidth}
\includegraphics[scale=.10]{gsearch_result_3qbits_ibmq.png}
\end{adjustbox}
\end{minipage}\hfill
\end{figure}
\newpage
\subsection{Simulator Performance}
The simulations were performed on a laptop with an Intel(R) Core(TM) i3-5005U CPU @ 2.00GHz and 8GB RAM. The program was compiled with the $-O4$ compiler option and executed on a single CPU/thread. We obtained the performance baseline with the conventional simulation method described in the background section, which is found to be in line with existing simulators. This, along with the outcome observed from the proposed simulation approach, is captured in Table \ref{tab:perf}.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
Grover Search & \# qubits & minutes:seconds (approx)\\
\hline
\multirow{4}{*}{Conventional Simulator} & 10 & 0:0.010 \\
\cline{2-3}
& 16 & 0:3.768 \\
\cline{2-3}
& 18 & 0:37.708 \\
\cline{2-3}
& 20 & 5:57.029 \\
\cline{2-3}
\hline
\multirow{4}{*}{Proposed Simulator} & 10 & 0:0.025\\
\cline{2-3}
& 16 & 0:0.349 \\
\cline{2-3}
& 18 & 0:2.579 \\
\cline{2-3}
& 20 & 0:20.005 \\
\cline{2-3}
\hline
\end{tabular}
\end{center}
\caption{Running time comparison}
\label{tab:perf}
\end{table}
\newpage
\section{Discussion}
We have shown that the simulation results from the proposed method indicate a significant performance improvement compared to conventional quantum simulators. Though the implementation approach is classical, it is mathematically equivalent to the conventional method. The performance obtained can be further improved many-fold on high-performance classical computers by using the full potential of multi-core CPUs and distributed architectures.
\section{Introduction}
FPGAs are being introduced into cloud and data center platforms for the increased performance, computation, and parallelism benefits they offer over existing accelerators such as GPUs. Technology has increased the demand for high-speed cloud computation over the last few years, and commercial and public cloud providers have begun using FPGAs in their clouds and data centers so that tenants can customize their hardware accelerators in the cloud. The integration of FPGAs in the cloud was initiated after Microsoft published its research on Catapult in 2014 \cite{Catapult}. Since then, it has become a niche technology for cloud service platforms, and major cloud providers, e.g., Amazon \cite{amazon}, Alibaba \cite{alibaba}, Baidu \cite{baidu}, etc., have integrated FPGAs into their platforms. For computationally intensive loads like machine learning, digital image processing, extensive data analytics, genomics, etc., users can exploit FPGA acceleration in cloud platforms.
Three concrete examples illustrate the unique advantages of FPGAs over traditional accelerators. 1. The Microsoft Bing search engine experienced a 50 percent increase in throughput, or a 25 percent reduction in latency, by using FPGAs in its data centers \cite{Catapult}. 2. Using the Amazon AWS FPGA F1 instance, the Edico Genome project \cite{example_gnome} achieved over a tenfold speed and performance increase for analyzing genome sequences. 3. The introduction of Xilinx Versal FPGAs for real-time video compression and encoding in the cloud has significantly reduced operating costs by reducing the encoding bitrate by 20\% \cite{example_ai}.
Despite these significant performance improvements, multi-tenant FPGA platforms raise multiple security concerns. Cloud FPGAs allow tenants to run their custom hardware designs on the cloud FPGA fabric, which, unlike CPUs and GPUs, potentially exposes multiple security risks to adversaries. Multi-tenancy in cloud FPGAs can create unique hardware-related security risks such as side-channel attacks \cite{sidechannel1}, where sensitive information of the hardware surface is leaked or transferred by invasive/non-invasive probing, or covert channel creation between the tenants' fabrics, where attackers can create a hidden channel layer between each other to transmit confidential information \cite{covert1}.
Malicious adversaries can also launch Denial of Service (DoS) attacks in any layer of the cloud system, including the user or infrastructure level \cite{covert1}. Malicious bitstream upload into the FPGA fabric is one of the major attacks beyond traditional cloud FPGA attacks, and it can lead to shutdown or faulty results \cite{bitstream1}. Because hardware fabric intervention and manipulation can cause short-circuit faults, performance degradation, or shutdown, major public cloud providers, e.g., Amazon, only offer single-bitstream deployment for each FPGA board, whereas in academia multi-tenancy is under active research. In the multi-tenant cloud FPGA case the security risk is more severe and intensive, as the same FPGA board fabric is shared among different tenants, which exposes the hardware surface more openly. In many reconfigurable SoC sandboxing architectures, e.g., Xilinx Zynq-7000 SoC and UltraScale+ \cite{xilinx_trustzone} with ARM TrustZone, the secure key is stored in battery-backed RAM (BBRAM) or eFuse. However, these methods have the following disadvantages:
1) They still need a secure random key generation method, such as a random number generator (RNG), as a source of the root of trust. 2) eFuse is a fixed, non-updatable memory. 3) A physical battery is required for the BBRAM method. 4) Storing secret keys in the non-volatile memory of cloud FPGA boards is impracticable, as the boards are rented for a specific period and retrieving the keys from the NVM would be impossible after the tenant lease expires. In this study, a method for creating identity tokens in a multi-tenant cloud FPGA platform without requiring non-volatile memory (NVM) is proposed. To address the aforementioned issues with the multi-tenant cloud platform, we present an effective and trusted solution called \textbf{TrustToken}. Without employing non-volatile memory or a secure boot mechanism, we create identity tokens at runtime using an asymmetric cryptographic method. We are motivated and inspired by the Google Chromium sandboxing \cite{google_sandbox} concept to create a secure execution environment in the background of a multi-tenant platform by allocating tokens to each IP core. The \textbf{TrustToken} controller uses a hybrid Physical Unclonable Function (PUF) IP block to create distinct token keys. These token keys serve the untrusted IP core as an authentication ID card and must be presented with each access request for a data transaction. In conclusion, we state the key contributions of our protocol framework:
\begin{comment}
\begin{figure}[h]
\centerline{\includegraphics[width=7cm]{wrapper.pdf}}
\caption{\textbf{Proposed TrustToken architecture} }
\label{trust_wrapper}
\end{figure}
\end{comment}
\begin{enumerate}[leftmargin=*]
\item Provides IP isolation by restricting IPs to only allowable interactions.
\item Enforces a root-of-trust-based runtime asymmetric authorization mechanism using a lightweight hybrid ring oscillator PUF. This approach does not require any non-volatile memory such as eFuse or BBRAM.
\item Efficiently implements secure isolation by assigning a secure token ID to every non-trusted IP core.
\end{enumerate}
\begin{comment}
We implemented and evaluated the performance
of the proposed \textbf{TrustToken} on a Xilinx Alveo u50 accelerator board. We also present a thorough evaluation of our architecture, a quantitive analysis of the generated random keys, and the performance of the root of the trust. We didn't consider any multi-tenant hardware-level attacks which fall outside of the protection capabilities of the TrustZone, such as Side-Channel Attacks, Fault Injection, IP Privacy, Cloning, and Probing.
\end{comment}
\section{Related Work }
In the context of the multi-tenant FPGA platform, no prior research was found on unauthorized access and untrusted IP security, so we frame the discussion in terms of reconfigurable SoC devices. In \cite{physicalisolation_huffmire}, Huffmire proposed the first isolation mechanism for the SoC platform. This isolation mechanism, named ``moats and drawbridges,'' is configured by creating a fence around the untrusted IP block using a block of wires (moats); a ``drawbridge'' tunnel is then created to communicate from the fenced region.
To prevent malicious attacks by hardware Trojans, Hategekimana et al. proposed an isolation mechanism within a hardware sandbox, but the drawback of this method is that it only blocks specific violations. Hategekimana et al. \cite{isolation_7} presented a protocol to build secure SoC kernels of FPGA accelerators in heterogeneous SoC systems enforcing the hardware sandbox concept. The main drawbacks are its increased overhead and latency, which make it unfit for real-world scenarios. Moreover, these proposed methods do not provide any protection outside of predefined sandboxing conditions, making them ineffective at runtime.
Using mobile SRAM PUFs, Zhao et al. \cite{Zhao_trustzone_sram} propose a prototype that extends ARM TrustZone technology to establish root-of-trust-based authorization (secure storage, randomness, and secure boot). Among its disadvantages, SRAM is a poor, unreliable PUF solution and requires additional error correction code (ECC) algorithms applied with a helper-data mechanism, increasing the post-processing cost. Using separate Hash, RSA, and AES modules, Zhao et al. proposed a two-factor authentication protocol in \cite{Zhao_trustzone_token}. This work had poor implementation latency and was not compatible with real-world SoC IP security measures. \cite{basak_2017} describes a framework for wrapping third-party IP cores in a security wrapper; within the security wrapper, security policies were added to prevent malicious activities in the system. As a result of its high overhead and latency, this work is primarily intended for verification and debugging purposes and is not suitable for runtime Trojan mitigation or preventing software-level attacks.
Shared peripherals such as Ethernet, DRAM, UART, etc. in the ARM TrustZone architecture are susceptible to row-hammer and denial-of-service attacks \cite{pinto_arm_2019}.
Weak and inefficient authentication mechanisms are the primary security concern of ARM TrustZone technology. Several research works have reported unauthorized kernel-level privilege gains on ARM TrustZone platforms from normal-world environments \cite{pinto_arm_2019}. Moreover, a trusted kernel region can have several vulnerabilities which can damage the whole TEE \cite{pinto_arm_2019}. Benhani et al. \cite{benhani_trustzone} demonstrated several attacks on TrustZone launched from simple CAD commands with a few lines of code.
\section{Background}
\subsection{Multi-tenant Cloud FPGA}
FPGAs have mostly been used in the verification phase of ASIC designs over the previous ten years, where the ASIC design was implemented for validation and verification before it was actually produced, along with some other applications in specialized markets and research programs. However, FPGAs are gaining popularity as an alternative to CPUs and GPUs due to their high-performance processing and parallelism. FPGA boards are available on the market today either as discrete devices connected through PCIe ports or as part of the same System-on-Chip (SoC). The integration of FPGAs into cloud computing platforms to enable customers to create their own hardware accelerators is one of the most recent trends. There are typically four basic methods for deploying FPGA boards in cloud data centers:
1. Coprocessor (the FPGA is coupled with the CPU by PCIe cards);
2. Distinct (the FPGA is deployed as a standalone component);
3. Bump-in-the-wire (the FPGA is positioned between the NIC and the internet); and
4. System-on-chip (the FPGA is fabricated on the same chip die as the CPU).
A cloud FPGA design known as multi-tenancy rents out the FPGA fabric to many users or tenants within the same time period; it integrates the concept of spatial sharing of the FPGA silicon among several tenants.
\subsection{Physical Unclonable Function}
Physical Unclonable Functions use the manufacturing variation of silicon nanocircuits to produce distinctive and unique keys \cite{kawser_puf}. A PUF can be used for a variety of cryptographic tasks, including authorization, random number generation, and authentication. PUF theory states that even if two or more devices are identical in design, manufacturing variance will cause them to have distinct electrical properties. This variation is unpredictable and cannot be estimated through observation, neither optical nor SEM. A PUF can be considered a black box, where an input challenge pair generates an output response pair. Due to manufacturing variation, the generated output response should be unique and can be used as a unique ID or authentication key. The most common existing PUF designs are the Arbiter PUF, Ring Oscillator PUF, XOR PUF, SRAM PUF, etc. Three indicators are most commonly employed to assess the performance of PUF-generated keys: uniqueness, randomness, and bit error rate.
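As a hedged illustration of the challenge--response principle (a generic ring-oscillator comparison, not the exact hybrid design used later in this paper), each response bit can be derived by comparing hardware cycle counts of a challenge-selected oscillator pair:
\begin{verbatim}
#include <stdint.h>

/* Illustrative RO-PUF readout: counts[] holds per-oscillator cycle
 * counts sampled from hardware counters; each response bit is 1 iff
 * the first oscillator of the selected pair counts faster. */
uint64_t ro_puf_response(const uint32_t *counts, int n_osc,
                         const uint8_t *challenge, int n_bits)
{
    uint64_t resp = 0;                    /* n_bits <= 64 here */
    for (int b = 0; b < n_bits; b++) {
        /* hypothetical selection rule: challenge bytes pick the pair */
        int i = challenge[2 * b]     % n_osc;
        int j = challenge[2 * b + 1] % n_osc;
        if (i == j) j = (j + 1) % n_osc;  /* avoid self-comparison */
        resp |= (uint64_t)(counts[i] > counts[j]) << b;
    }
    return resp;
}
\end{verbatim}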
\vspace{-2.5 mm }
\subsection{ARM TRUSTZONE}
ARM TrustZone technology refers to a secure execution environment (SEE) that isolates trusted and untrusted software and hardware \cite{trustzone_white}. It also goes by the name Trusted Execution Environment (TEE), and it contains a monitor that manages how these two different worlds communicate with one another. TrustZone is an embedded security technology that provides two virtual processors, for the trusted and untrusted worlds, on the same physical core. A primary flaw of this architecture is that peripherals like Ethernet, DRAM, UART, etc. are shared. Combining a few IP blocks, ARM TrustZone enables the division of groups of I/O peripherals, processors, and memory into two distinct worlds. Two NS-bit registers on the ARM TrustZone platform are dedicated to implementing the isolation of a software process \cite{trustzone_white}.
\begin{comment}
\begin{figure}[h]
\centerline{\includegraphics[width=9cm]{trustzone.pdf}}
\caption{The architechture of the ARM Trustzone \cite{trustzone_white}}
\label{trustzone}
\end{figure}
\end{comment}
\subsection{Hardware Trojan and Design For Trust}
According to \cite{trojan_2}, a hardware Trojan (HT) is a malicious, deliberate alteration of a system circuit that causes a departure from intended behavior when the circuit is deployed. Because it can leak private information or alter a circuit's specifications during operation, an HT poses a serious threat to SoC design, and by degrading total system functionality an HT can cause critical failures. It is exceedingly challenging to detect an HT's negative effects during the verification stage because it is frequently deployed in stealth mode and only activated under unusual circumstances.
\begin{comment}
Trojan activation or trigger is an optional or hidden part that a malicious attacker inserts. It observes the various data events in the circuit and activates malicious attacks in a rare node condition called the payload condition. As a result, the payload is undetected almost the entire period, and the whole IC acts as a Trojan-free circuit while running. Trigger conditions sometimes occur after repeated sequential elements such as rare-event triggered counters or after n-bit counting of a sequential state counter.
\subsection{PUF as Root of Trust}
Computer chips play a pivotal role in the makeup of smart system designs, be it in smart cars, robotics, AI systems etc. As such the integrity (security) of system designs can not be completely guaranteed by default since these chips can be tampered with maliciously. Along side encryption techniques, Physical Unclonable Function (PUF), is provisioned for system design security\cite{}.
PUF as Root of Trust is PUF-based security technique for chips, where random numbes generated by the PUF acts as chip fingerprint and unique identification (UI) code\cite{}.This technology is can be incorporated as a built-in hardware root of trust (RoT) simultaneously with other security techniques. Implementing this technology can help secure both data (for privacy) and intellectual property (chip know-how) of a company thereby reducing the risk of illegal copying and enormous commercial losses\cite{}.
\end{comment}
\section{Threat Model and System Properties}
\label{sec:threat}
\noindent
Our threat model comprises two distinct scenarios: hardware Trojans and illegal software access. In the first threat model, every IP is viewed as untrusted and capable of having dangerous Trojan components concealed inside it, which can act under unusual circumstances. We assume that they operate only in run-time environment conditions and are secretly executed from the IP's internal architecture. The SoC IP integrator is regarded as trusted. In this case, \textbf{TrustToken} can offer defense against unwanted data access, access-control violations, alterations, and the leakage of any sensitive data from the IP core to the outside world. In the second scenario, we consider a malevolent attacker who can steal, modify, or leak sensitive information by obtaining unauthorized software access from the embedded SoC world.
As an illustration, Figure \ref{fig:illegal_access} depicts an instance of malevolent unauthorized access. In this diagram, two different CPUs that are part of the same SoC system run four software-level applications.
In the hardware-level design, four corresponding IPs are added to the tenant fabric. These IPs are accessible from the software side and are identified by the same color as the related application. Using the proposed architecture model, an access request made by software Application 3 to IP Core 4 can be marked as unauthorized and isolated.
\begin{figure}[h]
\centerline{\includegraphics[width=6cm]{illegal_access.pdf}}
\caption{Illegal software access request by Application 3 running on CPU}
\label{fig:illegal_access}
\end{figure}
Physical attacks carried out with physical apparatus fall outside the scope of this paper and were not taken into account. Hardware attacks such as side-channel, probing, snooping, timing, denial-of-service, fault injection, etc. were not included in the attack scenarios. To summarize, we have considered the following threat models when describing our architecture:
\begin{enumerate}[leftmargin=*]
\item Any malicious HT attempting to execute in runtime environments while hiding inside an IP core. We presume that a concealed HT can evade detection by current CAD tools and remain undiscovered until the payload condition is activated.
\item Any malevolent HT attempting to control access or transfer data without authorization. We take into account the possibility that attackers could purposely alter a computing result by overwriting the data on a particular communication channel. We also consider that a malicious attacker could cause data leakage by altering the IP core's operating mode.
\item Any malicious attacker trying to access other programs' sensitive data without authorization or leak it from the CPU core.
\end{enumerate}
\begin{comment}
\subsection{System Properties}
In our proposed design, we have achieved these desired system properties :
\begin{enumerate}[leftmargin=*]
\item \textbf{Trusted.} The secure key generation for the root of trust, i.e. TrustTokens, is guaranteed to be protected against any physical attacks as they exploit silicon manufacturing variation and cannot be predicted by any malicious attacker. The root of trust based secure key generation is completely isolated from the software and non-trusted IP cores.
\item \textbf{Low-cost Solution.} Uses of PUFs eliminates the requirement of hardware-based secure storage (e.g., non-volatile memory) and random number generation IP cores, which could potentially reduce production cost and could be implemented with low resource overhead.
\item \textbf{Flexible.} The proposed architecture includes token-based flexible Trust wrappers, which can easily adopt different IP cores and can be extended depending on tenant requirements.
\end{enumerate}
\end{comment}
\subsection{Design assumptions}
While developing our proposed solution, we have taken the following key points into consideration.
\begin{enumerate}[leftmargin=*]
\item \textbf{Multi-tenancy.} Our proposed protocol targets the multi-tenant cloud FPGA platform, and we evaluate its implementation on this platform. We assume that our proposed protocol is designed to operate in a runtime environment.
\item \textbf{Adaptability.} Although this article focuses on building token-based security features for non-trusted IPs in the multi-tenant cloud FPGA platform, the protocol can be easily adapted to a purely reconfigurable Programmable Logic (PL) fabric-based system without the use of a processor system.
\item \textbf{Bus Specification.} In developing our protocol, we have considered all the interfaces established by the AMBA bus specifications. We expect that our security protocol can adopt all necessary standard AMBA interfaces like AXI, AHB, APB, etc. We have chosen the APB interface for our implementation and added the necessary signals to build our security framework.
\item \textbf{Runtime Secure Key Generation.} Authorization tokens are generated at runtime by exploiting process variation and are placed in block RAM inside the TrustToken controller.
\end{enumerate}
\noindent
\section{Proposed Architecture}
\begin{figure*}[h!]
\centerline{\includegraphics[width=13cm]{main2.pdf}}
\caption{Overview of the proposed TrustToken architecture framework, consisting of the TrustToken Controller, TrustWrapper and TokenGenerator}
\label{fig:architechture}
\end{figure*}
With the help of TrustToken, a SoC owner can offer safe and adaptable IP core communication without the need for any additional secure storage services or systems.
The detailed architecture of our proposed solution, shown in Figure \ref{fig:architechture}, consists of three parts: the TrustToken Controller, the TrustWrappers, and the TokenGenerator.
\begin{comment}
The primary task of the TrustToken controller block is to provide the foundation for building a root of a trust-based secure isolation system: primary tokens generated from the PUFs in a secure boot session and maintaining the integrity of non-trusted IPs with a runtime checking mechanism. TrustWrapper is an AMBA specified extended APB bus wrapper which carries the mandatory TrustToken and other necessary important signals.
This controller block also provides the secure boot session for the PUF-generated keys, important for software-level OS and secure services. TrustToken controller block also distributes the tokens to the designated IPs cores. To implement the secure enclave isolation, TrustWrapper enforces token-based data integrity and confidentiality check-in each communication transaction initiated by an non-trusted IP core. TrustWrapper is an AMBA specified extended APB bus wrapper which carries the mandatory TrustToken and other necessary important signals. The signals are specified and discussed deliberately in Section \ref{sub:isolation}. TrustWrapper should instantiate all non-trusted IP cores in the design phase, and the IP integrator should specify the trust integrity property in the hardware design. TokenGenerator block consists of a custom hybrid ring oscillator PUF design and can generate secret and unique identification tokens by exploiting manufacturing variations of the SoC hardware. Generated tokens are provided to the TrustToken Controller block for future security measures.
\end{comment}
\paragraph{\textbf{TrustToken Controller.}}
A separate centralized IP called the \textbf{TrustToken} controller is responsible for creating unique tokens/IDs for the IPs and upholding the security rules in the connected environment. To enable the validity check, the IP integrator must set the value of the token parameter designated \textbf{\textit{ar\_integrity}} (Fig. \ref{data_bits}). When this value is set to LOW, the isolation feature is disabled. When HIGH, it enforces the IP's isolation mechanism and, following successful authorization, executes the IP in a non-trusted zone.
The Central TrustToken Controller receives the keys once they have been generated by the PUF module and uses them to assign token IDs. As the central security command center of the entire SoC system, the Central TrustToken Controller is in charge of distributing all token IDs provided by the integrated PUF module. The \textbf{TrustToken} controller verifies the received token ID against the list of securely stored tokens whenever any IP requests READ/WRITE access. After successful authorization it immediately enables the data channel for communication; otherwise it immediately disables it.
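The following is a minimal software-level sketch of the check the controller performs on each access request. The table layout, field widths, and names are our illustrative assumptions; the actual controller realizes this logic in fabric.
\begin{verbatim}
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define MAX_IPS     16
#define TOKEN_BYTES 32    /* 256-bit PUF-derived token */

typedef struct {
    uint8_t id;                    /* assigned IP ID       */
    uint8_t token[TOKEN_BYTES];    /* PUF-derived token ID */
    bool    integrity_high;        /* ar_integrity level   */
} ip_entry;

static ip_entry table[MAX_IPS];    /* filled from the PUF keys */

/* Grant a READ/WRITE request only if the presented (id, token)
 * pair matches the stored credentials of an INTEGRITY-HIGH IP. */
bool authorize(uint8_t id, const uint8_t token[TOKEN_BYTES])
{
    for (int i = 0; i < MAX_IPS; i++) {
        if (table[i].id != id)
            continue;
        if (!table[i].integrity_high)
            return true;           /* isolation disabled */
        return memcmp(table[i].token, token, TOKEN_BYTES) == 0;
    }
    return false;                  /* unknown IP: deny, disable channel */
}
\end{verbatim}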
\begin{comment}
\begin{figure}[h!]
\centerline{\includegraphics[width=8cm]{main1.pdf}}
\caption{Central TrustToken Controller connected with TrustWrapper and TokenGenerator. }
\label{fig:architechture}
\end{figure}
\end{comment}
\paragraph{\textbf{Trust Wrapper.}}
\label{sub:isolation}
In our proposed design, every IP is protected by a security wrapper called TrustWrapper. TrustWrapper has two distinct operating interfaces: secured and non-secured. The bus signals ID and Token are added to every non-trusted IP core that is marked as non-secured. Instead of providing a register-level isolation method or a separate bus protocol for secure isolation, we rely on adding additional bus signals to the existing AMBA bus protocol specifications. Adding a separate bus protocol for isolation might require changing the interconnect bridge mechanism for security-check operations, which could introduce new vulnerabilities. Additionally, to convey IP ID and Token information uniformly and uniquely, such a bus protocol would need to handle all conceivable bus protocol parameters, such as bandwidth, channel length, burst size, streaming techniques, etc. The Central TrustToken Controller issues an authorization request for each data transaction started by an untrusted IP core. The controller block receives valid and current security data (IDs and Tokens) from the non-trusted IPs via the security wrapper.
\begin{figure}[h]
\centerline{\includegraphics[width=7cm]{data_bits2.pdf}}
\caption{TrustWrapper data ports: Proposed TrustToken signals with their relative port width.}
\label{data_bits}
\end{figure}
\begin{comment}
\textbf{TrustToken} architecture provides a secure isolation depending on the value of INTEGRITY LEVEL indicated in the TrustWrapper. Figure \ref{data_bits} shows the signals specifications with their relative data ports width. Signal \textbf{\textit{ar\_integrity}} is used to alter the INTEGRITY LEVEL state of non-trusted IPs. While altering the INTEGRITY LEVEL to the HIGH state, the \textbf{TrustToken} controller will enforce the isolation mechanism with the connected non-trusted IP. One of the major contribution of proposed protocol is that even altering the INTEGRITY LEVEL will require valid authorization.
\end{comment}
\paragraph{\textbf{Token Generator. }}
\noindent
The enhanced Ring Oscillator-based PUF, which is more stable than the original Ring Oscillator PUF, is implemented due to its low overhead and latency. Compared to the SRAM PUF, Arbiter PUF, TRNG, or other crypto cores, the Ring Oscillator-based PUF shows encouraging results for latency and resource consumption.
Our custom Ring Oscillator-based PUF design can produce keys of 256-bit width. With its acknowledged uniqueness and randomness, it satisfies our requirement for heterogeneous SoC security. One of the fundamental studies on PUFs \cite{Maes2010PhysicallyDirections} characterizes a strong PUF as having the following security qualities: 1. the PUF circuit cannot be physically duplicated; and 2. it supports a large number of Challenge-Response Pairs (CRPs), preventing an adversary from launching a brute-force attack in a reasonable amount of time. According to this definition, the proposed design qualifies as a strong PUF and is a good choice for the proposed SoC security.
\begin{comment}
\section{Security Rules}
\label{sec:proposed}
In this section, we describe the formal specifications of the security rules for the \textbf{TrustToken} framework.
The security formalism defines the security elements and access control primitives that are implemented in the system. Both hardware and software level components are integrated in the security primitives because the software processes offload their to hardware IP cores. The security tuple $\mathbb{S}$ is characterized as follows:
\begin{equation*}
\mathbb{S} := \{U, P, O, T, I, A, D, M\}
\end{equation*}
\begin{itemize} [leftmargin=*]
\item $U$ = $\{u_1, u_2, u_3, .... ,u_n\}$ is the set of users in a system.
\item $P$ = $\{P_1, P_2, P_3, .... ,P_n\}$ is the set of process sets where each user has its corresponding process set $P_i$ = $\{p_{i1}, p_{i2}, p_{i3}, .... ,p_{im}\}$
\item $O$ = $\{o_1, o_2, o_3, .... ,o_k\}$ is the set of objects. In our proposed framework, objects correspond to various types of non-trusted IP cores.
\item $T$ = $\{T_1, T_2, T_3, .... ,T_n\}$ is the set of secret Tokens.
\item $I$= $\{I_1, I_2, I_3, .... ,I_n\}$ is the set of assigned IDs to each non-trusted IP core.
\item $A$ = $\{HIGH,LOW\}$ is the set of integrity access attributes. Here, $HIGH$ is the HIGH state level of integrity, $LOW$ is LOW state level of integrity.
\item $D$ = $\{yes,no\}$ is the set of decisions.
\item $M$ = $\{M_1, M_2, M_3, .... ,M_n\}$ is the set of access matrix. Each user has its corresponding access matrix. Each matrix has $m\times k$ elements where each element is a 3-bit access attribute, $a = a_2a_1a_0$ where $a_2 \rightarrow r, a_1 \rightarrow w, a_0 \rightarrow e$.
\end{itemize}
As most of the modern OS system allows us to create multiple user accounts in a single CPU , we include the set of users in the security tuple. Each user can execute multiple processes and we have included one process under each user. The integrity access attributes include HIGH and LOW states. To ensure the security of the system, we have defined and established some security rules:
\noindent
\textbf{Rule 1.} For each $u$ $\in$ $U$, there is a function $F_u$$\colon$$P$$\rightarrow$$M$ which must be a one to one function.
Rule 1 ensures secure isolation of hardware access requests as a process under one user can not gain any unauthorized access of other user.
\vspace{-0.5mm}
\textbf{Rule 2.} An access request is a 4-tuple $\tau := (u, p, o, t, i, a)$ where $u$ $\in$ $U$, $p$ $\in$ $P_i$, $o$ $\in$ $O$, $t_i$ $\in$ $T$, $i_i$ $\in$ $I$ and $a_i$ $\in$ $A$.
Rule 2 defines the access request where a process under a user account requests for a data transaction from a hardware IP core.\\
\noindent
\textbf{Rule 3.} Confidentiality Preserving Rule : If a process $p$ $\in$ $P$ has an integrity attribute, $i$ over an object $o$ $\in$ $O$ and the decision is $d$ $\in$ $D$, the confidentiality is preserved if $a_2$ = $r$ or $a_0$ = $e$ or both.
\vspace{-0.5mm}
\noindent
\textbf{Rule 4.} Integrity Preserving Rule : If a process $p$ $\in$ $P$ has an access attribute $a$ over an object $o$ $\in$ $O$ and the decision is $d$ $\in$ $D$, the integrity is preserved if $a_1$ = $w$ or $a_0$ = $e$ or both.
\noindent
\textbf{Rule 5.} The access request of a process $p$ $\in$ $P$ over an object $o$ $\in$ $O$ is granted if the decision is $d$ $\in$ $D$ and $d$ = $yes$.
\noindent
\textbf{Rule 6.} Only the Central Trust Controller or an IP integrator in design phase has the access to modify the access matrix $M_i$ $\in$ $M$.
\vspace{-0.5mm}
\noindent
\begin{comment}
\vspace{1.5mm}
\noindent
\textbf{Rule 6.} Only the Central Trust Controller or an IP integrator in design phase has the access to modify the access matrix $M_i$ $\in$ $M$.
An unauthorized modification of access permission is prevented by rule 6, as only the user can change the entries of its own access matrix.
\vspace{1.5mm}
\noindent
\textbf{Theorem 1.} \textit{An access request and its corresponding permission is secured if it obeys the rule 2 and rule 5.}
\begin{proof}
Let, an access $\tau_1$ to an accelerator is not a 4-tuple, such as $\tau_1 := (u_1, o_1, a_1)$ or $\tau_1 := (p_1, o_1, a_1)$. Besides, $\tau_2$ is a 4-tuple, $\tau_2 := (u_2, p_2, o_2, a_2)$. As for $\tau_1$, either user id or process id of the request is unknown, the request can not be verified whereas the request $\tau_2$ can be verified. So, an access request must obey the rule 2 to be a complete request. To grant a permission, if the request is complete, the decision $d$ is determined by accessing the element $a$ from the corresponding access matrix $M$. Hence, an access request is securely processed by the rule 2 and rule 5.
\end{proof}
\noindent
\textbf{Theorem 2.} \textit{The confidentiality and integrity of the system is preserved if all access requests obey the rule 3 and rule 4.}
\begin{proof}
Let, two access attributes are $a'=a'_2a'_1a'_0$ and $a''=a''_2a''_1a''_0$. If $a'_2 = w$ and $a'_1 = r$, the confidentiality and integrity of the system may be compromised for the access $a'$ as the read and write access is altered. Besides, if $a''_2 = r$ and $a'_1 = w$, the confidentiality and integrity of the system is not compromised for the access $a''$, as the read and write access is same as defined. Hence, the rule 3 and rule 4 preserves confidentiality and integrity of the system.
\end{proof}
\noindent
\textbf{Theorem 3.} \textit{A system is secured if and only if it's all accesses to objects are secured, and the system obeys the rule 6.}
\begin{proof}
The accesses to objects in a system are secured by the rule 1-5 and theorem 1 and theorem 2. Rule 6 defines the authority to modify the access matrix entries. Let, $u_1, u_2 \in U$ are two users. If $u_1$ tries to modify $M_2$ and $u_2$ tries to modify $M_1$, there is a possibility of security breach in the system. Therefore, rule 6 ensures the prevention of unauthorized modification in $M$ and maintain security.
\end{proof}
\begin{algorithm}[t]
\caption{Algorithm of Software IP Management Module}
\label{alg:token_algo}
\algsetup{linenosize=\scriptsize}
\small
\begin{algorithmic}[1]
\REQUIRE $O_{id}$: Set of Object IDs, $P_i$: Set of processes, $SC$: Set of Security Contexts of processes, $\tau$: an access request tuple, $D_{in}$: Original Input Data, $b$: Signal from HIMM cache miss, $M_i$: Access Matrix
\ENSURE $C_{id}$: Set of Context ID for processes, $D_{out}$: Data out
\STATE $C_{id}\gets \phi$
\FORALL {$p_{ij} \in P_i$}
\IF{$SC_j(p_{ij})$ is valid}
\STATE $c_j \gets f(SC_j)$ /* Context ID generation function*/
\STATE $C_{id} \gets C_{id}\cup \{c_j\}$
\ENDIF
\ENDFOR
\FORALL {$\tau_i \in \tau$}
\STATE $a_i \gets M(p_{ij}, o_{id})$ \label{line:data_out}
\STATE $D_{out} \gets D_{in}\cup o_{id}\cup a_i$
\ENDFOR
\IF{$b = 1$}
\FOR {$p_i \in \tau_i$}
\STATE $SC_i \gets f^{-1}(c_i)$ /* Inverse function of \textit{f} */
\STATE $a_i \gets F(SC_i)$ /* Access SELinux Policy Server*/
\STATE $M(p_{ij}, o_{id}) \gets a_i$
\STATE Go to line~\ref{line:data_out}
\ENDFOR
\ENDIF
\RETURN $D_{out}, C_{id}$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[t]
\caption{Algorithm of Trust Token Module}
\label{alg:SIMM_algo}
\algsetup{linenosize=\scriptsize}
\small
\begin{algorithmic}[1]
\REQUIRE $O_{id}$: Set of IP IDs, $P_i$: Set of processes, $SC$: Set of TrustToken and ID of processes, $\tau$: an access request tuple, $D_{in}$: Original Input Data, $M_i$: Access Matrix
\ENSURE $C_{id}$: Set of Context ID for processes, $D_{out}$: Data out
\STATE $C_{id}\gets \phi$
\FORALL {$p_{ij} \in P_i$}
\IF{$SC_j(p_{ij})$ is valid}
\STATE $c_j \gets f(SC_j)$ /* TrustToken and ID generation function*/
\STATE $C_{id} \gets C_{id}\cup \{c_j\}$
\ENDIF
\ENDFOR
\FORALL {$\tau_i \in \tau$}
\STATE $a_i \gets M(p_{ij}, o_{id})$ \label{line:data_out}
\STATE $D_{out} \gets D_{in}\cup o_{id}\cup a_i$
\ENDFOR
\RETURN $D_{out}, C_{id}$
\end{algorithmic}
\end{algorithm}
\vspace{-1.5mm}
\noindent
\end{comment}
\section{Proposed Protocol Evaluation}
In this section we evaluate the robustness of the protocol under the outlined attack scenarios.
\subsection{Case 1: ID signals being compromised}
In section \ref{sec:threat} we described a potential attack scenario where a software-level attack is introduced from an arbitrary application core. By launching a transaction request from another IP core, the malicious adversary configures a secured IP core and tries to access the victim IP. However, all assigned IDs and tokens, as well as their corresponding source and destination IPs, are recorded by the Central TrustToken Controller. Since the attacker tried to gain access illegally from a different IP core, the attempt is checked against the saved credentials and blocked if they do not match.
\subsection{Case 2: Insecure access control}
In the case of Xilinx TrustZone \cite{xilinx_trustzone}, the security check is carried out at the AXI interconnect level and plays a crucial part in overall security. This poses a significant security risk because the interconnect crossbar is also in charge of determining the security state of each transaction on the associated AXI bus. By altering a few security signals, a hostile attacker wishing to breach the security layer can simply control the AXI interconnect crossbar. The proposed secure architecture fixes this flaw by imposing a strong and secure system that makes it very difficult for any access-control attack to take control of the internal signals of the Central \textbf{TrustToken} Controller. The Central \textbf{TrustToken} Controller protects itself with a PUF-based token ID key and, as a result, prevents any illegal use of this IP's access control.
\subsection{Case 3: Compromising the INTEGRITY LEVEL}
The status of the INTEGRITY LEVEL signal determines whether a non-trusted IP is connected to the Central \textbf{TrustToken} Controller for secure isolation. As indicated in the threat model section, only an IP integrator can declare the INTEGRITY STATUS at the hardware level. This provides defense against any CAD or RTL script attack by requiring proper authorization for any alteration of this signal under runtime conditions. Additionally, a malicious attacker must present the PUF-based token ID of the untrusted IP in order to change the status of the protection level. Benhani et al. \cite{benhani_trustzone} demonstrated that a malicious attacker could manipulate the ARM TrustZone AWPROT/ARPROT signals to cause a Denial of Service (DoS) disruption in the SoC. This situation is avoided by the proposed secure transition paradigm, which stipulates that a change request must also pass through an additional authorization layer.
\section{TrustToken Implementation in the Multi-tenant Cloud}
This section describes the experimental setup and overhead measurements used to implement our proposed architecture and assess the robustness of the proposed \textbf{TrustToken} framework.
The primary setup involved implementing the design and measuring the overhead and latency of data transactions.
\subsection{ Cloud FPGA Setup}
The cloud is configured on a Dell R7415 EMC server with an AMD Epyc 7251 processor clocked at 2.09GHz and 64GB of RAM. The node runs CentOS-7 with a 3.10.0 kernel. The Dell server and the Xilinx Alveo u50 board are linked by PCI Express. The Xilinx Runtime Library (XRT) was used to initiate multi-tenant domains in the FPGA and reconfigure the regions for different tenants.
\subsection{Proposed Protocol Performance}
We evaluated the performance of our proposed TrustToken protocol by implementing and synthesizing it on an Alveo u50 board. Four crypto IP cores (AES, DES, TRNG, and RSA) were wrapped with TrustWrappers for evaluation. Each TrustWrapper was assigned a HIGH integrity state for the purpose of evaluating the proposed architecture model. Additionally, we launched five distinct ARM-based programs to access the computational output of the crypto cores. In our implementation we successfully realized a trusted execution environment using the TrustToken concept and tracked the outcomes. In section \ref{sec:threat} we discussed a potential software-level attack scenario where a hostile attacker from Application 3 (mapped to the TRNG hardware IP core) tries to create an approved access path to the RSA IP core. We put this scenario into action, and the TrustToken module stopped the attack. To compare with the proposed TrustToken protocol, we also created a Xilinx TrustZone enclave with the VIVADO CAD tool, based on the work in \cite{xilinx_trustzone_cad}. We successfully launched a straightforward CAD tool attack against the Xilinx TrustZone by changing the \textbf{AWPROT} signal in a runtime scenario. The same attack attempt failed against the proposed technique, clearly demonstrating the protocol's resistance to CAD tool attacks.
\subsection{Effectiveness of the Generated Keys}
The results of the Hamming distance calculation on the PUF keys are shown in Fig. \ref{hamming}. As seen in the figure, the Hamming distance clusters between 40 and 60 percent, demonstrating the stability and efficacy of the keys and coming very close to the ideal properties of a PUF \cite{kawser_puf}. The general characterization of the PUF is compiled in Table \ref{table:PUF}. Our internal PUF architecture has 512 oscillators and is capable of producing keys that are 256 bits wide.
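For reference, the uniqueness figure in Table \ref{table:PUF} corresponds to the average pairwise Hamming distance between generated keys, expressed as a percentage of the key length; a sketch of that computation (using the GCC/Clang \texttt{\_\_builtin\_popcountll} intrinsic; names are illustrative) is:
\begin{verbatim}
#include <stdint.h>

#define KEY_WORDS 4    /* 256-bit key as four 64-bit words */

static int hamming(const uint64_t *a, const uint64_t *b)
{
    int d = 0;
    for (int w = 0; w < KEY_WORDS; w++)
        d += __builtin_popcountll(a[w] ^ b[w]);
    return d;
}

/* Mean pairwise Hamming distance in percent (ideal: 50%). */
double uniqueness(const uint64_t keys[][KEY_WORDS], int n_keys)
{
    long long sum = 0, pairs = 0;
    for (int i = 0; i < n_keys; i++)
        for (int j = i + 1; j < n_keys; j++) {
            sum += hamming(keys[i], keys[j]);
            pairs++;
        }
    return 100.0 * (double)sum / ((double)pairs * 256.0);
}
\end{verbatim}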
\begin{figure}[h]
\centerline{\includegraphics[width=5cm]{Hamming_Distance.pdf}}
\caption{Hamming Distance between the PUF keys.}
\label{hamming}
\end{figure}
\begin{table}
\caption{Characterizations of the Ring Oscillator PUF}
\label{table:PUF}
\begin{center}
\begin{tabular}{ |c|c| }
\hline
Properties & Value \\
\hline
\hline
Oscillators Numbers & 512 \\
Number of Keys & 256 \\
Single Key Length & 256 bits\\
Single Challenge Input Length & 2 bytes\\
Randomness & 46.62\% \\
Uniqueness & 48.18\%\\
Reliability & 100\% \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{comment}
\subsection{TrustToken validation overhead}
In our proposed mechanism, the isolation mechanism with TrustToken initiates a validation overhead in each data transaction request. This overhead is mainly contributed by the TrustWrapper authorization delay time. TrustWrapper and Central TrustToken Controller have a bi-directional handshake authentication mechanism and generates a latency from 1 to 2 cycle depending on the Intregrity Level of the non-trusted IP cores.
\subsection{Resource Overhead}
After successful implementation, we have included the utilization report from the VIVADO software platform in Table \ref{table:utlization}. The deployed design shows encouraging results with low resource utilization. BUFG region utilization is only rounded to 6.25 percent.
\begin{table}[ht]
\caption{Utilization Report}
\centering
\begin{tabular}{ |c | c | c |c}
\hline
Resource & Available & Utilization (\%) \\ [0.5ex]
\hline
LUT & 53200 & 618 (1.16\%) \\
FF & 106400 & 44 (0.04\%) \\
BUFG & 32 & 2 (6.25\%) \\[1ex]
\hline
\end{tabular}
\label{table:utlization}
\end{table}
\end{comment}
\begin{comment}
\begin{table}[ht]
\caption{Power Dissipation}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Type & Dynamic Power & Static Power & Total Power \\
\hline
\rule[-2ex]{0pt}{2.2ex} \textbf{TrustToken} & 1.400W & 0.137W & 1.537W \\
\hline
\end{tabular}
\label{table:power}
\end{table}
Also, in Table \ref{table:power} we have included a power dissipation summary of the whole synthesized design. In terms of power, the proposed protocol consumes significantly less power, which assures the proposed protocol design to be deployed with minimum power consumption.
\end{comment}
\section{Conclusion}
This study proposes a token-based secure SoC architecture for untrusted IPs in a multi-tenant cloud FPGA platform. Using a root-of-trust-based security module, the \textbf{TrustToken} architecture adds an extra layer of security against unauthorized access control and attacks on the multi-tenant platform. The protocol takes advantage of the reconfigurable features of the SoC-based FPGA platform and uses a bespoke Ring Oscillator-based PUF module to create keys. Our method eliminates the use of NVM for secure key storage, since such storage is impractical in the context of a multi-tenant platform.
\begin{comment}
To improve the reliability and tolerance of the PUF, an encoding scheme and "helper data" will be added with future works. Such changes will reduce the PUF response error due to changed environmental conditions, including temperature, voltage, and aging effects.
\end{comment}
\printbibliography
\end{document}
\section{Introduction}
Multitype populations are naturally modeled as measure-valued processes. In this paper we consider
a class of multilevel measure-valued processes which model ensembles of subpopulations with
mutation and selection at the subpopulation level and possible death and replacement of
subpopulations. In particular this includes mathematical models of multilevel selection which has
been the subject of considerable debate in the evolutionary biology literature. Before introducing
our multilevel models we begin with a brief review of some of this literature.
\subsection{Hierarchical population structure}
The hierarchical structure of populations plays a fundamental role in the biological and social
sciences. In evolutionary biology and ecology the hierarchy includes ecosystems, community,
species, organism, genes and in the social sciences we have cites, regions, nations, etc. These
are systems in which at each level of the hierarchy we have a collection of elements of the next
lower level in the hierarchy. The description of a unit at a given level in the hierarchy involves
the distribution of the different characteristics of the individuals at the next lower level in the
hierarchy.
\subsection{Historical remarks on hierarchy in population genetics and evolutionary biology}
Biological evolution can be viewed in terms of a hierarchy of levels of organisation going from the molecular level to the species level and social groups of members of a species. A natural question is to what extent the Darwinian mechanism of variation and selection of fitter types in a competitive environment plays a role at the various levels.
\bigskip
\noindent An early application of group selection was by Wynne-Edwards (1962) \cite{WE62}, who used it to explain adaptations and social behaviour of animals. Subsequently G.C. Williams (1966) \cite{W66} made a highly critical analysis of group selection which was very influential. John Maynard Smith (1964, 1976) (\cite{MS64}, \cite{MS76}) considered both group selection and kin selection, which was introduced by W.D. Hamilton (1964) \cite{H64}, and concluded that there may be conditions under which group selection is effective. In the subsequent decades there has been intense debate among evolutionary biologists about the extent to which evolution has been shaped by selective pressures acting at the level of groups of individuals. \bigskip
\noindent In recent years the role of multilevel selection has re-emerged in a number of contexts
including the emergence of life (Szathm\'ary and Demeter \cite{SD-87}), structural complexity
(G\"{o}rnerup and Crutchfield (2008) \cite{GP-06}),
prebiotic evolution (Hogeweg and Takeuchi \cite{HT-03}), plasmid replication in bacteria (Paulsson (2002) \cite{P-02}), evolution of cooperation
(Traulsen and Nowak \cite{TN-06}) and sociobiology (Wilson and Wilson \cite{WW}). In the study of cultural evolution Boyd and Richerson \cite{BR} suggest that
interdemic group selection can be important when there are multiple stable equilibria at the deme level and the emergence
of higher level equilibria occurs.
Moreover these ideas are relevant in the context of spatially structured populations and
evolutionary ecology (see Lion and van Baalen \cite{LVB-08}, Lion et al \cite{LJD-11}). A detailed
study of host-pathogen systems in the framework of multilevel selection was carried out by Luo
(2013) \cite{S-13}, \cite{SRMK-12} and Luo and Mattingly \cite{LM15} who demonstrated that a phase
transition between the dominance of selection at the different levels can occur as the model
parameters are varied. Multilevel selection also underlies current research in the development of
complex human societies (see e.g. Turchin et al. \cite{Tur}). Several books have been written on
the question of the levels of selection. These include Brandon and Burian \cite{BB-84}, Sober and
Wilson \cite{S-W}, Keller \cite{Kel-99} and Okasha \cite{O-06}. A number of other recent research
papers on multilevel selection are included in the References.
\bigskip
\noi We end with a quotation from Leigh \cite{L10} that provides a useful perspective on these
questions:
{\em ``These conditions (e.g. he quotes Kimura's conditions; see (\ref{kcond})) seem so wonderfully improbable
that, following Williams (1966), most biologists have focused almost exclusively on individual
selection. Improbability, however, does not mean impossibility. Group selection capable of
overwhelming selection within groups, played a crucial role in some major transitions ...''}.
\bigskip
\noi An objective of this research is to develop tools to identify conditions under which higher
level selection is relevant for the class of mathematical models we consider.
\bigskip
\subsubsection{A multideme model with two types of individuals}
In order to introduce the main ideas we briefly review a formulation of group selection given by
Aoki \cite{A-82}. This begins with a countable collection of demes where each deme is a population
of $n$ individuals which are either type A or type B. Type B individuals are altruistic and add to
the fitness of the deme. The life cycle of a deme involves four discrete events, namely,
migration, reproduction, extinction and recolonization. In the reproduction stage, within each deme
the population undergoes weighted finite population resampling in which type B has fitness $-s$
(with $s>0$). The probability that a deme suffers extinction is a monotone decreasing function of
the proportion of type B individuals it contains, that is, the fitness of the deme increases as a
function of the number of altruistic individuals it contains. In the migration stage a random
number of individuals within a deme are replaced by individuals chosen at random from the total
population pool. Aoki then obtained a recursion formula for the probability distribution of the
number of individuals of type B per deme over successive life cycles and discussed the question of
the long time behavior of this distribution, in particular whether or not the proportion of type B
individuals goes to zero.
\subsubsection{A diffusion process model of multilevel selection with two types}
The class of Wright-Fisher diffusion processes plays an important role in population genetics.
Following Aoki \cite{A-82}, an analogous extension of the Wright-Fisher process was introduced by
Kimura (1983) \cite{K-83} with alleles $A$ and $B$ distributed in an infinite number of competing
demes. It is assumed that $B$ is the altruistic allele which has a selective disadvantage $s_1$
but which is beneficial for a deme in competition with other demes,
namely, a deme having frequency $x$ for $B$ has advantage $s_2(x-\int y\nu(dy))$ where $\nu(dy)$ is the distribution of the frequency of type $B$ individuals
over the set of demes.
This leads to the integro-differential equation for the dynamics of the density $\{\wt\nu(t,x)\}_{t\geq 0}$, where $\nu(t,dx)=\wt\nu(t,x)\,dx$:
\be{}\label{Kim} \frac{\partial \wt\nu(t,x)}{\partial t}
=\frac{\gamma_1}{2}\frac{\partial^2}{\partial x^2}\left(
x(1-x)\wt\nu(t,x)\right)-\frac{\partial}{\partial x}(M(t,x)\wt\nu(t,x)) +s_2\left(x-\int
y\wt\nu(t,y)dy\right)\wt\nu(t,x)\ee where
\[ M(t,x)=m_{21}(1-x)-m_{12}x+c\left(\int y\wt\nu(t,y)dy-x\right)-s_1 x(1-x), \]
and $m_{12},m_{21}$ are the mutation rates $1\to 2$, $2\to 1$ respectively, $c$ is the rate of
migration between colonies and the resampling rate $\gamma_1$ is inversely proportional to the
effective population size at a deme. This model will be discussed in detail in subsection
\ref{sss.rand1} including Kimura's analysis, which was based on methods of ordinary differential
equations, as well as the analysis using the dual representation which will be developed in section
3. The duality method is not restricted to two-type systems (as is the case for the ode method)
and can be used to study general multitype systems.
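\medskip
\noi As a quick orientation (a formal moment calculation, not part of Kimura's original presentation), consider (\ref{Kim}) in the special case $m_{12}=m_{21}=c=0$. Multiplying by $x$ and integrating by parts (the $\gamma_1$ term contributes no boundary terms since $x(1-x)$ vanishes at $x=0,1$) gives, for the mean frequency $\bar x(t):=\int y\,\wt\nu(t,y)dy$,
\be{} \frac{d\bar x(t)}{dt}= -s_1\int y(1-y)\,\wt\nu(t,y)dy+s_2\left(\int y^2\,\wt\nu(t,y)dy-\bar x(t)^2\right),\ee
so that the frequency of the altruistic type decreases at rate $s_1$ times the mean within-deme heterozygosity and increases at rate $s_2$ times the between-deme variance. This balance between the two levels of selection recurs throughout the paper.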
\subsection{Multilevel multitype measure-valued processes}
The natural framework for multilevel multitype models with random effects at different
levels is the setting of multilevel measure-valued processes. Models of this type were first
developed for multilevel branching systems in Dawson and Hochberg \cite{DH-91} and Etheridge
\cite{ET-93}. The long-time behaviour of multilevel measure-valued processes is investigated in Wu
\cite{Wu-94}, Dawson and Wu \cite{DW-96}, and Gorostiza, Hochberg and Wakolbinger \cite{GHW-95}. In
particular two-level measure-valued processes have state spaces of the form
$\mathcal{M}(\mathcal{M}(E))$ for some Polish space $E$ where $\mathcal{M}(E)$ denotes the space of
Borel measures on $E$. In this paper we work with an analogous class of two level probability
measure-valued processes formulated in terms of a well-posed martingale problem which generalizes
the Kimura model to systems with more than two types, more complex interactions and to the
diffusion limit of systems with finitely many demes.
\subsection{Outline of the paper}
The main objectives of this paper are to formulate a general measure-valued framework for
multilevel population systems with mutation and selection and to develop the method of duality for
multilevel measure-valued stochastic processes with applications to population systems with
multilevel selection. In section 2 we introduce the class of models characterized as solutions to
a well-posed martingale problem. In Section 3 we introduce the dual processes used to establish
that the martingale problems are well-posed and to compute joint moments. These are given by the
multilevel generalization of the class of function-valued and set-valued dual processes introduced
in Dawson and Greven \cite{DG-14}. In Section 4 we consider the long-time behavior of systems with
two types and with two levels of selection. In Section 5 we introduce some more complex models of
systems with $K\geq 2$ types and with multilevel selection as well as further possible extensions
of these models and methods.
\section{Multitype-multilevel mutation-selection models}
\subsection{A two level finite population model}
We begin with a two-level finite population model given by an exchangeably interacting system of
Moran particle systems with selection at both levels. We assume that the higher level fitness of
a subpopulation can depend on the distribution of level I types within the subpopulation. In other
words, at the group level the fitness $V_2(\mu)$ of a subpopulation, described by its distribution
$\mu$ over the space of types, is the result of a network of interactions. Then the resulting
distribution of the collection of subpopulations $\{\mu_i\}_{i\in S}$ is formulated in the setting
of multilevel measure-valued processes which provide a natural setting for the study of
hierarchical systems of this type.
We begin with a simple colony containing $N_1$ individuals described by a Moran model. Each
individual
has a type in $\mathbb{I}=\{1,\dots,K\}$. We let
\bea{}
&& n_{k} :=\text{ number of individuals of type }k\in\{1,\dots,K\}\\
&&N_1 =\sum_{k=1}^{K}n_{k}
\eea and we think of the normalized vector $\frac{1}{N_1}(n_{1},\dots,n_{K})$ as an element of
$\mathcal{P}(\mathbb{I})$, the space of probability measures on $\{1,\dots,K\}$,
\[ X=\frac{1}{N_1}\sum_{k=1}^Kn_k\delta_k,\]
where the single atom measure $\delta_k$ represents an individual of type $k$.
\bigskip
The dynamics of a simple colony is given by a continuous time
Markov chain, $\{X_{t}:t\geq0\}$, with state space $\mathcal{P}(\mathbb{I})$.
The dynamics includes:
\begin{itemize}
\item Mutation: given by transition rates $\{m_{ij}\}_{i,j\in \mathbb{I}}$, that is, the rate at which an
individual of type $i$ is replaced by an individual of type $j$
\item Sampling: at rate $\frac{\gamma_1}{2}$ an individual of type $i$ is replaced by an individual of type $j$ where $j$ is chosen
from the empirical distribution $X$
\item Selection with fitness function $V_1:\mathbb{I}\to [0,1]$ and intensity $s_1$.
\end{itemize}
The resulting transitions for the probability-measure-valued process are given by
\bea{}
&&\mu \rightarrow\mu-\frac{1}{N_1}\delta_{i}+\frac{1}{N_1}\delta_{j}\text{ at rate \ }m_{ij}\mu(i)\\
&&\mu \rightarrow\mu-\frac{1}{N_1}\delta_{i}+\frac{1}{N_1}\delta_{j}\text{ at rate
}\left((N_1-1)\frac{\gamma_1}{2}+s_{1}{V_{1}(j)}\right)\mu(i)\mu(j)
\eea for $i,j\in \mathbb{I}$. Note that these assumptions result in a rate of change
${dX(t,i)}/{dt}$ due to mutation and selection of order ${1}/{N_1}$ which turns out to be the same
as the order of the sampling fluctuations when $\gamma_1>0$. This corresponds to the case of weak
selection in population genetics and the {\em diffusion limit} below will involve a time speed-up
by a factor of $N_1$. In population genetics the parameter $\gamma_1$ is viewed as {\em inverse
effective population size} (see Remark 5.4 in \cite{D10}) and is a measure of the population size
in relation to the selection intensity in the finite population model.
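\medskip
\noi To see this, note that (a routine calculation from the transition rates above) the conditional drift of $X(t,i)$ is
\be{} \frac{d}{dt}E[X(t,i)\,|\,X(t)=\mu]=\frac{1}{N_1}\left[\sum_{j=1}^K\big(m_{ji}\mu(j)-m_{ij}\mu(i)\big)+s_1\,\mu(i)\Big(V_1(i)-\sum_{k=1}^K V_1(k)\mu(k)\Big)\right],\ee
since the symmetric sampling contributions of order $N_1$ cancel; speeding up time by the factor $N_1$ therefore produces exactly the mutation and selection drift terms in the generator (\ref{G.0}) below.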
We now consider a collection of $N_2$ colonies (demes), each consisting of $N_1$ individuals with
internal dynamics within each colony given as above. In addition (as in the Aoki model) there is
an interaction between colonies via migration. To model this, individuals within a colony die and
are replaced at rate $c>0$ by a new individual with type given by that of a randomly chosen
individual in a randomly chosen colony.
The final mechanism is death (extinction) and replacement of colonies, following the same rule as
the sampling mechanism within colonies. That is, a colony dies and is replaced by a
copy of a randomly chosen colony. In addition we can include deme-level selection using a level
II fitness function $V_{2}(\mu)$ and selection intensity $s_2$. For
example, we can take a linear fitness function of the form
\[
V_{2}(\mu)=\int v_{2}(x)\mu(dx).
\]
We then consider the empirical measure
\[
\Xi_{N_2,N_1}(t):=\frac{1}{N_2}\sum_{i=1}^{N_2}\delta_{\mu_{i}(t)}\in \mathcal{P}(\mathcal{P}(\mathbb{I}))
\]
where $\mu_{i}$ denotes the state of the $i$th colony, namely,
\[ \mu_i=\frac{1}{N_1}\sum_{j=1}^{K} n_{i,j}\delta_j\]
and $n_{i,j}$ denotes the number of type $j$ individuals in the $i$th
colony.
The resulting transitions due to the level one dynamics are of the form
\[
\nu\rightarrow \nu +\frac{1}{N_2}(\delta_{\mu-\frac{\delta_i}{N_1}+\frac{\delta_j}{N_1}}-\delta_{\mu})
\]
at rate $\left(m_{ij}\mu(i)+\left((N_1-1)\frac{\gamma_1}{2}+s_{1}V_{1}(j)\right)\mu(i)\mu(j)\right)\nu(d\mu)$.
\smallskip
The resulting transitions due to level II sampling and selection for the measure-valued process are given by:
\[
\nu\rightarrow(\nu+\frac{1}{N_2}(-\delta_{\mu_1}+\delta_{\mu_2}))\text{ at rate
}\left(s_2{V_{2}(\mu_2)}+\frac{\gamma_{2}}{2}(N_2-1)\right)\nu(d\mu_1)\nu(d\mu_2)
\]
We can then consider the limiting behaviour as $N_1$ or $N_2$ go to $\infty$, or we can allow them
to go to infinity simultaneously with for example $N_2=\eta N_1$. In the following subsections we
will first consider the limit as $N_1\to\infty$ for a finite system of $N_2$ demes leading to a
system of interacting Fisher-Wright diffusions. We then consider the exchangeable system of
Fisher-Wright diffusions with additional death and random replacement of demes as described above
and then obtain a two level measure-valued diffusion process (called the two level Fleming-Viot
model) by letting $N_2\to\infty$.
\subsection{The martingale problem formulation}
The framework in which we specify the different stochastic models is the class of
probability-valued Markov processes $X(t)\in \mathcal{P}(E_1)$ where $E_1$ is a Polish space. Let
$D_{E_1}([0,\infty))$, ($C_{E_1}([0,\infty))$) denote the class of c\`{a}dl\`{a}g (resp.
continuous) functions from $[0,\infty)$ to $E_1$. We denote by $\{\mathcal{F}_t\}_{t\geq 0}$ the
natural filtration of $\sigma$-algebras on these spaces.
The probability law $P\in
\mathcal{P}(D_{E_1}([0,\infty)))$ is said to be a solution of the {\em martingale problem with
generator $(G,D(G))$}, where $G$ is a linear operator on $D(G)\subset C(E_1)$ and $D(G)$ is
measure-determining on $E_1$, if
\bean{}\label{duality1} M_F(t):=&& F(X(t))-\int_0^t GF(X(s))ds \\&&\text{ is an }\mathcal{F}_t\text{-adapted }P\text{
martingale} \text{ for all }F\in D(G).\eean The martingale problem method is used to characterize
stochastic processes of interest in many
applications. The method (which we will also use below) consists of four steps:\\
(1) to construct a sequence of approximating processes with probability laws $P_n\in$
$\mathcal{P}(D_{E_1}([0,\infty)))$ that satisfy
some simple martingale problems,\\
(2) to show that the laws of the processes are tight, that is, relatively compact in
$\mathcal{P}(D_{E_1}([0,\infty)))$,\\
(3) to show limit points of the $P_n$ satisfy the martingale problem defined by $(G,D(G))$, and
\\(4) to prove that there is a unique solution to this martingale problem thus characterizing the
limiting probability law $P$ of the process of interest. We will use this method to define the
Fleming-Viot process that models selection at two levels.
\bigskip
\noi A key tool used to establish the uniqueness of solutions is {\em duality}. This is achieved by
constructing a {\em dual process } $\mathcal{G}_t$ with state space $E_2$ and function
$F:\mathcal{P}(E_1)\times E_2 \to \mathbb{R}$ such that the functions $\{F(\cdot, g),g\in E_2\}$
are in $D(G)$ and are measure-determining on $E_1$, and the duality relation \bean{}
E_{X(0)}(F(X(t),\mathcal{G}_0))=E_{\mathcal{G}_0}(F(X(0),\mathcal{G}_t))\eean is satisfied for all
$\mathcal{G}_0\in E_2$ and all $X(0)\in \mathcal{P}(E_1)$ (where the right side denotes the
expectation with respect to the law of the $\mathcal{G}_t$ process). The class of set-valued dual
processes we use in the study of multilevel mutation selection systems is developed in detail in
Section 3. In addition to using the dual to establish that the martingale problem has a unique
solution it will be used below to compute moments, to obtain fixation probabilities and to prove
ergodicity.
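\medskip
\noi A standard illustration, included here only for orientation, is the moment duality for the neutral two-type Fleming-Viot process with resampling rate $\gamma_1$: taking $E_2=\N$, $F(\mu,n)=\langle \mu,1_B\rangle^n$ for a fixed $B\subset \mathbb{I}$, and letting $\mathcal{G}_t=n_t$ be the pure death process with transitions $n\to n-1$ at rate $\frac{\gamma_1 n(n-1)}{2}$ (the block-counting process of Kingman's coalescent), one has
\be{} E_{\mu}\left[\langle \mu_t,1_B\rangle^n\right]=E_n\left[\langle\mu,1_B\rangle^{n_t}\right].\ee
The duals constructed in Section 3 extend this idea to incorporate mutation, selection at both levels and migration.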
\medskip
\noi
For general background on the martingale problem formulation see \cite{D93}, \cite{D10} and
\cite{EK2}.
\subsection{Diffusion process limits}
In this subsection we will identify the {\em limit} of the process $\{\Xi_{N_2,N_1}(t)\}_{t\geq
0}$ as $N_1\to\infty$ for fixed $N_2<\infty$.
\subsubsection{The limiting single deme diffusion process}
\label{system1} We first consider the special case in which $N_2 =1$ and let $N_1\to\infty$.
\begin{proposition}The limit as $N_1\to\infty$ of the single deme ($N_2=1$) normalized empirical measure with
diffusion scaling (with time speed-up $t\to N_1t$) leads to a $K$-type Fleming-Viot process
(equivalently, a finite type Wright-Fisher diffusion) which is characterized as the unique solution
of the martingale problem with generator
\bea{}\label{G.0} G^{0}f(\mathbf{x})&&=\sum_{i=1}^{K}\left( \sum_{j=1}^K
(m_{ji}x_j-m_{ij}x_i)\right)\frac{\partial f(\mathbf{x})}{\partial x_i}\quad{\text{mutation}}\\&&
+s_1\,\sum_{i=1}^{K} x_i\left( V_1(i)- \suml^{K}_{k=1} V_1(k)x_k \right)\frac{\partial
f(\mathbf{x})}{\partial x_i}\quad{
\text{selection}}\nonumber\\&&+\frac{\gamma_1}{2}\sum_{i,j=1}^{K}
x_i(\delta_{ij}-x_j)\frac{\partial^2 f(\mathbf{x})}{\partial x_i\partial x_j}\quad{ \text{genetic drift}}
\nonumber\eea defined on the class $D(G^0)$ of functions $f$ with continuous
second derivatives on the simplex $\Delta_{K-1}=\{(x_1,\dots,x_K):\, x_i\geq 0,\; \sum_i x_i=1\}$.
\end{proposition}
\begin{proof} See for example \cite{D93}, Theorem 2.7.1 for the neutral case and
\cite{D93} Theorem 10.2.1 for the proof of uniqueness for the case with selection. (Also see
\cite{EK2}, Chapter 10, Theorem 1.1 for the derivation of the diffusion limit starting with a
discrete generation model.)
\end{proof}
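\begin{remark} For orientation, in the two-type case $K=2$ with $x:=x_1$ the generator (\ref{G.0}) reduces (a routine specialization) to the classical Wright-Fisher diffusion generator
\be{} G^0f(x)=\left[m_{21}(1-x)-m_{12}x+s_1\big(V_1(1)-V_1(2)\big)x(1-x)\right]f'(x)+\frac{\gamma_1}{2}x(1-x)f''(x),\ee
which, together with the migration term $c\big(\int y\,\nu(dy)-x\big)f'(x)$, matches the drift $M(t,x)$ in Kimura's equation (\ref{Kim}) when $s_1(V_1(1)-V_1(2))$ plays the role of the coefficient $-s_1$ there.
\end{remark}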
\subsection{Exchangeable system of Wright-Fisher diffusions}
\noindent We now consider a system of demes labeled by $S=\{1,2,\dots,N_2\}$ where the population
at each deme undergoes mutation and selection as in the single deme process but in addition
individuals can migrate between demes at rate $c$ and the population in a deme can become extinct
at rate $s_2$ and be replaced with population $\mu$ sampled from
the empirical distribution of deme compositions. With selection, the replacement deme type is chosen
with weights proportional to the level II fitness \[0\leq V_2(\mu)\leq 1,\quad \mu\in\mathcal{P}(\mathbb{I}).\]
\bigskip
\subsubsection{Deme level fitness functions}
In order to incorporate fitness at the deme level we must introduce an appropriate class of fitness
functions. It is natural to assume that the fitness of a deme (subpopulation) is a function of the
distribution of level I types within the deme given by $V_2(\mu)$ when the distribution of types
within the deme is $\mu\in\mathcal{P}(\mathbb{I})$. We also assume that $V_2$ is a bounded and
continuous function of $\mu$ (in the topology of weak convergence). Without loss of generality (by
the addition of a constant if needed) we can assume that $V_2(\mu)\geq 0$.
\begin{example} Consider the special case $\mathbb{I} =\{1,2\}$, and
\be{} V_2(\mu)=f(\mu(1))\geq 0.\ee Then (see e.g. Lorentz (1963) \cite{L-63} (Chapt. 1, Theorem
4)) we can uniformly approximate $V_2$ using Bernstein polynomials as follows \be{}
V_{2}(\mu)=\lim_{n\to\infty}\sum_{k=0}^{n}a_{n,k}(\mu(1))^k(\mu(2))^{n-k}\ee where the coefficients
$a_{n,k}\geq 0$.
\end{example}
In general, given a compact Polish space $E$ we consider the space $\mathcal{P}(E)$ of probability measures on
$E$ with the topology of weak convergence. We then consider the Bernstein operators
$B^K:C(\mathcal{P}(E))\to C(\mathcal{P}(E))$ (where $C(\mathcal{P}(E))$ is a normed space with the
supremum norm) defined by \be{} B^Kf(\mu)=\int\dots\int
f\left(\frac{1}{K}\sum_{i=1}^K\delta_{x_i}\right)\mu(dx_1)\dots \mu(dx_K) \ee
Then by (Dawson and G\"artner \cite{DGa} Theorem 3.9 ) for any $f\in
C(\mathcal{P}(E))$ \be{} B^Kf\to f\quad\text{in } C(\mathcal{P}(E)).\ee
This means that we can approximate any bounded continuous fitness function $V_2\in
C_+(\mathcal{P}(\mathbb{I}))$ by \be{} B^KV_2(\mu)=\int\dots\int
h(x_1,\dots,x_K)\mu(dx_1)\dots\mu(dx_K) \ee where $h$ is a bounded non-negative function on
$(\mathbb{I})^K$. This can be rewritten in the form \be{}\label{E.BA} B^KV_2(\mu)=\sum_i s_{i,K}
\int_{(\mathbb{I})^K} \prod_{j=1}^K 1_{A_{K,i,j}}d\mu^{\otimes K} \ee where for each $i$ the
$A_{K,i,j}$ are subsets of $ \mathbb{I}$. We denote the class of fitness functions of the form
(\ref{E.BA}) by $\mathcal{V}_K$ and note that we can approximate any bounded continuous fitness
function by functions in $\mathcal{V}:=\cup_K \mathcal{V}_K$.
\begin{example} Types $\mathbb{I}=\{1,2\}$. If $\mu(1) =p_1,\; \mu(2)=p_2$, then
\be{} V_2(\mu)=p_1p_2,\quad (1- V_2(\mu))=p_1^2+p_2^2+p_1p_2\ee \be{} V_2(\mu)=\mu^\otimes (C), 1-
V_2(\mu)=\mu^\otimes(C^c)\ee where $C=\{1\}\otimes\{2\}$.
\end{example}
\begin{example}
Consider the 3 type case $\mathbb{I}=\{1,2,3\}$ with fitness functions as follows:
\[ V_1(1)=s_1,\quad V_2(\mu)=s_2\mu(2)\mu(3).\]
\end{example}
\begin{example}\label{E4} Model with 3 types $\mathbb{I}=\{1,2,3\}$ and mutualistic (state-dependent) fitness.
\begin{itemize}
\item $V_1(1,\mu)= s_1\mu(2)$, $V_1(2,\mu)=s_1\mu(1)$, $V_1(3)=1/2$
\item Level II fitness is $V_2(\mu)=s_2[\frac{1}{2}\mu(3)+ 2s_1\mu(1)\mu(2)]$.
\end{itemize}
This can be analysed using the set-valued dual as indicated in Remark \ref{R5}.
\end{example}
\begin{example} $ V_2(\mu)$ is positive iff the population contains a certain set of properties
(from a finite set).
\be{} V_2(\mu)=\sum e_i\mu^\otimes(A_i)\ee \be{} 1- V_2(\mu)=\sum e_i\mu^\otimes(A^c_i)\ee where
$e_i\geq 0,\;\sum e_i=1$, $A_i\subset (\mathbb{I})^\N$.
\end{example}
\subsubsection{The limiting generator as $N_1\to\infty$ and $N_2<\infty$}
\noindent The generator for the resulting model of $N_2$ interacting demes is given as follows:
for $F\in C^2((\mathcal{P}(\mathbb{I}))^{N_2})$, with
$\mathbf{X}:=(\mathbf{x}_1,\dots,\mathbf{x}_{N_2})\in (\mathcal{P}(\mathbb{I}))^{N_2}$,
\bea{}\label{G.int} &&G^{N_2,\rm{int}} F(\mathbf{X}) \\&& = \; {{\eta}}\suml_{\xi=1}^{N_2}
G^{0}_\xi F(\mathbf{X}) \quad\quad\quad\text{ mutation-selection dynamics at each site}\nonumber
\\&&\; +
c\cdot \suml_{\xi=1}^{N_2}\left[\sum_{j=1}^{K}\left(\sum_{\xi'=1}^{N_2}\frac{1}{N_2}\,x_j(\xi')
-x_j(\xi)\right)\frac{\partial F(\mathbf{X})}{\partial x_j(\xi)}\right] \quad{\text{migration}}\nonumber\\
&& +s_2\,\sum_{\xi=1}^{N_2}\left(\frac{1}{N_2}\sum_{\xi'=1}^{N_2} V_2(\mathbf{x}(\xi))
\left[F(\Phi_{\xi\xi'}\mathbf{X})-F(\mathbf{X})\right]\right)\; { \text{deme replacement}}\nonumber\\&&
+\frac{1}{2}\gamma_2\sum_{\xi=1}^{N_2}\sum_{\xi'=1}^{N_2}[F(\Phi_{\xi\xi'}\mathbf{X})-F(\mathbf{X})]\quad
{ \text{deme resampling}}\nonumber\eea where $\Phi_{\xi\xi'}\mathbf{X} =(\mathbf{x}_1,\dots,\mathbf{x}_\xi,\dots,\mathbf{x}_\xi,\dots,\mathbf{x}_{N_2}) $ (corresponding to the replacement of
$\mathbf{x}_{\xi'}$ by $\mathbf{x}_{{\xi}}$, placed at position $\xi'$) and $\eta$ is a parameter that depends on the relation
between the natural time scales at the two levels.
\medskip
The martingale problem with generator $G^{N_2,\rm{int}}$ has a unique solution that defines a
c\`adl\`ag strong Markov process $\{{\mathbf X}^{N_2}_t\}_{t\geq 0}$ with state space
$(\mathcal{P}(\mathbb{I}))^{N_2}$. The proof follows as in the proof of Proposition 2.1, where
the dual process needed to show that the martingale problem is well posed is given in Subsection
3.1.
\subsection{Empirical measure-valued processes and the Fleming-Viot limit}
We will next consider the limit as $N_2\to\infty$ in the general case in which we can have $s_2>0$
and/or $\gamma_2
>0$. We assume that the initial state $(\mu_1(0),\dots,\mu_{N_2}(0))$ is exchangeable.
\beL{}\label{exchange} Consider the Markov process
$\mathbf{X}(t)=(\mathbf{x}_1(t),\dots,\mathbf{x}_{N_2}(t))\in (\mathcal{P}(\mathbb{I}))^{N_2}$ with
generator $G^{N_2,\rm{int}}$. Assume that the probability distribution of $\mathbf{X}(0)$ is
exchangeable (i.e. the distribution is invariant under permutations of $\{1,2,\dots,N_2\}$). Then
$(\mathbf{x}_1(t),\dots,\mathbf{x}_{N_2}(t))$ is an exchangeable system of
$\mathcal{P}(\mathbb{I})$-valued diffusions.
\end{lemma}
\begin{proof} This follows since the migration and level II selection terms in the generator are
invariant under permutation - see \cite{vail} for the general case of exchangeable diffusions.
\end{proof}
{\bigskip}
\noindent {\em The level II empirical process} is defined by
\medskip
\bea{}\label{nueq} \Xi^{N_2}_t :=\frac{1}{N_2}\sum_{j=1}^{N_2}\delta_{\mu_j}\in
\mathcal{P}(\mathcal{P}(\I)).\eea
\bigskip
\noindent Then by Lemma \ref{exchange}, $\Xi^{N_2}(t)$ is a
$\mathcal{P}(\mathcal{P}(\mathbb{I}))$-valued Markov process with generator inherited
from the interacting system. To describe this we consider the algebra of functions, $D(G^{N_2})$,
on $\mathcal{P}(\mathcal{P}(\I))$ containing functions of the form
\be{}\label{alg} H(\nu)=\prod_{k=1}^K\left[\int h_k(\mu_k)\nu(d\mu_k)\right] \ee where
\be{}\label{alg2} h_k(\mu) =\sum_j h_{k,j} \mu^{\otimes}(\prod_i 1_{A_{k,ij}})\qquad\text{that
is, a polynomial on }\mathcal{P}(\mathbb{I}).\ee
\noi We then define the generator in terms of the generator of the interacting system as follows:
\[ G^{N_2}H(\nu):=G^{N_2,\rm{int}} {F}({\mu}_1,\dots,{\mu}_{N_2}),\qquad F(\mu_1,\dots,\mu_{N_2}):=H(\nu),
\]
where $H\in D(G^{N_2})$ and \be{} \nu=\frac{1}{N_2}\sum_{j=1}^{N_2}\delta_{\mu_j},\ee and
$G^{N_2,\rm{int}}$ is given by (\ref{G.int}).
\begin{theorem}\label{T.1} (\cite{DG-93}, \cite{DG-14}).\\
\noindent Assume that $\gamma_2\geq 0$ and $V_2\in \mathcal{V}$. Then
\bean{} {\{\Xi^{N_2}_t\}_{t\in [0,T]}\Rightarrow (\Xi_t)_{t \in [0,T]} \;\text{ as }N_2 \to \infty}\eean where $\{\Xi_t\}_{t\in[0,T]} \in
C_{\mathcal{P}(\mathcal{P}(\mathbb{I}))}([0,T])$ is the two level Fleming-Viot process with level
two selection given by the unique solution, $\{P_\nu:\nu\in\mathcal{P}(\mathcal{P}(\mathbb{I}))\}$,
to the well-posed $(G_2,{D}_2)$ martingale problem where {the domain ${D}_2\subset
C(\mathcal{P}(\mathcal{P}(\mathbb{I})))$} consists of the algebra of functions containing functions
of the form (\ref{alg}) and the generator acting on ${D}_2$ is given by
\bea{}\label{G.2}
G_2 H(\nu)&&=\int_{\mathcal{P}(\mathbb{I})} \eta\, G^0\frac{\delta
H(\nu)}{\delta\nu(\mu)}\nu(d\mu)\nonumber\\&&+ c
\int_{\mathcal{P}(\mathbb{I})}\int_{\mathbb{I}}\left( \frac{\delta}{\delta\mu_1(x)}\frac{\delta
H(\nu)}{\delta\nu(\mu_1)}\left[\int\nu(d\mu_2)\mu_2(dx)-\mu_1(dx)\right]\right)\nu(d\mu_1)\nonumber
\\&& +\frac{\gamma_2}{2}\int_{\mathcal{P}(\mathbb{I})}\int_{\mathcal{P}(\mathbb{I})}\frac{\delta^2
H(\nu)}{\delta(\nu(\mu_1))\delta(\nu(\mu_2))}\Big(\nu(d\mu_1)\delta_{\mu_1}(d\mu_2)-\nu(d\mu_1)\nu(d\mu_2)\Big)\nonumber\\&&
+s_2 \left[ \int_{\mathcal{P}(\mathbb{I})} \frac{\delta H(\nu)}{\delta \nu(\mu_1)}\left[V_{2}(\mu_1) -
\int_{\mathcal{P}(\mathbb{I})} V_2(\mu_2)\nu(d\mu_2)\right]\nu(d\mu_1)\right] ,\nonumber \eea where $G^0$
is given by (\ref{G.0}).
\end{theorem}
\begin{proof} We follow the standard argument which involves three steps: proof of the tightness of
the laws of the processes, proof of convergence of the generators on a sufficiently large class of
functions and finally proof that the martingale problem associated with the limiting generator has
a unique solution. The first two steps follow in the usual way (e.g. proof of \cite{D93}, Theorem
5.3.1). It then remains to prove the uniqueness - this will be proved in the next section after
introducing the appropriate class of dual processes.
\end{proof}
\bigskip
\begin{remark}
An alternative class of functions, $\mathcal{D}_2$, is the linear span of functions of the form
\be{} H(\nu) = \prod_{k=1}^K\left(\int_{\mathcal{P}(\mathbb{I})} \left(\int_{\mathbb{I}^{n_k}}
h(x_{k,1},\dots,x_{k,n_k})\mu_k^{\otimes n_k}(dx_k)\right)\nu(d\mu_k)\right). \ee
We also consider the convex set $\wt{\mathcal{D}}_2$ of
$[0,1]$-valued functions, which contains functions of the above form with
$h$ having values in $[0,1]$. Note that this class uniquely determines probability measures on $\mathcal{P}(\mathcal{P}(\mathbb{I}))$.
\end{remark}
\begin{remark} The multilevel Fleming-Viot process is the analogue of the { multilevel superprocess}
- see e.g. Dawson-Hochberg (1991) \cite{DH-91}, Etheridge (1993) \cite{ET-93}, Wu (1994)
\cite{Wu-94}, Gorostiza-Hochberg-Wakolbinger (1995) \cite{GHW-95}, Dawson-Hochberg-Vinogradov
(1996) \cite{DHV-96}, Dawson and Wu (1996) \cite{DW-96}, Dawson-Gorostiza-Wakolbinger (2004)
\cite{DGW-04}.
\end{remark}
\bigskip
\begin{remark} In the special case $s_2=0$ and $\gamma_2=0$ we obtain the mean-field limit.
We consider a tagged colony - by exchangeability this can be colony 1, $\mu_{1}^{N_2}(t)$. Then as
$N_2\rightarrow\infty$ we obtain in the limit the measure-valued McKean-Vlasov dynamics (cf.
\cite{DG-14}) given by the solution to the martingale problem with {\em nonlinear generator}
\be{} G_1^\nu F(\mu)=G^0F(\mu)+ c \int_{\mathbb{I}} \frac{\delta
F(\mu)}{\delta\mu(x)}\left[\int_{\mathcal{P} (\mathbb{I})}\mu'(dx)\nu(d\mu')-\mu(dx)\right]\ee and the law of
the process, $\Xi_t= \mathcal{L}(\mu_t)\in\mathcal{P}(\mathcal{P}(\mathbb{I}))$, is the weak
solution of a nonlinear second order partial differential equation.
If we assume $\gamma_2=0$ but $s_2>0$, then $\Xi_t$ is still deterministic and is the solution of a
nonlinear second order partial differential equation which is a generalization of Kimura's equation
(\ref{Kim}). Depending on the functions $V_1,V_2$ and with recombination these nonlinear equations can
exhibit a range of behaviors including multiple equilibria and possible periodic or chaotic
behaviour (see Akin \cite{A-83a}). In the general case with $\gamma_2 >0$ we obtain a two level Fleming-Viot process.
\end{remark}
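\medskip
\noi For later reference we record the weak form of this equation (a sketch obtained by applying $G_2$ with $\gamma_2=0$ to linear functions $H(\nu)=\int h(\mu)\nu(d\mu)$, for which the second order term vanishes):
\be{} \frac{d}{dt}\int h(\mu)\,\Xi_t(d\mu)=\int G_1^{\Xi_t}h(\mu)\,\Xi_t(d\mu)
+s_2\left[\int h(\mu)V_2(\mu)\,\Xi_t(d\mu)-\int h(\mu)\,\Xi_t(d\mu)\int V_2(\mu)\,\Xi_t(d\mu)\right],\ee
so that, exactly as in (\ref{Kim}), level II selection enters through the covariance of $h$ and $V_2$ under $\Xi_t$.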
\section{Duality for interacting and two-level Fleming-Viot systems}\label{duality}
In this section we introduce a basic tool, namely the generalization of the class of set-valued
duals introduced in \cite{DG-14} to the class of two level probability-measure-valued processes
$\Xi(t)\in \mathcal{P}(\mathcal{P}(\mathbb{I}))$ which were obtained in the previous section. These
processes satisfy the martingale problem with generator $G_2$.
\bea{}\label{MP2} M_H(t):=&& H(\Xi_t)-\int_0^t G_2H(\Xi_s)ds \\&&\text{ is a } P-\text{
martingale} \text{ for all }H\in D_2(G_2).\nonumber\eea
\noindent The dual process developed here will be used to prove that there is a unique law $P\in
\mathcal{P}(C_{\mathcal{P}(\mathcal{P}(\mathbb{I}))}([0,\infty)))$ which satisfies the martingale
problem (\ref{MP2}). As explained above the idea is to find a dual process $\mathcal{G}^2_t$ and
to establish the duality relation
\be{}\label{duality21}
E_{\Xi(0)}(F(\Xi(t),\mathcal{G}^2_0))=E_{\mathcal{G}^2_0}(F(\Xi(0),\mathcal{G}^2_t)).\ee
\medskip
We begin by obtaining the dual for the system of interacting $\mathcal{P}(\mathbb{I})$-valued
processes with generator $G^{N_2,int}$ given by (\ref{G.int}). For detailed background on the
duality method to be used refer to \cite{DG-14} Chapter 5.
\medskip
\subsection{A function-valued dual}
We now introduce a function-valued dual for the process with generator $G^{N_2,int}$.
The state space for the function-valued dual is the set of functions, $\mathbb{H}$, of the form\\ $\sum_k
\prod_{i=1}^{N_2}\prod_{j=1}^{n_i} h_{k,i,j}(x_{ij})$. By inspection of the action of the generator
$G^{N_2,int}$ on functions in $\mathbb{H}$, we can read off the function-valued
transitions for mutation, selection and migration as follows.
\begin{itemize}
\item
Level I Selection with $V_1(x)=1_B(x)$.
\noindent Transitions at rate $s_1$:
\be{} h(x_1,\dots,x_n)\to 1_B(x_i)h(x_1,\dots,x_n)+1_{B^c}(x_{n+1})h(x_1,\dots,x_n)\ee
\item Mutation
\be{} h(\dots,x_i,\dots)\to \int h(\dots,y,\dots)M(x_i,dy)\ee
\item Level I Coalescence: At rate $\frac{\gamma_1 n(n-1)}{2}$ a pair of ranks $i<j$ coalesces:
\be{} h(x_1,\dots,x_n)\to h(x_1,\dots,x_{j-1},x_i,x_{j+1},\dots,x_n)\ee
\item Migration: For each $i,j\in S$, at rate $\frac{c}{N_2}$, \be{}\label{mig} h_1(x_{i1},x_{i2})h_2(x_{j1},x_{j2})\to
h_1(x_{i1},x_{j3})h_2(x_{j1},x_{j2}) \ee Here the first index indicates the deme and the second the
rank at the given deme.
\item Level II selection: By (\ref{E.BA}) and taking convex combinations it suffices to consider a level II fitness function of the form:
\[ V_2(\mu)= \mu(B)\]
\bea{}&& h(\cdot)\longrightarrow V_2(\cdot) h(\cdot) +(1- V_2(\cdot))\otimes h(\cdot) \eea
\noi The level II selection transitions then occur, for each $i,j\in S$,
at rate $\frac{s_2}{N_2}$:
\[ h(x_{i1},x_{i2})\to 1_B(x_{i1})h(x_{i2},x_{i3})+(1-1_B(x_{j1}))h(x_{i1},x_{i2})\]
\item Level II coalescence: for each pair $i,j$ at rate $\frac{\gamma_2}{2}$ \be{}
h_1(x_{i1},x_{i2})h_2(x_{j1},x_{j2})\to h_1(x_{i1},x_{i2})h_2(x_{i3},x_{i4}).\ee
\end{itemize}
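\medskip
\noi As a minimal illustration of these transitions (a sketch; this example is taken up again in Example \ref{ex6}), take $\mathbb{I}=\{1,2\}$, $V_1=1_{\{1\}}$ and start from $h(x_1)=1_{\{2\}}(x_1)$. A level I selection event produces
\be{} 1_{\{2\}}(x_1)\to 1_{\{1\}}(x_1)1_{\{2\}}(x_1)+1_{\{2\}}(x_2)1_{\{2\}}(x_1)=1_{\{2\}}(x_1)1_{\{2\}}(x_2),\ee
while a level I coalescence of the two ranks maps $1_{\{2\}}(x_1)1_{\{2\}}(x_2)$ back to $1_{\{2\}}(x_1)$; the number of ranks thus performs a birth and death process.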
\subsection{A set-valued dual for exchangeably interacting systems of Fleming-Viot
processes}\label{sss.sv}
\medskip
We now introduce the set-valued dual which will be used to study the interacting system of
Fleming-Viot processes and then the limiting two-level Fleming-Viot process. This is based on the
set-valued dual introduced in \cite{DG-14} (subsections 9.4, 9.5) for the system of exchangeably
interacting Fleming-Viot processes but extended in order to include level II selection and
resampling.
\medskip
\noi We begin with the population at a set of demes labeled by $S$ with \be{} S=\{1,\dots,N_2\}\ee
with migration between demes as defined in subsubsection \ref{system1} with the assumption of
exchangeability. Recall that the state space for the finite system of interacting Fleming-Viot
processes is $(\mathcal{P}(\mathbb{I}))^{S}$. The set-valued dual is a refinement of the
function-valued dual sketched above. Noting that it suffices to work with linear combinations of
indicator functions, the level I function-valued and set-valued versions of the above dual were
introduced and studied in depth in Dawson and Greven \cite{DG-14}.
We now introduce the state space and notation needed to define the set-valued dual
$\mathcal{G}_t$.
\noi Recall that $\mathbb{I}=\{1,\dots,K\}$. We indicate the indicator function of a subset
$A\subset \mathbb{I}$ by $1_A=(e_1,\dots,e_K)$ with $e_i=1$ if $i\in A$ and $e_i=0$ if $i\in A^c$,
that is, the complement of $A$. For example, the indicator function of $\{1,2\}\subset\{1,2,3\}$ is
indicated by $(110)$. We sometimes identify finite subsets with their indicator functions.
Let
\begin{eqnarray}&&\mathcal{I}:= \text{ algebra of subsets of }\mathbb{I}^{\N}\nonumber
\\&&\qquad\text{ of the form } A\otimes_1 \mathbb{I}^{\N},\; A\text{ is a subset of }\mathbb{I}^m,m\in\N,\nonumber\end{eqnarray}
where the coordinates in a product in $\mathbb{I}^m$ are called {\em ranks}. Given $A,B\subset
\mathbb{I}$ we denote the product of these sets in $\mathbb{I}\times\mathbb{I}$ as $A\otimes_1 B$.
Given $A,B\subset \mathcal{I}$ we denote the product of these sets in
$\mathcal{I}\times\mathcal{I}$ as $A\otimes_2 B$.
\bigskip
\noi The {\em state space} $\sf{I}^{N_2}$ for the set-valued dual associated to the interacting
systems of Fleming-Viot processes with $S= \{1,\dots,N_2\}$ is the algebra of sets containing sets
of the form
\bea{}\label{fnc1} && \bigotimes_{2,\,i\in S} \left(\otimes_{1\,,j=1}^{n_i} A_{i,j}\right),\quad
A_{i,j}\subset\mathbb{I},\; n_i\in\N,\\&&\in (\mathcal{I})^{\otimes_2 S}.\nonumber\eea
In order to describe the dual dynamics we first describe the transitions that occur for a set
written as a disjoint union of sets of the form (\ref{fnc1}),
where in $A_{i,j}\subset \mathbb{I}$ the first subscript denotes the deme and the second subscript
denotes the rank at the deme; the level I fitness function is taken to be $V_1(x) = 1_{B}(x)$ with selection rate $s_1>0$.
\bigskip
\noi The transitions of the set-valued process $\mathcal{G}^{N_2,int}_t$ are obtained by restricting the function-valued
transitions to indicator functions of sets in $\sf{I}^{N_2}$. These are then given by:
\smallskip
\noindent Level I selection at rank $j^*$ at deme $i^*\in S$ at rate $s_1$: \be{}\label{sel1}
A_{i^*,j^*} \to B_{i^*,j^*}\cap A_{i^*,j^*}\cup B_{i^*,j^*}^c\otimes_1 A_{i^*,j^*+1}\ee and all
other ranks larger than $j^*$ are also shifted to the right at deme $i^*$.
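\smallskip
\noi For example (an illustrative instance of (\ref{sel1})), with $\mathbb{I}=\{1,2,3\}$, $B=\{1\}=(100)$ and $A_{i^*,j^*}=\{2,3\}=(011)$, a selection event at rank $j^*$ gives
\be{} (011)\to \big((100)\cap(011)\big)\cup\big((011)\otimes_1(011)\big)=(011)\otimes_1(011)\ee
since the intersection is empty; selection acting on a set disjoint from $B$ thus simply duplicates it at a new rank.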
\bigskip
\noindent Mutation at rank $j$ at deme $i$: (refer to \cite{DG-14}, Definition 5.12 and
Subsubsection 9.5.3) \be{}A_{ij}\subset\mathbb{I}\to A_{ij}\cup\{\ell\}\text{ with } \ell\in
\mathbb{I}\text{ at rate }\sum_{k\in A_{ij}}m_{\ell,k},\ee or \be{}A_{ij}\to A_{ij}\backslash
\{\ell\}\text{ at rate }\sum_{k\in A_{ij}^c}m_{\ell,k}.\ee
\bigskip
\noindent Coalescence at rate $\gamma_1/2$ of ranks $j_1$ and $j_2>j_1$ at deme $i\in S$:
$A_{i,j_1}\otimes_1 A_{i,j_2}\to \wt A_{i,j_1}= A_{i,j_1}\cap A_{i,j_2}$ and $\wt A_{i,j}=
A_{i,j+1}$ for $j\geq j_2$.
\bigskip
\noindent Migration at rate $\frac{c}{N_2}$ of rank $j$ from deme $i_2\in S$ to $i_1\in S$. Let
$A_i=\otimes_{1,j=1}^{n_i}A_{ij}$.
\be{}A_{i_1}\otimes_2
A_{i_2}\to \wt A_{i_1}\otimes_2 \wt A_{i_2}\ee with \be{} \wt A_{i_1,n_1+1}= A_{i_2,j}\ee \be{}\wt
A_{i_2,\ell}=A_{i_2,\ell+1}\quad\text{for }\ell\geq j\ee
\be{}\wt
A_{i_2,\ell}=A_{i_2,\ell}\quad\text{for }\ell < j\ee
\begin{remark}
Note that in the limit $N_2\to\infty$ the measure $\nu$ is nonatomic and migration or level II
selection transitions always lead to a new (that is unoccupied) deme.
\end{remark}
\bigskip
\medskip
\noi {\em Coupling:} Note that every set in $\sf{I}^{N_2}$ can be written as the union of a finite
number of disjoint sets of the form $\otimes_{2,i=1}^N\otimes_{1,j=1}^{n_i}A_{i,j}$ with
$A_{i,j}\subset \mathbb{I}$ and $N\in\N$. Finally the above transitions are simultaneously carried
out in this disjoint union of products and are coupled as follows: all selection, mutation,
coalescence and migration operations are simultaneously applied to each rank at each deme of each
product in the disjoint union. Each such transition preserves the decomposition of the disjoint
union into a new disjoint union - this is obviously satisfied for mutation, coalescence and
migration and true for selection in view of the specific form (\ref{sel1}).
\bigskip
\subsubsection{The dual representation}
We now state the duality relation for the system of interacting Fleming-Viot processes
$\mathbf{X}$ under the assumption
\be{} \mathbf{X}(0)=\mu^{\otimes_{2}N_2}\quad \text{ with }
\mu\in\mathcal{P}(\mathbb{I}),\ee which implies that we have a system of exchangeably interacting
Fleming-Viot processes.
\medskip
Define the function $F:(\mathcal{P}(\mathbb{I}))^{N_2}\times \sf{I}^{N_2}\to [0,1]$ by
\bea{} F(\mathbf{X},\mathcal{G})= \mathbf{X}^*(\mathcal{G})\eea
where if $\mathbf{X}(0)=\otimes_{2,j=1}^{N_2} \mu_j$, then { $\mathbf{X}^*(0)
=\otimes_{2,j=1}^{N_2}(\mu_j)^{\otimes_1\N}\in \mathcal{P}((\mathbb{I}^{\mathbb{N}})^{N_2})$}. For
example,
if $G=\bigotimes_{i\in S} G_i$ with $G_i\in\mathcal{I}$, then
\be{} \mathbf{X}^*(G)=\prod_{j\in S} \mu_j^{\otimes_1 \N}(G_j).\ee
\bigskip
\begin{theorem}\label{T.2} Let $\mathbf{X}^{N_2}$ denote a solution to the martingale problem with
generator $G^{N_2,int}$. Then\\ \noi (a) ({\em Dual representation})
\bea{}\label{dr1} E_{\mathbf{X}(0)}(F(\mathbf{X}^{N_2}_t,\mathcal{G}^{N_2,int}_0))=E_{\mathcal{G}^{N_2,int}_0}(F(\mathbf{X}^{N_2}_0,\mathcal{G}^{N_2,int}_t))\eea
\noi (b) The representation (\ref{dr1}) uniquely determines the marginal distribution of the
process $\mathbf{X}^{N_2}(t)$ and therefore establishes the uniqueness of the solution to the
martingale problem.
\end{theorem}
\begin{proof} The proof in the case $s_2=0$ is given in detail in \cite{DG-14} based on verifying that the generators
of the two processes acting on the function $F$ satisfy the relation \be{}
G^{N_2,int}F(\mu,A)=G^{dual}F(\mu,A)\quad \text{ for all } \mu\in \mathcal{P}(\mathbb{I}),\; A\in
\sf{I},\ee where $G^{dual}$ is the generator of the set-valued Markov jump process with transition
rates given above. The extension to the case $s_2>0$ follows in the same way and will be given in
more detail for the two-level process below.
\end{proof}
\bigskip
\begin{remark}\label{R5} \textbf{State-dependent fitness}
We can also consider level I selection that is state-dependent, that is, in which the fitness of a
type depends on the distribution of types (e.g. diploid). For example we could have the fitness of
type $1$ proportional to the population proportion of types in $B$, that is, $V(1,\mu)= s\mu(B),\;
B\subset \mathbb{I},\; s\geq 0$. In this case the dual has function-valued transitions at rate $s$:
\bean{}
&& f\to 1_B\otimes_1 [1_1f-1_1\otimes_1 f]+f\\
&& =1_B\otimes_1[1_1f +(1-1_1)\otimes_1 f]\\
&& \quad + (1-1_B)\otimes_1 f. \eean
A second example leads to a set-valued dual which can be used to analyse such systems, e.g. with mutualistic
types (see Example \ref{E4}). Consider $\mathbb{I}=\{1,2,3\}$. \[ V_1(1)=v_1,\quad V_1(2,\mu)=
v_{M}\cdot \mu(3), \;V_1(3,\mu)=v_{M}\cdot \mu(2),\] with transitions \begin{eqnarray}\label{sdf}&&
f \to 1_1f+(1_{2}+1_3)\otimes_1 f\quad \text{ at rate }v_1,\\&&
f\to 1_3\otimes_1[1_2f+(1_{1}+1_3)\otimes_1 f]+(1_1+1_2)\otimes_1 f \quad \text{ at rate }v_M,\nonumber\\&&
f\to 1_2\otimes_1[1_3f+(1_{1}+1_2)\otimes_1 f]+(1_1+1_3)\otimes_1 f \quad \text{ at rate
}v_M.\nonumber\end{eqnarray}
\end{remark}
\subsection{A set-valued dual for the two level Fleming-Viot process}
The objective of this subsection is to extend the set-valued dual of subsection \ref{sss.sv} in
order to construct set-valued duals $\mathcal{G}^{2,N_2}_t$, $\mathcal{G}^2_t$ for the
$\mathcal{P}(\mathcal{P}(\mathbb{I}))$- valued processes $\{\Xi^{N_2}(t)\}_{t\geq 0}$ (assuming
exchangeable initial configuration) and $\{\Xi(t)\}_{t\geq 0}$. We assume that the level II
selection rate is $s_2$ with fitness function $V_2$, and that the level II resampling rate is
$\frac{\gamma_2}{2}$. To simplify the notation we take $\eta=1$ in (\ref{G.int}) in the
subsequent discussion.
\bigskip
\noi The {\em state space} $\sf{I}^{2*}$ for the set-valued dual $\mathcal{G}^2_t$ is the
algebra of subsets of $(\mathbb{I}^{\N})^{\N}$ containing sets of the form
\bea{}\label{fnc12} && \bigotimes_{2,\,i\leq m} \left(\otimes_{1\,,j=1}^{n_i} A_{i,j}\right),\quad
A_{i,j}\subset\mathbb{I},\; n_i\in\N,\\&&\in (\mathcal{I})^{\otimes_2 m},\quad m\in \N,\nonumber\eea
where $i$ is the index of the deme and $j$ is the index of the rank within the deme.
The transitions of the dual due
to level I mutation, resampling, and selection at each deme and migration between demes are given as
above in subsection \ref{sss.sv}. In the $N_2\to\infty$ limit, migrants always move to a new
(unoccupied) deme, namely the unoccupied deme of lowest index.
\medskip
\noi We assume that $V_2$ belongs to the class of level II fitness functions of the form
\be{}\label{v2rep} V_2(\mu)= \sum_j s_{2,j} V_{2,j}(\mu),\;\;
V_{2,j}(\mu)= \mu^{\otimes n_j}(B_j)\ee where $B_j=\prod_{i=1}^{n_j}B_{ji}$ with $B_{ji}\subset
\mathbb{I}$.
\bigskip
\noi We
now introduce the additional transitions that occur due to level II selection and coalescence.
\bigskip
\noindent As above we use the notation $\otimes_1$ and $\otimes_2$ to distinguish such products
on $\mathbb{I}$ and $\mathcal{I}$. Similarly for measures in
$\nu_i\in\mathcal{P}(\mathcal{P}(\mathbb{I}))$ we write $\nu_1\otimes_2\nu_2$ for the product
measure.
\bigskip
\noindent \underline{Set-valued transitions - deme level selection}
\smallskip
Using linearity
it suffices to consider the contribution to the dynamics of $ V_{2,j}$ with $
V_{2,j}(\mu)=\mu^\otimes(B_j)$, $B_j\in(\mathbb{I})^{n_j}$. Given such a fitness function and sets
$A_i\in\mathcal{I}$ with the subscript indicating the deme, for every $k,\ell\in S$ the action
of selection on deme $k$ results in the transition
\be{} \bigotimes_{2, i\in S} A_i\to \left(B_k\otimes_1 A_k \otimes_2 \bigotimes_{2,\,i\ne
k}A_i\right) \bigcup \left(B^c_\ell\otimes_2 \bigotimes_{2,\, i\in S} A_i\right)\ee which occurs at
rate $\frac{s_{2,j}}{N_2}$ if $S=\{1,\dots,N_2\}$.
\medskip
If $S=\mathbb{N}$, then this becomes
\be{}\label{22sel} \bigotimes_{2,\, i=1}^{n} A_i\to \left(B_k\otimes_1 A_k \otimes_2
\bigotimes_{2,\,i\ne k}A_i\right) \bigcup \left(B^c_\ell\otimes_2 \bigotimes_{2,\,i=1}^{n}
A_i\right),\ee where $A_i,B_i\in\mathcal{I}$, $n$ is the number of occupied demes (i.e. demes not
identically $\mathbb{I}^\N$) and $\ell$ denotes the first unoccupied deme, and the transition occurs at rate
$s_{2,j}$.
\bigskip
\noi Note that exchangeability is preserved by the dynamics and we can again {\em couple} the
corresponding indices after a level II selection event, that is all further transitions are applied
simultaneously to the corresponding indices (deme and rank at the deme). For this reason we can
rewrite (\ref{22sel}) as
\be{}\label{l2selx} \bigotimes_{2,\, i=1}^{n} A_i\to \left(B_k\otimes_1 A_k \otimes_2
\bigotimes_{2,\, i\ne k}A_i\right) \bigcup \left(B^c_k\otimes_2 \bigotimes_{i} \wt A_i\right)\ee
where $\wt A_i =A_i\text{ if }i<k$, $\wt A_i =A_{i+1}\text{ if }i\geq k$. This means that the new
event is a disjoint union of events and this is preserved by all further transitions.
\bigskip
\noindent Example: Let $\mathbb{I}=\{1,2\}$ with fitness $V_2(\mu)=\mu(1)$. Given
$1_{\mathcal{G}_0}= 1_1$, the first transition (in terms of indicator functions) is
\[ 1_1\to 1_1\otimes_1 1_1+ 1_1\otimes_2 1_2,\]
so that, letting
\[ E_\nu(\mu(1))=\int \mu(1)\nu(d\mu),\] we have
\[ E_\nu(\mu(1))\to E_\nu(\mu(1)) +E_\nu((\mu(1))^2)-(E_\nu(\mu(1)))^2= E_\nu(\mu(1))+\text{Var}_{\nu}(\mu(1)),\]
so that if $\nu$ is not a single atom probability measure the level II selection is effective.
\bigskip
\noi \underline{Set-valued transitions - deme level coalescence}
The level II resampling results in the coalescence of two demes, for example demes $1,2$ as
follows:
\bean{}&& \left(\bigotimes_{1,j} B_j\right)_1\otimes_1 \left(\bigotimes_{1,i} A_i\right)_1 +
\left(\bigotimes_{1,j}
B_j\right)_1^c\otimes_2 \left(\bigotimes_{1,i} A_i\right)_2\\
&&\to \left(\bigotimes_{1,j} B_j\right)_1 \otimes_1 \left(\bigotimes_{1,i} A_i\right)_1+
\left(\bigotimes_{1,j} B_j\right)_1^c\otimes_1 \left(\bigotimes_{1,i} A_i\right)_1\\&&=
\left(\bigotimes_{1,i} A_i\right)_1\eean where the exterior subscripts denote the deme.
When there is no level II coalescence and $\nu_0$ is non-random, the trajectory $t\to \nu_t$ is
deterministic and it suffices to consider $k_0=1$. The reason is that in this case there is no interaction
between the supports of the associated set-valued processes starting with disjoint supports,
so that $Var(\int h(\mu)\nu_t(d\mu))=0$.
\bigskip
\subsubsection{The duality relation for the two level Fleming-Viot process and its applications}
We now present the dual representation for the two level Fleming-Viot systems.
\medskip
\noi Let $\{\Xi(t)\}_{t\geq 0}$ denote a $\mathcal{P}(\mathcal{P}(\mathbb{I}))$-valued process
with probability law \[P^{\Xi_0}=\mathcal{L}(\Xi)\in
\mathcal{P}(C_{\mathcal{P}(\mathcal{P}(\mathbb{I}))}([0,\infty)))\] which satisfies the
$(G_2,{{D}}_2)$-martingale problem. Then the time-marginals $\Xi(t)$ are random probability measures
on $\mathcal{P}(\mathbb{I})$. Then by de Finetti's theorem (see \cite{D93}, Theorem 11.2.1) there
exists a sequence $\{\wt\mu_n\}$ of $\mathcal{P}(\mathbb{I})$-valued exchangeable random variables
such that $(\wt\mu_1,\dots,\wt\mu_n)$ has joint distribution \be{}
P^{(n)}(t,d\wt\mu_1,\dots,d\wt\mu_n)=\int_{\mathcal{C}_{\mathcal{P}(\mathcal{P}(\mathbb{I}))}[0,\infty)}\Xi(t,d\wt\mu_1)\dots\Xi(t,d\wt\mu_n)\,dP^{\Xi_0},\quad
n\in\N,\ee corresponding to the moment measures of $\Xi(t)$.
\medskip
\noi Let $\mathcal{G}_t$ with values in $\sf{I}^{2*}$ denote the set-valued process defined above.
\medskip
\noi Define the function $ \mathcal{H}:(\mathcal{P}(\mathcal{P}(\mathbb{I})))\times (\mathcal{I})^{\otimes \N,*}\to [0,1]$ by
\bea{} &&\mathcal{H}(\nu,\mathcal{G})= \sum_{k=1}^{N_1}\prod_{i=1}^{N_{2,k}}\int \left[\mu_i^{\otimes
n_{ki}}(A_{k,i})\right]\nu(d\mu_i)\;\;\text{ if }
\mathcal{G}=\bigcup_{k=1}^{N_1}\bigotimes_{2,i=1}^{N_{2,k}} A_{k,i},\text{ with } A_{k,i}\subset (\mathbb{I})^{n_{ki}},\nonumber\\&&
\text{with } \bigotimes_{2,i=1}^{N_{2,k_1}} A_{k_1,i}\cap \bigotimes_{2,i=1}^{N_{2,k_2}} A_{k_2,i}=\varnothing\text{ if }k_1\ne k_2,\nonumber \\&&
\text{we can also write this as }\mathcal{H}(\nu,\mathcal{G})=\nu^*(\mathcal{G}).\nonumber\eea
\beT{}\label{T.3}{Dual Representation}
(a) For any solution $\{P_{\Xi_0}:\Xi_0\in\mathcal{P}(\mathcal{P}(\mathbb{I}))\}$ of the
$(G^{N_2},{{D}}_2)$ or $(G_2,{{D}}_2)$-martingale problem
\bea{}\label{drep2} E_{\Xi_0}(\mathcal{H}(\Xi_t,\mathcal{G}^2_0))=E_{\mathcal{G}^2_0}(\mathcal{H}(\Xi_0,\mathcal{G}^2_t)).\eea
(b) The $(G^{N_2},{{D}}_2)$ and $(G_2,{{D}}_2)$-martingale problems are
well-posed.
\end{theorem}
\begin{proof} The proofs for the cases $(G^{N_2},{{D}}_2)$ and $(G_2,D_2)$ follow the same lines. We now give the details
for the latter case.\\ (a) As above, we begin by identifying the terms in $G_2H(\nu)$ for
functions
in ${D}_2$ of the form: \be{}H(\nu)=\prod_{k=1}^{k_0}\left(\int_{\mathcal{P}(\mathbb{I})}
\left[\int_{\mathbb{I}^{n_k}} h(x_{k,1},\dots,x_{k,n_{k}})d\mu_{k}^{\otimes
n_{k}}\right]\nu(d\mu_{k})\right),\ee in other words we work with functions of the form
\[ \prod_{k=1}^{k_0}h(x_{k,1},\dots,x_{k,n_k}).\]
\medskip
The transitions for functions of this form are given as follows:
\begin{itemize}
\item Level I resampling. This results in the coalescence \be{}\int \left[\int\int
h(x_{11},x_{12})\mu(dx_{11})\mu(dx_{12})\right]\nu(d\mu)\to \int\left[\int
h(x_{11},x_{11})\mu(dx_{11})\right]\nu(d\mu),\quad \text{at rate }\gamma_1\ee
\item Migration. At rate $c$
\bea{}&& \int \left[\int h(x_{11},x_{12})\mu(dx_{11})\mu(dx_{12})\right]\nu(d\mu)\\&& \to
\int\int\left[ \int
h(x_{11},x_{22})\mu_1(dx_{11})\mu_2(dx_{22})\right]\nu(d\mu_1)\nu(d\mu_2).\nonumber\eea
\item Selection at level I with
\be{} V_1(x)= 1_{B}(x).\ee This results in the transition
\bea{}&&\int \int h(x_{11})\mu(dx_{11})\nu(d\mu)\\&&\to \int\left[\int
h(x_{11})1_B(x_{11})\mu(dx_{11})+\int\int
1_{B^c}(x_{11})h(x_{12})\mu(dx_{11})\mu(dx_{12})\right]\nu(d\mu)\nonumber \eea
at rate $s_1$.
\end{itemize}
\medskip
\noi Under {\em level II coalescence} two occupied demes $i,j \in S,\,i\ne j$ are chosen at random
at rate $\gamma_2/2$ and we have \bea{} &&\int\int\left[\int\int\int\int
h(x_{i1},x_{i2})h(x_{j1},x_{j2})\mu_i(dx_{i1})\mu_i(dx_{i2})\mu_j(dx_{j1})\mu_j(dx_{j2})\right]\nu(d\mu_i)\nu(d\mu_j)\nonumber\\&&\to
\int\left[\int\int\int\int
h(x_{i1},x_{i2})h(x_{i3},x_{i4})\mu(dx_{i1})\mu(dx_{i2})\mu(dx_{i3})\mu(dx_{i4})\right]\nu(d\mu),\nonumber\eea
that is, in shorthand,
\[ h\otimes_2 h\to h\otimes_1 h.\]
In particular we have
\[ \int\left(\int h(x)\mu(dx)\right)\nu(d\mu)\cdot \int\left(\int h(x)\mu(dx)\right)\nu(d\mu)\to
\int \left(\int h(x)\mu(dx)\right)^2\nu(d\mu).\]
\medskip
\noi {\em Level II selection} with fitness function $ V_{2}$.
Now consider the case in which $h(\cdot)$ and $ V_2(\cdot)$ are polynomials, that is,
\be{}\label{hdef} h(\mu) =\sum_j h_j \mu^{\otimes}(\otimes_{1,i}{A_{ji}})\qquad\text{a polynomial
on }\mathcal{P}(\mathbb{I}),\; h_j\leq 1,\ee where \be{}\label{v2rep2} V_2(\mu)= \sum_j a_j
V_{2,j}(\mu),\;\; V_{2,j}(\mu)= \mu^{\otimes}(\otimes_{1,i}{B_{ji}}).\ee
The level II selection transition is
\be{}\label{hsela} h(\mu)\to V_{2,j}(\mu_1) h(\mu_1)+(1- V_{2,j}(\mu_1)) h( \mu_2)\quad\text{at
rate } a_j,\ee
so that
\be{} \label{hselb}\int h(\mu)\nu(d\mu)\to\int V_{2,j}(\mu_1)h(\mu_1)\nu(d\mu_1)+\int\int
(1-V_{2,j}(\mu_1))h( \mu_2)\nu(d\mu_1)\nu(d\mu_2),\ee
or, in coordinate form,
\bea{}&& h(x_{11},\dots,x_{1,n_1})\to\\&& V_2(x_{1,1},\dots,x_{1,n_2})
h(x_{1,(n_2+1)},\dots,x_{1,(n_2+n_1)})\nonumber\\&&+(1- V_2(x_{1,1},\dots,x_{1,n_2}))
h(x_{2,(n_2+1)},\dots,x_{2,(n_2+n_1)}).\nonumber\eea
\bigskip
\noi More generally, let $H(\nu)=\prod_{i=1}^K\int h(\mu_i)\nu(d\mu_i)$ and assume $ V_2\leq 1$, for example
the indicator function of a set. The selection acting on $H$ produces
\bea{}\label{hselc} && \prod_{i=1}^K\int h(\mu_{i})\nu(d\mu_{i}) \longrightarrow\\
&& \sum_{j=1}^K \Big(\Big\{ \prod_{i\ne j}\int h(\mu_{i})\nu(d\mu_{i})\Big\}\nonumber
\\&&\cdot \left[\int h(\mu_{j}) V_2(\mu_{j})\nu(d\mu_{j})+\int (1- V_2(\mu_j))\nu(d\mu_j)\int h(\mu_{K+1})\nu(d\mu_{K+1})\right]\Big)\nonumber\eea
or, in shorthand,
\[
h(\mu_1)\to V_2(\mu_1) h(\mu_1)+ (1- V_2(\mu_1))\otimes_2 h(\mu_2).\]
\bigskip
\noi The corresponding set-valued transitions are obtained by restricting to functions
$h$ of the form $h(\mu)=\mu^\otimes(A)$ with $A\in\mathcal{I}$ as in subsubsection \ref{sss.sv}.
\medskip
\noi \underline{Coupling}. In view of the assumption of exchangeability, we can couple the $
V_2(\mu) h(\mu)$ and $(1-V_2(\mu))$ terms for the level II selection transitions at the deme
level, that is, place these at the same deme index in the two resulting summands thus producing a
union of disjoint sets in $\sf{I}^{2*}$. Then as before, all operations are performed
simultaneously on all demes and ranks in the different summands.
\bigskip
\noi \underline{Set-valued transitions} The set-valued transitions can then be read off by
restricting the function-valued transitions to the class of functions $h$ that are based on
indicator functions of sets as in (\ref{hdef}) and noting that due to the coupling the transitions
preserve the decomposition into the union of {\em disjoint} subsets. These transitions define a
Markov jump process $\{\mathcal{G}^2_t\}_{t\geq 0}$ with countable state space $\sf{I}^{2*}$ and we
denote the resulting generator by $G^{dual}$. The identity of the action of the corresponding
terms of $G_2$ acting on $\mathcal{H}(\nu,\mathcal{G})$ and the result of the transition of the
set-valued dual, that is,
\be{}\label{G-G}
G_2 \mathcal{H}(\nu,\mathcal{G})=G^{dual}\mathcal{H}(\nu,\mathcal{G})\quad \text{ for all }
\nu\in \mathcal{P}(\mathcal{P}(\mathbb{I})),\; \mathcal{G}\in (\mathcal{I})^{\otimes \N,*},\ee is
then immediate by inspection. For example, setting $V_2(\mu)=\mu^\otimes(\otimes_{1,i} B_i)$ and
applying the corresponding selection transition from $G_2$ to $H(\nu)=\int \prod_{i=1}^K
h_i(\mu_i)\nu(d\mu_i)$ with $h_i(\mu_i)= \mu_i(\otimes_{1,j} A_{ij})$, (\ref{hselc}) yields the
set-valued transitions (\ref{22sel}) with $A_i=\otimes_{1,j} A_{ij}$, which correspond to
$G^{dual}$.
\medskip
\noi The duality relation (\ref{drep2}) then follows from (\ref{G-G}) (see for example
Proposition 7.10 in \cite{D10} or Chapter 4 of \cite{EK2}). \medskip
\noi The uniqueness of the solution to the martingale problem then follows. In particular, the
moment measures of the time marginals of any solution to the martingale problem are determined by
the dual representation. In turn the moment measures uniquely determine $\mathcal{L}(\Xi(t))$ as
follows: Let $\wt\mu_1,\wt\mu_2,\dots$ be an exchangeable sequence of
$\mathcal{P}(\mathbb{I})$-valued random variables with marginal distributions given by the moment
measures determined by the dual $\mathcal{G}^2_t$ and \be{} \wt\Xi_m:=\frac{1}{m}\sum_{i=1}^m
\delta_{\wt\mu_i}.\ee
Then by de Finetti's theorem \be{}
\mathcal{L}(\wt\Xi_m)\Rightarrow \mathcal{L}(\Xi(t))\quad \text{as }m\to\infty,\ee that is, the
time marginal laws of any solution are uniquely determined by this limit.
\medskip
\noindent (b) Since the class of functions of the form $\mathcal{H}(\cdot,\mathcal{G})$ with
$\mathcal{G}\in (\mathcal{I})^{\otimes \N,*}$ is probability-measure-determining on
$\mathcal{P}(\mathcal{P}(\mathbb{I}))$, it follows from (a) that the time marginals of $\Xi(t)$ are
uniquely determined. The result (b) then follows from the basic results on dual
martingale problems (see e.g. Theorem 7.9 and Proposition 7.10 in \cite{D10} or Chapter 4 of
\cite{EK2}).
\end{proof}
\bigskip
\subsubsection{Moment calculations} The dual can be used to compute joint moments and covariance
structures. We illustrate with two simple examples.
\begin{example} \label{ex6} Consider the case of $\mathbb{I}=\{1,2\}$, no mutation and $V_1(1)=1,V_1(2)=0$, $c=\gamma_2=s_2=0$ but $s_1,\gamma_1>0$.
In order to compute \be{}\lim_{t\to\infty} E_{\delta_{\mu_0}}\left(\int \mu(2)\Xi_t(d\mu)\right)\ee
we use the dual started with $\mathcal{G}_0=(01)$. Then we have transitions due to selection
and coalescence. As a result \be{} \mathcal{G}_t= (01)^{\otimes_1 n(t)}\ee where $n(t)$ is a birth
and death process with linear birth rate, $n\to n+1 $ at rate $s_1n$, and quadratic death rate,
$n\to n-1$ at rate $\gamma_1n(n-1)/2$. As a result $\{n(t)\}$ is ergodic with stationary distribution $\{p_k:k\in \N\}$.
Then \be{}\label{lfirst} \lim_{t\to\infty} E_{\delta_{\mu_0}}\left(\int \mu(2)\Xi_t(d\mu)\right)=
E\left(\mu_0^{\otimes_1\N}(\mathcal{G}_{eq})\right)=\sum_{k=1}^{\infty} (\mu_0(2))^kp_k.\ee
\noi Next consider \be{}\label{lmixed} \lim_{t\to\infty} E_{\delta_{\mu_0}}\left(\int (\mu(1)\cdot
\mu(2))\,\Xi_t(d\mu)\right).\ee In this case we start the dual with
$\mathcal{G}_0=(10)\otimes_1(01)$. Again the number of ranks is given by a birth and death process
which will eventually reduce to $n(t)=1$; the final coalescence, $(10)\cap(01)=\varnothing$, implies that
$\mathcal{G}(\tau)=\varnothing$ for some finite random time $\tau$. This implies fixation, that is,
the limit (\ref{lmixed}) is zero. The corresponding fixation probabilities are then given by the
limiting first moments calculated in (\ref{lfirst}).
\end{example}
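\begin{remark} (An explicit formula, obtained by a routine detailed balance calculation under the assumptions of Example \ref{ex6}.) The birth and death process $n(t)$ satisfies $p_k\, s_1 k= p_{k+1}\,\gamma_1 (k+1)k/2$, so that with $\theta:=2s_1/\gamma_1$,
\be{} p_k=\frac{\theta^k/k!}{e^{\theta}-1},\qquad k\geq 1,\ee
a Poisson distribution conditioned to be positive, and (\ref{lfirst}) becomes
\be{} \lim_{t\to\infty} E_{\delta_{\mu_0}}\left(\int \mu(2)\Xi_t(d\mu)\right)=\frac{e^{\theta \mu_0(2)}-1}{e^{\theta}-1}.\ee
\end{remark}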
\begin{example}
With two types $\{1,2\}$ we take $\mathcal{G}^2_0= (10)_1^{\otimes k_1}\otimes_2 (10)_2^{\otimes
k_2}$ and we get
\[ E_{\Xi_0}\left(\int(\mu(1))^{k_1}\Xi_t(d\mu)\int(\mu(1))^{k_2}\Xi_t(d\mu)\right)=E[\mathcal{H}(\Xi_t,(10)_1^{\otimes k_1}\otimes_2 (10)_2^{\otimes k_2})]
=E_{\mathcal{G}^2_0}[\mathcal{H}(\Xi_0,\mathcal{G}^2_t)].\]
\end{example}
\subsubsection{Coalescent}
The coalescent plays a central role in the study of Moran, Fisher-Wright and Fleming-Viot processes
with neutral types. We now consider the analogous genealogical structure for two-level systems with
$s_1=s_2=0$ which is determined by the level one and level two coalescence transitions in the
set-valued dual. The genealogy is described by a marked coalescent process analogous to the marked
coalescent process used for spatial processes (see for example \cite{GLW}).
The state space of the two-level coalescent is the set of marked partitions $(\pi,\zeta(\pi)),
\pi\in \Pi^{\texttt{I}},\zeta(\pi)\in \mathbb{N}^{|\pi|} $
where $\Pi^{\texttt{I}}$ is the set of partitions of a countable set $\texttt{I}$ into subsets and for
$\pi\in\Pi^{\texttt{I}}$, $|\pi|$ denotes the number of subsets in the partition.
The marks $\{\zeta(i):i=1,\dots,|\pi|\}$
represent the positions in $\N$ of the subsets and $|\zeta|=|\{k\in\mathbb{N}:\zeta(i)=k\text{ for some }i\in \{1,\dots,|\pi|\}\}|$. For
$i=1,\dots, |\zeta|$, let $n_i(\pi)$ denote the number of subsets with $\zeta=i$ so that
$\sum_{i=1}^{|\zeta(\pi)|}
n_i(\pi)= |\pi|$.
A subset in the partition can jump to a new unoccupied site at rate $c$ and level I coalescence of
two subsets occurs at rate $\gamma_1$ if they are at the same site. On the other hand all the
subsets at two occupied sites combine to form a single site with all these subsets at rate
$\gamma_2$.
Therefore, given $(|\zeta(\pi)|, (n_1,\dots,n_{|\zeta(\pi)|})) = (k,(n_1,\dots,n_k))$, the possible
transitions are the following (a simulation sketch is given after the list):
\begin{enumerate}
\item $(k,(n_1,\dots,n_k))\to (k+1,(n_1,\dots,n_i-1,\dots,n_k,1))$ at rate $cn_i 1_{n_i>1}$,
\item $(k,(n_1,\dots,n_k))\to (k,(n_1,\dots,n_i-1,\dots,n_k))$ at rate $\gamma_1 n_i(n_i-1)$,
\item $(k,(n_1,\dots,n_i,\dots,n_j,\dots,n_k))\to (k-1,(n_1,\dots,n_i+n_j,\dots,n_k))$ at rate
$\gamma_2 k(k-1)$.
\end{enumerate}
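\noindent The following Python sketch simulates the jump chain of these transitions by the
Gillespie method; the rates follow the list above verbatim, while the initial state and the
parameter values are illustrative assumptions.
\begin{verbatim}
import random

def step(ns, c, g1, g2):
    """One jump of the marked coalescent; ns lists (n_1,...,n_k)."""
    k = len(ns)
    events, rates = [], []
    for i, n in enumerate(ns):
        if n > 1:
            events.append(('migrate', i));   rates.append(c * n)            # type 1
            events.append(('coalesce1', i)); rates.append(g1 * n * (n - 1)) # type 2
    if k > 1:
        events.append(('coalesce2', 0));     rates.append(g2 * k * (k - 1)) # type 3
    if not events or sum(rates) == 0.0:
        return ns                            # absorbing state (1,(1))
    kind, i = random.choices(events, weights=rates)[0]
    if kind == 'migrate':
        ns[i] -= 1; ns.append(1)             # a subset jumps to a new site
    elif kind == 'coalesce1':
        ns[i] -= 1                           # two subsets at site i merge
    else:
        a, b = random.sample(range(k), 2)    # two occupied sites combine
        ns[a] += ns[b]; del ns[b]
    return ns

ns = [3, 2, 2]                  # initial state with k = 3 sites (assumption)
c, g1, g2 = 0.0, 1.0, 1.0       # part (a) of the proposition below: c = 0
while ns != [1]:
    ns = step(ns, c, g1, g2)
print(ns)                       # -> [1], i.e. the state (1,(1))
\end{verbatim}
With $c=0$ and $\gamma_1,\gamma_2>0$ the chain is absorbed at $(1,(1))$, in accordance with part
(a) of the proposition below.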
\begin{proposition} (The multilevel coalescent)\\
(a) Consider the case with $s_1=s_2=c=0$, $\gamma_2>0$ and $\gamma_1>0$. Then the two level coalescent converges to $(1,(1))$.\\
(b) If $c>0$ and $\gamma_1>0,\; \gamma_2=0$, then the coalescent process started at
$(k_0,(n_1,\dots,n_{k_0}))$ with $k_0<\infty$, converges to $(\wt k,(1,\dots,1))$ for some random
$\wt k$.
\end{proposition}
\begin{proof} (a) Due to jumps of type 3 at rate $\gamma_2 k(k-1)$ the process will eventually reach an element of the
form $(1,(n))$. Then due to jumps of type 2 the element $(1,(1))$ is
reached in a finite random time.\\
(b) If $\gamma_2 =0$, the number of occupied sites is nondecreasing and at each site level I
coalescence leads to a single element.
\end{proof}
\section[Multilevel population systems with two types]{Long-time behaviour of two type multilevel population systems}
The class of two-level Fleming-Viot systems obtained by Theorem \ref{T.1} with dual representation
given by Theorem \ref{T.3} describe a rich class of population systems. The evolution of their
structure over different time scales depends on various parameters, including mutation, levels I
and II selection, migration, and demographic stochasticity rates, and leads to different classes of
behaviour.
In this section we consider the simplest case of a system with two types $\mathbb{I}=\{1,2\}$, no
mutation, migration rate $c$, levels I and II selection rates $s_1,s_2$ and levels I and II
resampling rates $\gamma_1,\gamma_2$. We first consider the deterministic case (i.e. the infinite
population case at both levels, $\gamma_1=\gamma_2=0$) and then the random cases with $\gamma_1>0$
and/or $\gamma_2>0$.
\subsection{Nonlinear measure-valued dynamics with $\mathbf{\gamma_1=\gamma_2=0}$}
\begin{proposition}\label{P.4} Consider the two level system in which $\gamma_2= 0$ and $P(\Xi_0=\nu_0)=1,\; \nu_0\in\mathcal{P}(\mathcal{P}(\mathbb{I}))$. Then the
$\mathcal{P}(\mathcal{P}(\mathbb{I}))$-valued process $\Xi_t=\nu_t$ is deterministic.
\end{proposition}
\begin{proof}
This follows immediately from the dual representation, since $\mathrm{Var}\left(\int
h(\mu)\Xi_t(d\mu)\right)=0$ for any $h\in\mathbb{H}$. The variance vanishes because $E[(\int
h(\mu)\Xi_t(d\mu))^2]$ is obtained by starting the dual with
\[ h(\mu_1)\otimes_2 h(\mu_2)\]
and, since no level II coalescence occurs, the two descendant sets evolve independently, so that the second moment factorizes.
\end{proof}
Now assume that the mutation, migration and genetic drift parameters are zero and
\be{}V_1(x)= 1_{D}(x),\; D\subset \mathbb{I},\ee and level II fitness function of the form: \be{}
V_2(\mu)= \mu(B)\text{
with } B\subset \mathbb{I}.\ee (This can be generalized to $V_2(\mu)=\mu^\otimes(B)$ with $B\subset (\mathbb{I})^K$ for some
$K\in\N$ but we consider here the case $K=1$ to keep things simple.)
\bigskip
The transitions of the dual due to level I, respectively, level II,
are given by
\be{}\label{E.3.0} 1_C(x_{11})\to 1_{D\cap C}(x_{11})+1_{D^c}(x_{11})\otimes_1 1_C(x_{12})\quad
\text{at rate }s_1,\ee
\be{}\label{E.3.00} 1_C(x_{11})\to 1_B(x_{11})\otimes_11_{C}(x_{12})+1_{B^c}(x_{11})\otimes_2
1_C(x_{21})\quad \text{at rate }s_2.\ee
\bigskip
\begin{theorem}\label{T.5}
Assume that $\mathbb{I}=\{1,2\}$ with fitness functions \be{}\label{E.3.1} V_1(1)=0,\;
V_1(2)=1,\ee and \be{}\label{E.3.2} V_2(\mu)= \mu(1),\ee selection rates $s_1\geq 0,\; s_2\geq 0$
and all other parameters equal to zero.
Then as $t\to\infty$,\\
(a) If $s_1>0,\; s_2=0$, and $\nu_0\ne \delta_{(1,0)}$, then $\nu_t \to \nu_0((1,0))\,\delta_{(1,0)}+(1-\nu_0((1,0)))\, {\delta_{(0,1)}}$,\\
(b) If $s_1=0$, $s_2>0$, for some $x_0\in (0,1]$, $\nu_0(\{\mu:\mu(1)\geq x_0\})>0$, then $\nu_t\to
\delta_{(p^*,1-p^*)}$ with $p^*\geq \int \mu(1)\nu_0(d\mu)$.
\end{theorem}
\begin{proof} We compute $\frac{d}{dt}E_{\nu_0}(\int\mu(1)\nu_t(d\mu))$ using the equivalent dual
form $\frac{d}{dt}E_{\mathcal{G}_0=\{1\}}(\mathcal{H}(\nu_0,\mathcal{G}_t))$. To compute the latter
consider the first dual transition, either a level I selection jump (\ref{E.3.0}) or a level II
selection jump (\ref{E.3.00}). This yields $1_1\to 1_1\otimes_1 1_1$ at rate $s_1$ and $1_1\to
1_1\otimes_1 1_1 + 1_1\otimes_2 1_2$ at rate $s_2$ and therefore
\noindent
\be{}\label{E.IN} \frac{d}{dt}E_t(\mu(1)) =s_2\text{Var}_{2,t}(\mu(1))
-s_1(E_t(\mu(1))-(E_t(\mu(1)))^2)\ee where
\[E_t(\mu(1))=\int\mu(1)\nu_t(d\mu),\;\text{Var}_{2,t}(\mu(1)) = \int \mu(1)^2\nu_t(d\mu)- (\int \mu(1)\nu_t(d\mu))^2\]
\noindent (a) If $s_2=0$, then (\ref{E.IN}) yields the ODE
\be{} \frac{d}{dt}E_t(\mu(1))= s_1[(E_t(\mu(1)))^2-E_t(\mu(1))],\ee and the result follows
immediately.
\bigskip
\noindent (b) If $s_1=0$, then by (\ref{E.IN}) $E_t(\mu(1))$ is nondecreasing and strictly
increasing if $\text{Var}_{2,t}(\mu(1))>0$. Since $E(\mu(1))\leq 1$ and
$\text{Var}_{2,t}(\mu(1))>0$ unless $\nu_t =\delta_{(x,1-x)}$ for some $x\in [0,1]$,
$\lim_{t\to\infty} \text{Var}_{2,t}(\mu(1)) =0$ and $\nu_t\to \delta_{(p^*,1-p^*)}$ with
$p^*\geq\int \mu(1)\nu_0(d\mu)$.
\end{proof}
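\noindent A minimal numerical check of part (a): the ODE $\frac{d}{dt}E_t = s_1(E_t^2-E_t)$ has the
explicit logistic solution $E_t = E_0e^{-s_1t}/(1-E_0+E_0e^{-s_1t})$, which tends to $0$ for
$E_0<1$. The Python sketch below (all values are illustrative assumptions) compares an Euler
integration with this closed form.
\begin{verbatim}
# Sketch: Euler integration of dE/dt = s1*(E^2 - E) from Theorem T.5(a),
# compared with the explicit logistic solution. Values are illustrative.
import math

s1, E0, T, N = 1.0, 0.7, 10.0, 100000
dt, E = T / N, E0
for _ in range(N):                      # explicit Euler scheme
    E += dt * s1 * (E * E - E)

exact = E0 * math.exp(-s1 * T) / (1 - E0 + E0 * math.exp(-s1 * T))
print(E, exact)                         # both close to 0: type 1 dies out
\end{verbatim}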
In the case $\gamma_1=0$, $\gamma_2=0$, the system is deterministic. In order for level II
selection to play a role, diversity in the composition of clusters is required as first pointed out
by Maynard Smith (recall the discussion in subsubsection 1.1.1). In particular, for
$\nu_0\in\mathcal{P}(\mathcal{P}(\mathbb{I}))$, let $\widetilde{\nu}_0(dx):= \nu_0(\{\mu:\mu(1)\in
dx\})\in \mathcal{P}([0,1])$. The following is a version of a result of Luo \cite{S-13}.
\begin{theorem}\label{T.6} Assume (\ref{E.3.1}) and (\ref{E.3.2}), $c=0$, and $s_1>0$. Assume that $\widetilde \nu_0(dx)$ has
a continuous density $\widetilde \nu_0(x)$. Then \\
(a) If there exists $x_0<1$ such that $\widetilde{\nu}_0([x_0,1])=0$,
then $\widetilde \nu_t \to \delta_0,\;\; \nu_t\to \delta_{(0,1)}$.\\
(b) If the density $\widetilde{\nu}_0$ is continuous and positive at $1$, then there exists a
critical $s_2^*$ such that if $s_2<s_2^*$, then $\nu_t \to {\delta_{(0,1)}}$ and if $s_2>s_2^*$,
then there exists an equilibrium distribution $\nu_{eq}$ with $\nu_{eq}(\{\mu:\mu(1)>0\})>0$ .
\end{theorem}
\begin{proof} This was established by S. Luo \cite{S-13} (see (A.5) in the case $\eta= U$) by solving an integro
partial differential equation for $\widetilde{\nu}_t(x)$, the density of the distribution of
$\mu_t(1)$. Luo's equation is given by \be{}\label{E.Luo} \frac{\partial}{\partial
t}\widetilde{\nu}(t,x)=\frac{\partial}{\partial
x}\left(\widetilde{\nu}(t,x)x(1-x)\right)+\lambda\widetilde{\nu}(t,x)\left[x-\int_0^1 \widetilde{\nu}(t,y)y\,dy\right]\ee
with solution \be{}\label{E.Luo2}
\widetilde{\nu}(t,x)=\widetilde{\nu}_0\left(\frac{xe^t}{1+x(e^t-1)}\right)e^{t-\lambda\int_0^th(s)ds}[1+x(e^t-1)]^{(\lambda-2)}\ee
where $\lambda=\frac{s_2}{s_1}$ and $h(t)=\int_0^1y\widetilde{\nu}(t,y)dy$.
\noindent (a) For any $\varepsilon >0$ there exists $t_0(\varepsilon)$ such that for $x\geq
\varepsilon$ and $t\geq t_0(\varepsilon)$, $\frac{xe^t}{1+x(e^t-1)}\geq x_0$ and therefore the
right side of (\ref{E.Luo2}) is 0.
\noindent (b) In the case $\widetilde{\nu}_0$ is uniform on $[0,1]$ and $\lambda\ne 1$, (\ref{E.Luo2}) becomes
\be{} \widetilde{\nu}(t,x)=\frac{(e^t-1)(\lambda
-1)}{e^{t(\lambda-1)}-1}[1+x(e^t-1)]^{(\lambda-2)}\ee where $\lambda = s_2/s_1$. When $\lambda <1$,
we have \be{} \int_0^1 x\widetilde{\nu}(t,x)dx\to 0.\ee When $\lambda >1$, we get
\be{}\label{E.lim} \widetilde{\nu}_{eq}(1)= \lim_{t\to\infty} \widetilde{\nu}(t,1) = \lambda
-1,\ee and
\be{} \int_0^1 x\widetilde{\nu}(t,x)dx\to \frac{\lambda -1}{\lambda}.\ee
\end{proof}
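\noindent The following Python sketch (with illustrative parameter values) evaluates the explicit
solution above for a uniform initial density and $\lambda=s_2/s_1>1$, checking numerically that the
density at $x=1$ tends to $\lambda-1$ and the mean tends to $(\lambda-1)/\lambda$.
\begin{verbatim}
# Sketch: numeric check of the explicit solution for a uniform initial
# density, lambda = s2/s1 > 1. Values of lambda and t are illustrative.
import math

lam, t = 3.0, 12.0
A = math.expm1(t) * (lam - 1) / math.expm1(t * (lam - 1))

def nu(x):                               # the explicit density above
    return A * (1 + x * math.expm1(t)) ** (lam - 2)

n, h, mean = 100000, 1.0 / 100000, 0.0
for i in range(n):                       # trapezoidal rule for the mean
    x0, x1 = i * h, (i + 1) * h
    mean += 0.5 * h * (x0 * nu(x0) + x1 * nu(x1))

print(nu(1.0), lam - 1.0)                # density at 1 -> lambda - 1
print(mean, (lam - 1.0) / lam)           # mean -> (lambda - 1)/lambda
\end{verbatim}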
\bigskip
From Theorem \ref{T.6}(a) it follows that the survival of type $1$ depends only on the initial
density $\widetilde{\nu}(0,x)$ near $x=1$ and (\ref{E.lim}) remains positive if the initial density
is positive at $x=1$. This means that level II selection can overcome level I selection with two
types only if there is a positive density of sites (subpopulations) at time $t=0$ with arbitrarily
small (or zero) proportions of the individually more fit types. We next consider the role of
randomness and demonstrate that positive level I genetic drift can lead to a phase transition
with the possibility of survival of inferior types for any initial $\nu_0$ provided that $\int
\mu(1)\nu_0(d\mu)\in (0,1)$.
\subsection{Phase transitions: the role of randomness}
In this subsection we again consider the case in which level two selection favours colonies that
include altruistic or cooperative types. In the previous subsection level II selection exploited
the diversity in the initial distribution among demes.
\subsubsection{Level I randomness}\label{sss.rand1}
In this subsection we now consider the parameter regions for dominance of level I or level II
selective effects and the transition between these phases where level II selection acts on the
diversity in deme composition resulting from local genetic drift at each deme, that is, when
$\gamma_1>0$.
\begin{theorem}\label{T.7}
Consider the case $\mathbb{I}=\{1,2\}$.
Assume that $c>0$, $\gamma_1 >0$, $\gamma_2=0$, $s_1\geq 0$, $s_2\geq 0$, and $\int
\mu(1)\nu_0(d\mu)\in (0,1)$. The individual (level I) fitness is
\[ V_1(1)=0,\;V_1(2)=s_1,\] the migration rate is $c$, and the deme (level II) fitness is
\[V_2(\mu)=s_2\mu(1).\]
\medskip
\noi (a) Assume $m_{12}=m_{21}=0$ and $s_2=0,\, s_1 >0$. Then $\int_{\mathcal{P}(\mathbb I)}
\mu(1)\nu(d\mu)\to 0$ with exponential decay rate.
\noi (b) Assume $m_{12}=m_{21}=0$ and $s_1=0,\, s_2 >0$. Then $\int_{\mathcal{P}(\mathbb I)}
\mu(2)\nu(d\mu)\to 0$ with exponential decay rate.
\noi(c) Assume $m_{12}=m_{21}=0$. Then for fixed $c>0,\;\gamma_1>0,\; s_1>0$, there is a critical value $s_2^*(c,\gamma_1,s_1)\in (0,\infty)$ such that if $s_2>s_2^*$, level II selection dominates, that is, for every $\varepsilon>0$,
\[{ \nu_t(\{\mu:\mu(2)>\varepsilon\})\to 0,}\]
and if $s_2<s_2^*$, level I selection dominates, that is, for every $\varepsilon >0$,
\[{ \nu_t(\{\mu:\mu(1)>\varepsilon\})\to 0.}\]
\noi(d) Assume that $m_{12},m_{21}>0$. Then there exists a unique equilibrium and the system
converges to the equilibrium measure as $t\to\infty$.
\end{theorem}
\begin{proof} (a) and (b). We use the dual $\{\mathcal{G}_t\}$ with initial value
$\mathcal{G}_0= \{2\}\;(\text{or }\{1\})$ to compute the first moment $E_{\Xi_0}\left[ \int
\mu(2)\Xi_t(d\mu)\right]= E_{\mathcal{G}_0}\left[ H(\Xi_0, \mathcal{G}_t)\right]$ where
$\Xi_0=\delta_{\mu_0}$. Let $p=\mu_0(2)$.
We now identify two basic combinations of transitions that will either increase or decrease this
expression.
\noi (1) Level I selection followed by migration before coalescence.
\medskip
\noi Transitions in terms of indicator functions of the sets,
\begin{eqnarray*}
1_{2}(1) && \rightarrow1_{2}(1)+1_{1}(1)\otimes_1 1_{2}(1)\quad\text{level I selection at rate }s_1\\
&& \rightarrow1_{2}(1)+1_{1}(1)\otimes_2 1_{2}(2)\;\;\text{migration}\end{eqnarray*} so that after
integration we obtain \be{} p \rightarrow p+(1-p)p=2p-p^{2}=1-(1-p)^{2}.\ee
\noi Note that after the migration step, this change can no longer be reversed by a coalescence at
site $1$. Migration before coalescence occurs with probability $\frac{2c}{2c+\gamma_1}$, so this
combination occurs with effective rate $s_1\frac{2c}{2c+\gamma_1}$.
\noi Similarly these transitions result in
\begin{eqnarray*}
1_{1}(1) && \rightarrow0+1_{1}(1)\otimes_11_{1}(1)\quad\text{level I selection}\\
&& \rightarrow1_{1}(1)\otimes_21_{1}(2)\;\;\text{migration}\\
(1-p) && \rightarrow(1-p)^{2}.
\end{eqnarray*}
\noi In the absence of level II selection this can be identified with a Crump-Mode-Jagers (CMJ)
branching process (see \cite{D10}(3.1.4), \cite{N} ) in which individuals are occupied sites and
during their lifetimes these individuals produce new offspring sites. Since the death rate is zero
this is a supercritical branching process with exponential growth rate, the Malthusian parameter
$\alpha_1$ (see \cite{DG-14} subsubsection 8.3.4) which can be represented in terms of the stable
age distribution \cite{DG-14},(8.167). This proves (a).
\medskip
\bigskip
\noi (2) Level II selection. We consider the effect of a level II selection event followed by either migration before coalescence or coalescence before migration.
\begin{eqnarray*}
1_{2}(1) && \rightarrow1_{1}(1)\otimes_1 1_{2}(1)+1_{2}(1)\otimes_2 1_{2}(2)\quad\text{ level II
selection at rate }s_2\\&& \rightarrow 1_{1}(1)\otimes_21_{2}(2)+1_{2}(1)\otimes_21_{2}(2)=
1_2(2)\\&&\;\;\;\; \qquad\qquad\hspace{2cm}\text{migration before coalescence with probability
}\frac{2c}{2c+\gamma_1}
\\&& \rightarrow 0+1_{2}(1)\otimes_21_{2}(2)\;\;\;\;\text{coalescence before migration with
probability }\frac{\gamma_1}{2c+\gamma_1} \end{eqnarray*} so that after integration we have
\be{} p\to p,\hspace{2cm} p\to p^2,\ee in the two cases.
Similarly in the second case (coalescence before migration) we have
\medskip
\begin{eqnarray*}
1_{1}(1) && \rightarrow1_{1}(1)\otimes_11_{1}(1)+1_{2}(1)\otimes_2 1_{1}(2)\quad\text{ level II selection}\\
&& \rightarrow1_{1}(1)+1_{2}(1)\otimes_21_{1}(2)\;\;\;\;\text{coalescence before migration with
probability }\frac{\gamma_1}{2c+\gamma_1},\end{eqnarray*}
so that after integration we have
\be{}(1-p) \rightarrow(1-p)+p(1-p)=1-p^{2}, \ee and this occurs with effective rate
$s_2\frac{\gamma_1}{2c+\gamma_1}$. \medskip
In the absence of level I selection this again produces a supercritical CMJ branching process with Malthusian parameter
$\alpha_2$ in which
individuals are occupied sites (where an occupied site is a site at which the set is not
$\mathbb{I}$). This proves (b).
\bigskip
\noindent (c) When both $s_1>0,\; s_2 >0$ there is a competition between the two levels of
selection. In the dual setting this corresponds to the competition between two branching
mechanisms, that is, {\em competing CMJ branching processes}.
\medskip
We now focus on the interaction of the competing CMJ processes. To do this we first consider the effect of a level I transition at a site where a level II transition
has already occurred. This leads to \be{} 1_2\to_{II} 1_2\otimes_2 1_2\to_{I} 1_2\otimes_2 1_2
+1_2\otimes_21_1\otimes_2 1_2,\ee that is, in terms of sets \[ \{2\}\supset
(\{2\}\otimes_2\{2\})\subset(\{2\}\otimes_2\{2\})\cup (\{2\}\otimes_2\{1\}\otimes_2\{2\})\subset
\{2\}\]
which after integration leads to \be{} p\to_{II} p^2\to_{I}
p^2+p^2(1-p) \in [p^2, p]. \ee This means that a level I transition after a level II transition
partially reverses the first effect (but does not overshoot the reversal). (The same happens if a
level II follows a level I transition.) Therefore we can obtain a bound on the decrease in the
mean of type 2 due to level II transitions by completely reversing them at the rate of level I
transitions.
First, assume that $s_1\frac{c}{c+\gamma_1}>s_{2}\frac{\gamma_1}{\gamma_1+c}$. We can construct a
birth and death process with births (new factors $1_1(\cdot)$ via level I selection) at rate
$s_1\frac{c}{c+\gamma_1}$. We note that we can obtain a domination by letting the action of level
II selection remove a $1_1(\cdot)$ (i.e. replacing it by $1_{2\cup 1}$ rather than something
intermediate); this produces a death rate $s_2\frac{\gamma_1}{c+\gamma_1}$. Therefore the
resulting birth and death process is supercritical and goes to infinity. Thus in this case, if
$\int\mu(2)\nu_0(d\mu)>0$, then $\int\mu(2)\nu_t(d\mu)\to 1$ and type 2 takes over in the limit in
the McKean-Vlasov system.
Similarly, if $s_2\frac{\gamma_1}{c+\gamma_1}>s_{1}\frac{c}{\gamma_1+c}$, then we have a birth process with $1_2$ factors produced due to level
II
selection at rate $s_2\frac{\gamma_1}{c+\gamma_1}$ and a removal of these factors by level I
selection at rate $s_1\frac{c}{c+\gamma_1}$. Therefore if $\int\mu(1)\nu_0(d\mu)>0$, then
$\int\mu(1)\nu_t(d\mu)\to 1$ and type $1$ wins out and takes over in the McKean-Vlasov system.
\bigskip
\noi(d) This is a special case of Theorem \ref{T.ergodic2}.
\end{proof}
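\noindent The dichotomy in part (c) reduces to comparing the two effective birth rates identified
in the proof. The following Python sketch (a schematic classification, not a proof; parameter
values are illustrative assumptions) encodes this comparison, whose threshold reproduces the
condition (\ref{kcond}) discussed below.
\begin{verbatim}
# Sketch: comparison of the effective birth rates of the two competing
# CMJ branching mechanisms in the proof of part (c).
def dominant_level(s1, s2, c, gamma1):
    """Compare the effective birth rates of the two competing processes."""
    level_I  = s1 * c / (c + gamma1)       # level I selection then migration
    level_II = s2 * gamma1 / (c + gamma1)  # level II selection then coalescence
    if level_I > level_II:
        return "level I dominates: type 2 takes over"
    if level_II > level_I:
        return "level II dominates: type 1 takes over"
    return "critical case: s2* = s1*c/gamma1"

print(dominant_level(s1=1.0, s2=0.5, c=2.0, gamma1=1.0))  # s1*c > s2*gamma1
print(dominant_level(s1=1.0, s2=4.0, c=2.0, gamma1=1.0))  # s2*gamma1 > s1*c
\end{verbatim}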
\bigskip
\begin{remark} We can also consider the {\em linear stability} of the fixed points $\delta_{(1,0)},\delta_{(0,1)}$
by computing
\[ \frac{d}{dt}E_{(\varepsilon,1-\varepsilon)}(\mu(1))|_{t=0} = {s_2 \gamma_1}-{s_1c} \]
for small $\varepsilon$ (e.g. under the condition $s_2\leq c$ discussed below). This
is given by the first change due to a jump of the dual process started at $1_1$. This means that
level II selection prevails if \be{kcond}{s_2 \gamma_1}>{s_1c}.\ee Condition (\ref{kcond}) was
derived by Kimura \cite{K-83} using the properties of the equilibria of the solution of equation
(\ref{Kim}) for the density $U(t,x)$.
Existence and
uniqueness of the solution to (\ref{Kim}) was established by Shimakura \cite{Sm-85}, namely, for
any initial distribution there exists a unique solution. Also see Shiga \cite{Sh-87} for a
generalization to the multitype case. A more complete description of the long-time behaviour is
proved in Ogura and Shimakura \cite{OS-87} Theorem 3(iii) (in the case $s_2\leq c$, so that
$s_1/\gamma_1 \leq 1$ if also $s_2>s_1c/\gamma_1$, which corresponds to the case in which the
expected number of dual factors at a site is bounded above by 2). They also obtain a related result
in Theorem 5 in the case in which the mutation rates satisfy $m_{12}>0,m_{21}>0$ or
$m_{12}>0,m_{21}=0$ or $m_{12}=0,m_{21}>0$. These results were obtained using ODE methods and
explicit solutions in terms of hypergeometric functions.
The ODE methods cannot be extended to populations having more than two types or systems with level
II coalescence. The objective of this paper is to develop the dual representation and the above
model of competing branching mechanisms to study the long-time behaviour of multitype-multilevel
systems. An example involving three types is given in Theorem \ref{T.cooperation}.
\end{remark}
\subsubsection{The case $\gamma_1>0,\;c=0$}
Consider the case again with no mutation, level II fitness
function $V_2(\mu)=\mu(2)$, $s_2>0$, $c=\gamma_2=0$ but with $\nu_0=\delta_{\mu_0}$, that is, no
initial diversity in the composition of demes but with $\mu_0(i)>0,\; i=1,2$.
Then level II selection can dominate, that is, $\mu_t(2)\to 1$ only if
$\gamma_1>0$. This means that group selection cannot be effective if $\gamma_1=0$ in the case
$\nu_0=\delta_{\mu_0}$.
In the case $c=0$, $\gamma_2=0$, $\gamma_1>0$, fixation occurs within each deme so that eventually
the competition is between demes of type $1$ and demes of type $2$ and then $\mu_t(2)\to 1$ if
$s_2>0$. To verify this we have \bea{} &&(10)\to (01)\otimes_1(10)+(10)\otimes_2(10)\quad\text{ at
rate }s_2\\&&\qquad \to (10)\otimes_2(10)\quad\text{ at rate }\gamma_1\nonumber\eea
\bea{}&& (10)\to (10)+(01)\otimes_1(10)\quad\text{ at rate }s_1\\&&\qquad \to (10) \quad\text{at
rate }\gamma_1.\nonumber\eea Since coalescence dominates level I selection here due to its
quadratic rate, the dual effectively develops as $(10)^{\otimes_2 k(t)}$ with
$k(t)\to\infty$. Then if $\mu_0(1)<1$, $\nu_t=\delta_{\mu_t}$ and $\mu_t(1)\to 0$ as $t\to\infty$.
\subsubsection{Level II randomness $\mathbf{\gamma_2>0}$}
\beP{} (a) If $\gamma_1=0$, $\mu_0(2)<1$, $\nu_0$-a.s., and $s_1>0,\;\gamma_2>0,\; c>0$, then
\be{}E\left[\int\mu(2)\Xi_t(d\mu)\right]\to 0\text{ as }t\to\infty.\ee
\medskip
\noi (b) If $\gamma_1>0$ and $\gamma_2>0$, then fixation occurs, that is,
\be{}{}E\left(\int\mu(1)\Xi_t(d\mu)\cdot\int\mu(2)\Xi_t(d\mu)\right)\to 0,\quad \text{as }
t\to\infty.\ee
\end{proposition}
\begin{proof}
(a) This follows by a simple dual calculation as follows: \be{} (01)\to (01)\otimes_1 (01) \text{
at rate }s_1,\ee
\be{} (01)\to (01)\otimes_1 (01) \to (01)\otimes_2 (01)\text{ at rate }c,\ee but then
\be{}(01)\otimes_2 (01)\to (01)\otimes_1(01)\text{ at rate }\gamma_2,\ee
and \bea{} &&(01)\to (01)\otimes_1 (01)
+(10)\otimes_2(01)\text{ at rate } s_2\\&&\qquad\to (01)\otimes_1 (01) +(10)\otimes_1(01)=
(01)\text{ at rate } \gamma_2\nonumber\eea Since coalescence dominates level II
selection here due to its quadratic rate, the dual effectively develops as $(01)^{\otimes_1
k(t)}$ with $k(t)\to\infty$. This implies that level II selection has no long-term effect and
level I selection leads to $\int (\mu(2))^{k(t)}\nu_0(d\mu)\to 0$ as $t\to\infty$.
\medskip
\noi (b) This follows by a two-level version of the calculation in Example \ref{ex6}.
\end{proof}
\bigskip
\section{Multitype multilevel population systems}
In this section we consider the evolution of more complex population systems with possibly many
types, multilevel structure and different combinations of the basic mechanisms. In this case it is
no longer possible to use the methods of one dimensional nonlinear dynamics or one dimensional
diffusions, that is, methods involving ordinary differential equations. We will outline the
application of the dual representations in some examples in this class.
\subsection{ Equilibria and fixation probabilities}
An important application of the dual is to obtain results on ergodic properties of multilevel
systems. We begin by reviewing the results for the case with only level I selection in
\cite{DG-14}.
\begin{theorem}\label{T.ergodic} {\em Ergodicity of McKean-Vlasov systems with level I selection.}
Assume that $\gamma_2=0$ and that at time $t=0$, all demes have the same individual distribution,
say $\mu_0$, that is $\nu_0=\delta_{\mu_0}$. If the mutation rates are positive on $\mathbb{I}$
and the migration rate $c>0$, then the limiting empirical process is given by a deterministic
McKean-Vlasov dynamics $\{\nu_t\}_{t\geq 0}$ where $\nu_t=\mathcal{L}({\mu_t})$. Moreover, as
$t\to\infty$, $\nu_t$ converges to a unique equilibrium $\nu_{eq}$.
\end{theorem}
\begin{proof} See \cite{DG-14} Theorem 12, Theorem 14 for the case $s_2=0$. The extension to
include the case $s_2>0$ follows along the same lines.
\end{proof}
We now consider the extension of the ergodicity result to
the case in which $\gamma_2>0$.
\begin{theorem}\label{T.ergodic2} {\em Ergodicity of two level Fleming-Viot systems.} Assume that
$\gamma_2 > 0$, $s_2\geq 0$,
and that the mutation rates are positive on $\mathbb{I}$. Then the law of the two level
Fleming-Viot process converges to a unique equilibrium
$P_{eq}\in\mathcal{P}(\mathcal{P}(\mathcal{P}(\mathbb{I})))$.\\
\end{theorem}
\begin{proof} We adapt the proof of \cite{DG-14}. Due to level II coalescence the number of sites occupied by
the dual can be reduced to 1 and this event is recurrent. Note that the equilibrium mean measure
of a subset of $\mathbb{I}$ is given by the probability that the dual process starting at the
indicator of the subset hits the absorbing point $\mathbb{I}^{\N}$ but this occurs with positive
probability at each time the event occurs. Therefore an absorbing point is reached with probability
1.
\end{proof}
We now consider the case in which the mutation Markov chain has two or more ergodic classes. In
this case the system can be non-ergodic. If $\gamma_1>0$, then eventually the population will be
concentrated on one of the ergodic classes and the problem is to compute the fixation
probabilities.
\beT{}\label{T.fixation} {\em Fixation probabilities}
\begin{description}
\item [(a)]{\em Single deme.} Assume that the initial configuration is iid $\mu_0$ and that the mutation chain has two or more ergodic classes $\mathbb{I}_1,\dots,\mathbb{I}_\ell$, $\gamma_1>0$,
and $s_1\geq 0$. Then the population is ultimately fixed on one of the classes and the probability
that it is in class $k$ is given by the equilibrium measure of the dual chain started with $\mathcal{G}_0 = \mathbb{I}_k$, integrated with respect to the initial measure $\mu_0$.
\item[(b)] {\em Two level Fleming-Viot systems.} Assume that $\gamma_1>0$, $\gamma_2 >0$, $V_1,V_2>0$ and that the mutation Markov chain has two or
more ergodic classes. Then there is ultimate fixation of a single ergodic class and the law of the
two level system converges to a random mixture of pure equilibrium single class populations.
\end{description}
\end{theorem}
\begin{proof} (a) We again use the dual representation. To verify ultimate fixation take $\mathcal{G}_0 =\prod_{i=1}^{\ell^\prime} \mathbb{I}_i\subset \mathbb{I}^\ell$, $2\leq\ell^\prime\leq\ell$. Then due to the quadratic rate coalescence, with probability 1, $\mathcal{G}_\tau =\emptyset$ at some random time $\tau$ and this is a trap so that $\mathcal{G}_t =\emptyset$ for large $t$. To compute the probability that ergodic class $k$ is chosen, start the dual with $\mathcal{G}_0 = \mathbb{I}_k$. The dual process jumps are within class mutation jumps, selection jumps
and coalescence jumps. Due to coalescence the state $\mathcal{G} = \mathbb{I}_k$ is recurrent and
therefore the dual is positive recurrent and converges to equilibrium as $t\to\infty$. The
fixation probability (limiting probability that ergodic class $k$ is chosen) is then obtained by integrating with respect to the initial measure
$\mu_0^\otimes$.
\noi (b) We again use the dual process. Due to level II coalescence there will eventually be an
equilibrium distribution of the number of sites occupied by the dual and in fact the single site
situation occurs infinitely often. Then each time there is non-zero probability that one of the
classes will be eliminated and the within class mutation will hit an absorbing point as in (a).
\end{proof}
\begin{example} Assume that $\mathbb{I}=\{1,2,3\}$, no mutation, $\gamma_1 >0,\;\gamma_2>0$ and
$\nu_0=\frac{1}{3}\delta_{\delta_{1}}+
\frac{1}{3}\delta_{\delta_{2}}+\frac{1}{3}\delta_{\delta_{3}}$. We use the set-valued dual
$\mathcal{G}_t$. If $\mathcal{G}_0=\{i\}_1\otimes_2\{j\}_2$, then by level II coalescence
followed by level I coalescence, $\mathcal{G}_\tau = \emptyset$ at some finite random time $\tau$
if $i\ne j$ and therefore only one type survives. We also note that the dual process
$\mathcal{G}_t$ starting from $\mathcal{G}_0=\{i\}_1$ is positive recurrent with $\{i\}_1$ as a
renewal point - this is due to the levels I and II quadratic rate coalescence events (in contrast
to the linear birth rates due to selection events).
Then ``level II fixation'' occurs, namely,
\[ \nu_t\to \delta_{\delta_{i}} \quad \text{with probability } p_i\] and the fixation probabilities $p_i$ are obtained
by integrating the equilibrium dual $\mathcal{G}_{\rm{eq}}(i)$ starting from
$\mathcal{G}_0=\{i\}_1$, that is,
\[ p_i= \nu_0^{\otimes}(\mathcal{G}_{\rm{eq}}(i)).\]
\end{example}
\subsection{Examples of multilevel effects}
The class of multitype multilevel population systems with mutation and selection is extremely rich
and can exhibit many complex dynamical behaviours. We do not discuss this in detail but now give
an illustrative example of a simple effect of this type.
\subsubsection{A model of cooperation}\label{coop}
\medskip
\noindent Consider the 3 type case $\mathbb{I}=\{1,2,3\}$ with level I fitness function
\[ V_1(1)=v_1,\quad V_1(2)=V_1(3)=0,\]
and level II fitness function
\[ V_2(\mu)=v_2\mu(2)\mu(3),\]
with $v_1,v_2>0$. This models a cooperative interaction of two types $2$ and $3$ that endow a
deme containing them with a positive advantage.
\noindent We consider, when $\gamma_2=0$, the emergence
of demes having population distributions $(p_1,p_2,p_3)$
satisfying \begin{equation}\label{threshold} v_1p_1< v_2p_2p_3.\end{equation}
\begin{theorem}{}\label{T.cooperation}
Assume that $c>0$, $\gamma_1>0$, rare mutation from type 1 to types 2,3 occurs at rates
$m_{12}=m_{13}=\ve >0$ and $m_{23},m_{32}>0$, $m_{21}=m_{31}=0$.
(a) If $v_2=0$, then for sufficiently large $v_1>0$ we can have long-time survival of type $1$,
that is, an equilibrium with positive mean proportion of type $1$.
(b) There exists a critical value $v_2^*$ such that for $v_2>v_2^*$ demes with compositions
satisfying the threshold condition (\ref{threshold}) emerge, type 1 becomes extinct and types $2,3$ go to
equilibrium.
\end{theorem}
\begin{proof} (a) This has been proved for the case of two types in \cite{DG-14}, Corollary 10.1. The proof of (a) follows
along the same lines and elements of it are used in the proof of (b). We will outline the main
steps. We use the dual started with $\mathcal{G}_0=\{1\}\otimes \mathbb{I}^{\otimes\mathbb{N}}$.
The corresponding indicator function representation starts with $(100)\otimes(111)^{\otimes
\mathbb{N}}$. Then selection followed by migration leads to new summands. Mutation has the
effect of eliminating summands so that together we have a birth and death process. There are two
possible outcomes. First, $(100)$ can be a recurrent point which eventually changes
to $(000)$ with probability 1. The second possibility is that the birth and death process is
supercritical, in which case the mean proportion of type $1$ is positive under the invariant measure.
(b) We consider $\lim_{t\to\infty}\int \mu(2\cup 3)\nu_t(d\mu)$ and $\lim_{t\to\infty}\int
\mu(1)\nu_t(d\mu)$. Note that (using indicator functions of subsets of $\mathbb{I}$) the level II
selection transitions give \bean{}&&(011)\to\\&& (010)\otimes_1(001)\otimes_1 (011)\\&&+
[(100)\otimes_1 (111)+(010)\otimes_1 (110)+(001)\otimes_1 (111)] \otimes_2 (011)\eean In order to
produce a permanent summand we then require two migration or mutation transitions before
coalescence of the first two terms; this occurs with positive rate, so it can have an
important effect for large $v_2$. This can then lead to
\bean{}&&(011)\to\\&&
(010)\otimes_1(001)\otimes_1 (011)\\&&+ [(100)\otimes_1 (111)+(010)\otimes_1 (110)+(001)\otimes_1
(111)] \otimes_2 (011)\\&& \dots \to (011)+(100)\otimes_2 (011)\eean
Level I selection leads to
\[ (011)\to (011)\otimes_1 (011).\]
If we then have a level II selection transition,
\bean{} &&(011)\to (011)\otimes_1 (011)\\&&\to (010)\otimes_1(001)\otimes_1 (011)\otimes_1
(011)\\&& + [(100)\otimes_1 (111)+(010)\otimes_1 (110)+(001)\otimes_1 (111)] \otimes_2
(011)\otimes_1(011)\\&&\to \dots (011)\otimes_1 (011)+(011)\otimes_1(100)\otimes_2(011)\eean
\bigskip
We then have competing branching mechanisms, one whose rate is proportional to $v_1$ and one whose
rate is proportional to $v_2$. We can construct birth and death processes where the deaths in the
level I process (birth rate proportional to $v_1$) are caused by mutation and level II selection,
and the deaths in the level II process (birth rate proportional to $v_2$) are caused by level I
selection. Recall that type 1 can only survive if the level I birth and death process is
supercritical (cf. \cite{DG-14}, p. 743). We again have a dichotomy involving the critical
behaviour of two competing branching processes and the result follows as in the proof of Theorem \ref{T.7}.
\end{proof}
\subsubsection{Emergence of mutualistic types}\label{mutual}
There are many different mechanisms involving multilevel selection that can influence the overall
population structure. For example, multilevel selection can make possible the survival of a trait
that then leads to the emergence of another trait that has a mutually beneficial effect which then
gives the first trait higher (inclusive) individual fitness and the pair survives locally.
To illustrate this consider the following modification of the model of subsubsection \ref{coop}.
Let
\[ V_1(1)=v_1,\quad V_1(2,\mu)= v_{M}\cdot \mu(3), \;V_1(3,\mu)=v_{M}\cdot \mu(2),\] and level II fitness function
\[ V_2(\mu)=v_2(\mu(2)+\mu(3)).\]
\begin{theorem}{}\label{T.mutualism}
Assume that $c\geq 0$, $\gamma_1\geq 0$, rare mutation from type 1 to types 2,3 occurs at rates
$m_{12}=m_{13}=\ve \geq 0$ very small, and with $v_1,v_2, v_{M}\geq 0$. Denote by $x_i(t)$ the
expected proportion of type $i$ at time $t$.
\noindent (a) Consider the deterministic case with $\gamma _1=m_{12}=m_{13}=0$, $v_1=1$ and
$v_2=0$, and for simplicity assume that $x_2(0)=x_3(0)>0$. If \[ v_M<\frac{2}{1-x_1(0)},\]
then
\[ x_1(t)\to 1\] as $t\to\infty$.
\noindent (b) If
\[ v_M >\frac{2}{1-x_1(0)},\]
then $ x_1(t)\to 0$ as $t\to\infty$. In the case $ v_M=\frac{2}{1-x_1(0)}$ there is an unstable
equilibrium.
\noindent (c) Consider the two level system. Assume conditions on $\nu_0$, namely on the density
at $1$ of the total mass of types $2$ and $3$ (as in Theorem \ref{T.6}) and that $v_M>2$. Then for
sufficiently large $v_2$ type $1$ becomes extinct and the mutualistic types dominate. Moreover they
continue to dominate even if the level II selection ends, that is, $v_2(t)=1_{[0,T]}(t)\,v_2$ for
sufficiently large $T$.
\noindent In this case we have a two stage process:\\
\noindent Stage 1: Ignoring the mutualistic effect, survival and equilibria of altruistic types $2,3$ by level II selection.\\
\noindent Stage 2: Takeover by the mutualistic pair $2,3$.\\
\end{theorem}
\begin{proof} (a,b) Using the dual as above and decomposing the first transitions of the dual into
the different cases (recall (\ref{sdf})), we obtain the equation
\[ \frac{dx_1(t)}{dt}= x_1(t)\left[(1-x_1(t))-\frac{v_M}{2}(1-x_1(t))^2\right].\]
The result follows by checking the sign of the derivative.
\noindent (c) By Theorem \ref{T.6}, in the two level system, under appropriate conditions on the
density of the total mass of types $2$ and $3$ at $1$, for sufficiently large $v_2$ there is an
equilibrium with mean mass of types $2,3$ greater than $\frac{2}{v_M}$, that is
$1-x_1(0)>\frac{2}{v_M}$, and therefore by part (b) type $1$ becomes extinct and the mutualistic
types dominate. Moreover they continue to dominate even if the level II selection ends.
\end{proof}
\begin{remark} Since the mutualistic pair can then persist even in the absence of further level II selection in (c), this
illustrates the possible role of transient group selection in the long time genetic structure of a
population. Without the role of level II selection, the simultaneous emergence of both types with
at least critical density would be an event of higher order rarity.
\end{remark}
\begin{remark} We can also consider a random version of this. In this case we assume that
$\nu_0=\delta_{(1,0,0)}$, $\gamma_1 >0$, $m_{12}=m_{13}=\varepsilon >0$. Then again for
sufficiently large $v_M$ and $v_2$ type 1 dies out and the mutualistic types take over. The dual
analysis involves four classes of selection birth events, corresponding to the level I fitness of
type 1, the mutualist fitnesses of types 2 and 3, and the level II fitness of sites containing
types 2 and 3, together with level I coalescence. This will be carried out elsewhere.
\end{remark}
\subsubsection{A three level system}
Consider a system with state space $\mathcal{P}(\mathcal{P}(\mathcal{P}(\mathbb{I})))$. For
example, this could model a system of competing regions in which regions contain competing towns
and each town contains a population with type space $\mathbb{I}$. The relative fitness $V_3(\nu)$
of a region (level III fitness) is assumed to depend on the distribution $\nu$ of the
characteristics of the towns it contains, so that $V_3:\mathcal{P}(\mathcal{P}(\mathbb{I}))\to
[0,1]$,
\be{} V_3(\nu)=\prod_{k=1}^K \left[\int h_{k}(\mu_{k})\nu(d\mu_k)\right] \ee where $h_k$ is of the
form (\ref{alg2}), and convex combinations of functions of this form. We also allow individuals to
migrate to a different region and even entire towns to move to a different region. Then the
combined effect of three levels of selection can lead to complex behaviour which can be analyzed
using the set-valued dual with values in $((\mathcal{I})^{\otimes\mathbb{N}})^{\otimes\mathbb{N}}$.
\subsection{The study of more complex multilevel interactions - set-valued Monte Carlo approximation}
For more complex multilevel interactions it is natural to consider simulations since closed form
solutions cannot be expected. However the numerical simulation of systems of Fleming-Viot processes
involves the solutions of systems of nonlinear stochastic partial differential equations with
degenerate boundary behaviour. The numerical simulation of such systems of stochastic
differential equations is difficult. On the other hand the dual formulation involves only the
simulation of continuous time Markov chains with discrete states which are easy to simulate by
Monte Carlo methods. In particular one can compute means and covariances directly by simulating the
dual process and determining the empirical distribution of the outcomes, namely, the absorption
probabilities (for equilibria) or the equilibrium probabilities (for fixation probabilities).
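\noindent As a simple instance of this strategy, the following Python sketch estimates the limiting
first moment of Example \ref{ex6} by direct Monte Carlo simulation of the dual birth and death
process; the parameter values, time horizon and number of runs are illustrative assumptions.
\begin{verbatim}
# Sketch: Monte Carlo estimate of the limit in (lfirst) by simulating the
# dual birth and death process n(t) (birth rate s1*n, death rate
# gamma1*n*(n-1)/2) and averaging (mu0(2))^n(t). Values are illustrative.
import random

def simulate_n(t_end, s1, gamma1):
    """Simulate the dual birth and death process n(t) up to time t_end."""
    t, n = 0.0, 1
    while True:
        rate = s1 * n + gamma1 * n * (n - 1) / 2.0
        t += random.expovariate(rate)
        if t > t_end:
            return n
        if random.random() < s1 * n / rate:
            n += 1                        # selection birth
        else:
            n -= 1                        # coalescence death

s1, gamma1, p, t_end, runs = 1.0, 2.0, 0.3, 20.0, 20000
est = sum(p ** simulate_n(t_end, s1, gamma1) for _ in range(runs)) / runs
print(est)        # Monte Carlo estimate of the limiting first moment
\end{verbatim}
For the illustrative parameter values above, this estimate can be compared with the detailed
balance computation sketched after Example \ref{ex6}.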
\subsection{ Extensions}
The study of multilevel evolutionary systems arises in evolutionary biology, ecology, virology,
sociology, economics, etc. These give rise to a wide range of mechanisms and interactions. Some of
these can be modeled by extensions of the models and methods described above. For example, the
models and the set-valued dual representations can be extended to systems with countably many
types, recombination and horizontal gene transfer, higher level interactions, random environments,
multiple species, and measure-valued processes with jumps leading to duals with multiple
coalescence.
\section{Introduction}
Since Hawking proved that a black hole can radiate thermally
\cite{SWH}, many papers have appeared that discuss in depth the
quantum radiation of black holes via different methods \cite{TGJ}.
Recently, Wilczek and his collaborators have proposed two
universal methods to correctly recover Hawking radiation of black
holes. In Ref.\cite{SF12}, when the classically irrelevant ingoing
modes at the event horizon of the black hole are neglected, the
effective chiral theory contains an anomaly with respect to
general coordinate symmetry, which is often called the gravitational
anomaly.
coordinate covariance at the quantum level, an energy momentum
tensor flux must be introduced at the horizon. The result shows
that the compensating energy momentum tensor flux has an
equivalent form as that of $(1 + 1)$-dimensional blackbody
radiation at the Hawking temperature. Later, much work further
promoted this method to the cases of different types of black
holes \cite{all1,all2,all3}.
On the other hand, Hawking radiation could be viewed as a
semi-classical quantum tunnelling process. According to the WKB
approximation, the tunnelling rate takes the form
$\Gamma\propto \exp(-2\textrm{Im}I)$, where $I$ is the classical
action of the trajectory. Thus the calculation of the imaginary
part of the action becomes most important for this tunnelling
method. In general, two universal methods are applied in the
literature to derive the action. One is called the Null
Geodesic method (a detailed description is available in
Ref.\cite{M}), which regards the imaginary part of the action as
arising only from the contribution of the momentum $p_r$ of the emitted null
s-wave. Another method, proposed by Srinivasan and
Padmanabhan \cite{KS} and recently developed by Angheben
et al. \cite{M2}, successfully derives the imaginary
part of the action by solving the Hamilton-Jacobi equation, and is
therefore called the Hamilton-Jacobi method.
To date, a lot of work has been successfully carried out
to further develop the tunnelling approach, but most of
it is focused on Hawking radiation of scalar particles
from various black hole spacetimes \cite{M3,M4}. In fact, a black
hole can radiate any types of particles at the Hawking
temperature, and the true emission spectrum should contain
contributions of particles with charge and all possible spins.
Recently, Kerner and Mann succeeded in applying uncharged
fermion tunnelling from a non-rotating black hole to correctly
recover its Hawking temperature \cite{M5}. Subsequently, the analysis
was extended to the cases of the Kerr black hole, the Kerr-Newman
black hole and dynamical horizons, and all the results are
satisfactory \cite{M6}. In this paper, we further improve the method to deal
with charged fermion tunnelling from general dilatonic black
holes, specifically including the charged, spherically symmetric
dilatonic black hole, the rotating Einstein-Maxwell-Dilaton-Axion
(EMDA) black hole and the rotating Kaluza-Klein (KK) black hole.
The rest of the paper is organized as follows. In Sec.\ref{SSDB},
we study charged fermion tunnelling from the charged, spherically
symmetric dilatonic black hole, and the expected Hawking
temperature is well recovered. In Sec.\ref{EMDA} and Sec.\ref{KK},
for a broader extension, we further check charged fermion tunnelling
from the rotating Einstein-Maxwell-Dilaton-Axion (EMDA) and
Kaluza-Klein (KK) black holes. Sec.\ref{cd} contains some
discussions and conclusions.
\section{Fermions tunnelling from the charged, spherically symmetric dilatonic black hole}\label{SSDB}
In this section, we focus our attention on Hawking radiation of
fermions via tunnelling from the charged spherically symmetric
dilatonic black hole. The dilaton is a scalar field
which occurs in the low energy limit of string theory, where
the Einstein action is supplemented by fields such as the axion and gauge
fields, with the dilaton coupling in a nontrivial way to the other
fields. Solutions for charged dilaton black holes in which the
dilaton is coupled to the Maxwell field have been obtained and
differ in many respects from the ordinary black holes of
Einstein gravity. The presence of the
dilaton has important consequences for the causal structure
and the thermodynamic properties of the black hole, and much
interest has been attracted to the study of dilaton black holes. The
spherically symmetric solution \cite{M7} is obtained from the
four-dimensional low energy Lagrangian
\begin{equation}
S= \int {dx^4\sqrt { - g} \left[ { - R + 2\left( {\nabla \Phi }
\right)^2 + e^{ - 2a\Phi }F^2} \right]} , \label{S2}
\end{equation}
where $a$ is a parameter which denotes the strength of the coupling
of the dilaton field $\Phi $ to the Maxwell field $F$. When $a = 0$,
it reduces to the usual Einstein-Maxwell scalar theory. When
$a=1$, it is part of the low energy action of string theory. The
metric of the charged, spherically symmetric dilatonic black hole
(also called as G. H dilatonic black hole) reads as
\begin{eqnarray}
&& ds^2 =
- e^{2U}\left( r \right)dt^2 + e^{ - 2U}\left( r \right)dr^2 +
R^2\left( r \right)\left( {d\theta ^2 + \sin ^2\theta d\varphi ^2}
\right),\nonumber\\
&& e^{2\Phi } = \left( {1 - \frac{r_ - }{r}} \right)^{\frac{2a}{1 +
a^2}}, \quad F = \frac{Q}{r^2}dt \wedge dr, \label{eqds} \end{eqnarray} where
$e^{2U}\left( r \right) = \left( {1 - \frac{r_h }{r}} \right)\left(
{1 - \frac{r_ - }{r}} \right)^{\frac{1 - a^2}{1 + a^2}}$, $R\left( r
\right) = r\left( {1 - \frac{r_ - }{r}} \right)^{\frac{a^2}{1 +
a^2}}$, $\Phi $ and $F$ are the dilaton and Maxwell fields,
the mass and electric charge of the black hole are expressed as
$M = \frac{r_h }{2} + \frac{r_ - }{2} \cdot \frac{1 - a^2}{1 + a^2}$
and $Q^2 = \frac{r_h r_ - }{1 + a^2}$, respectively. $r_h /r_ - $
are the outer/inner horizons of the black hole, and $a$ is a coupling
constant confined to $0 \le a < 1$. When
$a = 0$, this metric reduces to the Reissner-Nordstr\"{o}m solution.
The electric potential of the G. H. dilatonic black hole is $A_\mu =
A_t dt = \frac{Q}{r}dt$. For all $a$, it is singular at the location
of the outer horizon $r = r_h $.
The equation of motion of a charged fermion in the
electromagnetic field can be written as
\begin{equation}
i\gamma ^\mu \left( {\partial _\mu + \Omega _\mu + \frac{i}{\hbar
}eA_\mu } \right)\psi + \frac{m}{\hbar }\psi = 0, \label{eq9}
\end{equation}
where $\Omega _\mu = \frac{i}{2}\Gamma _\mu ^{\alpha \beta } \Sigma
_{\alpha \beta }$, $\Sigma _{\alpha \beta } = \frac{i}{4}\left[
{\gamma ^\alpha ,\gamma ^\beta } \right]$ and the $\gamma ^\mu $
matrices satisfy $\left\{ \gamma ^\mu ,\gamma ^\nu \right\} =
2g^{\mu \nu }\times I$, $m$ and $e$ are the mass and the electric
charge of the emitted particles, and $A_\mu $ is the electric
potential of the black hole. To deal with fermion tunnelling radiation, it is important to choose an appropriate set of $\gamma ^\mu$ matrices. There are many ways to choose them \cite{M5,
M6}; in our case, we choose \begin{eqnarray} &&\gamma ^t = \frac{1}{\sqrt
{e^{2U}\left( r \right)} }\left( {{\begin{array}{*{20}c}
i \hfill & 0 \hfill \\
0 \hfill & { - i} \hfill \\
\end{array} }} \right),
\quad \gamma ^\theta = \frac{1}{R\left( r \right)}\left(
{{\begin{array}{*{20}c}
0 \hfill & {\sigma ^1} \hfill \\
{\sigma ^1} \hfill & 0 \hfill \\
\end{array} }} \right), \nonumber\\
&&\gamma ^r = \sqrt {e^{2U}\left( r \right)} \left(
{{\begin{array}{*{20}c}
0 \hfill & {\sigma ^3} \hfill \\
{\sigma ^3} \hfill & 0 \hfill \\
\end{array} }} \right),
\quad \gamma ^\phi = \frac{1}{R\left( r \right)\sin \theta }\left(
{{\begin{array}{*{20}c}
0 \hfill & {\sigma ^2} \hfill \\
{\sigma ^2} \hfill & 0 \hfill \\
\end{array} }} \right).
\end{eqnarray} Here, the $\sigma ^i$ are the Pauli sigma matrices. For a fermion with spin 1/2, the wave function has two spin states
(namely, the spin up ($\uparrow$) and spin down ($\downarrow$) states), so we can take the following ansatz: \begin{eqnarray}
&&\Psi _ \uparrow = \left( {{\begin{array}{*{20}c}
A \left({t,r,\theta ,\varphi } \right) \hfill \\
0 \hfill \\
B\left({t,r,\theta ,\varphi } \right) \hfill \\
0 \hfill \\
\end{array} }} \right)\exp \left( {\frac{i}{\hbar }I_ \uparrow \left(
{t,r,\theta ,\varphi } \right)} \right), \nonumber\\
&& \Psi _ \downarrow = \left( {{\begin{array}{*{20}c}
0 \hfill \\
C\left({t,r,\theta ,\varphi } \right) \hfill \\
0 \hfill \\
D\left({t,r,\theta ,\varphi } \right) \hfill \\
\end{array} }} \right)\exp \left( {\frac{i}{\hbar }I_ \downarrow \left(
{t,r,\theta ,\varphi } \right)} \right), \label{eq1} \end{eqnarray} where
$\Psi _ \uparrow$ denotes the wave function of spin up particle, and
$\Psi _ \downarrow$ is for the spin down case. Inserting Eq.(\ref{eq1})
for the spin up particle into the Dirac equation, dividing by the
exponential term and multiplying by $\hbar$, we have
\begin{equation}
- \left( {\frac{iA}{\sqrt {e^{2U}\left( r \right)} }\left( {\partial
_t I_ \uparrow + eA_t } \right) + B\sqrt {e^{2U}\left( r \right)}
\partial _r I_ \uparrow } \right) + mA = 0, \label{eqA}
\end{equation}
\begin{equation}
\left( {\frac{iB}{\sqrt {e^{2U}\left( r \right)} }\left( {\partial
_t I_ \uparrow + eA_t } \right) - A\sqrt {e^{2U}\left( r \right)}
\partial _r I_ \uparrow } \right) + mB = 0, \label{eqA1}
\end{equation}
\begin{equation}
{\frac{B}{R\left( r \right)}\partial _\theta I_ \uparrow +
\frac{iB}{R\left( r \right)\sin \theta }\partial _\varphi I_
\uparrow } = 0,
\end{equation}
\begin{equation}
{\frac{A}{R\left( r \right)}\partial _\theta I_ \uparrow +
\frac{iA}{R\left( r \right)\sin \theta }\partial _\varphi I_
\uparrow } = 0.
\end{equation}
It is difficult to directly solve the action from the above equations.
Considering the symmetries of the space-time, we can
carry out separation of variables for the action as
\begin{equation}
I_\uparrow= - \omega t + W\left( r \right) + \Theta \left( {\theta
,\varphi } \right), \label{eqI}
\end{equation}
where $\omega$ is the energy of the emitted particle. Then
substituting Eq.(\ref{eqI}) into Eqs.(\ref{eqA})and (\ref{eqA1}),
the radial function $W\left( r \right)$ satisfies the following
equations
\begin{equation}
\left( {\frac{iA}{\sqrt {e^{2U}\left( r \right)} }\left( {\omega -
eA_t } \right) - B\sqrt {e^{2U}\left( r \right)}
\partial _r W\left( r \right)} \right) + mA = 0, \label{eq2}
\end{equation}
\begin{equation}
- \left( {\frac{iB}{\sqrt {e^{2U}\left( r \right)} }\left( {\omega -
eA_t } \right) + A\sqrt {e^{2U}\left( r \right)} \partial _r W\left(
r \right)} \right) + mB = 0.\label{eq3}
\end{equation}
Here, we have neglected the equations for the function $\Theta \left({\theta ,\varphi
} \right)$. Although $\Theta \left({\theta ,\varphi }
\right)$ could provide a
contribution to the imaginary part of the action, its total
contribution to the tunnelling rate is cancelled out \cite{M5}.
At the event horizon, the radial function $W\left( r \right)$ can be written as
\begin{eqnarray} W_\pm \left( r \right)& =& \pm \int {\frac{\sqrt {\left(
{\omega - eA_t } \right)^2 + m^2e^{2U}\left( r \right)}
}{e^{2U}\left( r \right)}dr} \nonumber\\
&=&\pm i\pi r_h \left( {\omega - eA_h } \right)\left( {1 - \frac{r_
- }{r_h }} \right)^{\frac{a^2 - 1}{a^2 + 1}}, \label{eqW1} \end{eqnarray}
where $+ $/$-$ correspond to the outgoing/ingoing solutions, and
$A_h = {Q}/{r_h }$ is the electric potential at the event horizon.
So the tunnelling probability of charged fermion is \begin{eqnarray} \Gamma &=&
\frac{P_{\left( {emission} \right)} }{P_{\left( {absorption}
\right)} } = \frac{\exp ( - 2\textrm{Im}I_ {\uparrow+ })}{\exp
( - 2\textrm{Im}I_{\uparrow - })}= \frac{\exp ( - 2\textrm{Im}W_ + )}{\exp ( - 2\textrm{Im}W_ - )}\nonumber\\
& =& \exp \left( { - 4\pi r_h \left( {\omega - eA_h } \right)\left(
{1 - \frac{r_ - }{r_h }} \right)^{\frac{a^2 - 1}{a^2 + 1}}} \right),
\end{eqnarray}
which means Hawking temperature of the dilatonic black hole
is
\begin{equation}
T = \frac{1}{4\pi r_h }\left( {1 - \frac{r_ - }{r_h }}
\right)^{\frac{1 - a^2}{1 + a^2}}. \label{T}
\end{equation}
Now Hawking temperature has been correctly derived via spin up fermion
tunnelling from the dilatonic black hole. For the spin down
case, a similar process is adopted and the same result can be
recovered. When $a = 0$, the metric (\ref{eqds}) is the solution of
the Reissner-Nordstr\"{o}m black hole, and Hawking temperature
is recovered from Eq.(\ref{T}) as
\begin{equation}
T = \frac{1}{2\pi }\frac{\sqrt {M^2 - Q^2} }{\left( {M + \sqrt {M^2
- Q^2} } \right)^2}. \label{eq17}
\end{equation}
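As a quick consistency check, the following Python sketch (with illustrative sample values for $M$
and $Q$, which are assumptions) evaluates the temperature (\ref{T}) at $a=0$, where
$r_h,r_-=M\pm\sqrt{M^2-Q^2}$, and compares it with the Reissner-Nordstr\"{o}m expression
(\ref{eq17}); the two values agree.
\begin{verbatim}
# Sketch: for a = 0 the dilatonic temperature reduces to the
# Reissner-Nordstrom value. M and Q below are sample values with M > Q.
import math

def T_dilaton_a0(M, Q):
    # horizons for a = 0: r_h, r_- = M +/- sqrt(M^2 - Q^2)
    d = math.sqrt(M ** 2 - Q ** 2)
    rh, rm = M + d, M - d
    return (1 - rm / rh) / (4 * math.pi * rh)

def T_RN(M, Q):
    d = math.sqrt(M ** 2 - Q ** 2)
    return d / (2 * math.pi * (M + d) ** 2)

M, Q = 2.0, 1.0
print(T_dilaton_a0(M, Q), T_RN(M, Q))   # the two values agree
\end{verbatim}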
In the extreme limit $r_h = r_ - $, it is found that the surface
gravity is zero and Hawking temperature of the black holes (the
extreme Reissner-Nordstr\"{o}m black hole and the extreme $U\left( 1
\right)$ dilatonic black hole) vanishes for all $0 \le a < 1$. But
for $a = 1$ the surface gravity is a constant; and for $a > 1$, it
diverges as the black hole approaches its extremal
limit \cite{M8}; in this regime the nearly extremal black hole behaves
more like an elementary particle.
\section{Fermions tunnelling from the rotating EMDA black hole}\label{EMDA}
In this section, we study charged fermions tunnelling from the
rotating Einstein-Maxwell-Dilaton-Axion(EMDA) black hole. In 1995,
Alberto Garc\'{\i}a et al. gave a class of stationary axisymmetric
solutions of the Einstein-Maxwell-Dilaton-Axion field equations.
From the action \begin{eqnarray} S &=& \int dx^4\sqrt { - g} [ R - 2g^{\mu
\nu}\partial _\mu \phi
\partial _v \phi \nonumber\\
&-& \frac{1}{2}e^{4\phi }g^{\mu \nu }\partial _\mu \kappa
\partial _v \kappa - e^{ - 2\phi }F_{\mu \nu } F^{\mu \nu } - \kappa F_{\mu
\nu }
\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\smile$}}\over
{F}} ^{\mu \nu } ] , \end{eqnarray} the solution of the EMDA black
hole\cite{M9} can be obtained as
\begin{eqnarray} ds^2 &=& - \frac{\Delta -
a^2\sin ^2\theta }{\sum }dt^2 - \frac{2a\sin ^2\theta \left( {r^2
+ 2br + a^2 - \Delta } \right)}{\sum }dtd\varphi \nonumber\\
&+& \frac{\sum}{\Delta }dr^2 + \sum d\theta ^2 +\frac{\left( {r^2 +
2br + a^2} \right)^2 - \Delta a^2\sin ^2\theta }{\sum }\sin ^2\theta
d\varphi ^2, \label{eq16} \end{eqnarray} where \begin{eqnarray}
&& A_\mu =A_t dt + A_\varphi d\varphi = \frac{Qr}{\sum}dt - \frac{Qra\sin ^2\theta }{\sum }d\varphi ,\nonumber\\
&& \sum = r^2 + 2br + a^2\cos ^2\theta, \nonumber\\
&& \Delta = r^2 - 2mr + a^2 = \left( {r - r_h } \right)\left( {r - r_ - }\right),\nonumber\\
&& r_h = m + \sqrt{m^2 - a^2}, r_ - = m - \sqrt{m^2- a^2}. \end{eqnarray}
The dilaton $\phi $ and axion scalar $\kappa $ fields are,
respectively, given as $\exp \left( {2\phi _0 } \right) = \omega $
and $\kappa = \kappa _0 $, where $\omega $ and $\kappa _0 $ are
constants, and $r_h $/$r_ -$ are the outer/inner horizons of
the black hole. The parameters $m$, $a$ and $b$ are the mass,
angular momentum per unit mass and dilatonic constant of the black
hole, which are related to the ADM mass $M$, charge $Q$ and
angular momentum $J$ of the black hole with
\begin{equation}
M = m + b, ~~Q^2 = 2b\left( {m + b} \right),~~J =\left( {m + b}
\right)a.
\end{equation}
When $a = 0$, the EMDA metric reduces to the
Garfinkle-Horowitz-Strominger dilatonic solution. When $b = m\sinh
^2\left(\frac{\alpha}{2}\right)$ and $\omega = 1$, one can derive
the parameters characterizing the Kerr-Sen black hole. For
simplicity of our computation, we introduce the dragging coordinate
transformation as $\phi = \varphi -\Omega t$, where
\begin{equation}
\Omega = \frac{a\left( {r^2 + 2br + a^2 - \Delta } \right)}{\left(
{r^2 + 2br + a^2} \right)^2 - \Delta a^2\sin ^2\theta},
\end{equation}
to the metric (\ref{eq16}), then the new metric takes the forms as
\begin{equation}
ds^2 = - f\left( r \right)dt^2 + \frac{1}{g\left( r \right)}dr^2 +
\sum d\theta ^2 + g_{33} d\phi ^2, \label{eq20}
\end{equation}
where \begin{eqnarray} && f\left( r \right) = \frac{\Delta \sum }{\left( {r^2 +
2br + a^2}
\right)^2 - \Delta a^2\sin ^2\theta }, \quad g\left( r\right) = \frac{\Delta }{\sum }, \nonumber\\
&& g_{33} = \frac{\left( {r^2 + 2br + a^2} \right)^2 - \Delta
a^2\sin ^2\theta }{\sum }\sin ^2\theta . \end{eqnarray}
The corresponding potential is
\begin{equation}
\mathcal{A}_\mu = \mathcal{A}_t dt + \mathcal{A}_\phi d\phi =
\frac{\left( {r^2 + 2br + a^2} \right)Qr}{\left( {r^2 + 2br + a^2}
\right)^2 - \Delta a^2\sin ^2\theta }dt - \frac{Qra\sin ^2\theta
}{\sum }d\phi .
\end{equation}
In order to solve the Dirac equation, we must first introduce the
matrices $\gamma ^\mu $. As mentioned in Sec.\ref{SSDB}, there are
many different ways to choose them. Considering the similarity between
the metrics (\ref{eqds}) and (\ref{eq20}), we choose $\gamma ^\mu $
matrices as
\begin{eqnarray} && \gamma ^t = \frac{1}{\sqrt {f\left( r \right)} }\left(
{{\begin{array}{*{20}c}
i \hfill & 0 \hfill \\
0 \hfill & { - i} \hfill \\
\end{array} }} \right),
\quad \gamma ^\phi = \frac{1}{\sqrt {g_{33} } }\left(
{{\begin{array}{*{20}c}
0 \hfill & {\sigma ^2} \hfill \\
{\sigma ^2} \hfill & 0 \hfill \\
\end{array} }} \right), \nonumber\\
&& \gamma ^r = \sqrt {g\left( r \right)} \left(
{{\begin{array}{*{20}c}
0 \hfill & {\sigma ^3} \hfill \\
{\sigma ^3} \hfill & 0 \hfill \\
\end{array} }} \right),
\quad \gamma ^\theta = \frac{1}{\sqrt {\sum }}\left(
{{\begin{array}{*{20}c}
0 \hfill & {\sigma ^1} \hfill \\
{\sigma ^1} \hfill & 0 \hfill \\
\end{array} }} \right). \label{eq23}
\end{eqnarray}
We still choose the same form as given in Eq.(\ref{eq1}) for the
wave functions of charged fermion tunnelling from the EMDA black
hole, and only explore the spin up case. Substituting the $\gamma ^\mu $
matrices (\ref{eq23}) and the wave function into Dirac equation
(\ref{eq9}), we have
\begin{equation}
- \left( {\frac{iA}{\sqrt {f\left( r \right)} }\left( {\partial _t I_
\uparrow + e\mathcal{A}_t } \right) + B\sqrt {g\left( r \right)}
\partial _r I_ \uparrow } \right) + mA = 0,\label{eq27}
\end{equation}
\begin{equation}
\left( {\frac{iB}{\sqrt {f\left( r \right)} }\left( {\partial _t I_
\uparrow + e\mathcal{A}_t } \right) - A\sqrt {g\left( r \right)}
\partial _r I_ \uparrow } \right) + mB = 0, \label{eq28}
\end{equation}
\begin{equation}
{\frac{B}{\sqrt {\sum }}\partial _\theta I_ \uparrow +
\frac{iB}{\sqrt {g_{33} } }\left( {\partial _\phi I_ \uparrow +
e\mathcal{A}_\phi } \right)} = 0,
\end{equation}
\begin{equation}
{\frac{A}{\sqrt {\sum }}\partial _\theta I_ \uparrow +
\frac{iA}{\sqrt {g_{33} } }\left( {\partial _\phi I_ \uparrow +
e\mathcal{A}_\phi } \right)} = 0.
\end{equation}
There are four equations, but our interest lies in the first two,
because the tunnelling rate is directly related to the imaginary
part of the radial function, while the angular contribution
cancels out when the outgoing probability is divided by the ingoing
one. In view of the properties of the rotating EMDA
space-time, one can separate variables in the action as
\begin{equation}
I_ \uparrow = - \left(\omega-j\Omega\right) t + W\left( r \right) +
j\phi + \Theta \left( \theta \right), \label{eq31}
\end{equation}
where $\omega $ is the energy of the emitted particle measured by the
observer at infinity, and $j$ is the angular quantum number
about $\varphi$. Inserting the action (\ref{eq31}) into Eqs.
(\ref{eq27}) and (\ref{eq28}) yields
\begin{equation}
\left( {\frac{iA}{\sqrt {f\left( r \right)} }\left( {\omega -
j\Omega - e\mathcal{A}_t } \right) - B\sqrt {g\left( r \right)}
\partial _r W\left( r \right)} \right) + mA = 0,
\end{equation}
\begin{equation}
- \left( {\frac{iB}{\sqrt {f\left( r \right)} }\left( {\omega - j\Omega -
e\mathcal{A}_t } \right) + A\sqrt {g\left( r \right)} \partial _r
W\left( r \right)} \right) + mB = 0.
\end{equation}
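These two equations are linear and homogeneous in $(A,B)$, so a
nontrivial solution exists only if the determinant of the coefficient
matrix vanishes. Spelling out this standard step, the determinant
condition reads
\begin{equation*}
\left(m + \frac{i\left(\omega - j\Omega - e\mathcal{A}_t\right)}{\sqrt{f\left( r \right)}}\right)
\left(m - \frac{i\left(\omega - j\Omega - e\mathcal{A}_t\right)}{\sqrt{f\left( r \right)}}\right)
- g\left( r \right)\left(\partial_r W\right)^2 = 0,
\end{equation*}
which fixes the radial derivative of the action as
\begin{equation*}
\left(\partial_r W\right)^2
= \frac{\left(\omega - j\Omega - e\mathcal{A}_t\right)^2 + m^2 f\left( r \right)}
{f\left( r \right) g\left( r \right)}.
\end{equation*}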
Two cases arise. When $m=0$, the above equations describe the
radial wave function of a massless particle. When $m \ne 0$,
a charged massive fermion is under consideration, and solving the above
equations yields
\begin{eqnarray} W_\pm \left( r \right) &=& \pm \int {\sqrt
{\frac{\left( {\omega - j\Omega - e\mathcal{A}_t } \right)^2+ m^2f
\left( r
\right)}{f\left( r \right)g\left( r \right)}} dr} \nonumber\\
& =& \pm i\pi\frac{\omega - j\Omega _h - e\mathcal{A}_t(r_h)
}{\sqrt {{f}'\left( {r_h } \right){g}'\left( {r_h } \right)} }, \end{eqnarray}
where $+$ ($-$) correspond to the outgoing (ingoing) solutions, and
$\Omega _h=\Omega(r_h)=a/\left(r_h^2+2br_h+a^2\right)$ is the
angular velocity at the event horizon of the EMDA black hole.
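A brief sketch of how the imaginary part arises in the second
equality above: near the event horizon one may expand $f\left( r
\right) \simeq {f}'\left( {r_h } \right)\left( {r - r_h } \right)$ and
$g\left( r \right) \simeq {g}'\left( {r_h } \right)\left( {r - r_h }
\right)$, so the integrand develops a simple pole at $r = r_h$,
\begin{equation*}
W_\pm \simeq \pm \int \frac{\omega - j\Omega _h - e\mathcal{A}_t(r_h)}
{\sqrt {{f}'\left( {r_h } \right){g}'\left( {r_h } \right)}}\,
\frac{dr}{r - r_h },
\end{equation*}
and deforming the contour around the pole (equivalently, replacing
$r - r_h \to r - r_h - i\epsilon$) picks up the half-residue $i\pi$,
which yields the quoted result.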
Thus the tunnelling probability of the charged fermion can be written as \begin{eqnarray}
\Gamma &=& \frac{P_{\left( {emission} \right)} }{P_{\left(
{absorption} \right)} } = \frac{\exp ( - 2\textrm{Im}I_ {\uparrow+}
)}{\exp
( - 2\textrm{Im}I_ {\uparrow- })}=\frac{\exp ( - 2\textrm{Im}W_ + )}{\exp( - 2\textrm{Im}W_ - )} \nonumber\\
&=& \exp \left( { - 4\pi \frac{\omega - j\Omega _h -
e\mathcal{A}_t(r_h) }{\sqrt {{f}'\left( {r_h } \right){g}'\left(
{r_h } \right)} }} \right). \end{eqnarray} According to the relationship
between the tunnelling rate and Hawking temperature, Hawking
temperature of the EMDA black hole can be obtained as
\begin{equation}
T = \frac{\sqrt {{f}'\left( {r_h } \right){g}'\left( {r_h } \right)}
}{4\pi } = \frac{1}{2\pi }\frac{r_h - m}{r_h^2 + 2br_h + a^2}.
\label{eq37}
\end{equation}
When $b = 0$, Hawking temperature $T$ can be written as
\begin{equation}
T
=\frac{1}{2\pi}\frac{\sqrt{m^2-a^2}}{\left(m+\sqrt{m^2-a^2}\right)^2+a^2},
\label{eq38}
\end{equation}
which is the Hawking temperature of the Kerr black hole. For $b = 0$
and $a = 0$ (the Schwarzschild case), the Hawking temperature equals
$1/(8\pi m)$. So fermion tunnelling from
these black holes correctly recovers their Hawking temperatures.
\section{Fermions tunnelling from the rotating Kaluza-Klein black hole}\label{KK}
The rotating Kaluza-Klein black hole \cite{M10}, whose
four-dimensional metric $g_{\mu \nu }$ descends from a five-dimensional
space-time with a translational symmetry along a spacelike direction,
is obtained from the action (\ref{S2}) with dilaton coupling $a =
\sqrt 3 $, and reads \begin{eqnarray} ds^2 &=& - \frac{\Delta - a^2\sin
^2\theta }{\Pi \sum }dt^2 +
\frac{\Pi \sum }{\Delta }dr^2 + \Pi \sum d\theta ^2 \nonumber\\
&+& \left[ {\Pi \left( {r^2 + a^2} \right) +\frac{{\rm Z}}{\Pi
}a^2\sin ^2\theta } \right]\sin ^2\theta d\varphi ^2 - \frac{2a{\rm
Z}\sin ^2\theta }{\Pi \sqrt {1 - v^2} }dtd\varphi , \end{eqnarray} with the
dilaton field $\Phi =- (\sqrt{3}\ln \Pi)/2$, and the electromagnetic
potential
\begin{equation}
A_\mu = \frac{v}{2\left( {1 - v^2} \right)}\frac{{\rm Z}}{\Pi ^2}dt
- \frac{va\sin ^2\theta }{2\sqrt {1 - v^2} }\frac{{\rm Z}}{\Pi
^2}d\varphi ,
\end{equation}
where \begin{eqnarray} && Z = \frac{2mr}{\sum }, \quad \Pi = \sqrt {1 +
\frac{v^2{\rm Z}}{1 - v^2}} , \nonumber\\
&& \sum = r^2 + a^2\cos ^2\theta ,
\quad \Delta = r^2 - 2mr + a^2, \nonumber\\
&& r_h = m + \sqrt {m^2 - a^2}, \quad r_ - = m - \sqrt {m^2 - a^2}.
\end{eqnarray} Here $a$ and $v$ are the rotation parameter and the boost
velocity, respectively. The outer (inner) horizon is described by
$r_h$ ($r_-$), which satisfies $\Delta = 0$. The solution
reduces to the Kerr solution for $v=0$. The physical mass $M$, charge
$Q$ and angular momentum $J$ of the black hole are related to the
parameters $m$, $a$ and $v$ as
\begin{equation}
M = \frac{m}{2} \cdot \frac{2 - v^2}{1 - v^2},~~ Q = \frac{mv}{1 -
v^2},~~J = \frac{ma}{\sqrt {1 - v^2} }.
\end{equation}
As mentioned in Sec.\ref{EMDA}, to easily investigate charged fermions tunnelling from the black hole, we first
introduce the dragging coordinate transformation as $\phi = \varphi
- \Omega t$, where
\begin{equation}
\Omega = \frac{a{\rm Z}}{\left[ {\Pi ^2\left( {r^2 + a^2} \right) +
{\rm Z}a^2\sin ^2\theta } \right]\sqrt {1 - v^2} }.
\end{equation}
Then the new metric takes the form
\begin{equation}
ds^2 = - F\left( r \right)dt^2 + \frac{1}{G\left( r \right)}dr^2 +
\Pi \sum d\theta ^2 + g_{33} d\phi ^2, \label{eq43}
\end{equation}
with the new electromagnetic potential
\begin{eqnarray}
\mathcal{A}_\mu &=& \mathcal{A}_t dt + \mathcal{A}_\phi d\phi \nonumber\\
&=& \frac{Qr}{\sum }\frac{r^2 + a^2}{\Pi ^2\left( {r^2 + a^2}
\right)^2 + {\rm Z}a^2\sin ^2\theta }dt - \frac{va\sin ^2\theta
}{2\sqrt {1 - v^2} }\frac{{\rm Z}}{\Pi ^2}d\phi , \end{eqnarray} and \begin{eqnarray}
&&F\left( r \right) = \frac{\Pi \sum \Delta \left( {1 - v^2}
\right)}{\left( {r^2 + a^2} \right)^2 - \Delta \left( {a^2\sin
^2\theta + v^2\sum } \right)}, \nonumber\\
&&g_{33} = \left( {\Pi \left( {r^2 + a^2} \right) + \frac{{\rm
Z}}{\Pi }a^2\sin ^2\theta } \right)\sin ^2\theta , \quad G\left( r
\right) = \frac{\Delta }{\Pi \sum }. \end{eqnarray}
As the metric (\ref{eq43}) takes a form similar to that of (\ref{eq20}),
we can choose $\gamma ^\mu $ matrices similar to those in Eq.(\ref{eq23}),
specifically taking
\begin{eqnarray} &&\gamma ^t = \frac{1}{\sqrt {F\left( r
\right)} }\left( {{\begin{array}{*{20}c}
i \hfill & 0 \hfill \\
0 \hfill & { - i} \hfill \\
\end{array} }} \right),
\quad \gamma ^\theta = \frac{1}{\sqrt {\Pi \sum } }\left(
{{\begin{array}{*{20}c}
0 \hfill & {\sigma ^1} \hfill \\
{\sigma ^1} \hfill & 0 \hfill \\
\end{array} }} \right), \nonumber\\
&&\gamma ^r = \sqrt {G\left( r \right)} \left(
{{\begin{array}{*{20}c}
0 \hfill & {\sigma ^3} \hfill \\
{\sigma ^3} \hfill & 0 \hfill \\
\end{array} }} \right),
\quad \gamma ^\phi = \frac{1}{\sqrt {g_{33} } }\left(
{{\begin{array}{*{20}c}
0 \hfill & {\sigma ^2} \hfill \\
{\sigma ^2} \hfill & 0 \hfill \\
\end{array} }} \right).
\end{eqnarray}
Substituting the above $\gamma ^\mu $ matrices and the wave function
with spin up given in (\ref{eq1}) into Dirac equation yields
\begin{equation}
- \left( {\frac{iA}{\sqrt {F\left( r \right)} }\left( {\partial _t I_
\uparrow + e\mathcal{A}_t } \right) + B\sqrt {G\left( r \right)}
\partial _r I_ \uparrow } \right) + mA = 0,\label{eq47}
\end{equation}
\begin{equation}
\left( {\frac{iB}{\sqrt {F\left( r \right)} }\left( {\partial _t I_
\uparrow + e\mathcal{A}_t } \right) - A\sqrt {G\left( r \right)}
\partial _r I_ \uparrow } \right) + mB = 0,\label{eq48}
\end{equation}
\begin{equation}
{\frac{B}{\sqrt {\Pi \sum } }\partial _\theta I_ \uparrow +
\frac{iB}{\sqrt {g_{33} } }\left( {\partial _\phi I_ \uparrow +
e\mathcal{A}_\phi } \right)} = 0,
\end{equation}
\begin{equation}
{\frac{A}{\sqrt {\Pi \sum } }\partial _\theta I_ \uparrow +
\frac{iA}{\sqrt {g_{33} } }\left( {\partial _\phi I_ \uparrow +
e\mathcal{A}_\phi } \right)} = 0.
\end{equation}
Although there are four equations, our attention is also focused on
the first two equations. Considering the properties of the
Kaluza-Klein space-time, we carry out separation of variables as
Eq.(\ref{eq31}). Inserting the action $I_ \uparrow $ into
Eqs.(\ref{eq47}) and (\ref{eq48}) yields
\begin{equation}
\left( {\frac{iA}{\sqrt {F\left( r \right)} }\left( {\omega - j\Omega -
e\mathcal{A}_t } \right) - B\sqrt {G\left( r \right)} \partial _r
W\left( r \right)} \right) + mA = 0,
\end{equation}
\begin{equation}
- \left( {\frac{iB}{\sqrt {F\left( r \right)} }\left( {\omega - j\Omega -
e\mathcal{A}_t } \right) + A\sqrt {G\left( r \right)} \partial _r
W\left( r \right)} \right) + mB = 0,
\end{equation}
where $\omega$ denotes the energy of the emitted particles measured
by the observer at infinity, and $j$ is the angular quantum
number about $\varphi$. In the case $m \ne 0$, solving the above
equations, we have
\begin{eqnarray} W_\pm \left( r
\right) &=& \pm \int {\sqrt {\frac{\left( {\omega - j\Omega -
e\mathcal{A}_t } \right)^2 +m^2F\left( r
\right)}{F\left( r \right)G\left( r \right)}} dr} \nonumber\\
&=& \pm i\pi \frac{\omega - j\Omega _h - e\mathcal{A}_t(r_h) }{\sqrt
{{F}'\left( {r_h } \right){G}'\left( {r_h } \right)} }, \end{eqnarray} where
the $+$ ($-$) signs represent the outgoing (ingoing) solutions, and $\Omega
_h =\Omega(r_h)$ is the angular velocity at the outer horizon of the
KK black hole. So the tunnelling probability of a charged fermion of
the black hole can be written as \begin{eqnarray} \Gamma &=& \frac{P_{\left(
{emission} \right)} }{P_{\left( {absorption} \right)} } = \frac{\exp
( - 2\textrm{Im}I_ {\uparrow+} )}{\exp
( - 2\textrm{Im}I_ {\uparrow- })}=\frac{\exp ( - 2\textrm{Im}W_ + )}{\exp( - 2\textrm{Im}W_ - )} \nonumber\\
&=& \exp \left( { - 4\pi \frac{\omega - j\Omega _h -
e\mathcal{A}_t(r_h) }{\sqrt {{F}'\left( {r_h } \right){G}'\left(
{r_h } \right)} }} \right). \end{eqnarray}
Thus Hawking temperature of the Kaluza-Klein black hole takes
the form as
\begin{equation}
T = \frac{\sqrt {{F}'\left( {r_h } \right){G}'\left(
{r_h } \right)} }{4\pi } = \frac{1}{4\pi } \cdot \frac{\sqrt {\left(
{1 - v^2} \right)\left( {m^2 - a^2} \right)} }{m\left( {m + \sqrt
{m^2 - a^2} } \right)}. \label{eq56}
\end{equation}
When $v = 0$ (and hence $Q=0$), the Hawking temperature of
Eq.(\ref{eq56}) equals that of Eq.(\ref{eq38}), which means that in this
case the KK black hole reduces to the Kerr black hole. For $v = 0$ and
$a = 0$, it describes a Schwarzschild black hole, whose Hawking
temperature equals $1/(8\pi m)$. These results once again prove the
validity of the charged fermion tunnelling method.
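For completeness, we record the simple algebra behind the $v=0$
reduction: with $r_h = m + \sqrt{m^2 - a^2}$ one has
\begin{equation*}
\left(m + \sqrt{m^2 - a^2}\right)^2 + a^2 = 2m\left(m + \sqrt{m^2 - a^2}\right),
\end{equation*}
so the right-hand side of Eq.(\ref{eq38}) indeed coincides with
Eq.(\ref{eq56}) evaluated at $v=0$.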
\section{Conclusions and Discussions}\label{cd}
In this paper, we attempted to apply Kerner and Mann's fermion
tunnelling method to the case of charged fermions. As an example, Hawking
radiation of charged fermions from a general charged, spherically
symmetric dilatonic black hole is first studied via the tunnelling
method. As a wider extension, charged fermions tunnelling from the
rotating dilatonic black holes, specifically the rotating
Einstein-Maxwell-Dilaton-Axion (EMDA) and Kaluza-Klein (KK) black
holes, are also considered in the paper. As a result, the correct
Hawking temperatures are well reproduced by charged fermions
tunnelling from these black holes.
To simplify the choice of the $\gamma^\mu$ matrices when dealing
with Hawking radiation of charged fermions tunnelling from the
rotating dilatonic black holes in Sec.\ref{EMDA} and
Sec.\ref{KK}, we carried out the dragging coordinate
transformation. In fact, this does not prevent us from obtaining
the correct Hawking temperature of these black holes, much as Hawking
radiation of black holes is discussed in the Painlev\'{e} and
Kruskal-Szekeres coordinate systems in Ref.\cite{M5}. In addition, after
the charged fermions have tunnelled out, we assumed that the energy,
charge and angular momentum of the dilatonic black holes remain the
same as before. If the emitted particle's self-gravitational interaction
is incorporated into Hawking radiation via fermion tunnelling, the
Hawking temperatures will be corrected slightly, but their leading
terms take the same form as Eqs.(\ref{eq17}), (\ref{eq37}) and
(\ref{eq56}).
In summary, we have succeeded in dealing with Hawking radiation of
the rotating black holes via charged fermions tunnelling. This method
can also be directly extended to the case of the non-stationary
charged rotating black holes.
\section*{Acknowledgments}
This work was partially supported by the Natural Science Foundation of
China under Grant Nos.10675051, 10705008 and 10773008, and by the
Graduate Innovation Foundation of CCNU.
\section{Introduction}
Let $b\in L^\infty(I\times \R^d ; \R^d)$ denote a time-dependent vector field on $\R^d$, where $I=(0,T)$, $T>0$, $d\in \mathbb N$. Consider the Cauchy problem for the continuity equation
\begin{equation}\label{Cauchy-problem}
\left\{
\begin{aligned}
&\partial_t u + \mathop{\mathrm{div}_x} (u b) = 0 \qquad \text{in $I\times \R^d$},
\\
&u|_{t=0} = \bar u \qquad\text{in $\R^d$},
\end{aligned}
\right.
\end{equation}
where $\bar u \in L^1_\loc(\R^d)$ is the initial condition and $u\colon I\times \R^d\to \R$ is the unknown. A function $u\in L^1_\loc(I\times \R^d)$ is called a \emph{weak solution of \eqref{Cauchy-problem}} if it satisfies \eqref{Cauchy-problem} in the sense of distributions:
\begin{equation*}
\int \int u(t,x) (\d_t \varphi(t,x) + b(t,x) \d_x \varphi(t,x)) \, dx \, dt + \int \bar u(x) \varphi(0, x) \, dx = 0
\end{equation*}
for any $\varphi \in C^1_c([0,T)\times \R^d)$.
Existence and uniqueness of weak solutions of \eqref{Cauchy-problem} are well known when
the vector field $b$ is Lipschitz continuous. However, in connection with many problems in mathematical physics one has to study \eqref{Cauchy-problem} when $b$ is (in general) non-Lipschitz. In particular, vector fields with Sobolev regularity arise in connection with fluid mechanics \cite{DiPernaLions}, and vector fields with bounded variation arise in connection with nonlinear hyperbolic conservation laws \cite{AmbrosioBV}. Therefore one would like to find the weakest assumptions on $b$ under which a weak solution of \eqref{Cauchy-problem} exists and is unique.
For a generic bounded vector field $b$ concentrations may occur and therefore the Cauchy problem \eqref{Cauchy-problem} can have no bounded weak solutions. However under mild additional assumptions on $b$ existence of bounded weak solutions can be proved. Namely, the following class of vector fields has been studied in connection with the so-called Keyfitz-Kranzer system (introduced in \cite{KeyfitzKranzer}):
\begin{definition}\label{def-ni}
A vector field $b\in L^\infty(I\times \R^d ; \R^d)$ is called \emph{nearly incompressible with density $\rho \colon I\times \R^d \to \R$} if there exists $C>0$ such that $1/C \le \rho(t,x) \le C$ for a.e. $(t,x)\in I \times \R^d$ and $\rho$ solves
$\partial_t \rho + \mathop{\mathrm{div}_x} (\rho b) = 0$
(in sense of distributions).
\end{definition}
It is well-known that near incompressibility is sufficient for existence of bounded weak solutions of \eqref{Cauchy-problem}.
However in the generic multidimensional case ($d \ge 2$)
it is not sufficient for uniqueness. For example, there exists a bounded divergence-free autonomous vector field on the plane ($d=2$), for which \eqref{Cauchy-problem} has a nontrivial
bounded weak solution with zero initial data \cite{ABC2}.
Uniqueness of weak solutions has been established for some classes of weakly differentiable vector fields \cite{DiPernaLions,AmbrosioBV}.
Recently new uniqueness results were obtained for continuous vector fields \cite{Crippa,Shaposhnikov} (without explicit assumptions on weak differentiability). Note that in general a nearly incompressible vector field does not have to be continuous (and vice versa). Uniqueness of locally integrable weak solutions has been proved in \cite{CarCri16}
for Sobolev vector fields under additional assumption of continuity.
Uniqueness of bounded weak solutions for nearly incompressible vector fields in the two-dimensional case ($d=2$) was also studied in \cite{BBG2016}. In particular it was proved that uniqueness holds when $b\ne 0$ a.e., or when $b\in BV$.
Our main result is the following:
\begin{theorem}\label{t-main}
Suppose that $b \in L^\infty(I\times \R; \R)$ is nearly incompressible. Then for any initial condition $\bar u\in L^1_\loc(\R)$ the Cauchy problem \eqref{Cauchy-problem} has a \emph{unique} weak solution $u\in L^1_\loc(I \times \R)$.
\end{theorem}
Existence of bounded weak solutions of \eqref{Cauchy-problem} with bounded $\bar u$ for nearly incompressible vector fields is well-known (see e.g. \cite{CrippaThesis} for the case of vector fields with bounded divergence). Uniqueness of bounded weak solutions in the one-dimensional case has already been proved in \cite{BouchotJames98}.
The novelty of Theorem~\ref{t-main} is that it applies to merely locally integrable weak solutions.
\section{Uniqueness of locally integrable weak solutions}
\begin{definition}\label{def-density}
A non-negative function $\rho \in L^1_\loc(I\times \R^d; \R)$ is called a \emph{density associated with a vector field} $b\in L^1_\loc(I\times \R^d; \R^d)$ if $\rho b \in L^1_\loc(I\times \R^d; \R^d)$ and $\d_t \rho + \div(\rho b) = 0$ in $\ss D'(I\times \R^d)$.
\end{definition}
\begin{remark}
If a vector field $b\in L^\infty(I\times \R^d; \R^d)$ admits a density $\rho$ and there exist strictly positive constants $C_1, C_2$ such that $C_1 \le \rho(t,x) \le C_2$ for a.e. $(t,x) \in I\times \R^d$ then $b$ is nearly incompressible.
\end{remark}
Suppose now that $d=1$ and that a vector field $b\in L^1_\loc(I\times \R; \R)$ admits a density $\rho$.
Since $\partial_t \rho + \d_x (\rho b) = 0$ in $\ss D'((0,T)\times \R)$, there exists $H\in W^{1,1}_\loc((0,T)\times \R)$ such that
\begin{equation} \label{e-dxH}
\d_x H = \rho \quad \text{and} \quad
\d_t H = - \rho b
\end{equation}
in $\ss D'(I\times \R)$.
\begin{definition}
If a function $H\colon I\times \R \to \R$ satisfies \eqref{e-dxH} then it is called a \emph{Hamiltonian associated with $(\rho, b)$}.
\end{definition}
Clearly the Hamiltonian $H$ is unique up to an additive constant. Moreover, if $\rho, b\in L^\infty(I\times \R)$ then the Hamiltonian can be chosen in such a way that it is Lipschitz continuous, i.e. $H\in \Lip([0,T]\times \R)$.
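As a simple illustration (a toy example of ours, not taken from the
references): for the constant vector field $b \equiv c$ with density
$\rho \equiv 1$ one may take
\begin{equation*}
H(t,x) = x - ct,
\end{equation*}
so that $\d_x H = \rho$ and $\d_t H = -\rho b$, and the level sets of
$H$ are precisely the characteristic lines $x = h + ct$.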
\begin{theorem}\label{th-L1-unique}
Suppose that a vector field $b\in L^\infty(I\times \R; \R)$ admits a density $\rho \in L^\infty_\loc(I\times \R; \R)$ such that $\rho(t,x)>0$ for a.e. $(t,x) \in I\times \R$.
If $u\in L^1_\loc(I\times \R; \R)$ is a weak solution of
\eqref{Cauchy-problem} with $\bar u \equiv 0$ then $u(t,x)=0$ for a.e. $(t,x)\in I \times \R$.
\end{theorem}
\begin{proof}
\emph{Step 1.}
Let $H\in \Lip([0,T]\times \R)$ be a Hamiltonian associated with $(\rho, b)$.
We would like to use test functions of the form $\varphi(t,x) := f(H(t,x))$ in the distributional formulation of \eqref{Cauchy-problem}, where $f\in C^\infty_c(\R)$. In general such functions may fail to be compactly supported, therefore we apply an approximation argument.
For any $(t,x)\in(-T,0)\times \R$ let $H(t,x):=H(-t,x)$.
Clearly $\d_x H = \tilde \rho$ and $\d_t H = -\tilde \rho \tilde b$ in $\ss D'((-T,T)\times \R)$, where
\begin{equation*}
\tilde \rho(t,x) :=
\begin{cases}
\rho(t,x), & t > 0, \\
\rho(-t,x), & t<0;
\end{cases}
\qquad
\tilde b(t,x) :=
\begin{cases}
b(t,x), & t > 0, \\
-b(-t,x), & t<0.
\end{cases}
\end{equation*}
Let $\eps > 0$ and let $\omega_\eps(z):= \eps^{-2}\omega(z/\eps)$,
where $\omega\in C_c^\infty(\R^2)$ is the standard mollification kernel.
Let $H_\eps := H * \omega_\eps$, where $*$ denotes the convolution. Clearly
\begin{equation}\label{e-dxHeps}
\d_x H_\eps = \tilde \rho_\eps \quad \text{and}\quad \d_t H_\eps = - (\tilde \rho \tilde b)_\eps.
\end{equation}
Hence for any $t\in (-T+\eps,T-\eps)$ the function $H_\eps(t,\cdot)$ is strictly increasing.
\emph{Step 2.}
Let $h\in \R$ be such that the level set $L_{\eps, h} := \{(t,x)\in (-T+\eps,T-\eps)\times \R : H_\eps(t,x)=h\}$ is not empty.
Suppose that $\tau,\xi \in \R$, $\tau^2 + \xi^2 = 1$ and $\xi > \|b\|_\infty |\tau|$. Then the derivative of $H_\eps$ in the direction $\nu:=(\tau,\xi)$ satisfies
\begin{equation*}
\d_\nu H_\eps = \tau \d_t H_\eps + \xi \d_x H_\eps = -\tau (\tilde \rho \tilde b)_\eps + \xi \tilde \rho_\eps
\ge (\xi - |\tau| \|b\|_\infty) \tilde\rho_\eps > 0,
\end{equation*}
therefore for any $(t,x)\in L_{\eps, h}$ the level set $L_{\eps, h}$ is contained in some cone:
\begin{equation}\label{e-level-set-in-cone}
L_{\eps, h} \subset \{(t',x') : |x'-x| \le \|b\|_\infty |t' - t|\}.
\end{equation}
Consequently $L_{\eps, h}$ is a bounded subset of $(-T+\eps,T-\eps)\times \R$.
Fix $(t,x)\in L_{\eps, h}$. Since $\d_x H_\eps = \tilde \rho_\eps > 0$, by the Implicit Function Theorem
the level set $L_{\eps, h}$ in some neighborhood $U=(t-\delta, t+\delta)\times (x-\delta, x+\delta)$ of $(t,x)$
can be represented as a graph of a smooth function $\tau\mapsto Y_\eps(\tau,h)$:
\begin{equation}\label{e-implicit1}
L_{\eps, h} \cap U = \{(\tau, Y_\eps(\tau,h)) \;|\; \tau \in (t-\delta, t+\delta)\}.
\end{equation}
Moreover,
\begin{equation}\label{e-implicit2}
\d_\tau Y_\eps(\tau, h) = - \left.\frac{(\d_t H_\eps)(\tau, x)}{(\d_x H_\eps)(\tau, x)}\right|_{x=Y_\eps(\tau, h)}
\quad \stackrel{\eqref{e-dxHeps}}{\Rightarrow} \quad
|\d_\tau Y_\eps| \le \frac{|(\tilde \rho \tilde b)_\eps|}{|\tilde \rho_\eps|} \le \|b\|_\infty.
\end{equation}
Let $P_{\eps,h} := \pi_t (L_{\eps, h})$, where $\pi_t (\tau, x) := \tau$ is the projection on the $t$-axis.
By \eqref{e-implicit1} $P_{\eps,h}$ is open. On the other hand $P_{\eps,h}$ is closed in $(-T+\eps, T-\eps)$, since $L_{\eps, h} = H_\eps^{-1}(h)$ is closed in $(-T+\eps,T-\eps)\times \R$. Therefore $P_{\eps,h} = (-T+\eps, T-\eps)$.
The image $R_\eps:=H_\eps((-T+\eps, T-\eps)\times \R) \subset \R$ is connected, since $H_\eps$ is continuous and $(-T+\eps, T-\eps)\times \R$ is connected. Moreover, since for any $t\in (-T+\eps, T-\eps)$ the function $x \mapsto H_\eps(t,x)$ is strictly increasing and continuous, the images $H_\eps(t, \R)$ are open and hence
\begin{equation*}
R_\eps = \cup_{t\in (-T+\eps, T-\eps)} H_\eps(t, \R)
\end{equation*}
is open. Therefore $R_\eps$ is an open interval.
We have thus proved that for any $h\in R_\eps$ the level set $L_{\eps, h}$ can be \emph{globally}
represented as a graph of a smooth function $\tau \mapsto Y_\eps(\tau, h)$, where $\tau \in (-T+\eps, T-\eps)$ and moreover $|\d_\tau Y_\eps| \le \|b\|_\infty$ by \eqref{e-implicit2}.
\emph{Step 3.}
Using Fubini's theorem and the distributional formulation of \eqref{Cauchy-problem} one can show that
there exists a Lebesgue-negligible set $N\subset (0,T)$ such that for any $\tau\in (0,T) \setminus N$
the function $x\mapsto \rho(\tau,x)$ is strictly positive for a.e. $x$ and for all $\varphi \in \Lip_c([0,\tau]\times \R)$ it holds that
\begin{equation}\label{e-distrib-Cauchy}
\int_\R u(\tau,x) \varphi(\tau, x) \, dx - \int_\R \bar u(x) \varphi(0, x) \, dx
= \int_0^\tau \int_\R u \cdot (\d_t \varphi + b \d_x \varphi) \, dx \, dt
\end{equation}
Let us fix $\tau \in (0,T) \setminus N$ and consider $\eps \in (0, T-\tau)$. By \eqref{e-dxH} the function $x \mapsto H(\tau, x)$ is strictly increasing and continuous. Hence the image $I_\tau := H(\tau, \R)$ is a nonempty open interval.
Consider $f\in C^\infty_c(I_\tau)$ and let $\varphi_\eps(t,x):= f(H_\eps(t,x))$.
We claim that there exists $\eps_1>0$ and a compact $K\subset [0,\tau]\times \R$ such that
\begin{equation*}
\supp \varphi_\eps \subset K
\end{equation*}
for any $\eps \in (0, \eps_1)$.
Indeed, the support of $f$ is contained in some finite interval $(\alpha, \beta)$ such that $[\alpha, \beta] \subset I_\tau$.
Let us fix $\alpha_1 \in I_\tau \setminus [\alpha, +\infty)$ and $\beta_1 \in I_\tau \setminus (-\infty, \beta]$.
By definition of $I_\tau$ there exist $x_1$ and $y_1$ such that $H(\tau,x_1) = \alpha_1$ and $H(\tau,y_1) = \beta_1$.
Since $H_\eps(\tau,x_1) \to H(\tau,x_1)$ and $H_\eps(\tau,y_1)\to H(\tau,y_1)$ as $\eps \to 0$, we can find $\eps_0 > 0$
such that $R_\eps \supset (\alpha, \beta)$ for any $\eps \in (0,\eps_0)$.
Since $x\mapsto H(\tau, x)$ is strictly monotone and continuous, there exist unique $x_0$ and $y_0$
such that $H(\tau, x_0) = \alpha$ and $H(\tau, y_0) = \beta$.
Since the support of $f$ is a compact subset of $(\alpha,\beta)$ and $H_\eps(\tau, x_0) \to \alpha$ and $H_\eps(\tau, y_0) \to \beta$ as $\eps \to 0$, there exists $\eps_1 \in (0, \eps_0)$ such that
\begin{equation*}
\supp f \subset (H_\eps(\tau, x_0), H_\eps(\tau, y_0))
\end{equation*}
whenever $\eps \in (0,\eps_1)$.
Hence the support of $\varphi_\eps$ (restricted to $[0,\tau]\times \R$) is confined by the level sets of $H_\eps$, passing through $x_0$ and $y_0$:
\begin{equation*}
\supp \varphi_\eps \subset \{(t,x) \;|\; t\in [0,\tau], \; x\in [Y_\eps(t, H_\eps(\tau, x_0)), Y_\eps(t, H_\eps(\tau, y_0))]\}
\stackrel{\eqref{e-implicit2}}{\subset} K,
\end{equation*}
where
\begin{equation*}
K:=\{(t,x) \;|\; t \in [0,\tau], x \in [x_0 - \|b\|_\infty (\tau - t), y_0 + \|b\|_\infty (\tau - t)]\}.
\end{equation*}
\emph{Step 4.}
Now we are in a position to use $\varphi_\eps$ as a test function in \eqref{e-distrib-Cauchy}.
First we observe that
\begin{equation*}
\d_t \varphi_\eps + b \d_x \varphi_\eps = f'(H_\eps(t,x)) (\d_t H_\eps + b \d_x H_\eps)
= f'(H_\eps(t,x)) (- (\tilde\rho \tilde b)_\eps + b \tilde \rho_\eps) \to 0
\end{equation*}
a.e. on $(0,\tau)\times \R$ as $\eps \to 0$. Since $\bar u \equiv 0$, by \eqref{e-distrib-Cauchy} and Lebesgue's dominated convergence theorem
\begin{equation}\label{e-limit}
\int_\R u(\tau,x) \varphi_\eps(\tau, x) \, dx
= \int \int_K u \cdot (\d_t \varphi_\eps + b \d_x \varphi_\eps) \, dx \, dt \to 0
\end{equation}
as $\eps \to 0$. (Indeed, $|u \cdot (\d_t \varphi_\eps + b \d_x \varphi_\eps)| \le \|f\|_{C^1} \|\rho\|_{L^\infty(K)} (1 + \|b\|_\infty) |u| \in L^1(K)$.)
Since $H_\eps(\tau, \cdot) \to H(\tau, \cdot)$ uniformly on $[x_0, y_0]$, the left-hand side of the equality above converges to $\int_\R u(\tau, x) f(H(\tau,x)) \, dx$. We have thus proved that
\begin{equation}\label{e-varid}
\int u(\tau,x) f(H(\tau, x)) \, dx = 0
\end{equation}
for all $f \in C^1_c(I_\tau)$. Approximating $f\in C_c(I_\tau)$ with a sequence of functions from $C^1_c(I_\tau)$ it is easy to see that \eqref{e-varid} holds for any $f \in C_c(I_\tau)$.
Fix $\psi \in C_c(\R)$. Since $x \mapsto H(\tau, x)$ is strictly monotone and continuous, it has a continuous inverse, and therefore we can find $f\in C_c(I_\tau)$ such that $\psi(x) = f(H(\tau,x))$ for all $x\in \R$. Therefore by \eqref{e-varid}
\begin{equation}
\int u(\tau, x) \psi(x) \, dx = 0
\end{equation}
for any $\psi \in C_c(\R)$. Hence $u(\tau, \cdot) \equiv 0$. Since this argument is valid for any $\tau\in (0,T) \setminus N$, we conclude that $u(\tau, \cdot) = 0$ a.e. for a.e. $\tau\in I$.
\end{proof}
From the proof above one can also deduce the following result:
\begin{theorem}
Suppose that a vector field $b\in L^\infty(I\times \R; \R)$ admits a density $\rho \in L^1_\loc(I\times \R; \R)$ such that $\rho(t,x)>0$ for a.e. $(t,x) \in I\times \R$.
If $u\in L^\infty_\loc(I\times \R; \R)$ is a weak solution of
\eqref{Cauchy-problem} with $\bar u \equiv 0$ then $u(t,x)=0$ for a.e. $(t,x)\in I \times \R$.
\end{theorem}
\begin{proof}[The proof repeats the proof of Theorem~\ref{th-L1-unique}.]
Only when passing to the limit in \eqref{e-limit} we have to argue slightly differently.
Namely, since $\tilde \rho \in L^1_\loc([-T,T]\times \R)$ it follows that
\begin{equation*}
\begin{aligned}
&\|u \cdot (\d_t \varphi_\eps + b \d_x \varphi_\eps)\|_{L^1(K)}
\le \|f'\|_\infty \|u\|_{L^\infty(K)} \cdot \|-(\tilde \rho \tilde b)_\eps + b \tilde \rho_\eps\|_{L^1(K)}
\\
&\le \|f'\|_\infty \|u\|_{L^\infty(K)} \cdot \bigl( \|(\tilde \rho \tilde b)_\eps - \tilde \rho \tilde b\|_{L^1(K)} + \|b \tilde \rho_\eps - \tilde \rho \tilde b\|_{L^1(K)} \bigr) \to 0
\end{aligned}
\end{equation*}
as $\eps \to 0$.
\end{proof}
\section{Lagrangian flows and existence of weak solutions}
Suppose that $b\in L^\infty(I\times \R; \R)$ is a nearly incompressible vector field with density $\rho \in L^\infty(I\times\R; \R)$. Let $H\in \Lip([0,T]\times \R)$ be a Hamiltonian associated with $(\rho,b)$.
By \eqref{e-dxH} and Fubini's theorem for a.e. $t\in I$ for all $x,y \in \R$ such that $x<y$ it holds that
\begin{equation}\label{e-H-difference1}
C_1 (y-x) \le H(t,y) - H(t,x) \le C_2 (y-x),
\end{equation}
where $C_1 = 1/C$ and $C_2 = C$, with $C$ the constant from Definition~\ref{def-ni}.
By continuity of $H$ \eqref{e-H-difference1} holds for all $t\in \bar I$.
Hence for any $t\in \bar I$ the function $x\mapsto H(t,x)$ is strictly increasing and \emph{bilipschitz}.
Consequently, for any $h \in \R$ there exists a unique $Y(t,h)\in \R$ such that $H(t,Y(t,h)) = h$.
By \eqref{e-H-difference1} for any $t\in [0,T]$ there exists a function $\rho_t \in L^\infty$ such that $C_1 \le \rho_t \le C_2$ a.e. and
\begin{equation*}
\d_x H(t,x) = \rho_t(x)
\end{equation*}
in $\ss D'(\R)$. Note that by continuity of $H$ the function $I\ni t \mapsto \rho_t \in L^\infty(\R)$ is $*$-weak continuous
and therefore $\rho$ solves the Cauchy problem for the continuity equation \eqref{Cauchy-problem} with the initial data $\rho_0$. In view of \eqref{e-dxH} for a.e. $t\in I$ we have $\rho(t,x) = \rho_t(x)$ for a.e. $x$.
Since we can always redefine $\rho$ on a negligible set, for convenience we will
assume that the last equality holds for \emph{all} $t\in [0,T]$.
\begin{lemma}\label{l-level-sets}
The function $Y$ is Lipschitz continuous on $[0,T]\times \R$.
Moreover, there exists a negligible set $M\subset \R$ such that for all $h\in \R \setminus M$
\begin{equation}\label{e-ODE}
\d_t Y(t,h) = b(t,Y(t,h))
\end{equation}
in $\ss D'(I)$.
Finally, for all $t\in [0,T]$
\begin{equation}\label{e-Y-pushforward}
Y(t,\cdot)_\# \Le = \rho(t, \cdot) \Le.
\end{equation}
\end{lemma}
Here $f_\# \mu$ denotes the image of the measure $\mu$ under the map $f$ and $\Le$ denotes the Lebesgue measure (we use the notation from \cite{AFP}).
\begin{proof}
By \eqref{e-H-difference1} for any $h, h' \in \R$ it holds that
\begin{equation*}
C_1 |Y(t,h) - Y(t,h')| \le |H(t,Y(t,h)) - H(t,Y(t,h'))| = |h-h'|
\end{equation*}
hence the function $h\mapsto Y(t,h)$ is Lipschitz continuous with Lipschitz constant $1/C_1$.
Fix $(t,x) \in I\times \R$.
In view of \eqref{e-dxH} and Fubini's theorem for a.e. $(t',x') \in I \times \R$ such that $|x'-x| > \|b\|_\infty |t'-t|$
it holds that
\begin{equation}\label{e-H-difference}
|H(t',x') - H(t,x)| \ge C_1 (|x'-x| - \|b\|_\infty |t'-t|).
\end{equation}
By continuity of $H$, \eqref{e-H-difference} holds for \emph{all} $(t',x')\in I\times \R$.
Hence for any $h\in \R$ and any $(t,x)\in H^{-1}(h)$ the level set $H^{-1}(h)$ is contained in a cone:
\begin{equation}
H^{-1}(h) \subset \{(t',x')\in I \times \R : |x'-x| \le \|b\|_\infty |t'-t|\},
\end{equation}
therefore for any $h\in \R$ the function $t \mapsto Y(t,h)$ is Lipschitz continuous with Lipschitz constant $\|b\|_\infty$.
In view of Rademacher's theorem the functions $H$ and $Y$ are differentiable a.e. on $I\times \R$.
Hence by chain rule and taking into account \eqref{e-dxH}
we obtain
\begin{equation*}
\begin{aligned}
0 &= \d_t h = \d_t H(t, Y(t,h)) = \d_t H(t, Y(t,h)) + \d_x H(t,Y(t,h)) \d_t Y(t,h)\\
&= - \rho(t,Y(t,h)) b(t,Y(t,h)) + \rho(t,Y(t,h)) \d_t Y(t,h).
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
1 &= \d_h h = \d_h H(t, Y(t,h)) = \d_x H(t,Y(t,h)) \d_h Y(t,h)\\
&= \rho(t,Y(t,h)) \d_h Y(t,h)
\end{aligned}
\end{equation*}
for a.e. $(t,h)\in I \times \R$.
Hence \eqref{e-ODE} holds and moreover for any $\varphi \in C_c(\R)$
\begin{equation*}
\begin{aligned}
\int \varphi \, dY(t,\cdot)_\# \Le &= \int \varphi(Y(t,h)) \, dh \\
&=\int \varphi(Y(t,h)) \rho(t,Y(t,h)) \d_h Y(t,h) \, dh
= \int \varphi (y) \rho(t,y) \, dy
\end{aligned}
\end{equation*}
(by Area formula, see e.g. \cite{AFP}). Thus \eqref{e-Y-pushforward} is proved.
\end{proof}
We define the \emph{flow} $X$ of $b$ as
\begin{equation}\label{e-X-def}
X(t,x) := Y(t, H(0,x)).
\end{equation}
Note that $X$ is independent of the additive constant in the definition of $H$.
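In the toy example above ($b \equiv c$, $\rho \equiv 1$, $H(t,x) = x - ct$)
one finds $Y(t,h) = h + ct$ and hence $X(t,x) = Y(t, H(0,x)) = x + ct$,
i.e. $X$ is the classical flow of $b$, as expected.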
In order to show that $X$ is independent of the choice of $\rho$ we recall the definition of regular Lagrangian flow (see \cite{CrippaThesis}) and the corresponding uniqueness result:
\begin{definition}\label{d-rlf}
Let $b\colon [0,T]\times \R^d \to \R^d$ be a bounded measurable vector field.
We say that a map $X\colon [0,T]\times \R^d \to \R^d$ is a regular Lagrangian flow relative to $b$ if
\begin{enumerate}
\item for $\Le^d$-a.e. $x\in \R^d$ the map $t \mapsto X(t,x)$ is an absolutely continuous integral solution of $\dot \gamma(t) = b(t,\gamma(t))$ for $t\in [0,T]$ with $\gamma(0)=x$;
\item there exists a constant $L>0$ independent of $t$ such that $X(t,\cdot)_\# \Le^d \le L \Le^d$.
\end{enumerate}
\end{definition}
\begin{proposition}[see \cite{CrippaThesis}, Theorem 6.4.1]\label{p-regular-lagrangian-flow}
Let $b\colon [0,T]\times \R^d \to \R^d$ be a bounded measurable vector field.
Assume that the only weak solution $u\in L^\infty(I\times \R^d)$ of \eqref{Cauchy-problem} with $\bar u = 0$ is $u =0$.
Then the regular Lagrangian flow relative to $b$, if it exists, is unique. Assume in addition that \eqref{Cauchy-problem}
with $\bar u = 1$ has a positive solution $u\in L^\infty(I\times \R^d)$. Then we have existence of a regular Lagrangian flow relative to $b$.
\end{proposition}
By Lemma \ref{l-level-sets} the flow $X$ defined in \eqref{e-X-def} is a regular Lagrangian flow of $b$.
Indeed, by \eqref{e-Y-pushforward}
\begin{equation}\label{e-X-pushforward}
X(t,\cdot)_\# (\rho(0,\cdot) \Le) = Y(t,\cdot)_\# H(0,\cdot)_\# (\rho(0,\cdot)\Le) = Y(t,\cdot)_\# \Le = \rho(t,\cdot) \Le.
\end{equation}
Since Theorem~\ref{th-L1-unique} implies uniqueness of bounded weak solutions of \eqref{Cauchy-problem}, Proposition~\ref{p-regular-lagrangian-flow} immediately implies uniqueness of regular Lagrangian flow of $b$. Hence $X$ is independent of the choice of the density $\rho$.
\begin{theorem}\label{th-existence}
Let $b\in L^\infty(I\times \R; \R)$ be nearly incompressible with the density $\rho$.
Let $X$ be the flow of $b$. Then for any $\bar u\in L^1_\loc(\R)$
there exists a function $u\in L^1_\loc(I\times \R)$ such that for a.e. $t\in I$
\begin{equation*}
u(t,\cdot) \Le = X(t,\cdot)_\# (\bar u \Le)
\end{equation*}
and the function $u$ solves \eqref{Cauchy-problem}.
\end{theorem}
\begin{proof}
It is straightforward to check that for any $t\in [0,T]$ the inverse $X^{-1}(t,\cdot)$ of the function $X(t,\cdot)$
is given by $X^{-1}(t,x) = Y(0,H(t,x))$. We define $u(t,x)$ as follows:
\begin{equation*}
u(t,x):= \frac{\bar u(X^{-1}(t,x))}{\rho(0,X^{-1}(t,x))} \rho(t,x).
\end{equation*}
Then
\begin{equation*}
\begin{aligned}
u(t,\cdot) \Le &= \frac{\bar u (X^{-1}(t,\cdot))}{\rho (0, X^{-1}(t,\cdot))}X(t,\cdot)_\# (\rho(0,\cdot) \Le)
\\ &= X_\# \left( \frac{\bar u(\cdot)}{\rho(0,\cdot)}\rho(0,\cdot) \Le \right) = X(t,\cdot)_\# (\bar u \Le)
\end{aligned}
\end{equation*}
Therefore for any $\varphi\in C^1_c([0,T)\times \R)$, by Definition~\ref{d-rlf} and the identity above,
\begin{equation*}
\begin{aligned}
&\int_I\int_\R (\d_t \varphi + b \d_x \varphi) u(t,x) \, dx \, dt
= \int_I\int_\R (\d_t \varphi + b \d_x \varphi) dX(t,\cdot)_\# (\bar u \Le) \, dt
\\&= \int_I\int_\R [(\d_t \varphi)(t,X(t,x)) + b(t,X(t,x)) (\d_x \varphi)(t,X(t,x))] \bar u(x) \, dx \, dt
\\&= \int_I\int_\R \d_t(\varphi(t,X(t,x))) \bar u(x) \, dx \, dt
\\&= -\int_\R \varphi(0,x) \bar u(x) \, dx. \qedhere
\end{aligned}
\end{equation*}
\end{proof}
\begin{proof}[Proof of Theorem~\ref{t-main}]
Existence follows from Theorem~\ref{th-existence} and uniqueness follows from Theorem~\ref{th-L1-unique}.
\end{proof}
\begin{remark}
It would be interesting to study existence and uniqueness of weak solutions of \eqref{Cauchy-problem}
for vector fields admitting non-negative density which may vanish on the sets of positive measure.
Such vector fields (in particular in dimension one) are relevant to the Kuramoto-Sakaguchi equation \cite{AmadoriHaPark}.
\end{remark}
\section{Compactness of flows}
In \cite{Bressan03} Bressan has proposed the following conjecture:
\begin{conjecture}[\cite{Bressan03}]\label{c-Bressan}
Consider a sequence of smooth vector fields $b_n\colon I\times \R^d \to \R^d$
which are uniformly bounded, i.e. $|b_n| \le C$ for some $C>0$ for all $n\in \N$.
Let $X_n=X_n(t,x)$ denote the classical flow of $b_n$, i.e.
\begin{equation*}
X_n(0,x) = x, \qquad \d_t X_n(t,x) = b_n(t,X_n(t,x)).
\end{equation*}
Suppose that there exist constants $C_1$, $C_2$, $C_3$ such that
\begin{equation}
\begin{gathered}
C_1 \le |\det (\nabla_x X_n(t,x))| \le C_2, \quad (t,x)\in I\times \R^d, \notag\\
\|\nabla_x b_n\|_{L^1} \le C_3. \label{e-BV-bound}
\end{gathered}
\end{equation}
Then the sequence $X_n$ is strongly precompact in $L^1_\loc(I\times \R^d; \R^d)$.
\end{conjecture}
\begin{theorem}\label{th-compactness}
Consider a sequence of one-dimensional vector fields $b_n \in L^\infty(I\times \R; \R)$
which are uniformly bounded, i.e. $|b_n| \le C$ for some $C>0$ for all $n\in \N$.
Let $X_n=X_n(t,x)$ denote the (regular Lagrangian) flow of $b_n$.
Suppose that for each $n\in \N$ the vector field $b_n$ is nearly incompressible with density $\rho_n$
and there exist constants $C_1, C_2$ such that
\begin{equation*}
C_1 \le \rho_n \le C_2
\end{equation*}
a.e. on $I\times \R$ for all $n\in \N$.
Then the sequence $X_n$ is precompact in $C(K)$ for any compact $K\subset I\times \R$.
\end{theorem}
\begin{proof}
By \eqref{e-X-def} and the estimates from the proof of Lemma~\ref{l-level-sets}
one can easily deduce that for any $n\in \N$
\begin{equation*}
|X_n(t,x) - X_n(t',x')| \le \frac{C_2}{C_1}|x-x'| + C |t-t'|
\end{equation*}
for all $x,x'\in \R$ and $t,t'\in[0,T]$. Therefore it remains to apply Arzel\`a-Ascoli theorem.
\end{proof}
\begin{remark}
Theorem~\ref{th-compactness} shows that in the one-dimensional case Conjecture~\ref{c-Bressan} holds even without assuming the BV bound \eqref{e-BV-bound}.
A quantitative version of Conjecture~\ref{c-Bressan} assuming only the BV bound \eqref{e-BV-bound} (without near incompressibility) has been established in \cite{StefanoBianchini2006}.
\end{remark}
\section{Acknowledgements}
This work is supported by the Russian Science Foundation under grant \No 14-50-00005.
The author would like to thank Debora Amadori, Paolo Bonicatto, Fran\c{c}ois Bouchut and Gianluca Crippa for interesting discussions of this work and their valuable remarks.
\bibliographystyle{unsrt}
\section{Introduction}
The fluctuation-dissipation theorem (FDT) is one of the cornerstones
of modern statistical physics. Roughly speaking, the
fluctuation-dissipation theorem states that for dynamical systems at
statistical equilibrium the average response to small external
perturbations can be calculated through the knowledge of suitable
correlation functions of the unperturbed dynamical system. The
fluctuation-dissipation theorem has great practical use in a variety
of settings involving statistical equilibrium of baths of identical
gas or liquid molecules, Ornstein-Uhlenbeck Brownian motion, motion of
electric charges, turbulence, quantum field theory, chemical physics,
physical chemistry and other areas. The general advantage provided by
the fluctuation-dissipation theorem is that one can successfully
predict the response of a dynamical system at statistical equilibrium
to an arbitrary small external perturbation without ever observing the
behavior of the perturbed system, which offers great versatility and
insight in understanding behavior of dynamical processes near
equilibrium in numerous scientific applications
\cite{EvaMor,KubTodHas}. In particular, there has been a profound
interest among the atmospheric/ocean science community to apply the
fluctuation-dissipation theorem to predict global climate changes
responding to variation of certain physical parameters
\cite{Bel,Lei,CarFalIsoPurVul,GriBra,Gri,GriBraDym,GriBraMaj,GriDym,MajAbrGro,CohCra},
where the FDT has been used largely in its classical formulation
\cite{Ris}. A vivid demonstration of high predictive skill in
low-frequency climate response despite structural instability of
statistical states is given in \cite{MajAbrGer}.
Recently, Majda and the author \cite{AbrMaj5,AbrMaj4,AbrMaj6}
developed and tested a novel computational algorithm for predicting
the mean response of nonlinear functions of states of a chaotic
dynamical system to small change in external forcing based on the
FDT. The major difficulty in this situation is that the probability
measure in the limit as time approaches infinity in this case is
typically a Sinai-Ruelle-Bowen probability measure which is supported
on a large-dimensional (often fractal) set and is usually not
absolutely continuous with respect to the Lebesgue measure
\cite{EckRue,You}. In the context of Axiom A attractors, Ruelle
\cite{Rue1,Rue2} has adapted the classical calculations for FDT to
this setting. The geometric algorithm (also called the short-time FDT,
or ST-FDT algorithm in \cite{AbrMaj5,AbrMaj4,AbrMaj6}) is based on the
ideas of \cite{Rue,Rue2} and takes into account the fact that the
dynamics of chaotic nonlinear forced-dissipative systems often reside
on chaotic fractal attractors, where the classical FDT formula of the
fluctuation-dissipation theorem often fails to produce satisfactory
response prediction, especially in dynamical regimes with weak and
moderate chaos and slower mixing. It has been discovered in
\cite{AbrMaj4,AbrMaj5,AbrMaj6} that the ST-FDT algorithm is an
extremely precise response approximation for short response times, and
can be blended with the classical FDT algorithm with Gaussian
approximation of the state probability density (quasi-Gaussian FDT
algorithm, or qG-FDT) for longer response times to alleviate
undesirable effects of expanding Lyapunov directions (which cause
numerical instability in ST-FDT for longer response times). Further
developing the ST-FDT response algorithm for practical applications,
in \cite{Abr5} the author designed a computationally inexpensive
method for ST-FDT using the reduced-rank tangent map, and in
\cite{Abr6} the ST-FDT algorithm is adapted for the response on slow
variables of multiscale dynamics, which improves its computational
stability and simultaneously reduces computational expense.
However, dynamical systems describing real-world processes are often
driven by a stochastic forcing. In this setting, the traditional
approach is to use the classical FDT algorithm, which computes the
linear response to small external forcing as a correlation function
along a single long-term trajectory. Typically, it is assumed that the
single long-term trajectory samples the statistical equilibrium state
of the model, however, suitable generalizations for dynamics with
time-periodic forcing can also be made \cite{MajWan,MajGer}. A
significant drawback of the classical FDT approach is that its
computational algorithm requires the statistical state probability
density together with its derivative to be explicitly computed, which
is typically not possible for complex nonlinear systems. Usually, an
approximation is used, such as the Gaussian approximation with
suitable mean state and covariance matrix
\cite{AbrMaj4,AbrMaj5,AbrMaj6}. In this case, if the actual
statistical state is far from the Gaussian, the predicted response is
usually considerably different from what is observed by direct model
perturbation (so called ideal response
\cite{AbrMaj4,AbrMaj5,AbrMaj6}).
On the other hand, the ST-FDT response algorithm is observed to be
consistently superior to the classical FDT with Gaussian approximation
for deterministic chaotic dynamical systems with strongly non-Gaussian
statistical states for response times before the numerical instability
occurs. In this work we adapt the ST-FDT linear response algorithm to
be used with stochastically forced dynamics (further called stochastic
ST-FDT, or SST-FDT). Below we observe that the SST-FDT response
algorithm, adapted to stochastically driven dynamics and blended with
the qG-FDT algorithm to avoid numerical instability, is also generally
superior to the classical FDT with Gaussian approximation of the
statistical state for both the additive and multiplicative noise, just
as the ST-FDT algorithm in \cite{AbrMaj4,AbrMaj5,AbrMaj6} for chaotic
deterministic systems. The manuscript is organized as follows. In
Section \ref{sec:fdt_stoch} we develop the SST-FDT formula for general
time-dependent stochastically forced dynamics, and design a practical
computational algorithm for autonomous dynamics with invariant
probability measure. In Section \ref{sec:l96_app} we test the new
algorithm for the stochastically driven Lorenz 96 model
\cite{Lor,LorEma}. Section \ref{sec:summary} summarizes the results of
this work.
\section{Fluctuation-dissipation theorem for stochastically driven
systems}
\label{sec:fdt_stoch}
Here we consider an It\=o stochastic differential equation (SDE) of the
form
\begin{equation}
\label{eq:dyn_sys}
\dif\BS x = \BS f_\alpha(\BS x,t)\dif t+\BS\sigma(\BS x,t)\dif\BS W_t,
\end{equation}
where $\BS x=\BS x(t)\in\mathbb R^N$, $\BS f_\alpha:[\mathbb R^N\times
T]\to\mathbb R^N$ and $\BS \sigma:[\mathbb R^N\times
T]\to\mathbb R^{N\times K}$ are smooth nonlinear functions, and $\BS W_t$ is the
$K$-dimensional Wiener process. Additionally, $\BS f$ depends on a
scalar parameter $\alpha$. We say that the SDE in \eqref{eq:dyn_sys}
is {\em unperturbed} if $\alpha=0$, or {\em perturbed} otherwise. We
also adopt the notation $\BS f\equiv\BS f_0$, with the
assumption
\begin{equation}
\label{eq:f_alpha}
\parderiv{}\alpha\BS f_\alpha(\BS x,t)|_{\alpha=0}=\BS B(\BS x)\BS\eta(t),
\end{equation}
where $\BS B(\BS x)$ is an $N\times L$ matrix-valued function, and
$\BS \eta(t)$ is a $L$-vector valued function. The practical meaning
of the above assumption will become clear below.
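A typical example to keep in mind is a perturbation which is additive
in $\alpha$,
\begin{equation*}
\BS f_\alpha(\BS x,t) = \BS f(\BS x,t) + \alpha\,\BS B(\BS x)\BS\eta(t),
\end{equation*}
for which \eqref{eq:f_alpha} holds exactly: $\BS B$ prescribes the
spatial structure of the forcing perturbation, and $\BS\eta$ its time
dependence.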
Let $A(\BS x)$ be a nonlinear function of $\BS x$, and let
$\expecta{A}{\BS x}{t_0}{t}{\alpha}$, where $t>0$ is the elapsed time
after $t_0$, denote the expectation of $A$ at time $t_0+t$ over all
realizations of the Wiener process in \eqref{eq:dyn_sys}, under the
condition that $\BS x(t_0)=\BS x$ (with the short notation
$\expecta{A}{\BS x}{t_0}{t}{0}=\expect{A}{\BS x}{t_0}{t}$). Let $\BS x$ at
the time $t_0$ be distributed according to a probability measure
$\rho_{t_0}$, so that the average value of $A$ at time $t_0$ is
\begin{equation}
\langle A\rangle(t_0)=\rho_{t_0}(A)=\int_{\mathbb R^N} A(\BS
x)\dif\rho_{t_0}(\BS x),
\end{equation}
where $\dif\rho_{t_0}(\BS x)$ denotes the measure of the infinitesimal
Lebesgue volume $\dif\BS x$ associated with $\BS x$. Then, for time
$t_0+t$, the average of $A$ for the perturbed system in
\eqref{eq:dyn_sys} is given by
\begin{equation}
\langle A\rangle_\alpha(t_0+t)=\rho_{t_0}\left(\expecta{A}{\BS x}
{t_0}{t}{\alpha}\right)=\int_{\mathbb R^N}
\expecta{A}{\BS x}{t_0}{t}{\alpha}\dif\rho_{t_0}(\BS x).
\end{equation}
In general, for the same initial distribution $\rho_{t_0}$, the
average value $\langle A\rangle_\alpha(t_0+t)$ depends on the value of
$\alpha$. Here, we define the {\em average response} $\delta\langle
A\rangle_\alpha(t_0+t)$ as
\begin{equation}
\label{eq:av_resp}
\delta\langle A\rangle_\alpha(t_0+t)=\int_{\mathbb
R^N}\left(\expecta{A}{\BS x}{t_0}{t}{\alpha}-
\expect{A}{\BS x}{t_0}{t}\right)\dif\rho_{t_0}(\BS x).
\end{equation}
The meaning of the average response in \eqref{eq:av_resp} is the
following: for the same initial average value of $A$ it provides the
difference between the future average values of $A$ for the perturbed
and unperturbed dynamics in \eqref{eq:dyn_sys}.
If $\alpha$ is small, we can formally linearize \eqref{eq:av_resp}
with respect to $\alpha$ by expanding in Taylor series around
$\alpha=0$ and truncating to the first order, obtaining the following
general linear fluctuation-response formula:
\begin{equation}
\label{eq:lin_resp}
\delta\langle A\rangle_\alpha(t_0+t)=\alpha\int_{\mathbb R^N}\partial_\alpha
\expect{A}{\BS x}{t_0}{t}\dif\rho_{t_0}(\BS x),
\end{equation}
where we use the short notation
\begin{equation}
\partial_\alpha\bullet\equiv\left.\parderiv{\bullet_\alpha}
\alpha\right|_{\alpha=0}.
\end{equation}
\subsection{Stochastic short-time linear response}
To compute the general linear fluctuation-response formula in
\eqref{eq:lin_resp}, we need a suitable algorithm for
$\partial_\alpha\expect{A}{\BS x}{t_0}{t}$. Let
$\BS x(t_0+t)=\solna{t_0}{t}{\alpha}\BS x$ be the trajectory of
\eqref{eq:dyn_sys} starting at $\BS x$ at $t_0$ for a particular
realization of the Wiener process $\BS W_{[t_0\ldots t_0+t]}$. Then, the
expectation $\expecta{A}{\BS x}{t_0}{t}{\alpha}$ is given by
\begin{equation}
\expecta{A}{\BS x}{t_0}{t}{\alpha}=\expec\left[A
\left(\solna{t_0}{t}{\alpha}\BS x\right)\right],
\end{equation}
where the expectation in the right-hand side is taken with respect to
all Wiener paths. Therefore,
\begin{equation}
\label{eq:expect_partial}
\partial_\alpha\expect{A}{\BS x}{t_0}{t}=
\expec\left[DA\left(\soln{t_0}{t}\BS x\right)
\partial_\alpha\soln{t_0}{t}\BS x\right],
\end{equation}
where $DA$ denotes the derivative of $A$ with respect to its
argument. For $\partial_\alpha\soln{t_0}{t}\BS x$, by taking the
difference between the perturbed and unperturbed versions of
\eqref{eq:dyn_sys} and linearizing with respect to $\alpha$ at
$\alpha=0$, we have
\begin{equation}
\label{eq:dif_soln}
\begin{split}
\dif\partial_\alpha\soln{t_0}{t}\BS x=\Big(D\BS
f\left(\soln{t_0}{t}\BS x, t_0+t\right)\dif t+D\BS
\sigma\left(\soln{t_0}{t}\BS x,t_0+t\right) \dif\BS
W_{t_0+t}\Big)\times\\\times
\partial_\alpha\soln{t_0}{t}\BS x+\partial_\alpha\BS f
\left(\soln{t_0}{t}\BS x,t_0+t\right)\dif t,
\end{split}
\end{equation}
where $D\BS f$ and $D\BS \sigma$ are Jacobians of $\BS f$ and
$\BS\sigma$, respectively. The above equation is a linear stochastic
differential equation for $\partial_\alpha\soln{t_0}{t}\BS x$ with
zero initial condition (as at $t_0$ both perturbed and unperturbed
solutions start with the same $\BS x$). It can be solved as follows:
let us first introduce the integrating factor $\tmap{\BS x}{t_0}{t}$
(an $N\times N$ matrix) given by the solution of the equation
\begin{equation}
\label{eq:tmap}
\begin{split}
\dif\tmap{\BS x}{t_0}{t}&=\Big(D\BS f\left(\soln{t_0}{t}\BS
x,t_0+t\right) \dif t+\\&+D\BS\sigma\left(\soln{t_0}{t}\BS
x,t_0+t\right)\dif\BS W_{t_0+t}\Big) \tmap{\BS x}{t_0}{t},\quad
\tmap{\BS x}{t_0}{0}=\BS I,
\end{split}
\end{equation}
and represent $\partial_\alpha\soln{t_0}{t}\BS x$ as a product
\begin{equation}
\partial_\alpha\soln{t_0}{t}\BS x=\tmap{\BS x}{t_0}{t}\BS
y^{t_0,t}_{\BS x},
\end{equation}
where $\BS y^{t_0,t}_{\BS x}$ is an $N$-vector. Then, for the It\=o
differential of $\partial_\alpha\soln{t_0}{t}\BS x$ we obtain
\begin{equation}
\label{eq:dif_soln_2}
\begin{split}
\dif\partial_\alpha\soln{t_0}{t}\BS x=\dif\tmap{\BS x}{t_0}{t}\BS
y^{t_0,t}_{\BS x}+ \tmap{\BS x}{t_0}{t}\dif\BS y^{t_0,t}_{\BS
x}=\Big(D\BS f\left(\soln{t_0}{t}\BS x,t_0+t\right) \dif
t+\\+D\BS\sigma\left(\soln{t_0}{t}\BS x,t_0+t\right)\dif\BS
W_{t_0+t}\Big) \tmap{\BS x}{t_0}{t}\BS y^{t_0,t}_{\BS x}+\tmap{\BS
x}{t_0}{t}\dif\BS y^{t_0,t}_{\BS x}=\\ =\Big(D\BS
f\left(\soln{t_0}{t}\BS x,t_0+t\right)\dif
t+D\BS\sigma\left(\soln{t_0}{t}\BS x,t_0+t\right)\dif\BS
W_{t_0+t}\Big)\times\\ \times\partial_\alpha\soln{t_0}{t}\BS
x+\tmap{\BS x}{t_0}{t}\dif\BS y^{t_0,t}_{\BS x}.
\end{split}
\end{equation}
Comparing the right-hand sides of \eqref{eq:dif_soln} and
\eqref{eq:dif_soln_2} we find that $\BS y^{t_0,t}_{\BS x}$ satisfies
\begin{equation}
\dif\BS y^{t_0,t}_{\BS x}=(\tmap{\BS
x}{t_0}{t})^{-1}\partial_\alpha\BS f \left(\soln{t_0}{t}\BS
x,t_0+t\right)\dif t,\quad\BS y^{t_0,0}_{\BS x}=0,
\end{equation}
with the formal solution
\begin{equation}
\BS y^{t_0,t}_{\BS x}=\int_0^t(\tmap{\BS
x}{t_0}{\tau})^{-1}\partial_\alpha\BS f
\left(\soln{t_0}{\tau}\BS x,t_0+\tau\right)\dif\tau.
\end{equation}
Therefore, $\partial_\alpha\soln{t_0}{t}\BS x$ is given by
\begin{equation}
\label{eq:dif_soln_3}
\partial_\alpha\soln{t_0}{t}\BS x= \int_0^t\tmap{\BS
x}{t_0}{t}(\tmap{\BS x}{t_0}{\tau})^{-1}\partial_\alpha\BS f
\left(\soln{t_0}{\tau}\BS x,t_0+\tau\right)\dif\tau.
\end{equation}
At this point, observe that the solution $\tmap{\BS x}{t_0}{t}$ of
\eqref{eq:tmap} can be represented as a product
\begin{equation}
\tmap{\BS x}{t_0}{t}=\tmap{\soln{t_0}{\tau}\BS x}{t_0+\tau}
{t-\tau}\tmap{\BS x}{t_0}{\tau},\quad \tau\leq t,
\end{equation}
due to the fact that a solution of \eqref{eq:tmap} can be multiplied
by an arbitrary constant matrix on the right and still remains a
solution. Then, \eqref{eq:dif_soln_3} becomes
\begin{equation}
\label{eq:deriv_soln}
\begin{split}
\partial_\alpha\soln{t_0}{t}\BS x=\int_0^t\tmapp{\soln{t_0}{\tau}\BS x}
{t_0+\tau}{t-\tau}{t_0+\tau}{t_0+t}\partial_\alpha\BS f
\left(\soln{t_0}{\tau}\BS x,t_0+\tau\right)\dif\tau.
\end{split}
\end{equation}
For smooth $\BS f_\alpha$ and $\BS \sigma$ in \eqref{eq:dyn_sys},
$\soln{t_0}{t}\BS x$ smoothly depends on $\BS x$ \cite{Kun}, and the
integrating factor $\tmap{\BS x}{t_0}{t}$ is in fact the tangent map for
the trajectory $\soln{t_0}{t}\BS x$:
\begin{equation}
\tmap{\BS x}{t_0}{t}=\parderiv{}{\BS x}\soln{t_0}{t}\BS x.
\end{equation}
With \eqref{eq:deriv_soln},
\eqref{eq:expect_partial} becomes
\begin{equation}
\partial_\alpha\expect{A}{\BS x}{t_0}{t}=\int_0^t\expec
\Big[DA\left(\soln{t_0}{t}\BS x\right)\tmapp{\soln{t_0}{\tau}\BS x}
{t_0+\tau}{t-\tau}{t_0+\tau}{t_0+t}
\partial_\alpha\BS f\left(\soln{t_0}{\tau}\BS x,t_0+\tau\right)
\Big]\dif\tau.
\end{equation}
Recalling \eqref{eq:f_alpha}, we write the above formula as
\begin{equation}
\partial_\alpha\expect{A}{\BS x}{t_0}{t}=\int_0^t
\expec\Big[DA\left(\soln{t_0}{t}\BS x\right)
\tmapp{\soln{t_0}{\tau}\BS x}{t_0+\tau}{t-\tau}{t_0+\tau}
{t_0+t}\BS B\left(\soln{t_0}{\tau}\BS x\right)\Big]\BS
\eta(t_0+\tau)\dif\tau.
\end{equation}
Then, the general linear response formula in \eqref{eq:lin_resp} can be
written as
\begin{equation}
\label{eq:fdt_response}
\delta\langle A\rangle_\alpha(t_0+t)=\alpha\int_0^t
\BS R_{SST}(t_0,t,\tau)\BS \eta(t_0+\tau)\dif\tau,
\end{equation}
where the {\em linear response operator} $\BS R_{SST}(t_0,t,\tau)$ is given by
\begin{equation}
\label{eq:fdt_operator}
\BS R_{SST}(t_0,t,\tau)=\expec\int_{\mathbb
R^N}DA\left(\soln{t_0}{t}\BS x\right)
\tmapp{\soln{t_0}{\tau}\BS x}{t_0+\tau}{t-\tau}{t_0+\tau}{t_0+t}
\BS B\left(\soln{t_0}{\tau}\BS x\right)\dif\rho_{t_0}(\BS x).
\end{equation}
Further we refer to \eqref{eq:fdt_operator} as the {\em stochastic
short-time fluctuation-dissipation theorem algorithm}, or SST-FDT
algorithm. The reason is that in practice the computation of the
tangent map in \eqref{eq:tmap} for large $t$ becomes numerically
unstable because of exponential growth due to positive Lyapunov
exponents (just as observed in
\cite{Abr5,Abr6,AbrMaj4,AbrMaj5,AbrMaj6} for deterministic chaotic
dynamics). Note that if the stochastic forcing is removed from
\eqref{eq:dyn_sys}, the SST-FDT response operator becomes the usual
ST-FDT from \cite{Abr5,Abr6,AbrMaj4,AbrMaj5,AbrMaj6}. Apparently,
\eqref{eq:fdt_operator} requires the average with respect to
$\rho_{t_0}$. If $\rho_{t_0}$ is not known explicitly, there are some
opportunities to replace the $\rho$-average with time average,
particularly for the autonomous dynamics with $\rho_{t_0}$ being the
invariant probability measure, and also for non-autonomous dynamical
systems with explicit time-periodic dependence (as done in
\cite{MajWan,MajGer} for classical FDT response).
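To make the algorithmic content of \eqref{eq:fdt_operator} concrete,
we include a minimal Monte-Carlo sketch (our illustration, not a
prescription of this paper): for autonomous dynamics sampled from the
invariant measure, the response operator depends only on $t-\tau$, so
it suffices to evolve an ensemble of trajectories together with their
tangent maps by the Euler--Maruyama method and average the
$DA$- and $\BS B$-weighted tangent maps over the ensemble. The model
below (a one-dimensional Ornstein--Uhlenbeck drift with constant
noise, $A(x)=x$, $\BS B = 1$) and all numerical parameters are
illustrative assumptions.
\begin{verbatim}
# Minimal Monte-Carlo sketch of the SST-FDT operator (assumptions:
# 1D Ornstein-Uhlenbeck drift, constant noise, A(x) = x, B = 1).
import numpy as np

rng = np.random.default_rng(0)

def f(x):   # drift of the unperturbed model (assumed OU: f = -x)
    return -x

def Df(x):  # Jacobian of the drift
    return -1.0

sigma = 0.5                  # constant noise, so D sigma vanishes
dt, n_steps, n_ens = 1e-3, 2000, 20000

# draw the ensemble from the OU invariant measure N(0, sigma^2 / 2)
x = rng.normal(0.0, sigma / np.sqrt(2.0), n_ens)
tmap = np.ones(n_ens)        # tangent maps (integrating factors)
R = np.empty(n_steps)        # response operator as a function of t

for k in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_ens)
    tmap += Df(x) * tmap * dt      # tangent-map equation; no noise term
    x += f(x) * dt + sigma * dW    # Euler-Maruyama step for the SDE
    R[k] = np.mean(tmap)           # DA = 1, B = 1: average DA * T * B

# for the OU model the exact answer is R(t) = exp(-t)
print(R[::500])
\end{verbatim}
For this linear model the tangent map is deterministic and the printed
values approach $e^{-t}$; for genuinely nonlinear models the same loop
applies verbatim, with $DA$, $D\BS f$ and $D\BS\sigma$ evaluated along
each sample trajectory.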
\subsection{Classical linear response}
The standard way to derive the classical linear response formula is
through the Fokker-Planck equation (or, as it is also called, the
forward Kolmogorov equation) for the perturbed system in
\eqref{eq:dyn_sys} by neglecting the terms of higher order than the
perturbation, as it is done in
\cite{AbrMaj4,AbrMaj5,AbrMaj6,MajAbrGro,Ris,MajWan}. However, for the
sake of clarity, here we show the derivation of the classical FDT
directly from \eqref{eq:lin_resp}. Under the assumption of absolute
continuity of $\rho_{t_0}$ with respect to the Lebesgue measure, that is,
$\dif\rho_{t_0}(\BS x)=p_{t_0}(\BS x)\dif\BS x$, where $p_{t_0}$ is the
probability density, we can also obtain a formal general expression for
the classical fluctuation-response formula. Using the notations
\begin{equation}
\label{eq:kolm-notations}
\begin{array}{c}
\displaystyle\kolma{\BS x}{t}{\alpha}=-\parderiv{}{\BS x}\cdot
(\BS f_\alpha(\BS x,t)\bullet)+\frac{1}{2}\left(\parderiv{}{\BS x}
\otimes\parderiv{}{\BS x}\right)
\cdot(\BS{\sigma\sigma}^T(\BS x,t)\bullet),\\
\displaystyle\Kolma{\BS x}{t_0}{t}{\alpha}=
\texp\left(\int_0^t\dif\tau \,\kolma{\BS x}{t_0+\tau}{\alpha}\right),
\end{array}
\end{equation}
which are, respectively, the Fokker-Planck and forward Kolmogorov
operators, we write the expectation $\expecta{A}{\BS x}{t_0}{t}{\alpha}$
in the form
\begin{equation}
\begin{split}
&\expecta{A}{\BS x}{t_0}{t}{\alpha}=\int_{\mathbb R^N}A(\BS
y)\Kolma{\BS y}{t_0}{t}{\alpha}\delta(\BS x-\BS y)\dif\BS
y=\\ =\int_{\mathbb R^N}&\Kolma{\BS y}{t_0}{t}{\alpha}A(\BS
y)\delta(\BS x-\BS y) \dif\BS y=\Kolmad{\BS x}{t_0}{t}{\alpha}A(\BS
x),
\end{split}
\end{equation}
where $\delta(\BS x)$ is the Dirac delta-function, and the adjoint is
taken with respect to the standard inner product under the
integral. Then, the general response formula with $\dif\rho_{t_0}(\BS
x)= p_{t_0}(\BS x)\dif\BS x$ becomes
\begin{equation}
\begin{split}
\delta\langle A\rangle_\alpha(t_0+t)&=\alpha\int_{\mathbb
R^N}\partial_\alpha \expect{A}{\BS x}{t_0}{t}p_{t_0}(\BS x)\dif \BS
x=\\=&\alpha\int_{\mathbb R^N}\partial_\alpha \Kolmd{\BS
x}{t_0}{t}A(\BS x)p_{t_0}(\BS x)\dif \BS x=\\=&\alpha\int_{\mathbb
R^N}A(\BS x)\partial_\alpha\Kolm{\BS x}{t_0}{t}p_{t_0}(\BS x)\dif\BS
x.
\end{split}
\end{equation}
It is not difficult to show that the parametric derivative of an
ordered exponential of a linear operator $L_\alpha(\BS x,t)$ is
computed as
\begin{equation}
\label{eq:texp_deriv}
\begin{split}
\parderiv{}\alpha&\texp\left(\int_{t_0}^t\dif\tau\,L_\alpha(\BS
x,\tau)\right)=\\ =&\int_{t_0}^t\dif\tau\,\texp\left(\int_\tau^t\dif
s\,L_\alpha(\BS x,s)\right) \parderiv{L_\alpha(\BS x,\tau)}\alpha\texp
\left(\int_{t_0}^\tau\dif s\,L_\alpha(\BS x,s)\right).
\end{split}
\end{equation}
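The identity \eqref{eq:texp_deriv} is straightforward to verify
numerically in finite dimensions. The following Python sketch is our
illustrative addition (the test family $L_\alpha(t)=L_0+\alpha\sin(t)V$,
the step sizes, and all names are our own choices, not part of the
derivation); it compares the right-hand side of \eqref{eq:texp_deriv}
against a central finite difference in $\alpha$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Finite-dimensional check of the ordered-exponential derivative
# formula, for the test family L_alpha(t) = L0 + alpha*sin(t)*V.
rng = np.random.default_rng(0)
n, dt, T = 3, 1e-3, 1.0
L0, V = rng.standard_normal((n, n)), rng.standard_normal((n, n))
ts = np.arange(0.0, T, dt)

def texp(alpha):
    # ordered exponential via a product of short-time propagators;
    # also store the partial propagators U(0 -> t_k)
    U, Us = np.eye(n), []
    for t in ts:
        Us.append(U)
        U = expm((L0 + alpha*np.sin(t)*V)*dt) @ U
    return U, Us

U_T, Us = texp(0.0)

# right-hand side: int_0^T texp(tau->T) dL/dalpha(tau) texp(0->tau) dtau,
# with texp(tau->T) = U(0->T) U(0->tau)^{-1}
rhs = sum(dt*(U_T @ np.linalg.inv(Uk)) @ (np.sin(t)*V) @ Uk
          for t, Uk in zip(ts, Us))

eps = 1e-5   # central finite difference in alpha at alpha = 0
fd = (texp(eps)[0] - texp(-eps)[0])/(2*eps)
print(np.linalg.norm(rhs - fd)/np.linalg.norm(fd))  # small, O(dt)
\end{verbatim}
With the step sizes above, the two sides agree up to the expected
$O(\Delta t)$ splitting error.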
As a result, we obtain
\begin{equation}
\begin{split}
\delta\langle A&\rangle_\alpha(t_0+t)=
\alpha\int_0^t\dif\tau\,\int_{\mathbb R^N}\Kolmd{\BS
x}{t_0+\tau}{t-\tau} \times\\\times &A(\BS
x)\partial_\alpha\kolm{\BS x}{t_0+\tau}\Kolm{\BS x}{t_0}{\tau}
p_{t_0}(\BS x)\dif\BS x=\\=\alpha\int_0^t&\dif\tau\,\int_{\mathbb R^N}
\expect{A}{\BS x}{t_0+\tau}{t-\tau}\partial_\alpha\kolm{\BS
x}{t_0+\tau} p_{t_0+\tau}(\BS x)\dif\BS x,
\end{split}
\end{equation}
where $p_{t_0+\tau}(\BS x)$ is given by
\begin{equation}
p_{t_0+\tau}(\BS x)=\Kolm{\BS x}{t_0}{\tau}p_{t_0}(\BS x).
\end{equation}
Recalling \eqref{eq:f_alpha}, we recover the classical linear
fluctuation-response formula in the form
\begin{equation}
\label{eq:class_resp}
\begin{split}
\delta\langle A&\rangle_\alpha(t_0+t)=\alpha\int_0^t\dif\tau\,
\int_{\mathbb R^N}\expect{A}{\BS
x}{t_0+\tau}{t-\tau}\times\\\times&\partial_\alpha
\kolm{\BS x}{t_0+\tau}p_{t_0+\tau}(\BS x)\dif\BS x=
\alpha\int_0^t\BS R_{class}(t_0,t,\tau)\BS\eta(t_0+\tau)\dif\tau,
\end{split}
\end{equation}
where the classical linear response operator $\BS R_{class}$ is given
by
\begin{equation}
\label{eq:class_operator}
\BS R_{class}(t_0,t,\tau)=-\expec\int_{\mathbb
R^N}A(\soln{t_0+\tau}{t-\tau}\BS x)
\parderiv{}{\BS x}\cdot(\BS B(\BS x)p_{t_0+\tau}(\BS x))\dif\BS x.
\end{equation}
Observe that, unlike \eqref{eq:fdt_operator}, in
\eqref{eq:class_operator} one has to know $p_{t_0+\tau}(\BS x)$ for
all response times explicitly to perform differentiation with respect
to $\BS x$. Usually, an approximation is used, such as the Gaussian
approximation \cite{AbrMaj4,AbrMaj5,AbrMaj6}.
\subsection{Special case for autonomous dynamics with ergodic
invariant probability measure}
Here we consider the case where $\BS f$ and $\BS \sigma$ in
\eqref{eq:dyn_sys} do not explicitly depend on $t$ (although $\BS
f_\alpha$ does with $\alpha\neq 0$), and we choose $\rho_{t_0}=\rho$
to be an ergodic invariant probability measure for
\eqref{eq:dyn_sys}. In this situation, one can replace the averaging
with respect to the measure $\rho$ with averaging over a single
long-term trajectory which starts with an initial condition $\BS x$ in
the support of $\rho$:
\begin{equation}
\begin{split}
\BS R_{SST}(t_0,t,\tau)=\expec\lim_{r\to\infty}\frac 1r\int_0^rDA
\left(\soln{t_0}{t}\soln{t_0-s}{s}\BS x\right)\times\\\times
\tmapp{\soln{t_0}{\tau}\soln{t_0-s}{s}\BS
x}{t_0+\tau}{t-\tau}{t_0+\tau}{t_0+t}
\BS B\left(\soln{t_0}{\tau}\soln{t_0-s}{s}\BS x\right)\dif s,
\end{split}
\end{equation}
where, without loss of generality, the starting time is $t_0-s$; that
is, the averaging occurs over the endpoints of $\soln{t_0-s}{s}\BS
x$. Combining the solution operators, we obtain
\begin{equation}
\begin{split}
\BS R_{SST}(t_0,t,\tau)=\expec\lim_{r\to\infty}\frac 1r\int_0^rDA
\left(\soln{t_0-s}{s+t}\BS x\right)\times\\\times
\tmapp{\soln{t_0-s}{s+\tau}\BS x}{s+\tau}{t-\tau}{t_0+\tau}{t_0+t}
\BS B\left(\soln{t_0-s}{s+\tau}\BS x\right)\dif s.
\end{split}
\end{equation}
Since the averaging over all independent realizations of the Wiener
process is needed, we can average over many statistically independent
chunks of the Wiener path along a single long-time trajectory by
setting $t_0=s-\tau$:
\begin{equation}
\BS R_{SST}(t,\tau)=\lim_{r\to\infty}\frac 1r\int_0^rDA
\left(\soln{-\tau}{s+t}\BS x\right)
\tmapp{\soln{-\tau}{s+\tau}\BS x}{s}{t-\tau}{t_0+\tau}{t_0+t}
\BS B\left(\soln{-\tau}{s+\tau}\BS x\right)\dif s.
\end{equation}
Finally, replacing $\BS x$ with $\soln{0}{-\tau}\BS x$ (which for
finite $\tau$ is also in the support of $\rho$), we find that
\begin{equation}
\BS R_{SST}(t,\tau)=\lim_{r\to\infty}\frac 1r\int_0^rDA
\left(\soln{0}{s+t-\tau}\BS x\right)
\tmapp{\soln{0}{s}\BS x}{s}{t-\tau}{t_0+\tau}{t_0+t}
\BS B\left(\soln{0}{s}\BS x\right)\dif s,
\end{equation}
or, denoting $\BS x(s)=\soln{0}{s}\BS x$,
\begin{equation}
\BS R_{SST}(t,\tau)=\lim_{r\to\infty}\frac 1r\int_0^rDA(
\BS x(s+t-\tau))\tmapp{\BS x(s)}
{s}{t-\tau}{s}{s+t-\tau}\BS B\left(\BS x(s)\right)\dif s.
\end{equation}
Now, the linear response formula in \eqref{eq:fdt_response} and the
response operator in \eqref{eq:fdt_operator} become, respectively,
\begin{equation}
\label{eq:st_fdt}
\begin{split}
\delta\langle A\rangle&_\alpha(t_0+t)=\alpha\int_0^t \BS
R_{SST}(t-\tau)\BS\eta(t_0+\tau)\dif\tau,\\ \BS
R_{SST}(t)&=\lim_{r\to\infty}\frac 1r\int_0^rDA( \BS x(s+t))\tmapp{\BS
x(s)}{s}{t}{s}{s+t}\BS B\left(\BS x(s)\right)\dif s.
\end{split}
\end{equation}
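In practice, the time average in \eqref{eq:st_fdt} is evaluated over a
discretised trajectory. The following Python sketch is our illustration
(not the code used for the experiments below) under simplifying
assumptions: the observable is $A(\BS x)=\BS x$, so $DA=I$; the noise is
additive, so that the tangent map obeys $\dif T/\dif t=J(\BS x(t))\,T$
along the noisy path, with $J$ the Jacobian of the drift; and
\texttt{B(x)} returns the $N\times N$ matrix from \eqref{eq:f_alpha}.
All function names are ours:
\begin{verbatim}
import numpy as np

def sst_response(f, jac_f, B, x0, dt, n_steps, n_lag,
                 noise_amp, stride=10, seed=0):
    # Single-trajectory estimate of R_SST(k*dt), k < n_lag, for
    # A(x) = x and additive noise of strength noise_amp.
    rng = np.random.default_rng(seed)
    N = x0.size
    # one long trajectory (Euler-Maruyama)
    xs = np.empty((n_steps + n_lag, N))
    xs[0] = x0
    for i in range(n_steps + n_lag - 1):
        dW = rng.standard_normal(N)*np.sqrt(dt)
        xs[i+1] = xs[i] + f(xs[i])*dt + noise_amp*dW
    # average T(s -> s+k*dt) B(x(s)) over window starts s
    R = np.zeros((n_lag, N, N))
    starts = range(0, n_steps, stride)
    for s in starts:
        T, Bx = np.eye(N), B(xs[s])
        for k in range(n_lag):
            R[k] += T @ Bx
            T += dt*jac_f(xs[s + k]) @ T  # forward Euler tangent map
    return R/len(starts)
\end{verbatim}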
In a similar fashion, for the classical linear response in
\eqref{eq:class_resp} we note that the Fokker-Planck operator $L_{FP}$
does not depend on $t$, and both the forward Kolmogorov operator
$\mathcal L_K$ and its adjoint do not depend on $t_0$. Taking into
account that $p_{t_0+\tau}(\BS x)=p(\BS x)$, where $p(\BS x)$ is the
invariant probability density, we write
\begin{equation}
\BS R_{class}(t)=-\expec\int_{\mathbb R^N}A(\soln{t_0}{t}\BS x)
\parderiv{}{\BS x}\cdot(\BS B(\BS x)p(\BS x))\dif\BS x,
\end{equation}
or, after replacing the $p$-average with the average over the
long-term trajectory,
\begin{equation}
\BS R_{class}(t)=-\expec\lim_{r\to\infty}\frac 1r\int_0^r A(\BS
x(s+t))\frac{\parderiv{}{\BS x}\cdot(\BS B(\BS x(s))p(\BS
x(s)))}{p(x(s))}\dif s.
\end{equation}
Here the expectation can be removed since the averaging over different
Wiener paths will automatically occur as the long time average is
computed. As a result, we obtain
\begin{equation}
\BS R_{class}(t)=-\lim_{r\to\infty}\frac 1r\int_0^rA(\BS x(s+t))
\frac{\parderiv{}{\BS x}\cdot(\BS B(\BS x(s))p(\BS x(s)))}{p(\BS
x(s))}\dif s.
\end{equation}
\section{Application for the stochastically driven Lorenz 96 model}
\label{sec:l96_app}
The 40-mode deterministic Lorenz 96 (L96) model was introduced by
Lorenz and Emanuel \cite{Lor,LorEma} as a simple model with
large-scale features of complex nonlinear geophysical systems. The
model is given by
\begin{equation}
\label{eq:L96}
\dot X_n=X_{n-1}(X_{n+1}-X_{n-2})-X_n+F,\quad 1\leq n\leq N,
\end{equation}
with periodic boundary conditions given by $X_{n\pm N}=X_n$, where
$N=40$, and $F$ being a constant forcing parameter. The model in
\eqref{eq:L96} is designed to mimic midlatitude weather and climate
behavior (in particular Rossby waves), so periodic boundary conditions
are appropriate. It is demonstrated in Chapter 2 of \cite{MajAbrGro}
that the dynamical regime of the L96 model varies with changing the
value of constant forcing $F$: weakly chaotic dynamical regimes with
$F=5,6$, strongly chaotic regime with $F=8$, and turbulent regimes
$F=12,16,24$ with self-similar time autocorrelation decay.
Here we apply the stochastic forcing to the L96 model as
\begin{equation}
\label{eq:SL96}
\dif X_k=\left[X_{k-1}(X_{k+1}-X_{k-2})-X_k+F\right]\dif t
+(\BS\sigma(\BS X))_k(\dif\BS W_t)_k,
\end{equation}
where $\BS\sigma:\mathbb R^N\to\mathbb R^N$ is a vector-valued
function of $\BS X$, $\BS W$ is an $N$-dimensional Wiener process, and
$(\dif\BS W_t)_k$ is the $k$-th component of $\dif\BS W$ (that is,
effectively $\BS\sigma$ is a diagonal matrix multiplying the vector
$\dif\BS W$). As the stochastic Lorenz 96 (SL96) model above does not
depend explicitly on time (except for the Wiener noise), we can assume
that it has an invariant probability measure $\rho$.
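For reference, a minimal Python sketch of the forward Euler
(Euler--Maruyama) integration of \eqref{eq:SL96} reads as follows; this
is our own illustration (names and parameters are our choices), with the
multiplicative-noise choice $\sigma_k=0.5X_k$ being one of the regimes
studied below:
\begin{verbatim}
import numpy as np

def sl96_step(X, F, dt, sigma, rng):
    # one Euler-Maruyama step of SL96; sigma(X) returns the diagonal
    # noise amplitudes sigma_k(X)
    adv = np.roll(X, 1)*(np.roll(X, -1) - np.roll(X, 2)) - X + F
    dW = rng.standard_normal(X.size)*np.sqrt(dt)
    return X + adv*dt + sigma(X)*dW

rng = np.random.default_rng(1)
N, F, dt = 40, 6.0, 1e-3
X = F + 0.01*rng.standard_normal(N)   # perturbed uniform state
for _ in range(100_000):              # 100 time units of spin-up
    X = sl96_step(X, F, dt, lambda x: 0.5*x, rng)
\end{verbatim}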
In this work, we perturb the SL96 model in \eqref{eq:SL96} by a small
parameter $\alpha$ as
\begin{equation}
\label{eq:SL96_pert}
\dif X_k=\left[X_{k-1}(X_{k+1}-X_{k-2})-X_k+F+\alpha\eta_k
\right]\dif t+(\BS\sigma(\BS X))_k(\dif\BS W_t)_k,
\end{equation}
where $\BS\eta\in\mathbb R^N$ is a constant forcing vector
perturbation, which is ``turned on'' at time $t_0=0$. With the
invariant probability measure $\rho$, and the perturbation given in
\eqref{eq:SL96_pert}, the general response formula in
\eqref{eq:lin_resp} becomes
\begin{equation}
\label{eq:resp_const_forc}
\begin{split}
\delta\langle A\rangle_\alpha(t)=\alpha\mathcal R(t)\BS\eta,\\
\mathcal R(t)=\int_0^t\BS R(\tau)\dif\tau,\\
\end{split}
\end{equation}
where subscripts for $\BS R$ and $\mathcal R$ are omitted as both the
SST-FDT and classical response operators apply. We also set the
observable $A(\BS x)=\BS x$, that is, the response of the mean state
is computed. As an approximation for the invariant probability density
for the classical response, we choose the Gaussian distribution with
the same mean and covariance as the actual invariant probability
measure, which are determined by averaging along the long-term time
series of unperturbed \eqref{eq:SL96}; we thus refer to it as the
quasi-Gaussian FDT (qG-FDT) as in \cite{AbrMaj4,AbrMaj5,AbrMaj6}. In
this setting, the short-time and quasi-Gaussian linear response
operators become
\begin{equation}
\begin{split}
&\BS R_{SST}(t)=\lim_{r\to\infty}\frac 1r\int_0^r\tmapp{\BS
x(s)}{s}{t}{s}{s+t} \dif s,\\\BS R_{qG}(t)=&\lim_{r\to\infty}\frac
1r\int_0^r\BS x(s+t)\BS C^{-1} (\BS x(s)-\bar{\BS x})\dif s,
\end{split}
\end{equation}
where $\bar{\BS x}$ and $\BS C$ are the mean state and covariance
matrix of the long-time series of unperturbed \eqref{eq:SL96}.
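The qG-FDT operator above is simply a lagged average over the time
series. A minimal Python sketch (our illustration, assuming a
precomputed trajectory array \texttt{X} of shape (samples, $N$) sampled
every $\Delta t$):
\begin{verbatim}
import numpy as np

def qg_response(X, n_lag):
    # R_qG(k*dt) from a long time series X of shape (samples, N):
    # lagged average of x(s+t) C^{-1} (x(s) - xbar)
    xbar = X.mean(axis=0)
    Y = X - xbar
    Cinv = np.linalg.inv(Y.T @ Y/len(Y))      # inverse covariance
    n = len(X) - n_lag
    R = np.empty((n_lag, X.shape[1], X.shape[1]))
    for k in range(n_lag):
        R[k] = X[k:k+n].T @ (Y[:n] @ Cinv)/n  # outer-product average
    return R
\end{verbatim}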
\subsection{Blended SST/qG-FDT response}
Following \cite{AbrMaj5,AbrMaj6}, we also compute the blended
SST/qG-FDT response as
\begin{equation}
\label{eq:blend_fdt}
\BS R_{SST/qG}(t)=\left[1-H\left(t-t_{\mbox{\scriptsize
cutoff}}\right)\right]\BS R_{SST}(t)+H\left(t-t_{\mbox{\scriptsize
cutoff}}\right)\BS R_{qG}(t),
\end{equation}
where the blending function $H$ is the Heaviside step-function. The
cut-off time $t_{\mbox{\scriptsize cutoff}}$ is chosen as
\begin{equation}
t_{\mbox{\scriptsize cutoff}}=\frac 3{\lambda_1},
\end{equation}
where $\lambda_1$ is the largest Lyapunov exponent (for details see
\cite{AbrMaj5,AbrMaj6}). This cut-off time allows the computation to
switch to $\BS R_{qG}$ just before the numerical instability develops
in $\BS R_{SST}$, thus avoiding it. For constant external forcing and
the Heaviside blending step-function, the blended response operator
becomes
\begin{equation}
\label{eq:STqG_mix_op}
\mathcal R_{SST/qG}(t)=\int_0^{t_{\mbox{\scriptsize cutoff}}}
\BS R_{SST}(\tau)\dif\tau + \int_{t_{\mbox{\scriptsize cutoff}}}^t
\BS R_{qG}(\tau)\dif\tau.
\end{equation}
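In a discretised computation, \eqref{eq:STqG_mix_op} amounts to
switching the integrand at the cut-off index. A short Python sketch
(ours), assuming $\BS R_{SST}(k\Delta t)$ and $\BS R_{qG}(k\Delta t)$
are stored as arrays of matrices:
\begin{verbatim}
import numpy as np

def blended_operator(R_sst, R_qg, dt, t_cutoff, t):
    # integrate R_SST up to t_cutoff and R_qG from t_cutoff to t
    # (left Riemann sums, consistent with forward Euler)
    kc, kt = int(t_cutoff/dt), int(t/dt)
    return dt*(R_sst[:kc].sum(axis=0) + R_qg[kc:kt].sum(axis=0))
\end{verbatim}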
\subsection{Computational experiments}
Below we perform computational experiments in the following setting:
\begin{itemize}
\item The number of variables (model size) $N=40$
\item Constant forcing $F=6$. The L96 model is observed to be weakly
chaotic in this regime \cite{AbrMaj4,AbrMaj5,AbrMaj6,MajAbrGro}, and
we would like to compare the responses for weakly chaotic
deterministic dynamics and the stochastically driven dynamics
\item The tangent map $\tmap{\BS x}{t_0}{t}$ in \eqref{eq:tmap} is
computed in the same fashion as in
\cite{Abr5,Abr6,AbrMaj4,AbrMaj5,AbrMaj6}
\item Forward Euler numerical scheme with time step $\Delta t=0.001$
for both \eqref{eq:tmap} and \eqref{eq:SL96}
\item The linear response is tested for the following settings of the
stochastic term $\sigma$:
\begin{itemize}
\item $\sigma_k=0$ (fully deterministic regime without stochastic
forcing)
\item $\sigma_k=1$ (additive noise)
\item $\sigma_k=0.2X_k$, $\sigma_k=0.5X_k$ (multiplicative noise)
\end{itemize}
\item We compute the linear response operators $\mathcal R_{SST}$,
$\mathcal R_{qG}$ and $\mathcal R_{SST/qG}$, which are given by
\eqref{eq:resp_const_forc} and \eqref{eq:STqG_mix_op}, and compare
them with the ideal response operator $\mathcal R_{ideal}$, which is
computed through the direct model perturbations
\cite{Abr5,Abr6,AbrMaj4,AbrMaj5,AbrMaj6}
\item The time-averaging is done along a time series of 10000 time
units
\item The ideal response operator $\mathcal R_{ideal}$ is computed via
direct perturbations of a 10000-member statistical ensemble
\item The comparison of the FDT response operators with the ideal
response operator is carried out by evaluating the $L_2$ relative
error
\begin{equation}
L_2\mbox{-error}=\frac{\|\mathcal R_{FDT}-\mathcal R_{ideal}\|}
{\|\mathcal R_{ideal}\|},
\end{equation}
and the correlation function
\begin{equation}
\mbox{Corr}=\frac{(\mathcal R_{FDT},\mathcal R_{ideal})}
{\|\mathcal R_{FDT}\|\|\mathcal R_{ideal}\|},
\end{equation}
where $(\cdot ,\cdot)$ denotes the standard Euclidean inner
product. Observe that the $L_2$ error shows the overall difference
between the FDT and ideal responses, while the correlation function
shows the extent to which the responses are collinear (that is, how
well the location of the response is determined, without considering
its magnitude); a short numerical sketch of both metrics is given
just after this list
\end{itemize}
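The two metrics above treat the response operators at a fixed response
time as matrices with the Frobenius norm and the associated inner
product; a minimal Python sketch (our illustration):
\begin{verbatim}
import numpy as np

def l2_error(R_fdt, R_ideal):
    return np.linalg.norm(R_fdt - R_ideal)/np.linalg.norm(R_ideal)

def correlation(R_fdt, R_ideal):
    return np.sum(R_fdt*R_ideal)/(np.linalg.norm(R_fdt)
                                  *np.linalg.norm(R_ideal))
\end{verbatim}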
\begin{figure}%
\picturehere{Errors_N40_F6_sigma0_errors.eps}%
\picturehere{Errors_N40_F6_sigma1_errors.eps}\\%
\picturehere{Errors_N40_F6_sigma0.2_x_errors.eps}%
\picturehere{Errors_N40_F6_sigma0.5_x_errors.eps}%
\caption{$L_2$-errors of the response operators for SL96 model, $N=40$,
$F=6$. Straight dotted vertical line denotes the blending cut-off time
for SST/qG-FDT. $\mathcal R_{ideal}$ denotes the intrinsic error in the ideal response due to slight nonlinearity.}%
\label{fig:l96_errors}%
\end{figure}%
\begin{figure}%
\picturehere{Errors_N40_F6_sigma0_corrs.eps}%
\picturehere{Errors_N40_F6_sigma1_corrs.eps}\\%
\picturehere{Errors_N40_F6_sigma0.2_x_corrs.eps}%
\picturehere{Errors_N40_F6_sigma0.5_x_corrs.eps}%
\caption{Correlations of the FDT response operators with the ideal
response operator for SL96 model, $N=40$, $F=6$. Straight dotted
vertical line denotes the blending cut-off time for SST/qG-FDT.}%
\label{fig:l96_corrs}%
\end{figure}%
\begin{figure}%
\picturehere{Snapshot_N40_F6_sigma0__t1.eps}%
\picturehere{Snapshot_N40_F6_sigma1__t1.eps}\\%
\picturehere{Snapshot_N40_F6_sigma0.2_x__t1.eps}%
\picturehere{Snapshot_N40_F6_sigma0.5_x__t1.eps}%
\caption{Snapshots of the response operators for SL96 model
at $T=1$, $N=40$, $F=6$.}%
\label{fig:l96_snapshot_t1}%
\end{figure}%
\begin{figure}%
\picturehere{Snapshot_N40_F6_sigma0__t2.eps}%
\picturehere{Snapshot_N40_F6_sigma1__t2.eps}\\%
\picturehere{Snapshot_N40_F6_sigma0.2_x__t2.eps}%
\picturehere{Snapshot_N40_F6_sigma0.5_x__t2.eps}%
\caption{Snapshots of the response operators for SL96 model
at $T=2$, $N=40$, $F=6$.}%
\label{fig:l96_snapshot_t2}%
\end{figure}%
\begin{figure}%
\picturehere{Snapshot_N40_F6_sigma0__t5.eps}%
\picturehere{Snapshot_N40_F6_sigma1__t5.eps}\\%
\picturehere{Snapshot_N40_F6_sigma0.2_x__t5.eps}%
\picturehere{Snapshot_N40_F6_sigma0.5_x__t5.eps}%
\caption{Snapshots of the response operators for SL96 model
at $T=5$, $N=40$, $F=6$.}%
\label{fig:l96_snapshot_t5}%
\end{figure}%
In Figure \ref{fig:l96_errors} we display the $L_2$ relative errors
between the ideal response operator and the FDT response operators,
together with the intrinsic error in the ideal response operator
(which is the result of slight nonlinearity in the ideal response due
to small but finite perturbations). Observe that in the fully
deterministic regime ($F=6$, $\sigma_k=0$) the SST-FDT response
provides a very precise prediction until the time $t\approx 3$, and
then the errors in the SST-FDT grow exponentially, which is
due to the positive Lyapunov exponents and numerical instability in
the tangent map. On the other hand, the qG-FDT response is not precise
(with errors reaching about 80\% by the time $t=1.5$), due to the fact that the
invariant probability measure associated with the deterministic regime
is highly non-Gaussian, and most probably not continuous with respect
to the Lebesgue measure (that is, it does not even possess a
density). Remarkably, if we look at the stochastically driven regimes
$\sigma_k=1$ (additive noise) and $\sigma_k=0.2X_k$, $\sigma_k=0.5X_k$
(multiplicative noise), we see that the behavior of both the SST-FDT
and qG-FDT responses is qualitatively the same as in the fully
deterministic regime, even though the dynamics is qualitatively
different. Evidently, the level of noise in the two stochastically
driven regimes $\sigma_k=1$ and $\sigma_k=0.2X_k$ is insufficient to
``smooth out'' the invariant probability measure enough for it to
resemble the Gaussian state and to destabilize the computation of the
tangent map. However, in the $\sigma_k=0.5X_k$ multiplicative noise
regime, the errors in the initial qG-FDT response are reduced to about
40\%, which is due to the fact that in this regime the invariant
probability measure is closer to the Gaussian state because of strong
noise. The blended SST/qG-FDT response yields the lowest errors in all
cases, due to its explicit design to avoid numerical instability in
the SST-FDT algorithm.
In Figure \ref{fig:l96_corrs} we show the correlation functions for
the same simulations. Observe that, although significant $L_2$-errors
were observed for the qG-FDT algorithm for the fully deterministic
regime $\sigma_k=0$, its correlations with the ideal response are
generally on the level of around 0.7, which is remarkable. Also, the
correlations of the SST-FDT response with the ideal response are
roughly 1 (nearly perfect correlation) before the numerical
instability manifests itself. As for the blended SST/qG-FDT response,
the best correlations are achieved in the stochastically forced
regimes $\sigma_k=1$ (additive noise) and $\sigma_k=0.2X_k$,
$\sigma_k=0.5X_k$ (multiplicative noise), where the correlations do not
become lower than 0.95 for all response times. For the fully
deterministic case $\sigma_k=0$ the correlations of the blended
SST/qG-FDT response are about 0.8.
In addition to displaying the errors and correlations between the FDT
response operators and the ideal response operator, in Figures
\ref{fig:l96_snapshot_t1}--\ref{fig:l96_snapshot_t5} we show the
instantaneous snapshots of the linear response operators at times
$T=1$, $T=2$ (which are before the SST/qG-FDT cutoff time) and $T=5$
(which is after the SST/qG-FDT cutoff time). Although the linear
response operator at a given time is a $40\times 40$ matrix, it has
the property of translational invariance (just like the L96 model
itself), and, thus, can be averaged along the main diagonal with
wrap-around aliasing of rows (or columns) into a single vector. These
averaged vectors are displayed in Figures
\ref{fig:l96_snapshot_t1}--\ref{fig:l96_snapshot_t5}. Observe that for
the early times of the response $T=1,2$ the SST/qG-FDT response is
virtually indistinguishable from the ideal response. As for the qG-FDT
response, its best performance is observed in the case of strong
multiplicative noise $\sigma_k=0.5X_k$, where the discrepancies
between the qG-FDT and ideal response are not much larger than those
between the SST/qG-FDT response and the ideal response. This is
probably the consequence of the fact that the strong multiplicative
noise changes the invariant probability density of the SL96 model to
the point where it is relatively close to the Gaussian. For other
regimes, by the response time $T=2$ significant errors develop in the
qG-FDT response to the right of the main response diagonal. For the
longer response time $T=5$ and all regimes the blended SST/qG-FDT
response is very similar to the ideal response, while the qG-FDT
response again develops large discrepancies to the right of the main
response diagonal for $\sigma_k=0,1,0.2X_k$. For the strong
multiplicative noise regime, $\sigma_k=0.5X_k$, and response time
$T=5$, the qG-FDT yields lower errors than in the other regimes, but
is still less precise than the SST/qG-FDT response.
\section{Summary}
\label{sec:summary}
The classical fluctuation-dissipation theorem, by its design, is
suitable for computing the linear response for stochastically driven
systems, as it assumes the absolute continuity of the probability measure of
the statistical ensemble distribution with respect to the Lebesgue
measure (which is guaranteed in many stochastically driven
systems). However, the drawback of the classical fluctuation-response
formula is that it requires the probability density together with its
derivative (or their suitable approximations) explicitly in the
response formula. Unfortunately, for complex systems with many
variables, such an approximation is not necessarily available with
the required precision.
In this work, we develop the stochastic short-time
fluctuation-dissipation formula (SST-FDT) for stochastically driven
systems which does not require the probability measure of the
statistical state of the system to be known explicitly. This formula
is the analog of the general linear response formula
\cite{AbrMaj4,AbrMaj5,AbrMaj6,EckRue,Rue2} for chaotic (but not
stochastically driven) nonlinear systems. We demonstrate that, before
the numerical instability due to positive Lyapunov exponents occurs,
the SST-FDT for the stochastically driven Lorenz 96 model is generally
superior to the classical FDT formula where the probability density of
the statistical state is approximated by the Gaussian density with the
same mean and covariance (qG-FDT). We test the new SST-FDT formula for
the L96 model with stochastic forcing for both the additive and
multiplicative noise, and observe that the SST-FDT response formula is
generally better than the qG-FDT in both the error and correlation
comparison, before the numerical instability develops in the SST-FDT
response. Additionally, the blended SST/qG-FDT response with a simple
Heaviside blending function clearly outperforms both the qG-FDT
and SST-FDT in all studied regimes. The results of this work suggest
that the SST/qG-FDT algorithm can be used in practical applications
with stochastic parameterization, such as the climate change
prediction.
\begin{acknowledgment}
The author thanks Ibrahim Fatkullin for helpful comments and
remarks. This work is supported by the NSF CAREER grant DMS-0845760
and the ONR grant N000140610286.
\end{acknowledgment}
\section{Supplemental Material}
Here we give the proofs of Theorems~1 through 5 of the main manuscript, and also elaborate on the MUS of (23). (We preserve the numbering of the equations and theorems in the main manuscript, and add a prefix ``S'' to such objects appearing in this supplemental material.) Let us first state the following useful result, proved in \cite{ColesEtAlv4}, that relates the conditional entropy to the relative entropy. This will allow us to rewrite the UPQSI in terms of relative entropy.
\begin{thm4}
\label{thm6}
Let $\Pi=\{\Pi_{j}\}$ be a projective decomposition of $I_a$ and let $P=\{P_{j}\}$ be a POVM on $a$.
(i) Let $\rho_{abc}$ be a pure state, then
\begin{equation}
\label{eqn29}
\tag{S1}
H_{}(\Pi|b)=S(\rho_{ac}||\sum_j \Pi_{j}\rho_{ac}\Pi_{j}).
\end{equation}
(ii) Let $\rho_{abc}$ be \emph{any} state, then
\begin{equation}
\tag{S2}
\label{eqn30}
H_{}(P|b)\geq S(\rho_{ac}||\sum_j P_{j}\rho_{ac}P_{j}).
\end{equation} \openbox
\end{thm4}
\subsection{Proof of Theorem~1}
First, consider the single-POVM UPQSI in (15). We remarked in the main manuscript that the strongest bound in (15) results from choosing $\Pi$ to have the smallest possible rank, i.e.\ the projector onto the support of $\rho_a$. One can see this by considering two projectors $\Pi$ and $\Pi'$, where the latter has a higher rank than the former and $\Pi'=\Pi+\Phi$ with $\Phi$ also a projector, and noting that $G'_j=\sqrt{P_{j}}\Pi'\sqrt{P_{j}}\geq \sqrt{P_{j}}\Pi\sqrt{P_{j}}=G_j$. It follows \cite{HornJohn} that the spectrum of $G'_j$ weakly majorizes that of $G_j$, and thus $\|\Pi' \sqrt{P_{j}}\|_\infty^2 \geq \|\Pi \sqrt{P_{j}}\|_\infty^2$.
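This monotonicity is also easy to check numerically; the following
Python sketch is our illustrative addition (a random $P$ with $0\leq
P\leq I$, and projectors $\Pi\leq\Pi'$ built from a common orthonormal
set), not part of the original argument:
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
d = 6
M = rng.standard_normal((d, d)) + 1j*rng.standard_normal((d, d))
P = M @ M.conj().T
P /= np.linalg.eigvalsh(P).max()    # scale so 0 <= P <= I
U = np.linalg.qr(M)[0]              # random orthonormal columns
Pi  = U[:, :2] @ U[:, :2].conj().T  # rank-2 projector
Pip = U[:, :4] @ U[:, :4].conj().T  # rank-4 projector, Pi' >= Pi
sq = sqrtm(P)
n1 = np.linalg.norm(Pi @ sq, 2)     # spectral norm ||Pi sqrt(P)||
n2 = np.linalg.norm(Pip @ sq, 2)
print(n2 >= n1, n1, n2)             # True: higher rank, larger norm
\end{verbatim}
Now let us prove (15).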
\begin{proof}
The important properties \cite{VedralReview02, OhPe93} of $S(\cdot||\cdot)$ we use are:
\begin{equation}
\tag{S3}\label{eqn31}
S(\rho||\sigma )\geq S(\mathcal{E}(\rho)||\mathcal{E}(\sigma ))
\end{equation}
for any quantum channel $\mathcal{E}$; and for positive operators $\rho$, $\sigma $, $\tau $, if $\tau \geq \sigma $, then
\begin{equation}
\tag{S4}\label{eqn32}
S(\rho||\sigma )\geq S(\rho ||\tau).
\end{equation}
Let $\lambda _{\max}(A)$ denote the maximum eigenvalue of $A$, let $G_{j}= \sqrt{P_{j}} \Pi\sqrt{P_{j}} $, note $\lambda _{\max}(G_j)=\|\Pi \sqrt{P_{j}}\|_\infty^2$, then from \eqref{eqn30}:
\begin{align}
H_{}(P|b)&\geq S(\rho_{ac}||\sum_j P_{j}\rho_{ac}P_{j})\nonumber\\
\tag{S5}\label{eqn33}&\geq S(\rho_{ac}||\sum_j \Pi P_{j}\rho_{ac}P_{j} \Pi) \\
\tag{S6}\label{eqn34}&\geq S(\rho_{c}||\sum_j {\rm Tr}_a\{\Pi P_{j}\rho_{ac}P_{j} \Pi\}) \\
\tag{S7}\label{eqn35}&\geq S(\rho_{c}||\sum_j \lambda _{\max}(G_{j}) {\rm Tr}_a\{P_{j}\rho_{ac}\}) \\
\tag{S8}\label{eqn36}&\geq S(\rho_{c}|| \max_j \lambda _{\max}(G_{j}) \rho_c])\\
\tag{S9}\label{eqn37}&=-\log \max_j \lambda _{\max}(G_{j}).
\end{align}
We invoked \eqref{eqn31} for \eqref{eqn33} with the channel $\rho\to\Pi\rho\Pi+(I-\Pi)\rho(I-\Pi)$, and for \eqref{eqn34} with the channel $\rho\to{\rm Tr}_a\rho$. We invoked \eqref{eqn32} for \eqref{eqn35}; $\lambda _{\max}(G_{j})I_a\geq G_j$ which implies ${\rm Tr}_a[\lambda _{\max}(G_{j})I_aT_{ac,j}]\geq {\rm Tr}_a[G_jT_{ac,j}]$, where $T_{ac,j}=\sqrt{P_j}\rho_{ac}\sqrt{P_j}$ is a positive operator. We also used \eqref{eqn32} for \eqref{eqn36}, i.e.\ $\max_j \lambda _{\max}(G_{j}) \sum_j A_j \geq \sum_j \lambda _{\max}(G_{j})A_j$ where the $A_j$ are positive operators.
\end{proof}
Now we prove (14).
\begin{proof}
Let $e$ be an auxiliary system that acts as a register for the $Q$ measurement. Consider the quantum channel $\mathcal{E}_ Q \colon ab\rightarrow eb$ defined by $\mathcal{E}_ Q(\rho_{ab})=\sum_k [e_k]\otimes {\rm Tr}_a(Q_{k}\rho_{ab})$, where $\{\ket{e_k}\}$ is an orthonormal basis of $e$. Also, define $G_{jk}= \sqrt{P_{j}} \Pi Q_{k} \Pi\sqrt{P_{j}} $, and note $G_{jk} \leq \lambda _{\max}(G_{jk})I_a$, and $r(P,Q; \Pi)=\max_{j,k}\lambda _{\max}(G_{jk})$. Then, starting from \eqref{eqn33} (swapping labels $b$ and $c$),
\begin{align}
\tag{S10}\label{eqn38}& H_{}(P|c)\geq S(\rho_{ab}||\sum_j \Pi P_{j}\rho_{ab}P_{j}\Pi)\\
\tag{S11}\label{eqn39}&\geq S(\mathcal{E}_ Q(\rho_{ab})||\sum_j \mathcal{E}_ Q(\Pi P_{j}\rho_{ab}P_{j}\Pi))\\
&= S(\sum_{k} [e_k]\otimes {\rm Tr}_a\{Q_{k} \rho_{ab}\}||\nonumber\\
\tag{S12}\label{eqn40}&\hspace{3pt}\sum_{j,k} [e_k]\otimes {\rm Tr}_a\{G_{jk}\sqrt{P_j}\rho_{ab}\sqrt{P_j}\})\\
&\geq S(\sum_{k} [e_k]\otimes {\rm Tr}_a\{Q_{k} \rho_{ab}\}||\nonumber\\
\tag{S13}\label{eqn41}& \hspace{3pt} \sum_{j,k} \lambda _{\max}(G_{jk})[e_k]\otimes {\rm Tr}_a\{P_{j}\rho_{ab}\})\\
\tag{S14}\label{eqn42}&\geq S(\sum_{k} [e_k]\otimes {\rm Tr}_a\{Q_{k} \rho_{ab}\}|| r(P,Q; \Pi) I_e \otimes \rho_{b})\\
\tag{S15}\label{eqn45}&= -\log r(P,Q; \Pi) - H_{}(Q|b).
\end{align}
We invoked \eqref{eqn31} for step \eqref{eqn39}, \eqref{eqn32} for steps \eqref{eqn41} and \eqref{eqn42}, and Eq.~(11.58) of \cite{NieChu00} for step \eqref{eqn45}.
\end{proof}
\subsection{Proof of Theorem~2}
\begin{proof}
This theorem can be viewed as a corollary to Theorem~4. Set $d$ to be prime, so that $\eta=2$ and $\{s_\alpha \}=\{1,d\}$. For $s_\alpha =1$, $w^\alpha $ is the $z$-basis, and for $s_\alpha =d$, $w^\alpha $ is the $x$-basis. Thus, part (i) of Theorem~4 clearly reduces to part (i) of Theorem~2. Part (ii) of Theorem~4 reduces to part (ii) of Theorem~2 since there are no constraints on the diagonal elements of $\rho^\alpha _a$ for $s_\alpha $ equal to 1 or $d$. Likewise, setting $s_\alpha $ equal to 1 or $d$ in $\rho^\alpha _{ab}=d\sum_{\beta ,\gamma } p_\gamma [w^\alpha _{\beta ,\gamma }]\otimes \sigma ^z_{b,\beta }$ gives the two solutions in part (iii) of Theorem~2.
\end{proof}
\subsection{Proof of Corollary~3}
\begin{proof}
Define $\zeta:=H(x)+H(y)+H(z)- 2\log 2 - S(\rho_a)$. First, consider (possibly mixed) states $\rho_a$ in the $xy$ plane of the Bloch sphere; such states have $H(z)=\log 2$. For these states, $\zeta =0$ if and only if $H(x)+H(y)=\log 2 +S(\rho_a)$. But from Theorem~2, this is true if and only if either $x$ or $y$ is the eigenbasis of $\rho_a$, i.e. the state lies on either the $x$ or $y$ axis of the Bloch sphere. Any other state in the $xy$ plane will strictly have $H(x)+H(y)>\log 2 +S(\rho_a)$. Now consider taking a vertical path in the Bloch sphere up from some point in the $xy$ plane. Such a path will never decrease the value of $\zeta$ (See Appendix F of \cite{ColesEtAlv4}). Thus, the only states that could possibly satisfy $\zeta =0$ are those in the $xz$ plane and the $yz$ plane. But we already know that the territory between the $x$ and $y$ axes in the $xy$ plane cannot have $\zeta=0$, so by symmetry, the territory between the $x$ and $z$ axes in the $xz$ plane cannot have $\zeta=0$, and likewise for the $yz$ plane. So the only states that satisfy $\zeta =0$ are those along the $x$, $y$, and $z$ axes.
\end{proof}
\subsection{Proof of Theorem~4}
\begin{proof}
Even though this is a corollary of Theorem~5, it is instructive to see the direct proof as it is simpler than that of Theorem~5. We discuss below that parts (i) and (ii) follow from part (iii).
(i) Clearly from (21) the only states that can satisfy (20) with equality are pure states $[S(\rho_a)=0]$. Thus, the MUS of (20) are a subset of the MUS of (21), precisely the subset with $S(\rho_a)=0$. Assuming part (ii) of this theorem is true, then the only states that can be MUS of (21) are diagonal in a $w^\alpha $ basis, and thus the only states that can be MUS of (20) are (pure) basis vectors from a $w^\alpha $ basis, and indeed it is easily verified that all such basis vectors are MUS of (20).
(ii) Likewise part (ii) follows from part (iii) of this theorem. The MUS of (21) are a subset of the MUS of (22), precisely the subset with $\rho_{ab}=\rho_a \otimes \rho_b$. Imposing this condition on $\rho^\alpha _{ab}=d\sum_{\beta ,\gamma } p_\gamma [w^\alpha _{\beta ,\gamma }]\otimes \sigma ^z_{b,\beta }$ and tracing over $b$ gives $\rho^\alpha _{a}=d\sum_{\beta ,\gamma } p_\gamma q_\beta [w^\alpha _{\beta ,\gamma }]$. (It turns out we did not need to impose the condition $\rho_{ab}=\rho_a\otimes\rho_b$ since all MUS of (22) have a $\rho^\alpha _a$ of this form.)
(iii) It remains only to prove part (iii). Using (17) and (18) with $\rho=\rho_{ab}$, $\sigma = \sum_j [x_{j}] \rho_{ab}[x_{j}]$, $\mathcal{E}(\cdot)=\sum_k [z_{k}](\cdot)[z_{k}]= \mathcal{E}^\dagger(\cdot)$, gives:
\begin{equation}
\tag{S16}\label{eqn46}
\rho_{ab}=\sum_{j,j',k} \omega ^{(j-j')k} \dyad{x_j}{x_{j'}}\otimes \sqrt{\sigma ^x_{b,j}}\rho_b^{-1/2} \sigma ^z_{b,k}\rho_b^{-1/2} \sqrt{\sigma ^x_{b,j'}}.
\end{equation}
Now specializing to $\chi(x,b)=0$, meaning $\sigma ^x_{b,j}=p_j\rho_b$ for each $j$, \eqref{eqn46} becomes:
\begin{equation}
\tag{S17}\label{eqn47}
\rho_{ab}= \sum_{j,j'} \sqrt{p_jp_{j'}} \dyad{x_j}{x_{j'}}\otimes {\rm Tr}_a(Z^{j-j'}\rho_{ab}),
\end{equation}
where $Z=\sum_k \omega ^k [z_k]$. Computing ${\rm Tr}_a(Z^\mu\rho_{ab})$ from \eqref{eqn47} for $\mu=1,..., d-1$, one arrives at a system of equations (one for each $\mu$):
\begin{equation}
\tag{S18}\label{eqn48}f_\mu(z)g_\mu(x)=0,
\end{equation}
where
\begin{align}
f_\mu(z)&:= {\rm Tr}_a(Z^\mu\rho_{ab})=\sum_k \omega ^{\mu k} \sigma ^z_{b,k},\nonumber\\
\tag{S19}\label{eqn49} g_\mu(x)&:=1-\sum_{j} \sqrt{p_jp_{j+\mu}}.
\end{align}
One can show that $g_\mu(x)=0$ if and only if $p_j=p_{j+m\mu}$ for all $j,m\in \mathbb{Z}_d$, as follows. Using the method of Lagrange multipliers, the Lagrangian is $L=1-\sum_{j} \sqrt{p_jp_{j+\mu}}+\lambda (1-\sum_j p_j)$. Taking $\partial L/\partial p_k=0$ gives $-2\lambda \sqrt{p_k}= \sqrt{p_{k+\mu}}+ \sqrt{p_{k-\mu}}$, and summing this over all $k$ shows that $\lambda =-1$. Thus rearranging: $ \sqrt{p_k}-\sqrt{p_{k-\mu}}= \sqrt{p_{k+\mu}}- \sqrt{p_k}$, which must also equal $\sqrt{p_{k+2\mu}}- \sqrt{p_{k+\mu}}$, etc. Since each stepwise difference is the same and doing $d$ steps brings us back to the same point ($p_k=p_{k+d\mu}$), it must be that $p_k=p_{k+m\mu}$ for all $m=0,...,d-1$.
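As a quick numerical illustration of this characterization (our own
addition, not part of the proof), one can check that $g_\mu(x)$
vanishes exactly on distributions with the stated periodicity and is
strictly positive otherwise:
\begin{verbatim}
import numpy as np

def g(p, mu):
    # g_mu(x) = 1 - sum_j sqrt(p_j p_{j+mu}), indices mod d
    return 1.0 - np.sum(np.sqrt(p*np.roll(p, -mu)))

rng = np.random.default_rng(0)
d, mu = 6, 2
p = np.tile(rng.random(mu), d//mu)  # p_j = p_{j+m*mu} by construction
p /= p.sum()
print(g(p, mu))                     # ~0 up to roundoff

q = rng.random(d); q /= q.sum()     # a generic distribution
print(g(q, mu) > 0)                 # True
\end{verbatim}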
Now note that $g_\mu(x)=0$ implies that $g_{m\mu}(x)=0$. This fact implies that there are $\eta$ and only $\eta$ different ways to set some $g_{\mu}(x)$ terms to zero, each way corresponding to setting $g_{s_\alpha }(x)=g_{ms_\alpha }(x)=0$, thus $p_j=p_{j+m s_\alpha }$ for $m=0,...,(d/s_\alpha )-1$, and $g_\mu(x)\neq 0$ for $\mu\neq ms_\alpha $. Of course, to solve the system of equations, \eqref{eqn48}, one must compensate for the non-zero $g_\mu(x)$ by setting $f_\mu(z)=0$ for $\mu\neq ms_\alpha $, which can be shown to imply that $\sigma ^z_{b,k}=\sigma ^z_{b,k+nd/s_\alpha }$ for $n=0,...,s_\alpha -1$, as follows. Noting that $\mu$ and $k$ are Fourier partners, Fourier-transform $f_\mu(z)$ to get $\sigma ^z_{b,k}=(1/d)\sum_{\mu} \omega ^{-\mu k}f_\mu(z)=(1/d)\sum_{m} \omega ^{-m s_\alpha k}f_{m s_\alpha }(z)$. Clearly this implies that $\sigma ^z_{b,k}=\sigma ^z_{b,k+nd/s_\alpha }$.
Thus we have $\eta$ solutions where the $\alpha $-th solution, denoted $\rho^\alpha _{ab}$, has the properties that $p_j=p_{j+m s_\alpha }$ and $\sigma ^z_{b,k} = \sigma ^z_{b,k+nd/s_\alpha }$. Now we rewrite the $\rho_{ab}$ in \eqref{eqn47}, letting $j=\gamma +m s_\alpha $, $j'=\gamma '+m' s_\alpha $, $k=\beta +nd/s_\alpha $, with $0\leq \gamma ,\gamma ',n \leq s_\alpha -1$ and $0\leq \beta ,m,m' \leq d/s_\alpha -1$, giving:
\begin{align}
&\rho_{ab}= \sum_{\gamma ,\gamma ',m,m',\beta ,n} \omega ^{(\beta +nd/s_\alpha )(\gamma -\gamma '+ms_\alpha -m's_\alpha )}\times\nonumber\\
\tag{S20}\label{eqn50}&\sqrt{p_{\gamma +ms_\alpha } p_{\gamma '+m's_\alpha }}\dyad{x_{\gamma +ms_\alpha }}{x_{\gamma '+m's_\alpha }}\otimes \sigma ^z_{b,\beta +nd/s_\alpha }.
\end{align}
So for the $\alpha $-th solution this reduces to:
\begin{align}
\rho^\alpha _{ab}= & \sum_{\gamma ,\gamma ',m,m',\beta ,n} \omega ^{(\beta +nd/s_\alpha )(\gamma -\gamma '+ms_\alpha -m's_\alpha )}\times\nonumber\\
\tag{S21}\label{eqn51}&\hspace{26pt} \sqrt{p_\gamma p_{\gamma '}}\dyad{x_{\gamma +ms_\alpha }}{x_{\gamma '+m's_\alpha }}\otimes \sigma ^z_{b,\beta },
\end{align}
The sum over $n$ gives a $\delta _{\gamma ,\gamma '}$ and we arrive at:
\begin{align}
\rho^\alpha _{ab}&= s_\alpha \sum_{\gamma ,m,m',\beta } \omega ^{\beta (ms_\alpha -m's_\alpha )}\times\nonumber\\
\tag{S22}\label{eqn52}&\hspace{31pt} p_\gamma \dyad{x_{\gamma +ms_\alpha }}{x_{\gamma +m's_\alpha }}\otimes \sigma ^z_{b,\beta },
\end{align}
Using $\sqrt{d}\ket{w^\alpha _{\beta ,\gamma }}= \sqrt{s_\alpha } \sum_m \omega ^{\beta ms_\alpha } \ket{x_{\gamma +ms_\alpha }}$, we arrive at $\rho^\alpha _{ab}=d\sum_{\beta ,\gamma }p_\gamma [w^\alpha _{\beta ,\gamma }]\otimes \sigma ^z_{b,\beta }$.
\end{proof}
\subsection{Proof of Theorem~5}
\begin{proof}
This proof mirrors that of Theorem~4, except now we use a vector notation for all quantities, e.g.\ $\vec j=(j_1,...,j_\lambda )$ and $\vec \mu=(\mu_1,...,\mu_\lambda )$, where each component refers to a particular subsystem. From (17) and (18), the MUS of (26) are:
\begin{equation}
\tag{S23}\label{eqn53}
\rho_{ab}=\sum_{\vec j,\vec j'} \sqrt{p_{\vec j}p_{\vec j'}} (\bigotimes_{\nu=1}^\lambda \dyad{x_{j_\nu}}{x_{j'_\nu}})\otimes {\rm Tr}_a\{(\bigotimes_{\nu=1}^\lambda Z_\nu^{j_\nu-j'_\nu})\rho_{ab}\},
\end{equation}
where $Z_\nu=\sum_{k_\nu}\omega _\nu^{k_\nu}[z_{k_\nu}]$ and $\omega _\nu=e^{2\pi i/ d_\nu}$. Now let $\mu_\nu=0,...,d_\nu-1$, compute ${\rm Tr}_{a}\{(\bigotimes_{\nu=1}^\lambda Z_\nu^{\mu_\nu})\rho_{ab}\}$ and using ${\rm Tr}_{a_\nu}(Z_\nu^{\mu_\nu} \dyad{x_{j_\nu}}{x_{j'_\nu}})=\delta _{j_\nu,j'_\nu+\mu_\nu}$ arrive at a system of equations:
\begin{equation}
\tag{S24}\label{eqn54}
f_{\vec\mu} (z) g_{\vec\mu} (x)=0
\end{equation}
where
\begin{align}
f_{\vec\mu}(z)&:= {\rm Tr}_{a}\{(\bigotimes_{\nu=1}^\lambda Z_\nu^{\mu_\nu})\rho_{ab}\} =\sum_{\vec k} (\prod_{\nu=1}^\lambda \omega _\nu^{\mu_\nu k_\nu}) \sigma ^z_{b,\vec k},\nonumber\\
\tag{S25}\label{eqn55}
g_{\vec\mu}(x)&:=1-\sum_{\vec j} \sqrt{p_{\vec j}p_{\vec j+\vec\mu}}.
\end{align}
Consider the following rules. Rule (1): $g_{\vec\mu}(x)=0$ if and only if $p_{\vec j}=p_{\vec j+ \vec \mu}$ for all $j$. This implies the following rules. Rule (2): If $g_{\vec\mu}(x)=0$ then $g_{m \vec\mu}(x)=0$ for all $m=0,...,d-1$, where $m \vec\mu= \vec\mu+ \vec\mu+...$ ($m$ times). Rule (3): If the $d_\nu$ are pairwise coprime and if $g_{\vec\mu}(x)=0$ then $g_{\vec m \vec\mu}(x)=0$ for all $\vec m=(m_1,...,m_\lambda )$, where $m_\nu=0,...,d_\nu-1$ and $\vec m \vec\mu=(m_1\mu_1,...,m_\lambda \mu_\lambda )$.
Rule (1) follows by the method of Lagrange multipliers, as in the proof of Theorem~4. Rule (2) follows from Rule (1) in a straightforward way. Rule (3) follows from Rule (2) by the Chinese Remainder Theorem, which implies that the ring $\mathbb{Z}_d$ is isomorphic to the ring $\mathbb{Z}_{d_1}\times ...\times\mathbb{Z}_{d_\lambda }$. The bijection relating $m\in \mathbb{Z}_d$ to $\vec m\in \mathbb{Z}_{d_1}\times ...\times\mathbb{Z}_{d_\lambda }$ is defined through $m_\nu=(m \mod d_\nu)$. By this bijection and the ring isomorphism, $\{g_{m \vec\mu}(x); m=0,..., d-1\}=\{g_{\vec m \vec\mu}(x); m_\nu=0,..., d_\nu-1\}$, and so Rule (3) follows.
From the above rules and letting $\{s^{(\nu)}_{\alpha _\nu}\}_{\alpha _\nu=1}^{\eta_\nu}$ be the set of factors of $d_\nu$, there are only $\prod_{\nu=1}^\lambda \eta_\nu$ different ways to set some of the $g_{\vec\mu}(x)$ terms to zero, one for each $\vec \alpha $. The way corresponding to a particular $\vec \alpha $ involves setting $g_{\vec s_\alpha }(x)=g_{\vec m \vec s_\alpha }(x)=0, \forall \vec m$, where $\vec s_\alpha =(s^{(1)}_{\alpha _1},..., s^{(\lambda )}_{\alpha _\lambda })$, and $g_{\vec \mu}(x)\neq 0$ for $\vec \mu \neq \vec m \vec s_\alpha $. Of course, to solve \eqref{eqn54} we must set $f_{\vec\mu}(z)=0$ for $\vec \mu \neq \vec m \vec s_\alpha $. From the latter condition, it follows that $\sigma ^z_{b,\vec k}= \sigma ^z_{b,\vec k+\vec n \vec t_\alpha }, \forall \vec n$, where $\vec t_\alpha =(d_1/s^{(1)}_{\alpha _1},..., d_\lambda /s^{(\lambda )}_{\alpha _\lambda })$. And from Rule (1), we have $p_{\vec j}=p_{\vec j+ \vec m \vec s_\alpha }, \forall \vec m$. Plug these two conditions into \eqref{eqn53}, make the variable changes (like in the proof of Theorem~4) $\vec{j}=\vec{\gamma }+ \vec m \vec s_\alpha $, $\vec j'=\vec \gamma '+ \vec m' \vec s_\alpha $, and $\vec{k}=\vec \beta + \vec n \vec t_\alpha $, then sum over $\vec n$ to get a $\delta _{\vec \gamma ,\vec \gamma '}$, then change the $x_\nu$ bases to the $w^{\alpha _\nu}$ bases to arrive at $\rho^{\vec \alpha }_{ab}=d\sum_{\vec\beta , \vec\gamma } p_{\vec\gamma } (\bigotimes_{\nu=1}^\lambda [w^{\alpha _\nu}_{\beta _\nu,\gamma _\nu}])\otimes \sigma ^z_{b, \vec\beta }$.
\end{proof}
\subsection{MUS of (23)}
Here we discuss different classes of MUS of (23). We remind the reader that discord is a measure of the non-classicality of bipartite correlations. All of our discussion will refer to the one-way discord, as originally defined in \cite{OllZur01}, that is asymmetric under interchanging the two systems; in particular, the discord that uses projectors on system $a$.
Generally, any bipartite state can be classified as either zero-discord (ZD), separable with non-zero discord (SNZD), or entangled (E) \cite{OllZur01}. We shall classify MUS of (23) by classifying the reduced density operators $\rho_{ab}$ and $\rho_{ac}$ of the tripartite pure state $\rho_{abc}$ into one of these three categories, i.e.\ by giving an ordered pair of form ($\rho_{ab}$ category, $\rho_{ac}$ category), for example (ZD,E) means $\rho_{ab}$ is ZD and $\rho_{ac}$ is E. Naively this would give $3\times 3 = 9$ possible ordered pairs, but if $\rho_{ab}$ is ZD then $\rho_{ac}$ cannot be SNZD, and vice-versa. (The proof for this is as follows: If $\rho_{ab}$ is ZD, then there exists a basis $w$ for which $H(w|c)=0$. In turn, if $H(w|b)=0$ then $\rho_{ac}$ is ZD, otherwise if $H(w|b)>0$ then $H(w|c)-H(w|b)=S(a|c)<0$ implying that $\rho_{ac}$ is E. So the only possibilities are for $\rho_{ac}$ to be ZD or E, it cannot be SNZD.) So there are only seven possible ordered pairs, and all seven are physically possible.
Below we find three classes of MUS of (23): one class denoted $\Lambda $ for which both $\rho_{ab}$ and $\rho_{ac}$ are E, so (E,E); one class denoted $\Omega$ for which both $\rho_{ab}$ and $\rho_{ac}$ are SNZD, so (SNZD,SNZD); and one class denoted $\Upsilon$ where either $\rho_{ab}$ or $\rho_{ac}$ is ZD, so this includes three ordered pairs (ZD,ZD), (ZD,E), and (E,ZD). It remains an open question whether there are MUS of (23) of the form (SNZD,E) or (E,SNZD).
From (17) and (18), the MUS of (23) are tripartite pure states $\rho_{abc}$ with:
\begin{equation}
\tag{S26}\label{eqn56}
\rho_{ab}=\sum_{j,j',k} \omega ^{(j-j')k} \dyad{x_j}{x_{j'}}\otimes \sqrt{\sigma ^x_{b,j}}\rho_b^{-1/2} \sigma ^z_{b,k}\rho_b^{-1/2} \sqrt{\sigma ^x_{b,j'}}
\end{equation}
and by symmetry the MUS also satisfy an equation analogous to \eqref{eqn56} for $\rho_{ac}$.
Let us consider solutions $\rho^\alpha _{abc}$ with the properties that $\sigma ^x_{b,\gamma }=\sigma ^x_{b,\gamma +n s_\alpha }$ and $\sigma ^z_{b,\beta }=\sigma ^z_{b,\beta +m d/s_\alpha }$ for all $n=0,...,d/s_\alpha -1$ and all $m=0,...,s_\alpha -1$; and other solutions $\rho^{\eta+\alpha }_{abc}$ with $\sigma ^x_{c,\gamma }=\sigma ^x_{c,\gamma +n s_\alpha }$ and $\sigma ^z_{c,\beta }=\sigma ^z_{c,\beta +m d/s_\alpha }$ likewise for all $n$ and $m$. Then from \eqref{eqn56}:
\begin{align}
\tag{S27}\label{eqn57}
\rho^\alpha _{ab}&=d \sum_{\beta ,\gamma } [w^\alpha _{\beta \gamma }] \otimes A_{b;\beta ,\gamma }^\dagger A_{b;\beta ,\gamma },\\
\tag{S28}\label{eqn58}
\rho^{\eta+\alpha }_{ac}&=d \sum_{\beta ,\gamma } [w^\alpha _{\beta \gamma }] \otimes A_{c;\beta ,\gamma }^\dagger A_{c;\beta ,\gamma },
\end{align}
where $A_{b;\beta ,\gamma }= \sqrt{\sigma ^z_{b,\beta }} \rho_b^{-1/2} \sqrt{\sigma ^x_{b,\gamma }} $ and $A_{c;\beta ,\gamma }= \sqrt{\sigma ^z_{c,\beta }} \rho_c^{-1/2} \sqrt{\sigma ^x_{c,\gamma }}$, and as always $\beta =0,...,d/s_\alpha -1$ and $\gamma =0,...,s_\alpha -1$. Note that the solution $\rho^{\alpha }_{abc}$ has $H(w^\alpha |c)=0$, while the solution $\rho^{\eta+\alpha }_{abc}$ has $H(w^\alpha |b)=0$. These represent the $2\eta$ solutions ($\eta$ is the number of factors of $d$, e.g.\ $\eta=3$ for $d=4$) described in the main manuscript that compose the set $\Upsilon$. Setting $s_\alpha =1$ or $s_\alpha =d$ in \eqref{eqn57} and \eqref{eqn58} shows that $\Upsilon$ contains all states for which either $H(z|c)$, $H(x|c)$, $H(z|b)$, or $H(x|b)$ equals zero, and so $\Upsilon$ contains the set $\Xi$ defined in the main manuscript.
Let us consider a second class $\Omega $ of MUS of the form:
\begin{equation}
\tag{S29}\label{eqn59}
\rho_{ab}=\sum_{\alpha ,\beta ,\gamma } g_{\alpha ,\beta ,\gamma } [w^\alpha _{\beta ,\gamma }]\otimes \rho_{\alpha ,\beta ,\gamma },
\end{equation}
where the different $\rho_{\alpha ,\beta ,\gamma }$ are all orthogonal to each other and $0\leq g_{\alpha ,\beta ,\gamma }\leq 1$. For these states $S(a|b)=0$, $H(z|b)=H(z|c)=\sum g_{\alpha ,\beta ,\gamma } H(z)_{\ket{w^\alpha _{\beta ,\gamma }}}= \sum_{\alpha ,\beta ,\gamma } g_{\alpha ,\beta ,\gamma }\log s_\alpha $, and $H(x|b)=H(x|c)= \sum g_{\alpha ,\beta ,\gamma } H(x)_{\ket{w^\alpha _{\beta ,\gamma }}}=\sum_{\alpha ,\beta ,\gamma } g_{\alpha ,\beta ,\gamma }\log (d/ s_\alpha )$. So they satisfy (27) since $\sum_{\alpha ,\beta ,\gamma } g_{\alpha ,\beta ,\gamma }=1$. Also, one can show (with a Schmidt decomposition across the $ab/c$ cut) that if $\rho_{ab}$ is given by \eqref{eqn59}, then $\rho_{ac}$ has the same form:
\begin{equation}
\tag{S30}\label{eqn60}
\rho_{ac}=\sum_{\alpha ,\beta ,\gamma } g_{\alpha ,\beta ,\gamma } [w^\alpha _{\beta ,\gamma }]\otimes \sigma _{\alpha ,\beta ,\gamma },
\end{equation}
where the different $\sigma _{\alpha ,\beta ,\gamma }$ are all orthogonal to each other. Thus, both $\rho_{ab}$ and $\rho_{ac}$ are separable, and as long as more than one $w^\alpha $ basis appears in the sums in \eqref{eqn59} and \eqref{eqn60}, they both have non-zero discord.
Finally, the main manuscript gives an example for $d=2$ of MUS that are neither in $\Upsilon$ nor in $\Omega$. The tripartite state:
\begin{equation}
\tag{S31}\label{eqn61}
\ket{\psi}_{abc}=(\ket{0}\ket{\phi_b}\ket{\phi_c}+\ket{1}\ket{\varphi_b}\ket{\varphi_c})/\sqrt{2}
\end{equation}
where $\ket{\phi_b},\ket{\phi_c},\ket{\varphi_b},\ket{\varphi_c}$ are arbitrary kets with $\ip{\phi_b}{\varphi_b}\ip{\phi_c}{\varphi_c}\in \mathbb{R}$, satisfies (27) with $H(z|b)=\log 2 - S(\rho_b)$, $H(z|c)=\log 2 - S(\rho_c)$, $H(x|b)=S(\rho_c)$, and $H(x|c)= S(\rho_b)$. Likewise, replacing the $z$ states $\{\ket{0},\ket{1}\}$ in \eqref{eqn61} with the $x$ states $\{\ket{+},\ket{-}\}$, the tripartite state:
\begin{equation}
\tag{S32}\label{eqn62}
\ket{\psi}_{abc}=(\ket{+}\ket{\phi_b}\ket{\phi_c}+\ket{-}\ket{\varphi_b}\ket{\varphi_c})/\sqrt{2}
\end{equation}
satisfies (27) with $H(x|b)=\log 2 - S(\rho_b)$, $H(x|c)=\log 2 - S(\rho_c)$, $H(z|b)=S(\rho_c)$, and $H(z|c)= S(\rho_b)$. Except for the extreme cases where $S(\rho_b)$ or $S(\rho_c)$ are 0 or $\log 2$, the states described by \eqref{eqn61} and \eqref{eqn62} are clearly not in $\Upsilon$, and the fact that they are not in $\Omega$ follows from $S(b|a)=-S(b|c)<0$ and $S(c|a)=-S(c|b)<0$, implying that both $\rho_{ab}$ and $\rho_{ac}$ are entangled, in contrast to the separable states in $\Omega$. There is reason to believe that there are MUS for $d>2$ of a similar nature to the qubit examples given here (with both $\rho_{ab}$ and $\rho_{ac}$ entangled), as we have found such MUS for $d=3$. For example:
\begin{equation}
\tag{S33}\label{eqn63}
\ket{\psi}_{abc}=(\ket{z_0}\ket{0}\ket{0}+ \ket{z_1}\ket{+}\ket{+}+ \ket{z_2}\ket{y+}\ket{y-})/\sqrt{3},
\end{equation}
where $b$ and $c$ are qubits and $\ket{y\pm}=(\ket{0}\pm i \ket{1})/\sqrt{2}$, has $H(z|b)= H(z|c) =\log 3 - S(\rho_b)$ and $H(x|b)=H(x|c)=S(\rho_b)$.
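The entropies quoted for this $d=3$ example are easy to confirm
numerically. The following Python sketch is our illustrative addition
(entropies in nats; the Fourier sign convention for the $x$ basis is
immaterial, since dephasing depends on the basis only as a set):
\begin{verbatim}
import numpy as np

def S(rho):                             # von Neumann entropy (nats)
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev*np.log(ev)).sum())

z = np.eye(3, dtype=complex)            # qutrit z basis
w = np.exp(2j*np.pi/3)                  # x basis: Fourier dual of z
x = np.array([[w**(j*k) for k in range(3)]
              for j in range(3)])/np.sqrt(3)
k0, k1 = np.eye(2, dtype=complex)
plus = (k0 + k1)/np.sqrt(2)
yp, ym = (k0 + 1j*k1)/np.sqrt(2), (k0 - 1j*k1)/np.sqrt(2)

psi = (np.kron(np.kron(z[0], k0),   k0)
       + np.kron(np.kron(z[1], plus), plus)
       + np.kron(np.kron(z[2], yp),   ym))/np.sqrt(3)
rho = np.outer(psi, psi.conj()).reshape(3, 2, 2, 3, 2, 2)

rho_ab = np.einsum('abcxyc->abxy', rho).reshape(6, 6)  # trace out c
rho_b  = np.einsum('abcayc->by', rho)                  # trace out a, c

def dephase(r, basis):                  # sum_j (P_j x I) r (P_j x I)
    out = np.zeros_like(r)
    for v in basis:
        P = np.kron(np.outer(v, v.conj()), np.eye(2))
        out += P @ r @ P
    return out

Sb = S(rho_b)
print(S(dephase(rho_ab, z)) - Sb, np.log(3) - Sb)  # H(z|b)=log3-S(rho_b)
print(S(dephase(rho_ab, x)) - Sb, Sb)              # H(x|b)=S(rho_b)
\end{verbatim}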
\end{document}
\section{Introduction}
Plasmas can sustain accelerating fields on the order of $\mathrm{TeV/m}$ \cite{Malka_science_2002}, which makes them an excellent candidate for building small-scale accelerators \cite{Dawson}.
Multiple laboratories around the world have obtained reproducible, quasi-monoenergetic electron beams in laser-produced wakefields.
Electron energy as high as 4 GeV has been reported from an all-optical experiment using this technology \cite{Leemans_4gev}.
Apart from the laser-wakefield acceleration, other ideas are being explored to make use of the collective plasma dynamics for particle acceleration.
One of the alternative approaches is to accelerate electrons within a hollow plasma channel.
Low-density plasma channels within a dense background can serve as a guide for laser light.
This was recently exploited to guide the electrons between two wakefield acceleration stages \cite{Leemans_plasma_lens}.
Channels are particularly suited for experiments where laser transport over several milimeters of plasma is required.
A laser-formed channel has also been proposed for inertial confinement fusion within a three-step ignition scheme \cite{Tabak_POP}, where the first stage is the target compression, and in the second stage a long ($\gtrsim10$ ps) laser forms a hollow channel in the coronal plasma.
The pre-formed channel provides a path for an intense ignition laser that subsequently propagates towards the target core.
Due to the interaction at a lower density, the laser penetrates through the coronal plasma with minimal losses of energy.
This enables a higher coupling efficiency of the laser energy to the ignition core.
The potential global impact of this idea has motivated further research on plasma channel formation and dynamics \cite{ pipe_channel, channel_formation1, Fuchs_channel, Malka_prl_1997_ch, Zulf_POP_2003, channel_electrons, Channel_formation_2, sarkisov, Sarri_popch, Li_simulations, Satya_channel, Lemos_Grismayer_Dias}.
Two-pulse configuration studies showed that not only the laser but also the accelerated electron beams can be guided within a plasma channel \cite{double_pulse_guiding, double_pulse_guiding_2}.
Optimising the laser self-focusing \cite{Self_focusing_early1, Self_focusing_early2} and self-guiding is of particular importance for the laser channeling \cite{Mori_selffocus, naumova_poleff, Matsuoka}.
Besides self-focusing, a range of other interesting collective effects were observed, both in experiments and numerical simulations.
Among them are the long-lived postsolitons that persist long after the laser-plasma interaction is completed \cite{Sarri_solitons, Louise_channel, Macchi_ionmotion, DKPOP}.
More recently, laser-generated channels have gained attention as an environment well-suited for particle acceleration in its own right.
Various mechanisms have been reported as responsible for electron acceleration within the channels, where the betatron resonance is one of the most explored \cite{pukhov_acc_pop, mangles_intenseaccmech}.
The channels are shown to sustain large-scale azimuthal magnetic fields \cite{Ashley_Bfield} in which the electrons perform betatron oscillations.
A considerable enhancement of the axial momentum and the total electron energy can be achieved via amplification of betatron oscillations \cite{POP_arefiev_2014, PRL_arefiev_2012, Huang_recent}.
The existence of transverse fields also helps keep the momentum of forward-moving electrons aligned with the channel axis.
Self-generated electromagnetic fields can thus assist direct laser acceleration within the channel and allow energy gain beyond the vacuum acceleration limit \cite{Tsakiris_acc}.
Fluctuations of the longitudinal electric field affect the dephasing between the electrons and the laser,
which in turn allows for the generation of ``superponderomotive'' electrons within the channel \cite{Superponderomotive} (which can be similarly generated in solid-density plasmas \cite{Sorokovikova_PRL}).
In addition, electron acceleration within the channel can be initially assisted by the presence of a surface wave on the channel walls.
Here, the surface wave pre-accelerates the electrons, which are then further accelerated by direct laser acceleration or betatron resonance \cite{Naseri_acc, Naseri_PRL_SW}.
A comprehensive review of the factors that contribute to the electron acceleration within the channel can be found in Ref. \cite{JPP_arefiev_2015} and scaling laws in Ref. \cite{Khudik_scalings}.
A distinguishing feature for prospective applications is that particles accelerated within plasma channels emit high frequency radiation that can be used as an X-ray source,
which was demonstrated in recent experiments \cite{Silvia_nat, Betatron_channel, Kneip_prl, Albert_channel_xrays}.
At higher laser intensities, $\gamma$-rays can be emitted \cite{Arefiev_PRL2016_gamma_rays}, and the radiation reaction becomes relevant for the electron dynamics.
Next generation laser facilities will provide extreme intensities ($I>10^{24}~ \mathrm{W/cm^2}$) \cite{facilities_ELI, facilities_apollon, facilities_xcels}.
These short pulses of intense light ($20 - 150$ fs) will be preceded by a long low-intensity pedestal (or prepulse).
When interacting with a plasma slab, the prepulse can produce a hollow plasma channel, because its duration is longer than the characteristic timescale for the plasma ions (the pedestal duration can be on the order of nanoseconds).
A plasma channel can also be produced by a separate laser pulse, with a moderate intensity of $10^{18}~\mathrm{W/cm^2}$ and duration on the order of $10$ ps.
If the prepulse forms a channel, the main laser will be naturally aligned with the channel axis, and therefore its interaction with the plasma will happen within the pre-formed channel.
Due to the high laser intensity, this interaction may be radiation reaction-dominated.
It was reported from simulation studies that radiation reaction can affect the collective plasma dynamics in laser interaction with solid targets ($\gtrsim10^{22}$ cm$^{-3}$) \cite{Arefiev_PRL2016_gamma_rays, Pukhov_rrtrapp}.
An investigation of how an extremely intense laser will interact with dense gas targets is still missing.
This is timely because hydrogen gas jets are now available with densities up to $n \simeq10^{21}$ cm$^{-3}$ \cite{dense_gas_jets}, and they will be employed in experiments with the next generation of lasers.
Here we present a study of mm-scale channel formation in underdense plasma and subsequent intense laser propagation through that channel.
We demonstrate that stable, non-fully-cavitated channels can be created in plasmas of various densities.
In particular, we studied the channel formation in the background plasma between $n_e=0.001~n_c$ and $n_e=0.1~n_c$, where $n_c$ is the critical density.
We consider channel formation with a laser of $I=10^{18}~\mathrm{W/cm^2}$.
Large-scale self-generated electric and magnetic fields are studied with 2D and 3D particle-in-cell simulations (full-scale 3D simulations were performed at low plasma density).
We focus on the channel structure obtained from simulations at $n_e=0.1~n_c$, where the full cavitation is not achieved and
the plasma density near the channel axis is $n_e=0.02~n_c$.
We then consider different intense lasers ($\tau_0\sim ps$) propagating through this light pipe, and their interaction with the plasma.
Radiation reaction is shown to play an important role already at $a_0=100$ (where the normalized vector potential $a_0$ is defined as $a_0=0.85\sqrt{I[10^{18}~\mathrm{W/cm^2}]}\lambda_0[\mathrm{\mu m}]$ for a linearly polarised laser).
Qualitative differences are observed in the plasma dynamics with and without accounting for the electron energy loss due to high-frequency photon emission.
There are differences in the channel wall dynamics, but the most striking difference is that the radiative trapping enables electrons to experience longer interactions with the laser, which in turn leads to a higher energy gain due to the direct laser acceleration.
This also increases the number of accelerated electrons.
Without optimising the channel width and laser focusing parameters, we show that with a 10 PW, 150 fs laser one can obtain an electron beam with over 1.6 nC of super-ponderomotive electrons, with a 6 GeV energy cutoff in a 1.8 mm plasma channel.
For the same laser parameters, we also consider a case of propagation through a channel with a spatially-varying width.
Increasing the laser power, or varying the channel properties to obtain laser guiding at a higher local laser intensity is expected to substantially increase the electron cutoff energy.
For example, energy of 15 GeV can be obtained with $a_0=600$ for a propagation length of 0.5 mm.
This paper is organised as follows.
In the next section, we study the channel formation in near-critical plasmas using long ($\tau_0>$ 10 ps), weakly relativistic ($a_0\simeq1$) laser pulses.
The space-charge field structure within the self-created channels is discussed, as well as its strength as a function of the background plasma density.
Section 3 explores the propagation of intense laser pulses ($a_0\gtrsim100$, $\tau_0\sim1~$ps) through such preformed channels.
Special attention is devoted to exploring the distinct plasma dynamics when radiation reaction becomes relevant.
This is followed by an example relevant for the near-future 10 PW laser facilities ($a_0\sim100$, $\tau_0\sim150~$fs).
Finally, we present our conclusions.
\section{Plasma channel generation; electromagnetic field structure within a pre-formed plasma channel}
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{3D_channeling.pdf}
\caption{Electromagnetic field lines within a plasma channel formed for $n_p=0.001~n_c$ at $t=1632~ \omega_p^{-1}$ (9.07 ps). Magnetic field in a channel formed by a) circularly polarised laser, b) linearly polarised laser. Electric field in a channel formed by c) circularly polarised laser, d) linearly polarised laser.}
\label{3d_field_lines}
\end{figure}
This section deals with $\gtrsim$10 ps long lasers interacting with underdense plasma slabs.
Such a long laser pulse creates a mm-scale plasma channel that can serve as a guiding structure for another laser beam.
The laser field initially introduces a transverse temperature gradient, which causes the expansion of the electrons.
Multi-ps timescale allows the plasma ions to follow the electrons, pulled by the electric field induced by the charge separation.
If the laser power is high enough, self-focusing can increase the laser intensity, which further reinforces the channel formation.
The critical power for self-focusing is given by $ P_c=17 \left( n_c/n_e\right) ~\mathrm{GW}$ \cite{Self_focusing_early1, Self_focusing_early2}, where $n_e$ is the electron plasma density and $n_c=m_e \omega_0^2/(4\pi e^2)$ is the critical density associated with the propagation of a light wave with frequency $\omega_0$.
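For orientation, the critical density and self-focusing threshold for the parameters used below can be checked with a few lines of Python (a sketch using the SI form $n_c=\epsilon_0 m_e\omega_0^2/e^2$ of the expression above):
\begin{verbatim}
import math
eps0, m_e, e, c = 8.854e-12, 9.109e-31, 1.602e-19, 2.998e8

def n_c_cm3(lambda0_um):
    omega0 = 2 * math.pi * c / (lambda0_um * 1e-6)
    return eps0 * m_e * omega0**2 / e**2 * 1e-6  # m^-3 -> cm^-3

def P_c_GW(ne_over_nc):
    return 17.0 / ne_over_nc

print(n_c_cm3(1.0))   # ~1.1e21 cm^-3 for a 1 um laser
print(P_c_GW(0.001))  # 17,000 GW (17 TW) at n_e = 0.001 n_c
\end{verbatim}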
When the plasma density is near-critical, channel-splitting might occur, and a fraction of the laser energy may be lost transversely.
Self-focusing helps these daughter-channels recombine into a single main channel.
If the side-channels persist, some of the energy can be trapped later to form solitons \cite{Sarri_solitons, Louise_channel, Macchi_ionmotion, DKPOP}.
More details on channel formation process, laser self-focusing, self-guiding and associated instabilities can be found in the literature \cite{DKPOP, Friou_Gremillet, naumova_poleff, Naumova_popch, sarkisov, Mori_selffocus, clayton_selffocus, Mori_morecomplicated, Matsuoka, filament_ins}.
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{3d_vs_2d_fields.pdf}
\caption{Comparison of the a) $E_2$ and b) $B_3$ components in $x_1-x_2$ plane for 2D and 3D simulations of linearly polarised laser channeling. Both panels represent lineouts for $x_1=345~c/\omega_p$ at $t=1632~\omega_p^{-1}$. }
\label{fields_2d3d}
\end{figure*}
Here we are interested in the electromagnetic field structure that is formed within a mm-scale plasma channel.
This field structure can be used to aid particle acceleration or to guide an externally or self-injected electron beam.
To illustrate the channel field structure that could be expected in near-critical plasmas, we perform a series of 2D and 3D PIC simulations of channel formation with OSIRIS \cite{Fonseca_scaling}.
The laser pulse length is 15 ps at FWHM, with the transverse laser waist of $W_0=$14.3 $\mu$m.
The temporal envelope consists of a 5 ps rise, a 5 ps fall, and a 10 ps flat-top section in between, where the laser amplitude is at its maximum.
Peak laser intensity is $I=10^{18}~\mathrm{W/cm^2}$, corresponding to the normalized vector potential $a_0=0.8$.
The background plasma density is $n_e = 0.001~n_c$ which for a laser with $\lambda_0=1~\mu$m corresponds to $n_e=10^{18}~\mathrm{cm^{-3}}$.
The plasma is 1.91 mm long.
We simulate 31.9 ps of interaction with $2.61\times10^{5}$ iterations. All simulations are normalized to $\omega_p=1.8\times10^{14}~\mathrm{s^{-1}}$.
With this normalization, distances expressed in $c/\omega_p$ can be readily converted to $\mu$m simply by multiplying by a numerical factor of 1.6.
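For example, the conversion factor and the time stamps quoted in the figure captions follow directly from this normalization (a short consistency check):
\begin{verbatim}
c = 2.998e8        # m/s
omega_p = 1.8e14   # s^-1, normalization frequency used in the simulations

print(c / omega_p * 1e6)      # ~1.67 um per c/omega_p (the factor of ~1.6)
print(1632 / omega_p * 1e12)  # ~9.07 ps, i.e. t = 1632 omega_p^-1 in Fig. 1
\end{verbatim}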
The simulation box in 2D is $2069~ \mu$m long and $159~\mu$m wide, resolved with $39000 \times 3000$ cells.
The 3D simulations are performed with a box size of $2069~ \mu\mathrm{m} \times 127.4~ \mu\mathrm{m} \times 127.4~ \mu\mathrm{m}$, resolved with $39000 \times 200\times 200$ cells.
We perform additional 2D simulations with $n_e=0.1~n_c$ and $n_e=0.01~n_c$, that are not possible in 3D because a finer transverse resolution would be required to do so.
All 2D simulations are performed for a linearly polarized laser pulse, while in 3D we also consider circularly polarized laser pulses (considering the same $a_0=0.8$).
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{EMfields_lineouts.pdf}
\caption{Lineouts of space-charge electric field $E_2$ and out-of-plane magnetic field $B_3$ for plasma densities of a) $n_p=0.001~n_c$, b) $n_p=0.01~n_c$ and c) $n_p=0.1~n_c$. All lineouts are taken at $t=1150 ~\omega_p^{-1}$ (which corresponds to $\sim$ 6.4 ps) for $x_1=120 ~c/\omega_p$.}
\label{EMfields_lineouts}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{Electrons_ions.pdf}
\caption{Channel formation in a plasma with a background density of $n=0.1~n_c$. a) Electron plasma density, b) ion plasma density at $t=2300~\omega_p^{-1}$, which corresponds to $\sim$ 12.8 ps. }
\label{long_channel_density}
\end{figure*}
Figure \ref{3d_field_lines} shows the electromagnetic fields within the channel from the 3D simulations for circular and linear laser polarisations.
The channel fields are superimposed with the fields of the laser, but are more than two orders of magnitude smaller.
The fast-oscillating laser fields therefore interfere with measuring the slowly-varying large-scale channel fields.
However, we can still access the channel fields if we average the data over the fast oscillating component in the $x_1$ direction.
It then becomes evident that the channel field structure is composed of a radial electric field and an azimuthal magnetic field, which are shown in Fig. \ref{3d_field_lines}.
Let us now consider an electron beam traveling in the positive $x_1$ direction, and the effect the channel fields have on its propagation.
The radial electric field provides a restoring force on the electrons that acts towards the channel centre.
The direction of the azimuthal $B$ is also such that for an electron moving forward in $x_1$ in the vicinity of the channel axis, the Lorentz force points towards the channel centre.
Hence, both the electric and the magnetic fields of the channel provide forces that tend to guide the electron beam by acting against the escape of individual electrons from the central region.
This ensures a stable propagation of an electron beam or a current filament within the channel.
Full-scale 3D simulations of the channel formation process are possible only at low densities ($n_e\sim0.001~n_c$).
By comparing the 2D and 3D descriptions at low densities, we evaluate how well laser channeling is described in 2D simulations.
Figure \ref{fields_2d3d} shows $E_2$ and $B_3$ lineouts parallel to the $x_2$ axis through the centre of the channel (here the $x_2$ coordinate represents the distance from the channel centre).
One notable difference is that the electric field amplitude is slightly lower in 2D (as shown in Fig. \ref{fields_2d3d}a).
Apart from this, the field structure is identical in 2D and 3D, and the channel expands to the same width.
For higher plasma densities, we deduce the order of magnitude of the fields generated, as well as the transverse size of the plasma channel from 2D simulations.
The electric and magnetic field lineouts for background plasma densities up to $n_e=0.1~n_c$ are given in Fig. \ref{EMfields_lineouts}.
One might expect denser plasmas to generate proportionally higher space-charge fields in otherwise identical conditions.
However, Fig. \ref{EMfields_lineouts} shows that this is true only for the magnetic field.
The electric field does not scale linearly with the background plasma density.
The reason is that the ions follow the electron transverse expansion on the channel formation timescale.
In fact, Fig. \ref{long_channel_density} shows that electron and ion density distributions are similar.
It is therefore not guaranteed that increasing the overall density increases the electric field within the channel.
Due to the existence of complex local structures in the high-density background plasma with $n_e=0.1~n_c$, the electric field fluctuations within the channel are of the same order as the expected sinusoidal space-charge electric field (as shown in Fig. \ref{EMfields_lineouts}c).
Our simulations are in agreement with the charge-displacement electric field and the azimuthal magnetic field within a laser-created channel that have already been observed experimentally using proton probes \cite{Satya_channel, Ashley_Bfield}.
Similarly, a strong azimuthal magnetic field is expected also during interaction with very intense lasers \cite{Arefiev_PRL2016_gamma_rays}.
In the next section we use the data from our simulations to define the channel properties for the case of $n_e=0.1~n_c$ and use it as an initial plasma configuration to propagate extremely intense laser pulses.
\section{Extreme intensity laser channeling. The role of radiation reaction for particle trapping.}
Now that we have determined what kind of channels can be created with a 10 ps-class laser pulse, we investigate how this structure can serve to guide ultra-intense laser pulses ($a_0\gtrsim100$).
Due to the high intensity, it is expected that radiation reaction will significantly affect the plasma dynamics.
We perform a series of 2D simulations at a range of laser intensities between $a_0=50$ and $a_0=600$.
The laser temporal profile has a 1.062 ps flat-top section with a 265 fs Gaussian rise and fall.
Background plasma density outside of the channel is $n_e=0.1~n_c$, the peak density of the channel walls is $n_e=0.2~n_c$ and the lowest density on channel axis is $n_e=0.02~n_c$.
The channel width at the beginning of the simulation is 25.5 $\mu$m.
The initialization of the pre-formed channel is based on the values obtained in the previous section for $n_e=0.1~n_c$.
The simulation box is $1.27~ \mathrm{mm} \times 0.16~\mathrm{mm}$, resolved with $24 000 \times 3000$ cells.
The total simulation time is 4.2 ps, with a time step of $\Delta t = 0.12$ fs.
Radiation reaction is described through the classical Landau-Lifshitz equation of motion \cite{Vranic_ClassicalRR}.
To ascertain that this is an adequate approach, we have verified that the quantum parameter satisfies $\chi_e<0.2$ even for the most energetic particles in our simulations.
The parameter $\chi_e$ is defined by $\chi_e=\sqrt{(p_\mu F^{\mu \nu})^2}/(E_c m_e c)$, where $E_c=m_e^2c^3/(\hbar e)$ is the Schwinger critical field.
For particles counter-propagating with the laser, $\chi_e\simeq2a_0\gamma \times \hbar\omega_0/(m_ec^2)$, while for particles moving in the same direction as the laser $\chi_e \simeq a_0/(2\gamma) \times \hbar\omega_0/(m_ec^2)$.
Classical description is valid as long as $\chi_e\ll 1$.
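As an illustrative estimate of the two limits (a sketch; $\gamma=2000$ is an assumed value representative of the most energetic electrons in our runs):
\begin{verbatim}
import math
hbar, m_e, c = 1.055e-34, 9.109e-31, 2.998e8

def chi_e(a0, gamma, lambda0_um=1.0, counter=True):
    omega0 = 2 * math.pi * c / (lambda0_um * 1e-6)
    r = hbar * omega0 / (m_e * c**2)  # ~2.4e-6 for a 1 um laser
    return 2 * a0 * gamma * r if counter else a0 / (2 * gamma) * r

print(chi_e(100, 2000, counter=False))  # ~6e-8: co-propagating electrons
print(chi_e(100, 2000, counter=True))   # ~1: counter-propagation at full gamma
\end{verbatim}
Electrons rarely counter-propagate through the peak field at their full energy, which is consistent with the $\chi_e<0.2$ verified in the simulations.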
Fig. \ref{density_extreme} shows the electron density for several simulations performed with and without radiation reaction (RR).
A striking difference in the collective plasma dynamics with and without RR is observed already for $a_0=50$.
Without RR, the channel bifurcates, which is not the case when radiation reaction is taken into account.
At higher laser intensities ($a_0>100$), there is a trapped electron beam in the vicinity of the channel axis in simulations with RR, that can be accelerated directly by the laser.
When simulations are performed without radiation reaction, all the electrons are evacuated and there is no electron beam in the centre.
However, some direct laser acceleration also occurs without radiation reaction, away from the channel axis where the laser intensity is not at its maximum.
We first investigate the channel bifurcation.
For a laser intensity corresponding to $a_0=50$, radiation reaction is not expected to strongly affect particles that are co-propagating with the laser.
This hints that radiation reaction is important near the channel wall, because that is the only region where a current flows in the opposite direction.
To examine this in more detail, we focus on a region where the channel-splitting occurs in the simulation without radiation reaction.
Figure \ref{current_vector} shows the in-plane electron current in precisely this region, right before the separation of the first side channel in the positive $x_2$ direction.
Panels a) and b) in Fig. \ref{current_vector} show a similar current structure with and without RR at $t=225~\omega_p^{-1}$.
Current loops are formed on the inside of the channel in the immediate vicinity of the channel wall.
The existence of the loops near the channel wall is typical in near-critical plasmas \cite{Louise_channel}, and it precedes channel splitting.
The periodicity of these structures is on the order of the laser wavelength (the channel walls reach densities close to $n_c$, therefore the electron plasma frequency is also on the same order in this region).
Qualitative difference between the two simulations appears already at $t=230~\omega_p^{-1}$.
In the simulation without radiation reaction the current loops persist, while with radiation reaction they become elongated.
The reason is that in the upper part of the current loop in Fig. \ref{current_vector} e), the electrons move opposite the laser propagation direction.
Due to the interaction with the laser, some electrons may be reflected forward.
The reflection can happen with or without radiation reaction; it depends only on the local laser intensity and the energy of the counter-propagating particle.
However, the instantaneous energy of the electrons with RR is lower than without RR in otherwise identical conditions, because a fraction of electron energy is lost due to the radiation emission.
This is what makes the reflection more probable with radiation reaction \cite{capturePiazza}.
As their momentum already has an $x_2$ component, the reflected particles continue to propagate away from the channel axis as in Fig. \ref{current_vector} f).
The particles eventually close the current loop where the local laser intensity is lower.
The elongated current loops contribute to forming a less sharp channel boundary, which helps the main and the side channels reconnect.
The conclusions above apply also to lasers of higher intensities as the split channels consistently appear in the simulations without radiation reaction, but not in simulations with radiation reaction (Fig. \ref{density_extreme}).
Apart from the inhibited channel splitting, another important difference is that in the simulations with RR for $a_0>100$ the channel has a dense population of electrons near the central axis.
This threshold intensity for radiative trapping is of the same order as, but slightly lower than, that in Ref. \cite{Pukhov_rrtrapp}, which considered a shorter laser pulse.
A radiation reaction strong enough to compensate for the expelling force of the laser was observed, which allowed the electrons to remain in the region close to the peak laser field.
The electrons within the channel in these conditions can be accelerated to energies above 10 GeV, as illustrated in Fig. \ref{phasespaces_extreme}.
Panel a) shows the channel density map together with randomly selected electrons with relativistic factor $\gamma>2000$.
The laser intensity here is $a_0=600$, and all the electrons above 1 GeV reside in the vicinity of the channel axis.
As the channel is not fully cavitated, the particles are accelerated both with and without radiation reaction.
However, the most energetic particles are located at the channel axis, and these were injected there by radiative trapping.
The longitudinal phase spaces in Fig. \ref{phasespaces_extreme} b)-e) show that higher longitudinal electron momenta are achieved with radiation reaction.
The peak of high-energy electrons close to $x_1=300$ in panel e) corresponds to the very central region of the bunch close to the channel front.
Therefore, assisted by the radiation reaction, intense lasers can produce energetic and collimated electron beams within hollow channels.
Figure \ref{electron_spect} shows the electron spectrum of a beam accelerated in an identical plasma channel as above, but with a 150 fs long, 10 PW laser beam (to be available in ELI beamlines \cite{facilities_ELI}).
The laser was initially focused to a $5~ \mu$m FWHM focal spot.
The electron energy cutoff is 6 GeV, with an equivalent of more than $10^{10}$ electrons accelerated above 1 GeV.
The beam divergence is on the $50$ mrad level.
All these electrons are super-ponderomotive, as the laser energy is distributed within the channel and most of the propagation time the peak intensity corresponds to $a_0\sim50$.
During the propagation through 1.8 mm of plasma, the laser has lost less than 25 \% of its total energy and therefore the acceleration could potentially continue further within a longer plasma channel.
Using a channel with a smaller transverse size would be favourable to maintain a high laser intensity over a longer propagation distance.
This would ultimately result in a higher beam energy cutoff and increase the amount of accelerated charge.
One question that might arise is how a spatio-temporal variation of the channel width could affect the results above.
As the channel expands at the local sound speed, the width variations are small within the $\sim150$ fs timescale and we assume the channel does not evolve during the interaction.
However, spatial variation (in $x_1$ direction) depends on the first pulse parameters and the delay before the main pulse arrives.
We can map the varying channel width from Fig. \ref{long_channel_density}, where the channel is $\sim 32 ~ \mu$m wide at the vacuum-plasma interface and $\sim9 ~\mu$m wide 1 mm into the plasma.
Figure \ref{spec_compare} shows two electron spectra obtained with a constant channel width above, and using a channel as in Fig. \ref{long_channel_density}.
The energy cutoff is similar, but within the spatially varying channel 30 \% fewer electrons were accelerated above 1 GeV in the first mm of propagation.
For completeness, we have displayed the spectra with and without radiation reaction, even though at $a_0\sim50$ the differences in the accelerated beam are minor.
We note that the configuration presented here is the simplest example where radiative trapping is of significance at extreme laser intensities.
The effect can also be found in standing waves formed by interaction of two or more lasers \cite{Gonoskov_radiative_trapping, Kirk_radiative_trapping}, which is beyond the scope of this paper.
We stress, however, that the configuration explored here can be tested in the next generation of laser facilities such as ELI \cite{facilities_ELI}.
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\textwidth]{density_electrons_all}
\caption{Channel density at $t=402.5~\omega_p^{-1}$ for extreme channeling simulations a), c), e), g), i) without and b), d), f), h), j) with radiation reaction. For $a_0>100$, apart from inhibited channel splitting with radiation reaction, there is a population of electrons within the channel that does not appear when radiation reaction is not accounted for. These electrons are in the region of the strongest electromagnetic field and can be accelerated. }
\label{density_extreme}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{Current_structures2}
\caption{Vector plot of in-plane electron current a), c), e) without and b), d), f) with radiation reaction. The three frames show a region where the first channel splitting later occurs without RR, and does not occur with RR. Region around the channel wall e) without and f) with RR. Note that the direction of motion of the negatively charged electrons is opposite to the current vectors displayed. }
\label{current_vector}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{phasespaces_a400a600}
\caption{Electron beam energy at $t=402.5~\omega_p^{-1}$. a) Density of the channel for $a_0=600$ with randomly selected electrons with relativistic factor $\gamma>2000$. Spheres that represent individual electrons are coloured according to the energy. Their vertical distance from the $x_1-x_2$ plane also corresponds to the energy. Most of these electrons are located within the channel in the central region that experiences the strongest laser field. b), c) Longitudinal phase spaces with and without radiation reaction respectively for $a_0=400$. d), e) Longitudinal phase spaces with and without radiation reaction respectively for $a_0=600$. In general, the maximum electron energies obtained with RR are higher. }
\label{phasespaces_extreme}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Electron_spectrum}
\caption{Electron energy spectrum from a simulation with a 10 PW laser beam with a pulse duration of 150 fs (soon to be available at ELI \cite{facilities_ELI}). The highlighted section of the spectrum corresponds to $\sim$1.6 nC of charge. The spectrum is recorded after 1.8 mm of laser propagation. }
\label{electron_spect}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{Spect_slide.pdf}
\caption{Electron energy spectrum with and without radiation reaction. a) For a constant channel width of 25.5 $\mu$m; b) the channel width varies in $x_1$ direction as in Fig. \ref{long_channel_density}.}
\label{spec_compare}
\end{figure*}
\section{Conclusions} \label{concl_sect}
We have studied channel formation and intense laser propagation through pre-formed channels.
We have shown that prepulses can be exploited to generate parabolic channels that would serve as a light pipe to guide the intense lasers through large-scale underdense plasmas.
Electrons with energies above 10 GeV can be obtained within a 1-mm long plasma channel using the next generation of laser facilities \cite{facilities_ELI}.
By focusing a 10 PW laser pulse with a duration of 150 fs to a $5~ \mu\mathrm{m}$ focal spot, 6 GeV energy gain can be obtained in 1.8 mm of propagation.
The channel width used here is $25.5~\mu\mathrm{m}$.
This was not in any way optimized to maintain the maximum laser intensity or generate maximum electron energy.
Further studies are required to show the optimal conditions for the proposed acceleration setup.
This would likely increase the electron acceleration efficiency and the maximum energy the electrons obtain in the light pipe.
\section*{Acknowledgements}
This work is supported by the European Research Council (InPairs ERC-2015-AdG Grant 695088), and FCT (Portugal) SFRH/BPD/119642/2016. Simulations were performed at Supermuc (Germany) and Fermi (Italy) through PRACE allocation and at the IST cluster (Lisbon, Portugal). The authors thank Dr T. Grismayer for fruitful discussions.
\section*{References}
\bibliographystyle{unsrt}
The current conceptual model of mineral dissolution in porous media is based on three 'dissolution regimes' that assist flow and transport prediction during dissolution\cite{1998-Fredd,2002-Golfier,1986-Chadam}. Accurate identification of these regimes is essential as the dissolution regime ultimately controls the evolution of permeability. Moving from one regime to the other results in orders of magnitude differences in permeability change with increasing porosity. As such, accurate prediction of mineral dissolution in porous media is crucial for a wide range of subsurface applications, including CO$_2$ sequestration and geothermal power generation \cite{2015-Pandey,2015-Black} where failure to predict the changes in permeability can lead to poor fluid injection efficiency and potentially irreversible reservoir damage \cite{2010-Gauss,2010-Portier}.
The balance between flow, diffusion, and reaction rates determines which dissolution pattern develops during reactive flow in a porous medium \cite{2013-szymczak}. When flow is slow compared to reaction rate, the face of the porous medium closest to the inlet will dissolve and result in compact dissolution. When flow is fast compared to the reaction rate, acidic fluid is quickly distributed throughout the pore spaces and the medium dissolves uniformly. At intermediate flow rates, the acidic fluid etches a wide pathway through the porous medium in the direction of flow and forms a wormhole. These regimes can be predicted based on the P\'eclet number $Pe$ (the ratio of advective to diffusive transport) and the Kinetic number $Ki$ (the ratio of chemical reaction to diffusive transport). However, these dissolution regimes do not take into account the structural heterogeneity of complex porous media, because they were first identified (Fig \ref{fig:conceptualmodel}) before the technology was developed to observe or model reactive flow at the scale of grains and pores. Thus, they are problematic when quantifying the relationships between flow, reaction, and pore structure.
\begin{figure}[!t]
\includegraphics[width=1\textwidth]{conceptualModel.pdf}
\caption{(A) Schematic depiction of dissolution regimes in the P\'eclet number - Kinetic number space. (B) This paper modifies this traditional conceptual model by adding the channeling regime. \label{fig:conceptualmodel}}
\end{figure}
Recent advances in x-ray-CT imaging techniques \cite{2017-reynolds} have enabled direct observation and quantification of dissolution-induced changes in the pore structure and provided insight into influences of structural heterogeneity, flow, and reaction rate on dissolution regime. Several experimental studies have observed mineral dissolution at the pore-scale in reservoir rock samples \cite{2009-Noirel,2013-Hao,2014-Luquot, 2015-deng, garing2015anti}. Others \cite{2015-Menke,2016b-Menke,2017-Menke,2018-Menke} studied the dissolution dynamics \textit{in situ} during fast flow in rocks of varying complexity, observing uniform dissolution in a structurally simple rock, but the opening of preferential flow pathways in the more complex rock samples. This path-widening did not progress longitudinally with flow, as is the case for wormholes, but instead opened everywhere along the dominant flow channel and was thus named 'channeling'. This regime was later confirmed \cite{2020-Yang} by observations of channeling in both fractured and vuggy rock samples. However, as of yet no in-depth experimental characterisation of the conditions required for channeling has been performed, and thus no new conceptual model has been proposed that includes channeling.
Pore-scale experimental techniques are often complemented by advances in numerical simulations that give insight into the complex relationship between pore structure, flow, and reaction. However, limitations in numerical methods have not allowed flow to be simulated at the high flow rates seen near reservoir injection wells \cite{2016-nunes,2016b-nunes,2016-Gray}, which limits the range of dissolution regimes that can be studied. Several studies \cite{2009-Szymczak, 2017-Soulaine} have attempted a comprehensive numerical investigation of the full spectrum of pore-scale dissolution regimes (Fig. \ref{fig:conceptualmodel}), but these were restricted to relatively homogeneous domains with minor differences in pore structure between models and small differences in flow rate. Channeling has thus not been characterised in numerical models at the pore-scale by any study to date because either the numerical capabilities for high flow rates or the structural complexity in the model were lacking. Therefore, the placement of the boundaries between the wormhole, channeling, and uniform dissolution regimes is unknown and the conceptual model of dissolution is missing information vital for accurate modelling of dissolution.
The work presented here is a numerical investigation into how pore-space complexity changes the conceptual model of dissolution regimes and how the channeling regime fits into our broader understanding of dissolution. Two synthetic 2D pore structures with varying levels of heterogeneity were created stochastically and their structural complexity characterized (Fig. \ref{fig:micromodels}). A series of 26 numerical simulations was performed on each of the geometries by injecting acid at different flow and reactive conditions using our new highly efficient open source numerical solver GeoChemFoam \cite{2021a-Maes,2021b-Maes,2021c-Maes,2022a-Maes}, which is based on the Open Source Computational Fluid Dynamics toolbox OpenFOAM \cite{2016-OpenFOAM}. We observe that many of the model scenario results do not fit the conceptual model of the three traditional dissolution regimes and instead have categorically distinct porosity-permeability relationships. We show that these four dissolution regimes can be distinguished using the moments (mean, standard deviation, skewness, and kurtosis) of the distributions of pore throat size and acid concentration. We then employ hierarchical agglomerative clustering \cite{1987-ROUSSEEUW} to provide a quantitative means of identifying the channeling regime and differentiating channeling from the other three regimes. Finally, we provide an updated conceptual model of dissolution regimes that includes channeling and demonstrate how the boundaries between regimes shift with increasing pore space complexity.
\section{Materials and Methods}
All numerical simulations were performed using \href{https://github.com/GeoChemFoam}{GeoChemFoam} on Intel Xeon processors (24 cores). For each image, an unstructured mesh is created within the pore-space using OpenFOAM utility \textit{snappyHexMesh}. For each time-step, velocity and acid concentration fields are solved. Then the reaction rate and the velocity of the dissolving faces are calculated and the mesh is updated. Mesh quality is checked at the end of each time-step and if the skewness is too large, the domain is completely remeshed. Since GeoChemFoam uses steady-state formulations of flow and transport, it can be applied with very large time-steps ($CFL\approx1000$), allowing for large speed-ups in computation time. Details of the numerical method are presented in \cite{2022a-Maes} and in the supplementary information. The original geometries and output files can be downloaded from our \href{https://zenodo.org/record/6993528}{Zenodo dataset archive}, the geometry creation scripts are on \href{https://github.com/hannahmenke/Channeling2022}{github} and an example input deck is on the \href{https://github.com/GeoChemFoam/GeoChemFoam/tree/main/Examples/}{GeoChemFoam wiki}.
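Schematically, the time loop can be summarised as follows (Python-style pseudocode for orientation only; the actual solver is an OpenFOAM/C++ application and all function names here are illustrative):
\begin{verbatim}
def dissolution_time_loop(mesh, dt, max_skewness):
    while not finished(mesh):
        u, p = solve_steady_flow(mesh)          # velocity and pressure
        conc = solve_steady_transport(mesh, u)  # acid concentration
        R = surface_reaction_rate(conc)         # rate at fluid-solid faces
        w_s = interface_velocity(R)             # w_s = (M_ws/rho_s) R n_s
        mesh = move_interface(mesh, w_s, dt)    # ALE mesh motion
        if skewness(mesh) > max_skewness:       # quality check
            mesh = remesh(mesh)                 # full snappyHexMesh rebuild
\end{verbatim}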
\section{Numerical observations of pore-scale dissolution}\label{Sect:images}
\begin{figure*}
\begin{center}
\includegraphics[width=0.95\textwidth]{micromodels.pdf}
\caption{(A) Model A. (B) Model B. The grains (gray) are rendered with the velocity field of the pore space (color) computed using an injection rate of 0.4 mL/min and a resolution of 2.5 $\mu$m per pixel. (C) Results showing the histogram of throat size (solid) and velocity (dotted) for Model A (blue) and Model B (red). The characteristic length $L$ is 1.125 x $10^{-4}$ m for Model A and 1.251 x $10^{-4}$ m for Model B. \label{fig:micromodels}}
\end{center}
\end{figure*}
A relatively homogeneous geometry was created with a small random deviation in both grain radius and placement of the grains (Model A, Fig \ref{fig:micromodels}A). Structural complexity was then increased by adding a larger random deviation of both grain radius and placement to create an increasingly heterogeneous geometry (Model B, Fig \ref{fig:micromodels}B). The distributions of throat sizes and velocity of Model A and Model B are presented in Fig \ref{fig:micromodels}C. Model A has velocity and pore throat size distributions that are narrow, while Model B shows a wide tail representing the focusing of flow into the preferential flow paths through larger pore throats. Additional details on geometry creation and the numerical modelling are included in the Supplementary Information.
For each geometry, we perform 26 simulations to identify the boundaries between dissolution regimes. The model solves the quasi-steady state Navier-Stokes equations and advection-diffusion of reactant in the pore space using a finite-volume discretization on an unstructured hybrid mesh consisting of hexahedral and split-hexahedral elements \citep{2016-OpenFOAM}. The numerical model, including meshing, time-stepping and convergence, is presented in detail in the supplementary information. A simplified chemical model is employed representing dissolution of calcite mineral during acid injection, with one fluid component and one reaction component \cite{2016-nunes,2017-Soulaine,2022a-Maes}. The molecular diffusion coefficient is constant, $D=10^{-9}$ m$^2$.s$^{-1}$. The displacement of the fluid-solid interface is handled using the Arbitrary Lagrangian-Eulerian (ALE) method. Acid is injected from the left boundary at constant concentration and flow rate and the simulations are ended either when the porosity increases to 1.6 times the initial porosity or the permeability reaches a value 100 times larger than the initial permeability.
The relative importance of advection and reaction rate to molecular diffusion is characterized by the P\'eclet number $Pe=UL/D$ and Kinetic number $Ki=kL/D$, where $U$ [m.s$^{-1}$] is the average pore velocity, $L$ [m] is the average width of the flow pathways and $k$ [m.s$^{-1}$] is the reaction constant. Details on how to calculate $Pe$, $Ki$, $U$ and $L$ are presented in the supplementary information. For each simulation, the flow rate and reaction constant are adjusted to obtain the desired $Pe$ and $Ki$ at time=0.
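As a concrete check (a minimal sketch; the Model A values are taken from Table \ref{Table:ModelProperties} in Appendix A):
\begin{verbatim}
D = 1e-9  # m^2/s, molecular diffusion coefficient

def peclet(U, L):   # advective vs. diffusive transport
    return U * L / D

def kinetic(k, L):  # reactive vs. diffusive transport
    return k * L / D

# Model A: U ~ 8.9e-6 m/s, L ~ 1.125e-4 m
print(peclet(8.9e-6, 1.125e-4))  # ~1.0, i.e. the Pe = 1 column
\end{verbatim}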
\begin{figure*}[!t]
\begin{center}
\includegraphics[width=1\textwidth]{ModelABConcentrationPoro0.5.pdf}
\caption{Pore structure and acid concentration during mineral dissolution in (A) Model A and (B) Model B at $Pe$ and $Ki$ ranging from 0.01 to 100 at a porosity of 0.57. The solid phase is rendered in grey and the acid concentration in colors. The pore throats extracted using a watershed algorithm are shown in white. Simulations categorised in the compact, wormhole, and uniform regimes are outlined in gray, blue, and green, respectively, while simulations that do not fit into any traditional regime are outlined in red and designated channeling. (C,E) Porosity-Permeability curves for all the Model A and Model B simulations, respectively. (D) Porosity-Permeability curves for selected (starred) simulations. \label{fig:GeometryA}}
\end{center}
\end{figure*}
Maps showing the distribution of the injected acid concentration at the time where dissolution has increased the porosity from $\sim$0.45 to $\sim$0.5 are presented (Fig. \ref{fig:GeometryA}A and B). Videos of the dynamic evolution of dissolution are provided in the supplementary information. In Fig \ref{fig:GeometryA}A, we observe the three traditional regimes for Model A: compact dissolution (gray), wormhole (blue) and uniform dissolution (green). The cases at the boundary between wormhole and uniform dissolution, outlined in red, are traditionally classified as (ramified) wormholes \cite{2009-Szymczak,2017-Soulaine}. However, here we observe that they exhibit characteristics that contradict the wormholing concept. Rather than one ramified wormhole that has very little change in permeability until breakthrough (e.g. $Pe=1, Ki=0.1$), these cases contain a very large number of small dissolution channels that extend towards the outlet of the model, resulting in a porosity-permeability evolution with similar curvature to those of uniform dissolution, but with a larger change in permeability with porosity as dissolution is present in these pathways at the outlet almost instantaneously. In these cases, there is a direct correspondence between dissolution pathways and initial fast flow paths (Fig. \ref{fig:micromodels}A). The most dominant flow paths are dissolved first, which leads to an initial increase in permeability that is higher than that observed for uniform dissolution (e.g. $Pe=100, Ki=10$) (Fig. \ref{fig:GeometryA}A). We will demonstrate that this regime is channeling, as identified in previous experimental studies \cite{2016b-Menke,2020-Yang}.
\begin{figure*}[!t]
\begin{center}
\includegraphics[width=1\textwidth]{ModelAandBDissolutionZoom.pdf}
\caption{(Rows 1 and 4) Dissolution with change in porosity as a proxy for time for selected simulations of Model A (A1-5) and B (B1-5). At the top of each example, the dissolved pore space is shown at different porosity values with undissolved being grey, and red, green, and blue showing dissolution at subsequent times. The solid yellow squares are magnified regions of interest of the outlet of the model during dissolution which are outlined as dashed yellow squares. (Rows 2 and 5) The pore throat size distributions of Model A (row 2) and Model B (row 5) of the above simulation at the porosity values depicted in rows 1 and 4 (grey, red, green, blue). The skewness (solid line) and kurtosis (dashed line) of pore throat size are shown with increasing porosity on the right axes (purple). (Rows 3 and 6) The concentration distributions of Model A (row 3) and Model B (row 6) at the same porosity values depicted in rows 1 and 4 (grey, red, green, blue). The skewness (solid line) and kurtosis (dashed line) of the concentration distributions are shown with increasing porosity using the right axes (purple). \label{fig:DissolutionThroatSize}}
\end{center}
\end{figure*}
The existence of channeling becomes more apparent as structural complexity increases in Model B (Fig. \ref{fig:GeometryA}B), where we again observe a number of cases (outlined in red) that cannot be classified using any of the three traditional regimes and instead follow the same convex porosity-permeability (\ref{fig:DissolutionThroatSize}E) trends as those in Model A (\ref{fig:DissolutionThroatSize}C). In addition, the increased structural complexity has shifted the boundaries between compact/wormhole and wormhole/channeling to lower $Pe$ and the size of the channeling regime has increased in the sense that the total number of cases that fall into the channeling regime has increased from 6 to 10. In all of the channeling cases, the permeability increases faster and attains a larger value than for uniform dissolution and is faster than for the more structurally homogeneous cases in Fig. \ref{fig:GeometryA}A.
To illustrate the impact of pore space heterogeneity on dissolution regime, the evolution of the dissolution patterns and the throat size and concentration distributions for selected cases of Model A and B are shown (Fig. \ref{fig:DissolutionThroatSize}). The details of image analysis techniques used to extract these metrics can be found in the Supplementary Information. The corresponding evolutions of the porosity-permeability relationships are shown in Fig. \ref{fig:GeometryA}D. In the compact dissolution cases (A1, B1), the dissolution is transport-limited and creates large throats at the front of the model that result in a large skewness and kurtosis in throat size. Conversely, as the dissolution front advances, the highly concentrated acid spreads into more of the pore space and the skewness and kurtosis of the concentration distributions decrease. The small deviations in the dissolution front in Model B result in a larger overall skewness and kurtosis of throat size and concentration and a larger slope in the porosity-permeability relationship than in Model A. However, even at the largest porosity shown for Model B (porosity = 0.57), the dissolution front remains stable, and there is no dissolution near the outlet, so the overall change in permeability is low. When we apply a power law fit to the porosity-permeability relationship, we find a relatively small exponent of 1.5 to 2.
In the cases A2 and B2, the dissolution front becomes unstable, and advection and reaction compete as the dissolution etches pathways (wormholes) through the models. Large pore throats are created both at the fronts and inside the wormholes, which results in a large increase in the skewness and kurtosis of pore throat size and a widening of the pore throat size distribution through time. The concentration distributions develop a peak indicative of a preferential flow path through the model with corresponding decreases in skewness and kurtosis, as the wormhole carries the acid towards the outlet. In Model A, similar competition between flow paths results in a porosity-permeability relationship (Fig. \ref{fig:GeometryA}D) whose shape is similar to that of compact dissolution, but with a much higher exponent of 11: permeability increases rapidly once the wormhole is established in the fastest flow pathway. The preferential flow path is more dominant in Model B. We observe less competition initially, with breakthrough of the wormhole to the outlet occurring earlier, a larger increase in permeability, and an exponent as high as 19.
In cases A3 and B3 there are many ``independent'' perturbations with dissolution happening in many different flow pathways in both Model A and B. However, in these cases significant differences exist between models. In Model A there is little to no dissolution at the outlet, whereas in Model B the dissolution is present almost instantaneously at the outlet, resulting in an immediate broadening of the pore throat size distribution and several peaks in the concentration distribution corresponding to the different fast flow pathways (channels). The flow in Model B is therefore stable throughout the dissolution inside these channels, whereas in Model A the acid does not reach the outlet until a flow instability develops when a preferential pathway becomes dominant and a wormhole is formed. This behavior is reflected in the porosity-permeability relationship in Fig. \ref{fig:GeometryA}D. Model A has a porosity-permeability relationship with a monotonically-increasing slope and a power law exponent of 11. This observation is in direct contrast to Model B, where the initial shape of the porosity-permeability relationship is convex as would be expected with uniform dissolution, but inverts later during dissolution and becomes a power law with exponent 16. In both cases the dissolution creates a fat tail in the distribution of throat sizes that increases the kurtosis and skewness of the throat size distribution and focuses flow in preferential pathways. These cases therefore have properties of both the wormhole and channeling regimes at different times during the experiment and sit on the boundary between wormhole and channeling, with Model A being closer to wormhole and Model B being closer to channeling.
In cases A4 and B4 the pore throats in the preferential flow pathways are dissolved across the entire domain at the very beginning of the simulations, which creates a fat tail in the throat size distributions and peaks in the concentration distributions. Notably, the skewness and kurtosis of concentration show very little change due to the broad spread of the acid even from the beginning of the simulations. Flow is focused in these channels and there is little dissolution in the slower flowing areas of the pore space. This focused dissolution results in a porosity-permeability relationship of power law exponent 6 to 12. In Model B the structural complexity is higher and there are fewer fast-flowing channels; however, these are more dominant and result in a higher-order porosity-permeability relationship. The dissolution converges towards these fast channels and the flow inside them becomes so dominant that no wormhole forms in the domain. Unlike for cases A3 and B3, we do not see an inflection point in the porosity-permeability relationship. This is because, for channeling, flow is stable and the dissolution channels are instantaneously established as the dominant flow pathways and then become wider as the porosity increases.
In cases A5 and B5 the dissolution is reaction-limited and uniform across the domains, with no preferential pathways forming in either Model A or B. The kurtosis and skewness of pore throat size across the domains are flat as all flow paths are widened together. The concentration distribution has a large peak at the injection concentration, which increases only slightly throughout the simulations as more of the model dissolves. Here, the skewness of concentration is below 0, which is contrary to all other dissolution regimes. During uniform dissolution the increased structural heterogeneity results in only a small increase in the power law exponent of the porosity-permeability relationship from 5 to 6.
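The power-law exponents quoted throughout this section can be extracted from the simulated porosity-permeability series with a straight-line fit in log-log space (a minimal sketch; fitting the whole series as a single power law is a simplification, since several cases show an inflection):
\begin{verbatim}
import numpy as np

def powerlaw_exponent(porosity, permeability):
    # Fit K = K0 * phi^n on log-log axes and return the exponent n
    n, _ = np.polyfit(np.log(porosity), np.log(permeability), 1)
    return n
\end{verbatim}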
\section{Channeling: a new class of dissolution regime}\label{Channeling}
We quantitatively identify the dissolution regime by clustering the four moments (mean, standard deviation, skewness, and kurtosis) of the distributions of concentration and throat size at each time step (Fig \ref{fig:Clustering}). We used hierarchical agglomerative clustering for numbers of clusters ranging from 2 to 10, as shown in Fig \ref{fig:Clustering}B. The Silhouette Coefficient (SC) is used to rank the optimal number of clusters, where a higher index indicates that clusters are dense and well separated. For our group of simulations, the highest SC was observed with 4 clusters. This clustering (Fig. \ref{fig:Clustering}A) identifies channeling as an independent regime and is in agreement with our visual characterization and physical understanding of the numerical experiments.
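A minimal sketch of this workflow is given below (the feature construction follows the description above; the scikit-learn defaults for linkage and metric, Ward and Euclidean, are assumptions):
\begin{verbatim}
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

def moments(x):
    # Four moments of one distribution; Pearson kurtosis, matching
    # the convention of the appendix tables
    return [np.mean(x), np.std(x), skew(x), kurtosis(x, fisher=False)]

def cluster_regimes(X):
    # X: (time steps across all runs, 8) array of throat-size and
    # concentration moments; rank n_clusters by silhouette coefficient
    scored = []
    for n in range(2, 11):
        labels = AgglomerativeClustering(n_clusters=n).fit_predict(X)
        scored.append((silhouette_score(X, labels), n, labels))
    return max(scored, key=lambda t: t[0])  # best SC: n = 4 for our data
\end{verbatim}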
\begin{figure}
\includegraphics[width=1\textwidth]{ClusteringCombinedFigure.pdf}
\caption{Hierarchical Agglomerative Clustering of dissolution using the mean, standard deviation, skewness, and kurtosis of concentration and pore throat size. (A) Each circle (Model A) and dot (Model B) represents a single time-step in one numerical experiment, and is color-coded by cluster according to the dendrogram in (C). (B) Silhouette Coefficient of the number of clusters. (C) The dendrogram of the cluster labels. \label{fig:Clustering}}
\end{figure}
The clustering dendrogram (Fig. \ref{fig:Clustering}C) gives insight into how the clustering algorithm determines each cluster boundary. First the channeling/uniform regimes split from the wormhole/compact regimes, followed by the compact and wormhole regimes, and finally channeling and uniform regimes. The order of splitting indicates that the difference in dissolution behavior is greatest between the channelling/uniform regimes and the wormhole/compact regimes, and smallest between the channeling and uniform regimes, which confirms our assertion that channeling is distinct from wormhole formation. The clustering indicates that as pore space complexity increases, the boundary between the compact and wormhole regimes as well as the boundary between the channeling and wormhole regimes shift towards lower $Pe$. Furthermore, many simulations straddle the boundary between regimes, beginning in one regime and ending in another as the dissolution changes the distribution of flow within the porous medium and flow becomes more or less stable in preferential flow pathways. This is consistent with our analysis of the dissolution progress shown in Figs \ref{fig:GeometryA} and \ref{fig:DissolutionThroatSize}.
We present our updated conceptual model of dissolution regimes in Fig \ref{fig:KiPeWormholeChannel}. Channeling is a distinct regime between wormhole formation and uniform dissolution. As the structural complexity of the porous medium increases, the boundaries between the wormhole/compact regimes and the channeling/wormhole regimes shift towards a lower $Pe$. In more heterogeneous structures, the relative importance of already existing flow paths increases, leading to the formation of wormholes and channels at lower flow rates.
\begin{figure}
\includegraphics[width=1\textwidth]{UpdatedConceptualModel.pdf}
\caption{The P\'eclet number - Kinetic number space with updated dissolution regimes. Increasing pore space heterogeneity shifts the boundaries between wormhole/channeling and channeling/uniform to lower $Pe$. \label{fig:KiPeWormholeChannel}}
\end{figure}
\section*{Reconciling the pore scale with the continuum scale}
We have characterised the dissolution regime of channeling, identified its location within the $Pe-Ki$ space, and quantified its relationship to wormhole formation and uniform dissolution. Previous experimental work in 3D has reported the porosity-permeability of channeling to have a power law order of between 7 and 11 \cite{2016-Menke} and the uniform regime to have an order of 5 \cite{2015-Menke}, which is consistent with our 2D observations of power law order 6 to 12 for channeling and 5 to 6 for the uniform regime. This indicates that the 2D results are likely to be directly extendable to 3D.
Characterisation of dissolution regimes are crucial for providing accurate porosity-permeability relationships for Darcy and reservoir-scale models. In contrast to other pattern formations such as viscous fingering in multi-phase flow, both the location and the conditions under which dissolution follows pre-existing flow paths is important. Wormholes develop from the pore-scale as micron-scale ramifications that merge and expand to eventually form dissolution pathways that impact flow at the field-scale. Similarly channeling will influence flow during dissolution from the pore- to the field-scale provided that scale-dependent structural complexity exists, as for example with the presence of vugs, fractures and faults \cite{2020-Yang}. Predicting such development of dissolution patterns at the field-scale requires an accurate estimation of the evolving permeability of the dissolving matrix \cite{2020-Faris}.
\section*{Conclusions}
We have performed a numerical investigation into dissolution regimes using the open source solver GeoChemFoam. Two model domains of increasingly heterogeneous pore structures were reacted at $Pe$ and $Ki$ ranging from 0.01 to 100. We have characterised a new dissolution regime called channeling and identified its placement within the $Pe-Ki$ space using hierarchical agglomerative clustering. Channeling in heterogeneous structures results in an immediate breakthrough in permeability, with a power law order higher than that of uniform dissolution, which increases further with increasing pore space complexity. Increasing structural heterogeneity results in the boundaries between the wormhole/compact and channeling/wormhole regimes shifting towards a lower $Pe$ as the influence of preferential flow pathways focuses dissolution.
This unique research provides a first-ever characterisation of the channeling regime. Channeling occurs in heterogeneous porous media, where differences in pore throat sizes cause dissolution to widen preferential flow pathways. This study is the first step towards understanding the multi-scale interactions between structure and dissolution in more complex multi-scale domains such as carbonate rocks where knowledge of how the pore space dissolves at the scale of grains and pores can be incorporated into field scale models. Indeed, in the carbonate reservoirs typically considered for industrial geologic carbon storage applications with a representative calcite reaction constant and carbonate reference pore throat sizes \cite{2017-Menke}, $Ki$ will range between 0.1 and 100. Therefore at sufficiently fast flow rates, the dissolution will be in the channeling regime. Accurate characterisation of the channeling regime is thus vital for accurate prediction of dissolution during many commercial processes essential for the clean energy transition. Indeed this method and results clearly show that a complete understanding of the channeling regime will be essential for any implementation of the advection-diffusion-reaction equations across a broad range of applications including flow organisation during magma melt \cite{2001-Spiegelman,2018-jones} and other geological processes \cite{manga2001using}, drug delivery systems \cite{2015-mcginty}, contaminant transport in underground reservoirs \cite{2020-hasan,2020-pak, 2011-dentz}, and virus spreading dynamics \cite{2017-lin}.
\backmatter
\bmhead{Supplementary information}
Supplementary Materials can be found at the end of this manuscript. Supplementary data can be found at \href{https://zenodo.org/record/6993528}{Zenodo dataset archive}, the geometry creation scripts are on \href{https://github.com/hannahmenke/Channeling2022}{github} and an example input deck is on the \href{https://github.com/GeoChemFoam/GeoChemFoam/tree/main/Examples/}{GeoChemFoam wiki}.
\bmhead{Acknowledgments}
This work was supported by the UK EPSRC funded project on Direct Numerical Simulation for Additive Manufacturing in Porous Media (EP/P031307/1) and by Energi Simulation.
\begin{appendices}
\section*{Appendix A: Model A and Model B Geometries}
\subsection*{Geometry Creation}
A uniform geometry was created with a uniform bead radius of 12 pixels placed on a diagonal grid with a spacing of 40 pixels and an offset of 20 pixels. A small random deviation of 2 pixels in the placement of the beads and 4 pixels in the radius of the beads was then introduced into this homogeneous model to allow for preferential flow paths to develop (Model A). Structural complexity was then increased by creating another model (Model B) using the same grid, spacing, and offset, but with random deviation of 6 pixels in bead radius and 12 pixels in bead placement. The model was set on a 1200 x 1200 pixel image which was then output at 10 times the resolution to preserve edges as a 12000 x 12000 pixel image. This image was then binned by 12 in each direction using ImageJ and padded by 2 on every side using Python to give the final model dimensions of 1004x1004 pixels. The resolution of the geometry was set to 3.5 $\mu$m per pixel, giving a domain size of approximately 3.5 mm $\times$ 3.5 mm.
Each domain was meshed and the flow field calculated using the Open Source Computational Fluid Dynamics toolbox OpenFOAM \cite{2016-OpenFOAM} (Fig 2A \& B). The distribution of pore throat sizes and velocities are presented in Fig 2C and the distribution of pore and grains sizes are presented in Fig \ref{fig:PoreGrainPDF}. The scripts for creating the initial 12000x12000 geometries can be found on \href{https://github.com/hannahmenke/Channeling2022/}{github}. The original images with the radius, x, and y coordinates of each bead can be found on our \href{https://zenodo.org/record/6993528}{Zenodo dataset archive}.
\begin{figure}
\includegraphics[width=\textwidth]{PoreGrainPDF.png}
\caption{The pore and grain radius distributions for the initial geometries of Model A and B.
\label{fig:PoreGrainPDF}}
\end{figure}
\subsection*{Geometry Analysis with Image Analysis}
The grains, pores, and pore throats were extracted from each time step in the simulations using a watershed segmentation algorithm: the Euclidean distance map of the grain and pore spaces was used to identify individual grains and pores, with the boundaries between neighbouring pores designated as throats. An example of this method for each initial geometry is shown in Fig \ref{fig:ImageProcessing}. The statistics of the grain, pore, and pore throat size distributions, along with the characteristic length and velocities (at Pe=1), are given in Table \ref{Table:ModelProperties}.
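A minimal sketch of this extraction step is shown below (assuming scikit-image; the \texttt{min\_distance} peak-separation parameter is an illustrative choice):
\begin{verbatim}
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def label_pores(pore_mask):
    # pore_mask: boolean image, True = pore space
    dist = ndimage.distance_transform_edt(pore_mask)
    peaks = peak_local_max(dist, labels=pore_mask.astype(int),
                           min_distance=3)
    markers = np.zeros(dist.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Watershed of the inverted distance map: one label per pore;
    # boundaries between touching labels are the throats
    return watershed(-dist, markers, mask=pore_mask)
\end{verbatim}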
\begin{figure}
\includegraphics[width=\textwidth]{ImageProcessing-01.png}
\caption{(A) The pore space with pores in white and grains in black. (B) A Euclidean distance map was calculated on the pore space. (C) The local maxima of the distance map are designated the center of each pore. The boundaries between pores are designated as throats. (D) Each pore and throat is then individually identified, and local statistics calculated. (E) The concentration map (colored) is then overlain on the pore space with the grains in grey and (F) the local concentration statistics for each pore and throat are then calculated.
\label{fig:ImageProcessing}}
\end{figure}
\begin{table}\centering
\caption{Table of initial geometry statistics\label{Table:ModelProperties}}
\begin{tabular}{lcc}
Statistic & Model A & Model B \\
\midrule
Characteristic Length $L$ [m] & 1.125 x $10^{-4}$ & 1.251 x $10^{-4}$\\
Pore radius mean [pixels] & 6.4 & 8.3 \\
Pore radius standard deviation & 1.4 & 3.2 \\
Pore radius skewness & -0.23 & 0.19 \\
Pore radius kurtosis & 3.5 & 2.9 \\
Grain radius mean [pixels] & 9.6 & 11\\
Grain radius standard deviation & 1.6 & 3.4 \\
Grain radius skewness & -0.73 & 0.21 \\
Grain radius kurtosis & 5.8 & 3.6\\
Pore throat radius mean [pixels] & 2.5 & 3.8 \\
Pore throat radius standard deviation & 1.2 & 2.4 \\
Pore throat radius skewness & 1.0 & 1.1\\
Pore throat radius kurtosis & 4.7 & 5.0 \\
Pore velocity $U$ mean [m/s] & 8.9 x $10^{-6}$ & 8.0 x $10^{-6}$\\
Pore velocity $U$ standard deviation & 0.83 & 1.0 \\
Pore velocity $U$ skewness & 2.6 & 3.5\\
Pore velocity $U$ kurtosis & 12 & 20 \\
\bottomrule
\end{tabular}
\end{table}
\subsection*{Geometry Analysis with Autocorrelation}
Here we compute the autocorrelation of the grain and velocity fields for both Model A and Model B (Fig \ref{fig:AutoCorrelation}). Both models have an autocorrelation function that decreases steeply towards zero with lag, over a length scale equal to the grain spacing. Model A is statistically anisotropic, with an autocorrelation function of square symmetry and prominent sidelobes reflecting the underlying grid. Model B is statistically isotropic, with no sidelobes.
The autocorrelation function of the along-flow component of the velocity field is statistically anisotropic, with rectangular symmetry. The correlation length in the along-flow direction is typically similar to the grain spacing, but is larger (by a factor of about five) in the cross-flow direction, as is expected for channels. For Model A, the autocorrelation has sidelobes reflecting the underlying periodicity of the medium, with a wavelength equal to the grain spacing. The autocorrelation for Model B is similar, but without the sidelobes.
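The autocorrelation maps can be computed efficiently via the Wiener--Khinchin theorem; the sketch below is one plausible implementation (an assumption on our part, not necessarily the routine used to produce Fig \ref{fig:AutoCorrelation}).
\begin{verbatim}
# FFT-based (circular) autocorrelation of a 2-D field, normalised to
# unity at zero lag and shifted so zero lag sits at the image centre.
import numpy as np

def autocorrelation(field):
    f = field - field.mean()
    F = np.fft.fft2(f)
    acf = np.fft.ifft2(F * np.conj(F)).real
    acf /= acf.flat[0]          # value at zero lag
    return np.fft.fftshift(acf)
\end{verbatim}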
\begin{figure}
\includegraphics[width=\textwidth]{AutoCorrelation.png}
\caption{The autocorrelation function for Model A and Model B shown for grains and velocity at Pe=1. A and E are the grains in white with the pores in black. B and F are the autocorrelation functions of the grains. C and G are the velocities in the direction of flow, D and H are the autocorrelation functions of the velocity. I and J are the autocorrelations of the grains and velocities for Model A and B respectively in each direction plotted from the centre points of the autocorrelations marked by red dotted lines on B, F, D, and H.
\label{fig:AutoCorrelation}}
\end{figure}
\newpage
\section*{Appendix B: Numerical method}
\subsection*{Governing equations}
Under isothermal conditions and in the absence of gravitational effects, fluid motion in the pore-space is governed by the incompressible Navier-Stokes equations,
\begin{equation}\label{Eq:cont}\nabla\cdot\mathbf{u} = 0,
\end{equation}
\begin{equation}
\frac{\partial \mathbf{u}}{\partial t}+ \nabla\cdot\left(\mathbf{u}\otimes\mathbf{u}\right)=-\nabla p +\nu\nabla^2\mathbf{u},\label{Equ:momentum}
\end{equation}
with the continuity condition at the fluid-solid interface $\Gamma$,
\begin{equation}\label{Equ:bcu}
\rho\left(\mathbf{u}-\mathbf{w}_s\right)\cdot \mathbf{n}_{s}=-\rho_s\mathbf{w}_s\cdot\mathbf{n}_s \hspace{0.5cm} \text{at $\Gamma$},
\end{equation}
where $\mathbf{u}$ (m/s) is the velocity, $p$ (m$^2$/s$^2$) is the kinematic pressure, $\nu$ (m$^2$/s) is the kinematic viscosity, $\rho$ (kg/m$^3$) is the fluid density, $\rho_s$ (kg/m$^3$) is the solid density, $\mathbf{n}_s$ is the normal vector to the fluid-solid interface pointing toward the solid phase, and $\mathbf{w}_s$ (m/s) is the velocity of the fluid-solid interface, which is controlled by the surface reaction rate $R$ (kmol/m$^2$/s) such that
\begin{equation}\label{Eq:Ws}
\mathbf{w}_s=\frac{M_{ws}}{\rho_s}R\mathbf{n}_s,
\end{equation}
where $M_{ws}$ is the molecular weight of the solid. The concentration $c$ (kmol/m$^3$) of a species in the system satisfies an advection-diffusion equation
\begin{equation}\label{Eq:concentration}
\frac{\partial c}{\partial t}+ \nabla \cdot \left( c\mathbf{u} \right) = \nabla\cdot\left(D\nabla c\right),
\end{equation}
where $D$ (m$^2$/s) is the diffusion coefficient. The chemical reaction occurs at the fluid-solid interface $\Gamma$, such that
\begin{equation}\label{Equ:bcc1}
\left(c\left(\mathbf{u}-\mathbf{w}_s\right)-D\nabla c \right)\cdot \mathbf{n}_{s}=\zeta R \hspace{0.5cm} \text{at $\Gamma$},
\end{equation}
where $\zeta$ is the stoichiometric coefficient of the species in the reaction. In this work, we assume that the surface reaction rate depends only on the concentration of one reactant species, following
\begin{equation}
R=k_cc,
\end{equation}
where $k_c$ (m/s) is the reaction constant. At the inlet, the boundary conditions are a constant flow rate $Q$ (m$^3$/s) and a constant reactant concentration $c_i$ (kmol/m$^3$). To limit inlet boundary effects, the velocity is extrapolated from a zero gradient rather than taken as constant \citep{2016-OpenFOAM}. At the outlet, the boundary conditions are a constant pressure $p_0$ (m$^2$/s$^2$) and a zero gradient for velocity and reactant concentration.
\subsection*{Dimensionless analysis}
The flow, transport and reaction conditions are characterized by the Reynolds number
\begin{eqnarray}
Re=\frac{UL}{\nu},
\end{eqnarray}
which quantifies the relative importance of inertial to viscous forces, the P\'eclet number,
\begin{eqnarray}
Pe=\frac{UL}{D},
\end{eqnarray}
which quantifies the relative importance of advective and diffusive transport, and the Kinetic number,
\begin{eqnarray}
Ki=\frac{k_cL}{D},
\end{eqnarray}
which quantifies the relative importance of chemical reaction and diffusive transport. Here $U$ and $L$ are the reference pore-scale velocity and length. The Kinetic number characterises whether the chemical reaction at the surface of the solid grains is in the reaction-limited ($Ki<1$) or transport-limited ($Ki>1$) regime. The Damk\"ohler number $Da$, the ratio of the Kinetic and P\'eclet numbers, is also a relevant quantity. $Da$ quantifies the relative importance of reaction to advective transport globally, but not locally, as the reactant can only be transported to the solid surface by diffusion (Equ. (\ref{Equ:bcu}) and (\ref{Equ:bcc1})). In this study, we assume that we are in the creeping flow regime ($Re\ll1$), so that the dissolution regime depends only on $Pe$ and $Ki$. In addition, the reactant strength, defined as
\begin{eqnarray}
\beta=\frac{c_{i}M_{ws}}{\zeta\rho_s},
\end{eqnarray}
characterises how many kilograms of solid are dissolved by one kilogram of reactant.
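These definitions amount to simple arithmetic; the helper below collects them. The viscosity, diffusivity and rate constant in the usage line are illustrative placeholders rather than values from our simulations; the calcite properties and $c_i$ are those quoted in the quasi-static discussion below, with $\zeta=1$ assumed.
\begin{verbatim}
# Dimensionless groups defined above (pure arithmetic).
def dimensionless_groups(U, L, nu, D, k_c, c_i, M_ws, zeta, rho_s):
    Re = U * L / nu                      # inertial vs viscous
    Pe = U * L / D                       # advection vs diffusion
    Ki = k_c * L / D                     # reaction vs diffusion
    Da = Ki / Pe                         # reaction vs advection
    beta = c_i * M_ws / (zeta * rho_s)   # reactant strength
    return dict(Re=Re, Pe=Pe, Ki=Ki, Da=Da, beta=beta)

# Calcite at pH=2 (M_ws=100, rho_s=2710, c_i=0.01, zeta=1 assumed);
# U and L from the model-properties table; nu, D, k_c illustrative.
print(dimensionless_groups(U=8.9e-6, L=1.125e-4, nu=1e-6, D=1e-9,
                           k_c=1e-5, c_i=0.01, M_ws=100, zeta=1,
                           rho_s=2710))
\end{verbatim}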
The pore-scale reference velocity is chosen as the average pore velocity, defined as
\begin{eqnarray}
U=\frac{U_D}{\phi},
\end{eqnarray}
where $\phi$ is the porosity of the domain and $U_D$ (m/s) is the Darcy velocity, defined as
\begin{eqnarray}
U_D=\frac{Q}{A},
\end{eqnarray}
where $A$ (m$^2$) is the cross-sectional area of the domain. The pore-scale reference length scale $L$ is defined as
\begin{equation}
L=\sqrt{\frac{12K}{\phi}},
\end{equation}
where $K$ (m$^2$) is the permeability of the domain; the factor 12 is chosen so that the pore-scale length corresponds to the tube size for a capillary bundle of constant size. The permeability can be calculated as
\begin{equation}\label{Darcy}
K=-\frac{\nu U_DL_D}{\Delta P},
\end{equation}
where $L_D$ is the length of the domain and $\Delta P$ is the pressure drop between inlet and outlet. The pressure is a constant at the outlet, but not at the inlet (constant flow rate boundary condition). Therefore, the pressure drop is defined as \citep{2014-Raeini}
\begin{equation}\label{pressDrop}
\Delta P = -\frac{1}{Q}\frac{dW_P}{dt},
\end{equation}
where $W_P$ is the work done by the pressure force in the domain. Equ. \ref{Darcy} and \ref{pressDrop} together indicate that, for an equivalent flow rate, a higher permeability corresponds to a lower energy dissipation in the domain. The rate of energy dissipation $\frac{dW_P}{dt}$ can be calculated as
\begin{equation}
\frac{dW_P}{dt}=-\int_V{\nabla p\cdot \mathbf{u}\,dV}.
\end{equation}
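Chaining these definitions gives the post-processing route from solver outputs to the reference length; a sketch follows, in which the argument names (in particular \texttt{dWp\_dt}) are our labels rather than GeoChemFoam output names, and the sign conventions mirror the equations above.
\begin{verbatim}
# L = sqrt(12 K / phi), with K from the Darcy relation and the
# pressure drop from the rate of pressure work.
import math

def characteristic_length(Q, A, phi, nu, L_D, dWp_dt):
    U_D = Q / A                        # Darcy velocity
    dP = -dWp_dt / Q                   # pressure drop
    K = -nu * U_D * L_D / dP           # permeability
    return math.sqrt(12.0 * K / phi)
\end{verbatim}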
\subsection*{Quasi-static assumption}
Dissolution of a solid grain is typically orders of magnitude slower than reactant transport. This is characterised in our numerical model by $\beta Da\ll1$ and $\beta Ki\ll1$. For example, for dissolution of calcite ($M_{ws}=100$ kg/kmol, $\rho_s=2710$ kg/m$^3$) by an acid at pH=2 ($c_i=0.01$ kmol/m$^3$), the reactant strength $\beta$ is equal to $3.69\times10^{-4}$. Therefore, as long as $Pe<100$ and $Ki<100$, the displacement of the solid interface is slow compared to the transport of reactant in the domain, and flow (Equ. (\ref{Equ:momentum})) and transport (Equ. (\ref{Eq:concentration})) can be assumed to be in a quasi-static state
\begin{equation}
\nabla\cdot\left(\mathbf{u}\otimes\mathbf{u}\right)=-\nabla p +\nu\nabla^2\mathbf{u},\label{Eq:momentumQS}
\end{equation}
\begin{equation}\label{Eq:concentrationQS}
\nabla \cdot \left( c\mathbf{u} \right) = \nabla\cdot\left(D\nabla c\right).
\end{equation}
The quasi-static assumption allows the models to run with a large time-step controlled only by the velocity of the solid interface to save on computational time.
\subsection*{Meshing}
The equations are solved using a finite-volume discretization over an unstructured hybrid mesh. To build the mesh, the solid surface is described using an \textit{stl} file. First, a Cartesian mesh of resolution $h$ is generated. The mesh is then snapped onto the solid surface using the \textit{snappyHexMesh} utility \cite{2016-OpenFOAM}, i.e., cells containing solid are removed and replaced by hexahedral or tetrahedral cells that match the solid boundaries. An additional layer of cells of the same resolution $h$ is then added around the solid boundary to improve the representation of the solid surface. To decide the resolution of the initial mesh, a convergence study on porosity and permeability was conducted for Model B (Table \ref{Table:MeshConvergence}). We observe that a resolution of 3 $\mu$m offers a good compromise between accuracy and the size of the computational mesh. Fig. \ref{fig:mesh} shows Model B with a zoom into a pore to show the mesh at a resolution of 3 $\mu$m.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.75\textwidth]{mesh.png}
\caption{Example of pore-space meshing (a) Full domain for model B (b) zoom and visualization of mesh inside a pore.\label{fig:mesh}}
\end{center}
\end{figure}
\begin{table}[!ht]
\centering
\caption{Mesh convergence (Model B)\label{Table:MeshConvergence}}
\begin{tabular}{lccc}
Resolution ($\mu$m) & Porosity & Permeability (m$^2$) & Number of cells \\
\midrule
6 & 0.432 & 3.17$\times 10^{-10}$ & 139k\\
3 & 0.455 & 5.64$\times 10^{-10}$ & 526k\\
2 & 0.457 & 5.68$\times 10^{-10}$ & 1141k\\
\bottomrule
\end{tabular}
\end{table}
\subsection*{Arbitrary Lagrangian Eulerian method}
The equations are solved using the Arbitrary Lagrangian Eulerian (ALE) method \citep{2016-Starchenko}, implemented in GeoChemFoam (\href{www.github.com/geochemfoam}{www.github.com/geochemfoam}) and the full solution procedure is presented in Fig. \ref{fig:solutionProcedure}. For each time-step, the mesh points are moved with velocity $\mathbf{w}$, which satisfies the Laplace equations with boundary condition (Equ. (\ref{Eq:Ws}))
\begin{eqnarray}\label{Eq:w}
\nabla\cdot D_m \nabla w_j = 0 \hspace{0.5cm} \text{j=x,y,z}\\
w_j=\mathbf{w}_s\cdot \mathbf{e}_j \hspace{0.5cm} \text{at $\Gamma$},
\end{eqnarray}
where $D_m$ is the diffusivity of the mesh motion, $w_j$ is the $j$-directional component and $\mathbf{e}_j$ is the $j$-directional standard basis vector. With these equations, the mesh points track the fluid-solid interface, and the mesh motion is diffused to avoid large volume ratios between neighbouring cells. However, as the mesh points are displaced, the skewness of the mesh can increase and lead to failure of the transport solver. To avoid this, the mesh skewness is checked at the end of each time-step, and the domain is fully remeshed upon failure. After remeshing, the velocity, pressure and concentration fields are mapped onto the new mesh. In addition, topological errors can appear when two faces of the same mineral grain overlap, leading to failure of the flow or transport solver. To avoid this, faces that are fully located within a topological error are eliminated before remeshing. These collapsing faces are identified by the following condition: a face faceI is considered collapsed if a ray cast from its centre along its normal vector (pointing toward the solid phase) meets another face faceJ at a distance lower than the grid size, and faceI and faceJ do not intersect. With this remeshing algorithm, our numerical simulations are stable and topological errors are eliminated.
\subsection*{Time-stepping strategy}
The simulations are performed using an adaptive time-stepping strategy based on the mesh Courant-Friedrich-Lewy (CFL) number defined as
\begin{equation}
mCFL = \frac{\mathbf{w}\Delta t}{h},
\end{equation}
where $\Delta t$ is the time-step and $h$ is the mesh resolution. The simulations are performed using a maximum $mCFL$ number of 0.005, which offers a good compromise between accuracy, robustness and efficiency. Fig. \ref{fig:TimeStepConvergence} shows a comparison of permeability evolution as a function of porosity for Model B at $Pe=1$, $Ki=1$ between $mCFL=0.005$ and $mCFL=0.0025$.
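The controller amounts to capping the interface displacement per step; a schematic version is given below (the solver handles this internally, and the guard against a zero interface velocity is our addition).
\begin{verbatim}
# Adaptive time-step from the mesh CFL number: the interface may move
# at most mCFL_max cells per step.
def next_time_step(w_max, h, mCFL_max=0.005):
    return mCFL_max * h / max(w_max, 1e-30)  # avoid division by zero

print(next_time_step(w_max=1e-8, h=3e-6))    # 3 um mesh, slow interface
\end{verbatim}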
\begin{figure}[!b]
\centering
\includegraphics[width=0.8\textwidth]{TimeStepConvergence.png}
\caption{Comparison of permeability evolution as a function of porosity for Model B at $Pe=1$, $Ki=1$ for two different maximum mCFL numbers. \label{fig:TimeStepConvergence}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{solutionProcedureALE.png}
\caption{Solution procedure for solving quasi-steady state dissolution using the ALE method. \label{fig:solutionProcedure}}
\end{figure}
\section*{Appendix C: Robustness of $\phi$, $K$ and $L$ for stochastically generated micromodel}
The study presented in the paper is limited to one instance of each of two stochastic models (Models A and B). Future work will focus on extending the findings to any generated geometry and, in particular, on linking the dissolution regimes to the parameters of the stochastic distribution. For this, it is essential that the geometrical parameters used in the calculation of $Pe$ and $Ki$, i.e. the porosity $\phi$ and the pore-scale length $L$, vary over a range much less than an order of magnitude, so that the calculations of $Pe$ and $Ki$ are robust over different instances of the same stochastic distribution. Table \ref{Table:lengthscale} shows the variation of porosity and pore-scale length for 12 instances of each stochastic distribution (Models A and B). For Model A, $\phi$ varies between 0.430 and 0.445 and $L$ varies between 1.04 and 1.15 $\times 10^{-4}$ m; for Model B, $\phi$ varies between 0.451 and 0.473 and $L$ varies between 1.14 and 1.41 $\times 10^{-4}$ m. This shows that the calculations of $Pe$ and $Ki$ are robust, as $\phi$ and $L$ vary over a range much smaller than an order of magnitude.
\begin{table}[!ht]
\centering
\caption{Porosity and $L$ for 12 realizations of Models A and B\label{Table:lengthscale}}
\begin{tabular}{l|cc|cc}
Instance & \multicolumn{2}{c|}{Model A} & \multicolumn{2}{c}{Model B} \\
\midrule
& $\phi$ & $L$ ($\times10^{-4}$m) & $\phi$ & $L$ ($\times10^{-4}$m) \\
1 & 0.437 & 1.11 & 0.455 & 1.22 \\
2 & 0.439 & 1.11 & 0.473 & 1.41 \\
3 & 0.436 & 1.11 & 0.455 & 1.34 \\
4 & 0.432 & 1.10 & 0.457 & 1.29 \\
5 & 0.445 & 1.13 & 0.466 & 1.28 \\
6 & 0.440 & 1.11 & 0.465 & 1.23 \\
7 & 0.436 & 1.11 & 0.464 & 1.31 \\
8 & 0.437 & 1.12 & 0.468 & 1.34 \\
9 & 0.439 & 1.15 & 0.460 & 1.31 \\
10 & 0.436 & 1.10 & 0.451 & 1.14 \\
11 & 0.430 & 1.04 & 0.463 & 1.25 \\
12 & 0.438 & 1.08 & 0.472 & 1.22 \\
\bottomrule
\end{tabular}
\end{table}
\section*{Appendix D: Time Sequence Videos of Dissolution}
Movies S1-S10 show the dissolution time series for selected simulations A1-A5 and B1-B5.
\href{https://youtube.com/shorts/x-J1-x83y0E}{Movie S1}: Visualisation of Model A $Pe$=0.01 $Ki$=0.1 evolution of porosity and concentration. The grains are gray, with the concentration field in color. The pore throats are extracted by a watershed algorithm on the Euclidean distance map of the pore space and superimposed in white. This is an example of the compact dissolution regime.
\href{https://youtube.com/shorts/gTIHaQsBaRA}{Movie S2}: Visualisation of Model A $Pe$=1 $Ki$=1 evolution of porosity and concentration. The grains are gray, with the concentration field in color. The pore throats are extracted by a watershed algorithm on the Euclidean distance map of the pore space and superimposed in white. This is an example of the wormhole formation dissolution regime.
\href{https://youtube.com/shorts/liEpQb-bI6w}{Movie S3}: Visualisation of Model A $Pe$=10 $Ki$=1 evolution of porosity and concentration. The grains are gray, with the concentration field in color. The pore throats are extracted by a watershed algorithm on the Euclidean distance map of the pore space and superimposed in white. This is an example of the wormhole formation dissolution regime.
\href{https://youtube.com/shorts/tQOhGYgEOWE}{Movie S4}: Visualisation of Model A $Pe$=100 $Ki$=10 evolution of porosity and concentration. The grains are gray, with the concentration field in color. The pore throats are extracted by a watershed algorithm on the Euclidean distance map of the pore space and superimposed in white. This is an example of the channeling dissolution regime.
\href{https://youtube.com/shorts/XMwYa4NCiaw}{Movie S5}: Visualisation of Model A $Pe$=100 $Ki$=0.1 evolution of porosity and concentration. The grains are gray, with the concentration field in color. The pore throats are extracted by a watershed algorithm on the Euclidean distance map of the pore space and superimposed in white. This is an example of the uniform dissolution regime.
\href{https://youtube.com/shorts/hVkntEKNz2U}{Movie S6}: Visualisation of Model B $Pe$=0.01 $Ki$=0.1 evolution of porosity and concentration. The grains are gray, with the concentration field in color. The pore throats are extracted by a watershed algorithm on the Euclidean distance map of the pore space and superimposed in white. This is an example of the compact dissolution regime.
\href{https://youtube.com/shorts/FNGScitflic}{Movie S7}: Visualisation of Model B $Pe$=1 $Ki$=1 evolution of porosity and concentration. The grains are gray, with the concentration field in color. The pore throats are extracted by a watershed algorithm on the Euclidean distance map of the pore space and superimposed in white. This is an example of the wormhole formation dissolution regime.
\href{https://youtube.com/shorts/TuRnuCsQT10}{Movie S8}: Visualisation of Model B $Pe$=10 $Ki$=1 evolution of porosity and concentration. The grains are gray, with the concentration field in color. The pore throats are extracted by a watershed algorithm on the Euclidean distance map of the pore space and superimposed in white. This is an example of the channeling dissolution regime.
\href{https://youtube.com/shorts/CJKEcgAfN_c}{Movie S9}: Visualisation of Model B $Pe$=100 $Ki$=10 evolution of porosity and concentration. The grains are gray, with the concentration field in color. The pore throats are extracted by a watershed algorithm on the Euclidean distance map of the pore space and superimposed in white. This is an example of the channeling dissolution regime.
\href{https://youtube.com/shorts/Fwr5fxuAkZY}{Movie S10}: Visualisation of Model B $Pe$=100 $Ki$=0.1 evolution of porosity and concentration. The grains are gray, with the concentration field in color. The pore throats are extracted by a watershed algorithm on the Euclidean distance map of the pore space and superimposed in white. This is an example of the uniform dissolution regime.
\end{appendices}
\section{Experiments\label{experiments}}
To empirically demonstrate the utility of \textbf{FACE}, we present the results of its execution on two distinct data sets. First, we show the behaviour of our algorithm on a toy data set and compare the three graph construction approaches introduced in Section~\ref{methods}. Following our discussion and critique of existing approaches to Counterfactual Example generation, we also reproduce one of the counterfactual examples of \citet{wachter2017counterfactual} to illustrate the shortcomings of using just a norm as the cost.
Secondly, we apply our algorithm to the MNIST data set \cite{lecun2010mnist} and show how it can be used to derive meaningful digit transformations based on the calculated path.
\paragraph{Synthetic Data Set}
To this end, we trained a Neural Network whose architecture consists of two hidden layers of width 10 with ReLU activation functions. The toy data set (see, e.g., Figure~\ref{fig:kde_2d}) consists of three parts:
\begin{enumerate}
\item horizontal cloud of blue points to the left of the figure -- 200 points distributed uniformly at random across the y-axis and sampled from a mean-zero Gaussian with $0.4$ standard deviation on the x-axis,
\item vertical cloud of red points to the bottom of the figure -- 200 points distributed uniformly at random across the x-axis and sampled from a mean-zero Gaussian with $0.5$ standard deviation on the y-axis, and
\item vertical cloud of red points to the top-right of the figure -- 100 points sampled from a Gaussian distribution with $(3.5, 8.0)$ mean and $0.5$ standard deviation.
\end{enumerate}
\textbf{FACE} was initialised with $w(z)=-\log(z)$ as the weight function and the $l_2$-norm as the distance function. Figures~\ref{fig:kde_2d}, \ref{fig:egraph} and \ref{fig:knn} show the results of applying \textbf{FACE} to the toy data set when used with $KDE$, $\epsilon$-graph and $k$-NN, respectively.
\begin{figure}
\centering
\begin{subfigure}[t]{.31\textwidth}
\centering
\includegraphics[width=1.20\linewidth]{new_figures/kde_025.pdf}
\caption{$\epsilon = 0.25$ distance threshold.}
\end{subfigure}%
\hfill
\begin{subfigure}[t]{.31\textwidth}
\centering
\includegraphics[width=1.20\linewidth]{new_figures/kde_050.pdf}
\caption{$\epsilon = 0.50$ distance threshold.}
\end{subfigure}%
\hfill
\begin{subfigure}[t]{.31\textwidth}
\centering
\includegraphics[width=1.20\linewidth]{new_figures/kde_2.pdf}
\caption{$\epsilon = 2$ distance threshold.}
\end{subfigure}%
\caption{The five shortest paths from a starting data point to a target (counterfactual) data point generated from a graph whose edge weights were computed using the $KDE$ approach. The targets are restricted by: i) $t_p \geq 0.75$ prediction threshold, ii) $t_d \geq 0.001$ density threshold and the distance threshold set to: (a) $\epsilon = 0.25$, (b) $\epsilon = 0.50$ and (c) $\epsilon = 2$.}
\label{fig:kde_2d}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{.31\textwidth}
\centering
\includegraphics[width=1.20\linewidth]{new_figures/egraph025.pdf}
\caption{$\epsilon = 0.25$ distance threshold.}
\end{subfigure}%
\hfill
\begin{subfigure}{.31\textwidth}
\centering
\includegraphics[width=1.20\linewidth]{new_figures/egraph050.pdf}
\caption{$\epsilon = 0.50$ distance threshold.}
\end{subfigure}
\hfill
\begin{subfigure}{.31\textwidth}
\centering
\includegraphics[width=1.20\linewidth]{new_figures/egraph1.pdf}
\caption{$\epsilon = 1$ distance threshold.}
\end{subfigure}
\caption{The five shortest paths from a starting data point to a target (counterfactual) data point generated from a graph whose edge weights were computed using the $\epsilon$-graph approach. The targets are restricted by a $t_p \geq 0.75$ prediction threshold with the distance threshold set to: (a) $\epsilon = 0.25$, (b) $\epsilon = 0.50$ and (c) $\epsilon = 1$.}
\label{fig:egraph}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{.31\textwidth}
\centering
\includegraphics[width=1.20\linewidth]{new_figures/knn_2.pdf}
\caption{$k = 2$ neighbours and $\epsilon = 0.25$ distance threshold.}
\end{subfigure}%
\hfill
\begin{subfigure}{.31\textwidth}
\centering
\includegraphics[width=1.20\linewidth]{new_figures/knn_4_035.pdf}
\caption{$k = 4$ neighbours and $\epsilon = 0.35$ distance threshold.}
\end{subfigure}
\hfill
\begin{subfigure}{.31\textwidth}
\centering
\includegraphics[width=1.20\linewidth]{new_figures/knn_10_080.pdf}
\caption{$k = 10$ neighbours and $\epsilon = 0.80$ distance threshold.}
\end{subfigure}
\caption{The five shortest paths from a starting data point to a target (counterfactual) data point generated from a graph whose edge weights were computed using the $k$-NN graph approach. The targets are restricted by a $t_p \geq 0.75$ prediction threshold with the $\epsilon$ distance threshold and $k$ neighbours set to: (a) $k = 2$ and $\epsilon = 0.25$; (b) $k = 4$ and $\epsilon = 0.35$; and (c) $k = 10$ and $\epsilon = 0.80$.}
\label{fig:knn}
\end{figure}
In each figure, the triplet follows a similar pattern: (a) no counterfactual is generated, (b) a ``good'' counterfactual is generated, and (c) a ``bad'' counterfactual is generated.\footnote{``Good'' and ``bad'' are with respect to the desired properties of counterfactuals discussed earlier in the paper.} Our experimental setup adheres to a real-life use case where \textbf{FACE} is initially applied with a fairly ``restrictive'' configuration, which is subsequently relaxed until a counterfactual is found. Figure~\ref{fig:oxford_paper} shows the counterfactuals found by optimising Equation~\ref{adv_examples} proposed by \citet{wachter2017counterfactual}, which can be compared against the ones achieved with \textbf{FACE} on the same data set (cf.\ Figures~\ref{fig:kde_2d}, \ref{fig:egraph} and \ref{fig:knn}).
\begin{figure}
\centering
\includegraphics[width=0.50\linewidth]{new_figures/oxford_paper.pdf}
\caption{Counterfactuals generated using the method proposed by \citet{wachter2017counterfactual}. $p$ denotes the penalty parameter and $t$ the classification threshold. These counterfactuals clearly do not comply with the desired properties described in Section~\ref{counterfactuals}.}
\label{fig:oxford_paper}
\end{figure}%
\paragraph{MNIST Data Set}
Next, we applied \textbf{FACE} (based on the $k$-NN graph construction with $k=50$) to two images of the digit zero taken from the MNIST data set \cite{lecun2010mnist}, with the target counterfactual class set to the digit eight.
The underlying predictive model is a Neural Network trained on the whole MNIST data set. Figure~\ref{mnist} depicts the full path from the starting instance (left) to the final counterfactual (right). The resulting path shows a smooth transformation through the zeros until an eight is reached.
\begin{comment}
\begin{figure}[H]
\centering
\includegraphics[width=0.65\linewidth]{new_figures/mnist0-cropped.pdf}
\caption{An example of using FACE to compute a counterfactual example for an image of a zero.}
\label{fig:mnist0}
\end{figure}%
\vspace{-2em}
\begin{figure}[H]
\centering
\includegraphics[width=0.75\linewidth]{new_figures/mnist1-cropped.pdf}
\caption{The ``transformation'' path achieved by applying \textbf{FACE} to compute a counterfactual example for two different images of a zero.}
\label{fig:mnist1}
\end{figure}%
\end{comment}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=1\linewidth]{new_figures/mnist0-cropped.pdf}
\label{fig:Ng1}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=1\linewidth]{new_figures/mnist1-cropped.pdf}
\label{fig:Ng2}
\end{subfigure}
\caption{The ``transformation'' path achieved by applying \textbf{FACE} to compute a counterfactual example for two different images of a zero.}
\label{mnist}
\end{figure}
\vspace{-3mm}
\begin{comment}
\begin{figure}
\centering
\includegraphics[width=0.65\linewidth]{figures/mnist0.pdf}
\caption{TODO}
\label{fig:problems}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.65\linewidth]{figures/mnist1.pdf}
\caption{TODO}
\label{fig:problems}
\end{figure}
\end{comment}
\begin{comment}
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.20\linewidth]{figures/pred.pdf}
\caption{Prediction map\footnotemark}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.20\linewidth]{figures/density_estimation.pdf}
\caption{Density estimation\footnotemark}
\label{fig:sub2}
\end{subfigure}
\caption{Shortest 10-jump path from a starting data point to a target data point (counterfactual) that abides to the following restrictions: i) prediction threshold $\geq 0.75$, ii) density threshold $\geq 0.05$ and iii) distance threshold $= 0.50$.}
\label{fig:kde_2d}
\end{figure}
\end{comment}
\begin{comment}
\begin{figure}
\centering
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=1.20\linewidth]{figures/egraph_025.pdf}
\caption{TODO}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=1.20\linewidth]{figures/egraph_075.pdf}
\caption{TODO}
\label{fig:sub2}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=1.20\linewidth]{figures/egraph_125.pdf}
\caption{TODO}
\label{fig:sub2}
\end{subfigure}
\caption{prediction threshold: 0.75, density threshold: 0.05, shortest 10 paths, using KDE}
\label{fig:egraph}
\end{figure}
\end{comment}
\section{Summary and Future Work\label{discussion}}
In this paper we have highlighted the shortcomings of popular Counterfactual Explanation approaches in the Machine Learning literature and proposed a new method, called \textbf{FACE}, that aims to resolve them. Our approach accounts for both the nature of the target instance (the counterfactual) and the degree to which the proposed change is feasible and actionable. Our research has led us to uncover the dangers of ignoring this information when explaining automated predictions and the possible adverse impact this may have on both involved parties. We will continue this line of research by evaluating the performance of our approach on real-world data sets of a dynamic nature and exploring the degree to which our suggested counterfactuals match the \textit{true} change. Furthermore, we are interested in exploring the added value and usefulness of the path itself for the explainee.
The \textbf{FACE} algorithm makes significant progress in addressing the actionability and feasibility issues of currently available Counterfactual Explanation methods; nevertheless, we believe that the exploration of alternative approaches remains a fruitful direction for future work.
\section{Introduction}
The widespread deployment of complex Machine Learning (ML) systems and their use for important decision-making have led to a rising interest in the fields of Interpretable and Explainable Machine Learning (IML and XML respectively). IML refers to developing models that not only perform well with respect to the usual measures of predictive performance, but which are also transparent. IML approaches typically aim to achieve this by choosing a model from a class of inherently interpretable models, for example, decision trees.
However, this approach may come at a cost of accuracy, as the complexity of the used models is inherently limited. On the other hand, XML is mainly concerned with post-hoc approaches that aim at explaining an ML model, or its predictions, after it has been trained, often treating it as a black-box. It imposes no limitations on the complexity of the compatible models, nevertheless popular approaches in XML, such as ``Global Surrogate Models'' or ``Local Surrogate Models'' \cite{ribeiro2016should, ribeiro2018anchors, plumb2018model}, add an extra layer of modelling complexity in exchange for transparency.
In a third category -- based on Counterfactual Explanations (CE) -- one does not need to worry about such issues, as the objective is not to understand the inner workings of a model \cite{wachter2017counterfactual}, but rather to provide a transformation of a particular instance leading to the desired prediction.
In this paper we are concerned with Counterfactual Explanations (or, Contrastive Explanations \cite{van2018contrastive}) that fall under the category of \textit{Example-Based Reasoning}. While other approaches aim at answering: ``Why has my loan been declined?'', CE aim at answering a question of a different nature: ``What do I need to do for my loan to be accepted?''
\citet{wachter2017counterfactual} propose three aims of (counterfactual) explanations with respect to their audience:
\begin{enumerate}
\item to inform and help the explainee understand why a particular decision was reached,
\item to provide grounds to contest adverse decisions, and
\item to understand what could be changed to receive a desired result in the future, based on the current decision-making model.
\end{enumerate}
\begin{figure}[t!]
\centering
\includegraphics[width=0.65\linewidth]{figures/fig1.pdf}
\caption{$A$, $B$, $C$ and $D$ are four viable counterfactuals of $\times$, all satisfying the condition of having a different predicted class to the selected instance. We argue that $D$ is the best choice. $A$ is the result of minimising the $l_2$-norm. $B$ is a generic data point that has a large classification margin. Nevertheless, both $A$ and $B$ lie in a \emph{low-density region}. $C$ and $D$ do not share the shortcomings of $A$ and $B$: they lie in high-density regions and have relatively large classification margins. The major difference between $C$ and $D$ is the connection between $\times$ and $D$ via a high-density path, indicating that it is feasible for the original instance to be transformed into $D$, despite $C$ being simply closer.}
\label{fig:problems}
\end{figure}
Counterfactual explanations achieve all three of these aims \cite{wachter2017counterfactual}.
However, a na\"ive application of the last one -- the principle of ``the closest possible world'' that prescribes small changes that lead to the desired outcome -- may yield inadequate results.
Firstly, a counterfactual generated by a state-of-the-art explainability system is not necessarily representative of the underlying data distribution, and therefore may prescribe unachievable goals. This shortcoming is illustrated in Figure~\ref{fig:problems}, where points $A$ and $B$ -- both close to the explained data point $\times$ with respect to the $l_2$-norm -- achieve the desired prediction, yet lie in a low-density region. This observation undermines the practical feasibility of points $A$ and $B$, since there are no precedents of similar instances in the data.
Secondly, counterfactuals provided by current approaches may not allow for a \textit{feasible path} between the initial instance and the suggested counterfactual making actionable recourse infeasible. This argument is illustrated with point $D$ in Figure~\ref{fig:problems}, which we argue is a more actionable counterfactual than $C$.
Both these discoveries have prompted us to establish a new
line of research for Counterfactual Explanations: providing \emph{actionable} and \emph{feasible} paths to transform a certain data point into one that meets certain goals (e.g., belong to a desirable class).
\begin{comment}
\todo{KACPER: note on models evolving over time.}
\KS{The paper is titled ``Continuous Integration of Machine Learning Models with ease.ml/ci: Towards a Rigorous Yet Practical Treatment'' and they argue that if a model is better for one particular sub-population the doctor will gain confidence in drawing heavily on that model. When you improve the model (the overall predictive performance goes up) you might have change its characteristic for this small population, hence you loose trust}
\end{comment}
The contribution of this paper is twofold. We first critique the existing line of research on Counterfactual Explanations by pointing out the shortcomings of dismissing the inherent nature of the target counterfactual and its (real-life) context. We point out that existing research on counterfactual explanations is not aligned with real-world applications (e.g., offering a \emph{useful} counterfactual advice to customers who have been denied loans). To overcome this challenge we identify two essential properties of counterfactual explanations: \emph{feasibility} and \emph{actionability}, which motivate a new line of research concerned with providing high-density paths of change.
Secondly, we propose a novel, well-founded approach that provides feasible and actionable counterfactual explanations that respect the underlying data distribution and are connected via high-density paths (based on the shortest path distances defined via density-weighted metrics) to the explained instance.
Our approach -- which we call \emph{Feasible and Actionable Counterfactual Explanations} (\textbf{FACE}) -- mitigates all of the risks associated with the explanations produced by the current line of research.
We support our claims by discussing how ignoring these premises could lead to ``unachievable goals'' with undesired consequences such as a loss of the end user's trust. Furthermore, we show that our algorithmic contribution to generating feasible and actionable counterfactuals is non-trivial, as the generated counterfactuals come from dense regions and are connected through a high-density path with the original instance. Therefore, the explanations are coherent with the underlying data distribution and can be tailored to the user by customising the ``feasible paths'' of change. In Section~\ref{counterfactuals} we establish the links and differences of our approach with current approaches in the literature. Section~\ref{methods} introduces our methodology and Section~\ref{related_work} discusses related work. In Section~\ref{experiments} we present our experimental results and we conclude with a discussion in Section~\ref{discussion}.
\section{Feasible Counterfactuals\label{methods}}
Before presenting \textbf{FACE} we introduce the necessary notation and background for completeness (see \cite{alamgir2012shortest} and references therein for an in-depth presentation of this topic). We then show how different variants of our approach affect its performance and the quality of generated counterfactuals.
\subsection{Background}
Let $\mathcal{X} \subseteq \mathbb{R}^d$ denote the input space and let $\{\boldsymbol{x}_i\}_{i=1}^{N} \in \mathcal{X}$ be an independent and identically distributed sample from a density $p$. Also, let $f$ be a positive scalar function defined on $\mathcal{X}$ and let $\gamma$ denote a path connecting $\boldsymbol{x}_i$ to $\boldsymbol{x}_j$, then the $f$-length of the path is denoted by the \textit{line integral} along $\gamma$ with respect to $f$\footnote{We assume that $\mathcal{X}$ is endowed with a density function $p$ with respect to the Lebesgue measure, where $p$ is $L$-Lipschitz continuous with $L > 0$.}:
\begin{equation}
\mathcal{D}_{f, \gamma} = \int_{\gamma} f(\gamma(t)) \cdot |\gamma'(t)| dt\text{.}
\end{equation}
The path with the minimum $f$-length is called the $f$-geodesic, and its $f$-length is denoted by $\mathcal{D}_{f, \gamma^\star}$.
Consider a geometric graph $G = (V, E, W)$ with vertices $V$, edges $E$ and (edge) weights $W$. The vertices correspond to the sampled instances (training data) and edges connect those that are close with respect to a chosen metric, whose value (a measure of closeness) is encoded in the (edge) weights. We use the notation $i \sim j$ to indicate the presence of an edge connecting $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$, with the corresponding weight $w_{ij}$; and $i \nsim j$ to mark that $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$ are not directly connected, in which case the weight is taken to be $w_{ij} = 0$.
Let $f$ depend on $\boldsymbol{x}$ through the density $p$, with $f_p(\boldsymbol{x}) := \tilde{f}(p(\boldsymbol{x}))$. Then, the $f$-length of a curve $\gamma: \left[\alpha, \beta\right] \rightarrow \mathcal{X}$ can be approximated by a Riemann sum over a partition of $\left[\alpha, \beta\right]$ into sub-intervals $[t_{i-1}, t_{i}]$ (with $t_0=\alpha$ and $t_N=\beta$):
\begin{equation*}
\hat{\mathcal{D}}_{f, \gamma} = \sum_{i=1}^{N} f_p\Big(\frac{\gamma(t_{i-1}) + \gamma(t_{i})}{2}\Big) \cdot \|\gamma(t_{i-1}) - \gamma(t_{i})\| \text{.}
\end{equation*}
As the partition becomes finer, $\hat{\mathcal{D}}_{f, \gamma}$ converges to $\mathcal{D}_{f, \gamma}$ \cite[Chapter 3]{gamelin2003complex}. This suggests using weights of the form:
\begin{align*}
&w_{ij} = f_p\Big(\frac{\boldsymbol{x}_i + \boldsymbol{x}_j}{2}\Big)\cdot \|\boldsymbol{x}_i - \boldsymbol{x}_j\|\text{,}\\
&\textrm{when} \hspace{5mm} \|\boldsymbol{x}_i - \boldsymbol{x}_j\| \leq \epsilon \text{.}
\end{align*}
The true density $p$ is rarely known, but \citet{orlitsky2005estimating} show that using a \textit{Kernel Density Estimator} (KDE) $\hat{p}$ instead preserves convergence to the $f$-distance. \citeauthor{orlitsky2005estimating} also show how to assign weights to edges while avoiding the need to perform density estimation altogether. Their results apply to two graph constructions, namely, a $k$-NN graph and an $\epsilon$-graph. In summary, for the three approaches the weights can be assigned as follows:
\begin{align}
&w_{ij} = f_{\hat{p}}\Big(\frac{\boldsymbol{x}_i + \boldsymbol{x}_j}{2}\Big)\cdot \|\boldsymbol{x}_i - \boldsymbol{x}_j\| & \textrm{for $KDE$;} \label{eq_kde}\\[2pt]
&w_{ij} = \tilde{f}\Big(\frac{r}{\|\boldsymbol{x}_i - \boldsymbol{x}_j\|}\Big) \cdot \|\boldsymbol{x}_i - \boldsymbol{x}_j\|, \hspace{5mm} r = \frac{k}{N \cdot \eta_d} & \textrm{for $k$-NN; and} \label{eq_knn}\\[2pt]
&w_{ij} = \tilde{f}\Big(\frac{\epsilon^d}{\|\boldsymbol{x}_i - \boldsymbol{x}_j\|}\Big) \cdot \|\boldsymbol{x}_i - \boldsymbol{x}_j\| & \textrm{for $\epsilon$-graph} \label{eq_egraph}\\[3pt]
&\hspace{10mm}\textrm{when} \hspace{5mm} \|\boldsymbol{x}_i - \boldsymbol{x}_j\| \leq \epsilon \nonumber
\end{align}
where $\eta_d$ denotes the volume of a sphere with a unit radius in $\mathbb{R}^d$.
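For illustration, the three weighting rules translate directly into code. In the sketch below, \texttt{f\_tilde} and \texttt{kde} are user-supplied callables corresponding to $\tilde{f}$ and $\hat{p}$, and the unit-ball volume $\eta_d$ is computed in closed form; this is an illustrative rendering of the equations above, not our reference implementation.
\begin{verbatim}
# Edge weights for the KDE, k-NN and epsilon-graph constructions;
# `dist` is the precomputed distance between x_i and x_j.
import math

def weight_kde(xi, xj, dist, kde, f_tilde):
    mid = [(a + b) / 2 for a, b in zip(xi, xj)]
    return f_tilde(kde(mid)) * dist

def weight_knn(dist, k, N, d, f_tilde):
    eta_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)  # unit-ball volume
    return f_tilde((k / (N * eta_d)) / dist) * dist

def weight_egraph(dist, eps, d, f_tilde):
    return f_tilde(eps ** d / dist) * dist
\end{verbatim}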
\subsection{The FACE Algorithm: Feasible and Actionable Counterfactual Explanations}
Building on this background, we introduce the \textbf{FACE} algorithm. It uses the $f$-distance to quantify the trade-off between the path length and the density along the path, which can subsequently be minimised using a shortest-path algorithm by approximating the $f$-distance by means of a finite graph over the data set.
Moreover, \textbf{FACE} allows the user to impose additional feasibility and classifier confidence constraints in a natural and intuitive manner.
Firstly, a graph over the data points is constructed based on one of the three approaches: $KDE$, $k$-NN or $\epsilon$-graph. The user then decides on the properties of the target instance (i.e., the counterfactual): the prediction threshold -- a lower bound on prediction confidence outputted by the model, and the density (or its proxy) threshold. This part of the algorithm is described in Algorithm~\ref{algo1}, which assumes access to a $KDE$.
To generate a counterfactual, \textbf{FACE} must be given its expected class. Optionally, the counterfactual can be additionally constrained by means of: a subjective prediction confidence threshold ($t_p$), a density threshold ($t_d$), a custom weight function ($w$), and a custom conditions function ($c$), which determines if a transition from a data point to its neighbour is feasible.\footnote{Domain knowledge of this form (e.g., immutable features such as sex, or conditionally immutable changes such as age, which is only allowed to change in one direction) is incorporated within the \emph{conditions function} $c(\cdot, \cdot)$. This knowledge is \emph{essential} if the desired counterfactual is to be useful.} Subject to the new weight function and conditions function, the graph is updated by removing the appropriate edges where possible; otherwise a new graph is constructed.\footnote{If the explainee wants to provide a custom cost function for the feature value changes, e.g., the cost of changing a job is twice that of changing a marital status, a new graph has to be built from scratch. If, on the other hand, the cost function stays fixed and only new constraints (inconsistent with the current graph) are introduced, e.g., the counterfactuals should not be conditioned on a job change, the existing graph can be modified by removing some of its edges.} Dijkstra's shortest-path algorithm \cite{cormen2009introduction} is then executed on the resulting graph over all the candidate targets, i.e., the set $\boldsymbol{I}_{CT}$ of all the data points that meet the confidence and density requirements (see line 11 in Algorithm~\ref{algo1}).
\newcommand\mycommfont[1]{\footnotesize\ttfamily\textcolor{blue}{#1}}
\SetCommentSty{mycommfont}
\begin{algorithm}
\caption{\textbf{FACE} Counterfactual Generator}
\label{algo1}
\SetKwData{Left}{left}
\SetKwData{This}{this}
\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}
\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\Input{Data ($\boldsymbol{X} \in \mathbb{R}^d)$, density estimator ($\hat{p}: \mathcal{X}\rightarrow[0, 1]$), probabilistic predictor ($\boldsymbol{clf}: \mathcal{X}\rightarrow[0, 1]$), distance function ($d: \mathcal{X}\times\mathcal{X}\rightarrow \mathbb{R}_{\geq 0}$), distance threshold ($\epsilon > 0$), weight function ($w: \mathcal{X}\times\mathcal{X}\rightarrow \mathbb{R}_{\geq 0}$), and conditions function ($c: \mathcal{X}\times\mathcal{X}\rightarrow \{True,False\}$).}
\Output{Graph ($V, E, W$) and candidate targets ($\boldsymbol{I}_{CT}$).}
\BlankLine
\tcc{Construct a graph.}
\For{every pair ($\boldsymbol{x}_i, \boldsymbol{x}_j$) in $\boldsymbol{X}$}{
\If{$d(\boldsymbol{x}_i, \boldsymbol{x}_j)$ > $\epsilon$ $\boldsymbol{~or~}$ $c(\boldsymbol{x}_i, \boldsymbol{x}_j)$ $\boldsymbol{is}$ $\boldsymbol{\textit{False}}$}
{
$i \nsim j$\\
$w_{ij} = 0$
}
\Else
{
$i \sim j$ \\
\tcc{In this case we use Equation~\ref{eq_kde} (KDE). This should be adjusted for $k$-NN and $\epsilon$-graph constructions by using Equation~\ref{eq_knn} and \ref{eq_egraph} respectively.}
$w_{ij} = w(\hat{p}(\frac{\boldsymbol{x}_i + \boldsymbol{x}_j}{2})) \cdot d(\boldsymbol{x}_i, \boldsymbol{x}_j)$\\
}
}
\tcc{Construct a set of candidate targets.}
$\boldsymbol{I}_{CT}$ = \{\} \\
\For{$\boldsymbol{x}_i$ in $\boldsymbol{X}$}{
\If{$\boldsymbol{clf}(\boldsymbol{x}_i) \geq t_p$ $\boldsymbol{~and~}$ $\hat{p}(\boldsymbol{x}_i) \geq t_d$}
{
$\boldsymbol{I}_{CT}$ = $\boldsymbol{I}_{CT}$ $\cup$ $\{i\}$
}
}
\end{algorithm}
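An end-to-end sketch of Algorithm~\ref{algo1} followed by the shortest-path search is given below, using NetworkX and assuming \texttt{X} is a NumPy array of instances; \texttt{clf}, \texttt{p\_hat}, \texttt{w}, \texttt{c} and \texttt{dist} mirror the notation of the algorithm. This is an illustrative reconstruction rather than our reference implementation.
\begin{verbatim}
# Graph construction (Algorithm 1) plus Dijkstra over candidate targets.
import itertools
import networkx as nx

def face(X, clf, p_hat, w, c, dist, eps, t_p, t_d, start):
    G = nx.Graph()
    G.add_nodes_from(range(len(X)))
    for i, j in itertools.combinations(range(len(X)), 2):
        d_ij = dist(X[i], X[j])
        if d_ij <= eps and c(X[i], X[j]):           # close and feasible
            G.add_edge(i, j, weight=w(p_hat((X[i] + X[j]) / 2)) * d_ij)
    targets = [i for i in range(len(X))             # candidate targets
               if clf(X[i]) >= t_p and p_hat(X[i]) >= t_d]
    lengths, paths = nx.single_source_dijkstra(G, start, weight='weight')
    best = min((t for t in targets if t in lengths),
               key=lengths.get, default=None)
    return (paths[best], X[best]) if best is not None else (None, None)
\end{verbatim}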
\begin{comment}
\begin{algorithm}
\caption{Counterfactual Generation -- Part 2}
\label{algo2}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{subject instance ($\boldsymbol{x}$), subject target ($t$), Candidate targets ($\boldsymbol{I}_{CT}$).
Data ($\boldsymbol{X}, \boldsymbol{y}$), predictor ($\boldsymbol{C}$), prediction threshold ($t_p$), density estimator ($\boldsymbol{D}$), density threshold ($t_p$), distance function ($d$), distance threshold ($\epsilon$), weight function ($f$), personalised conditions function ($c_p$).}
\Output{Counterfactual $\boldsymbol{x}_{cf}$}
Construct the graph.\\
\hspace{2.5mm} $\boldsymbol{for}$ every ($\boldsymbol{x}_i, \boldsymbol{x}_j$) in $\boldsymbol{X}$: \\
\hspace{5mm} $\boldsymbol{if}$ $d(\boldsymbol{x}_i, \boldsymbol{x}_j)$ > $\epsilon$: \\
\hspace{7.5mm} $i \nsim j$ \\
\hspace{5mm} $\boldsymbol{else}$: \\
\hspace{7.5mm} $i \sim j$ \\
\hspace{7.5mm} $w_{ij} = f(\boldsymbol{D}(\frac{\boldsymbol{x}_i + \boldsymbol{x}_j}{2})) \cdot d(\boldsymbol{x}_i, \boldsymbol{x}_j)$ \\
\end{algorithm}
\end{comment}
\paragraph{Complexity}
Execution of Dijkstra's shortest-path algorithm between two instances can be optimised to have a worst-case time complexity of $\mathcal{O}(|E| + |V|\log|V|)$, where $|E|$ denotes the number of edges and $|V|$ the number of nodes in the graph. This complexity scales accordingly with the number of candidate targets. The first term -- the number of edges -- can be controlled by the user to a certain extent, as it depends on the choice of the distance threshold parameter. The second term (and with it the first) can be controlled by reducing the number of instances considered, in which case the objective is similar to that of ``Prototype Selection''. Sub-sampling as simple as random sampling of the data points, or more sophisticated alternatives such as Maximum Mean Discrepancy (MMD) \cite{kim2016examples, gretton2012kernel}, can be used, with a clear trade-off between the accuracy of the generated counterfactuals and the algorithm's speed. %
By defining the problem space in this way, our method is restricted to paths that jump only between existing data points, with the generated counterfactuals also being part of the data set.
In practice, a base graph can be generated and stored with the most generic conditions imposed, e.g., if the data represent people, edges between people of different sex would be removed. When an explainee requests a counterfactual, they can impose further restrictions (by removing edges) to create a personalised graph, e.g., if this individual is not willing to get divorced. On the other hand, if a personalised cost function is required, an entirely new graph needs to be generated. While the theory presented here only holds for continuous distributions that satisfy the requirements discussed earlier, the approach can still be used with discrete features.
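The simplest of the reductions mentioned above, uniform random prototype selection, is sketched below; an MMD-based criterion \cite{kim2016examples} would replace the random draw with a kernel-based selection. The function name and NumPy usage are illustrative assumptions.
\begin{verbatim}
# Uniform random sub-sampling of prototypes before graph construction.
import numpy as np

def subsample(X, n_prototypes, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(n_prototypes, len(X)),
                     replace=False)
    return X[idx], idx
\end{verbatim}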
\section{Counterfactual Thinking and Counterfactual Examples\label{counterfactuals}}
The desiderata put forward by \citet{wachter2017counterfactual} might at first seem sufficient to construct a helpful counterfactual for any task at hand.
However, our experiments show that this is not necessarily the case, prompting
the need for a new approach that ensures usefulness of counterfactual explanations in practice.
First, the nature of the target instance -- the derived counterfactual example -- is not taken into account. This may lead to a situation where the target instance is not representative of the underlying data distribution, i.e., it is located in a low density region, and thus essentially can be considered an outlier. Outliers are poor counterfactual explanations in practice as they would not naturally occur in the first place. In addition to being poor explanations, such counterfactuals are at risk of
harming the explainee
by suggesting a change of which the future outcome is highly uncertain, as classifiers tend to be less reliable in sparsely populated regions of the data space, especially close to a decision boundary.
Points $A$ and $B$ shown in Figure~\ref{fig:problems} are prime examples of this major drawback. The uncertainty in a prediction, coming either from a low classification margin or due to the low density of a region, should be of utmost importance when generating a counterfactual.
Beyond feasibility and actionability, it is also important to consider the model's confidence of predictions as it may contribute to issues with a delayed impact \cite{liu2018delayed}. For example, consider a person who had his loan application rejected and wants to know what changes to make for his application to be accepted next time. If this person is handed a counterfactual explanation and implements the proposed changes, his loan application will be accepted. However, if the \textit{new state} of the subject (the proposed counterfactual) is in a region of high uncertainty, then there exists a high risk that this individual will default. This unintended consequence can be either attributed to the counterfactual data point lying on a decision boundary (caused by minimising a norm constraint only) or to its placement in a low-density region where the model had not seen enough data. In the process of trying to help, the system generating counterfactuals may actually hurt the explainees.
Furthermore, the desiderata presented by \citet{wachter2017counterfactual} do not account for the extent to which the change -- a transformation from the current state to the suggested counterfactual state -- is feasible. %
``Counterfactual thinking'' refers to the concept of hypothesising what would have happened had something been done differently \cite{contrastive_thesis}, i.e., ``Had I done $X$ instead of $Y$, would the outcome still be $Z$?'' %
However, when adapting this concept to ML applications (e.g., see \cite{contrastive_thesis}) the outcome is usually decided prior to finding a counterfactual cause. What has been overlooked by the XML community is that the aim of a counterfactual explanation is for the explainee to \textit{actually try and make the change} given the actionable nature of the explanation. A customer whose loan application has been rejected would (probably) disregard a counterfactual explanation conditioned on him being 10 years younger.
In addition to the feasibility of a counterfactual explanation -- the target instance has to be in a region of high density -- it also has to be achievable in the real world, therefore ``reachable'' from the selected instance. This means that the explanation must not suggest altering attributes in ways that are particularly hard or even physically impossible in the real world, such as reducing one's age. %
This also implies the existence of
a short, continuous and feasible path from the selected instance to the target instance for the counterfactual to be actionable. Specifically, paths crossing low-density regions are arguably less feasible as instances in these regions are rare and unlikely by definition.
To sum up, the current state-of-the-art solutions do not satisfy the three requirements proposed by \citeauthor{wachter2017counterfactual}, which we believe are critical for actionability and thus practical utility of counterfactual explanations.
To remedy this situation we propose the following objectives for counterfactual explanations, in addition to the inherent requirement that this instance belongs to the desired class:
\begin{enumerate}
\item feasibility of the counterfactual data point,
\item continuity and feasibility of the path linking it with the data point being explained, and
\item high density along this path and its relatively short length.
\end{enumerate}
\begin{comment}
Finally, our solution generates counterfactuals that are robust to future model changes. %
Since some of the actions suggested by counterfactuals may not be acted upon immediately the change of the model over time may invalidate them; especially if the optimisation objective was to find a counterfactual just past the decision boundary. %
With more data becoming available over time, the model will be refined. This might include a shift in the decision boundary or even an exploration of new areas. For example, due to the company's expansion in a new region, data for people with different backgrounds may become available, e.g., people with different socioeconomic backgrounds. We cannot guarantee that the decision boundary in a dense region will not change but we are even more uncertain about what may happen in an a low-density area regardless of current algorithm's confidence in that region -- adversarial examples are a great example of this phenomenon.
\end{comment}
\section{Related Work\label{related_work}}
Counterfactual explanations of Machine Learning models and predictions have been studied extensively in the recent years \cite{rudin2018please, efficient2019russel, contrastive_thesis,van2018contrastive}. Their popularity in Machine Learning is mainly attributed to the use of counterfactual explanations in everyday human life to explain phenomena that surround us \cite{miller2018explanation}, therefore they do not require the explainee to be familiar with Artificial Intelligence concepts to understand them. %
Despite this recent surge in popularity of counterfactual explanations in the ML literature, they have been extensively studied in the social sciences \cite{miller2017explainable}, hence are well grounded as an explanatory technique. %
Furthermore, they have been deemed to satisfy the ``Right to Explanation'' requirement \cite{wachter2017counterfactual} introduced by the European Union's General Data Protection Regulation (GDPR), therefore making them a viable solution for many businesses applying predictive modelling to human matters.%
To produce this type of explanations \citet{wachter2017counterfactual} adapted the standard machinery used in the \textit{Adversarial Examples} literature \cite{goodfellow2015explaining}:
\begin{equation}
\label{adv_examples}
\arg \min_{x'} \max_{\lambda} (f_w(\boldsymbol{x'}) - y')^2 + \lambda \cdot d(\boldsymbol{x}, \boldsymbol{x'}) \text{,}
\end{equation}
where $\boldsymbol{x}$ and $\boldsymbol{x'}$ denote respectively the current state of the subject and the counterfactual, $y'$ the desired outcome, $d(\cdot, \cdot)$ a distance function that measures the difficulty of moving from $\boldsymbol{x}$ to $\boldsymbol{x'}$ and $f_w$ a classifier parametrised by $w$. The problem is optimised by iteratively solving for $\boldsymbol{x'}$ and increasing $\lambda$ until a sufficiently close solution is found. %
\citeauthor{wachter2017counterfactual} emphasise the importance of the distance function choice and suggest using the $l_1$-norm penalty on the counterfactual, which encourages sparse solutions, weighted by the \textit{Median Absolute Deviation} (MAD) that for a feature $k$ is given by:
\begin{equation*}
MAD_{k} := median_{\boldsymbol{x} \in \mathcal{X}}(|\boldsymbol{x}_{k} - median_{\boldsymbol{x} \in \mathcal{X}}(\boldsymbol{x}_{k})|) \text{,}
\end{equation*}
which leads to the following formulation of the distance function to be used for Equation~\ref{adv_examples}:
\begin{equation*}
d(\boldsymbol{x}, \boldsymbol{x'}) = \sum_{k=1}^K \frac{|\boldsymbol{x}_k - \boldsymbol{x'}_k|}{MAD_k} \text{.}
\end{equation*}
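To make this procedure concrete, the following is a minimal sketch of the optimisation in Equation~\ref{adv_examples} for a differentiable binary classifier, using the MAD-weighted $l_1$ distance defined above. It is not the implementation of \citeauthor{wachter2017counterfactual}: the toy logistic model, the fixed value of $\lambda$ (in place of the iterative increase), the learning rate, and the stopping tolerance are all illustrative assumptions.
\begin{verbatim}
import numpy as np

def mad_weights(X):
    # Median Absolute Deviation per feature (MAD_k above).
    med = np.median(X, axis=0)
    return np.median(np.abs(X - med), axis=0)

def counterfactual(x, f, grad_f, X, y_target=1.0,
                   lam=0.01, lr=0.05, tol=0.1, n_steps=5000):
    # Gradient-descent sketch of the objective
    # (f(x') - y')^2 + lambda * d(x, x'); lambda trades off
    # validity of the counterfactual against proximity to x.
    mad = mad_weights(X) + 1e-8                 # avoid division by zero
    x_cf = x.astype(float)
    for _ in range(n_steps):
        pred_term = 2.0 * (f(x_cf) - y_target) * grad_f(x_cf)
        dist_term = lam * np.sign(x_cf - x) / mad  # weighted l1 subgradient
        x_cf -= lr * (pred_term + dist_term)
        if abs(f(x_cf) - y_target) < tol:       # sufficiently close solution
            break
    return x_cf

# Toy usage with a logistic model f_w(x) = sigmoid(w.x).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w = np.array([1.5, -2.0, 0.5])
f = lambda x: 1.0 / (1.0 + np.exp(-x @ w))
grad_f = lambda x: f(x) * (1.0 - f(x)) * w      # derivative of the sigmoid
x0 = np.array([-1.0, 1.0, 0.0])                 # currently in the negative class
print(counterfactual(x0, f, grad_f, X))
\end{verbatim}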
\citeauthor{wachter2017counterfactual} deal with discrete variables by executing the optimisation separately for each unique value of every feature, and then choosing the counterfactual with the shortest distance. %
While it is possible to adopt techniques from the Adversarial Learning literature \cite{wachter2017counterfactual} and keep discarding unactionable counterfactuals until an actionable one is found, such an approach is clearly sub-optimal.
\citet{ustun2018actionable} present an Integer Programming (IP) toolkit for linear models intended to be used by practitioners to analyse actionability and difficulty of recourse in a given population as well as generate actionable changes (counterfactuals) to subjects.
\citet{efficient2019russel} propose a Mixed Integer Programming (MIP) formulation to handle mixed data types to offer counterfactual explanations for linear classifiers that respect the original data structure. This formulation is guaranteed to find coherent solutions (avoiding nonsense states) by only searching within the ``mixed-polytope'' structure defined by a suitable choice of linear constraints. \citeauthor{efficient2019russel} chose an iterative approach to providing a diverse collection of counterfactuals: given one solution, the user can add extra constraints to the MIP that restrict the previous alterations. The list of counterfactuals is then ranked according to their $l_1$-norm distance to the original instance.%
\citet{van2018contrastive} propose a counterfactual generation method for decision trees. Their approach uses locally trained one-vs-the-rest decision trees to establish a set of disjoint rules that cause the chosen instance to be classified as the target class.
\textbf{FACE} improves over all of the aforementioned counterfactual generation schemata in a number of ways:
\begin{itemize}
\item In contrast to \cite{wachter2017counterfactual} and similarly to \cite{ustun2018actionable, efficient2019russel,van2018contrastive}, it can handle discrete features and their restrictions in a more principled manner. For example, it natively supports features that cannot change, features whose values can only change within a specified range, and user preferences on subjective distance measures.
\item In contrast to \cite{ustun2018actionable, efficient2019russel,van2018contrastive} and similarly to \cite{wachter2017counterfactual}, it is \emph{model-agnostic} (not restricted to linear models or decision trees), hence it can handle any predictive model.
\item In contrast to \cite{wachter2017counterfactual, ustun2018actionable,efficient2019russel, van2018contrastive}, it produces counterfactual explanations that are both feasible and actionable.
\end{itemize}
\section{Introduction}
Determining the physical properties of dense cores of interstellar molecular clouds is of fundamental
importance in the field of star formation research. A powerful method for this purpose is to construct
the spectral energy distribution (SED) of the source from observational data obtained at multiple different
frequencies (i.e. continuum flux density $S_{\nu}$ as a function of frequency $\nu$). In particular, an SED
analysis can be used to derive the temperature of the dust component(s), its mass, and also the luminosity
over a frequency or wavelength range of interest. The core properties, such as temperature and mass, are
central to understanding the initial conditions and early stages of star formation within the parent core.
Of the interstellar molecular clouds that have proved to be fruitful targets for the studies of Galactic star formation,
the so-called infrared dark clouds (IRDCs; \cite{perault1996}; \cite{egan1998}; \cite{simon2006}; \cite{peretto2009})
have attracted a lot of interest in recent years (e.g. \cite{tang2019}; \cite{soam2019}; \cite{peretto2020}; \cite{miettinen2020}; \cite{retes2020}; \cite{moser2020} to name a few recent studies). Some of the IRDCs studied so far are found to be associated with early stages of high-mass star formation (e.g. \cite{rathborne2006}; \cite{beuther2007}; \cite{chambers2009}; \cite{battersby2010}), and even a few candidates for high-mass prestellar cores have been uncovered in IRDCs (\cite{cyganowski2014}, 2017; \cite{contreras2018}). Therefore,
IRDCs are of particular interest in the context of high-mass star formation, where our understanding of the physical mechanisms is still incomplete (e.g. \cite{motte2018} for a review).
In this paper, we present a study of the physical properties of dense cores in the filamentary IRDC G304.74+01.32, also known as the Seahorse IRDC (\cite{miettinen2018}). The Seahorse IRDC has been fairly well-studied in the submillimetre and millimetre dust continuum emission via single-dish observations (\cite{beltran2006}; \cite{miettinenharju2010}; \cite{miettinen2018}), and through molecular spectral line observations (\cite{miettinen2012}, 2020). Miettinen (2018) derived the dust temperatures of the clumps in the Seahorse IRDC using the
250, 350, and 500~$\mu$m peak surface brightness ratios measured with the Spectral and Photometric Imaging Receiver (SPIRE; \cite{griffin2010}) on board the \textit{Herschel} satellite (\cite{pilbratt2010})\footnote{\textit{Herschel} is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.}. Moreover, Miettinen (2018) determined the masses of the clumps and the cores hosted by the clumps through monochromatic flux densities at 870~$\mu$m and 350~$\mu$m, respectively.
To improve the determination of the physical characteristics of the dense core population in the Seahorse IRDC compared to the aforementioned study by Miettinen (2018), the present study makes use of the source SEDs in the far-IR and submillimetre regime. Moreover, the present study focusses on the usage of data at $\sim9\arcsec-20\arcsec$ angular resolution, and does not employ the lower resolution \textit{Herschel}/SPIRE data. We note that although the SED analysis technique is standard and well established, its application to IRDC substructure samples (cores and clumps) has so far mostly concerned the Galactic plane IRDCs (e.g. \cite{rathborne2010}; \cite{beuther2010}; \cite{henning2010}; \cite{ragan2012}, 2013; \cite{veena2018}), while the Seahorse IRDC lies about $1\fdg3$ above the Galactic plane.
The core sample and observational data are described in Sect.~2. The analysis and results are described in Sect.~3, and discussed in Sect.~4. Section~5 summarises our results and main conclusions. Throughout this paper, we report the magnitudes in the Vega system and adopt a kinematic distance of $d=2.54$~kpc to the Seahorse IRDC (\cite{miettinen2012}, 2018).
\section{Source sample and data}
The initial source sample of the present study was taken to be the dense cores in the Seahorse IRDC uncovered
by Miettinen (2018) through 350~$\mu$m dust continuum observations with the Submillimetre APEX BOlometer CAmera
(SABOCA; \cite{siringo2010}) at $9\arcsec$ (0.11~pc) resolution (full width at half maximum or FWHM). These 17 cores
are listed in Table~\ref{table:sample}.
Miettinen (2018) employed the 3.4, 4.6, 12, and 22~$\mu$m IR data from the \textit{Wide-field Infrared Survey Explorer} (\textit{WISE}; \cite{wright2010}) to study whether the aforementioned 350~$\mu$m cores are associated with embedded young stellar objects (YSOs). We note that because the Seahorse IRDC lies about $1\fdg3$ ($\sim60$~pc) above the Galactic plane, it is outside the regions observed with the \textit{Spitzer} IR satellite (\cite{werner2004}). Five of the 17 cores (29.4\%) listed in Table~\ref{table:sample} are not associated with \textit{WISE} sources and are classified as IR dark, while the remaining 12 cores (70.6\%) are IR bright. As discussed by Miettinen (2018), the cores SMM~1b and SMM~6b are associated with \textit{WISE} sources whose IR colours suggest that they could be shock features (see the \textit{WISE} colour criteria from Koenig et al. (2012; Appendix therein)), but this would still likely be a sign of ongoing star formation activity in the cores. The \textit{WISE} source seen towards SMM~7 ($6\arcsec$ from the 350~$\mu$m maximum) was found to be so weak (only detected in the W4 band at 22~$\mu$m) that it was interpreted to be a chance projection of a background extragalactic object. However, SMM~7 shows a hint of a secondary core (see Appendix~A), and the \textit{WISE} 22~$\mu$m source could be associated with it ($2\farcs8$ separation). Hence, in the present study SMM~7 is taken to be an IR bright, star-forming core. The core IRAS~13039-6108a is known to be associated with an optically thin \ion{H}{ii} region (\cite{sanchez2013}), and is hence associated with high-mass star formation. We also note that three of the clumps that were detected in the Seahorse IRDC at 870~$\mu$m and $19\farcs86$ resolution with the Large APEX BOlometer CAmera (LABOCA; \cite{siringo2009}) by Miettinen (2018), namely BLOB~1, SMM~5, and SMM~8, were not detected in our SABOCA map because the emission was resolved out at $9\arcsec$ resolution.
To construct the source SEDs, we employed the \textit{WISE} 22~$\mu$m data ($12\arcsec$ FWHM resolution), and the SABOCA 350~$\mu$m and LABOCA 870~$\mu$m data from Miettinen (2018). The \textit{WISE} W4 magnitudes of the sources in the Vega system were taken from the AllWISE catalogue\footnote{\url{https://irsa.ipac.caltech.edu/data/download/wise-allwise/}} (the total in-band brightnesses; see Table~3 in \cite{miettinen2018}), and those were used to compute the 22~$\mu$m flux densities by applying the colour corrections under the assumption of a $S_{\nu}\propto \nu^{-2}$ power-law spectrum together with an additional W4 correction (see \cite{cutri2012}). In case the LABOCA 870~$\mu$m clump was resolved into two or three cores in our higher resolution SABOCA imaging (SMM~1, SMM~4, IRAS~13037-6112, SMM~6, and IRAS~13039-6108), we used the relative SABOCA 350~$\mu$m flux densities of the cores to estimate their contribution to the LABOCA 870~$\mu$m emission.
The Seahorse IRDC was observed as part of the \textit{Herschel} Gould Belt Survey (GBS; \cite{andre2010})\footnote{\url{http://gouldbelt-herschel.cea.fr}}. The observations were done with the Photodetector Array Camera and Spectrometer (PACS; \cite{poglitsch2010}) at 70, 100, and 160~$\mu$m ($\sim9\arcsec$, $\sim10\arcsec$, and $\sim13\arcsec$ resolution, respectively) and with
SPIRE at 250, 350, and 500~$\mu$m ($\sim18\arcsec$, $\sim24\arcsec$, and $\sim35\arcsec$ resolution, respectively). To search for \textit{Herschel} point source counterparts to the Seahorse cores, we cross-matched our source catalogue with the PACS and SPIRE Point Source Catalogues\footnote{\url{https://irsa.ipac.caltech.edu/Missions/herschel.html}} using a search radius of $9\arcsec$, that is the beam size of our SABOCA data. Five cores, namely BLOB~2, SMM~4b and 4c, and IRAS~13039-6108a and 13039-6108b were not found to have counterparts in any of the PACS catalogues, and none of the target cores were found to have counterparts in the SPIRE catalogues. However, the Seahorse IRDC as a whole is clearly detected in the SPIRE images as shown in Fig.~2 in Miettinen (2018). Nevertheless, the relatively poor resolution of the SPIRE observations makes those data less useful for the present purpose (e.g. \cite{ragan2012}, 2013), and our SABOCA observations already probed the 350~$\mu$m band at 2.7 times higher resolution than SPIRE. We note that the LABOCA clumps BLOB~1, SMM~5, and SMM~8 that were not detected in our SABOCA map also had no matches in the \textit{Herschel} point source catalogues.
For the three \textit{IRAS} (\textit{Infrared Astronomical Satellite}; \cite{neugebauer1984}) sources in the Seahorse we also employed the 25, 60, and 100~$\mu$m data from the IRAS Point Source Catalogue v2.1\footnote{\url{https://irsa.ipac.caltech.edu/Missions/iras.html}}. The angular resolution of \textit{IRAS} at these wavelengths was about $1\arcmin-2\arcmin$ (\cite{beichman1988}). For the cores IRAS~13037-6112a and IRAS~13037-6112b we used the \textit{WISE} 22~$\mu$m and PACS 70 and 100~$\mu$m data to estimate the cores' relative contribution to the IRAS flux densities. In the case of the IRAS~13039-6108a/b core pair, all the IRAS emission was assigned to the brighter IRAS~13039-6108a component because it is associated with a 22~$\mu$m source while IRAS~13039-6108b is not.
We required that a source needs to have at least three detections in the far-IR and submillimetre regime in addition to the possible
\textit{WISE} 22~$\mu$m detection in order to construct a useful source SED for the purpose of the present study. Hence,
the final source sample is composed of 12 cores out of which two (SMM~1a and SMM~9) were not detected with \textit{WISE} and are classified as IR dark cores. The relative percentages of IR bright and IR dark cores in our final sample are therefore 83.3\% and 16.7\%. The photometric data of these sources are given in Table~\ref{table:photometry}. The PACS 70~$\mu$m image towards the Seahorse IRDC is shown in Fig.~\ref{figure:map}, while panchromatic zoom-in images towards the analysed cores are shown in Fig.~\ref{figure:images}.
\begin{table}[H]
\renewcommand{\footnoterule}{}
\caption{Initial source sample.}
{\small
\begin{minipage}{1\columnwidth}
\centering
\label{table:sample}
\begin{tabular}{c c c c}
\hline\hline
Source & $\alpha_{2000.0}$ & $\delta_{2000.0}$ & Type\\
& [h:m:s] & [$\degr$:$\arcmin$:$\arcsec$] & \\
\hline
SMM 1a & 13 06 19.38 & -61 30 19.42 & IR dark\\
SMM 1b & 13 06 23.36 & -61 30 10.37 & IR bright\tablefootmark{a}\\
SMM 2 & 13 06 28.82 & -61 29 43.43 & IR bright \\
SMM 3 & 13 06 37.00 & -61 28 51.00 & IR bright \\
BLOB 2 & 13 06 39.50 & -61 30 01.51 & IR dark \\
SMM 4a & 13 06 46.21 & -61 28 48.04 & IR bright \\
SMM 4b & 13 06 46.84 & -61 28 22.54 & IR bright \\
SMM 4c & 13 06 42.86 & -61 28 33.03 & IR dark \\
IRAS 13037 & 13 06 51.45 & -61 28 24.04 & IR bright \\
-6112a & & & \\
IRAS 13037 & 13 06 51.24 & -61 27 51.04 & IR bright \\
-6112b & & & \\
SMM 6a & 13 06 55.42 & -61 27 27.03 & IR bright\\
SMM 6b & 13 06 52.08 & -61 27 40.54 & IR bright\tablefootmark{a}\\
SMM 7 & 13 07 04.20 & -61 26 15.00 & IR bright\tablefootmark{b}\\
IRAS 13039 & 13 07 06.49 & -61 24 32.98 & IR bright/\ion{H}{ii}\tablefootmark{c} \\
-6108a & & & \\
IRAS 13039 & 13 07 08.58 & -61 23 56.97 & IR dark \\
-6108b & & & \\
SMM 9 & 13 07 12.53 & -61 22 46.43 & IR dark\\
IRAS 13042 & 13 07 20.66 & -61 21 52.33 & IR bright\\
-6105 & & & \\
\hline
\end{tabular}
\tablefoot{The coordinates refer to the SABOCA 350~$\mu$m peak positions of the sources (\cite{miettinen2018}). The source type in the last column is based on the source appearance in the \textit{WISE} IR images (\cite{miettinen2018}).\tablefoottext{a}{The core is associated with a \textit{WISE} IR source that could be a shock emission knot.}\tablefoottext{b}{The weak \textit{WISE} IR source seen towards SMM~7 was considered a candidate extragalactic object in our previous studies (\cite{miettinen2018}, 2020), but it could be an embedded YSO associated with a secondary 350~$\mu$m peak position at $\alpha_{2000.0}=13^{\rm h}07^{\rm m}04\fs21$, $\delta_{2000.0}=-61\degr26\arcmin 22\farcs50$ ($2\farcs8$ or 0.03~pc offset; see Appendix~A). Hence, SMM~7 is considered an IR bright core in the present study.}\tablefoottext{c}{The core is associated with an optically thin \ion{H}{ii} region (\cite{sanchez2013}).}}
\end{minipage} }
\end{table}
\begin{figure}[!htb]
\centering
\resizebox{\hsize}{!}{\includegraphics{pacs_map_2.ps}}
\caption{\textit{Herschel}/PACS 70~$\mu$m image towards the IRDC G304.74+01.32 (the Seahorse IRDC). The colour scale is displayed using
a linear stretch. The overlaid contours represent the LABOCA 870~$\mu$m emission (\cite{miettinenharju2010}; \cite{miettinen2018}); the contours start at $3\sigma$, and increase in steps of $3\sigma$, where $3\sigma=120$~mJy~beam$^{-1}$. The 870~$\mu$m clumps are labelled so that the numbers refer to the SMM IDs (e.g. 1 refers to SMM~1), while the sources I13037, I13039, and I13042 are the
three \textit{IRAS} sources in the filament. The sources BLOB~1 and BLOB~2 are labelled as B1 and B2. The plus signs indicate the LABOCA 870~$\mu$m emission peaks of the clumps. A scale bar of 1~pc projected length, and the LABOCA beam size ($19\farcs86$ FWHM) are shown in the bottom left corner.}
\label{figure:map}
\end{figure}
\begin{table*}
\caption{Photometric data of the analysed sources.}
\begin{minipage}{2\columnwidth}
\centering
\renewcommand{\footnoterule}{}
\label{table:photometry}
\begin{tabular}{c c c c c c c c c}
\hline\hline
Source & $S_{22}$ & $S_{25}$ & $S_{60}$ & $S_{70}$ & $S_{100}$ & $S_{160}$ & $S_{350}$ & $S_{870}$\\
& [Jy] & [Jy] & [Jy] & [Jy] & [Jy] & [Jy] & [Jy] & [Jy] \\
\hline
SMM 1a & \ldots & \ldots & \ldots & \ldots & $2.59\pm0.11$ & $8.04\pm0.87$ & $4.04\pm1.24$ & $0.75\pm0.09$ \\
SMM 1b & $0.16^{+0.00}_{-0.01}$ & \ldots & \ldots & $0.95\pm0.05$ & $1.60\pm0.15$ & \ldots & $4.15\pm1.28$ & $0.78\pm0.09$\\
SMM 2 & $0.35\pm0.01$ & \ldots & \ldots & $1.10\pm0.05$ & $1.65\pm0.06$ & $3.73\pm0.87$ & $2.18\pm0.69$ & $0.63\pm0.06$ \\
SMM 3 & $0.27^{+0.01}_{-0.00}$ & \ldots & \ldots & \ldots & $4.62\pm0.39$ & $10.72\pm2.38$ & $3.58\pm1.11$ & $0.82\pm0.11$ \\
SMM 4a & $0.03\pm0.01$ & \ldots & \ldots & \ldots & $3.32\pm1.28$ & \ldots & $4.98\pm1.53$ & $0.75\pm0.09$ \\
IRAS 13037 & $3.27\pm0.05$ & $5.23\pm0.42$ & $47.69\pm7.63$ & $24.08\pm1.02$ & $33.51\pm3.45$ & $56.34\pm8.01$ & $3.74\pm1.15$ & $0.77\pm0.09$\\
-6112a & & & & & $<144.25$ & \\
IRAS 13037 & $1.36^{+0.02}_{-0.01}$ & $2.18\pm0.17$ & $17.27\pm2.76$ & $8.72\pm1.17$ & $12.14\pm1.83$ & \ldots & $2.27\pm0.72$ & $0.47\pm0.05$ \\
-6112b & & & & & $<52.25$ & \\
SMM 6a & $0.28\pm0.01$ & \ldots & \ldots & $7.33\pm0.52$ & \ldots & \ldots & $4.97\pm0.70$ & $0.34\pm0.05$ \\
SMM 7 & $0.04^{+0.001}_{-0.002}$ & \ldots & \ldots & $1.19\pm0.17$ & $2.40\pm0.46$ & \ldots & $2.12\pm0.68$ & $0.52\pm0.07$ \\
IRAS 13039 & $3.33\pm0.05$ & $7.43\pm0.59$ & $105.6\pm14.8$ & \ldots & \ldots & \ldots & $5.88\pm1.80$ & $1.17\pm0.13$ \\
-6108a & & & & & $196.5\pm35.4$ & \\
SMM 9 & \ldots & \ldots & \ldots & $3.52\pm0.07$ & $7.84\pm0.19$ & $13.10\pm1.29$ & $3.61\pm1.12$ & $0.83\pm0.10$ \\
IRAS 13042 & $0.46\pm0.01$ & $0.67\pm0.08$ & $<4.97$ & $2.73\pm0.03$ & $4.77\pm0.13$ & $6.58\pm0.84$ & $1.27\pm0.44$ & $0.23\pm0.05$ \\
-6105 & & & & & $<196.5$ &\\
\hline
\end{tabular}
\tablefoot{The \textit{WISE} flux densities listed in the second column were computed from the AllWISE catalogue's Vega magnitudes by applying the colour corrections under the assumption of a $S_{\nu}\propto \nu^{-2}$ power-law spectrum. Moreover, an additional W4 correction was applied to calculate the final 22~$\mu$m flux density (see \cite{cutri2012}). For the \textit{IRAS} sources, the 100~$\mu$m flux density could be obtained from both \textit{Herschel}/PACS and \textit{IRAS} (given below the PACS value). However, IRAS 13039-6108a could not be found from the PACS Point Source Catalogue, and hence only the \textit{IRAS} 100~$\mu$m flux density is given in the table.}
\end{minipage}
\end{table*}
\section{Analysis and results}
\subsection{Modified blackbody fits to the SEDs}
The SEDs of the 12 analysed cores are shown in Fig.~\ref{figure:seds}. Depending on the source, the observed far-IR to submillimetre SEDs were fitted by single or two-temperature modified blackbody functions under the assumption of
optically thin dust emission, in which case the general function is given by (e.g. \cite{shetty2009a},b; \cite{casey2012}; \cite{bianchi2013})
\begin{equation}
\label{eqn:sed}
S_{\nu}=\frac{M_{\rm dust}}{d^2}\kappa_{\nu}B_{\nu}(T_{\rm dust})\,,
\end{equation}
where $M_{\rm dust}$ is the dust mass, $\kappa_{\nu}$ is the dust opacity, and $B_{\nu}(T_{\rm dust})$ is the Planck function
at a dust temperature of $T_{\rm dust}$ defined as
\begin{equation}
\label{eqn:planck}
B_{\nu}(T_{\rm dust})=\frac{2h \nu^3}{c^2}\frac{1}{e^{h\nu/k_{\rm B}T_{\rm dust}}-1}\,,
\end{equation}
where $h$ is the Planck constant, $c$ is the speed of light, and $k_{\rm B}$ is the Boltzmann constant.
The two-temperature component SEDs were fitted with a sum of two modified Planck functions (e.g. \cite{dunne2001}).
The frequency-dependent dust opacity was assumed to follow a power-law function of the form
\begin{equation}
\label{eqn:kappa}
\kappa_{\nu}=\kappa_0 \times \left(\frac{\nu}{\nu_0}\right)^{\beta}\,,
\end{equation}
where $\beta$ is the dust emissivity index. We assumed that the $\kappa_{\nu}$ values follow
the Ossenkopf \& Henning (1994) dust model values for graphite-silicate dust grains that have coagulated and accreted thin ice mantles over a period of $10^5$~yr at a gas density of $n_{\rm H}=10^5$~cm$^{-3}$. For reference, the value of $\kappa_{\nu}$ is 1.38~cm$^2$~g$^{-1}$ at 870~$\mu$m and $\beta \simeq 1.9$. The non-linear least squares SED fitting was implemented in {\tt Python} using the {\tt SciPy} optimisation module (\cite{virtanen2020}).
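To illustrate this step, the sketch below fits a single-temperature modified blackbody (Eqs.~(\ref{eqn:sed})--(\ref{eqn:kappa})) with {\tt curve\_fit}. It is a simplified, single-component version of the procedure rather than an excerpt of our pipeline: the photometric values, their uncertainties, the initial guesses, and the fit bounds are placeholders, while $\kappa_{870\,\mu{\rm m}}=1.38$~cm$^2$~g$^{-1}$, $\beta=1.9$, and $d=2.54$~kpc follow the assumptions stated in the text (the conversion to a total mass via $R_{\rm dg}=1/141$ is described below).
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Physical constants and unit conversions (cgs).
h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16
pc, M_sun, Jy = 3.086e18, 1.989e33, 1e-23

d = 2.54e3 * pc                              # adopted distance
kappa0, nu0, beta = 1.38, c / 870e-4, 1.9    # OH94 thin ice mantles

def planck(nu, T):
    # Eq. (2)
    return 2.0 * h * nu**3 / c**2 / (np.exp(h * nu / (k_B * T)) - 1.0)

def mbb(nu, T_dust, M_dust):
    # Eqs. (1) and (3); M_dust in M_sun, output in Jy.
    kappa = kappa0 * (nu / nu0)**beta
    return (M_dust * M_sun) / d**2 * kappa * planck(nu, T_dust) / Jy

# Placeholder photometry (wavelengths in micron, flux densities in Jy).
wave = np.array([100.0, 160.0, 350.0, 870.0])
nu = c / (wave * 1e-4)
S_obs = np.array([2.6, 8.0, 4.0, 0.75])
S_err = np.array([0.1, 0.9, 1.2, 0.09])

popt, pcov = curve_fit(mbb, nu, S_obs, p0=(15.0, 0.5),
                       sigma=S_err, absolute_sigma=True,
                       bounds=([2.0, 1e-3], [100.0, 1e4]))
T_fit, Mdust_fit = popt
print(T_fit, Mdust_fit * 141.0)  # T_dust [K]; gas+dust mass with R_dg = 1/141
\end{verbatim}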
The dust masses derived through SED fits were converted to the total gas plus dust masses using a dust-to-gas mass ratio,
$R_{\rm dg}=M_{\rm dust}/M_{\rm gas}$. The latter value can be derived from the dust-to-hydrogen mass ratio,
$M_{\rm dust}/M_{\rm H}$, which is about 1/100 (e.g. \cite{weingartner2001}; \cite{draine2007}). Assuming that the chemical composition of the cores is similar to the solar mixture, where the hydrogen mass percentage is $X=71\%$ and those of helium and metals are $Y=27\%$ and $Z=2\%$ (e.g. \cite{anders1989}), the ratio of total gas mass (hydrogen, helium, and metals) to hydrogen gas mass is $(X+Y+Z)/X=1.41$. Based on this assumption, we adopted a dust-to-gas mass ratio of $R_{\rm dg}=1/141$.
In addition to the dust temperature and mass that were derived through fitting the observed SEDs, we also calculated
the bolometric luminosities of the sources by integrating over the fitted SED curves, that is (e.g. \cite{dunham2010})
\begin{equation}
\label{eqn:luminosity}
L=4\pi d^2 \int_0^{\infty} S_{\nu} {\rm d}\nu \,.
\end{equation}
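Continuing the fitting sketch above, Eq.~(\ref{eqn:luminosity}) can be evaluated by numerically integrating the best-fit model over a frequency grid; the integration range (roughly 3~$\mu$m to 3~mm) is a practical assumption that covers the far-IR peak of a cold component.
\begin{verbatim}
L_sun = 3.828e33                               # erg / s
nu_grid = np.logspace(11.0, 14.0, 2000)        # ~3 micron to ~3 mm
S_model = mbb(nu_grid, T_fit, Mdust_fit) * Jy  # back to cgs units
L = 4.0 * np.pi * d**2 * np.trapz(S_model, nu_grid)
print(L / L_sun)
\end{verbatim}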
To quantify the goodness of the SED fits, we calculated the reduced $\chi^2$ values defined as (e.g. \cite{dunham2006})
\begin{equation}
\label{eqn:chi}
\chi_{\rm red}^2 = \frac{1}{k}\sum\limits_{i=1}^{n}\left(\frac{S_{\nu,i}^{\rm obs}-S_{\nu,i}^{\rm model}}{\sigma(S_{\nu,i}^{\rm obs})}\right)^2\,,
\end{equation}
where for $n$ data points and $m$ free parameters there are $k=n-m$ degrees of freedom, $S_{\nu,i}^{\rm obs}$ and $\sigma(S_{\nu,i}^{\rm obs})$ are the observed flux densities and their uncertainties, and $S_{\nu,i}^{\rm model}$ is the flux density of the model fit at the corresponding frequency.
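For the same sketch as above, Eq.~(\ref{eqn:chi}) takes only a few lines:
\begin{verbatim}
def chi2_red(S_obs, S_err, S_model, n_free):
    # Eq. (5) with k = n - m degrees of freedom.
    k = len(S_obs) - n_free
    return np.sum(((S_obs - S_model) / S_err)**2) / k

print(chi2_red(S_obs, S_err, mbb(nu, T_fit, Mdust_fit), n_free=2))
\end{verbatim}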
The derived SED parameters are listed in Table~\ref{table:properties}. The quoted uncertainties were derived
using the flux density uncertainties. The SED properties of the analysed cores and the derived physical properties of the cores
are discussed in Sects.~4.1 and 4.2, respectively.
\begin{figure*}[!htb]
\begin{center}
\includegraphics[width=0.3\textwidth]{Figures/smm1a.eps}
\includegraphics[width=0.3\textwidth]{Figures/smm1b_2comp.eps}
\includegraphics[width=0.3\textwidth]{Figures/smm2_2comp.eps}
\includegraphics[width=0.3\textwidth]{Figures/smm3.eps}
\includegraphics[width=0.3\textwidth]{Figures/smm4a_2comp.eps}
\includegraphics[width=0.3\textwidth]{Figures/IRAS13037a_2comp.eps}
\includegraphics[width=0.3\textwidth]{Figures/IRAS13037b_2comp.eps}
\includegraphics[width=0.3\textwidth]{Figures/smm6a_2comp.eps}
\includegraphics[width=0.3\textwidth]{Figures/smm7_2comp.eps}
\includegraphics[width=0.3\textwidth]{Figures/IRAS13039a_2comp.eps}
\includegraphics[width=0.3\textwidth]{Figures/smm9_2comp.eps}
\includegraphics[width=0.3\textwidth]{Figures/IRAS13042_2comp.eps}
\caption{Far-IR to submillimetre SEDs of the analysed cores. Data points at $\lambda \leq 70$~$\mu$m are highlighted in red, while the longer wavelength data are shown in blue. The data points shown by green square symbols were not included in the fit (see Sect.~4.1 for details). The downward pointing arrows indicate upper limits (relevant only for some of the \textit{IRAS} flux densities). The black, dashed lines represent the best modified blackbody fits to the data. For the two-temperature modified blackbody fits, the black, dashed line shows the sum of the two components. The blue and red dashed lines show the SED fits to the cold and warm component, respectively. We note that the $y$-axis scale is different for the bright \textit{IRAS} sources compared to the other panels.}
\label{figure:seds}
\end{center}
\end{figure*}
\subsection{Densities}
The volume-averaged H$_2$ number densities of the cores were calculated assuming a spherical geometry and using the formula
\begin{equation}
\label{eqn:density}
n({\rm H_2})=\frac{3M}{4\pi R^3 \mu_{\rm H_2}m_{\rm H}}\,,
\end{equation}
where the radii $R$ were taken from Miettinen (2018, Table~2 therein) and they refer to the
effective core radius, which is defined as $R=\sqrt{A_{\rm proj}/\pi}$, where $A_{\rm proj}$ is the projected area of the core calculated from the total number of pixels that were assigned to the core in the SABOCA map. The quantity $\mu_{\rm H_2}$ is the mean molecular weight per H$_2$ molecule, which in our case is $\mu_{\rm H_2}=2/X=2.82$ (\cite{kauffmann2008}, Appendix~A.1 therein), and $m_{\rm H}$ is the mass of the hydrogen atom.
We also calculated the surface densities of the cores using the formula
\begin{equation}
\label{eqn:surface}
\Sigma=\frac{M}{\pi R^2}\,.
\end{equation}
The core radii and derived densities are listed in the last three columns in Table~\ref{table:properties}. To calculate the values of $n({\rm H_2})$ and $\Sigma$, we used the sum of the masses of the cold and warm component. The uncertainties in $n({\rm H_2})$ and $\Sigma$ were propagated from the mass uncertainties. The surface densities are plotted as a function of core mass in Fig.~\ref{figure:Sigmavsmass}.
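As a check on the tabulated values, the following minimal sketch evaluates Eqs.~(\ref{eqn:density}) and (\ref{eqn:surface}); the example mass and radius correspond roughly to SMM~1a.
\begin{verbatim}
import numpy as np

pc, M_sun, m_H = 3.086e18, 1.989e33, 1.674e-24
mu_H2 = 2.82                      # mean molecular weight per H2 molecule

def densities(M_core, R_pc):
    # Eq. (6): volume-averaged n(H2); Eq. (7): surface density.
    M, R = M_core * M_sun, R_pc * pc
    n_H2 = 3.0 * M / (4.0 * np.pi * R**3 * mu_H2 * m_H)
    Sigma = M / (np.pi * R**2)
    return n_H2, Sigma            # cm^-3 and g cm^-2

print(densities(54.0, 0.11))      # ~1.4e5 cm^-3 and ~0.3 g cm^-2
\end{verbatim}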
\begin{table*}
\caption{SED parameters and physical properties of the sources.}
\begin{minipage}{2\columnwidth}
\centering
\renewcommand{\footnoterule}{}
\label{table:properties}
\begin{tabular}{c c c c c c c c c c}
\hline\hline
Source & $n_{\rm SED}^{\rm fit}$ & $\chi_{\rm red}^2$ & $\lambda_{\rm peak}$ & $T_{\rm dust}$ & $M$ & $L$ & $R$ & $n({\rm H_2})$ & $\Sigma$\\
& & & [$\mu$m] & [K] & [M$_{\sun}$] & [L$_{\sun}$] & [pc] & [$10^5$ cm$^{-3}$] & [g cm$^{-2}$] \\
\hline
SMM 1a & 4 & 6.82 & 199.3 & $14.8\pm0.2$ & $54\pm8$ & $39^{+10}_{-9}$ & 0.11 & $1.4\pm0.2$ & $0.30\pm0.04$\\
SMM 1b & 5 & 5.28 & 230.8 & $13.0\pm0.3$ & $91\pm13$ & $47^{+12}_{-10}$ & 0.10 & $3.1\pm0.4$ & $0.60\pm0.09$ \\
& & & & $60.9\pm0.7$ & $0.01\pm0.001$ & \\
SMM 2 & 6 & 615.90 & 333.4 & $8.8\pm0.9$ & $175\pm47$ & $17^{+11}_{-7}$ & 0.08 & $11.7\pm3.1$ & $1.82\pm0.49$\\
& & & & $28.8\pm0.8$ & $0.3\pm0.1$ & \\
SMM 3 & 4 & 8.21 & 183.9 & $16.1\pm0.4$ & $45\pm8$ & $52^{+19}_{-15}$ & 0.10 & $1.5\pm0.3$ & $0.30\pm0.05$\\
SMM 4a & 4 & \ldots\tablefootmark{a} & 272.8 & $10.3\pm1.7$ & $155\pm60$ & $55^{+73}_{-37}$ & 0.13 & $2.4\pm0.9$ & $0.61\pm0.24$\\
& & & & $42.3\pm1.9$ & $0.12\pm0.06$ & \\
IRAS 13037-6112a & 8 & 12.17 & 125.0 & $21.8\pm0.9$ & $27\pm5$ & $491^{+238}_{-181}$ & 0.10 & $0.9\pm0.2$ & $0.18\pm0.03$\\
& & & & $62.8\pm1.6$ & $0.1\pm0.02$ & \\
IRAS 13037-6112b & 6 & 13.03 & 50.0 & $8.9\pm1.3$ & $131\pm47$ & $170^{+55}_{-43}$ & 0.08 & $8.8\pm3.1$ & $1.36\pm0.49$ \\
& & & & $59.3\pm0.9$ & $0.1\pm0.01$ & \\
SMM 6a & 4 & \ldots\tablefootmark{a} & 157.9 & $17.6\pm3.6$ & $27\pm12$ & $138^{+245}_{-96}$ & 0.12 & $0.5\pm0.2$ & $0.12\pm0.06$\\
& & & & $51.2\pm2.0$ & $0.1\pm0.04$ & \\
SMM 7 & 5 & 399.85 & 115.4 & $8.0\pm1.2$ & $181\pm76$ & $19^{+39}_{-16}$ & 0.08 & $12.1\pm5.1$ & $1.88\pm0.79$\\
& & & & $25.6\pm2.6$ & $0.8\pm0.6$ & \\
IRAS 13039-6108a & 5 & 3.77 & 58.8 & $8.3\pm1.3$ & $369\pm155$ & $1\,145^{+435}_{-356}$ & 0.14 & $4.6\pm1.9$ & $1.26\pm0.53$ \\
& & & & $50.2\pm0.9$ & $1.2\pm0.3$ & \\
SMM 9 & 5 & 12.34 & 187.5 & $12.2\pm1.0$ & $93\pm23$ & $68^{+41}_{-26}$ & 0.10 & $3.3\pm0.8$ & $0.64\pm0.15$\\
& & & & $24.8\pm0.7$ & $3\pm1$ & \\
IRAS 13042-6105 & 7 & 3.58 & 150.0 & $19.6\pm0.7$ & $8\pm2$ & $68^{+25}_{-19}$ & 0.06 & $1.3\pm0.3$ & $0.15\pm0.04$\\
& & & & $64.0\pm0.9$ & $0.01\pm0.001$ & \\
\hline
\end{tabular}
\tablefoot{The parameters given in the table are the number of flux density data points used in the SED fit, reduced $\chi^2$ value defined in Eq.~(\ref{eqn:chi}), wavelength of the peak position of the fitted SED, dust temperature, total (gas+dust) mass, luminosity (Eq.~(\ref{eqn:luminosity})), effective core radius taken from Miettinen (2018), volume-averaged H$_2$ number density (Eq.~(\ref{eqn:density})), and surface density (Eq.~(\ref{eqn:surface})). For the sources whose SED was fitted with a two-temperature model, the dust temperature and mass of the warm component are reported beneath the cold component values.\tablefoottext{a}{The number of flux density data points is equal to the number of free parameters of the model, and hence there are zero degrees of freedom. Therefore, the value of $\chi_{\rm red}^2$ becomes infinite.}}
\end{minipage}
\end{table*}
\begin{figure}[!htb]
\centering
\resizebox{\hsize}{!}{\includegraphics{Figures/Sigmavsmass.eps}}
\caption{Surface density as a function of core mass. The IR dark cores (no \textit{WISE} counterpart) are shown by blue data points, while the red data points show the IR bright cores (cores with a \textit{WISE} counterpart). The \ion{H}{ii} region IRAS~13037-6112a is indicated by a green data point. The two horizontal dashed lines show the threshold surface densities for high-mass star formation proposed by L{\'o}pez-Sepulcre et al. (2010; black) and Urquhart et al. (2014; magenta) when scaled to the present assumptions about the dust properties (see Sect.~4.2).}
\label{figure:Sigmavsmass}
\end{figure}
\subsection{Virial parameter analysis}
To study the dynamical state of the cores, we first calculated their virial masses, $M_{\rm vir}$ (\cite{bertoldi1992}). The cores
were assumed to be spherical with a radial density profile of the form $n(r)\propto r^{-p}$, where the density power-law index, $p$, was used to calculate the correction factor for the effects of a non-uniform density distribution as described in Appendix~A in Bertoldi \& McKee (1992). We adopted a value of $p=1.6$, which corresponds to the mean value derived by Beuther et al. (2002) for their sample of high-mass star-forming clumps. We note that comparable values were found in other studies of high-mass star-forming clumps and cores (e.g. \cite{mueller2002}: $\langle p \rangle=1.8\pm0.4$; \cite{garay2007}: $p=1.5-2.2$; \cite{zhang2009}: $p=1.6$ and $p=2.1$ for two cores in the IRDC G28.34+0.06; \cite{li2019}: $p=0.6-2.1$ with $\langle p \rangle=1.3$). For example, for the $p$ values in the range of $1.5-2$, a given virial mass varies by 34\%.
The total (thermal plus non-thermal), one-dimensional velocity dispersion needed in the calculation of $M_{\rm vir}$
was calculated as described in Fuller \& Myers (1992; see Eq.~(3) therein). In this calculation, we assumed that the gas and dust are collisionally coupled and characterised by the same temperature, that is the gas kinetic temperature was taken to be $T_{\rm kin}=T_{\rm dust}$, which is expected to happen at densities $n({\rm H_2})\gtrsim3.2\times10^4$~cm$^{-3}$ (\cite{goldsmith2001}). Furthermore, we used the temperature of the cold dust component because it dominates the mass in each target core (see Sect.~4.2).
In the present study, we have adopted a $[{\rm He}]/[{\rm H}]$ abundance ratio of $Y/(4X)=0.095$, which together with the assumption that all hydrogen is molecular leads to a mean molecular weight per free particle of $\mu_{\rm p}=2.37$. We note that the very often used value of $\mu_{\rm p}=2.33$ applies for a $[{\rm He}]/[{\rm H}]$ ratio of 0.1 with no metals (see \cite{kauffmann2008}). As the observed spectral linewidth we used the FWHM of the H$^{13}$CO$^+(J=2-1)$ rotational line detected by Miettinen (2020) towards all the parent clumps of the present target cores. The H$^{13}$CO$^+$ linewidths were derived through fitting the hyperfine structure of the line, and the critical density of the $J=2-1$ transition is $n_{\rm crit}=(2.6-3.1)\times10^6$~cm$^{-3}$ at temperatures between 10~K and 20~K (on the basis of the Einstein $A$ coefficient and collision rates given in the Leiden Atomic and Molecular Database (LAMDA\footnote{\url{http://home.strw.leidenuniv.nl/~moldata/}}; \cite{schoier2005})). Hence, H$^{13}$CO$^+(J=2-1)$ is well suited for our purpose as a high-density tracer. In those cases where the Seahorse clump is hosting multiple cores (e.g. SMM~1), the same linewidth was adopted for all cores (e.g. for SMM~1a and 1b; see Table~\ref{table:virial}). We note, however, that the spectral lines on the core scale are expected to be narrower than on the larger clump scale owing to the dissipation of non-thermal, turbulent motions (e.g. \cite{myers1983}; \cite{pineda2010}; \cite{sokolov2018}), and a lower level of gas velocity dispersion would lead to a weaker internal kinetic pressure. The total velocity dispersion also depends on the molecular weight of the observed molecular species, which for H$^{13}$CO$^+$ is $\mu_{\rm mol}=30$.
The virial masses of the cores were used to calculate their virial parameters, $\alpha_{\rm vir}=M_{\rm vir}/M$ (\cite{bertoldi1992}; Eq.~(2.8a) therein). The value of $\alpha_{\rm vir}$ quantifies the relative importance of the core's internal kinetic energy and its gravitational energy, and it can be interpreted so that non-magnetised cores with $\alpha_{\rm vir}<2$ are gravitationally bound, those with $\alpha_{\rm vir}=1$ are in virial equilibrium, and when $\alpha_{\rm vir}<1$ the core is gravitationally unstable (e.g. \cite{russeil2010}; \cite{li2019}).
The aforementioned H$^{13}$CO$^+(J=2-1)$ linewidths, and the derived virial masses and virial parameters are given in Table~\ref{table:virial}. The uncertainties in $M_{\rm vir}$ were propagated from the temperature and linewidth uncertainties, and the uncertainties in $\alpha_{\rm vir}$ were propagated from those of $M_{\rm vir}$ and $M$. The values of $\alpha_{\rm vir}$ are plotted as a function of core mass in Fig.~\ref{figure:alphavsmass}.
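The calculation outlined above can be condensed into the following sketch. Here we assume the Bertoldi \& McKee (1992) form $M_{\rm vir}=5\sigma^2R/(aG)$ with the density-profile correction factor $a=(1-p/3)/(1-2p/5)$, and the total velocity dispersion of Fuller \& Myers (1992); the example input values correspond to SMM~1a.
\begin{verbatim}
import numpy as np

k_B, m_H, G = 1.381e-16, 1.674e-24, 6.674e-8
pc, M_sun = 3.086e18, 1.989e33
mu_p, mu_mol, p = 2.37, 30.0, 1.6      # values adopted in the text

def virial(dv_kms, T_kin, R_pc, M_core):
    # Total 1D velocity dispersion (thermal + non-thermal).
    dv = dv_kms * 1e5
    sigma2 = (dv**2 / (8.0 * np.log(2.0))
              + k_B * T_kin / m_H * (1.0 / mu_p - 1.0 / mu_mol))
    a = (1.0 - p / 3.0) / (1.0 - 2.0 * p / 5.0)
    M_vir = 5.0 * sigma2 * (R_pc * pc) / (a * G) / M_sun
    return M_vir, M_vir / M_core       # M_vir [M_sun] and alpha_vir

print(virial(1.13, 14.8, 0.11, 54.0))  # ~27 M_sun, alpha_vir ~ 0.5
\end{verbatim}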
\begin{table}[H]
\renewcommand{\footnoterule}{}
\caption{H$^{13}$CO$^+(J=2-1)$ linewidths (FWHM), virial masses, and virial parameters of the cores.}
{\normalsize
\begin{minipage}{1\columnwidth}
\centering
\label{table:virial}
\begin{tabular}{c c c c c}
\hline\hline
Source & $\Delta v$\tablefootmark{a} & $M_{\rm vir}$ & $\alpha_{\rm vir}$ \\
& [km~s$^{-1}$] & [M$_{\sun}$] & \\
\hline
SMM 1a & $1.13 \pm 0.03$ & $27\pm5$ & $0.51\pm0.12$\\
SMM 1b & $1.13 \pm 0.03$ & $24\pm4$ & $0.27\pm0.06$\\
SMM 2 & $0.74 \pm 0.07$ & $9\pm2$ & $0.05\pm0.02$\\
SMM 3 & $0.98 \pm 0.05$ & $20\pm5$ & $0.45\pm0.13$\\
SMM 4a & $1.11 \pm 0.03$ & $30\pm4$ & $0.19\pm0.08$\\
IRAS 13037-6112a & $0.83 \pm 0.05$ & $17\pm6$ & $0.64\pm0.27$\\
IRAS 13037-6112b & $0.83 \pm 0.05$ & $11\pm2$ & $0.08\pm0.03$\\
SMM 6a & $0.44 \pm 0.08$ & $10\pm6$ & $0.36\pm0.28$\\
SMM 7 & $0.62 \pm 0.08$ & $7\pm2$ & $0.04\pm0.02$\\
IRAS 13039-6108a & $0.92 \pm 0.03$ & $23\pm4$ & $0.06\pm0.03$ \\
SMM 9 & $1.22 \pm 0.01$ & $28\pm4$ & $0.29\pm0.08$ \\
IRAS 13042-6105 & $0.79 \pm 0.08$ & $9\pm4$ & $1.18\pm0.54$ \\
\hline
\end{tabular}
\tablefoot{\tablefoottext{a}{Taken from Miettinen (2020; Table~A.1 therein).} }
\end{minipage} }
\end{table}
\begin{figure}[!htb]
\centering
\resizebox{\hsize}{!}{\includegraphics{Figures/alphavsmass.eps}}
\caption{Virial parameter as a function of core mass. The two horizontal dashed lines show the virial equilibrium value ($\alpha_{\rm vir}=1$; magenta) and threshold of gravitational boundedness ($\alpha_{\rm vir}=2$; black).}
\label{figure:alphavsmass}
\end{figure}
\section{Discussion}
\subsection{Far-infrared to submillimetre SED characteristics of the Seahorse cores}
A modified blackbody composed of one or two temperature components is a simplified model for the continuum emission of a dense molecular cloud core. Emission at 22~$\mu$m is expected to originate in a warmer dust component closer to the central YSO compared to the colder envelope (see e.g. \cite{beuther2010}; \cite{ragan2012} for the context of \textit{Spitzer} 24~$\mu$m emission). Hence, a two-temperature model is expected to be a reasonable assumption for the IR bright cores in our sample. Our SED analysis was also based on the assumption of optically thin dust emission, but this assumption might be invalid at 22~$\mu$m wavelength. However, adding the dust optical thickness, $\tau$, in Eq.~(\ref{eqn:sed}) would introduce a third unknown variable (per temperature component) in the problem ($T_{\rm dust}$, $M$, $\tau$), and owing to the fairly low number of data points in our SEDs, the SED fits would start to be subject to overfitting (i.e. number of unknown parameters is comparable to the number of data points).
The angular resolution of the \textit{Herschel}/PACS data at 70, 100, and 160~$\mu$m is $9\arcsec-13\arcsec$, while
those of SABOCA and LABOCA are $9\arcsec$ and $19\farcs86$, respectively. Hence, the resolutions from the PACS to the SABOCA wavelength
are similar within a factor of 1.4, and even the LABOCA resolution is only 2.2 times lower than
the highest resolution ($9\arcsec$) data used in the SED fits. However, the \textit{IRAS} data have a much lower angular resolution compared to our other data, but the corresponding flux densities still appear to be in fairly good agreement with other measurements (see below).
As shown in Fig.~\ref{figure:seds}, a one or two-temperature modified blackbody could be well-fitted to the SEDs of the 22~$\mu$m dark cores SMM~1a and SMM~9, and also to the SEDs of the 22~$\mu$m bright, YSO hosting cores SMM~1b, 4a, 6a, and IRAS~13037-6112a, 13037-6112b, 13039-6108a, and 13042-6105. Of the aforementioned cores, SMM~1a was the only source where a single-temperature model was fitted to the observed SED. In the case of SMM~4a and SMM~6a, the number of flux density data points is equal to the number of free parameters of the two-component model, and hence there are zero degrees of freedom. Consequently, the value of $\chi_{\rm red}^2$ of the best-fit SED for these two sources is infinite.
For the 22~$\mu$m bright cores SMM~2 and SMM~7, the observed 22~$\mu$m flux density is not consistent with a two-temperature model fit, which could be an indication of the presence of another, still warmer dust component, or of the failure of the assumptions used in the analysis, for example that of optically thin dust emission. Moreover, for the 22~$\mu$m bright core SMM~3, a two-temperature model fit yielded two overlapping modified Planck curves that appeared similar to the single-temperature model fit shown in Fig.~\ref{figure:seds}. We opted for the single-temperature fit because it yielded a much lower $\chi_{\rm red}^2$ value (8.21) than the poor two-temperature fit ($\chi_{\rm red}^2=745.44$). Also in the case of SMM~3 a more complex model might be needed to explain its 22~$\mu$m emission (more than two dust components or optically thick 22~$\mu$m emission or both), but we note that the 70~$\mu$m data point was not available for SMM~3 so a direct comparison with the cases of SMM~2 and SMM~7 is not feasible.
The SED of IRAS~13037-6112b could not be successfully fit with a two-temperature model when the \textit{IRAS} 60~$\mu$m flux density data point was included in the fit. Hence, we decided to omit that data point from the fit, and this choice was further supported by the presence of a PACS 70~$\mu$m data point that was measured at a much higher (factor of $\sim7$) angular resolution than the \textit{IRAS}
60~$\mu$m data point. For IRAS~13037-6112a, however, a two-temperature model could be fitted even when the 60~$\mu$m data point was included in the fit, although the best-fit model is in much better agreement with the PACS 70~$\mu$m data point. As seen in the SEDs of the \textit{IRAS} sources, the \textit{WISE} 22~$\mu$m and \textit{IRAS} 25~$\mu$m flux densities are in reasonable agreement with each other, the latter being 1.5--1.6 times higher. For IRAS~13039-6108a, a two-temperature model could not fit the \textit{IRAS} 100~$\mu$m data point, and hence we opted for an SED fit where that data point is omitted (the $\chi_{\rm red}^2$ value of the fit also dropped from 9.25 to 3.77 when the 100~$\mu$m data point was not used in the fit). We note that for IRAS~13039-6108a, which is associated with an \ion{H}{ii} region, no counterparts were found from the PACS Point Source Catalogue, which is probably the result of the extended source appearance at the PACS wavelengths (70~$\mu$m--160~$\mu$m; Fig.~\ref{figure:images}). This could explain the high \textit{IRAS} 100~$\mu$m flux density of the source. The source is also extended and associated with a diffuse emission region in the \textit{WISE} 12~$\mu$m and 22~$\mu$m images as shown in Fig.~3 in Miettinen (2018) and Fig.~\ref{figure:images} herein, which is consistent with the \ion{H}{ii} region stage of source evolution where a photodissociation region surrounds the ionised gas bubble. We also note that for IRAS~13037-6112a, 13037-6112b, and 13042-6105, the SED fits are consistent with the \textit{IRAS} flux density upper limits.
One could argue that the adopted Ossenkopf \& Henning (1994) dust model of grains with thin ice mantles is not valid for the warm dust component. For example, the adopted dust opacities in a wavelength range of 20~$\mu$m--100~$\mu$m, which typically brackets the peak of the warm SED component, are on average about $1.6\pm0.1$ times higher than those in the corresponding model of grains without ice mantles. Nevertheless, the usage of the Ossenkopf \& Henning (1994) model of bare dust grains without ice mantles would basically only lead to about 1.6 times higher masses of the warm dust component. Because the latter masses are so low compared to the cold component ($M_{\rm warm}/M_{\rm cold}=1\times10^{-4}-0.03$ with a mean of $5\times10^{-3}$), the usage of the thin ice mantle based parameters seems justified also for the warm component.
\subsection{Physical properties of the Seahorse cores and the potential for high-mass star formation}
The mean (median) values of the derived dust temperatures of the cold component, core masses, luminosities, number densities, and surface densities are $13.3\pm1.4$~K (12.6~K), $113\pm29$~M$_{\sun}$ (94~M$_{\sun}$), $192\pm94$~L$_{\sun}$ (62~L$_{\sun}$), $(4.3\pm1.2)\times10^5$~cm$^{-3}$ ($2.8\times10^5$~cm$^{-3}$), and $0.77\pm0.19$~g~cm$^{-2}$ (0.61~g~cm$^{-2}$). The mean (median) dust temperature and mass of the warm component are $47.0\pm5.0$~K (50.7~K) and $0.6\pm0.3$~M$_{\sun}$ ($0.1$~M$_{\sun}$). As mentioned in Sect.~4.1, the mass of the warm component is only 5 per mille of the cold component's mass on average (the median $M_{\rm warm}/M_{\rm cold}$ ratio is $2.5\times10^{-3}$). As expected, the \ion{H}{ii} region source IRAS~13039-6108a was found to be the most luminous source in our sample ($L=(1.1\pm0.4)\times10^3$~L$_{\sun}$), but surprisingly the second lowest dust temperature of the cold component in our sample was derived for IRAS~13039-6108a ($T_{\rm dust}=8.3\pm1.3$~K). IRAS~13039-6108a also does not show the highest warm component temperature in our sample, but there are five cores that are still warmer ($51.2-64.0$~K compared to 50.2~K for IRAS~13039-6108a).
In Fig.~\ref{figure:luminosityvsmass}, we plot the core luminosity as a function of core mass. The black dashed line plotted in Fig.~\ref{figure:luminosityvsmass} indicates the accretion luminosity defined as
\begin{equation}
\label{eqn:accretion}
L_{\rm acc}=\frac{G\dot{M}_{\rm acc}M_{\star}}{R_{\star}}\,,
\end{equation}
where $\dot{M}_{\rm acc}$ is the mass accretion rate and $M_{\star}$ and $R_{\star}$ are the mass and radius
of the central star. Following Ragan et al. (2012, Fig.~14 therein), we assumed that $\dot{M}_{\rm acc}=10^{-5}$~M$_{\sun}$~yr$^{-1}$,
the stellar mass is 10\% of the parent core mass, and that $R_{\star}=5$~R$_{\sun}$. In general, the YSO evolution in
the $L-M$ diagram shown in Fig.~\ref{figure:luminosityvsmass} occurs towards the top left corner (i.e. the parent core or envelope mass decreases while the luminosity increases; e.g. \cite{saraceno1996}; \cite{molinari2008}). Two IR bright cores in our sample lie close (within a factor of $\sim1.4$) to the aforementioned accretion luminosity trend. Also, the \ion{H}{ii} region IRAS~13039-6108a lies only a factor of 2 below that trend line. Most of the remaining cores lie clearly below that trend, for example the two 22~$\mu$m dark cores
lie approximately one dex below the trend, which could be an indication of a lower mass accretion rate (and/or lower stellar mass and larger central star). Only IRAS 13037-6112a is found to lie (by a factor of 2.9) above the $L_{\rm acc}$ line corresponding to an accretion rate of
$10^{-5}$~M$_{\sun}$~yr$^{-1}$. The latter rate is not sufficient to overcome the radiation pressure barrier of high-mass star formation, which requires values of $\dot{M}_{\rm acc}>10^{-4}$~M$_{\sun}$~yr$^{-1}$, and indeed such high values have been derived for high-mass star-forming objects (e.g. \cite{zinnecker2007}; \cite{motte2018} for reviews, and references therein). See Sect.~4.3 for further discussion.
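For reference, Eq.~(\ref{eqn:accretion}) with the above assumptions ($\dot{M}_{\rm acc}=10^{-5}$~M$_{\sun}$~yr$^{-1}$, $M_{\star}=0.1\times M_{\rm core}$, $R_{\star}=5$~R$_{\sun}$) reduces to a one-line evaluation:
\begin{verbatim}
G, M_sun, R_sun = 6.674e-8, 1.989e33, 6.957e10
L_sun, yr = 3.828e33, 3.156e7

def L_acc(M_core, Mdot=1e-5):
    # Eq. (8) with M_star = 0.1 M_core, R_star = 5 R_sun; Mdot in M_sun/yr.
    return (G * (Mdot * M_sun / yr) * (0.1 * M_core * M_sun)
            / (5.0 * R_sun)) / L_sun

print(L_acc(54.0))  # ~3.4e2 L_sun; SMM 1a (L = 39 L_sun) lies ~1 dex below
\end{verbatim}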
Figure~\ref{figure:massvsradius} shows the core masses as a function of their effective radius. For comparison, the magenta dashed line plotted in the figure shows the empirical mass-radius threshold for high-mass star formation proposed by Kauffmann \& Pillai (2010); see also Kauffmann et al. (2010b). We note that the authors also used the Ossenkopf \& Henning (1994) dust opacities for grains with thin ice mantles coagulating for $10^5$~yr, but the model density was assumed to be $10^6$~cm$^{-3}$ rather than $10^5$~cm$^{-3}$ as we did. In a wavelength range of 350~$\mu$m--1~mm, the former dust opacities are on average $1.285\pm0.002$ times higher than our values. Moreover, Kauffmann \& Pillai (2010) decreased the opacities by a factor of 1.5 to calibrate between the dust extinction and dust emission based mass estimates (see \cite{kauffmann2010a} for details). We used the original Ossenkopf \& Henning (1994) opacities and a factor of 1.41 higher gas-to-dust mass ratio than what seems to have been used in the reference studies of Kauffmann \& Pillai (2010). Taking these differences into account, the Kauffmann \& Pillai (2010) threshold relationship for high-mass star formation can be written as
\begin{equation}
\label{eqn:threshold}
M_{\rm thresh}(R)=1\,051\,{\rm M}_{\sun} \times \left(\frac{R}{{\rm pc}}\right)^{1.33}\,.
\end{equation}
Seven out of our 12 cores (58\%) lie above the Kauffmann \& Pillai (2010) threshold for high-mass star formation. These include five IR bright cores, the \ion{H}{ii} region IRAS~13039-6108a, and the 22~$\mu$m dark core SMM~9, where IRAS~13039-6108a lies furthest above the threshold, that is by a factor of $4.8\pm2.0$. Moreover, the IR bright core SMM~3 and IR dark core SMM~1a lie very close to this critical threshold, namely a factor of $1.09\pm0.19$ and $1.03\pm0.15$ below it, respectively.
Baldeschi et al. (2017) also derived an empirical mass-radius threshold relationship for high-mass star formation (see their Eq.~(9)). The authors assumed that the opacity at 300~$\mu$m is 0.1~cm$^2$~g$^{-1}$, which includes a dust-to-gas mass ratio of 1/100. When scaled to our corresponding assumptions, the Baldeschi et al. (2017) threshold becomes
\begin{equation}
\label{eqn:threshold2}
M_{\rm thresh}(R)=1\,732\,{\rm M}_{\sun} \times \left(\frac{R}{{\rm pc}}\right)^{1.42}\,.
\end{equation}
As illustrated in Fig.~\ref{figure:massvsradius}, the Baldeschi et al. (2017) threshold lies above the Kauffmann \& Pillai (2010) threshold by $1.65\times (R/{\rm pc})^{1.0677}$. All the seven cores that lie above the Kauffmann \& Pillai (2010) threshold, also lie above the Baldeschi et al. (2017) threshold. For example, IRAS~13039-6108a fulfils the latter criterion by lying a factor of $3.5\pm1.5$ above it.
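The scaled thresholds of Eqs.~(\ref{eqn:threshold}) and (\ref{eqn:threshold2}) can be checked directly; for example, for IRAS~13039-6108a ($M=369$~M$_{\sun}$, $R=0.14$~pc):
\begin{verbatim}
def M_thresh_KP10(R_pc):
    return 1051.0 * R_pc**1.33    # Eq. (9)

def M_thresh_B17(R_pc):
    return 1732.0 * R_pc**1.42    # Eq. (10)

# IRAS 13039-6108a: factors of ~4.8 and ~3.5 above the two thresholds.
print(369.0 / M_thresh_KP10(0.14), 369.0 / M_thresh_B17(0.14))
\end{verbatim}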
As illustrated in Fig.~\ref{figure:alphavsmass}, all the target cores appear to be gravitationally bound ($\alpha_{\rm vir}<2$). We note that the 22~$\mu$m dark cores SMM~1a and SMM~9 could harbour YSO(s) that are too weak to be detected with \textit{WISE} (e.g. the $5\sigma$ point-source sensitivity in the W4 band is 5.4~mJy; \cite{wright2010}), and indeed the PACS 70~$\mu$m observations suggest that the cores already have ongoing central heating (although the 70~$\mu$m counterpart of SMM~1a could not be found from the PACS catalogue; see Appendix~A). In other words, SMM~1a and SMM~9 are unlikely to be prestellar cores.
To quantitatively estimate the mass of the most massive star that could form in the target cores, we used the Kroupa (2001) stellar initial mass function based formula from Svoboda et al. (2016, their Eq.~(10); see also \cite{sanhueza2017}, Appendix therein), which is given by
\begin{equation}
\label{eqn:stellar}
M_{\star}^{\rm max}=20\,{\rm M}_{\sun}\times \left(\frac{\epsilon_{\rm SF}}{0.3}\times \frac{M_{\rm core}}{1\,064\,{\rm M}_{\sun}} \right)^{1/1.3}\,,
\end{equation}
where $\epsilon_{\rm SF}$ is the star formation efficiency (SFE). Adopting a value of $\epsilon_{\rm SF}=0.3$ (e.g. \cite{lada2003} for a review; \cite{alves2007}), Eq.~(\ref{eqn:stellar}) yields maximum stellar masses of $M_{\star}^{\rm max}=0.5\pm0.1 - 5.1\pm1.7$~M$_{\sun}$ for the IR bright cores, $2.0\pm0.2$~M$_{\sun}$ and $3.1\pm0.6$~M$_{\sun}$ for the 22~$\mu$m dark cores SMM~1a and SMM~9, and $8.9\pm2.9$~M$_{\sun}$ for IRAS~13039-6108a. This suggests that the IR bright and IR dark cores in the Seahorse IRDC could give birth to multiple systems of low-mass stars ($\sim0.1-2$~M$_{\sun}$) or intermediate-mass stars ($\sim2-8$~M$_{\sun}$) rather than collapse to form a massive star of at least 8~M$_{\sun}$, which only seems to be the case in the already known high-mass star-forming object IRAS~13039-6108a. We note that according to Eq.~(\ref{eqn:stellar}), high-mass star formation requires at least a 320~M$_{\sun}$ core for $\epsilon_{\rm SF}=0.3$. On the other hand, in the competitive accretion paradigm the cores like SMM~1a and SMM~1b could competitively accrete more mass from their parent clump SMM~1, and this process has the potential to lead to the formation of a massive star at the centre of the gravitational potential well (e.g. \cite{bonnell2006}).
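A direct evaluation of Eq.~(\ref{eqn:stellar}) for the most massive core in our sample illustrates the calculation:
\begin{verbatim}
def M_star_max(M_core, eps_SF=0.3):
    # Eq. (11): maximum stellar mass for a Kroupa IMF.
    return 20.0 * (eps_SF / 0.3 * M_core / 1064.0)**(1.0 / 1.3)

print(M_star_max(369.0))   # ~8.9 M_sun for IRAS 13039-6108a
\end{verbatim}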
To further assess our core sample's potential for high-mass star formation, in Fig.~\ref{figure:Sigmavsmass} we plot their surface densities
as a function of core mass. For comparison, two threshold surface densities for high-mass star formation are also shown in Fig.~\ref{figure:Sigmavsmass}. L{\'o}pez-Sepulcre et al. (2010) determined an empirical surface density threshold for massive star formation of $\Sigma_{\rm thres}=0.3$~g~cm$^{-2}$ on the basis of the outflow characteristics of their sample. The authors assumed that the dust opacity is 1~cm$^2$~g$^{-1}$ at 1.2~mm and that $R_{\rm dg}=1/100$. When scaled to the present assumptions, the L{\'o}pez-Sepulcre et al. (2010) surface density threshold becomes $\Sigma_{\rm thres}=0.4$~g~cm$^{-2}$. On the other hand, on the basis of a large sample of $\sim1\,700$ massive YSOs in $\sim1\,300$ clumps drawn from the APEX Telescope Large Area Survey of the Galaxy (ATLASGAL; \cite{schuller2009}), Urquhart et al. (2014) found that a surface density of 0.05~g~cm$^{-2}$ might set a minimum threshold for high-mass star formation. Again, when taking into account that Urquhart et al. (2014) assumed different dust properties than we did ($\kappa_{\rm 870\, \mu m}=1.85$~cm$^2$~g$^{-1}$ and $R_{\rm dg}=1/100$), we scaled their reported surface density threshold to 0.095~g~cm$^{-2}$. Seven (58\%) of the target cores
lie above the L{\'o}pez-Sepulcre et al. (2010) threshold by factors of $1.5\pm0.2 - 4.7\pm2.0$. The latter percentage is the same as for the cores that fulfil the Kauffmann \& Pillai (2010) and Baldeschi et al. (2017) thresholds for high-mass star formation (the cores in question are the same in both cases). All the cores are found to lie above the Urquhart et al. (2014) threshold by factors of $1.3\pm0.6 - 19.8\pm8.3$, where five cores lie in between the two $\Sigma_{\rm thres}$ values.
\begin{figure}[!htb]
\centering
\resizebox{\hsize}{!}{\includegraphics{Figures/luminosityvsmass.eps}}
\caption{Core luminosity plotted against the core mass. For comparison, a sample of ten IR dark and IR bright cores in the Snake IRDC from Henning et al. (2010) are shown with empty circles (see Sect.~4.3 and Appendix~B). The black dashed line shows the accretion luminosity with a mass accretion rate of $\dot{M}_{\rm acc}=10^{-5}$~M$_{\sun}$~yr$^{-1}$ and assuming a stellar mass of $M_{\star}=0.1\times M_{\rm core}$ and stellar radius of $R_{\star}=5$~R$_{\sun}$ (see Eq.~(\ref{eqn:accretion})). The magenta dashed lines above and below the aforementioned line show a $\pm1$~dex variation of the accretion luminosity in question.}
\label{figure:luminosityvsmass}
\end{figure}
\begin{figure}[!htb]
\centering
\resizebox{\hsize}{!}{\includegraphics{Figures/massvsradius.eps}}
\caption{Core mass plotted against the effective core radius. The magenta and black dashed lines indicate the empirical mass-radius thresholds for high-mass star formation proposed by Kauffmann \& Pillai (2010) and Baldeschi et al. (2017), respectively. When scaled to the assumptions adopted in the present study, these thresholds are given by $M_{\rm thresh}(R) = 1\,051\,{\rm M}_{\sun} \times (R/{\rm pc})^{1.33}$ and $M_{\rm thresh}(R) = 1\,732\,{\rm M}_{\sun} \times (R/{\rm pc})^{1.42}$ (see Sect.~4.2 for details).}
\label{figure:massvsradius}
\end{figure}
\subsection{Comparison with the properties of dense cores in the Snake IRDC}
As discussed in Miettinen (2018), the Seahorse IRDC shares some similarities with the Snake IRDC G11.11-0.12. Both clouds are
filamentary in their projected morphology, and they appear to have been hierarchically fragmented into substructures (e.g. \cite{ragan2015}).
Altogether 17 cores were detected with SABOCA in the Seahorse IRDC, and three LABOCA clumps (SMM~5, SMM~8, and BLOB~1) were resolved out in the SABOCA map (\cite{miettinen2018}). Because 14 of these 20 sources are associated with a \textit{WISE} 22~$\mu$m source, the percentage of 22~$\mu$m bright cores is $70\%\pm19\%$, where the quoted uncertainty represents the Poisson error on counting statistics (here, SMM~7 is counted as IR bright, in contrast to the studies by Miettinen (2018, 2020); see Table~\ref{table:sample} and Appendix~A). Henning et al. (2010) found that 11 out of their 18 cores that are found along the Snake filament are associated with \textit{Spitzer} 24~$\mu$m emission, which makes the percentage of IR bright cores $61\%\pm18\%$, comparable to that in the Seahorse. Accounting for the off-filament cores in the Snake, the aforementioned percentage becomes even more similar to ours (i.e. $67\%\pm17\%$; \cite{henning2010}, Table~1 therein). The differences are that the Snake IRDC lies at a distance of 3.6~kpc (i.e. a factor of 1.4 further away than the Seahorse) and has a projected length of about 30~pc, and it is also very massive, $\sim10^5$~M$_{\sun}$ (\cite{kainulainen2013}; \cite{lin2017}). The Seahorse filament is about 9~pc long and has a mass of $\sim10^3$~M$_{\sun}$ (\cite{miettinen2018}).
We note that Henning et al. (2010) applied a similar SED analysis to derive the Snake IRDC's core properties as in the present study (i.e. modified blackbody fitting), and hence we took their sample as our main comparison sample of IRDC cores. However, for a better comparison with the present results, we re-analysed the SEDs of the Snake IRDC cores using the same method and assumptions as in the present study (see Appendix~B for details).
In Fig.~\ref{figure:distributions}, we show the distributions of dust temperature, mass, and luminosity of the cores in both the Seahorse IRDC and the Snake IRDC, where the latter values are listed in Table~\ref{table:snake} (the ten out of 18 on-filament sources for which SED analysis could be done). The mean (median) values of these quantities for the Snake cores are $\langle T_{\rm dust}^{\rm cold}\rangle=19.6\pm1.1$~K (19~K), $\langle T_{\rm dust}^{\rm warm}\rangle=50.9\pm3.0$~K (50~K), $\langle M\rangle = 49\pm28$~M$_{\sun}$ (14~M$_{\sun}$), and $\langle L\rangle =349\pm267$~L$_{\sun}$ (71~L$_{\sun}$). These values are $1.5\pm0.2$ (1.5), $1.1\pm0.1$ (1.0), $0.4\pm0.3$ (0.1), and $1.8\pm1.6$ (1.2) times those for the Seahorse cores (see Sect.~4.2). The Snake cores appear somewhat warmer (in the cold component), less massive (by a factor of $2.5\pm1.9$ on average), and about 80\% more luminous on average (while the median luminosities differ by only a factor of 1.2). However, at least partly these differences can be attributed to our inclusion of the longer wavelength data (350~$\mu$m and 870~$\mu$m), which leads to lower temperatures of the cold component (hence higher mass).
Interestingly, the Snake IRDC also contains one IR bright core, the P1 core (i.e. core no.~9), with a luminosity of about
$2.7^{+0.8}_{-0.7}\times10^3$~L$_{\sun}$, which is comparable to (a factor of $2.5^{+2.5}_{-1.2}$ larger than) the one we derived for IRAS~13039-6108a. The Snake P1 also has a mass similar to that of IRAS~13039-6108a (a factor of $1.3\pm0.6$ difference). We note that the Snake P1 is known to be associated with 6.7~GHz Class~II methanol (CH$_3$OH) maser and 22~GHz water (H$_2$O) maser emission, which are signposts of high-mass star formation (\cite{pillai2006}; \cite{wang2014}).
In Fig.~\ref{figure:luminosityvsmass}, we also plot the luminosities and masses of the analysed core sample in the Snake IRDC filament. Three out of the nine IR bright Snake cores (33.3\%) lie close (within a factor of 1.5) to the line of accretion luminosity with $\dot{M}_{\rm acc}=10^{-5}$~M$_{\sun}$~yr$^{-1}$, although we note that the associated uncertainties are large. This is comparable to the corresponding percentage in our sample, that is two out of nine (22\%) IR bright cores lie within a factor of $\sim1.4$ of the trend; see Sect.~4.2. The one analysed IR dark core in the Snake filament lies within 1~dex below the aforementioned $L-M$ relationship, just like the two IR (22~$\mu$m) dark cores in our sample. Overall, the Seahorse and Snake cores' properties suggest that the Seahorse and Snake IRDCs are in comparable evolutionary stages, at least in terms of their star formation phase.
\begin{figure*}[!htb]
\begin{center}
\includegraphics[width=0.4\textwidth]{Figures/T_distribution.eps}
\includegraphics[width=0.4\textwidth]{Figures/T_distribution2.eps}
\includegraphics[width=0.4\textwidth]{Figures/M_distribution.eps}
\includegraphics[width=0.4\textwidth]{Figures/L_distribution.eps}
\caption{Distributions of dust temperature, mass, and luminosity of the cores in the Seahorse IRDC (blue histogram; present study) and Snake IRDC (red histogram; \cite{henning2010}; Appendix~B herein). The vertical dashed lines indicate the sample medians.}
\label{figure:distributions}
\end{center}
\end{figure*}
\subsection{Core fragmentation}
In the paper by Miettinen (2018), we studied the fragmentation of the clumps in the Seahorse IRDC into cores, and in this section
we address the fragmentation of the cores into still smaller units. As discussed in Appendix~A, five out of the 12 analysed cores (42\%) show evidence of fragmentation in the SABOCA 350~$\mu$m image. The observed, projected separations between the fragments range from 0.09~pc in SMM~7 to 0.21~pc in SMM~4a. We note that all these systems were extracted as single sources in the source extraction analysis by Miettinen (2018).
The observed fragment separations can be compared with the thermal Jeans length of the core, $\lambda_{\rm J}\propto T_{\rm kin}^{1/2}\rho^{-1/2}$ (e.g. \cite{mckee2007} for a review, Eq.~(20) therein). Again, we assumed that $T_{\rm kin}=T_{\rm dust}$ (for the cold component), and the mass density needed in the calculation was defined as $\rho=\mu_{\rm H_2}m_{\rm H}n({\rm H_2})$ to be consistent with Eq.~(\ref{eqn:density}). The thermal Jeans mass of the cores was calculated using the definition in McKee \& Ostriker (2007), that is
$M_{\rm J} \propto (\lambda_{\rm J}/2)^3\rho$.
The SED-based core masses were compared with the Jeans mass by calculating the Jeans number, $N_{\rm J}=M/M_{\rm J}$. The aforementioned fragmentation parameters are listed in Table~\ref{table:fragmentation}. The uncertainties in the parameters were propagated from those in the temperature, density, and mass.
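A minimal Python sketch of this Jeans analysis is given below; the explicit forms $\lambda_{\rm J}=c_{\rm s}\,(\pi/G\rho)^{1/2}$ and $c_{\rm s}=(k_{\rm B}T_{\rm kin}/\mu_{\rm p}m_{\rm H})^{1/2}$ with $\mu_{\rm p}=2.37$ are assumptions adopted for illustration, and the input values are placeholders taken from the sample means quoted in the summary, so the output does not reproduce the Table~\ref{table:fragmentation} entries exactly.
\begin{verbatim}
import numpy as np

# Physical constants in cgs units
k_B, m_H, G = 1.380649e-16, 1.6726e-24, 6.674e-8
pc, M_sun = 3.0857e18, 1.989e33
mu_H2, mu_p = 2.82, 2.37  # per H2 molecule / per free particle (assumed)

def jeans_parameters(T_kin, n_H2, M_core):
    """Return (lambda_J [pc], M_J [Msun], N_J) for a core."""
    rho = mu_H2 * m_H * n_H2                    # mass density
    c_s = np.sqrt(k_B * T_kin / (mu_p * m_H))   # isothermal sound speed
    lam_J = c_s * np.sqrt(np.pi / (G * rho))    # thermal Jeans length
    M_J = (4.0 * np.pi / 3.0) * rho * (lam_J / 2.0)**3  # thermal Jeans mass
    return lam_J / pc, M_J / M_sun, M_core / (M_J / M_sun)

# Placeholder inputs (sample mean values from Sect. 5):
print(jeans_parameters(T_kin=13.3, n_H2=4.3e5, M_core=113.0))
\end{verbatim}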
As seen in Table~\ref{table:fragmentation}, the observed fragment separation in SMM~6a is comparable to its thermal Jeans length (the ratio between the two is $1.4\pm0.3$). For the remaining four cores, the observed fragment separation is $3.0\pm0.2$ to $6.4\pm1.4$ times the thermal Jeans length, which could be an indication that the derived temperature of the cold dust component underestimates the parent core temperature (e.g. $8.3\pm1.3$~K for IRAS~13039-6108a seems very low) or that non-thermal motions also contribute to the Jeans instability. For these sources, we also calculated the Jeans length where the thermal sound speed, $c_{\rm s}$, is replaced by the effective sound speed defined as $c_{\rm s,\, eff}^2=c_{\rm s}^2+\sigma_{\rm NT}^2$ (e.g. \cite{bonazzola1987}; \cite{maclow2004} for a review). The one-dimensional non-thermal velocity dispersion is often written under the assumption of isotropy as $\sigma_{\rm NT}^2=1/3 \times {\rm v}_{\rm rms}^2$, where ${\rm v}_{\rm rms}$ is the three-dimensional, rms turbulent velocity. Using the H$^{13}$CO$^+(J=2-1)$ linewidths (column~(2) in Table~\ref{table:virial}), the effective sound speed increases the Jeans lengths in SMM~1a, 4a, and 7 and IRAS~13039-6108a to 0.14~pc, 0.11~pc, 0.03~pc, and 0.06~pc, respectively. These are 1.5--3 times the corresponding thermal Jeans lengths, and 33\%--74\% of the observed fragment separations. We note that magnetic pressure would also contribute to the effective sound speed (e.g. \cite{mckee2003}), and could therefore also contribute to the core fragmentation.
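The non-thermal correction can be sketched as follows; the Gaussian FWHM-to-dispersion conversion and the tracer molecular weight $\mu_{\rm mol}\simeq30$ (for H$^{13}$CO$^+$) are illustrative assumptions, since the text above specifies only $\sigma_{\rm NT}^2=(1/3)\,{\rm v}_{\rm rms}^2$.
\begin{verbatim}
import numpy as np

k_B, m_H = 1.380649e-16, 1.6726e-24  # cgs

def c_s_eff(T_kin, dv_fwhm_kms, mu_p=2.37, mu_mol=30.0):
    """Effective sound speed [km/s] from an observed FWHM linewidth."""
    sigma_obs = dv_fwhm_kms * 1e5 / np.sqrt(8.0 * np.log(2.0))
    sigma_th = np.sqrt(k_B * T_kin / (mu_mol * m_H))  # tracer thermal width
    sigma_NT2 = max(sigma_obs**2 - sigma_th**2, 0.0)  # non-thermal part
    c_s = np.sqrt(k_B * T_kin / (mu_p * m_H))
    return np.sqrt(c_s**2 + sigma_NT2) / 1e5

# Replacing c_s by c_s_eff rescales lambda_J by the factor c_s_eff/c_s.
print(c_s_eff(T_kin=13.3, dv_fwhm_kms=0.5))  # placeholder linewidth
\end{verbatim}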
As seen in column~(6) in Table~\ref{table:fragmentation}, the cores contain multiple thermal Jeans masses, although the Jeans numbers are associated with significant uncertainties. The highest Jeans numbers are found for SMM~1a, 4a, and 7 and IRAS~13039-6108a, but those would be decreased by factors of 3.4--21 (e.g. to $N_{\rm J}=163\pm145$ for IRAS~13039-6108a) if the non-thermal Jeans lengths are used to derive the corresponding Jeans masses. Nevertheless, it is possible that the studied cores are fragmented into multiple condensations, and this hypothesis can be tested by high-resolution observations with telescopes like the Atacama Large Millimetre/submillimetre Array (ALMA; \cite{wootten2009}).
In conclusion, the analysed cores exhibit heterogeneous fragmentation properties: SMM~6a is consistent with pure thermal Jeans fragmentation, while the other four cores listed in Table~\ref{table:fragmentation} appear to require mechanism(s) beyond gravity and thermal pressure support in their fragmentation. Regardless of the physical mechanism by which the cores fragmented, their physical properties derived from the SEDs should be interpreted as those of systems of at least two fragments, and this has direct consequences for the cores' ability to form either massive stars or multiple stellar systems.
\begin{table*}
\caption{Core fragment separations and the Jeans analysis parameters.}
\begin{minipage}{2\columnwidth}
\centering
\renewcommand{\footnoterule}{}
\label{table:fragmentation}
\begin{tabular}{c c c c c c}
\hline\hline
Source & $d_{\rm sep}$ & $\lambda_{\rm J}$ & $d_{\rm sep}/\lambda_{\rm J}$ & $M_{\rm J}$ & $N_{\rm J}$\\
& [pc] & [pc] & & [M$_{\sun}$] & \\
\hline
SMM 1a & 0.19 & $0.06\pm0.004$ & $3.0\pm0.2$ & $1.2\pm0.3$ & $44\pm13$\\
SMM 4a & 0.21 & $0.04\pm0.01$ & $5.2\pm1.1$ & $0.5\pm0.4$ & $289\pm234$\\
SMM 6a & 0.16 & $0.11\pm0.03$ & $1.4\pm0.3$ & $2.7\pm2.1$ & $10\pm9$\\
SMM 7 & 0.09 & $0.02\pm0.003$ & $5.9\pm1.3$ & $0.2\pm0.1$ & $1\,095\pm982$\\
IRAS 13039-6108a & 0.17 & $0.03\pm0.006$ & $6.4\pm1.4$ & $0.3\pm0.2$ & $1\,307 \pm 1\,158$\\
\hline
\end{tabular}
\tablefoot{The listed parameters are the observed, projected separation between the core fragments ($d_{\rm sep}$), thermal Jeans length ($\lambda_{\rm J}$), ratio between the latter two separations ($d_{\rm sep}/\lambda_{\rm J}$), thermal Jeans mass ($M_{\rm J}$), and the Jeans number defined as $N_{\rm J}=M/M_{\rm J}$.}
\end{minipage}
\end{table*}
\section{Summary and conclusions}
We used data from the \textit{WISE}, \textit{IRAS}, and \textit{Herschel} satellites together with our previous submillimetre
dust continuum observations with the APEX telescope to construct the far-IR to submillimetre SEDs of the SABOCA 350~$\mu$m selected
cores in the Seahorse IRDC G304.74+01.32. The SEDs were fitted using single and two-temperature modified blackbody models. Our main results are summarised as follows:
\begin{enumerate}
\item For the 12 analysed cores, out of which two are IR dark (no \textit{WISE} detection), the mean values of the derived dust temperatures for the cold (warm) component, masses, luminosities, H$_2$ number densities, and surface densities were found to be $13.3\pm1.4$~K ($47.0\pm5.0$~K), $113\pm29$~M$_{\sun}$, $192\pm94$~L$_{\sun}$, $(4.3\pm1.2)\times10^5$~cm$^{-3}$, and $0.77\pm0.19$~g~cm$^{-2}$. All the cores in our sample were found to be gravitationally bound ($\alpha_{\rm vir}<2$).
\item The most luminous source ($L=(1.1\pm0.4)\times10^3$~L$_{\sun}$) in our sample was found to be IRAS~13039-6108a, which is known to be in the \ion{H}{ii} region stage of evolution.
\item Two out of the nine analysed IR bright cores (22\%) were found to have luminosities that are consistent with the accretion luminosity where the mass accretion rate was assumed to be $10^{-5}$~M$_{\sun}$~yr$^{-1}$, the stellar mass was fixed at 10\% of the parent core mass, and the radius of the central star was assumed to be $5$~R$_{\sun}$. Most of the remaining cores (6 out of 10) were found to lie within 1~dex below the aforementioned accretion luminosity value.
\item Seven out of the 12 analysed cores (58\%) were found to lie above the mass-radius thresholds of high-mass star formation presented by Kauffmann \& Pillai (2010) and Baldeschi et al. (2017). The same seven cores were derived to have mass surface densities of $>0.4$~g~cm$^{-2}$ that also make them potential high-mass star-forming cores. Hence, in addition to IRAS~13039-6108a, the Seahorse IRDC is potentially hosting substructures capable of forming high-mass stars.
\item The average dust temperatures and luminosities of dense cores in the Seahorse IRDC were found to be fairly similar (within a factor of
$\sim1.8$) to those in the well-studied Snake IRDC G11.11-0.12, which is also known to host a high-mass star-forming object (the P1 region) and a number of lower mass cores. The Snake cores were found to be about 2.5 times less massive on average than the Seahorse cores, but at least part of the aforementioned differences can be explained by the fact that we also included the available submillimetre wavelength data in the SED fits of the Seahorse cores.
\item Five out of the 12 analysed cores (42\%; SMM~1a, 4a, 6a, 7, and IRAS~13039-6108a) show evidence of fragmentation in the SABOCA 350~$\mu$m image, and the fragment separation in SMM~6a is consistent with thermal Jeans fragmentation (i.e. $d_{\rm sep}=(1.4\pm0.3)\times \lambda_{\rm J}$), while the other four cores appear to require non-thermal fragmentation processes.
\end{enumerate}
Although the Seahorse IRDC lies about $1\fdg3$ or $\sim60$~pc above the Galactic plane, it appears to have comparable star-forming properties to the \textit{Spitzer}-selected filamentary IRDCs in or closer to the Galactic plane. More detailed studies of the Seahorse core fragments and implications for the cores' ability to form massive stars require high-resolution follow-up observations. In particular, (sub-)millimetre dust continuum and molecular spectral line imaging of the cores with instruments like ALMA would be the next natural step in studying the Seahorse IRDC filament.
\begin{acknowledgements}
I would like to thank the anonymous referee for providing comments and suggestions. This research has made use of NASA's Astrophysics Data System Bibliographic Services, the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration, and {\tt Astropy}\footnote{\url{http://www.astropy.org}}, a community-developed core Python package for Astronomy (\cite{astropy2013}, \cite{astropy2018}). This publication makes use of data products from the \textit{Wide-field Infrared Survey Explorer}, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, and NEOWISE, which is a project of the Jet Propulsion Laboratory/California Institute of Technology. \textit{WISE} and NEOWISE are funded by the National Aeronautics and Space Administration.
\end{acknowledgements}
|
2,869,038,153,776 | arxiv | \section{Introduction}
In quantum mechanics, the formulation of quantum system states and quantum observables \cite{Dirac,Schrod,Landau,Neumann} uses the notion of state vectors $|\psi\rangle$ in the Hilbert space and of the state density operators acting in the Hilbert space, and quantum observables are identified with the Hermitian operators acting in this space. Different representations of the state vectors and density operators were constructed, in the form of wave functions or density matrices as well as in the form of quasidistributions on the system phase space, like the Wigner function \cite{Wig32}, the Husimi function \cite{Husimi40}, and the Glauber--Sudarshan function \cite{Glauber63,Sud563}. The probability representation of quantum states, where the states are identified with fair probability distributions, was introduced both for continuous variables \cite{Mancini}-\cite{OVVILasRes} and discrete spin variables \cite{Dod,OlgaJETP,Bregence,Weigert1,Wiegert2,Painini}; see review \cite{MarmoPhysScr15t02015}. The problems of formulating quantum mechanics in different representations are particularly associated with the intention to find a formulation as close as possible to classical intuition and to the classical understanding of what a state and an observable are in classical physics.
The aim of this work is to present the formulation of the notions of quantum states and observables on the example of the spin--1/2 system (two--level atom, qubit), using the model of three classical--like coins and classical--like observables related to games with the coins. In fact, we consider an analog of formal ``hidden variables'' for the spin--1/2 system. A contemporary review of hidden variables in quantum mechanics is given by Genovese \cite{Genovese}. We construct in explicit form the bijective map of the density matrix of qubit states and of the quantum observables described by Hermitian $2\times2$ matrices onto the probability distributions describing the positions of the three coins ``up'' or ``down'' and the classical--like observables associated with the rules of the usual game with coin tossing, respectively. The geometry of the qubit state in this construction corresponds to the map of the Bloch sphere geometry \cite{ChuangNelson} onto the triangle geometry illustrated by the triada of Malevich's squares on the plane \cite{Chernega1,Chernega2,Chernega3}, called the quantum suprematism representation of spin--1/2 states \cite{PhysScr2018,Entr2018,confScr2018,MAVI2018Turin}. Different ideas to construct formulations and geometries of quantum states closer to the classical picture of the system behavior were discussed earlier, e.g., in \cite{Wooters,Mielnik}.
This paper is organized as follows. In Section 2, we review the quantum suprematism approach to spin--1/2 states. In Section 3, we construct a map of the spin--1/2 (qubit, two--level atom) observables onto the classical--like coin observables. We also obtain the formulas connecting the quantum observable statistics with the classical--coin statistics in terms of the coin probabilities and coin--observable correlations. The conclusions are presented in Section 4.
\section{Qubit states in quantum suprematism picture}
The Hermitian $2\times2$ density matrix $\rho$ of the spin--1/2 state in the basis $|m\rangle$, where $m=\pm1/2$ is the projection of the spin onto the $z$ axis, reads
\begin{equation}\label{eq.1}
\rho=
\left(\begin{array}{cc}
\rho_{1/2, 1/2}&\rho_{1/2, -1/2}\\
\rho_{-1/2, 1/2}&\rho_{-1/2, -1/2}
\end{array}\right).
\end{equation}
Following \cite{Ventrig2017}, we use the notation $p_1,\,p_2,\,p_3$ for the probabilities to obtain, in the state (\ref{eq.1}), the spin projection $m=+1/2$ onto the axes $x,\,y,\,z$, respectively; these probabilities can be expressed as
\begin{equation}\label{eq.2}
p_k=\mbox{Tr}\left(\rho\rho_k\right),\quad k=1,2,3,
\end{equation}
where $\rho_k=|\psi_k\rangle\langle \psi_k|$ are the density matrices of pure states with the state vectors
\begin{equation}\label{eq.3}
|\psi_1\rangle=\left(\begin{array}{c}
1/\sqrt2\\1/\sqrt2\end{array}\right),\quad |\psi_2\rangle=\left(\begin{array}{c}
1/\sqrt2\\i/
\sqrt2\end{array}\right),\quad |\psi_3\rangle=\left(\begin{array}{c}
1\\0\end{array}\right).
\end{equation}
In view of (\ref{eq.2}) and (\ref{eq.3}), we obtain the expression for the density matrix (\ref{eq.1}) in terms of the probabilities $p_1,\,p_2,\,p_3$, i.e.,
\begin{equation}\label{eq.4}
\rho=\left(\begin{array}{cc}
p_3&p_1-1/2-i(p_2-1/2)\\p_1-1/2+i(p_2-1/2)&1-p_3\end{array}\right).
\end{equation}
This relation means that we construct an invertible map of the density matrix $\rho$ onto the 3--vector with probability components $(p_1,p_2,p_3)=\vec{\cal P}$, i.e.,
\begin{equation}\label{eq.5}
\rho\leftrightarrow\vec{\cal P},
\end{equation}
where each of the probability vectors introduced below has components summing to unity. This relation demonstrates that the spin--1/2 state is determined by three probability distributions given by the probability vectors
\begin{equation}\label{eq.6}
\vec P_1=(p_1,1-p_1),\,\vec P_2=(p_2,1-p_2),\,\vec P_3=(p_3,1-p_3).
\end{equation}
The probability vectors are not independent. Since the density matrix (\ref{eq.4}) must have nonnegative eigenvalues, the probabilities $p_1,\,p_2,\,p_3$ should satisfy the inequality
\begin{equation}\label{eq.7}
(p_1-1/2)^2+(p_2-1/2)^2+(p_3-1/2)^2\leq 1/4.
\end{equation}
For pure states $\mbox{Tr}\rho^2\,=1$, the inequality is converted into the equality
\begin{equation}\label{eq.8}
(p_1-1/2)^2+(p_2-1/2)^2=p_3(1-p_3).
\end{equation}
Since $p_3(1-p_3)=1/4-(p_3-1/2)^2$, the relation (\ref{eq.8}) is symmetric with respect to the permutation $1\rightarrow2\rightarrow3$. Condition (\ref{eq.7}) reflects the presence of quantum correlations in the qubit system.
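As a simple numerical illustration (not part of the original derivation), the following Python sketch builds the matrix (\ref{eq.4}) from a probability triple and confirms that its eigenvalues are nonnegative exactly when inequality (\ref{eq.7}) holds; the sample triples are arbitrary.
\begin{verbatim}
import numpy as np

def rho_from_probs(p1, p2, p3):
    """Density matrix of Eq. (4) built from the three probabilities."""
    off = (p1 - 0.5) - 1j * (p2 - 0.5)
    return np.array([[p3, off], [np.conj(off), 1.0 - p3]])

def satisfies_ineq_7(p1, p2, p3):
    """Inequality (7): the point lies in the ball of radius 1/2."""
    return (p1-0.5)**2 + (p2-0.5)**2 + (p3-0.5)**2 <= 0.25

for p in [(0.5, 0.5, 1.0), (0.9, 0.9, 0.9)]:
    evals = np.linalg.eigvalsh(rho_from_probs(*p))
    print(p, satisfies_ineq_7(*p), np.all(evals >= -1e-12))
\end{verbatim}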
Let us now consider three classical--like independent nonideal coins. Tossing these coins, we get the three probability distributions (\ref{eq.6}). But for independent classical--like coins, the nonnegativity condition of matrix (\ref{eq.4}) need not hold. The quantumlike description of classical system states was studied in \cite{Koopman,Neumann1}; see also \cite{OVVILasRes}.
The states of classical systems can be associated with analogs of density operators \cite{OVVILasRes,Clmech}, but there is no nonnegativity condition for these operators, and they can have negative eigenvalues. The probabilities satisfying inequality (\ref{eq.7}) and describing the spin--1/2 state belong to the ball of radius 1/2 (including its boundary sphere) in the 3--dimensional space, with the center given by the vector $\vec p_0=(1/2,1/2,1/2)$. The probabilities describing the possible states of three classical--like coins belong to the cube, i.e., $0\leq p_1\leq1,\,0\leq p_2\leq1$, $0\leq p_3\leq1$. The different coin probabilities are mutually independent, i.e., there are no correlations providing inequality (\ref{eq.7}). But if one wants to simulate the quantum behavior of the spin--1/2 system, the corresponding correlations for the classical--like coins must be introduced. In this case, the map (\ref{eq.5}) of matrix (\ref{eq.4}) onto the vector $\vec{\cal P}$, where the probabilities $p_1,\,p_2,\,p_3$ describe the classical--like coin states, provides the possibility of simulating the spin--1/2 state behavior by the classical--like coins. Both the classical--like coin probability distributions (\ref{eq.6}) and the probabilities determining the quantum spin--1/2 state and satisfying inequality (\ref{eq.7}) are illustrated by the triadas of Malevich's squares \cite{Chernega1,Chernega2,Chernega3,PhysScr2018,Entr2018}. The triada consists of three squares whose side lengths are determined by the probabilities $p_1,\,p_2,\,p_3$; the side length $y_k$ of the $k$th square reads
\begin{equation}\label{eq.9}
y_k=[2p_k^2+2p_{k+1}^2+2p_k p_{k+1}-4p_k-2p_{k+1}+2]^{1/2}.
\end{equation}
The sum of the areas of three Malevich's squares is
\begin{eqnarray}
S(p_1,\,p_2,\,p_3)&&=2[3+2(p_1^2+p_2^2+p_3^2)-3(p_1+p_2+p_3)\nonumber\\
&&+p_1p_2+p_2p_3+p_3p_1].\label{eq.10}
\end{eqnarray}
For the classical--like coin states, the sum $S(p_1,\,p_2,\,p_3)$ has the maximum value $S_{max}^{(c)}=6$. This maximum classical value of the sum (\ref{eq.10}) is reached in the two cases where all the probabilities are equal to zero or all are equal to one. For the spin--1/2 states, the maximum value of the area (\ref{eq.10}) is $S_{max}^{(1)}=3$ \cite{Entr2018}. This means that quantum correlations provide constraints on the values of the probabilities, as well as the difference between the maximum values of the square-area characteristics of the classical and quantum states. The picture of quantum probabilities in the quantum suprematism representation of the qubit states illustrates the difference between the geometry of the classical--like coin states and that of the spin--1/2 states. It is known that quantum correlations of two--qubit states provide the violation of the Bell inequalities \cite{Bell64}, characterized by the difference between the maximum classical correlation parameter, given by the number 2, and the quantum parameter, given by the number $2\sqrt2$. We see that, even for the qubit state, the discussed quantum correlations distinguish the behavior of the independent classical--like coin system from the spin--1/2 system behavior simulated by the classical coins with extra constraints, as reflected by the difference between the square areas 6 and 3.
The correlations can be detected in experiments with superconducting circuits based on Josephson junction devices \cite{ShuelKoin,Astafiev,Walraff} or in experiments with neutrons \cite{Venalabor}. Corresponding measurements with the superconducting qubits, which are analogs of two--level atoms or spin--1/2 particles, also determine the maximum of the sum of areas of Malevich's squares.
For this, one has to measure the spin projections, e.g., of a neutron onto three perpendicular directions. The obtained mean values of the spin projections $x_1,\,x_2,\,x_3$ determine the probabilities $p_1,\,p_2,\,p_3$, i.e., $p_k=(x_k+1)/2,\,k=1,\,2,\,3.$ The results of the measurement are used to find the maximum of the sum (\ref{eq.10}) of the Malevich square areas. This sum has to be compared with the theoretical number 3.
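A minimal sketch of this comparison, with the side lengths (\ref{eq.9}) and the area sum (\ref{eq.10}) coded directly, reads as follows; the probe points are arbitrary.
\begin{verbatim}
import numpy as np

def square_sides(p1, p2, p3):
    """Side lengths y_k of the triada of Malevich's squares, Eq. (9)."""
    p = [p1, p2, p3]
    return [np.sqrt(2*p[k]**2 + 2*p[(k+1) % 3]**2 + 2*p[k]*p[(k+1) % 3]
                    - 4*p[k] - 2*p[(k+1) % 3] + 2) for k in range(3)]

def area_sum(p1, p2, p3):
    """Sum of the three square areas, Eq. (10)."""
    return sum(y**2 for y in square_sides(p1, p2, p3))

print(area_sum(0.0, 0.0, 0.0))  # classical maximum: 6
print(area_sum(0.5, 0.5, 0.5))  # maximally mixed qubit state: 1.5
\end{verbatim}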
\section{Quantum observables and classical--like variables}
In this section, we consider the simulation of quantum observables for qubits by three dichotomic classical--like random variables.
Let us define the rules of the game with three classical--like coins as follows. If, after tossing, the first coin has the position ``up'', the gain equals $x$; for the position ``down'', the loss is the same. Thus, the random variable
$X=\left(\begin{array}{c}
x\\-x\end{array}\right),$
associated with the first coin, has two values, and, for the second coin, the analogous random variable is
$Y=\left(\begin{array}{c}
y\\-y\end{array}\right).$
For the third coin, we define the gain and loss by the random variable
$Z=\left(\begin{array}{c}
z_1\\z_2\end{array}\right)$
with two different values. The mean values of random variables are determined by the probability distributions $(p_1,1-p_1),\,(p_2,1-p_2),$ and $(p_3,1-p_3)$ as follows:
\begin{eqnarray}
&&\langle X\rangle=p_1 x-x(1-p_1), \nonumber\\
&&\langle Y\rangle=p_2 y-y(1-p_2),\label{eq.11}\\
&&\langle Z\rangle=p_3 z_1+z_2(1-p_3).\nonumber
\end{eqnarray}
Let us rewrite the random variables $X,\,Y,\,Z$ in the form of the $2\times2$ matrix
\begin{equation}\label{eq.12}
A=\left(\begin{array}{cc}
z_1&x-iy\\
x+i y&z_2\end{array}\right).
\end{equation}
The matrix $A$ is an arbitrary Hermitian $2\times2$ matrix and can be used to simulate an arbitrary qubit observable. The qubit state has the density matrix $\rho$ (\ref{eq.4}) expressed in terms of the probabilities $p_1,\,p_2,\,p_3$. The density matrix makes it possible to calculate all the moments of an arbitrary qubit observable (\ref{eq.12}), i.e.,
\begin{equation}\label{eq.13}
\langle A^n\rangle=\mbox{Tr}(\rho A^n), \quad n=1,2,\,\ldots\,
\end{equation}
For example, as one can check, the mean value of the observable $A$ has the form
\begin{equation}\label{eq.14}
\langle A\rangle=\langle X\rangle+\langle Y\rangle+\langle Z\rangle,
\end{equation}
which is the sum of the mean values of the introduced classical--like random variables. To obtain the higher moments $\langle A^n\rangle$, we use the generating function
\begin{equation}\label{eq.15}
G(\lambda)=\mbox{Tr}\left[\rho\exp\lambda A\right]=\sum_{n=0}^\infty\frac{\lambda^n}{n!}\langle A^n\rangle.
\end{equation}
Using the formula
\begin{equation}\label{eq.16}
\exp t(\vec\sigma\vec n)=(\cosh t)1_2+(\sinh t)(\vec\sigma \vec n),
\end{equation}
where $1_2$ is the unity $2\times2$-matrix, $\sigma_x,\,\sigma_y,\,\sigma_z$ are Pauli matrices
\begin{equation}\label{eq.17}
\sigma_x=\left(\begin{array}{cc}
0&1\\1&0\end{array}\right),\quad\sigma_y=\left(\begin{array}{cc}
0&-i\\i&0\end{array}\right),\quad\sigma_z=\left(\begin{array}{cc}
1&0\\0&-1\end{array}\right),
\end{equation}
and $\vec n$ is the unit vector, i.e., $\vec n^2=1$, we obtain
\begin{equation}\label{eq.18}
G(\lambda)=\exp\left(\lambda\,\frac{z_1+z_2}{2}\right)\mbox{Tr}\left[\rho\,\exp\big(\lambda r(\vec\sigma\vec n)\big)\right].
\end{equation}
Here $\vec n=\vec r/r,$ $ \vec r=(x,y,z),$ $z=(z_1-z_2)/2,$\\ $r=\sqrt{x^2+y^2+z^2}$ and
\[\exp\big(\lambda r(\vec \sigma\vec n )\big)=(\cosh\lambda r)\,1_2+(\sinh\lambda r)(\vec\sigma\vec n).\]
The statistics of the quantum observable $A$ is determined by the higher moments
\begin{equation}\label{eq.16a}
\langle A^n\rangle=\frac{d^n G(\lambda)}{d\lambda^n}|_{\lambda=0}\,.
\end{equation}
Using formula (\ref{eq.16a}), we obtain
\begin{eqnarray}
&&\frac{d G(\lambda)}{d\lambda}=\frac{(z_1+z_2)}{2}G(\lambda)\nonumber\\
&&+r\exp\left(
\frac{\lambda(z_1+z_2)}{2}\right)\left[\sinh\lambda r+f\cosh\lambda r\right],\label{eq.17a}
\end{eqnarray}
where
\begin{eqnarray}
&&f=r^{-1}\left[\langle A\rangle-\frac{z_1+z_2}{2}\right],\nonumber\\
&&\langle A\rangle=(2 p_1-1)x+(2p_2-1)y+p_3z_1+(1-p_3)z_2, \label{eq.18a}
\end{eqnarray}
and
\begin{equation}\label{eq.19a}
\frac{d^2G(\lambda)}{d\lambda^2}=(z_1+z_2)\frac{d G(\lambda)}{d\lambda}+\left[r^2-\left(\frac{z_1+z_2}{2}\right)^2\right]G(\lambda).
\end{equation}
One can check that
\begin{equation}\label{eq.20}
\langle A^2\rangle=(z_1+z_2)\langle A\rangle+\left[r^2-\left(\frac{z_1+z_2}{2}\right)^2\right].
\end{equation}
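As an illustrative numerical check (not part of the original derivation), the sketch below verifies the mean-value formula (\ref{eq.18a}) and the second-moment relation (\ref{eq.20}) for one admissible probability triple; all numerical values are arbitrary choices.
\begin{verbatim}
import numpy as np

def rho(p1, p2, p3):
    off = (p1 - 0.5) - 1j * (p2 - 0.5)
    return np.array([[p3, off], [np.conj(off), 1 - p3]])

def A(x, y, z1, z2):
    return np.array([[z1, x - 1j*y], [x + 1j*y, z2]])

p1, p2, p3 = 0.6, 0.55, 0.7        # satisfies inequality (7)
x, y, z1, z2 = 0.3, -0.2, 1.0, 0.5
R, Aop = rho(p1, p2, p3), A(x, y, z1, z2)

mean_A  = np.trace(R @ Aop).real
mean_A2 = np.trace(R @ Aop @ Aop).real

pred_A  = (2*p1 - 1)*x + (2*p2 - 1)*y + p3*z1 + (1 - p3)*z2  # Eq. (18a)
r2 = x**2 + y**2 + ((z1 - z2) / 2)**2
pred_A2 = (z1 + z2)*mean_A + r2 - ((z1 + z2) / 2)**2         # Eq. (20)
print(np.isclose(mean_A, pred_A), np.isclose(mean_A2, pred_A2))
\end{verbatim}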
Due to (\ref{eq.19a}), all the derivatives of the generating function $d^n G(\lambda)/d \lambda^n$ are expressed in terms of $G(\lambda)$ and $d G(\lambda)/d\lambda.$
We have the following property of the higher moments of the quantum observable $A$: all of them depend on the probabilities $p_1,\,p_2,\,p_3$ only through the dependence of the mean value $\langle A\rangle$ on these probabilities.
The obtained results can be formulated as the following recipe for simulating the quantum mechanics of the spin--1/2 system by the classical rules of a game with three classical--like coins and the classical--like variables $x,\,y,\,z_1,\,z_2$ associated with the coin tossing. One has the probability vector $\vec{\cal P}=(p_1,p_2,p_3)$ as a result of tossing the coins. The vector is mapped onto the matrix $\rho$, which is postulated to be a density matrix, and this means that there are quantum correlations expressed by inequality (\ref{eq.7}). Three classical random variables, defined by the rules of the coin game and taking the values $(x,-x);\,(y,-y);\,(z_1,z_2)$, are associated with the matrix (\ref{eq.12}). This matrix is postulated to be a qubit quantum observable. After this, applying the quantum rules for obtaining the statistics of a quantum observable in a given quantum state, we express all the higher moments of an arbitrary observable in terms of the classical coin probabilities $p_1,p_2,p_3$ and the classical random variables. Such a quantum ingredient as the fidelity is expressed in terms of the probabilities associated with the classical coin game, i.e.,
\begin{eqnarray}
&&\mbox{Tr}(\rho_1\rho_2)=p_3{\cal P}_3+(1-p_3)(1-{\cal P}_3)\nonumber\\
&&+\left[(p_1-1/2)-i(p_2-1/2)\right]\left[({\cal P}_1-1/2)+i({\cal P}_2-1/2)\right]\nonumber\\
&&+\left[(p_1-1/2)+i(p_2-1/2)\right]\left[({\cal P}_1-1/2)-i({\cal P}_2-1/2)\right].\nonumber\\
&&\label{eq.21}
\end{eqnarray}
Here $p_1,p_2,p_3$ are the probabilities which determine the state with the density matrix $\rho_1$, and ${\cal P}_1,\,{\cal P}_2,\,{\cal P}_3$ are the probabilities which determine the state $\rho_2$. Also, such a quantum property as the superposition principle can be formulated as a nonlinear addition rule for the probabilities determining the states being superposed, which are pure ones \cite{PhysScr2018,Cher22}.
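A short numerical check of the fidelity formula (\ref{eq.21}) can be sketched as follows; the two probability triples are arbitrary points satisfying inequality (\ref{eq.7}).
\begin{verbatim}
import numpy as np

def rho(p1, p2, p3):
    off = (p1 - 0.5) - 1j * (p2 - 0.5)
    return np.array([[p3, off], [np.conj(off), 1 - p3]])

p, P = (0.6, 0.55, 0.7), (0.4, 0.5, 0.35)
lhs = np.trace(rho(*p) @ rho(*P)).real  # direct Tr(rho_1 rho_2)

# Right-hand side of Eq. (21):
a = (p[0] - 0.5) - 1j * (p[1] - 0.5)
b = (P[0] - 0.5) + 1j * (P[1] - 0.5)
rhs = p[2]*P[2] + (1 - p[2])*(1 - P[2]) + (a*b + np.conj(a*b)).real
print(np.isclose(lhs, rhs))
\end{verbatim}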
\section{Conclusions}
To conclude, we point out the main results of our work. We demonstrated that the quantum mechanics of such a system as the qubit (spin--1/2, two--level atom) can be simulated by using three classical--like coin states associated with the probabilities to get the coin positions ``up'' and ``down''. Also, quantum spin--1/2 observables can be simulated by the rules of the game with these three coins. The quantumness of the system in this picture is related to the presence of quantum correlations imposed onto the coin behavior and expressed in terms of inequality (\ref{eq.7}). The state density matrices are constructed using the classical--like coin tossing probabilities by postulating the form of matrices (\ref{eq.4}). The new observation of this study is the existence of the generating function (\ref{eq.15}) for the higher moments of spin--1/2 observables and its expression in terms of the classical--like coin probabilities and classical--like random variables. The approach has a geometrical interpretation of the qubit states in terms of the Malevich square picture. The quantumness of the states is responsible for the bound 3 on the maximal sum of the areas of Malevich's squares. The developed method can be extended to the case of qudits; we will present this extension in a future publication. It is worth noting that the formalism of quantum mechanics and its relation to the formalism of classical physics has been discussed in the literature for many decades. In this connection, the review \cite{Mermin} presents a recent discussion of the approach called QBism \cite{Fuchs}. In addition to this, the quantum suprematism representation, which we used to discuss the example of spin--1/2 system states and observables in terms of absolutely classical objects like classical coins and classical random variables, also provides the possibility to clarify some classical--quantum connections. It is worth noting that the probabilities $p_1,\,p_2,\,p_3$ do not satisfy the equation $p_1+p_2+p_3=1$. In fact, we use not a joint probability distribution, which would give conditional probabilities via the Bayes rule, but a set of three probability distributions which obey the constraint (\ref{eq.7}).
\subsection*{Acknowledgments}
Vladimir I. Man'ko thanks Professor Tommaso Calarco for fruitful discussion of relations of Malevich's squares with qubit states.
|
2,869,038,153,777 | arxiv | \section{\bf Introduction, Definitions and Preliminaries}
Throughout this paper, we refer to \cite{GasparRahman} for definitions
and notations. We also suppose that $0<q<1$. For complex numbers $a$,
the $q$-shifted factorials are defined by
\begin{align}
(a;q)_0:=1,\quad (a;q)_{n} =\prod_{k=0}^{n-1} (1-aq^k)
\quad \text{and} \quad (a;q)_{\infty}:=\prod_{k=0}^{\infty}(1-aq^{k}),
\end{align}
where (see, for example, \cite{GasparRahman} and \cite{Slater})
$$(a;q)_n=\frac{(a;q)_\infty}{(aq^n;q)_\infty} \qquad \text{and} \qquad
(a;q)_{n+m}=(a;q)_n(aq^n;q)_m$$
and
$$\left(\frac{q}{a};q\right)_n=(-a)^{-n}\;q^{\binom{n+1}{2}}\,
\frac{(aq^{-n};q)_\infty}{(a;q)_\infty}.$$
We adopt the following notation:
$$(a_1,a_2, \cdots, a_r;q)_m=(a_1;q)_m (a_2;q)_m\cdots(a_r;q)_m
\qquad (m\in \mathbb{N}:=\{1,2,3,\cdots\}).$$
Also, in the limit as $m\to\infty$, we have
$$(a_1,a_2, \cdots, a_r;q)_\infty=(a_1;q)_\infty
(a_2;q)_\infty\cdots(a_r;q)_\infty.$$
The $q$-binomial coefficient is defined by
\begin{equation}
\begin{bmatrix}
n \\
k \\
\end{bmatrix}_q:=\frac{(q;q)_n}{(q;q)_k(q;q)_{n-k}}.
\end{equation}
The basic (or $q$-) hypergeometric function
of the variable $z$ and with $\mathfrak{r}$ numerator
and $\mathfrak{s}$ denominator parameters
is defined as follows (see, for details, the monographs by
Slater \cite[Chapter 3]{Slater}
and by Srivastava and Karlsson
\cite[p. 347, Eq. (272)]{SrivastavaKarlsson};
see also \cite{HMS-IMAJAM1983-1984} and \cite{Koekock}):
$${}_{\mathfrak r}\Phi_{\mathfrak s}\left[
\begin{array}{rr}
a_1, a_2,\cdots, a_{\mathfrak r};\\
\\
b_1,b_2,\cdots,b_{\mathfrak s};
\end{array}\,
q;z\right]
:=\sum_{n=0}^\infty\Big[(-1)^n \;
q^{\binom{n}{2}}\Big]^{1+{\mathfrak s}-{\mathfrak r }}
\,\frac{(a_1, a_2,\cdots, a_{\mathfrak r};q)_n}
{(b_1,b_2,\cdots,b_{\mathfrak s};q)_n}
\; \frac{z^n}{(q;q)_n},$$
where $q\neq 0$ when ${\mathfrak r }>{\mathfrak s}+1$.
We also note that
$${}_{\mathfrak r+1}\Phi_{\mathfrak r}\left[
\begin{array}{rr}
a_1, a_2,\cdots, a_{\mathfrak r+1}\\
\\
b_1,b_2,\cdots,b_{\mathfrak r };
\end{array}\,
q;z\right]
=\sum_{n=0}^\infty \frac{(a_1, a_2,\cdots, a_{\mathfrak r+1};q)_n}
{(b_1,b_2,\cdots,b_{\mathfrak r};q)_n}\;\frac{ z^n}{(q;q)_n}.$$\par
We remark in passing that, in a recently-published
survey-cum-expository review article, the so-called $(p,q)$-calculus
was exposed to be a rather trivial and inconsequential variation of
the classical $q$-calculus, the additional parameter $p$ being redundant
or superfluous (see, for details, \cite[p. 340]{HMS-ISTT2020}).
Basic (or $q$-)
series and basic (or $q$-) polynomials, especially
the basic (or $q$-) hypergeometric functions and basic
(or $q$-) hypergeometric polynomials, are
applicable particularly in several areas of Number Theory
such as the Theory of Partitions and
are useful also in a wide variety
of fields including, for example, Combinatorial Analysis,
Finite Vector Spaces, Lie Theory, Particle Physics, Non-Linear
Electric Circuit Theory, Mechanical Engineering, Theory of
Heat Conduction, Quantum Mechanics, Cosmology, and Statistics
(see also \cite[pp. 350--351]{SrivastavaKarlsson} and the references cited
therein). Here, in our present investigation, we are mainly concerned
with the Cauchy polynomials $p_n(x,y)$ as given
below (see \cite{Chen2003} and \cite{GasparRahman}):
\begin{equation}
\label{def}
p_n(x,y):=(x-y)(x- qy)\cdots (x-q^{n-1}y)
=\left(\frac{y}{x};q\right)_n\,x^n,
\end{equation}
together with the following Srivastava-Agarwal
type generating function
(see also \cite{Cao-Srivastava2013}):
\begin{equation}
\label{Srivas}
\sum_{n=0}^\infty
p_n (x,y)\;\frac{(\lambda;q)_n\,t^n}{(q;q)_n}
={}_{2}\Phi_1\left[
\begin{array}{rr}
\lambda,\frac{y}{x};\\
\\
0;
\end{array}\,
q; xt\right].
\end{equation}
In particular, for $\lambda=0$ in (\ref{Srivas}), we get the
following simpler generating function \cite{Chen2003}:
\begin{equation}
\label{gener}
\sum_{n=0}^{\infty} p_n(x,y)\;
\frac{t^n }{(q;q)_n} =
\frac{(yt;q)_\infty}{(xt;q)_\infty}.
\end{equation}
The generating function (\ref{gener})
is also the homogeneous version
of the Cauchy identity or the following
$q$-binomial theorem (see, for example, \cite{GasparRahman},
\cite{Slater} and \cite{SrivastavaKarlsson}):
\begin{equation}
\label{putt}
\sum_{k=0}^{\infty}
\frac{(a;q)_k }{(q;q)_k}\;z^{k}={}_{1}\Phi_0\left[
\begin{array}{rr}
a;\\
\\
\overline{\hspace{3mm}}\,;
\end{array} \,
q;z\right]=\frac{(az;q)_\infty}{(z;q)_\infty}\qquad (|z|<1).
\end{equation}
Upon further setting $a=0$, this last relation (\ref{putt})
becomes Euler's identity
(see, for example, \cite{GasparRahman}):
\begin{equation}
\label{q-expo-alpha}
\sum_{k=0}^{\infty} \frac{z^{k}}{(q;q)_k}=\frac{1}{(z;q)_\infty}
\qquad (|z|<1)
\end{equation}
and its inverse relation given below \cite{GasparRahman}:
\begin{equation}
\label{q-Expo-alpha}
\sum_{k=0}^{\infty}\frac{(-1)^k
}{(q;q)_k}\; q^{\binom{k}{2}}\,z^{k}=(z;q)_\infty.
\end{equation}
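As a numerical sanity check, the following Python sketch verifies the $q$-binomial theorem (\ref{putt}) together with (\ref{q-expo-alpha}) and (\ref{q-Expo-alpha}) by truncating the series and the infinite products; the truncation depths and parameter values are arbitrary choices.
\begin{verbatim}
import numpy as np

def qpoch(a, q, n=None):
    """(a; q)_n; n=None gives a truncated (a; q)_infinity."""
    n = 200 if n is None else n
    return np.prod([1 - a * q**k for k in range(n)])

q, a, z = 0.3, 0.7, 0.4  # |q| < 1 and |z| < 1

# q-binomial theorem
lhs = sum(qpoch(a, q, k) / qpoch(q, q, k) * z**k for k in range(80))
print(np.isclose(lhs, qpoch(a*z, q) / qpoch(z, q)))

# Euler's identity and its inverse
e1 = sum(z**k / qpoch(q, q, k) for k in range(80))
e2 = sum((-1)**k * q**(k*(k-1)//2) * z**k / qpoch(q, q, k)
         for k in range(80))
print(np.isclose(e1, 1 / qpoch(z, q)), np.isclose(e2, qpoch(z, q)))
\end{verbatim}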
Based upon the $q$-binomial theorem (\ref{putt}) and Heine's
transformations, Srivastava {\it et al.} \cite{HMS-C-W2020} established
a set of two presumably new theta-function identities
(see, for details, \cite{HMS-C-W2020}).
The following usual $q$-difference operators are defined by
\cite{Liu97,SrivastaAbdlhusein,Saadsukhi}
\begin{equation}
\label{deffd}
{D}_a\big\{f(a)\big\}:=\frac{f(a)-f(qa)}{a}, \quad
{\theta}_{a}\big\{f(a)\big\}:=\frac{f(q^{-1}a)-f(a)}{q^{-1}a},
\end{equation}
and their Leibniz rules are given by (see \cite{Roman1982})
\begin{align}
\label{Lieb}
D_a^n\left\{f(a)g(a)\right\}=\sum_{k=0}^n
\qbinomial{n}{k}{q} q^{k(k-n)} D_a^k\left\{f(a)\right\}
D_a^{n-k}\left\{g(q^ka)\right\}
\end{align}
and
\begin{align}
\theta_a^n\left\{ f(a)g(a)\right\}=\sum_{k=0}^n \qbinomial{n}{k}{q}
\theta_a^k\left\{f(a)\right\} \theta_a^{n-k}\left\{g(q^{-k}a)\right\},
\label{Lieb2}
\end{align}
respectively. Here, and in what follows,
$D_a^0$ and $\theta_a^0$ are understood as the identity operators.
Recently, Chen and Liu \cite{Liu97,Liu98} constructed the following
pair of augmentation
operators, which is of great significance for deriving identities
by applying its various special cases:
\begin{equation}
\mathbb{T}(bD_x)=\sum_{n=0}^\infty\frac{(bD_x)^n}{(q;q)_n}\qquad\text{and}
\qquad
\mathbb{E}(b\theta_x)=\sum_{n=0}^\infty\frac{(b\theta_x)^n}{(q;q)_n}.
\end{equation}
Subsequently, Chen and Gu \cite{CHEN2008} defined
the Cauchy augmentation operators as follows:
\begin{equation}
\mathbb{T}(a, bD_x)=\sum_{n=0}^\infty\frac{(a;q)_n}{(q;q)_n}\,
(bD_x)^n
\end{equation}
and
\begin{equation}
\mathbb{E}(a,b\theta_x)=\sum_{n=0}^\infty\;\frac{(b;q)_n}{(q;q)_n}\;
(-b\theta_x)^n.
\end{equation}
On the other hand, Fang \cite{Fang2010}
and Zhang and Wang \cite{Zhang2010}
considered the following finite generalized $q$-exponential
operators with two parameters:
\begin{equation}
\mathbb{T}\left[\begin{array}{c}q^{-N},w\\
v\end{array}\Big|q;tD_x\right] =\sum_{n=0}^N\;\frac{(q^{-N},w;q)_n}
{(v,q;q)_n}\,(tD_x)^n
\end{equation}
and
\begin{equation}
\mathbb{E}\left[\begin{array}{c}q^{-N},w\\
v\end{array}\Big|q;t\theta_x\right]
=\sum_{n=0}^N\;\frac{(q^{-N},w;q)_n}{(v,q;q)_n}\,(t\theta_x)^n.
\end{equation}
Moreover, Li and Tan \cite{LiTan2016} constructed two generalized
$q$-exponential operators with three parameters as follows:
\begin{equation}
\mathbb{T}\left[\begin{array}{c}u,v\\
w\end{array}\Big|q;tD_x\right] =\sum_{n=0}^\infty\;
\frac{(u,v;q)_n}{(w,q;q)_n}
\,(tD_x)^n
\end{equation}
and
\begin{equation}
\mathbb{E}\left[\begin{array}{c}u,v\\
w\end{array}\Big|q;t\theta_x\right]
=\sum_{n=0}^\infty\;\frac{(u,v;q)_n}{(w,q;q)_n}\,
(t\theta_x)^n.
\end{equation}
Finally, we recall that Cao {\it et al.} \cite{JianCaoArjika} constructed
the following $q$-operators:
\begin{align}
\mathbb{T}(a,b,c,d,e,yD_x)=\sum_{n=0}^\infty\;\frac{(a,b,c;q)_n}{(q,d,e;q)_n}
\;(yD_x)^n
\label{1.9}
\end{align}
and
\begin{align}
\mathbb{E}(a,b,c,d,e,y\theta_x)=\sum_{n=0}^\infty\;\frac{(-1)^nq^{n\choose2}
(a,b,c;q)_n}{(q,d,e;q)_n}\;(y\theta_x)^n \label{1.10}
\end{align}
and thereby generalized Arjika's results in \cite{Arjika2020} by
using the $q$-difference equations (see, for details, \cite{JianCaoArjika}).
We remark that the $q$-operator (\ref{1.9}) is a particular case of
the homogeneous $q$-difference operator $\mathbb{T}({\bf a},{\bf b},cD_{x})$
(see \cite{HMS-Sama2020}) by taking
$${\bf a} =(a,b,c),\quad {\bf b} =(d,e) \qquad \text{and} \qquad c=y.$$
Furthermore, for $b=c=d=e=0$, the $q$-operator (\ref{1.10}) reduces to the operator
$\widetilde{L}(a,y;\theta_{x})$ which was investigated by Srivastava
{\it et al.} \cite{6}.
\begin{pro}{\rm (see \cite[Theorems 3]{JianCaoArjika})}\label{thm2}
Let $f(a,b,c,d,e,x,y)$ be a seven-variable analytic function in a
neighborhood of $(a,b,c,d,e,x,y)=(0,0,0,0,0,0,0)\in\mathbb{C}^7$.\\
\noindent
{\rm (I)} If $f(a,b,c,d,e,x,y)$ satisfies
the following difference equation$:$
\begin{align}\label{thm2_1}
&x\big\{f(a,b,c,d,e,x,y)-f(a,b,c,d,e,x,yq)\notag \\
&\qquad \qquad\quad \quad-(d+e)q^{-1}\;
[f(a,b,c,d,e,x,yq)-f(a,b,c,d,e,x,yq^2)]
\nonumber\\
&\qquad \qquad\quad \quad +deq^{-2}\;[f(a,b,c,d,e,x,yq^2)-f(a,b,c,d,e,x,yq^3)]\big\}
\nonumber\\
&\qquad \quad =y\big\{[f(a,b,c,d,e,x,y)-f(a,b,c,d,e,xq,y)]\notag \\
&\qquad \qquad\quad \quad-(a+b+c)[f(a,b,c,d,e,x,yq)-f(a,b,c,d,e,xq,yq)]
\nonumber\\
&\qquad\qquad\quad\quad +(ab+ac+bc)[f(a,b,c,d,e,x,yq^2)-f(a,b,c,d,e,xq,yq^2)]
\nonumber\\
&\qquad\qquad\quad\quad-abc[f(a,b,c,d,e,x,yq^3)-f(a,b,c,d,e,xq,yq^3)]\big\},
\end{align}
then
\begin{align}\label{thm2_1.2}
f(a,b,c,d,e,x,y)=\mathbb{T}(a,b,c,d,e,yD_x)\{f(a,b,c,d,e,x,0)\}.
\end{align}
\noindent
{\rm (II)} If $f(a,b,c,d,e,x,y)$ satisfies the following difference equation$:$
\begin{align}\label{thm2_2}
&x\big\{f(a,b,c,d,e,xq,y)-f(a,b,c,d,e,xq,yq)\notag \\
&\qquad \qquad\quad \quad-(d+e)q^{-1}\;
[f(a,b,c,d,e,xq,yq)-f(a,b,c,d,e,xq,yq^2)]\nonumber\\
&\quad\quad\quad\quad +deq^{-2}\;
[f(a,b,c,d,e,xq,yq^2)-f(a,b,c,d,e,xq,yq^3)]\big\}
\nonumber\\
&\qquad\quad =y\big\{[f(a,b,c,d,e,xq,yq)-f(a,b,c,d,e,x,yq)]\notag \\
&\qquad \qquad\quad \quad-(a+b+c)
[f(a,b,c,d,e,xq,yq^2)-f(a,b,c,d,e,x,yq^2)]\nonumber\\
&\qquad \qquad\quad\quad +(ab+ac+bc)[f(a,b,c,d,e,xq,yq^3)-f(a,b,c,d,e,x,yq^3)]
\nonumber\\
&\qquad\qquad\quad\quad -abc[f(a,b,c,d,e,xq,yq^4)-f(a,b,c,d,e,x,yq^4)]\big\},
\end{align}
then
\begin{align}
f(a,b,c,d,e,x,y)=\mathbb{E}(a,b,c,d,e,y\theta_x)\{f(a,b,c,d,e,x,0)\}.
\end{align}
\end{pro}
Liu \cite{Liu2010,Liu2011} initiated the method based upon
$q$-difference equations and deduced several results involving
Bailey's ${}_6\psi_6$ summation, the $q$-Mehler formula
for the Rogers-Szeg\"{o} polynomials
and a $q$-integral version of the Sears transformation.
\begin{lem}
\label{MALM}
Each of the following $q$-identities holds true$:$
\begin{align}\label{id1}
{D}_a^k \left\{\frac{1}{(as;q)_\infty}\right\}
=\frac{s^k}{(as;q)_\infty},
\end{align}
\begin{align}\label{id2}
\theta_a^k \left\{\frac{1}{(as;q)_\infty}\right\}
=\frac{s^kq^{-\binom{k}{2}}}{(asq^{-k};q)_\infty},
\end{align}
\begin{align}\label{id3}
{D}_a^k \left\{{(as;q)_\infty}\right\}
=(-s)^k \;q^{\binom{k}{2}}\;
(asq^k;q)_\infty,
\end{align}
\begin{align}\label{id4}
\theta_a^k \left\{{(as;q)_\infty}\right\}
=(- s)^k \;(as;q)_\infty,\\
D_a^n\left\{\frac{(as;q)_\infty}{(a\omega;q)_\infty}\right\}
=\omega^n\; \frac{\left(\frac{s}{\omega};q\right)_n}
{(as;q)_n}\;\frac{(as;q)_\infty}{(a\omega;q)_\infty}
\end{align}
and
\begin{align}\label{aberll}
\theta_a^n\left\{\frac{(as;q)_\infty}{(a\omega;q)_\infty}\right\}
=\left(-\frac{q}{a}\right)^n\; \frac{\left(\frac{s}{\omega};q\right)_n}
{\left(\frac{q}{a\omega};q\right)_n}
\; \frac{(as;q)_\infty}{(a\omega;q)_\infty}.
\end{align}
\end{lem}
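The first two identities can be checked numerically by iterating the divided differences (\ref{deffd}) directly; the sketch below does so for (\ref{id1}) with arbitrary parameter values.
\begin{verbatim}
import numpy as np

q, s, a0, k = 0.3, 0.5, 0.2, 3

def qpoch_inf(x, q, N=200):
    """Truncated infinite q-shifted factorial (x; q)_infinity."""
    return np.prod([1 - x * q**j for j in range(N)])

def D(f):
    """One application of the operator D_a from the definition above."""
    return lambda a: (f(a) - f(q * a)) / a

f = lambda a: 1.0 / qpoch_inf(a * s, q)
for _ in range(k):
    f = D(f)
print(np.isclose(f(a0), s**k / qpoch_inf(a0 * s, q)))  # identity (id1)
\end{verbatim}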
We now state and prove the $q$-difference formulas as
Theorem \ref{dsdpqro1} below.
\begin{thm} \label{dsdpqro1}
Each of the following assertions holds true$:$
\begin{align}\label{gLEM}
&\mathbb{T}(r,f,g,v,w,uD_a)
\left\{\frac{(as;q)_\infty}{(az,at;q)_\infty}\right\} \notag \\
&\qquad \quad =\frac{(as;q)_\infty}{(az,at;q)_\infty}\;
\sum_{k=0}^\infty\;
\frac{\left(r,f,g,\frac{s}{z},at;q\right)_{k}(zu)^k}
{(v,w,as,q;q)_{k}}\;
{}_3\Phi_2\left[\begin{matrix}
\begin{array}{rrr}
rq^k,fq^k,gq^k;\\\\
vq^k,wq^k;
\end{array}
\end{matrix} q; ut\right]
\end{align}
and
\begin{align}\label{gdLEM}
&\mathbb{E}(r,f,g,v,w,u\theta_a)\left\{\frac{(az,at;q)_\infty}
{(as;q)_\infty}\right\} \notag \\
&\qquad =\frac{(az,at;q)_\infty}{(as;q)_\infty}\;
\sum_{k=0}^\infty
\; \frac{\left(r,f,g,\frac{z}{s},\frac{q}{at};q\right)_k\,
(-ut)^k}{\left(v,w,\frac{q}{as},q;q\right)_k}\;
{}_3\Phi_3\left[\begin{matrix}
\begin{array}{rrr}
rq^k,fq^k,gq^k;\\\\
vq^k,wq^k,0;
\end{array}
\end{matrix} q; -ut\right],
\end{align}
provided that $\max\left\{|az|,|as|,|at|,|ut|\right\}<1.$
\end{thm}
\begin{proof}
By the means of the definitions (\ref{1.9}) and (\ref{1.10}) of the
operators $\mathbb{T}(r,f,g,v,w,uD_a)$
and $\mathbb{E}(r,f,g,v,w,u\theta_a)$
and the Leibniz rules (\ref{Lieb}) and (\ref{Lieb2}), we observe that
\begin{align}\label{proof3}
&\mathbb{T}(r,f,g,v,w,uD_a)\left\{\frac{(as;q)_\infty}
{(az,at;q)_\infty}\right\}
\notag \\
&\qquad =\sum_{n=0}^\infty \frac{(r,f,g;q)_nu^n}{(v,w,q;q)_n}
D_a^n\left\{\frac{(as;q)_\infty}
{(az,at;q)_\infty}\right\}\nonumber\\
&\qquad=\sum_{n=0}^\infty \frac{(r,f,g;q)_nu^n}{(v,w,q;q)_n}\;
\sum_{k=0}^n \qbinomial{n}{k}{q}\; q^{k(k-n)}\;
D_a^k\left\{\frac{(as;q)_\infty}
{(az;q)_\infty}\right\} D_a^{n-k}
\left\{\frac{1}{(atq^{k};q)_\infty}\right\}\notag \\
&\qquad=\sum_{n=0}^\infty \frac{(r,f,g;q)_nu^n}{(v,w,q;q)_n} \;
\sum_{k=0}^n \qbinomial{n}{k}{q}\;q^{k(k-n)}
\; \frac{\left(\frac{s}{z};q\right)_k\,z^k}{(as;q)_k}\;
\frac{(as;q)_\infty}{(az;q)_\infty}
\; \frac{(tq^{k})^{n-k}}{(atq^{k};q)_\infty}\;
\notag \\
&\qquad= \frac{(as;q)_\infty}{(az,at;q)_\infty}\sum_{k=0}^\infty
\; \frac{\left(\frac{s}{z},at;q\right)_k\,z^k}{(as,q;q)_k}
\sum_{n=k}^\infty \frac{(r,f,g;q)_n\,u^n\, t^{n-k}}{(v,w,q;q)_n}\notag \\
&\qquad=\frac{(as;q)_\infty}{(az,at;q)_\infty}\sum_{k=0}^\infty
\; \frac{\left(r,f,g,\frac{s}{z},at;q\right)_k\,(uz)^k}{(v,w,as,q;q)_k}
\sum_{n=0}^\infty \frac{(rq^k,fq^k,gq^k;q)_n(ut)^{n}}{(vq^k,wq^k,q;q)_n}.
\end{align}
Similarly, we have
\begin{align}\label{proof3a}
&\mathbb{E}(r,f,g,v,w,u\theta_a)\left\{\frac{(az,at;q)_\infty}
{(as;q)_\infty }\right\}
\notag\\
&\quad =\sum_{n=0}^\infty \frac{(-1)^n\;q^{n\choose2}
(r,f,g;q)_nu^n}{(v,w,q;q)_n}
\theta_a^n\left\{\frac
{(az,at;q)_\infty}{(as;q)_\infty}\right\}\nonumber\\
&\quad=\sum_{n=0}^\infty \frac{(-1)^n\;q^{n\choose2}(r,f,g;q)_nu^n}{(v,w,q;q)_n}\;
\sum_{k=0}^n \qbinomial{n}{k}{q}\theta_a^k\left\{\frac{(az;q)_\infty}
{(as;q)_\infty}\right\} \theta_a^{n-k}
\left\{(atq^{-k};q)_\infty \right\}\notag \\
&\quad=\sum_{n=0}^\infty \frac{(-1)^n\; q^{n\choose2}(r,f,g;q)_nu^n}
{(v,w,q;q)_n}\;
\sum_{k=0}^n \qbinomial{n}{k}{q}
\frac{\left(-\frac{q}{a}\right)^k\;
\left(\frac{z}{s};q\right)_k}{\left(\frac{q}{as};q\right)_k}\notag \\
&\qquad \quad \cdot \frac{(az;q)_\infty}{(as;q)_\infty}
(atq^{-k};q)_\infty\; (-tq^{-k})^{n-k}\;
\notag \\
&\quad= \frac{(az,at;q)_\infty}{(as;q)_\infty}\sum_{k=0}^\infty
\; \frac{(-1)^kq^{-{k\choose2}}\left(\frac{z}{s},
\frac{q}{at};q\right)_k\,t^k}
{\left(\frac{q}{as},q;q\right)_k}
\sum_{n=k}^\infty \frac{q^{{n\choose2}-k(n-k)}
(r,f,g;q)_n\,u^n\,t^{n-k}}
{(v,w,q;q)_n}\notag \\
&\quad= \frac{(az,at;q)_\infty}{(as;q)_\infty}\sum_{k=0}^\infty
\; \frac{\left(r,f,g,\frac{z}{s},\frac{q}{at};q\right)_k\,(-ut)^k}
{\left(v,w,\frac{q}{as},q;q\right)_k}
\sum_{n=0}^\infty \frac{q^{n\choose2}(rq^k,fq^k,gq^k;q)_n\,(ut)^{n}}
{(vq^k,wq^k,q;q)_n},
\end{align}
which evidently completes the proof of Theorem \ref{dsdpqro1}.
\end {proof}
We remark that, when $g=w=0$, Theorem \ref{dsdpqro1} reduces to
the concluding result of Li and Tan \cite{LiTan2016}.
\begin{cor} \label{sdpqro1}
It is asserted that
\begin{align}
\mathbb{T}(r,f,g,v,w,uD_s)\left\{ \frac{1 }{(xs;q)_\infty}\right\}
=\frac{1 }{(xs;q)_\infty} {}_3\Phi_2\left[\begin{matrix}
\begin{array}{rrr}
r,f,g;\\
\\
v,w;
\end{array}
\end{matrix} q;xu\right]
\end{align}
and
\begin{align}
\mathbb{E}(r,f,g,v,w,-u\theta_s)\left\{(xs;q)_\infty\right\}
=(xs;q)_\infty\,{}_3\Phi_3\left[\begin{matrix}
\begin{array}{rrr}
r,f,g;\\
\\
v,w,0;
\end{array}
\end{matrix} q;xu\right],
\end{align}
provided that $\max\left\{|xs|,|xu|\right\}<1$.
\end{cor}
The goal in this paper is to give potentially
useful generalizations of a number of
$q$-series and $q$-integral identities
such as the $q$-binomial
theorem or the $q$-Gauss sum,
the $q$-Chu-Vandermonde summation formula
and the Andrews-Askey integral.
Our paper is organized as follows.
In Section \ref{generalize0}, we give two formal generalizations
of the $q$-binomial theorem or the $q$-Gauss sum
by applying the $q$-difference equations.
In Section \ref{generalize1}, we derive a set of two extensions
of the $q$-Chu-Vandermonde summation formula by making use of the
$q$-difference equations. Next, in Section \ref{generalize2},
we derive two new generalizations
of the Andrews-Askey integral by means of the $q$-difference equations.
Finally, in our last section (Section \ref{conclusion}), we present
a number of concluding remarks and observations concerning the various
results which we have considered in this investigation.
\section{\bf A Set of Formal Generalizations of the $q$-Binomial Theorem}
\label{generalize0}
We begin this section by recalling the following $q$-binomial theorem
(see, for example, \cite{GasparRahman}, \cite{Slater}
and \cite{SrivastavaKarlsson}):
\begin{equation}
\sum_{n=0}^\infty\frac{(a;q)_n\,x^n}{(q;q)_n}
=\frac{(ax;q)_\infty}{(x;q)_\infty}\qquad (|x|<1).\label{qbino}
\end{equation}
In Theorem \ref{thm_10} below, we give two
generalizations of the $q$-binomial theorem
\eqref{qbino} by applying the $q$-difference equations.
\begin{thm}
\label{thm_10}
Each of the following assertions holds true$:$
\begin{align}
&\sum_{n=0}^\infty\frac{(a;q)_n \;a^{-n}}{(q;q)_n}\;\sum_{k=0}^\infty\;
\frac{(q^{-n},ax;q)_k\,q^k}{(q;q)_k}\;\sum_{j,i \geqq 0}\;
\frac{(r,f,g;q)_{j+i}\;u^{j+i}}
{(q;q)_i \;(v,w;q)_{j+i}}\;\frac{\left(\frac{c}{b},axq^k;q\right)_{j}}
{(cx,q;q)_{j}}\;(aq^k)^i\;b^j\notag \\
&\qquad =\frac{(ax;q)_\infty }{(x;q)_\infty}\; \sum_{j,i\geqq 0}
\frac{(r,f,g;q)_{j+i}\;u^{j+i}}{(q;q)_i\; (v,w;q)_{j+i}}\;
\frac{\left(\frac{c}{b},x;q\right)_{j}}{(cx,q;q)_{j}}\; b^j \qquad
(|x|<1)
\label{vlem10}
\end{align}
and
\begin{align}
&\sum_{n=0}^\infty\;\frac{(a;q)_n \;a^{-n}}{(q;q)_n}\;\sum_{k=0}^\infty\;
\frac{(q^{-n},ax;q)_k\,q^k}{(q;q)_k}\;\sum_{i,j\geqq 0} \;
\frac{(-1)^{j+i}\;q^{\left({}^i_2\right)}(r,f,g;q)_{j+i}}
{(q;q)_i\;(v,w;q)_{j+i}}\notag \\
&\qquad \qquad \qquad \cdot \frac{\left(\frac{bq^{1-k}}{a},
\frac{q}{cx};q\right)_j}{\left(\frac{q^{1-k}}{ax},q;q\right)_j}
\,(uc)^{j+i} \notag \\
& \qquad= \frac{(ax;q)_\infty}{(x;q)_\infty}\; \sum_{i,j=0}^\infty
\; \frac{(-1)^{j+i}\;q^{\left({}^i_2\right)}\;(r,f,g;q)_{j+i}}
{(q;q)_i(v,w;q)_{j+i}}\; \frac{\left(\frac{q}{bx};q\right)_j}
{\left(\frac{q}{x},q;q\right)_j} \,(bu)^{j+i}, \label{thlem10}
\end{align}
provided that both sides of $\eqref{vlem10}$ and $\eqref{thlem10}$ exist.
\end{thm}
\begin{rem}
{\rm For $u=0$ and by using the fact that
$$\sum_{k=0}^\infty\frac{(q^{-n},ax;q)_k}{(q;q)_{k}}\;q^k
={}_2\Phi_1\left[\begin{matrix}
\begin{array}{rrr}
q^{-n}, ax;\\
\\
0;
\end{array}
\end{matrix} q; q\right]=(ax)^n,
$$
the assertions $\eqref{vlem10}$ and $\eqref{thlem10}$ reduce
to $\eqref{qbino}$.}
\end{rem}
In our proof of Theorem \ref{thm_10}, we shall need Theorem \ref{thm_121}
and Corollary \ref{cor_11} below.
\begin{thm}
\label{thm_121}
Each of the following assertions holds true$:$
\begin{align}
&\mathbb{T}(r,f,g,v,w,uD_x)\left\{\frac{p_n\left(x,\frac{y}{a}\right)\;
(cx;q)_\infty}{(ax,bx;q)_\infty}\right\} \notag \\
&\qquad =\frac{(y;q)_n}{a^n}\; \frac{(cx;q)_\infty}{(ax,bx;q)_\infty}
\sum_{k=0}^\infty\frac{(q^{-n},ax;q)_k\,q^k}{(y,q;q)_k}\notag \\
&\qquad \qquad \cdot \sum_{j,i\geqq 0}\frac{(r,f,g;q)_{j+i}\;u^{j+i}}
{(q;q)_i (v,w;q)_{j+i}}\;
\frac{\left(\frac{c}{b},axq^k;q\right)_{j}}{(cx,q;q)_{j}}\;
(aq^k)^i\;b^j\label{thm10}
\end{align}
and
\begin{align}
&\mathbb{E}(r,f,g,v,w,u\theta_x)\left\{\frac{p_n\left(x,\frac{y}{a}\right)
(bx,cx;q)_\infty}{(ax;q)_\infty}\right\}\notag \\
&\qquad =\frac{(y;q)_n }{a^ n }\frac{(bx,cx;q)_\infty}{(ax;q)_\infty}\;\sum_{i,j\geqq 0}
\frac{(-1)^{j+i}\;q^{({}^i_2)}\;(r,f,g;q)_{j+i}}{(q;q)_i\;(v,w;q)_{j+i}}\notag \\
&\qquad \qquad \cdot \frac{\left(\frac{bq^{1-k}}{a},\frac{q}{cx};q\right)_j}
{\left(\frac{q^{1-k}}{ax},q;q\right)_j}
\,(uc)^{j+i},\label{2f2.16}
\end{align}
provided that $\max\left\{|ax|,|bx|,|cx|\right\}<1$.
\end{thm}
\begin{cor}
\label{cor_11}
Each of the following assertions holds true$:$
\begin{align}
&\mathbb{T}(r,f,g,v,w,uD_x)\left\{\frac{x^n(cx;q)_\infty}
{(ax,bx;q)_\infty}\right\}\notag \\
&\qquad=\frac{1}{a^n}\;\frac{(cx;q)_\infty}{(ax,bx;q)_\infty}\;\sum_{k=0}^\infty
\; \frac{(q^{-n},ax;q)_k\,q^k}{(q;q)_k}\notag \\
&\qquad \qquad \cdot \sum_{j,i\geqq 0}\;\frac{(r,f,g;q)_{j+i}\;u^{j+i}}
{(q;q)_i \;(v,w;q)_{j+i}}\;
\frac{\left(\frac{c}{b},axq^k;q\right)_{j}}{(cx,q;q)_{j}}\;(aq^k)^i\;b^j
\label{lem10}
\end{align}
and
\begin{align}
&\mathbb{E}(r,f,g,v,w,u\theta_x)\left\{\frac{x^n(cx,bx;q)_\infty}
{(ax;q)_\infty}\right\}\notag \\
&\qquad = \frac{1 }{a^ n }\frac{(bx,cx;q)_\infty}{(ax;q)_\infty}\sum_{k=0}^\infty\;
\frac{(q^{-n},ax;q)_k\,q^k}{(q;q)_k}\sum_{i,j\geqq 0}
\frac{(-1)^{j+i}q^{({}^i_2)}\;(r,f,g;q)_{j+i} }{(q;q)_i\;(v,w;q)_{j+i}}
\notag \\
&\qquad \qquad \cdot \frac{\left(\frac{bq^{1-k}}{a},\frac{q}{cx};q\right)_j}
{\left(\frac{q^{1-k}}{ax},q;q\right)_j}
\;(cu)^{j+i}, \label{ff2.16}
\end{align}
provided that $\max\left\{|ax|,|bx|,|cx|,|cu|\right\}<1$.
\end{cor}
\begin{rem}
{\rm For $y=0,$ the assertions $\eqref{thm10}$ and $\eqref{2f2.16}$
reduce to $\eqref{lem10}$ and $\eqref{ff2.16},$ respectively.}
\end{rem}
\begin{proof}[Proof of Theorem $\ref{thm_121}$]
Upon first setting $x\to ax$ in (\ref{ttf}) and then multiplying
both sides of the resulting equation by
$\frac{(cx;q)_\infty}{(bx;q)_\infty},$ we get
\begin{equation}
\label{9z}
\sum_{k=0}^\infty\;\frac{(q^{-n};q)_k\,q^k}{(y,q;q)_k}\;
\frac{(cx;q)_\infty}{(axq^k,bx;q)_\infty}
=\frac{(ax)^n\left(\frac{y}{ax};q\right)_n\;
(cx;q)_\infty}{(y;q)_n(ax,bx;q)_\infty}.
\end{equation}
Now, by applying the operator $\mathbb{T}(r,f,g,v,w,uD_x)$
to both sides of (\ref{9z}), it is easy to see that
\begin{align}
&\sum_{k=0}^\infty\frac{(q^{-n};q)_k\,q^k}{(y,q;q)_k}\;
\mathbb{T}(r,f,g,v,w,uD_x)
\left\{\frac{(cx;q)_\infty}{(axq^k,bx;q)_\infty}\right\}
\notag \\
&\qquad=\frac{a^ n}{(y;q)_n}\;
\mathbb{T}(r,f,g,v,w,uD_x)
\left\{\frac{x^n\left(\frac{y}{ax};q\right)_n(cx;q)_\infty}
{(ax,bx;q)_\infty}\right\}\notag \\
&\qquad= \frac{a^n}{(y;q)_n}\;\mathbb{T}(r,f,g,v,w,uD_x)
\left\{\frac{p_n\left(x,\frac{y}{a}\right)(cx;q)_\infty}
{(ax,bx;q)_\infty}\right\}.\label{2.16}
\end{align}
The proof of the first assertion (\ref{thm10}) of Theorem \ref{thm_121}
is completed by using the relation (\ref{gLEM})
in the left-hand side of (\ref{2.16}). \\
The proof of the second assertion (\ref{2f2.16}) of Theorem \ref{thm_121}
is much akin to that of the first assertion (\ref{thm10}). The details
involved are, therefore, being omitted here.
\end{proof}
\begin{proof}[Proof of Theorem $\ref{thm_10}$]
Multiplying both sides of (\ref{qbino}) by
$\frac{(cx;q)_\infty}{(bx;q)_\infty}$, we find that
\begin{equation}
\sum_{n=0}^\infty\;\frac{(a;q)_n}{(q;q)_n}\;
\frac{x^n\,(cx;q)_\infty}{(ax,bx;q)_\infty}
=\frac{(cx;q)_\infty}{(bx,x;q)_\infty}.\label{2qbino}
\end{equation}
Eq. (\ref{vlem10}) can be written equivalently as follows:
\begin{align}
&\sum_{n=0}^\infty\;\frac{(a;q)_n}{(q;q)_n}\cdot
\frac{a^{-n}(cx;q)_\infty}{(bx,ax;q)_\infty}
\;\sum_{k=0}^\infty\;\frac{(q^{-n},ax;q)_k\,q^k}{(q;q)_k}\notag \\
&\qquad \qquad \qquad \cdot \sum_{j,i\geqq 0}\;\frac{(r,f,g;q)_{j+i}\;
u^{j+i}}{(q;q)_i (v,w;q)_{j+i}}
\frac{\left(\frac{c}{b},axq^k;q\right)_{j}}{(cx,q;q)_{j}}\;(aq^k)^i\;b^j
\notag \\
& \qquad=\frac{(cx;q)_\infty}{(bx,x;q)_\infty}\;
\sum_{j,i=0}\frac{(r,f,g;q)_{j+i}\;u^{j+i}}{(q;q)_i (v,w;q)_{j+i}}
\cdot \frac{\left(\frac{c}{b},x;q\right)_{j}}{(cx,q;q)_{j}}\; b^j .
\label{rvlem10}
\end{align}
If we use $F(r,f,g,v,w,x,u)$ to denote the right-hand side
of (\ref{rvlem10}),
it is easy to verify that $F(r,f,g,v,w,x,u)$ satisfies
(\ref{thm2_1}). By applying (\ref{thm2_1.2}), we thus find that
\begin{align}
\label{efss}
F(r,f,g,v,w,x,u)&=\mathbb{T}(r,f,g,v,w,uD_x)\Big\{F(r,f,g,v,w,x,0)\Big\}
\notag \\
&=\mathbb{T}(r,f,g,v,w,uD_x)\left\{\frac{(cx;q)_\infty}{(bx,x;q)_\infty}\right\}
\notag \\
&=\mathbb{T}(r,f,g,v,w,uD_x)\left\{\sum_{n=0}^\infty\;\frac{(a;q)_n}
{(q;q)_n}\; \frac{x^n\,(cx;q)_\infty}{(ax,bx;q)_\infty} \right\}
\notag \\
&=\sum_{n=0}^\infty\;\frac{(a;q)_n}{(q;q)_n}\;
\mathbb{T}(r,f,g,v,w,uD_x)\left\{\frac{x^n\,(cx;q)_\infty}{(ax,bx;q)_\infty}
\right\}.
\end{align}
The proof of the first assertion (\ref{vlem10}) of Theorem \ref{thm_10}
can now be completed by making use of the relation (\ref{lem10}).
The proof of the second assertion (\ref{thlem10}) of Theorem \ref{thm_10}
is much akin to that of the first assertion (\ref{vlem10}). The details
involved are, therefore, being omitted here.
\end{proof}
\section{\bf Two Generalizations of the $q$-Chu-Vandermonde Summation Formula}
\label{generalize1}
The $q$-Chu-Vandermonde summation formula is recalled here as follows
(see, for example, \cite{GEAndrews1986} and \cite{GasparRahman}):
\begin{equation} \label{ttf}
{}_2\Phi_1\left[\begin{matrix}
\begin{array}{rrr}
q^{-n},x;\\
\\
y;
\end{array}
\end{matrix} q; q\right]=\frac{\left(\frac{y}{x};q\right)_n}
{(y;q)_n}\;x^n
\qquad \big(n\in \mathbb{N}_0:=\mathbb{N}\cup\{0\}\big).
\end{equation}
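We note in passing that (\ref{ttf}) is easily confirmed numerically; the following sketch evaluates both sides of the terminating sum for arbitrary parameter values.
\begin{verbatim}
import numpy as np

def qpoch(a, q, n):
    """q-shifted factorial (a; q)_n."""
    return np.prod([1 - a * q**k for k in range(n)])

q, x, y, n = 0.3, 0.7, 0.45, 5

lhs = sum(qpoch(q**-n, q, k) * qpoch(x, q, k) * q**k
          / (qpoch(y, q, k) * qpoch(q, q, k)) for k in range(n + 1))
rhs = qpoch(y / x, q, n) / qpoch(y, q, n) * x**n
print(np.isclose(lhs, rhs))
\end{verbatim}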
In this section, we give two generalizations of the
$q$-Chu-Vandermonde summation formula \eqref{ttf}
by applying $q$-difference equations.
\begin{thm}
\label{thm_11}
The following assertion holds true for $y \neq 0$$:$
\begin{align}
&\sum_{k=0}^{n} \frac{(q^{-n},x;q)_k\,q^k}{(q,y;q)_k}
\;{}_3\Phi_2\left[\begin{matrix}
\begin{array}{rrr}
r,f,g;\\
\\
v,w;
\end{array}
\end{matrix} q; uq^k\right] \notag \\
&\qquad=\frac{x^n\left(\frac{y}{x};q\right)_n}{(y;q)_n}\; \sum_{k,j\geqq 0}
\frac{(r,f,g;q)_{k+j}}{(q;q)_{j}\;(v,w;q)_{k+j}}\;
\frac{\left(\frac{q^{1-n}}{y},\frac{qx}{y};q\right)_{k}}
{\left(\frac{xq^{1-n}}{y},q;q\right)_k}\; u^{k+j}\left(\frac{q}{y}\right)^j.
\label{qCh}
\end{align}
\end{thm}
We next derive another generalization of the $q$-Chu-Vandermonde
summation formula \eqref{ttf} as follows.
\begin{thm}
\label{thm_1f1}
For $m\in\mathbb{N}_0$ and $y\neq 0$$,$
it is asserted that
\begin{align}
{}_2\Phi_1\left[\begin{matrix}
\begin{array}{rrr}
q^{-n},x;\\
\\
y;
\end{array}
\end{matrix} q; q^{1+m}\right]
=\frac{x^n\left(\frac{y}{x};q\right)_n}{(y;q)_n}
\;\sum_{j= 0}^m\begin{bmatrix}
m \\
j \\
\end{bmatrix}_q \;\frac{\left(\frac{q^{1-n}}{y},\frac{qx}{y};q\right)_{m-j}}
{\left(\frac{xq^{1-n}}{y};q\right)_{m-j}} \; \left(\frac{q}{y}\right)^j.
\label{AqCh}
\end{align}
\end{thm}
\begin{rem}
{\rm For $u=0$ or $m=0,$ the assertion $\eqref{qCh}$ or $\eqref{AqCh}$ reduces to
the $q$-Chu-Vandermonde summation formula $\eqref{ttf}$.
Furthermore, if we first set $k+j=m$ and then extract the coefficients
of $\displaystyle \frac{(r,f,g;q)_m}{(v,w;q)_m}u^m$
from the two members of the assertion $\eqref{qCh}$ of Theorem $\ref{thm_11},$
we obtain the transformation formula (\ref{AqCh}), which leads us to
the $q$-Chu-Vandermonde summation formula $\eqref{ttf}$ when $m=0$.
Also, upon putting $n=0,$ the assertion $\eqref{AqCh}$
reduces to the following identity$:$}
\begin{align}
\sum_{j=0}^m \begin{bmatrix}
m \\
j \\
\end{bmatrix}_q\;\left(\frac{q}{y};q\right)_{m-j}\;
\left(\frac{q}{y}\right)^j= 1\qquad (y\neq 0).
\end{align}
\end{rem}
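For instance, the $m=1$ case of the last identity reads
\begin{equation*}
\left(\frac{q}{y};q\right)_1+\frac{q}{y}=1-\frac{q}{y}+\frac{q}{y}=1\,,
\end{equation*}
as asserted.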
\begin{proof}[Proof of Theorem $\ref{thm_11}$]
We first write (\ref{ttf}) in the following form:
\begin{equation}
\sum_{k=0}^{n}\; \frac{(q^{-n};q)_k\,q^k}{(y,q;q)_k}
\;\frac{1}{(xq^k;q)_\infty}
=\frac{(-1)^{n}\;y^n\; q^{\left({}^{n}_{2}\right)}}
{(y;q)_n}\frac{\left(\frac{xq^{1-n}}{y};q\right)_\infty}
{\left(x,\frac{qx}{y};q\right)_\infty}.
\end{equation}
Eq. (\ref{qCh}) can be written equivalently as follows:
\begin{align}
&\sum_{k=0}^\infty\;\frac{(q^{-n};q)_k\,q^k}{(q,y;q)_k}\cdot
\frac{1}{(xq^k;q)_\infty}\; {}_3\Phi_2\left[\begin{matrix}
\begin{array}{rrr}
r,f,g;\\
\\
v,w;
\end{array}
\end{matrix} q; uq^k\right] \notag \\
&\qquad=\frac{(-1)^{n}\;y^n \;q^{\left({}^{n}_{2}\right)}}
{(y;q)_n}\;\frac{\left(\frac{xq^{1-n}}{y};q\right)_\infty}
{\left(x,\frac{qx}{y};q\right)_\infty}\cdot\sum_{i,j\geqq 0}\;
\frac{(r,f,g;q)_{j+i}\;u^{j+i}}{(q;q)_i\;(v,w;q)_{j+i}}\;
\frac{\left(\frac{q^{1-n}}{y},\frac{qx}{y};q\right)_{j}}
{\left(\frac{xq^{1-n}}{y},q;q\right)_{j}}\;\left(\frac{q}{y}\right)^i.
\label{qdqdqq}
\end{align}
If we use $G(r,f,g,v,w,x,u)$ to denote the right-hand side of (\ref{qdqdqq}),
it is easy to observe that $G(r,f,g,v,w,x,u)$ satisfies (\ref{thm2_1}). By
using (\ref{thm2_1.2}), we obtain
\begin{align}
\label{efss1}
G(r,f,g,v,w,x,u)&=\mathbb{T}(r,f,g,v,w,uD_x)\Big\{G(r,f,g,v,w,x,0) \Big\}
\notag \\
&=\mathbb{T}(r,f,g,v,w,uD_x)\left\{\frac{(-1)^{n}\;y^n\; q^{\left({}^{n}_{2}\right)}}{(y;q)_n}
\;\frac{\left(\frac{xq^{1-n}}{y};q\right)_\infty}{\left(x,\frac{qx}{y};q\right)_\infty}\right\}
\notag \\
&=\mathbb{T}(r,f,g,v,w,uD_x)\left\{\sum_{k=0}^\infty\frac{(q^{-n};q)_k\,q^k}
{(y,q;q)_k}\; \frac{1}{(xq^k;q)_\infty} \right\}
\notag \\
&=\sum_{k=0}^{n}\frac{(q^{-n};q)_k\,q^k}{(y,q;q)_k} \;
\mathbb{T}(r,f,g,v,w,uD_x)\left\{\frac{1}{(xq^k;q)_\infty}
\right\}.
\end{align}
Finally, by using the fact that
\begin{equation}
\mathbb{T}(r,f,g,v,w,uD_x)\left\{\frac{1}{(xq^k;q)_\infty}\right\}
=\frac{1}{(xq^k;q)_\infty} \;
{}_3\Phi_2\left[\begin{matrix}
\begin{array}{rrr}
r,f,g;\\
\\
v,w;
\end{array}
\end{matrix}
q; uq^k\right], \label{fgLEM}
\end{equation}
and after some simplification involving $\frac{1}{(x;q)_\infty}$,
we get the left-hand side of (\ref{qCh}).
\end{proof}
\section{\bf New Generalizations of the Andrews-Askey Integral}
\label{generalize2}
The following famous formula is known as the
Andrews-Askey integral (see, for details, \cite{GEA-RA1981}).
It was derived from Ramanujan's celebrated
${}_1\Psi_1$-summation formula.
\begin{pro}{\rm (see \cite[Eq. (2.1)]{GEA-RA1981})}.
For $\max\left\{|ac|,|ad|,|bc|,|bd|\right\}<1,$
it is asserted that
\begin{align}
\label{eqd}
\int_c^d\frac{\left(\frac{qt}{c},\frac{qt}{d};q\right)_\infty}
{(at,bt;q)_\infty}\; {\rm d}_qt
=\frac{d(1-q)\left(q,\frac{dq}{c},\frac{c}{d},abcd;q\right)_\infty}
{(ac,ad,bc,bd;q)_\infty}.
\end{align}
\end{pro}
The Andrews-Askey integral \eqref{eqd} is indeed an important
formula in the theory of $q$-series (see \cite{Liu97}).
Recently, Cao \cite{JianCao2013} gave the following two generalizations of
the Andrews-Askey integral \eqref{eqd} by the method based upon
$q$-difference equations.
\begin{pro}{\rm (see \cite[Theorems 14 and 15]{JianCao2013})}
\label{aznzzT}
For $N\in\mathbb{N}$ and $r=q^{-N},$ suppose that
$$\max\left\{|ac|,|ad|,|bc|,|bd|,\left|\frac{qwr}{v}\right|,
\left|\frac{q}{v}\right|\right\}<1.$$
Then
\begin{align}
\label{hdqd}
&\int_c^d\; \frac{\left(\frac{qt}{c},\frac{qt}{d};q\right)_\infty}
{(at,bt;q)_\infty}
\, {}_4\Phi_2\left[\begin{matrix}
\begin{array}{rrr}
r,w,\frac{c}{t},abcd;\\
\\
ac,\frac{qwr}{v};
\end{array}
\end{matrix} q; \frac{qt}{vbcd}\right]\; {\rm d}_qt\notag \\
&\qquad \quad =\frac{d(1-q)\left(q,\frac{dq}{c},\frac{c}{d},abcd,
\frac{qw}{v},\frac{qr}{v};q\right)_\infty}
{\left(ac,ad,bc,bd,\frac{qwr}{v},\frac{q}{v};q\right)_\infty}
\; {}_2\Phi_1\left[\begin{matrix}
\begin{array}{rrr}
w,r;\\
\\
v;
\end{array}
\end{matrix} q; \frac{q}{bc}\right].
\end{align}
Furthermore$,$ for $N\in\mathbb{N}$ and $r=q^{-N},$ suppose that
$$\max\left\{|ac|,|ad|,|bc|,|bd|,\left|\frac{v}{w}\right|,
\left|\frac{v}{r}\right|\right\}<1.$$
Then
\begin{align}
\label{1hdqd}
&\int_c^d\;\frac{\left(\frac{qt}{c},\frac{qt}{d};q\right)_\infty}
{(at,bt;q)_\infty}
\, {}_4\Phi_2\left[\begin{matrix}
\begin{array}{rrr}
r,w,\frac{c}{t},\frac{q}{ad};\\
\\
\frac{q}{at},\frac{qrw}{v};
\end{array}
\end{matrix} q; q\right]\; {\rm d}_qt \notag \\
&\qquad \quad=\frac{d(1-q)\left(q,\frac{dq}{c},
\frac{c}{d},abcd,\frac{v}{wr},v; q\right)_\infty}
{\left(ac, ad, bc, bd,\frac{v}{w},\frac{v}{r};q\right)_\infty}
\; {}_2\Phi_1\left[\begin{matrix}
\begin{array}{rrr}
w,r;\\
\\
v;
\end{array}
\end{matrix} q; \frac{vbc}{wr}\right].
\end{align}
\end{pro}
In this section, we give the following two generalizations of the
Andrews-Askey integral \eqref{eqd} by using the method
of $q$-difference equations.
\begin{thm}
\label{fgazzzT}
For $M\in\mathbb{N}$ and $r=q^{-M},$ suppose that
$$\max\left\{|ac|,|ad|,|bc|,|bd|,\left|\frac{q}{bc}\right|\right\}<1.$$
Then
\begin{align}
\label{gqdqd}
&\int_c^d\;\frac{\left(\frac{qt}{c},\frac{qt}{d};q\right)_\infty}
{(at,bt;q)_\infty}
\;\sum_{k=0}^\infty
\; \frac{\left(r,f,g,\frac{c}{t},abcd;q\right)_{k}\;
\left(\frac{qt}{bcd}\right)^k}
{(v,w,ac,q;q)_{k}}\; {}_3\Phi_2\left[\begin{matrix}
\begin{array}{rrr}
rq^k,fq^k,gq^k;\\
\\
vq^k,wq^k;
\end{array}
\end{matrix} q; q\right] \; {\rm d}_qt \notag \\
&\qquad \quad =\frac{d(1-q)\left(q,\frac{dq}{c},
\frac{c}{d},abcd; q\right)_\infty}
{(ac, ad, bc, bd;q)_\infty} \;
{}_3\Phi_2\left[\begin{matrix}
\begin{array}{rrr}
r,f,g;\\
\\
v,w;
\end{array}
\end{matrix} q; \frac{q}{bc}\right].
\end{align}
\end{thm}
\begin{thm}
\label{vT}
For $M\in\mathbb{N}$ and $r=q^{-M},$ suppose that
$\max\left\{|ac|,|ad|,|bc|,|bd|\right\}<1$.
Then
\begin{align}
\label{vdqd}
&\int_c^d\;\frac{\left(\frac{qt}{c},\frac{qt}{d};q\right)_\infty}
{(at,bt;q)_\infty}
\;\sum_{k=0}^\infty
\; \frac{\left(r,f,g,\frac{c}{t},\frac{q}{ad};q\right)_k\,
\left(\frac{vw}{rfg}\right)^k}
{\left(v,w,\frac{q}{at},q;q\right)_k}
\; {}_3\Phi_3\left[\begin{matrix}
\begin{array}{rrr}
rq^k,fq^k,gq^k;\\
\\
vq^k,wq^k,0;
\end{array}
\end{matrix} q;-\frac{vw}{rfg}\right]\; {\rm d}_qt \notag \\
& \qquad \quad =\frac{d(1-q)\left(q,\frac{dq}{c},
\frac{c}{d},abcd; q\right)_\infty}
{(ac, ad, bc, bd;q)_\infty}\;{}_3\Phi_3\left[\begin{matrix}
\begin{array}{rrr}
r,f,g;\\
\\
v,w,0;
\end{array}
\end{matrix} q; \frac{vwbc}{rfg}\right].
\end{align}
\end{thm}
\begin{rem}
{\rm For $r=1,$ both $\eqref{gqdqd}$ and $\eqref{vdqd}$
reduce to $\eqref{eqd}$.
Moreover$,$ for $r=q^{-N},$ $g=w=0$ and $u=\frac{q}{bcd},$
the assertion (\ref{gqdqd}) of
Theorem \ref{fgazzzT} reduces to $\eqref{hdqd}$.
For $r=q^{-N},$ $g=w=0$ and $u=\frac{v}{rfbcd},$
the assertion $\eqref{vdqd}$ of
Theorem $\ref{vT}$ reduces to $\eqref{1hdqd}$.}
\end{rem}
\begin{proof}[Proof of Theorems $\ref{fgazzzT}$ and $\ref{vT}$]
Eq. (\ref{gqdqd}) can be written equivalently as follows:
\begin{align}
\label{qdqdqq1}
&\int_c^d\;\frac{\left(\frac{qt}{c},\frac{qt}{d};q\right)_\infty}
{(bt;q)_\infty} \cdot \frac{(ac;q)_\infty}{(at, abcd;q)_\infty}\;
\sum_{k=0}^\infty\;\frac{\left(r,f,g,\frac{c}{t},abcd;q\right)_{k}
\left(\frac{qt}{bcd}\right)^k}{(v,w,ac,q;q)_{k}}\notag \\
&\qquad \qquad \qquad \cdot{}_3\Phi_2\left[\begin{matrix}
\begin{array}{rrr}
rq^k,fq^k,gq^k;\\
\\
vq^k,wq^k;
\end{array}
\end{matrix} q; q\right] \; {\rm d}_qt\notag \\
&\qquad \quad = \frac{d(1-q)\left(q,\frac{dq}{c},
\frac{c}{d};q\right)_\infty}
{(bc, bd;q)_\infty}\cdot\frac{1}{(ad;q)_\infty}\;
{}_3\Phi_2\left[\begin{matrix}
\begin{array}{rrr}
r,f,g;\\
\\
v,w;
\end{array}
\end{matrix} q; \frac{q}{bc}\right].
\end{align}
If we use $H(r,f,g,v,w,a,u)$ to denote the right-hand side of (\ref{qdqdqq1}),
it is easy to see that $H(r,f,g,v,w,a,u)$ satisfies (\ref{thm2_1})
with $u=\frac{q}{bcd}$. By making use of
(\ref{thm2_1.2}), we thus find that
\begin{align*}
H(r,f,g,v,w,a,u)&=\mathbb{T}(r,f,g,v,w,uD_a)\Big\{H(r,f,g,v,w,a,0)\Big\}
\notag \\
&=\mathbb{T}\left(r,f,g,v,w,\frac{q}{bcd}\;D_a\right)
\Big\{H(1,f,g,v,w,a,u) \Big\} \notag \\
&=\mathbb{T}\left(r,f,g,v,w,\frac{q}{bcd}\;D_a\right)
\left\{\frac{d(1-q)\left(q,\frac{dq}{c},\frac{c}{d}; q\right)_\infty}
{(bc, bd;q)_\infty}\cdot\frac{1}{(ad;q)_\infty} \right\}
\notag \\
&=\mathbb{T}\left(r,f,g,v,w,\frac{q}{bcd}\;D_a\right)
\left\{\int_c^d\;\frac{\left(\frac{qt}{c},\frac{qt}{d};q\right)_\infty}
{(bt;q)_\infty} \cdot \frac{(ac;q)_\infty}{(at, abcd;q)_\infty}\;
{\rm d}_qt \right\}
\notag \\
&=\int_c^d\; \frac{\left(\frac{qt}{c},\frac{qt}{d};q\right)_\infty}
{(bt;q)_\infty} \cdot
\mathbb{T}\left(r,f,g,v,w,\frac{q}{bcd}\;D_a\right)
\left\{\frac{(ac;q)_\infty}{(at,abcd;q)_\infty}
\right\} {\rm d}_qt.
\end{align*}
Now, by applying the fact that
\begin{align*}
&\mathbb{T}\left(r,f,g,v,w,\frac{q}{bcd}\;D_a\right)
\left\{\frac{(ac;q)_\infty}{(at,abcd;q)_\infty}\right\}
\notag \\
&\qquad \quad=\frac{(ac;q)_\infty}{(at,abcd;q)_\infty}\;
\sum_{k=0}^\infty\;
\frac{\left(r,f,g,\frac{c}{t},abcd;q\right)_{k}\;
\left(\frac{qt}{bcd}\right)^k}
{(v,w,ac,q;q)_{k}}\; {}_3\Phi_2\left[\begin{matrix}
\begin{array}{rrr}
rq^k,fq^k,gq^k;\\
\\
vq^k,wq^k;
\end{array}
\end{matrix} q; q\right],
\end{align*}
we get the left-hand side of (\ref{gqdqd}).
The proof of the assertion (\ref{vdqd}) of Theorem \ref{vT}
is much akin to that of the assertion (\ref{gqdqd})
of Theorem \ref{fgazzzT}. The details involved are,
therefore, being omitted here.
The proofs of Theorems \ref{fgazzzT} and \ref{vT}
are thus completed.
\end{proof}
\section{\bf Concluding Remarks and Observations}
\label{conclusion}
In our present investigation, we have introduced a set of
two $q$-operators \break $\mathbb{T}(a,b,c,d,e,yD_x)$ and
$\mathbb{E}(a,b,c,d,e,y\theta_x)$ and applied them
to derive two potentially useful
generalizations of the $q$-binomial theorem, two extensions of
the $q$-Chu-Vandermonde summation formula and two new generalizations
of the Andrews-Askey integral by means of $q$-difference equations.
We have also briefly described relevant
connections of various special cases
and consequences of our main results
with several known results.
It is believed that the $q$-series and $q$-integral identities, which we
have presented in this paper, as well as the various related recent works
cited here, will provide encouragement and motivation for further research
on the topics that are dealt with and investigated in this paper.\\
\medskip
\noindent
{\bf Conflicts of Interest:} The authors declare that they
have no conflicts of interest.\\
\medskip
\section{Introduction}
{ \itshape
The general solution of spherically symmetric self-dual Yang-Mills equations discovered
by Leznov and Saveliev two decades ago has led to extraordinary developments. I met Misha for the
first time in 1992 when this work had already proven to be so important for two dimensional
conformal/integrable systems. We immediately started to collaborate and have done
so ever since. Unlike
many of his countrymen he felt that he should not leave his country for good, and fought for
his family and himself while keeping a remarkable enthusiasm for research. Working with Misha
has been a wonderful experience which terminated so abruptly! I will always remember our excited
and friendly discussions, his kindness and enthusiasm, his fantastic knowledge of the scientific
literature! We, at
\'Ecole Normale, were lucky enough to invite him for several extended visits which were extremely
fruitful. Misha and I also met in other places, but altogether much too rarely, and exchanged
uncountably many email messages. Now I am sorry for the many occasions to meet him that I had to
decline. In particular I never found time to visit him in Russia. Our collaboration on the
present subject was entirely by e-mail. Our last encounter in person was in Cambridge (UK) at the
beginning of March 1997. At that time I thought as a matter of course that we would meet soon again,
but this is not so! In the large number of email messages we exchanged since then, it is clear that
he was under a great pressure, but yet he was always coming up with exciting ideas, calculations and
so on.
My other regret is that, although we were very good friends, we seldom had time to socialise
outside research. I will always remember these few very warm and friendly encounters, and
especially when his Svetlana's (as he used to say) were present.
M. Saveliev was great both as a scientist and as a
human being. He was obviously such a good father, husband, friend!}
\vglue 1 true cm
In recent times we turned\cite{GS98} to the classical integration of theories in more than two
dimensions with local extended supersymmetries. Our motivation was twofold. On the one hand this
problem is very important for the recent developments in duality and M theory. On the other hand,
the recent advances initiated by Seiberg and Witten indicate that these theories are in many ways
higher dimensional analogues of two dimensional conformal/integrable systems, so that progress may be
expected. Since fall 1997, we have studied super Yang-Mills theories in ten dimensions. There, it
was shown by Witten\cite{W86} that the field equations are equivalent to flatness conditions. This
is a priori similar to well known basic ones of Toda theories, albeit no real progress could
be made at that time, since the corresponding Lax type equations involve an arbitrary light like
vector which plays the role of a spectral parameter. At first, we reformulated the field
equations in a way which is similar to a super version of the higher dimensional
generalisations of Toda theories developed by Razumov and Saveliev\cite{RS97}, where the
Yang-Mills gauge algebra is extended to a super one. This has not yet been published since,
contrary to our initial hope, the two types of theories do not seem to be equivalent. I hope to
return to this problem in a near future. In the mean time, we found the existence of an on-shell
gauge, in super Yang-Mills where the field equations simplify tremendously and where the first
similarity with self-dual Yang-Mills in four dimensions came out\cite{GS98}. This directly led to
the present progress.
As is well known, super Yang-Mills theory in ten dimensions just describes a standard non abelian
gauge field coupled with a charged Majorana-Weyl spinor field in the adjoint representation of
the gauge group. The dynamics
is thus specified by the standard action
\begin{equation}
S=\int d^{10} x {\> \rm Tr }
\left\{
-{1\over 4}Y_{mn}Y^{mn}
+{1\over 2}\bar \phi\left(\Gamma^m \partial_m \phi+\left[X_m,\, \phi\right]_- \right)\right\},
\label{action}
\end{equation}
\begin{equation}
Y_{mn}=\partial_mX_n-\partial_nX_m +\left[X_m,\, X_n\right]_-.
\label{F0def}
\end{equation}
The notations are as follows\footnote{They are essentially the same as in ref.\cite{GS98}.}:
$X_m(\underline x)$ is the vector potential, $\phi(\underline x) $ is the Majorana-Weyl spinor. Both
are matrices in the adjoint representation of the gauge group
${\bf G}$. Latin indices
$m=0,\ldots 9$ describe Minkowski components. Greek indices $\alpha=1,\ldots 16$ denote
chiral spinor components. We will use the superspace formulation with odd coordinates
$\theta^\alpha$. The super vector potentials, which are valued in the gauge group, are noted
$A_m\left(\underline x,\underline \theta\right)$, $A_\alpha\left(\underline x,\underline
\theta\right)$. As shown in refs. \cite{W86}, \cite{AFJ88}, we may
remove all the additional fields and uniquely reconstruct the physical fields $X_m$, $\phi$ from
$A_m$ and $A_\alpha$ if we impose the condition $\theta^\alpha A_\alpha=0$ on the latter.
With this condition, it was shown in refs. \cite{W86}, \cite{AFJ88}, that the field equations
derived from the Lagrangian \ref{action} are equivalent to the flatness conditions
\begin{equation}
{\cal F}_{\alpha \beta}=0,
\label{flat}
\end{equation}
where ${\cal F}$ is the supercovariant curvature
\begin{equation}
{\cal F}_{\alpha \beta}=D_\alpha A_\beta+D_\beta A_\alpha+\left[A_\alpha,\, A_\beta\right]+
2\left(\sigma^m\right)_{\alpha\beta}A_m.
\label{curdef}
\end{equation}
$D_\alpha$ denote the superderivatives
\begin{equation}
D_\alpha=\partial_\alpha-\left(\sigma^m\right)_{\alpha \beta}
\theta^\beta {\partial_m},
\label{sddef}
\end{equation}
and we use the Dirac matrices
\begin{equation}
\Gamma^m=\left(\begin{array}{cc}
0_{16\times16}&\left(\left(\sigma^m\right)^{\alpha\beta}\right)\\
\left(\left(\sigma^m\right)_{\alpha\beta}\right)&0_{16\times16}
\end{array}\right),\quad
\Gamma^{11}= \left(\begin{array}{cc}
1_{16\times16}&0\\0&-1_{16\times16}\end{array}\right).
\label{real1}
\end{equation}
Throughout the paper, it will be convenient to use the following particular realisation:
\begin{equation}
\left(\left(\sigma^{9}\right)^{\alpha\beta}\right)=
\left(\left(\sigma^{9}\right)_{\alpha\beta}\right)=
\left(\begin{array}{cc}
-1_{8\times 8}&0_{8\times 8}\\
0_{8\times 8}&1_{8\times 8}
\end{array}\right)
\label{real2}
\end{equation}
\begin{equation}
\left(\left(\sigma^{0}\right)^{\alpha\beta}\right)=-
\left(\left(\sigma^{0}\right)_{\alpha\beta}\right)=
\left(\begin{array}{cc}
1_{8\times 8}&0_{8\times 8}\\
0_{8\times 8}&1_{8\times 8}
\end{array}\right)
\label{real3}
\end{equation}
\begin{equation}
\left(\left(\sigma^{i}\right)^{\alpha\beta}\right)=
\left(\left(\sigma^{i}\right)_{\alpha\beta}\right)=\left(\begin{array}{cc}
0&\gamma^i_{\mu,\overline \nu}\\
\left(\gamma^{i\, T}\right)_{\nu,\overline \mu}&0
\end{array}\right),\quad i=1,\ldots 8.
\label{real4}
\end{equation}
The convention for greek letters is as follows: Letters from the beginning of the alphabet run from
1 to 16. Letters from the middle of alphabet run from 1 to 8. In this way, we shall separate
the two spinor representations of $O(8)$ by rewriting $\alpha_1,\ldots, \alpha_{16} $ as
$\mu_1,\ldots, \mu_8, \overline \mu_1,\ldots, \overline \mu_8$.
Using the above explicit realisations one sees that the equations to solve take the form
\begin{eqnarray}
F_{\mu \nu}\equiv D_\mu A_\nu+D_\nu A_\mu +\left[A_\mu,\,
A_\nu\right]_+&=&2\delta_{\mu\nu}\left(A_0+A_9\right)\label{dynuu}\\
F_{\overline \mu \overline \nu}\equiv D_{\overline \mu} A_{\overline \nu}+D_{\overline \nu}
A_{\overline \mu} +\left[A_{\overline \mu},\, A_{\overline \nu}\right]_+&=&
2\delta_{{\overline \mu}{\overline \nu}}\left(A_0-A_9\right)\label{dyndd}\\
F_{ \mu \overline \nu}\equiv D_{ \mu} A_{\overline \nu}+D_{\overline \nu} A_{ \mu}
+\left[A_{\mu},\, A_{\overline \nu}\right]_+&=&-2\sum_{i=1}^8 A_i\gamma^i_{\mu,\overline
\nu}\label{dynud}
\end{eqnarray}
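To see how these equations arise, note that in the realisation
\ref{real2}--\ref{real4} the only non-vanishing blocks of
$\left(\sigma^m\right)_{\alpha\beta}$ are
$$
\left(\sigma^0\right)_{\mu\nu}=\left(\sigma^0\right)_{\overline\mu\overline\nu}
=-\delta_{\mu\nu},\qquad
\left(\sigma^9\right)_{\mu\nu}=-\delta_{\mu\nu},\qquad
\left(\sigma^9\right)_{\overline\mu\overline\nu}=\delta_{\mu\nu},\qquad
\left(\sigma^i\right)_{\mu\overline\nu}=\gamma^i_{\mu\overline\nu},
$$
so that inserting these blocks into ${\cal F}_{\alpha\beta}=0$, with
${\cal F}$ given by equation \ref{curdef}, reproduces equations
\ref{dynuu}--\ref{dynud}.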
In my last paper with M. Saveliev \cite{GS98}, these flatness conditions in superspace were used
to go to an on-shell light-cone gauge where half of the superfields vanish. After reduction to
$(1+1)$ dimensions, the non-linear part of the equations was transformed into equations for a scalar
superfield which are (super) analogues of the so called Yang equations which were much studied in
connection with solutions of self-dual Yang-Mills equations in four dimensions. The main
differences between the two type of relations is that derivatives are now replaced by
superderivatives, that there are sixteen equations instead of four, and that the indices are paired
differently. Nevertheless, it was found that these novel features are precisely such that the
equations may be solved by methods very similar to the ones developed in connection with self-dual
Yang-Mills in four dimensions. The aim of the present paper is to push this analogy much further,
by deriving the analogues of the Lax pair of Belavin Zakharov\cite{BZ78} which was instrumental for
deriving multi-instanton solutions at the end of the seventies.
\section{The Lax representation}
The original theory is $O(9,1)$ invariant, but the choice of Dirac matrices just summarized is
covariant only under a particular $O(8)$ subgroup. The Lax representation will come out after
picking out a particular $O(7)$ subgroup of the latter. This is done simply by remarking that we may
choose one $\gamma^i$ to be the unit matrix, in which case the others are antisymmetric and obey
the $O(7)$ Dirac algebra. This is so, for instance in the following explicit representation of the
$O(8)$ gamma matrices, where $\gamma^8$ is equal to one, which we will use throughout:
\begin{eqnarray}
\gamma^1= \tau _1\otimes \tau _3\tau _1\otimes {\bf 1} \quad &\quad
\gamma^5=\tau _3\otimes \tau _3\tau _1\otimes {\bf 1} \nonumber\\
\gamma^2= {\bf 1}\otimes \tau _1\otimes \tau _3\tau _1 \quad &\quad
\gamma^6= {\bf 1}\otimes \tau _3\otimes \tau _3\tau _1 \nonumber\\
\gamma^3=\tau _3\tau _1 \otimes {\bf 1}\otimes \tau _1 \quad &\quad
\gamma^7= \tau _3\tau _1\otimes {\bf 1} \otimes \tau _3 \nonumber\\
\gamma^4= \tau _3\tau _1\otimes \tau _3\tau _1\otimes \tau _3\tau _1 \quad &\quad
\gamma^8={\bf 1}\otimes {\bf 1}\otimes {\bf 1}.
\label{gamdef}
\end{eqnarray}
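One readily checks that, since $\tau_3\tau_1$ is antisymmetric and squares to
$-{\bf 1}$, the matrices $\gamma^1,\ldots,\gamma^7$ above are antisymmetric,
mutually anticommuting and square to $-{\bf 1}$, while $\gamma^8={\bf 1}$ is
symmetric. Hence
$$
\gamma^i\left(\gamma^j\right)^T+\gamma^j\left(\gamma^i\right)^T
=2\delta^{ij}\,{\bf 1}_{8\times 8},\qquad i,j=1,\ldots 8,
$$
which is the $O(8)$ Clifford algebra required by the realisation \ref{real4}.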
With this choice, it follows from equations \ref{dynuu}--\ref{dynud} that
\begin{equation}
F_{\mu \nu}=2\delta_{\mu \nu}\left(A_0+A_9\right),\quad
F_{\overline \mu \overline \nu}=2\delta_{\overline \mu \overline \nu}\left(A_0-A_9\right),\quad
F_{\mu \overline \nu}+F_{ \nu \overline \mu}=-4\delta_{ \mu \nu}A_8 .
\label{symdyn}
\end{equation}
We have symmetrized the mixed (last) equations so that the right-hand sides only involve
Kronecker delta's in the spinor indices. By taking $\gamma^8$ to be the unit
matrix, we have introduced a mapping between overlined and non overlined indices.
Accordingly, in the previous equation and hereafter, whenever we write an overlined index and non
overlined one with the same letter (such as $\mu$ and $\overline \mu$) we mean that they are
numerically equal, so that
$\gamma^8_{\mu \overline \mu}=1$. Next, in parallel with what was done for self-dual Yang-Mills
in four dimensions, it is convenient to go to complex (super) coordinates.
Thus we introduce, with $i$ the square root of minus one\footnote{For the new symbols, the group
theoretical meaning of the fermionic indices $\mu$, $\overline \mu$ is lost. We adopt this
convention to avoid clumsy notations.},
$$
G_{\mu \nu}=F_{\mu \nu}-F_{\overline \mu \overline \nu}
+iF_{\overline \mu \nu}+iF_{\mu \overline \nu}
$$
$$
G_{\overline \mu \overline \nu}=F_{\mu \nu}-F_{\overline \mu \overline \nu}
-iF_{\overline \mu \nu}-iF_{\mu \overline \nu},
$$
\begin{equation}
G_{ \mu \overline \nu}=F_{\mu \nu}+F_{\overline \mu \overline \nu}
+iF_{\overline \mu \nu}-iF_{\mu \overline \nu},
\label{Gdef}
\end{equation}
\begin{equation}
\Delta_\mu= D_\mu +iD_{\overline \mu},\quad
\Delta_{\overline \mu}= D_\mu -iD_{\overline \mu},
\label{Deltadef}
\end{equation}
\begin{equation}
B_\mu= A_\mu +iA_{\overline \mu},\quad
B_{\overline \mu}= A_\mu -iA_{\overline \mu}.
\label{Bdef}
\end{equation}
A straightforward computation shows that
$$
\left[\Delta_\mu,\, \Delta_\nu\right]_+
=4\delta_{\mu \nu}\left(\partial_9-i\partial_8 \right),\quad
\left[\Delta_{\overline \mu},\, \Delta_{\overline \nu}\right]_+=
4\delta_{\mu\nu}\left(\partial_9+i\partial_8 \right),
$$
\begin{equation}
\left[\Delta_{ \mu},\, \Delta_{\overline \nu}\right]_++
\left[\Delta_{ \nu},\, \Delta_{\overline \mu}\right]_+
=8\delta_{\mu
\nu}\partial_0
\label{anti}
\end{equation}
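These anticommutators follow from the definition \ref{sddef}, which yields
$$
\left[D_\alpha,\, D_\beta\right]_+=-2\left(\sigma^m\right)_{\alpha\beta}\,\partial_m\,,
$$
so that, for instance,
$$
\left[\Delta_\mu,\, \Delta_\nu\right]_+
=-2\left\{\left(\sigma^m\right)_{\mu\nu}-\left(\sigma^m\right)_{\overline\mu\overline\nu}
+i\left(\sigma^m\right)_{\mu\overline\nu}+i\left(\sigma^m\right)_{\nu\overline\mu}\right\}\partial_m\,;
$$
evaluating the blocks in the realisation \ref{real2}--\ref{real4}, with
$\gamma^8_{\mu\overline\nu}=\delta_{\mu\nu}$, then gives the three relations above.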
Consider now the system of differential equations
\begin{equation}
{\cal D}_\mu\Psi \left(\lambda\right)\equiv
\left(\Delta_{\mu}+\lambda \Delta_{\overline \mu}+B_{\mu}+\lambda B_{\overline
\mu}\right)\Psi(\lambda)=0, \mu=1,\ldots, 8.
\label{BZ}
\end{equation}
Of course, although we do not write it for simplicity of notations, $\Psi(\lambda)$ is a
superfield function of $\underline x$ and $\underline \theta$. The parameter $\lambda$ is an
arbitrary complex number. The consistency condition of these equations is
\begin{equation}
\left[{\cal D}_\mu,\, {\cal D}_\nu\right]_+\Psi(\lambda)=0.
\label{cons}
\end{equation}
This gives
$$
\left\{4\delta_{\mu \nu}\left(\partial_9-i\partial_8 \right)
+G_{\mu \nu}\right\}\Psi
+\lambda\left\{8\delta_{\mu \nu}\partial_0
+G_{\nu\overline \mu}
+ G_{\overline \mu\nu} \right\}\Psi
$$
$$
+\lambda^2\left\{ 4\delta_{\mu
\nu}\left(\partial_9+i\partial_8 \right)
+ G_{\overline \nu\overline \mu}
\right\}\Psi=0.
$$
Thus we correctly get that, for $\mu\not=\nu$
$$
G_{\mu \nu}=G_{\overline \mu \overline \nu}=
G_{\mu \overline \nu}+G_{ \nu \overline \mu}=0,
$$
and that $G_{\mu \mu}$, $G_{\overline \mu \overline \mu}$, $
G_{\mu \overline \mu}$ do not depend upon $\mu$. Thus these consistency conditions are equivalent
to the symmetrized dynamical equations \ref{symdyn}.
\section{Hermiticity conditions for superfields}
We take the gauge group to be $SU(N)$. Then the physical fields $X_m$ and $\phi^\alpha$ are
anti-hermitian matrices. At this point, we need to derive the associated hermiticity conditions
for our superfields $A_m$, $A_\alpha$. Consider, in general a superfield
\begin{equation}
F(\underline x, \underline \theta)= \sum_{p=0}^{16}\sum_{\alpha_1,\ldots,
\alpha_p} {\theta^{ \alpha_1}\cdots
\theta^{ \alpha_p}\over p !}F^{[p]}_{\alpha_1\ldots \alpha_p}(\underline x),
\label{exp}
\end{equation}
Then
$$
F^\dagger (\underline x, \underline \theta)= \sum_{p=0}^{16}\sum_{\alpha_1,\ldots,
\alpha_p} F^{[p]\dagger }_{\alpha_1\ldots \alpha_p}(\underline x){\theta^{ \alpha_p \dagger}\cdots
\theta^{ \alpha_1 \dagger }\over p !},
$$
If $F=F_b$ is bosonic, $F_b^{[p]}$ is commuting (resp. anticommuting) for $p$ even (resp. $p$
odd). Then, assuming that $\theta^{\alpha \dagger}=\theta^{\alpha } $, we may write
$$
F_b^\dagger (\underline x, \underline \theta)=K_b \sum_{p=0}^{16}\sum_{\alpha_1,\ldots,
\alpha_p} {\theta^{ \alpha_1}\cdots
\theta^{ \alpha_p }\over p !}F^{[p]\dagger }_{b \alpha_1\ldots \alpha_p}(\underline x) K_b
$$
where
\begin{equation}
K_b=\left(-1\right)^{{\cal R}({\cal R}+1)/2}.
\label{Kbdef}
\end{equation}
with
\begin{equation}
{\cal R}=\theta^\alpha \partial_\alpha
\label{Rdef}
\end{equation}
If $F=F_f$ is fermionic, $F_f^{[p]}$ is anticommuting (resp. commuting) for $p$ even (resp. $p$
odd). Then,
$$
F_f^\dagger (\underline x, \underline \theta)= K_f \sum_{p=0}^{16}\sum_{\alpha_1,\ldots,
\alpha_p} {\theta^{ \alpha_1}\cdots
\theta^{ \alpha_p }\over p !}F^{[p]\dagger }_{f \alpha_1\ldots \alpha_p}(\underline x)
K_f,
$$ \begin{equation}
K_f=\left(-1\right)^{{\cal R}({\cal R}-1)/2}.
\label{Kfdef}
\end{equation}
One may verify that the superfields $A_m$, $A_\alpha $ have decomposition of the type
\ref{exp} with
$F^{[p]\dagger }_{\alpha_1\ldots \alpha_p}=-F^{[p]}_{\alpha_1\ldots \alpha_p}$ for all $p$.
Thus we conclude that $A_m^\dagger=-K_b A_m K_b$, $A_\alpha^\dagger=-K_f A_\alpha K_f$.
Next consider the effect of the superderivative operator. The action on the $p$th component of a
superfield \ref{exp} is given by
$$
\left(D_{\alpha} F\right)^{[p]}_{\alpha_1,\ldots, \alpha_p}
=F^{[p+1]}_{\alpha\, \alpha_1\ldots \alpha_p}-
\sum_{i=1}^p\left(-1\right)^{i+1}\sigma^m_{\alpha, \alpha_i}
\partial_m F^{[p-1]}_{\alpha_1\ldots /\!\!\!\! \alpha_i \ldots
\alpha_p}
$$
Since the matrix $\sigma^m$ are real, we immediately get
\begin{equation}
D_{\mu} \left (K_b F_b^\dagger K_b \right)=K_f \left(D_{\mu} F_b\right)^\dagger K_f,\quad
D_{\mu} \left (K_b F_f^\dagger K_b \right)=K_f \left(D_{\mu} F_f\right)^\dagger K_f
\label{hermvr}
\end{equation}
The last equations are of course consistent with the fact that the superderivatives transform a
bosonic superfield into a fermionic one and vice versa. At this time, the fact that $A_\alpha$ and
its superderivatives satisfy different hermiticity conditions leads to complications which we will
avoid by only looking at solutions such that $\phi^\alpha=0$. For these purely bosonic solutions
$A^{[2p]}_{\alpha,\, \alpha_1,\ldots \alpha_{2p}}=0$ and $A^{[2p+1]}_{m,\, \alpha_1,\ldots
\alpha_{2p+1}}=0$. All superfield components are commuting, and we may choose, instead of the
above,
\begin{equation}
K_b=K_f=K=\left(-1\right)^{{\cal R}({\cal R}-1)/2}.
\label{Kdef}
\end{equation}
Then, it is easy to show that $\Psi(\lambda)$ and $\left(K\Psi^\dagger(1/\lambda^*) K\right)^{-1}$ satisfy
the same equation. Thus we shall assume that
\begin{equation}
\Psi(\lambda)=K\Psi^{\dagger -1}(1/\lambda^*) K
\label{hermcond}
\end{equation}
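To verify this, note that the relations \ref{hermvr} imply
$K\left(\Delta_\mu F\right)^\dagger K=\Delta_{\overline\mu}\left(KF^\dagger K\right)$,
while $A_\alpha^\dagger=-KA_\alpha K$ together with the definitions \ref{Bdef} gives
$KB_\mu^\dagger K=-B_{\overline\mu}$ and $KB_{\overline\mu}^\dagger K=-B_\mu$.
Taking the Hermitian conjugate of equation \ref{BZ} at the point $1/\lambda^*$,
conjugating with $K$ and multiplying by $\lambda$, one finds
$$
\Delta_\mu\tilde\Psi+\lambda\Delta_{\overline\mu}\tilde\Psi
=\tilde\Psi\left(B_\mu+\lambda B_{\overline\mu}\right),\qquad
\tilde\Psi\equiv K\Psi^\dagger(1/\lambda^*)K\,,
$$
so that $\tilde\Psi^{-1}$ indeed satisfies equation \ref{BZ}.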
\section{The two pole ansatz}
As for self-dual Yang-Mills in four dimensions, we assume that $\Psi$ is a meromorphic function
of $\lambda$. Condition \ref{hermcond} shows that poles appear in pairs. The simplest ansatz involve
two poles. The following displays the corresponding solution, for the gauge group $SU(2)$,
following the line of ref\cite{BZ78} rather closely. Taking the poles at zero and $\infty$ we
write the ansatz
\begin{eqnarray}
\Psi(\lambda)&=&\left(u {\bf 1}+\lambda fA-
{ \tilde f\tilde A\over \lambda}\right)\nonumber\\
\Psi^{-1}(\lambda)&=&\left(u {\bf 1}-\lambda fA+
{ \tilde f\tilde A\over \lambda}\right)
\label{ansBZ}
\end{eqnarray}
where
\begin{equation}
A={1\over a \tilde a+b \tilde b}\left(\begin{array}{cc}
ab &a^2 \\
-b^2 &-ab
\end{array}\right).
\label{Adef}
\end{equation}
In these definitions $u$, $f$, $a$, $b$ are superfields. In agreement with equations \ref{Kdef},
we introduce the notation
\begin{equation}
\tilde F= KF^\dagger K
\label{tidef}
\end{equation}
for any (matrix valued or not) superfield. It is easy to see that
\begin{equation}
A^2=\tilde A^2=0,\quad
\left[A,\, \tilde A\right]_+={\bf 1}.
\label{Aprop}
\end{equation}
The equations just written are such that the definitions \ref{ansBZ} are
consistent with equation \ref{hermcond}, and with the relation $\Psi(\lambda)
\Psi^{-1}(\lambda)=1$, provided we assume that
\begin{equation}
u^2=1-f\tilde f.
\label{udef}
\end{equation}
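Indeed, using the properties \ref{Aprop} one finds
$$
\Psi(\lambda)\,\Psi^{-1}(\lambda)
=u^2-\left(\lambda fA-\frac{\tilde f\tilde A}{\lambda}\right)^2
=u^2+f\tilde f\left[A,\,\tilde A\right]_+=\left(u^2+f\tilde f\right){\bf 1}\,,
$$
where the $\lambda^{\pm2}$ terms drop out because $A^2=\tilde A^2=0$; this
product equals ${\bf 1}$ precisely when equation \ref{udef} holds.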
Next, we derive algebraic equations for the superfields appearing in the ansatz, by rewriting
equation \ref{BZ} as
\begin{equation}
B_{\mu}+\lambda B_{\overline\mu}=
\Psi(\lambda)\left(\Delta_{\mu}+\lambda \Delta_{\overline
\mu}\right)\Psi^{-1}(\lambda) .
\label{Beq1}
\end{equation}
Identifying the powers in $\lambda$ gives the following set of
independent equations
\begin{eqnarray}
\tilde f \tilde A\Delta_\mu\left(\tilde f \tilde A\right)&=&0
\label{class1}\\
\tilde f\tilde A\Delta_{\mu}u
- u \Delta_{\mu} \left(\tilde f\tilde A\right)
- \tilde f\tilde A \Delta_{\overline \mu}\left(\tilde f\tilde A\right)&=&0
\label{class2}\\
u {\bf 1}\Delta_{\mu}u
+ \tilde f\tilde A \Delta_{\overline \mu}u
+ \tilde f\tilde A\Delta_{\mu} \left(fA\right)
- u \Delta_{\overline \mu}\left(\tilde f\tilde A\right)
+ fA\Delta_{\mu} \left(\tilde f\tilde A\right) &=&-B_\mu,
\label{class3}
\end{eqnarray}
together with three more relations deduced from the above according to equation \ref{hermcond}.
At this point it is useful to write
\begin{equation}
A={1\over a \tilde a+b\tilde b} \Upsilon.
\label{Upsdef}
\end{equation}
Since the matrix $\Upsilon$ is such that $\Upsilon^2=0$, equation \ref{class1} is satisfied iff
\begin{equation}
\Delta_\mu\tilde a=\Delta_\mu \tilde b=0.
\label{eq1}
\end{equation}
Equation \ref{class2} may be transformed into
$$
\tilde \Upsilon \Delta_{\mu}\tilde g
= \tilde
\Upsilon \Delta_{\overline \mu}
\tilde \Upsilon
$$
where we have let
\begin{equation}
\tilde g=u {a \tilde a+b\tilde b \over \tilde f }
\label{gdef}
\end{equation}
Equation \ref{class2} is satisfied if we have
\begin{equation}
\Delta_\mu \tilde g=\tilde h_{\overline \mu }, \quad
\tilde h_{\overline \mu }=\tilde b \Delta_{\overline
\mu }\tilde a -\tilde a\Delta_{\overline \mu }\tilde b.
\label{eq2}
\end{equation}
Remarkably, equation \ref{eq1} is a particular case of equations which already appeared in
ref.\cite{GS98}, where general solutions depending only upon $x^0$ and
$x^9$ were obtained. We shall obtain solutions of equations \ref{eq2} below. Once these two equations are solved,
equation
\ref{class3} allows us to derive the vector potentials. For this it is convenient to rewrite it under
the form
\begin{equation}
B_\mu= {1\over u}\Delta_{\mu}u
+ {\tilde \Upsilon \over \tilde g}\Delta_{\mu} \left({\Upsilon \over g}\right)
- \Delta_{\overline \mu}\left({\tilde \Upsilon \over \tilde g}\right)
+ {\Upsilon \tilde \Upsilon\over g}\Delta_{\mu} \left({1 \over \tilde g}\right)
\label{Beq}
\end{equation}
\section{A particular solution}
At this preliminary stage, and in order to arrive at a concrete solution, we choose a simple
particular ansatz. We only retain dependence on $x^0\equiv t$ and $x^9\equiv x$. A simple linear
solution of equations \ref{eq1} is
\begin{equation}
a=1, \quad b=t+i\sum_\mu\theta^\mu \theta^{\overline \mu},
\label{s1}
\end{equation}
so that
$$
\Delta_\mu a=0, \quad \Delta_\mu b=2 D_\mu b=\left(\theta^\mu+i\theta^{\overline \mu}\right)
$$
$$
a\tilde a+b\tilde b=b +\tilde b =2t.
$$
Then equation \ref{eq2} gives
\begin{equation}
g=-8 x +c
\label{s2}
\end{equation}
where $\Delta_{\overline \mu}c=0$. We will simply choose $c$ to be a constant. Using equations
\ref{udef}, \ref{gdef}, we obtain
$$
u =\sqrt{ {\left|c-8x\right|^2 \over 4t^2+\left|c-8x\right|^2} }.
$$
Finally, using equation \ref{Beq} one gets
$$
B_\mu=(\theta^\mu+i\theta^{\overline \mu}) \left \{
{4t \over \left(4t^2+\left|c-8x\right|^2\right)}
- {2 \over \left| (c-8x)\right|^2} \left(\begin{array}{cc}
\tilde b +2 \tilde b ^2 b & \tilde b ^2 \\
1 +2 \tilde b b & \tilde b
\end{array}\right)\right.
$$
$$
\left. +
{8\over (c^*-8x)^2 }
\left(\begin{array}{cc}
b&1 \\
-b ^2 &-b
\end{array}\right) \right \}
+(\theta^\mu-i\theta^{\overline \mu}) \left \{
{-16 \left(16 x-c-\tilde c\right)t^2 \over \left|c-8x\right|^2 \left(4t^2+\left|c-8x\right|^2
\right)}\right.
$$
$$
\left.
- {8 \over (c-8x)\left| (c-8x)\right|^2 }
\left(\begin{array}{cc}
\tilde b b +\tilde b ^2 b ^2 &\tilde b+\tilde b ^2 b\\
b+\tilde b b ^2 & 1 + \tilde bb
\end{array}\right)
+ {2\over c^*-8x}
\left(\begin{array}{cc}
1 &0 \\
-2\tilde b&-1
\end{array}\right)
\right.
$$
\begin{equation}
\left.
-
{8\over (c^*-8x)\left| (c-8x)\right|^2}
\left(\begin{array}{cc}
b \tilde b+ 1 &-b\tilde b ^2- \tilde b\\
- b ^2 \tilde b-b & b ^2 \tilde b ^2 + b \tilde b
\end{array}\right)\right\}
\label{simple}
\end{equation}
\section{Outlook}
It seems clear that the symmetrised system of equations \ref{symdyn} is completely and explicitly
integrable, much like self-dual Yang-Mills in four dimensions. Note that, in the gauge introduced in
ref.\cite{GS98} where $A_{\overline \mu}=0$, the rightmost equations \ref{symdyn} give
$D_{\overline \mu}A_{\nu}+D_{\overline \nu}A_{\mu}=0 $, for $\mu\not=\nu$. This is precisely the
condition which was used in ref\cite{GS98} to let $A_{\mu}=D_{\overline \mu}\Phi$. In other words,
the present Lax pair is equivalent to the set of equations which was previously solved in
ref.\cite{GS98}.
Concerning the full Yang-Mills equations or equivalently the unsymmetrised equations
\ref{dynuu}--\ref{dynud}, any solution is also a solution of the symmetrised equations
\ref{symdyn}. Thus we should be able to derive solutions of the latter which are general enough so
that we may impose that they be solutions of the former. This problem is currently under
investigation.
\bigskip
\noindent { \bfseries\Large Acknowledgements: }\\
It is a pleasure to acknowledge stimulating discussions with P. Forgacs and H.~Samtleben.
\section{Introduction}
The AdS/CFT correspondence, also known as holographic duality, provides a novel way to study strongly correlated quantum systems in terms of weakly coupled gravity. In particular, it can describe strongly correlated gapped systems in terms of gravity duals. One class of such models at zero density includes the GPPZ gapped geometry \cite{Girardello:1999hj}, the AdS soliton \cite{Witten:1998zw}, AdS with cutoffs in the IR \cite{Erlich:2005qh} and so on. Another class of models is for finite density systems with translational symmetry breaking effects; see e.g. \cite{Kiritsis:2015oxa}.
Physical systems in the real world often have boundaries, and boundary effects play important roles, ranging from D-branes in string theory to topological states in condensed matter physics. One well known example of topological states in condensed matter systems is the topological insulator, which is gapped in the bulk while nontrivial gapless charged excitations exist on the boundary \cite{ti-kane}. We are interested in constructing a holographic model for topological insulators.\footnote{Previous attempts to study the holographic model of topological insulator include e.g. \cite{Hoyos-Badajoz:2010etp, Rozali:2012gf}.} Instead, we replace it with the simpler question of what happens to a holographic gapped system in the presence of a boundary. We study this problem in the framework of AdS/BCFT.
AdS/BCFT allows us to study the properties of field theories with boundaries from the holographic dual. In AdS/BCFT, the bulk geometry terminates at the end-of-the-world (EOW) brane such that the boundary of EOW brane near AdS coincides with the boundary of BCFT \cite{Takayanagi:2011zk, Fujita:2011, Karch:2000gx}. AdS/BCFT has been actively explored during the last decade. A far from complete list includes applications to condensed matter physics \cite{Fujita:2012, Melnikov:2012tb}, cosmology \cite{Antonini:2019qkt}, black hole physics \cite{Geng:2021mic, Suzuki:2022xwv}, quantum information \cite{Seminara:2018pmr} and so on. However, so far the studies of AdS/BCFT have been mainly limited to critical gapless systems with boundaries.
The purpose of this paper is to study the properties of gapped systems in the presence of boundaries in the framework of AdS/BCFT. We will focus on the vacuum states of the first class of models as mentioned in the first paragraph at zero temperature and zero density. Here we consider two different holographic models of gapped systems. The first one is the gapped geometry in Einstein-scalar theory. We choose Neumann boundary condition for the fields on the EOW brane. Taking a proper scalar potential term localized on the brane, we can get a consistent background for the gapped geometry with an EOW brane. Then we will study the transport properties and entanglement entropies of the BCFT. The second gapped system is described by the AdS soliton \cite{Witten:1998zw}. The AdS soliton can be obtained by analytic continuation of the AdS Schwartzschild black hole. At finite temperature there is a first order phase transition between the AdS Schwartzschild black hole and the AdS soliton, which describes the confinement/deconfinement phase transition. There is a compact spatial dimension in the AdS soliton which sets the scale of the transition.
We consider the presence of a boundary for the dual field theory of the AdS soliton
along one noncompact spatial direction and study its transport properties and entanglement entropies. We will make comparisons on the profiles and the properties between these two different gapped systems in the presence of boundaries.
Our paper is organized as follows. In section \ref{sec2}, we first construct a gapped system in the presence of a boundary in Einstein-scalar theory using AdS/BCFT, and then study its conductivity along the spatial direction of the boundary as well as its entanglement entropy. In section \ref{sec3}, we study the properties of gapped system which is described by the AdS soliton in AdS/BCFT. We summarize our results in section \ref{sec4} and discuss the possible open questions. Some calculation details are collected in the appendices.
\section{A gapped system in AdS$_4$/BCFT$_3$}
\label{sec2}
In this section, we study the holographic gapped system with boundaries in Einstein-scalar gravity and consider its properties in the framework of AdS/BCFT \cite{Takayanagi:2011zk, Fujita:2011}. We focus on the case of three dimensional field theories with two dimensional boundaries, and it is straightforward to generalize to other dimensions.
The configuration under consideration is shown in Fig. \ref{fig:cf}. The three dimensional boundary field theory is defined on the manifold $M$ with boundary $P$ along $y$ direction. The gravity dual lives in the bulk $N$ with the EOW brane $Q$ which anchors to the BCFT boundary $P$. Note that $u$ is the holographic direction and the boundary $M$ lives at $u=0$.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw [fill=blue!10] (0,0)--(7,0)--(10,3)--(3,3);
\draw [fill=green!5] (0,0)
to[out=150, in=270](-3,5)--(0,8)
to[out=-90, in=-210](3,3);
\draw [fill=red] (0,0)--(3,3) node[anchor=east,midway]{ $P$ };
\draw[black] (5.0, 1.5) node{ $M$ };
\draw[black] (-1.0, 4.0) node{ $Q$ };
\draw[black] (5.0, 6.0) node{ $N$ };
\draw[black] (0, 0);
\draw[-latex, very thick, blue, opacity=0.7] (7, 0)--(8.5, 1.5) node[anchor=west, near end]{$ ~\{ t,\,y\} $ };
\draw[-latex, very thick, blue, opacity=0.8] (0, 0)--(8.5, 0) node[anchor=north, at end]{ $x$ };
\draw[-latex, very thick, opacity=0.8 ] (11, 2)--(11, 5) node[anchor=west, midway]{ $u$ };
\end{tikzpicture}
\end{center}
\vspace{-0.3cm}
\caption{\small The configuration under consideration. The field theory lives in the manifold $M$ with boundary $P$. The dual gravity lives in the bulk $N$ with boundary $Q$. }
\label{fig:cf}
\end{figure}
We consider the Einstein-scalar gravitational theory
\begin{equation}
S_\text{bulk}=S_N +S_Q \,,
\end{equation}
with
\begin{eqnarray}
\label{eq:action1}
\begin{split}
S_N&=\int_N d^4x\sqrt{-g}\,\bigg[\frac{1}{2\kappa^2}\bigg(R+6-\frac{1}{2}(\partial\phi)^2
-V(\phi)\bigg)-\frac{Z(\phi)}{4e^2}F^2
\bigg] \,,\\
S_{Q}&=\int_Q d^3x\sqrt{-\gamma}\, \bigg[\frac{1}{\kappa^2}(K-T\big)+v(\phi)
\bigg] \,,\\
\end{split}
\end{eqnarray}
where the bulk gauge field $A_a$ is dual to the electric current on the boundary and it has field strength $F_{ab}=\partial_a A_b-\partial_b A_a$. $\kappa$ and $e$ are the gravitational constant and the bulk gauge coupling constant, respectively. Note that the scalar field $\phi$ is real. The induced metric on the EOW brane $Q$ is denoted as $\gamma_{\mu\nu}$, where $K$ and $T$ are the extrinsic curvature and the tension of the EOW brane $Q$. Note that on $Q$ we also consider a potential term $v(\phi)$ and it contributes to the effective tension of the brane.
We set $2\kappa^2=e^2=1$.
The equations of motion in the bulk $N$ are
\begin{eqnarray}
\begin{split}
R_{ab}-\frac{1}{2}g_{ab}\big(R+6\big)-\frac{1}{2}T_{ab}&=0 \,,\\
\nabla_b\big(Z(\phi) F^{ba}\big)
&=0\,,\\
\nabla_{a}\nabla^{a}\phi-\frac{\partial_\phi Z(\phi)}{4}F^2-\partial_\phi V(\phi)&=0 \,,
\end{split}
\end{eqnarray}
where
\begin{eqnarray}
T_{ab}&=&Z(\phi)\bigg[F_{ac}F_{b}^{~c}-\frac{1}{4}g_{ab}F^2\bigg]
+\nabla_{a}\phi\nabla_{b}\phi-g_{ab}\bigg[\frac{1}{2}(\partial\phi)^2+V(\phi)\bigg] \,. \nonumber
\end{eqnarray}
The equations of motion on the EOW brane $Q$ can be obtained from the variations. The variation for metric fields, scalar field and vector field yields
\begin{eqnarray}
\begin{split}
\delta S\Big{|}_Q&=
\int_Qd^3x\sqrt{-\gamma}\,\frac{1}{2\kappa^2}\left[K_{\mu\nu}-(K-T)\gamma_{\mu\nu} -\frac{1}{2} v(\phi) \gamma_{\mu\nu}
\right] \delta\gamma^{\mu\nu}
\\
&+\int_Q d^3x\sqrt{-\gamma}\,\left[-n^a\nabla_a\phi+v'(\phi)
\right]\,\delta\phi \\
&+\int_Q d^3x \sqrt{-\gamma}\,n_a\left( -\frac{Z(\phi)}{e^2}F^{ab}\right)\,\delta A_b\,.
\end{split}
\end{eqnarray}
Note that $n^a$ is the outward unit vector of $Q$. Here $\gamma_{\mu\nu}$ should be understood as the metric from the Gaussian normal coordinate on the EOW brane. Following the standard AdS/BCFT, we impose Neumann boundary condition on $Q$.\footnote{AdS/BCFT with Dirichlet boundary condition or mixed boundary condition can be found in e.g.
\cite{Chu:2017aab, Miao:2018qkc}. It would be interesting to consider gapped systems with generalized boundary conditions in AdS/BCFT.}
Then we obtain the following equations on $Q$
\begin{eqnarray}{\label{eq:Qbc}}
\begin{split}
K_{\mu\nu}-(K-T)\gamma_{\mu\nu}-\frac{1}{2}v(\phi)\gamma_{\mu\nu}&=0\,, \\
n^{a}\partial_a\phi-\partial_\phi v(\phi)&=0\,,\\
n_aF^{ab}&=0\,.
\end{split}
\end{eqnarray}
Note that we choose Dirichlet boundary condition on $M$ and $P$.
\subsection{Zero temperature ground state}
\label{subsec:groundstate}
We focus on the vacuum solution at zero temperature and zero density and consider the following ansatz of the metric fields, scalar and vector fields
\begin{eqnarray}\label{eq:ansatz d4}
ds^2=\frac{1}{u^2}\bigg[-dt^2+dx^2+dy^2+\frac{du^2}{f(u)}\bigg] \,,~~~ \phi=\phi(u)\,,~~~A_a=0\,.
\end{eqnarray}
Near the AdS boundary, i.e. $u\to 0$, the metric field $f(u)\to 1$. The IR regime is $u\to\infty$.
The equations of motion for the system in $N$ are
\begin{eqnarray}
\begin{split}
\frac{V-6}{u^2 f}+\frac{6}{u^2}-\frac{1}{2} \phi'^2&=0\,, \\
\phi'^2-\frac{2f'}{uf}&=0\,,\\
\phi''+\phi'\left(
\frac{f'}{2f}-\frac{2}{u}\right)-\frac{\partial_\phi V}{u^2 f}&=0\,.
\end{split}
\end{eqnarray}
We have the bulk solution which satisfy the gapped spectrum condition
\begin{equation}
\label{eq:bgsol1}
f(u)=1+a_0 u^n\,,~~~\phi(u)=\frac{2\sqrt{2}}{\sqrt{n}}\,\text{arcsinh}\big[\sqrt{a_0}u^{\frac{n}{2}}\big]
\end{equation}
with
\begin{equation}\label{eq:bgsp}
V(\phi)=(n-6)\,\Big(\sinh\Big[
\frac{\sqrt{n}}{2\sqrt{2}} \phi \Big]\Big)^2\,.
\end{equation}
Note that $a_0>0$ and can be set to be $1$ using the scaling symmetry $u\to \lambda u,\, (t,x,y)\to \lambda(t,x,y),\, (f,\phi)\to (f,\phi)$. In the following we set $a_0=1$.
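As a quick consistency check of \eqref{eq:bgsol1}, note that the second equation of motion requires $\phi'^2=2f'/(uf)$, and indeed (for $a_0=1$)
\begin{equation*}
\phi'^2=\frac{2n\,u^{n-2}}{1+u^n}=\frac{2f'}{uf}\,,
\end{equation*}
while the first equation then fixes the bulk potential to be precisely \eqref{eq:bgsp}.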
In the IR region, i.e. $u\to\infty$, from the solution \eqref{eq:bgsol1} we have $f(u)\to u^n$. It is known that this kind of geometry has a gapped spectrum for probe fields when $n\geq 2$ \cite{Liu:2013una}.
Additionally,
in the deep IR, i.e. $u\to\infty$, the Ricci scalar is divergent (except the $n=4$ case where the Ricci scalar is finite while the Kretschmann scalar is divergent),
from which we know that there is a curvature singularity for the solution \eqref{eq:bgsol1}. Nonetheless, the singularity is physically acceptable if the Gubser criterion is satisfied \cite{Gubser:2000nd, Charmousis:2010zz}, which constrains
$n \leq 6$.\footnote{Note that the strong energy condition requires $n \leq 6$, while the null energy condition does not put any constraint on the system.} In the following we will focus on the cases $n\in [2, 6].$
Near the AdS boundary, we have $\phi\to0$ and $V(\phi)=\frac{n(n-6)}{8}\phi^2+\cdots$. This gives the effective mass of scalar field $m^2=\frac{n(n-6)}{4}$ which is always above the BF bound for arbitrary $n$. Here we focus on the parameter regimes $2\leq n\leq 6$. The scalar field near the AdS boundary behaves as
\begin{equation} \phi\to \frac{ 2\sqrt{2} }{\sqrt{n}} u^{n/2}\,\Big(1-\frac{1}{6} u^n +\frac{3}{40} u^{2n}+\dots\Big)\,.
\end{equation} For the parameters we are interested in, i.e. $n\in \,[2, 6]$, $\phi$ is dual to operators of dimension $n/2$. The dual system is a $Z_2$ spontaneously symmetry broken state.\footnote{Note that for $n\in \,[2, 5]$, both quantizations are possible and $\phi$ could also be viewed as being dual to operator with conformal dimension $(6-n)/2$ and this seems to be an unphysical case since the dual theory has a deformation with scalar source which does not produce any response.}
For simplicity, we suppose the manifold $M$ is restricted to be a half infinite plane with coordinates $t,y$ and $x$ with $x\geq 0$. Assuming the boundary $Q$ is parameterized as $x(u)$, the space-like unit vector $n^a$ normal to the boundary $Q$ (outward direction) is given by
\begin{eqnarray}
\label{eq:normal1}
(n^t,n^x,n^y,n^u)=\bigg(0,~~\frac{-u}{\sqrt{1+f(u)x'(u)^2}},~~0,~~\frac{u f(u)x'(u)}{\sqrt{1+f(u)x'(u)^2}}\bigg)\,.
\end{eqnarray}
The extrinsic curvature $K_{ab}$ can be obtained from $K_{ab}=h_a^{~c}h_b^{~d}\nabla_c n_d$ where $h_{ab}=g_{ab}-n_a n_b$.
Note that since the coordinates here are not the Gaussian normal coordinate of the EOW brane, we should use $h_{ab}$ to calculate the boundary equations. In the end, the boundary conditions (\ref{eq:Qbc}) result in the following constraints on the EOW brane $Q$,
\begin{eqnarray}
\label{eq:bc}
\begin{split}
x''+\frac{f'}{2f}x' &=0 \,, \\
x'+\frac{\big(2T-v(\phi) \big)\sqrt{1+fx'^2}}{4f} &=0 \,, \\
n^{u}\partial_u\phi-\partial_\phi v(\phi) &=0\,.
\end{split}
\end{eqnarray}
From above equations, we find the solution for the EOW brane $Q$
\begin{eqnarray}
\label{eq:bgeol2}
x&=&c\, u\, {}_2F_1\left[\frac{1}{2},\frac{1}{n}, 1+\frac{1}{n}, - u^n \right]\,,\\
\label{eq:scalarpot2}
v(\phi)&=&2T+\frac{4c}{\sqrt{1+c^2}}\cosh
\left[\frac{\sqrt{n}}{2\sqrt{2}}\phi
\right]\,,
\end{eqnarray}
where $c$ is a real integration constant. The first equation parameterizes the profile of the EOW brane, while the second equation is a potential term for the scalar field on $Q$, which should be thought of as an input quantity that determines the profile of the system.
Equations (\ref{eq:bgsol1}) and (\ref{eq:bgeol2}) are the background solutions of the gravitational system.
The profile of the EOW brane $Q$, which is described by $x(u)$ in \eqref{eq:bgeol2}, is independent of the parameter $T$ while it depends on the effective tension $T-v(\phi)/2$, i.e. on the contribution of the potential of the scalar field, which is parameterized by the parameter $c$.
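One can check directly that \eqref{eq:bgeol2} solves the boundary equations \eqref{eq:bc}: differentiating gives
\begin{equation*}
x'(u)=\frac{c}{\sqrt{f}}\,,\qquad x''+\frac{f'}{2f}\,x'=0\,,\qquad
2T-v(\phi)=-\frac{4c\sqrt{f}}{\sqrt{1+c^2}}
=-\frac{4c}{\sqrt{1+c^2}}\cosh\Big[\frac{\sqrt{n}}{2\sqrt{2}}\phi\Big]\,,
\end{equation*}
where we used $\sqrt{1+fx'^2}=\sqrt{1+c^2}$ and $\sqrt{f}=\cosh\big[\frac{\sqrt{n}}{2\sqrt{2}}\phi\big]$; the last relation is equivalent to \eqref{eq:scalarpot2}, and with this $v(\phi)$ the scalar boundary condition in \eqref{eq:bc} is satisfied automatically.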
When $c=0$, the profile of the EOW brane $Q$ is trivial and it is given by $x=0$. When $c\neq 0$, different from the case without scalar field that was first studied in \cite{Takayanagi:2011zk}, the profile of $Q$ here is nonlinear in $u$. Near the AdS boundary, i.e. $u\to 0$, we have linear behavior at leading order
\begin{equation}
\frac{x}{c}= u- \frac{1}{2(n+1)} u^{n+1} + \frac{3}{8(2n+1)}u^{2n+1}+\cdots ,
\end{equation}
while in the deep IR, i.e. $u\to\infty$, we have
\begin{equation}
\frac{x}{c} =
\begin{cases}
\log(2u)+\frac{1}{4 u^2} \cdots &\quad\quad \textrm{if $n=2$} \\[2ex]
\frac{1}{\sqrt{\pi}}\Gamma \left(\frac{1}{2}-\frac{1}{n} \right) \Gamma \left( 1+\frac{1}{n}\right)- \frac{2 n}{n-2} \frac{ \Gamma (1+\frac{1}{n} )}{ \Gamma (\frac{1}{n}) } u^{1-\frac{n}{2}}+\cdots &\quad\quad \textrm{if $n>2$}
\end{cases}
\end{equation}
These expressions indicate that near the boundary $P$ of the BCFT, the profile of the EOW brane $Q$ is linear in $x$ with a slope $1/c$. When $u\to \infty$, the EOW brane extends to infinity in $x$ for $n=2$, with the direction depending on the sign of $c$, while it approaches a constant $x_m$ for $n=3,4,5,6$.
Fig. \ref{fig:config} shows the profiles of the EOW brane as a function of $x/c$ at different values of $n$. The particular properties of the profiles of the EOW brane will play important roles in the calculations of the entanglement entropy that we study in section \ref{subsec:2ee}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.62\textwidth]{fig-Q.pdf}
\end{center}
\vspace{-0.3cm}
\caption{\small The plot for the profile of the EOW brane $Q$ as a function of $x/c$ when $c\neq 0$ for $n=2$ (red), $\,3$ (brown), $\,4$ (orange), $\,5$ (purple), $\,6$ (blue). When $c<0$, the EOW brane $Q$ extends along negative $x$, while it extends along positive $x$ when $c>0$. }
\label{fig:config}
\end{figure}
With the profile (\ref{eq:bgeol2}) of $Q$, the normal vector \eqref{eq:normal1} on $Q$ can be simplified as
\begin{eqnarray}
n^a=\bigg(0,~~\frac{-u}{\sqrt{1+c^2} },~~0,~~\frac{c\, u \sqrt{1+u^n} }{ \sqrt{1+c^2} } \bigg)\,.
\end{eqnarray}
Then we can obtain the projection tensor
\begin{equation}
h_{ab}dx^adx^b=\frac{1}{u^2}\left[-dt^2+dy^2+\frac{1}{1+c^2}\bigg(c\, dx +\frac{du}{ \sqrt{f}}\bigg)^2\right]\,.
\end{equation}
One can check that the trace of extrinsic curvature on the EOW brane, $K=\frac{-3c }{\sqrt{1+c^2} }\, \sqrt{1+u^n}$, is divergent near the singularity of geometry $N$ when $c\neq 0$. From the induced metric
\begin{equation}
\label{eq:im}
\tilde{\gamma}_{\mu\nu}dx^\mu dx^\nu\Big{|}_Q=\frac{1}{u^2}\left[-dt^2+dy^2+\frac{1+c^2}{f}d u^2\right]\end{equation}
in the coordinates $\{t, y, u\}$,
we know that on the EOW brane the metric is asymptotic to AdS$_3$ in the UV with AdS radius $\sqrt{1+c^2}$, which is different from the one in $N$. Similar to the gapless system in AdS/BCFT, for which the geometry on the EOW brane is pure AdS \cite{Takayanagi:2011zk}, the induced metric on $Q$ is also asymptotically AdS. The intrinsic curvature from the induced metric on the EOW brane $Q$ is in general divergent, except for $n=3$ where the Ricci scalar is finite while the Kretschmann scalar is divergent. Nonetheless it is physically acceptable following the arguments in \cite{Gubser:2000nd, Charmousis:2010zz}, since it is believed that all the singularities are resolvable after considering extra degrees of freedom that do not affect any calculations here.
One particularly interesting case is $n=2$. In this case all the above formulae simplify and we collect them here for later use:
\begin{eqnarray}\label{eq:ir2}
\begin{split}
f(u)& =1+
u^2 \,, ~~~
\phi(u)=2\,\text{arcsinh}\big[
u \big]\,,~~~~x(u)=c\,
\text{arcsinh}\big[
u \big]
\,,\\
V(\phi)&=-4\,\left(\text{sinh}\left[\frac{\phi}{2} \right]\right)^2\,
, ~~~~
v(\phi)=2T+\frac{4c}{\sqrt{1+c^2}}\text{cosh}\left[\frac{\phi}{2}\right]\,.
\end{split}
\end{eqnarray}
\subsection{Conductivity}
\label{subsec:con}
In the previous subsection, we have constructed the gapped geometry in the presence of a boundary and found the solutions (\ref{eq:bgsol1}) and (\ref{eq:bgeol2}) with proper choices of the potential terms for the scalar fields in the bulk $N$ \eqref{eq:bgsp} and on the EOW brane $Q$ \eqref{eq:scalarpot2}. In this subsection, we will study its conductivity along the spatial direction $y$ of the boundary $P$.
We consider the linear fluctuations of the gauge fields
\begin{equation}
\label{eq:gauflu}
\delta A_i(t, x, u)=\int\frac{d\omega}{2\pi} a_i(\omega, x, u) e^{-i\omega t}\,.
\end{equation}
We are interested in the conductivity along the $y$ direction. It turns out that the equation of motion for $a_y$ decouples from those of the other fields. The fluctuation equation for $a_y$ in $N$ is
\begin{eqnarray}
\label{eq:fulay1}
a_y''+\left(\frac{f'}{2f}+\frac{\phi' \partial_\phi Z}{Z}\right)a_y'+\frac{\omega^2
+\partial_x^2}{f} a_y&=0\,,
\end{eqnarray}
and the equation for $a_y$ on the boundary $Q$ is
\begin{equation}\label{eq:be}
(-\partial_x a_y +f x' \partial_u a_y)\Big{|}_Q=0\,.
\end{equation}
Here the prime denotes the derivative with respect to the radial coordinate $u$. Now we have a boundary value problem for the partial differential equation \eqref{eq:fulay1}.
The solution of the above equations depends on whether $c$ vanishes or not, and we first focus on the case with nonzero $c$.
When $c\neq 0$, we can solve \eqref{eq:fulay1} by using the separation of variables. We choose
\begin{equation}
\frac{Z'}{Z}=\frac{\alpha}{\sqrt f}\,,
\end{equation}
where the prime is the derivative with respect to $u$, i.e.
\begin{equation}
\label{eq:conZ}
\setlength\arraycolsep{1pt}
Z=\exp{\left(\alpha\Big(\sinh\left[\frac{\sqrt{n}}{2\sqrt 2}\phi\right]\Big)^{\frac{2}{n}} \, {}_2F_1\left[\frac{1}{2},\frac{1}{n},1+\frac{1}{n},-\Big(\sinh\left[\frac{\sqrt{n}}{2\sqrt 2}\phi\right]\Big)^2\right]\right)}\,.
\end{equation}
Note that we have normalized $Z\to 1$ near the AdS boundary. When $n=2$, the above result can be further simplified to $Z= e^{\alpha \phi/2}$. For other values of $n$, as $u\to \infty$ we have $Z\sim e^{\alpha x_m}$, i.e. $Z$ approaches a constant value in the deep IR.
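The $n=2$ simplification follows from the classical identity
\begin{equation*}
{}_2F_1\left[\tfrac{1}{2},\tfrac{1}{2},\tfrac{3}{2},-u^2\right]=\frac{\text{arcsinh}\, u}{u}\,,
\end{equation*}
so that with $\sinh(\phi/2)=u$ the exponent in \eqref{eq:conZ} becomes $\alpha\,\text{arcsinh}\, u=\alpha\phi/2$.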
With this choice of $Z$, we find the solution of \eqref{eq:fulay1} with boundary equation \eqref{eq:be} is
\begin{equation}
\label{eq:solay}
a_y=e^{b x +\frac{b}{c^2} x(u)-i\omega t}\,.
\end{equation}
The second term $x(u)$ in the exponential should be viewed as the solution in \eqref{eq:bgeol2} and in this way, $a_y$ is a function explicitly depending on the variables $x, u, t$. In \eqref{eq:solay}, we have (when $|\omega| < \frac{|\alpha|}{2\sqrt{1+c^2}}$)
\begin{equation}
b=\frac{c}{2(1+c^2)}
\left(-\alpha\pm\sqrt{\alpha^2-4(1+c^2)\omega^2}\right)\,.
\end{equation}
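Indeed, inserting the ansatz \eqref{eq:solay} into \eqref{eq:fulay1}, and using $Z'/Z=\alpha/\sqrt{f}$ together with $x'(u)=c/\sqrt{f}$ (as follows from the embedding \eqref{eq:bgeol2}; for $n=2$ this is manifest from \eqref{eq:ir2}), the terms involving $f'$ cancel and one is left with the algebraic equation
\begin{equation}
(1+c^2)\,b^2+\alpha c\, b+c^2\omega^2=0\,,
\end{equation}
whose two roots are the values of $b$ above.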
For $|\omega| <\frac{|\alpha |}{2\sqrt{1+c^2}}$, the solution (\ref{eq:solay}) is real and normalizable\footnote{This is only true for the cases $n>2$ and $c\alpha>0$, and the following discussion applies to these cases. When $n=2$, the field $a_y$ is divergent at either $x\to \infty$ or $x\to -\infty$ and we do not have a reliable solution yet. One might expect that the conclusions below also hold for the case of $n=2$. } for both choices of $b$. However, for $|\omega|>\frac{|\alpha|}{2\sqrt{1+c^2}}$, the sector with
\begin{equation}
b=\frac{c}{2(1+c^2)}\left(-\alpha + i\sqrt{4(1+c^2)\omega^2-\alpha^2}\right)
\end{equation}
describes the infalling wave.
Following \cite{Kiritsis:2015oxa, Yan2018}, we use the analytic continuation from $|\omega| > \frac{|\alpha |}{2\sqrt{1+c^2}}$ to $|\omega| < \frac{|\alpha |}{2\sqrt{1+c^2}}$ to fix
\begin{equation}
\label{eq:b}
b=\frac{c}{2(1+c^2)}\left(-\alpha-
\sqrt{\alpha^2-4(1+c^2)\omega^2}\right)\,.
\end{equation}
From the above solution, i.e. \eqref{eq:solay} with \eqref{eq:b}, we can compute the conductivity. When $u\to 0$, we have
\begin{equation} a_y=e^{bx-i\omega t}\,\left(1+ \frac{b}{c}u+\mathcal{O}(u^2)\right)\,. \end{equation}
Note that on $M$ we choose the Dirichlet boundary condition for the gauge field (i.e. fixing the source), and we do not need to include any counterterm for the gauge field.
We have the on-shell action for the gauge field,
\begin{equation}
S_M=-\int dt dx dy\,\sqrt{-\gamma}\, Z A_\nu F^{u\nu}n_u
\end{equation}
where $\gamma$ is the induced metric on $M$ while $n_u$ is an outward-pointing unit vector of $M$, i.e. $\gamma_{\mu\nu}=\text{diag} \big(-1/u^2, 1/u^2, 1/u^2\big)$ and $n^u=-u \sqrt{f}$.
For the fluctuations of the gauge field along the $y$ direction, we have
\begin{equation}
S_M=\int dtdxdy\, Z a_y \partial_u a_y=
\int dtdxdy\, a_y^{(0)}a_y^{(1)}\,
\end{equation}
from which we have the retarded Green's function on $M$,
\begin{equation}
G_R=\frac{a_y^{(1)}}{a_y^{(0)}}\,.
\end{equation}
Therefore we have conductivity in $M$
\begin{equation}
\sigma_{y}= \frac{1}{i\omega}\frac{a_y^{(1)}}{a_y^{(0)}}=\frac{b}{i\omega c}\,.
\end{equation}
For $|\omega|<\frac{|\alpha |}{2\sqrt{1+c^2}}$, $b$ is real, which means that $\sigma_y$ is purely imaginary.
The DC conductivity in $M$ along the $y$ direction can be obtained from the real part
\begin{equation}\sigma_\text{DC}=\lim_{\omega\to 0} \text{Re} [\sigma_y] =0\,.\end{equation}
Note that we have assumed $\alpha<0$ for simplicity and used the fact that $b\simeq -\frac{c}{\alpha}\omega^2$ when $\omega\to 0$, which means that there is no pole for $\sigma_y$ at $\omega\to 0$.\footnote{Note that when $\alpha>0$, from \eqref{eq:b} one concludes that there is a pole at $\omega=0$. However, from the experimental point of view, one needs to consider the subtle commutativity of the two limits $T\to 0$ and $\omega\to 0$. Nevertheless, now we have $\lim_{\omega\to \epsilon^+}\text{Re}[\sigma_y]=0 $ and one might naively take it as an insulator for any $\alpha$.}
The gap of the conductivity is given by $\frac{|\alpha |}{2\sqrt{1+c^2}}$.\footnote{One might need to calculate the conductivity along $x$ direction to confirm if it is also gapped for $\sigma_x$ and we will not discuss this here.}
When $\omega >\frac{|\alpha |}{2\sqrt{1+c^2}}$, we have a nonzero conductivity with
$\text{Re}[\sigma_y]=\frac{1}{2\omega (1+c^2)}\sqrt{4(1+c^2)\omega^2-\alpha^2}$.
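This frequency dependence is straightforward to tabulate numerically. The following minimal sketch evaluates $\sigma_y(\omega)$ from \eqref{eq:b} with the infalling continuation above the gap; the values $\alpha=-2$, $c=1$ are illustrative assumptions rather than choices made in the text:
\begin{verbatim}
import numpy as np

# sigma_y(omega) = b/(i omega c); alpha and c are illustrative.
alpha, c = -2.0, 1.0
gap = abs(alpha) / (2.0 * np.sqrt(1.0 + c**2))

omega = np.linspace(1e-3, 3.0 * gap, 601)
disc = alpha**2 - 4.0 * (1.0 + c**2) * omega**2
# continue sqrt(disc) to -i*sqrt(-disc) above the gap (infalling wave)
root = np.where(disc >= 0.0,
                np.sqrt(np.abs(disc)) + 0j,
                -1j * np.sqrt(np.abs(disc)))
b = c / (2.0 * (1.0 + c**2)) * (-alpha - root)
sigma_y = b / (1j * omega * c)
# Re[sigma_y] vanishes below the gap and equals
# sqrt(4(1+c^2) omega^2 - alpha^2)/(2 omega (1+c^2)) above it.
\end{verbatim}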
For the boundary $P$, we have not considered any dynamics of the gauge field on the EOW brane $Q$. The gauge field on $Q$ should be understood as the induced gauge field of $A_a$ in the bulk. Since the induced metric on $Q$ is asymptotic AdS$_3$, it is known from AdS$_3$/CFT$_2$ \cite{Jensen:2010em} that the expansion of the gauge field near $P$ depends on the action in the bulk. For gauge field with a canonical kinetic term\footnote{In presence of a Chern-Simons term, the expansion will be slightly different and depends on the level \cite{Jensen:2010em, Andrade:2011sx}. However, the dual current is no longer conserved and we will not consider this case here.}
we have \cite{Jensen:2010em, Faulkner:2012gt}
\begin{equation}\label{eq:exax2}
a_y \sim a_y^{(r)}\log(u)+a_y^{(s)}+\dots\,,
\end{equation}
where $a_y^{(r)}$ is the response of the dual operator while $a_y^{(s)}$ can be understood as the source term. Note that there is a scaling anomaly and the definition of the source depends on the Landau pole of the theory \cite{Faulkner:2012gt}.
Along $P$, we evaluate the solution (\ref{eq:solay}) on $Q$ and obtain
\begin{equation}
a_y\Big{|}_Q=e^{-i\omega t}\left(1+b(c+\frac{1}{c})u+\dots\right)\end{equation}
when $u\to 0$. Comparing with \eqref{eq:exax2}, we see that the Green's function is completely trivial and therefore we have $\sigma=0$ on $P$.
When $c=0$, the boundary equation \eqref{eq:be} can be further simplified as
$\partial_x a_y=0$ on $Q$.
In this case we have the solution $a_y=a_y(u)$, which solves \eqref{eq:fulay1} with $\partial_x^2 a_y=0$. In appendix \ref{app:sch}, we analyze the solution of this equation by writing it as a Schr\"{o}dinger problem and show that it indeed has a gapped spectrum. Repeating the previous study along $P$, one concludes that the conductivities are trivial both in $M$ and on $P$.
Our study shows that for the holographic insulator in the presence of a boundary, the conductivity on the boundary is also trivial. This indicates that strong correlation would not make a trivial insulator topologically nontrivial.\footnote{In the condensed matter literature, there are also examples of topological insulators with gapped boundary states \cite{Witten:2015aba, Seiberg:2016rsg} and it would be interesting to attempt to make contact with these field theories.} To obtain a topological insulator, it seems that one has to add more dynamics of the gauge field on $Q$, and we leave this possibility for future investigation.
\subsection{Entanglement entropy}
\label{subsec:2ee}
Entanglement entropy is an important physical quantity in quantum many-body systems \cite{Nishioka:2009un}. For a topological insulator, the gapless modes on the boundary are encoded in the degeneracies of the bulk ground state entanglement spectrum \cite{Fidkowski}. More generally, the concept of quantum entanglement plays an important role in characterizing topological phases \cite{kitaev, wen}. Although the study in the previous subsection shows that the gapped system from holography in the presence of a boundary is a topologically trivial insulator, it is still interesting to explore its entanglement entropies. In this subsection, we study the entanglement entropies of the gapped system with boundaries from AdS/BCFT.
It is known that the entanglement entropy is dominated by the divergent area law with the UV cutoff. In the presence of a boundary, additional terms might contribute to the entanglement entropy \cite{Fujita:2011, Seminara:2018pmr}. In \cite{Myers:2012ed, Liu:2012eea, Liu:2013una}, a renormalized entanglement entropy, which is finite and independent of the UV cutoff, has been introduced to characterize the entanglement at certain length (or energy) scale. We will generalize it to the case of BCFT.
We will first compute the entanglement entropy and then study the renormalized entanglement entropy for the gapped system in AdS/BCFT.
The subsystem under consideration is an infinite strip adjacent to the boundary
i.e. $0<x<\ell$, while the $y$ direction is infinite and will be regularized to $y\in [-L, L]$ with $L\to\infty$. The minimal surface $\gamma$ is specified by $u=u(x)$, which is a section at constant $y$. The extremal surface satisfies the boundary condition $u(\ell)=0$.
The induced metric on $\gamma$ is
\begin{equation}
ds^2_\gamma=\frac{1}{u^2}\bigg[\,\Big(1+\frac{u'^2}{f(u)}\Big)\,dx^2+dy^2\bigg]\,,
\end{equation}
from which one obtains the area functional
\begin{equation}
\label{eq:areafun1}
A=2L\,\int_{x_*}^{\ell} dx\,\frac{1}{u^2}\sqrt{1+\frac{u'^2}{f}}\,.
\end{equation}
When $f=1$, the above equations reduce to those of AdS$_4$/BCFT$_3$ in \cite{Seminara:2018pmr}. Here we focus on the gapped geometries with $f$ shown in \eqref{eq:bgsol1}.
Since the above functional does not explicitly depend on $x$, there is a conserved quantity
\begin{equation}
\label{eq:consq}
\frac{1}{u^2\sqrt{1+\frac{u'^2}{f(u)}}}=C\,,
\end{equation}
where $C$ is a constant. The final profile of the surface $\gamma$ depends on the value of $c$ which determines the embedding of the EOW brane $Q$ via \eqref{eq:bgeol2}. In the following we will discuss the cases $c\leq 0$ and
$c>0$ separately. For these two different cases, cartoon plots of the extremal surfaces are shown in Fig. \ref{fig:exsurface0}.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=0.5]
\draw [blue!10] (0,0)--(7.5,0);
\draw [purple] (6,0)--(6,6) node[midway, right]{$\gamma_1$};
\draw [green, domain=-2.5:0] plot(\x,-sinh\x );
\draw [orange] (6,0) .. controls (6,1) and (5,4.5) .. (2,4.5) node[above]{$(x_t, u_t)$ } node[midway, left]{$\gamma_2$}.. controls (1,4.5) and (-1,4) .. (-1.5, 2.13) node[left]{$(x_*, u_*)$};
\draw[black] (-2.5, 6.0) node[above]{ $Q $ };
\filldraw[orange] (2, 4.5) circle (2pt);
\filldraw[orange] (-1.5, 2.13) circle (2pt);
\filldraw[black] (0, 0) circle (2pt) node[below] {$O$};
\filldraw[black] (6, 0) circle (2pt) node[below] {$\ell$};
\draw[-latex, blue, opacity=0.8] (0, 0)--(8.5, 0) node[anchor=north, at end]{ $x$ };
\draw [blue!10] (15,0)--(17,0);
\draw [purple] (21,0)--(21,6) node[midway, right]{$\gamma_1$};
\draw [green, domain=15:17.7] plot(\x, -1155/2+227/2*\x -15/2* \x^2 + 1/6*\x^3);
\draw [orange] (21,0) .. controls (21,1) and (19,3) .. (17, 10/3) node[midway, left]{$\gamma_2$} node[left]{$(x_*, u_*)$};
\draw[black] (17.7, 6.0) node[above]{ $Q$ };
\filldraw[orange] (17, 10/3) circle (2pt);
\filldraw[black] (15,0) circle (2pt) node[below]{$O$};
\filldraw[black] (21, 0) circle (2pt) node[below]{$\ell$};
\draw[-latex, blue, opacity=0.8] (15, 0)--(23.5, 0) node[anchor=north, at end]{ $x$ };
\end{tikzpicture}
\end{center}
\vspace{-0.3cm}
\caption{\small Cartoon plot for the configuration of the extremal surfaces at $c\leq 0$ ({\em left}) and $c>0$ ({\em right}). The right plot is for the cases with $n>2$; there is no configuration of $\gamma_1$ for $n=2$. We have suppressed the $y$-axis and now the boundary theory lives along the $x$-axis. The green line is the profile of the EOW brane $Q$, with its profile parameterized by \eqref{eq:bgeol2}. For the strip geometry we consider, there might exist two different kinds of extremal surfaces.}
\label{fig:exsurface0}
\end{figure}
\begin{itemize}
\item Case 1: $c\leq 0$.
\end{itemize}
In the case $c<0$ the profile of the EOW brane $Q$ lies in the region $x\leq 0$, as shown in \eqref{eq:bgeol2}, while in the case $c=0$ the profile of $Q$ sits along the $u$ axis with $x=0$. Nevertheless, the properties of the extremal surfaces in these two cases (except the case of $c=0, n=2$) share many similarities and therefore we discuss them together.
Intuitively we expect to have two different kinds of local extrema of the area functional, as shown in the left plot of Fig. \ref{fig:exsurface0}. One configuration is the surface $x=\ell$, which corresponds to $C=0$ in \eqref{eq:consq}, i.e. the purple line $\gamma_1$ in the left plot of Fig. \ref{fig:exsurface0}.
This configuration exists for arbitrary values of $\ell>0$. As we will discuss later, for $\ell>\ell_c$ this is the configuration with minimal area.
The entanglement entropy is\footnote{Note that one can suppress $G$ using the unit $16\pi G=1$. In the following we will not do this.}
\begin{eqnarray}
\label{eq:ee1a}
\setlength\arraycolsep{1pt}
S=\frac{2L}{4G}\int_{u_c}^\infty \frac{du}{u^2\sqrt{f}}\,
=\frac{2L}{4G} \frac{2}{(n+2) u_c^{(n+2)/2}}\, {}_2 F_1\left[\frac{1}{2},
\frac{1}{2}+\frac{1}{n},
\frac{3}{2}+\frac{1}{n},
- \frac{1}{u_c^n} \right] \,,
\end{eqnarray}
where $n\in[2,~6]$, $G$ is the Newton constant and $u_c$ is the cutoff near the boundary. When $u_c\to 0$, we have
$u_c A/(2L) \to 1$. Note that the entanglement entropy \eqref{eq:ee1a} is independent of $\ell$. Therefore we have $\partial{S}/\partial \ell=0$, which means that the renormalized entropy is zero for this configuration.
Another kind of configuration is shown as orange curved line $\gamma_2$ in the left plot of Fig. \ref{fig:exsurface0}. We have the turning point $(x_t, u_t)$ at which $u'(x_t)=0$ and the intersecting point $(x_*, u_*)$ between the extremal surface $\gamma_2$ and the EOW brane $Q$ where $n_Q\cdot n_\gamma=0$, i.e.
$u'(x_*)=-c\sqrt{f}$.
From \eqref{eq:consq}, we therefore have $C^{-1}=u_t^2=u_*^2\sqrt{1+c^2}$ which leads to
\begin{equation} \label{eq:eerel1}
u_t=u_* (1+c^2)^{1/4}
\end{equation}
and
\begin{equation}
\label{eq:eerel2}
u'^2=\left(\,\frac{u_t^4}{u^4}-1\right)f\,.
\end{equation}
When $c=0$, we have $u_t=u_*$.
Note that for $x<x_t$ we have $u'>0$, while for $x>x_t$ we have $u'<0$, where the prime is the derivative with respect to $x$.
From \eqref{eq:eerel2} we have the relation
\begin{equation}
\label{eq:eeeqn1}
\ell-x_*=\int_{u_*}^{u_t} du \frac{1}{\sqrt{\big(\frac{u_t^4}{u^4}-1\big)f}}+\int^{u_t}_0 du \frac{1}{\sqrt{\big(\frac{u_t^4}{u^4}-1\big)f}}\,.
\end{equation}
Note that $u_*(x_*)$ is given by \eqref{eq:bgeol2}.
From \eqref{eq:eeeqn1} and \eqref{eq:eerel1}, together with \eqref{eq:bgeol2} which relates $u_*$ and $x_*$, one can obtain $u_t$ as a function of $\ell$, $u_t=u_t(\ell, c, n)$.
The equation \eqref{eq:eeeqn1} can only be solved numerically. In the left plot of Fig. \ref{fig:config2}, we show the dependence of $u_t$ as a function of $\ell$ for different $n$. We can see that there exists a maximal $\ell_m$. Below it, i.e. for $\ell<\ell_m$, there exist two different extremal surfaces in addition to the straight-line configuration. Above $\ell_m$ the configuration of this kind (i.e. the orange curve in the left plot of Fig. \ref{fig:exsurface0}) does not exist. This is different from the pure AdS case with a negative tension on the EOW brane, in which there does not exist a maximal $\ell$ that separates the topology of minimal surfaces \cite{Seminara:2018pmr}.
In the right plot of Fig. \ref{fig:config2}, we show the dependence of $\ell_m$ as a function of $c$ for different $n$. We find that $\ell_m$ decreases when $c$ becomes smaller and there exists a critical value $c_m$ below which we do not have any curved configuration of extremal surface. This is reminiscent of the existence of the critical tension for the extremal surfaces in pure AdS/BCFT \cite{Seminara:2018pmr}.
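In practice, the right-hand side of \eqref{eq:eeeqn1} can be tabulated by direct quadrature. The following minimal sketch does this for $n=2$, $c=-1$; it assumes $f(u)=1+u^n$ (whose $n=2$ form appears in \eqref{eq:ir2}) and the $n=2$ brane profile $x(u)=c\,\text{arcsinh}(u)$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

c, n = -1.0, 2
f = lambda u: 1.0 + u**n                 # assumed form of f(u)
x_brane = lambda u: c * np.arcsinh(u)    # n = 2 profile of Q

def ell_of_ut(ut):
    us = ut / (1.0 + c**2) ** 0.25       # u_t = u_* (1+c^2)^{1/4}
    g = lambda u: 1.0 / np.sqrt((ut**4 / u**4 - 1.0) * f(u))
    # integrable square-root singularity at u = u_t
    i1, _ = quad(g, us, ut, limit=200)
    i2, _ = quad(g, 0.0, ut, limit=200)
    return x_brane(us) + i1 + i2

for ut in (0.05, 0.1, 0.2, 0.4):
    print(f"u_t = {ut:4.2f} -> l = {ell_of_ut(ut):.4f}")
\end{verbatim}
Scanning $u_t$ and inverting $\ell(u_t)$ reproduces the curves in the left plot of Fig. \ref{fig:config2}, with the maximum of $\ell(u_t)$ located at $\ell_m$.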
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.45\textwidth]{fig-ut-l.pdf}
~~~
\includegraphics[width=0.46\textwidth]{fig-l-c.pdf}
\end{center}
\vspace{-0.3cm}
\caption{\small {\em Left: } The location of the turning point $u_t$ as a function of the width of the strip $\ell$ when $c=-1$ and different $n$. {\em Right}: The maximal value of the width $\ell_m$, below which there are two extremal curved surfaces, as a function of the tension parameter $c$ for different $n$ when $c\leq 0$. There exists a critical value of $c\approx -1.32$
below which we do not have curved extremal surface.
In these two plots, we have $n=2$ (red),$\,3$ (brown), $\,4$ (orange), $\,5$ (purple), $\,6$ (blue).
}
\label{fig:config2}
\end{figure}
Fig. \ref{fig:es} shows examples of extremal surfaces for a specific value of $n=2, c=-1$. The BCFT lives in $x\geq 0$ and the green line refers to the location of the EOW brane $Q$. We choose one specific value of $\ell$ with $\ell<\ell_m$ and plot the two curved (brown and orange) and one straight (purple) extremal surfaces. For any value of $\ell$, the vertical line of extremal surface always exists. When $\ell$ is smaller than $\ell_m$, there exist three different configurations of extremal surfaces. When $
\ell=\ell_m$ there exist two different configurations of extremal surface where the two curved lines merge into the same line, while when $\ell$ is greater than $\ell_m$, there exists only one extremal surface which is the straight line.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.62\textwidth]{fig-min-surface.pdf}
\end{center}
\vspace{-0.4cm}
\caption{\small Plot of the extremal surfaces for $c=-1,\, n =2$. In this case $\ell_m\approx 0.036$. When $\ell=0.015<\ell_m$, there exist three different extremal surfaces, including the curved brown, orange lines and the straight purple line. The dots on the curves are the locations of the turning points.
}
\label{fig:es}
\end{figure}
The entanglement entropy can be obtained from the area of the extremal surfaces
\begin{equation}
\label{eq:ee1}
\begin{split}
S&= \frac{A}{4G}=\frac{L}{2G} \,\int_{x_*}^{\ell-\epsilon} dx\,\frac{1}{u^2}\sqrt{1+\frac{u'^2}{f}}\,.
\end{split}
\end{equation}
As seen from the discussions above, there might be multiple extremal surfaces and the RT surface is determined by the one with minimal area. Using the same cutoff $u_c$ which satisfies $u(\ell-\epsilon)=u_c$, the area of the extremal surfaces $ \frac{u_c A}{2L}$ for $n=2, c=-1$ is shown in Fig. \ref{fig:eec>0}. We find that there exists a critical $\ell_c$ below which the orange curve has minimal area, while above $\ell_c$ the straight vertical purple line has minimal area. There is a first order transition at $\ell_c$. Moreover, we always have $\ell_c<\ell_m$. These phenomena are quite general for any $c\leq 0$ except the case of $c=0, n=2$, which we will comment on at the end of this part.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.49\textwidth]{fig-S-l-clt0.pdf}
~~~
\includegraphics[width=0.46\textwidth]{fig-lc.pdf}
\end{center}
\vspace{-0.4cm}
\caption{\small {\em Left:} Plot of entanglement entropy $\frac{u_cA}{2L}$ as a function of $\ell$ for $c=-1, n=2$. We have used $u_c=10^{-3}$. Note that when $\ell$ is small, one needs to choose smaller $u_c$ to make sure $\epsilon\ll \ell$.
{\em Right:} The critical length $\ell_c$ as a function of $c$ for different $n$.}
\label{fig:eec>0}
\end{figure}
We find that the entanglement entropy satisfies
\begin{eqnarray}
S=\frac{A}{4G}=\frac{L}{2G}\,\Big[ \frac{a_1}{u_c}-a_2+\mathcal{O}(u_c) \Big] \,,
\end{eqnarray}
where $a_1$ is approximately equal to $1$, which can be seen from \eqref{eq:ee1}. Furthermore, $a_2 ~(a_2>0)$ is a function of $\ell$ and independent of the cutoff. From \eqref{eq:areafun1}, when $\ell\to 0$, we have $f\to 1$, $u'\to -\infty$, and one expects $a_2\propto 1/\ell$ at very small $\ell$. For larger $\ell>\ell_c$, from \eqref{eq:ee1a} we have $a_2=0$.
For other values of $\ell$,
we have to obtain the behavior of $a_2$ numerically.
For holographic CFTs without boundary, we have $a_2\propto 1/\ell$ with a constant coefficient \cite{Nishioka:2009un}, while for AdS$_4$ plus the EOW brane with constant tension, we also have $a_2\propto 1/\ell$ with the coefficient depending on the effective tension of the EOW brane \cite{Seminara:2018pmr}.
Nonetheless, in our case, $a_2$ has a complicated dependence on $\ell$.
Now let us discuss the renormalized entanglement entropy following \cite{Myers:2012ed,Liu:2013una}. Close to $u\to 0$, from \eqref{eq:eerel2} we have
\begin{equation}\label{eq:uto01}
x(u)=\ell-\frac{u^3}{3 u_t^2}+\dots\,.
\end{equation}
From the variation of \eqref{eq:ee1} with respect to $\ell$ and using \eqref{eq:uto01}, we have
\begin{eqnarray}
\label{eq:effent}
\mathcal{F}=
\frac{\ell^2}{2L}\frac{\partial S}{\partial \ell } =\frac{1}{4G}\frac{\ell^2}{u_t^2}\,.
\end{eqnarray}
The detailed derivation of the above equation can be found in the appendix \ref{app:ree}.
We see that the renormalized entanglement entropy $\mathcal{F}$ is determined by $\ell, u_t$,
which takes a similar form to the case of AdS$_4$/CFT$_3$ without boundary \cite{Myers:2012ed,Liu:2013una}.
However now the detailed dependence of $u_t$ on $\ell$ is different from the case without boundary. Compared with AdS$_4$/BCFT$_3$ in pure AdS$_4$ where $\mathcal{F}$ is independent of $\ell$ \cite{Chu:2017aab,Seminara:2018pmr}, now we have interesting nontrivial
$\ell$ dependence of $\mathcal{F}$ as shown in Fig. \ref{fig:reeF}: when $\ell<\ell_c$, $\mathcal{F}$ is positive and monotonically decreasing, at $\ell=\ell_c$ there is a discontinuity for $\mathcal{F}$ and when $\ell>\ell_c$, $\mathcal{F}=0$.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.65\textwidth]{fig-F-l-clt0.pdf}
\end{center}
\vspace{-0.4cm}
\caption{\small Plot of the renormalized entanglement entropy $4G \mathcal F$ as a function of $\ell$ when $c=-1$ and $n=2$. The solid lines are for the minimal surfaces while the dashed brown, yellow and purple lines are for the non-minimal extremal surfaces. The dashed black line is the location of the transition $\ell_c$.
}
\label{fig:reeF}
\end{figure}
We make some comments on the case of $c=0, n=2$. In this case the extremal surface behaves differently compared to the other cases of $c\leq 0$ (the left plot in Fig. \ref{fig:config5}): when $\ell<\ell_m$, there is only one curved extremal surface in addition to the straight vertical one, while in the other cases there exist two different curved extremal surfaces. In the right plot of Fig. \ref{fig:config5}, we show one example of the extremal surface for $\ell<\ell_m$. When $\ell>\ell_m$, we have only one single straight extremal surface, the same as in the other cases.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.46\textwidth]{fig-ut-l-c0.pdf}
~~~
\includegraphics[width=0.45\textwidth]{fig-min-surface-c0n2.pdf}
\end{center}
\vspace{-0.3cm}
\caption{\small {\em Left:} Plots of $u_t$ as functions of $\ell$ when $c=0$ and $n=2$ (red),$\,3$ (brown), $\,4$ (orange), $\,5$ (purple), $\,6$ (blue). {\em Right:} The extremal surfaces for $c=0, n=2$. In this case $\ell_m\simeq 0.785$ and we have chosen $\ell=0.753$ (with $\ell<\ell_m$) so that there exist two different extremal surfaces.
}
\label{fig:config5}
\end{figure}
Fig. \ref{fig:ee0n2} shows the area of the extremal surfaces (left plot) and the renormalized entanglement entropy (right plot) as functions of the width of the strip $\ell$. We see that, differently from the other cases discussed in this part (e.g. Fig. \ref{fig:eec>0}), the area of the curved extremal surface is equal to the area of the straight vertical surface at $\ell=\ell_m$. The renormalized entanglement entropy is positive and monotonically decreasing when $\ell<\ell_c$ and is continuous, though not smooth, at $\ell=\ell_c$, which is different from the other cases of $c\leq 0$.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.46\textwidth]{fig-S-l-c0-n2.pdf}
\includegraphics[width=0.45\textwidth]{fig-F-l-c0-n2.pdf}
\end{center}
\vspace{-0.3cm}
\caption{\small {\em Left:} Plots of the area of the extremal surfaces as a function of $\ell$ when $c=0, n=2$. We set the cutoff $u_c=10^{-3}$.
{\em Right:} The renormalized entanglement entropy $4G \mathcal F$ as a function of $\ell$ for $c=0, n=2$.
}
\label{fig:ee0n2}
\end{figure}
Comparing the right plot in Fig. \ref{fig:ee0n2} with the one in Fig. \ref{fig:reeF}, we see that the renormalized entanglement entropy $\mathcal{F}$ behaves differently. Note that in both cases we have fixed the same metric in $N$ (i.e. $n=2$) but different values of $c$, which plays the role of the effective tension of the EOW brane. This indicates that the boundary of the BCFT has nontrivial effects on the renormalized entanglement entropy, i.e. on the number of effective degrees of freedom inside the strip.
\begin{itemize}
\item Case 2: $c > 0$.
\end{itemize}
In this case we have different profiles of the EOW brane, depending on the value of $n$. As can be seen from Fig. \ref{fig:config}, when $n=2$ the EOW brane approaches $x\to\infty$, while when $n\in (2, 6]$ the EOW brane can only approach a finite value $x_m$.
Thus, in the case $n=2$, there is only one kind of extremal surface, while in the latter case there might be two different kinds of extremal surfaces when $x>x_m$, as shown in the right plot of Fig. \ref{fig:exsurface0}. In the following we will study these two cases separately.
The entanglement entropy associated with the straight line has the same form as \eqref{eq:ee1a}. We focus on the configuration of the curved extremal surface, i.e. the curved line $\gamma_2$ in the right plot of Fig. \ref{fig:exsurface0}.
The intersecting point between the extremal surface and the EOW brane $(x_*, u_*)$ satisfies $n_Q\cdot n_\gamma=0$, i.e.
$u'(x_*)=-c\sqrt{f}$. Then $C^{-1}=u_*^2\sqrt{1+c^2}$.
Therefore we have
\begin{equation}
\label{eq:eerel3}
u'=-\sqrt{f}\,\bigg(\frac{u_*^4}{u^4}(1+c^2)-1\bigg)^{1/2}\,.
\end{equation}
Then we have the relation
\begin{align}
\ell-x_*=-\int_{u_*}^0 du\, \frac{1}{\sqrt{f}\,\bigg(\frac{u_*^4}{u^4}(1+c^2)-1\bigg)^{1/2}}\,.
\end{align}
As we know $x_*(u_*)$ from the equation \eqref{eq:bgeol2} for the EOW brane $Q$, one can obtain $u_*$ as a function of $c, n, \ell$. The plots for $u_*$ as a function of $\ell$ at two different values of $c$ and different $n$'s are shown in Fig. \ref{fig:cf-sol}. For $n=2$, we see that $u_*$ is monotonically increasing when we increase $\ell$, as expected because the EOW brane approaches $x\to\infty$. For other values of $n=3,4,5,6$, there exists a critical value of $c$ which separates different behaviors of the extremal surfaces. In the left plot with $c<c_m$, we find that when $\ell<x_m$ there exists only one curved extremal surface, while when $x_m<\ell<\ell_m$ there exist two different curved surfaces, and when $\ell_m<\ell$ there does not exist any curved extremal surface. Note that the vertical straight extremal surface shown as $\gamma_1$ in the right plot of Fig. \ref{fig:exsurface0} exists when $x_m<\ell$. When we increase $c$ to make it larger than a critical value $c_m$, as shown in the right plot of Fig. \ref{fig:cf-sol}, we find that when $\ell<x_m$ there exists only one curved extremal surface, while when $\ell>x_m$ there exists only the vertical straight extremal surface.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.44\textwidth]{fig-us-l-cs.pdf}
~~
\includegraphics[width=0.44\textwidth]{fig-us-l-cl.pdf}
\end{center}
\vspace{-0.3cm}
\caption{\small The location of the intersecting point between the extremal surface and the EOW brane, $u_*$, as a function of $\ell$ for $c=1/4$ ({\em left}) and $c=3$ ({\em right}). In both cases, we have $n=2$ (red),$\,3$ (brown), $\,4$ (orange), $\,5$ (purple), $\,6$ (blue).
}
\label{fig:cf-sol}
\end{figure}
Three typical extremal surfaces are shown in Fig. \ref{fig:ee-sol}. The blue line is the $x$ axis of the BCFT, while the green line is the location of the EOW brane $Q$. The left plot is for $n=2$, where there always exists a single curved extremal surface for arbitrary $\ell$, shown as the orange line. The middle and right plots are for $n=4$ with different $c$. In the middle plot, with $c<c_m$ and $x_m<\ell<\ell_m$, we have three extremal surfaces. In the right plot, with $c>c_m$ and $\ell<x_m$, we have only one curved extremal surface. In the following we will study the behavior of the entanglement entropy and the renormalized entanglement entropy for these typical cases.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.32\textwidth]{fig-min-surface-n2c1.pdf}
~
\includegraphics[width=0.31\textwidth]{fig-min-surface-n4c05.pdf}
~
\includegraphics[width=0.32\textwidth]{fig-min-surface-n4c2-l.pdf}
\end{center}
\vspace{-0.3cm}
\caption{\small Three typical extremal surfaces for $c>0$: we have set $n=2, c=1$ ({\em left}), $n=4, c=1/2$ ({\em middle}) and $n=4,c=2$ ({\em right}). The green line is the location of the EOW brane $Q$ and the blue line is the $x$ axis of BCFT.
}
\label{fig:ee-sol}
\end{figure}
The entanglement entropy of the strip can be obtained from the area of the minimal surface
\begin{eqnarray}
\label{eq:ee2}
\begin{split}
S=\frac{A}{4G}&=\frac{L}{2G}\,\int_{x_*}^{\ell-\epsilon}\, dx\frac{1}{u^2}\,\sqrt{1+\frac{u'^2}{f}}\,.
\end{split}
\end{eqnarray}
When there are multiple extremal surfaces, we again need to choose the one with the minimal area.
The areas of the above typical configurations of extremal surfaces can be found in Fig. \ref{fig:ee-sol2}. For $n=2$ and $c=1$ (left), the entanglement entropy is continuous and smooth when we increase $\ell$. For $n>2$, we find that for the case $c<c_m$ (middle) or $c>c_m$ (right), there is a continuous transition at $\ell_c$ (with $x_m<\ell_c<\ell_m$) or smooth crossover at $\ell=x_m$ from the orange curved line to the purple straight line.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.3\textwidth]{fig-S-l-c1n2.pdf}
\includegraphics[width=0.3\textwidth]{fig-S-l-c01n6.pdf}
\includegraphics[width=0.3\textwidth]{fig-S-l-c2n4.pdf}
\end{center}
\vspace{-0.4cm}
\caption{\small Plots of the area of extremal surfaces as a function of the width of the strip $\ell$ for $n=2, c=1$ ({\em left}), $n=6, c=1/10$ ({\em middle}), $n=4,c=2$ ({\em right}).
We have set $u_c=10^{-3}$. The one with smallest area gives the correct entanglement entropy.
}
\label{fig:ee-sol2}
\end{figure}
Due to the divergence of the entanglement entropy, we again study the renormalized entanglement entropy which is independent of the cutoff. Close to $u\to 0$, from \eqref{eq:eerel3} we have
\begin{equation}\label{eq:uto02}
x(u)=\ell-\frac{u^3}{3 u_*^2\,(1+c^2)}+\dots\,.
\end{equation}
Following \cite{Myers:2012ed,Liu:2013una} and the calculations in the appendix \ref{app:ree}, we vary \eqref{eq:ee2} with respect to $\ell$. Using \eqref{eq:uto02}, we obtain the dimensionless renormalized entanglement entropy
\begin{eqnarray}
\label{eq:effent2}
\mathcal{F}=
\frac{\ell^2}{2L}\frac{\partial S}{\partial \ell }=\frac{1}{4G}\frac{\ell^2}{u_*^2\sqrt{1+c^2}}\,.
\end{eqnarray}
The behavior of renormalized entanglement entropy is shown in Fig. \ref{fig:ree-sol}. The left plot is for $n=2$ and we find that the renormalized entanglement entropy is non-negative and monotonically decreasing.
The middle plot is for $n=6$ with $c<c_m$. The solid line is for the configuration with minimal area and we see that there is a jump in the renormalized entanglement entropy at $\ell=\ell_c$ (with $x_m<\ell_c<\ell_m$). The right plot is for $n=4$ with $c>c_m$. We see that the renormalized entanglement entropy is continuous, though not smooth, at $\ell=x_m$ (black dashed line).
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.3\textwidth]{fig-F-l-c1n2.pdf}
\includegraphics[width=0.3\textwidth]{fig-F-l-c01n6.pdf}
\includegraphics[width=0.3\textwidth]{fig-F-l-c2n4.pdf}
\end{center}
\vspace{-0.4cm}
\caption{\small Plots of the renormalized entanglement entropy $4G\mathcal F$ as a function of $\ell$ for $n=2, c=1$ ({\em left}), $n=6, c=1/10$ ({\em middle}), $n=4,c=2$ ({\em right}).
}
\label{fig:ree-sol}
\end{figure}
Finally, let us discuss the effect of the boundary on the BCFT. We have seen that with different choices of $c$, which is related to the effective tension of the EOW brane, the profiles of the brane are different. From the plot in Fig. \ref{fig:reeF}, the right one in Fig. \ref{fig:ee0n2} and the left one in Fig. \ref{fig:ree-sol}, which are for the same bulk geometry with $n=2$ but different values of $c$, we find that the renormalized entanglement entropy behaves differently. Fig. \ref{fig:ree-c} shows the behavior of the renormalized entanglement entropy in the limit $\ell\to 0$ as a function of $c$, which is independent of $n$. It is known that the renormalized entanglement entropy can be viewed as a measure of the number of effective degrees of freedom. We find that with different profiles of the EOW brane, which should be determined by the properties of the BCFT, the UV degrees of freedom of the BCFT are different. When the size of the strip is large enough, i.e. $\ell>\ell_c$, the renormalized entanglement entropy goes to zero.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.6\textwidth]{fig-F0-c.pdf}
\end{center}
\vspace{-0.5cm}
\caption{\small Plots of $4G\mathcal F(\ell\to 0)$ as a function of $c$. The behavior is independent of $n$ since in the limit $\ell\to 0$, only the UV physics is important.
}
\label{fig:ree-c}
\end{figure}
\section{AdS Soliton in AdS/BCFT}
\label{sec3}
In the previous section we have studied the gapped system which is dual to Einstein-scalar theory in the presence of a boundary. In this section, we will study another gapped system which is described by the AdS soliton \cite{Witten:1998zw}.
We will focus on the five-dimensional AdS soliton where one of the spatial dimensions is compact; the dual field theory is therefore 2+1 dimensional with an additional compact extra dimension. We consider the presence of a boundary for the dual field theory along the non-compact dimension and study its transport and entanglement structure parallel to the discussion in the previous section.
The action of the holographic model is
\begin{eqnarray}
\mathcal{S}_\text{bulk}&=&\mathcal{S}_N+\mathcal{S}_Q\,,
\end{eqnarray}
where
\begin{eqnarray}
\begin{split}
\mathcal{S}_N&=\int_N d^5x\sqrt{-g}\,
\bigg[\frac{1}{2\kappa^2}\bigg(R+12 \bigg) -\frac{1}{4e^2}F^2 \bigg] \,,\\
\mathcal{S}_{Q}&=\int_Q d^4x\sqrt{-\gamma}\, \bigg[\frac{1}{\kappa^2}\big(K-T\big)
\bigg]\,.
\end{split}
\end{eqnarray}
The EOW brane $Q$ is similar to the setup in the previous subsection as shown in Fig. \ref{fig:cf}, which extends from the boundary $P$ of the BCFT to the bulk. $T$ is the tension of the brane. We set $2\kappa^2=e=1$.
The equations of motion in $N$ are
\begin{eqnarray}
R_{ab}-\frac{1}{2}g_{ab}\big(R+12\big)-\frac{1}{2}\bigg[F_{ac}F_{b}{}^{~c}-\frac{1}{4}g_{ab}F^2\bigg]&=&0 \,,\\
\nabla_b F^{ba}&=&0\,.
\end{eqnarray}
The metric of AdS soliton
geometry at zero density is
\begin{equation}
\label{eq:adssoliton}
ds^2=\frac{1}{u^2}\bigg[-dt^2+dx^2+dy^2+\frac{du^2}{f(u)}\bigg]+\frac{f(u)}{u^2}d\theta^2\,,~~ f(u)=1-\frac{u^4}{u_0^4}\,,~~~A_a=0\,,
\end{equation}
$\theta$ is periodic with $\theta\sim \theta +\pi u_0$. Note that $u_0$ sets the scale of the gap. This geometry is asymptotically AdS and approaches $R^{1,2}\times S^1$ near the boundary. Here $M$ is also defined on the half plane with $x\geq 0$.
The AdS boundary is at $u\to 0$. AdS soliton exists at $u\leq u_0$.
Obviously, the AdS soliton geometry \eqref{eq:adssoliton} is a solution of the system.
The equations of motion on $Q$ are
\begin{eqnarray}
\begin{split}
K_{\mu\nu}-(K-T)\gamma_{\mu\nu}&=0\,, \\
n_aF^{ab}&=0\,.
\end{split}
\end{eqnarray}
where $n^a$ is the outward-pointing unit normal vector of $Q$. We assume that $Q$ is described by the equation $x=x(u)$; then we have
\begin{eqnarray}
\label{eq:sol-na}
(n^t,n^x,n^y,n^u,n^\theta)=\bigg(0,~~\frac{-u}{\sqrt{1+f(u)x'(u)^2}}\,,~~0,~~\frac{u f(u)x'(u)}{\sqrt{1+f(u)x'(u)^2}}\,,~~0\bigg)\,.
\end{eqnarray}
Plugging \eqref{eq:sol-na} into the equations on $Q$, we find that there is only one consistent solution: the trivial embedding $x(u)=0$ with $T=0$.\footnote{It would be interesting to study whether other nontrivial consistent embeddings could be found when we choose Dirichlet or mixed boundary conditions. We leave this possibility for future study.}
This fact makes the discussion for the AdS soliton simpler than the gapped geometry in section \ref{subsec:groundstate} where there are different profiles for $Q$.
\subsection{Conductivity}
\label{ss:solcon}
With the above configurations for the AdS soliton with a boundary, we can study its transport physics and entanglement structure. We first study the conductivity along the $y$-direction.
Considering the fluctuations of the gauge fields as \eqref{eq:gauflu}, we obtain
the fluctuation equation for $a_y$ in $N$
\begin{eqnarray}\label{eq:fulay2}
a_y''+\left(\frac{f'}{f}-\frac{1}{u}\right)a_y'+\frac{\omega^2+\partial_x^2}{f} a_y&=&0\,,
\end{eqnarray}
and the equation for $a_y$ on the EOW brane $Q$
\begin{equation}
(-\partial_x a_y +f x' \partial_u a_y)\Big{|}_Q=0\,.
\end{equation}
Since $Q$ is described by $x=0$, the boundary equation can be simplified further as $\partial_x a_y\big{|}_Q=0$.
Therefore this is quite similar to the case of $c=0$ in section \ref{subsec:con}, and we have solution
\begin{equation} a_y=c_0 a(u,\omega)
\end{equation}
where $c_0$ is a constant and $a(u,\omega)$ satisfies
\begin{eqnarray} \label{eq:fula}
a''+\left(\frac{f'}{f}-\frac{1}{u}\right)a'+\frac{\omega^2}{f} a=0\,.
\end{eqnarray}
In appendix \ref{app:sch}, we show that the system is gapped by transforming the above equation into a Schr\"{o}dinger problem; the real part of the conductivity is a sum of discrete poles. Thus for this model the conductivities along the boundary of the BCFT are trivial both in $M$ and on $P$.
\subsection{Entanglement entropy}
Similar to the discussions in section \ref{subsec:2ee}, we study the entanglement entropy of a strip geometry. The subsystem under consideration is $0<x<\ell$, while $-L<y<L$ with $L\to\infty$ and $0\leq \theta\leq \pi u_0 $.
The extremal surface $\gamma$ is specified by $u=u(x)$ which is a section at $y=\text{constant}$.
The induced metric on $\gamma$ is
\begin{equation}
ds^2_\gamma=\frac{1}{u^2}\bigg[\,\Big(1+\frac{u'^2}{f(u)}\Big)\,dx^2+dy^2\bigg]+\frac{f(u)}{u^2}d\theta^2\,,
\end{equation}
from which one obtains the area functional
\begin{equation}
\label{eq:areafun2}
A=2\pi u_0 L\,\int_{x_*}^{\ell} dx\,\frac{\sqrt{f+u'^2}}{u^3}\,.
\end{equation}
Since the above functional does not explicitly depend on $x$, there is a conserved quantity
\begin{equation}
\label{eq:consq2sol}
\frac{f}{u^3\sqrt{f+u'^2}}=C\,.
\end{equation}
Note that for the extremal surface, we have boundary $u(\ell)=0$. Since the geometry is only for $0<u\leq u_0$ and $Q$ is located at $x=0$, one might expect that there are two different kinds of extremal surfaces as shown in the cartoon plot of Fig. \ref{fig:caree-sol}.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw[black] (0,0) node[anchor=north, below]{ $O$ };
\draw [blue!10] (0,0)--(7,0);
\draw [black, dashed] (0,6)--(8,6);
\draw [purple] (6,0)--(6,6) node[midway, right]{$\gamma_1$};
\draw [green] (0,0)--(0,7);
\draw [orange] (6,0) .. controls (6,2) and (5,4.5) .. (0,5) node[left]{$(x_*, u_*)$ } node[midway, left]{$\gamma_2$};
\draw[black] (0, 7) node[above]{ $Q$ };
\draw[black] (0, 6) node[left]{ $u_0$ };
\filldraw[orange] (0, 5) circle (2pt);
\filldraw[black] (0, 0) circle (2pt);
\filldraw[blue] (6, 0) circle (2pt);
\draw[black] (6, 0) node[below]{ $\ell$ };
\draw[-latex, blue, opacity=0.8] (0, 0)--(8.5, 0) node[anchor=north, at end]{ $x$ };
\end{tikzpicture}
\end{center}
\vspace{-0.3cm}
\caption{\small Cartoon plot for the extremal surfaces $\gamma_1$ and $\gamma_2$ in the AdS soliton geometry with an EOW brane $Q$. }
\label{fig:caree-sol}
\end{figure}
The first configuration $\gamma_1$ in Fig. \ref{fig:caree-sol} is the surface $x=\ell$ which corresponds to $C=0$ in \eqref{eq:consq2sol}. Note that this configuration should exist for any $\ell$. The entanglement entropy from this extremal surface is
\begin{align}
\label{eq:ee-sol-straight}
S=\frac{2\pi u_0 L}{4G}\int_{u_c}^{u_0} \frac{du}{u^3}\,
=\frac{\pi u_0 L}{4G}\left(\frac{1}{u_c^2}-\frac{1}{u_0^2}\right) ,
\end{align}
where $u_c$ is the cutoff close to the boundary and $u(\ell-\epsilon)=u_c$.
When $u_c\to 0$, we have $u_c A/(2\pi u_0 L)\to 1/2$.
The entanglement entropy is independent of $\ell$ and therefore we have
$\partial{S}/\partial \ell=0$.
Another configuration $\gamma_2$ in Fig. \ref{fig:caree-sol} only exists for small $\ell$. We have the intersecting point $(x_*, u_*)$ between the extremal surface and the EOW brane where $n_Q\cdot n_\gamma=0$, i.e.
$u'(x_*=0)=0$.
From \eqref{eq:consq2sol}, we have $C=\sqrt{f(u_*)}/u_*^3$ which leads to
\begin{equation}
\label{eq:eerel2sol}
u'=-\sqrt{\frac{u_*^6 f^2}{u^6 f(u_*)}-f}\,.
\end{equation}
Then we have the relation
\begin{equation}
\label{eq:eeeqn1sol}
\ell=\int^{u_*}_0 du\, \frac{1}{\sqrt{\frac{u_*^6 f^2}{u^6 f(u_*)}-f}}\,.
\end{equation}
From \eqref{eq:eeeqn1sol}, one could obtain $u_*$ as a function of $\ell$ as
shown in the left plot of Fig. \ref{fig:config-s1}.
Note that we should have $u_*< u_0$. There exists a maximal value $\ell_m$ below which we have two different configurations of curved extremal surfaces. One example of the extremal surfaces at $\ell<\ell_m $ is shown in the right plot of Fig. \ref{fig:config-s1}. These features are reminiscent of the findings of \cite{Klebanov:2007ws} in the absence of a boundary.
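The quadrature in \eqref{eq:eeeqn1sol} is elementary to implement; the following minimal sketch (in units $u_0=1$) reproduces the left plot of Fig. \ref{fig:config-s1}:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

f = lambda u: 1.0 - u**4   # soliton factor, in units u_0 = 1

def ell_of_us(us):
    g = lambda u: 1.0 / np.sqrt(
        us**6 * f(u)**2 / (u**6 * f(us)) - f(u))
    val, _ = quad(g, 0.0, us, limit=200)
    return val

# scanning u_* in (0, 1) exhibits the maximal width l_m
for us in (0.3, 0.5, 0.7, 0.9, 0.99):
    print(f"u_* = {us:4.2f} -> l = {ell_of_us(us):.4f}")
\end{verbatim}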
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.44\textwidth]{fig-ut-l-soliton.pdf}
~~~
\includegraphics[width=0.43\textwidth]{fig-min-surface-soliton.pdf}
\end{center}
\vspace{-0.3cm}
\caption{\small {\em Left:} The location of the intersecting point $u_*$ as a function of the width of the strip $\ell$. {\em Right:} One example of the extremal surfaces in the AdS soliton geometry with the EOW brane for a given width $\ell<\ell_m$. Note that here we have set $u_0=1$.
}
\label{fig:config-s1}
\end{figure}
The entanglement entropy can be obtained from the area of the extremal surfaces
\begin{equation}
\begin{split}
\label{eq:eesol}
S&= \frac{A}{4G}=\frac{\pi u_0 L}{2G} \,\int_{x_*}^{\ell-\epsilon} dx\frac{\sqrt{f+u'^2}}{u^3}\,.
\end{split}
\end{equation}
The extremal surface with minimal area gives the correct entanglement entropy. In the left plot of Fig. \ref{fig:ree-soliton}, we show the area of the extremal surfaces as a function of the width of the strip $\ell$. We find that there exists a critical value $\ell_c$, which is smaller than the $\ell_m$ found in the left plot of Fig. \ref{fig:config-s1}. When $\ell<\ell_c$, the orange curved extremal surface has minimal area. When $\ell>\ell_c$, the straight vertical purple curve has minimal area. Furthermore, when $\ell\to 0$, we have $f\to 1, u'\to-\infty$, and from \eqref{eq:eesol} we find that $S\propto \frac{1}{\ell^2}$, which reflects the UV properties of the BCFT.
The renormalized entanglement entropy can also be discussed. Close to $u\to 0$, from \eqref{eq:eerel2sol} we have
\begin{equation}\label{eq:uto02sol}
x(u)=\ell-\frac{u^{4}}{4 u_*^3}+\dots\,.
\end{equation}
From the variation of \eqref{eq:eesol} with respect to $\ell$ and using \eqref{eq:uto02sol}, we obtain
\begin{eqnarray}
\label{eq:effentsol}
\mathcal{F}=
\frac{\ell^3}{2\pi u_0 L}\frac{\partial S}{\partial \ell } =\frac{\sqrt{f(u_*)}}{4G}\frac{\ell^3}{u_*^3}\,.
\end{eqnarray}
The behavior of the renormalized entanglement entropy is shown in the right plot of Fig. \ref{fig:ree-soliton}, which is reminiscent of the plot in Fig. \ref{fig:reeF}. We find that the renormalized entanglement entropy (solid lines) is non-negative and monotonically decreasing and there is a discontinuous transition at $\ell_c$ (dashed black line). Different from the discussion in the previous section, there are no free parameters analogous to the effective tension of the EOW brane to further enrich the entanglement structure.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.45\textwidth]{fig-S-l-soliton.pdf}
~~
\includegraphics[width=0.48\textwidth]{fig-F-l-soliton.pdf}
\end{center}
\vspace{-0.3cm}
\caption{\small {\em Left:} The area of the extremal surfaces $u_cA/(2\pi u_0 L)$ as a function of the width of the strip $\ell$. When we set the cutoff $u_c=10^{-5}$, the difference along the vertical axis is of order $10^{-10}$ and thus we do not explicitly show it. {\em Right:} The renormalized entanglement entropy $4G\mathcal{F}$ as a function of the width $\ell$. There is a discontinuous transition of the renormalized entanglement entropy at $\ell=\ell_c$.
}
\label{fig:ree-soliton}
\end{figure}
\section{Conclusion and discussion}
\label{sec4}
We have studied the properties of two holographic gapped systems at zero density in the presence of boundaries using AdS/BCFT. The first gapped system is described by Einstein-scalar gravity and the second one is the dual of the AdS soliton geometry. In the first system, the profiles of the EOW brane are much richer, depending on the effective tension of the EOW brane, while in the second system we only find one consistent trivial profile of the EOW brane. In these two systems, both the bulk and boundary conductivities in the BCFT along the spatial direction of its boundary are trivial, and hence we learn that strong correlation cannot make a topologically trivial insulator topologically nontrivial. The entanglement structure in the first system is very rich. The boundary physics has nontrivial effects on the entanglement structure of the system. For example, by comparing the plot in Fig. \ref{fig:reeF}, the right one in Fig. \ref{fig:ee0n2} and the left one in Fig. \ref{fig:ree-sol}, which are for the same bulk geometry parameterized by $n=2$ but different values of $c$ that parameterize the effective tension of the EOW brane, we find that the renormalized entanglement entropies behave differently. Nevertheless, in the presence of a boundary, the renormalized entanglement entropy is always non-negative and monotonically decreasing, and can evolve discontinuously, continuously or even smoothly when we increase the length scale of the subsystem.
In the system of the AdS soliton with a boundary, the renormalized entanglement entropy exhibits a ``unique'' behavior, with a discontinuous drop when we increase the size of the subsystem.
One immediate open question is to study other fluctuation modes, e.g. metric fluctuations, to check if there are any gapless degrees of freedom on the boundary of the BCFT. Another interesting question is to define a proper physical quantity to extract the contribution from the boundary side. A possible candidate is the difference between the cases with and without boundaries. The entanglement structure in the gapped geometry without any EOW brane has been studied in \cite{Liu:2013una}, where the renormalized entanglement entropy was calculated to take the same expression as (\ref{eq:effent}). For the geometry without any boundary, one might naively identify the system through a mirror reflection $x\to -x$, which would result in the same conclusions as $c=0$. For $n=2$, we have seen that the renormalized entanglement entropy crucially depends on the value of $c$. In particular, for small $\ell$, when $c$ is positive we have a larger renormalized entanglement entropy, while when $c$ is negative we have a smaller one. This indicates that the different profiles of the EOW brane can add or reduce the UV degrees of freedom of the CFT. It would be interesting to study how to reveal this procedure more precisely. Meanwhile, it would be interesting to study the entanglement entropy for other subsystems to see if similar phenomena can be observed.
It would be very interesting to construct a holographic model for topological insulator from AdS/BCFT and then make predictions from the model. The study in this work suggests that new ingredients should be incorporated if we start from a gapped system by introducing an EOW brane using AdS/BCFT. One possibility is to study other types of gauge theories on the gravity side, e.g. the Dirac-Born-Infeld action for the gauge field, to make the boundary equation for the gauge field more complicated in order to have gapless excitations. Another possibility is to introduce a new dynamical gauge field on the brane to model the gapless excitations in analogy to the holographic Kondo model \cite{Erdmenger:2015xpq, Andrei:2018die}. We leave these interesting questions for future research.
\vspace{.8cm}
\subsection*{Acknowledgments}
We thank Li Li, Rong-Xin Miao, Francisco Peña-Benitez, Jie Ren, Ya-Wen Sun, Xin-Meng Wu for useful discussions. This work is supported by the National Natural Science Foundation of China grant No.11875083. Jun-Kun Zhao is also supported by the National Natural Science Foundation of China grant No.12122513 and No.12075298.
\vspace{.6 cm}
\section{Introduction}
In this work, we continue to explore the extensions of coherent state transforms to the context of Clifford
analysis started in \cite{KMNQ, DG, MNQ, PSS}.
In \cite{MNQ}, an extension of the coherent state transform (CST) to unitary maps
from the spaces of $L^2$ functions on $M={\mathbb R}^m$ and on the $m$--dimensional torus, $M={\mathbb T}^m$,
to the spaces of square integrable monogenic functions on ${\mathbb R} \times M$ was studied.
We consider the cases when $M$ is an $m$--dimensional sphere, $M=S^m$, equipped with the
$SO(m+1, {\mathbb R})$--invariant metric of unit volume. These cases are a priori more complicated than
those studied before
as the
transform uses (for $m>1$) the Laplacian and the Dirac operators for the non--flat metrics
on the spheres. We show that there is a unique $SO(m+1, {\mathbb R})$ invariant measure
on ${\mathbb R} \times {\mathbb S}^m \cong {\mathbb R}^{m+1}\setminus \{0\}$ such that the natural Clifford CST (CCST)
is unitary. This transform is factorized into a contraction operator given by
heat operator evolution
at time $t=1$ followed by Cauchy-Kowalewsky (CK) extension, which
exactly compensates the contraction for our choice of measure on
${\mathbb R}^{m+1}\setminus \{0\}$.
In the usual coherent state Segal--Bargmann transforms \cite{Ba, Se, Ha1, Ha2, St, HM}, instead
of the CK extension to a manifold with
one more real dimension, one
considers the analytic continuation to a
complexification of the initial manifold (playing the role of phase space of the system).
The CCST is of interest in Quantum Field Theory as it establishes
natural unitary isomorphisms between
Hilbert spaces of solutions of the Dirac equation and one--particle Hilbert spaces in the Schr\"odinger representation. The standard CST, on the other hand,
studies the unitary equivalence of the Schr\"odinger representation
with special K\"ahler representations with the wave functions defined
on the phase space.
In section \ref{ss-32} we consider a one-parameter family of CCSTs, using heat operator evolution
at time $t>0$ followed by CK extension, and we show that, by changing
the measure on ${\mathbb R}^{m+1}\setminus \{0\}$ to a new Gaussian
(in the coordinate $\log(|\underline x|)$) measure $d\mu_t$, these transforms
are unitary. As $t$ approaches $0$ (so that the first factor in the
transform is contracting less than for higher values of $t$) the measures $d\mu_t$
become more concentrated around the radius $|\underline x|=1$ sphere
and as $t \to 0$, the measure $d\mu_t$ converges to
the measure
$$\delta(y) \, dy \, d\sigma_m \, ,
$$
where $y=\log(|\underline x|)$,
supported on ${\mathbb S}^m$.
\section{Clifford analysis}
\label{ss-ca}
Let us briefly recall from \cite{BDS, DSS, DS, LMQ, So, FQ, PQS, DQC} some definitions and results from Clifford analysis.
Let ${\mathbb R}_{m+1}$ denote the real Clifford algebra with $(m+1)$ generators, $ e_j, j = 1, \dots, m+1$, identified
with the canonical basis of ${\mathbb R}^{m+1} \subset {\mathbb R}_{m+1}$ and satisfying the relations
$e_i e_j + e_j \ e_i = - 2 \delta_{ij}$. Let ${\mathbb C}_{m+1}={\mathbb R}_{m+1}\otimes {\mathbb C}$.
We have that ${\mathbb R}_{m+1} = \oplus_{k=0}^{m+1} {\mathbb R}_{m+1}^k$,
where $ {\mathbb R}_{m+1}^k$ denotes the space of $k$-vectors, defined by ${\mathbb R}_{m+1}^0 = {\mathbb R}$ and ${\mathbb R}_{m+1}^k
= {\rm span}_{\mathbb R} \{ e_A \, : \, A \subset \{1, \dots , m+1\}, |A| = k\}$, where
$e_{i_1 \dots i_k} = e_{i_1} \dots e_{i_k}$.
Notice also that ${\mathbb R}_1 \cong {\mathbb C}$ and ${\mathbb R}_2 \cong {\mathbb H}$.
The inner product in ${\mathbb R}_{m+1}$ is defined by
$$
\langle u, v\rangle = \left(\sum_A u_A e_A , \sum_B v_B e_B\right) = \sum_A u_A v_A .
$$
The Dirac operator is defined as
$$
\underline D = \sum_{j=1}^{m+1} \, e_j \, \partial_{x_j} .
$$
We have that $\underline D^2 = - \Delta_{m+1}$.
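Indeed, by the defining relations of the generators,
$$
\underline D^2 = \sum_{i,j=1}^{m+1} e_i e_j \, \partial_{x_i} \partial_{x_j}
= \frac{1}{2} \sum_{i,j=1}^{m+1} \left(e_i e_j + e_j e_i\right) \partial_{x_i} \partial_{x_j}
= - \sum_{j=1}^{m+1} \partial_{x_j}^2 \, .
$$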
Consider the subspace of ${\mathbb R}_{m+1}$ of $1$-vectors
$$
\{\underline x = \sum_{j=1}^{m+1} x_j e_j: \,\, x=(x_1,\dots,x_{m+1})\in {\mathbb R}^{m+1}\}\cong {\mathbb R}^{m+1},
$$
which we identify with ${\mathbb R}^{m+1}$. Note that $\underline x^2 = - |\underline x|^2 = - (x, x).$
Recall that a continuously differentiable function $f$ on an open domain ${\cal O} \subset {\mathbb R}^{m+1}$,
with values on ${\mathbb C}_{m+1}$,
is called (left) monogenic on ${\cal O}$ if it satisfies the Dirac equation (see, for example, \cite{BDS,DSS,So})
\begin{equation}\nonumber
\underline D f(x) = \sum_{j=1}^{m+1} \, e_j \, \partial_{x_j} \, f(x) = 0.
\end{equation}
For $m=1$, monogenic functions on ${\mathbb R}^2$ correspond to holomorphic
functions of the complex variable $x_1+e_1e_2 \, x_2$.
The Cauchy kernel,
$$
E(x) = \frac{\overline{\underline{x}}}{|\underline x|^{m+1}},
$$
is a monogenic function on ${\mathbb R}^{m+1} \setminus \{0\}$. In the spherical coordinates,
$r = e^y= |\underline x|, \ \underline \xi= \frac{\underline x}{|\underline x|}$,
the Dirac operator reads
\begin{equation}
\label{e-sdo}
\underline D = \frac 1r \, \underline \xi \left(r \partial_{r} + \Gamma_{\underline \xi}\right) = e^{-y} {\underline \xi} \left(\partial_y + \Gamma_{\underline \xi}\right) ,
\end{equation}
where $\Gamma_{\underline \xi}$ is the spherical Dirac operator,
$$
\Gamma_{\underline \xi} = - \underline \xi \partial_{\un \xi} = - \sum_{i < j} \, e_{ij} \left(x_i \partial_{x_j} - x_j \partial_{x_i} \right).
$$
We see from (\ref{e-sdo}) that the equation for monogenic
functions in the spherical coordinates is,
on ${\mathbb R}^{m+1} \setminus \{0\}$, equivalent to
\begin{equation}
\label{e-mesc}
\underline D (f) = 0 \Leftrightarrow \partial_y f = -\Gamma_{\underline \xi}(f) \, , \qquad
r>0.
\end{equation}
The Laplacian $\Delta_x$ has the form
$$
\Delta_x = \partial_{r}^2 + \frac mr \partial_{r} + \frac{1}{r^2} \Delta_{\underline \xi},
$$
where $\Delta_{\underline \xi}$ is the Laplacian on the sphere (for the invariant metric).
The relation between the spherical Dirac operator and the spherical Laplace operator
is (see e.g. \cite{DSS}, (0.16) and section II.1)
\begin{equation}
\label{e-lop}
\Delta_{\underline \xi} = \left((m-1) I - \Gamma_{\un \xi}\right) \, \Gamma_{\un \xi}\,.
\end{equation}
Let ${\mathcal H}(m+1, k)$ denote the space of (${\mathbb C}_{m+1}$--valued) spherical harmonics of degree $k$. These are the eigenspaces
of the self--adjoint spherical Laplacian, $\Delta_{\underline \xi}$,
\begin{eqnarray} \label{e-loe}
\nonumber f & \in & {\mathcal H}(m+1, k) \\
\Delta_{\underline \xi} (f) &=& -k(k+m-1) f.
\end{eqnarray}
The spaces ${\mathcal H}(m+1, k)$ are a direct sum of eigenspaces of the self--adjoint spherical
Dirac operator
\begin{eqnarray}
\label{e-sdo1}
\nonumber {\mathcal H}(m+1, k) &=& {\mathcal M}^+(m+1, k) \, \oplus \, {\mathcal M}^-(m+1, k-1) \\
\Gamma_{\un \xi} (P_k(f)) &=& -k P_k(f) \\
\nonumber \Gamma_{\un \xi} (Q_l (f)) &=& (l+m) Q_l(f) \, , \quad f \in L^2({\mathbb S}^m, d\sigma_m)\otimes {\mathbb C}_{m+1},
\end{eqnarray}
where $P_k, Q_l$, denote the orthogonal projections on the subspaces ${\mathcal M}^+(m+1, k)$
and ${\mathcal M}^-(m+1, l)$ of $L^2({\mathbb S}^m, d\sigma_m)\otimes {\mathbb C}_{m+1}$.
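Note that \eqref{e-lop}, \eqref{e-loe} and \eqref{e-sdo1} are mutually consistent: on ${\mathcal M}^+(m+1, k)$ one has $\Gamma_{\un \xi} = -k$, so that
$$
\Delta_{\underline \xi} = \left((m-1)+k\right)(-k) = -k(k+m-1) \, ,
$$
while on ${\mathcal M}^-(m+1, l)$ one has $\Gamma_{\un \xi} = l+m$, so that $\Delta_{\underline \xi} = \left((m-1)-(l+m)\right)(l+m) = -(l+1)\left((l+1)+m-1\right)$, in agreement with ${\mathcal M}^-(m+1, l) \subset {\mathcal H}(m+1, l+1)$.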
The functions in ${\mathcal M}^+(m+1, k)$ and ${\mathcal M}^-(m+1, l)$
are in fact the restriction to ${\mathbb S}^m$ of (unique) monogenic functions
\begin{eqnarray} \label{e-ck1}
\nonumber \tilde P_k(f)(\underline x) &=& r^k \, P_k(f)\left(\frac{\underline x}{|\underline x|}\right) \\
\tilde Q_l(f)(\underline x) &=& r^{-(l+m)} \, Q_l(f)\left(\frac{\underline x}{|\underline x|}\right) , \quad f \in L^2({\mathbb S}^m, d\sigma_m)\otimes {\mathbb C}_{m+1}, k, l \in {\mathbb Z}_{\geq 0} ,
\end{eqnarray}
where, for all $f \in L^2({\mathbb S}^m, d\sigma_m)\otimes {\mathbb C}_{m+1}$, $\tilde P_k(f)$ are monogenic homogeneous polynomials of degree $k$
and $\tilde Q_l(f)$ are monogenic functions on ${\mathbb R}^{m+1}\setminus \{0\}$,
homogeneous of degree $-(l+m)$.
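Monogenicity of the extensions \eqref{e-ck1} follows directly from \eqref{e-mesc} and \eqref{e-sdo1}: writing $r^k = e^{ky}$,
$$
\partial_y \left( e^{ky} \, P_k(f)\right) = k \, e^{ky} \, P_k(f) = - \Gamma_{\underline \xi} \left( e^{ky} \, P_k(f)\right) \, ,
$$
and similarly for $\tilde Q_l(f)$, since $\Gamma_{\underline \xi} \left(Q_l(f)\right) = (l+m) \, Q_l(f)$ and $r^{-(l+m)} = e^{-(l+m)y}$.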
\section{Clifford Coherent State Transforms on Spheres.}
\label{s-3}
\subsection{CCST on spheres and its unitarity}
\begin{definition}
\label{d-am}
Let ${\mathcal A}(\So^m)$ be the space of analytic ${\mathbb C}_{m+1}$--valued functions on ${\mathbb S}^m$
with monogenic continuation to the whole of ${\mathbb R}^{m+1}\setminus \{0\}$.
\end{definition}
\begin{remark}
Let $V$ denote the space of finite linear combinations of spherical monogenics,
\begin{equation}
\label{e-ssm}
\nonumber V = {\rm span}_{\mathbb C} \left\{ P_k(f), Q_l(f), \, \, k, l \in {\mathbb Z}_{\geq 0} , f \in L^2({\mathbb S}^{m}, d\sigma_{m}) \otimes {\mathbb C}_{m+1} \right\}.
\end{equation}
We see from (\ref{e-ck1}) that $V \subset {\mathcal A}(\So^m)$. We will denote by $\tilde V$ the space of CK extensions of elements of
$V$ to ${\mathbb R}^{m+1}\setminus \{0\}$ (see (\ref{e-ck1})),
\begin{equation}
\label{e-ssm2}
\tilde V = {\rm span}_{\mathbb C} \left\{\tilde P_k(f), \tilde Q_l(f), \, \, k, l \in {\mathbb Z}_{\geq 0} , f \in L^2({\mathbb S}^{m}, d\sigma_{m}) \otimes {\mathbb C}_{m+1} \right\}.
\end{equation}
\end{remark}
In analogy with the case $m=1$ and also with the ``usual CST on spheres'',
introduced in \cite{St, HM},
we will introduce the CCST
\begin{eqnarray}
\label{e-cstm}
\nonumber U_{(m)} \, &:& \, L^2({\mathbb S}^{m}, d\sigma_{m}) \otimes {\mathbb C}_{m+1} \longrightarrow
{\mathcal M}L^2({\mathbb R}^{m+1}\setminus \{0\}, \tilde \rho_m \, d^{m+1}x) \\
U_{(m)} &=& CK_{{\mathbb S}^m} \circ e^{\Delta_{\underline \xi}/2} = e^{-y \Gamma_{\underline \xi}} \circ e^{\Delta_{\underline \xi}/2} \\
\nonumber U_{(m)}(f)(\underline x) &=& \int_{{\mathbb S}^m} \, \tilde K_1(\underline x, \underline \xi) \, f(\xi) \,
d \sigma_m \, ,
\end{eqnarray}
where $CK_{{\mathbb S}^m} \, : \, {\mathcal A}(\So^m) \longrightarrow {\mathcal M}({\mathbb R}^{m+1}\setminus \{0\})$ denotes the CK extension,
$K_1$ is the heat kernel on ${\mathbb S}^m$ at time $t=1$
and $\tilde K_1(\cdot , \xi)$ denotes the CK extension
of $K_1$ to
${\mathbb R}^{m+1}\setminus \{0\}$ in its first variable (see Lemma \ref{l-1}, (\ref{e-kex}) and (\ref{e-uut}) below).
Our goal is to find, if it exists, a function $\tilde \rho_m$ on
${\mathbb R}^{m+1}\setminus \{0\}$,
$$
\tilde \rho_m(\underline x) = \rho_m(y) , \, \qquad y = \log(|\underline x|)
$$
which makes the (well-defined) map in (\ref{e-cstm}) unitary.
For $m=1$ there is a unique positive answer to the above question
given by
$$
\rho_1(y) = \frac 1{\sqrt{\pi}} \, e^{-y^2-2y}
$$
so that
$$
\tilde \rho_1(\underline x) = \frac 1{\sqrt{\pi}} \, e^{-\log^2(|\underline x|)-2\log(|\underline x|)}.
$$
Our main result in the present paper is the following.
\begin{theorem}
\label{t-1}
The map $U_{(m)}$ in (\ref{e-cstm}) is a unitary isomorphism for
\begin{equation}
\label{e-rom}
\tilde \rho_m(\underline x) = \frac {e^{-\frac{(m-1)^2}{4}}}{\sqrt{\pi}} \, e^{-\log^2(|\underline x|)-2\log(|\underline x|)}.
\end{equation}
\end{theorem}
\begin{remark}
\label{r-1}
It is remarkable that the only dependence on $m$ of the corresponding function
$\rho_m(y)$ is in the constant multiplicative factor, $e^{-\frac{(m-1)^2}{4}}$; in particular, for $m=1$ the density (\ref{e-rom}) reduces to $\tilde \rho_1$ above.
\end{remark}
Given the factorized form of $U_{(m)}$ in (\ref{e-cstm}) we have the
diagram
\begin{align}
\label{d333}
\begin{gathered}
\xymatrix{
&& \mathcal{M}L^2 ({\mathbb R}^{m+1} \setminus \{0\}, \tilde \rho_m \, d^{m+1}x) \\
L^2({\mathbb S}^m, d \sigma_m)\otimes {\mathbb C}_{m+1} \ar@{^{(}->}[rr]_{e^{{\Delta_{\underline \xi}}/2}} \ar[rru]^{U_{(m)}} && {\mathcal A}(\So^m)
, \ar[u]_{CK_{{\mathbb S}^m} \, = \, e^{-y \Gamma_{\underline \xi}} }
}
\end{gathered}
\end{align}
We divide the proof of Theorem \ref{t-1} into several lemmas.
\begin{lemma}
\label{l-1}
Let $f \in {\mathcal A}(\So^m)$ and consider its Dirac operator spectral decomposition or, equivalently, its decomposition
in spherical monogenics,
\begin{equation}
\label{e-mdec}
f = \sum_{k \geq 0} \, P_k(f) + \sum_{k \geq 0} \, Q_k(f).
\end{equation}
Then its CK extension is given by
\begin{eqnarray}
\label{e-ck2}
\nonumber
CK_{\So^m}(f)(\underline x) &=&
\sum_{k \geq 0} \, \tilde P_k(f)(\underline x) + \sum_{k \geq 0} \, \tilde Q_k(f)(\underline x)\\
&=&
\sum_{k \geq 0} \, |\underline x|^k \, P_k(f)\left(\frac{\underline x}{|\underline x|}\right) + \sum_{k \geq 0} \, |\underline x|^{-(k+m)} \, Q_k(f)\left(\frac{\underline x}{|\underline x|}\right) \\
&=& e^{-y \Gamma_{\underline \xi}} (f) = |\underline x|^{-\Gamma_{\underline \xi}} (f) .
\nonumber
\end{eqnarray}
\end{lemma}
\begin{proof}
Since $f \in {\mathcal A}(\So^m)$, the first two lines on the right-hand side
of (\ref{e-ck2}) are the Laurent expansion of
$CK_{\So^m}(f)(\underline x) $ in spherical monogenics (see \cite{DSS}, Theorem 1,
p. 189), uniformly
convergent on compact subsets of
${\mathbb R}^{m+1} \setminus \{0\}$.
The third line on the right-hand side follows from
(\ref{e-sdo1}) and
the fact that $\Gamma_{\underline \xi}$ is
a self-adjoint operator.
\end{proof}
\begin{remark}
We thus see that, for $f \in {\mathcal A}(\So^m)$, the operator
of CK extension to ${\mathbb R}^{m+1} \setminus \{0\}$
is
$$
CK_{\So^m} = e^{-y \Gamma_{\underline \xi}} ,
$$
in agreement with (\ref{e-mesc}) and (\ref{e-cstm}).
\end{remark}
\begin{lemma}
\label{l-2}
Let $f \in L^2(\So^m, d\sigma_m)\otimes \C_{m+1}$ and consider its decomposition
in spherical monogenics,
\begin{equation}
\nonumber
f = \sum_{k \geq 0} \, P_k(f) + \sum_{k \geq 0} \, Q_k(f).
\end{equation}
Then the map
\begin{eqnarray*}
\label{e-cstm10}
U_{(m)} \, &:& \, L^2({\mathbb S}^{m}, d\sigma_{m}) \otimes {\mathbb C}_{m+1} \longrightarrow
{\mathcal M}({\mathbb R}^{m+1}\setminus \{0\})\\
U_{(m)} &=& CK_{{\mathbb S}^m} \circ e^{\Delta_{\underline \xi}/2}= e^{-y \Gamma_{\underline \xi}} \circ e^{\Delta_{\underline \xi}/2} \, ,
\end{eqnarray*}
where ${\mathcal M}(\Omega)$ denotes the space of monogenic functions
on the open set $\Omega \subset {\mathbb R}^{m+1}$,
is well defined and
\begin{eqnarray}
\label{e-ck3} \nonumber
U_{(m)}(f)(\underline x) &=& e^{-y \Gamma_{\underline \xi}} \circ e^{\Delta_{\underline \xi}/2}(f)(\underline x)=\\
\nonumber &=& \sum_{k \geq 0} \, e^{-k(k+m-1)/2} \, |x|^k \, P_k(f)\left(\frac{\underline x}
{|x|}\right) + \sum_{k \geq 0} \,
e^{-(k+1)(k+m)/2} \, |x|^{-(k+m)} \, Q_k(f)\left(\frac{\underline x}{|x|}\right) \\
&=& \int_{{\mathbb S}^m} \tilde K_1(\underline x, \underline \xi) \, f(\xi) \, d\sigma_m(\xi),
\end{eqnarray}
where $K_1$ denotes the heat kernel on ${\mathbb S}^m$ at time $t=1$ and $\tilde K_1$ is the CK extension
to ${\mathbb R}^{m+1} \setminus \{0\}$
of $K_1$ in its first variable.
\end{lemma}
\begin{proof}
From (\ref{e-lop}), (\ref{e-loe}) and (\ref{e-sdo}) we have
\begin{eqnarray*}
e^{\Delta_{\underline \xi}/2}(f) (\underline \eta) &=&
\sum_{k \geq 0} \, e^{-k(k+m-1)/2} \, P_k(f)(\underline \eta) + \sum_{k \geq 0} \,
e^{-(k+1)(k+m)/2} \, Q_k(f)(\underline \eta) \\
&=& \int_{{\mathbb S}^m} K_1(\underline \eta, \underline \xi) \, f(\underline \xi) \, d\sigma_m(\underline \xi) .
\end{eqnarray*}
From \cite{DSS,DQC} we obtain
\begin{equation}
\label{e-kex}
K_1(\underline \eta, \underline \xi) = \sum_{k \geq 0} \, e^{-k(k+m-1)/2} \left(C^+_{m+1, k}(\underline \eta, \underline \xi) +
C^-_{m+1, k-1}(\underline \eta, \underline \xi) \right) ,
\end{equation}
where $C^-_{m+1, -1} =0$,
\begin{eqnarray*}
C^+_{m+1, k}(\underline \eta, \underline \xi) &=& \frac 1{1-m} \left[-(m+k-1) \, C_k^{(m-1)/2}(<\underline \eta, \underline \xi>)+
(1-m) \, C_{k-1}^{(m+1)/2}(<\underline \eta, \underline \xi>) \, \underline \eta \wedge \underline \xi \right],\\
C^-_{m+1, k-1}(\underline \eta, \underline \xi) &=& \frac 1{m-1} \left[k \, C_k^{(m-1)/2}(<\underline \eta, \underline \xi>)+
(1-m) \, C_{k-1}^{(m+1)/2}(<\underline \eta, \underline \xi>) \, \underline \eta \wedge \underline \xi \right], \quad k \geq 1, \,
\end{eqnarray*}
$\underline \eta \wedge \underline \xi = \sum_{i<j} (\eta_i\xi_j-\eta_j\xi_i)e_{ij}$
and $C_k^\nu$ denotes the Gegenbauer polynomial of degree $k$ associated with $\nu$.
Now we prove that $K_1( \cdot , \xi) \in {\mathcal A}(\So^m)$ for every $\xi \in {\mathbb S}^{m}$.
From Lemma \ref{l-1} and (\ref{e-kex}) we conclude that if $K_1( \cdot , \xi) $ has a CK extension then its
Laurent series is given by
\begin{eqnarray}
\label{e-uut}
\tilde K_1(\underline x, \underline \xi) &=& \tilde K_1^+(\underline x, \underline \xi) + \tilde K_1^-(\underline x, \underline \xi) = \\
\nonumber &=& \sum_{k \geq 0} \, e^{-k(k+m-1)/2} \, |x|^k \, C^+_{m+1, k}\left(\frac{\underline x}{|x|}, \underline \xi\right) +
\sum_{k \geq 1} \, e^{-k(k+m-1)/2} \, |x|^{-(k+m-1)} \, C^-_{m+1, k-1}\left(\frac{\underline x}{|x|}, \underline \xi\right) .
\end{eqnarray}
Let us now show that this series is uniformly convergent in all compact subsets
of ${\mathbb R}^{m+1}\setminus \{0\}$.
From the explicit expressions for the degree $k$ Gegenbauer polynomials (see e.g. \cite{DSS}, p. 182)
\begin{equation}
\nonumber
C_k^{m/2}(<\underline \eta, \underline \xi>) =
\sum_{j=0}^{[k/2]} \, \frac{(-1)^j 2^{k-2j}(m/2)_{k-j}}{j! (k-2j)!} \, <\underline \eta, \underline \xi>^{k-2j},
\end{equation}
where $(a)_j = a (a+1) \cdots (a+j-1)$ is the Pochhammer symbol, we see that
$$
|C_k^{m/2}(<\underline \eta, \underline \xi>)| \leq \frac{(m+2k)!!}{(m-1)!!}\sum_{j=0}^{[k/2]} \frac{2^{-j}}{j!(k-2j)!}\leq \frac{(2k+m)!}{(m-1)!} \, ,
\quad \forall \eta, \xi \in {\mathbb S}^m .
$$
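(As an aside, the crude bound above is easy to check numerically; the following minimal script, with $m=3$ and the range of $k$ chosen purely for illustration and with NumPy/SciPy assumed available, verifies it.)
\begin{verbatim}
# Numerical sanity check of |C_k^{m/2}(t)| <= (2k+m)!/(m-1)! on [-1,1]
# (illustrative only; NumPy and SciPy assumed available).
import numpy as np
from math import factorial
from scipy.special import eval_gegenbauer

m = 3
t = np.linspace(-1.0, 1.0, 2001)
for k in range(1, 11):
    lhs = np.max(np.abs(eval_gegenbauer(k, m / 2.0, t)))
    assert lhs <= factorial(2 * k + m) / factorial(m - 1), k
print("bound verified for m =", m, "and k = 1, ..., 10")
\end{verbatim}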
Therefore
we obtain that
\begin{eqnarray}
\label{e-ineq}
\nonumber |C^+_{m+1, k}(\underline \eta, \underline \xi) | & \leq &
\frac{(2k+m-1)!}{(m-1)!} \, (k+m-1) \, + \, \frac{(2k+m-1)!}{m!} \, m (m-1) \\
&=& \frac{(2k+m-1)!}{(m-1)!} \, (k+2m-2) \, ,
\quad \forall \eta, \xi \in {\mathbb S}^m .
\end{eqnarray}
Let $s \in (0,1)$. From the Stirling formula and (\ref{e-ineq}) we conclude that there
exists $k_0 \in {\mathbb N}$ such that
$$
\nonumber |C^+_{m+1, k}(\underline \eta, \underline \xi) | \leq e^{s k(k+m-1)/2} \, ,
\quad \forall \eta, \xi \in {\mathbb S}^m, \forall k > k_0 ,
$$
and therefore
$$
e^{-k(k+m-1)/2} \, |C^+_{m+1, k}\left(\frac{\underline x}{|x|}, \underline \xi\right)| \leq
e^{-(1-s) k(k+m-1)/2}, \quad \forall \underline x \in {\mathbb R}^{m+1}\setminus\{0\}, \, \underline \xi \in {\mathbb S}^m, \forall k > k_0 .
$$
Then the series,
$$
\tilde K_1^+(\underline x, \underline \xi) = \sum_{k \geq 0} \, e^{-k(k+m-1)/2} \, |x|^k \, C^+_{m+1, k}\left(\frac{\underline x}{|x|}, \underline \xi\right) ,
$$
is uniformly convergent on all compact subsets of ${\mathbb R}^{m+1}$ and therefore its sum is monogenic on ${\mathbb R}^{m+1}$ in the first variable.
To prove that the second series in (\ref{e-uut}) is
uniformly convergent in compact subsets of
${\mathbb R}^{m+1}\setminus \{0\}$ we use
the fact that the inversion is an isomorphism between ${\mathcal M}({\mathbb R}^{m+1})$ and
${\mathcal M}_0({\mathbb R}^{m+1} \setminus \{0\})$
(see section 1.6.5 of \cite{DSS})
$$
f \mapsto If, \, If(\underline x) = \frac{\underline x}{|x|^{m+1}} \, f\left(\frac{\underline x}{|x|^{2}}\right) \, .
$$
It is then equivalent to prove that the
series
$$
\left((I\otimes {\rm Id}) (\tilde K_1^-)\right)(\underline x, \underline \xi) = \frac{\underline x}{|x|^{2}} \,
\sum_{k \geq 1} \, e^{-k(k+m-1)/2} \, |x|^{k} \, C^-_{m+1, k-1}\left(\frac{\underline x}{|x|}, \underline \xi\right) ,
$$
is uniformly convergent on compact subsets of ${\mathbb R}^{m+1}$.
But this is a direct consequence of the following inequalities for $|C^-_{m+1, k-1}(\underline \eta, \underline \xi)|$,
similar to the inequalities
(\ref{e-ineq})
for $|C^+_{m+1, k}(\underline \eta, \underline \xi)|$,
\begin{equation}
\label{e-ineq2}
|C^-_{m+1, k-1}(\underline \eta, \underline \xi) | \leq \frac{(2k+m-1)!}{(m-1)!} \, (k+m-1) \, ,
\quad \forall \eta, \xi \in {\mathbb S}^m \, .
\end{equation}
We have thus established that $\tilde K_1( \cdot , \xi) \in {\mathcal M}({\mathbb R}^{m+1} \setminus \{0 \}), \, \forall \xi \in {\mathbb S}^m$
with Laurent series given by (\ref{e-uut}). Analogously we can show that
$\tilde K_1( \cdot , \cdot) \in C^\infty({\mathbb R}^{m+1} \setminus \{0 \} \times {\mathbb S}^m) \otimes {\mathbb C}_{m+1}$.
From (\ref{e-ineq}) and (\ref{e-ineq2}), we also obtain,
\begin{eqnarray*}
|P_k(f)(\eta)| &=& \left|\int_{{\mathbb S}^m} \, C^+_{m+1, k}(\underline \eta, \underline \xi) \, f(\xi) \, d\sigma_m\right|
\leq \frac{(2k+m-1)!}{(m-1)!} \, (k+2m-2) \, ||f||, \, \\
|Q_{k-1}(f)(\eta)| &=& \left|\int_{{\mathbb S}^m} \, C^-_{m+1, k-1}(\underline \eta, \underline \xi) \, f(\xi) \, d\sigma_m\right|
\leq \frac{(2k+m-1)!}{(m-1)!} \, (k+m-1) \, ||f|| , \, \quad \forall \eta \in {\mathbb S}^m \, .
\end{eqnarray*}
As in the case of $\tilde K_1(\cdot , \xi)$,
these inequalities imply that, for every $f \in L^2({\mathbb S}^m, d\sigma_m) \otimes {\mathbb C}_{m+1}$,
the Laurent series for $U_{(m)}(f)$ in (\ref{e-ck3})
is uniformly convergent on compact subsets of ${\mathbb R}^{m+1}\setminus \{0\}$.
\end{proof}
\begin{lemma}
\label{l-3}
The map $U_{(m)}$ in (\ref{e-cstm}) and (\ref{e-ck3}) is an isometry for the measure factor
$\tilde \rho_m$ given by (\ref{e-rom}).
\end{lemma}
\begin{proof}
Given the $SO(m+1, {\mathbb R})$--invariance of the measures on ${\mathbb S}^m$ and on ${\mathbb R}^{m+1}\setminus \{0\}$ in (\ref{e-cstm}) and (\ref{e-rom}), both (\ref{e-mdec}) and (\ref{e-ck3}) are orthogonal decompositions,
so to prove isometricity of $U_{(m)}$ it is sufficient to prove
\begin{eqnarray}
\label{e-iso1}
\nonumber ||U_{(m)}(P_k(f))|| &=& ||P_k(f)||, \\
||U_{(m)}(Q_k(f))|| &=& ||Q_k(f)||,
\end{eqnarray}
for all $k \in {\mathbb Z}_{\geq 0}$ and $f \in L^2(\So^m, d\sigma_m)\otimes \C_{m+1}$.
We have
\begin{eqnarray*}
||U_{(m)}(P_k(f))||^2 &=& e^{-k(k+m-1)} \, \int_0^\infty \, r^{2k} \rho_m(\log(r)) r^m dr \, ||P_k(f)||^2 , \\
||U_{(m)}(Q_{k-1}(f))||^2 &=& e^{-k(k+m-1)} \, \int_0^\infty \, r^{-2(k-1+m)} \rho_m(\log(r)) r^m dr \, ||Q_{k-1}(f)||^2 \, ,
\end{eqnarray*}
and therefore isometricity is equivalent to the following two infinite systems of equations setting constraints
on the Laplace transform of the function $\rho_m(y)$. The system coming from the $P_k$ is
\begin{equation}
\label{e-const1}
\int_{\mathbb R} \, \rho_m(y) \, e^{y(2k+m+1)} \, dy = e^{k(k+m-1)} \, , \qquad k \in {\mathbb Z}_{\geq 0} ,
\end{equation}
and the system coming from the $Q_k$ is
\begin{equation}
\label{e-const2}
\int_{\mathbb R} \, \rho_m(y) \, e^{-y(2k+m-3)} \, dy = e^{k(k+m-1)} \, , \qquad k \in {\mathbb Z}_{\geq 0} .
\end{equation}
It is easy to verify that the function $\rho_m$ corresponding to $\tilde \rho_m$
in (\ref{e-rom})
$$
\rho_m(y) = \frac {e^{-\frac{(m-1)^2}{4}}}{\sqrt{\pi}} \, e^{-y^2-2y}
$$
satisfies both (\ref{e-const1}) and (\ref{e-const2}).
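Indeed, completing the square in the Gaussian integral gives, for any $a \in {\mathbb R}$,
$$
\int_{\mathbb R} \frac{e^{-\frac{(m-1)^2}{4}}}{\sqrt{\pi}} \, e^{-y^2-2y} \, e^{ay} \, dy = e^{\frac{(a-2)^2-(m-1)^2}{4}} \, ,
$$
and both choices $a = 2k+m+1$ (for (\ref{e-const1})) and $a = -(2k+m-3)$ (for (\ref{e-const2})) give $(a-2)^2 = (2k+m-1)^2$, whence the common value $e^{\frac{(2k+m-1)^2-(m-1)^2}{4}} = e^{k(k+m-1)}$.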
\end{proof}
\begin{remark}
\label{r-b1}
Notice that each of the two systems (\ref{e-const1}) and
(\ref{e-const2})
determines
$\rho_m$ uniquely so that it is remarkable that they both give the same solution.
\end{remark}
\begin{proof} (of Theorem \ref{t-1}). From Lemmas \ref{l-1}, \ref{l-2} and \ref{l-3} we
see that the only missing part is the surjectivity of $U_{(m)}$. But this follows from the fact that
the space $\tilde V$ in (\ref{e-ssm2})
is dense, with respect to uniform convergence
on compact subsets, in the space of monogenic functions on ${\mathbb R}^{m+1}\setminus \{0\}$, and is therefore also
dense
in ${\mathcal M}L^2({\mathbb R}^{m+1}\setminus \{0\}, \tilde \rho_m \, d^{m+1}x)$, since this measure is finite. Since
the image of an isometric map is closed and the
image of $U_{(m)}$ contains $\tilde V$ we conclude that
$U_{(m)}$ is surjective.
\end{proof}
As we mentioned in the introduction, the mechanism behind the unitarity of the
CST, $U_{(m)}$, is its
factorization into a contraction, given by the
heat operator evolution
at time $t=1$, followed by the Cauchy-Kowalewsky (CK) extension,
which
exactly compensates the contraction, given our choice of measure on
${\mathbb R}^{m+1}\setminus \{0\}$.
\subsection{One-parameter family of unitary transforms}
\label{ss-32}
In this subsection we consider a one-parameter family of transforms, using heat operator evolution
at time $t>0$ followed by CK extension. We show that, by changing
the measure on ${\mathbb R}^{m+1}\setminus \{0\}$ to a new Gaussian
(in the coordinate $\log(|x|)$) measure $$d\mu_t =
\tilde \rho^t_m \, d^{m+1}x,$$ these transforms
are unitary.
Thus we consider the transforms
\begin{eqnarray}
\label{e-cstmb}
\nonumber U^t_{(m)} \, &:& \, L^2({\mathbb S}^{m}, d\sigma_{m}) \otimes {\mathbb C}_{m+1} \longrightarrow
{\mathcal M}L^2({\mathbb R}^{m+1}\setminus \{0\}, \tilde \rho^t_m \, d^{m+1}x) \\
U^t_{(m)} &=& CK_{{\mathbb S}^m} \circ e^{t\Delta_{\underline \xi}/2}
= e^{-y\Gamma_{\underline \xi}} \circ e^{t\Delta_{\underline \xi}/2} \\
\nonumber U^t_{(m)}(f)(\underline x) &=& \int_{{\mathbb S}^m} \, \tilde K_t(\underline x, \underline \xi) \, f(\xi) \,
d \sigma_m \, ,
\end{eqnarray}
where
$\tilde K_t(\cdot , \xi)$ denotes the CK extension
of $K_t$ to
${\mathbb R}^{m+1}\setminus \{0\}$ in its first variable.
Our goal is to find, for every $t>0$ and if it exists, a function $\tilde \rho^t_m$ on
${\mathbb R}^{m+1}\setminus \{0\}$,
$$
\tilde \rho^t_m(\underline x) = \rho^t_m(y) , \, \qquad y = \log(|\underline x|) ,
$$
which makes the (well-defined) map in (\ref{e-cstmb}) unitary.
Again, for $m=1$, there is a unique positive answer to the above question
given by
$$
\rho_1^t(y) = \frac 1{\sqrt{t\pi}} \, e^{- \frac{y^2}t -2y}
$$
so that
$$
\tilde \rho^t_1(\underline x) = \frac 1{\sqrt{t\pi}} \, e^{- \frac 1t \log^2(|x|)-2\log(|x|)}.
$$
We then have
\begin{theorem}
\label{t-1b}
The map $U^t_{(m)}$ in (\ref{e-cstmb}) is a unitary isomorphism for
\begin{equation}
\label{e-romb}
\tilde \rho^t_m(\underline x) = \frac {e^{-\frac{t(m-1)^2}{4}}}{\sqrt{t\pi}} \, e^{- \frac 1t \log^2(|x|)-2\log(|x|)}.
\end{equation}
\end{theorem}
Given the factorized form of $U^t_{(m)}$ in (\ref{e-cstmb}) we have the
diagram
\begin{align}
\label{d333b}
\begin{gathered}
\xymatrix{
&& \mathcal{M}L^2 ({\mathbb R}^{m+1} \setminus \{0\}, \tilde \rho^t_m \, d^{m+1}x) \\
L^2({\mathbb S}^m, d \sigma_m)\otimes {\mathbb C}_{m+1} \ar@{^{(}->}[rr]_{e^{{t\Delta_{\underline \xi}}/2}} \ar[rru]^{U^t_{(m)}} && {\mathcal A}(\So^m)
, \ar[u]_{CK_{{\mathbb S}^m} \, = \, e^{-y \Gamma_{\underline \xi}}}
}
\end{gathered}
\end{align}
Again we divide the proof of Theorem \ref{t-1b} into several lemmas.
Notice, however, that Lemma \ref{l-1} remains unchanged.
\begin{lemma}
\label{l-2b}
Let $f \in L^2(\So^m, d\sigma_m)\otimes \C_{m+1}$ and consider its decomposition
in spherical monogenics,
\begin{equation}
\nonumber
f = \sum_{k \geq 0} \, P_k(f) + \sum_{k \geq 0} \, Q_k(f).
\end{equation}
Then the map
\begin{eqnarray*}
\label{e-cstm10t}
U^t_{(m)} \, &:& \, L^2({\mathbb S}^{m}, d\sigma_{m}) \otimes {\mathbb C}_{m+1} \longrightarrow
{\mathcal M}({\mathbb R}^{m+1}\setminus \{0\})\\
U^t_{(m)} &=& CK_{{\mathbb S}^m} \circ e^{t\Delta_{\underline \xi}/2}= e^{-y \Gamma_{\underline \xi}} \circ e^{t\Delta_{\underline \xi}/2} \, ,
\end{eqnarray*}
is well defined and
\begin{eqnarray}
\label{e-ck3b} \nonumber
U^t_{(m)}(f)(\underline x) &=& e^{-y \Gamma_{\underline \xi}} \circ e^{t\Delta_{\underline \xi}/2}(f)(\underline x)=\\
\nonumber &=& \sum_{k \geq 0} \, e^{-tk(k+m-1)/2} \, |x|^k \, P_k(f)\left(\frac{\underline x}
{|x|}\right) + \sum_{k \geq 0} \,
e^{-t(k+1)(k+m)/2} \, |x|^{-(k+m)} \, Q_k(f)\left(\frac{\underline x}{|x|}\right) \\
&=& \int_{{\mathbb S}^m} \tilde K_t(\underline x, \underline \xi) \, f(\xi) \, d\sigma_m(\xi),
\end{eqnarray}
where $\tilde K_t$ is the CK extension
to ${\mathbb R}^{m+1} \setminus \{0\}$
of $K_t$ in its first variable.
\end{lemma}
\begin{proof} The proof is identical to the proof of Lemma \ref{l-2}.
The Gaussian form (in $k$) of the coefficients
coming from $e^{t\Delta_{\underline \xi}/2}$
and the inequalities
(\ref{e-ineq}), (\ref{e-ineq2}) again imply that
$\tilde K_t(\cdot , \xi)$ and $U^t_{(m)}(f)$ are monogenic on
${\mathbb R}^{m+1}\setminus \{0\}$ and their
Laurent
series are given by
\begin{eqnarray}
\label{e-uutb}
\tilde K_t(\underline x, \underline \xi) &=& \tilde K_t^+(\underline x, \underline \xi) + \tilde K_t^-(\underline x, \underline \xi) \\
\nonumber &=& \sum_{k \geq 0} \, e^{-tk(k+m-1)/2} \, |x|^k \, C^+_{m+1, k}\left(\frac{\underline x}{|x|}, \underline \xi\right) +
\sum_{k \geq 1} \, e^{-tk(k+m-1)/2} \, |x|^{-(k+m-1)} \, C^-_{m+1, k-1}\left(\frac{\underline x}{|x|}, \underline \xi\right) ,
\end{eqnarray}
and by (\ref{e-ck3b}).
\end{proof}
\begin{lemma}
\label{l-3b}
The map $U^t_{(m)}$ in (\ref{e-cstmb}) and (\ref{e-ck3b}) is an isometry for the measure factor
$\tilde \rho^t_m$ given by (\ref{e-romb}).
\end{lemma}
\begin{proof}
Given the $SO(m+1, {\mathbb R})$--invariance of the measures on ${\mathbb S}^m$ and on ${\mathbb R}^{m+1}\setminus \{0\}$ in (\ref{e-cstmb}) and (\ref{e-romb})
we see that to prove isometricity of $U^t_{(m)}$ it is sufficient to prove
\begin{eqnarray}
\label{e-iso1b}
\nonumber ||U^t_{(m)}(P_k(f))|| &=& ||P_k(f)||, \\
||U^t_{(m)}(Q_k(f))|| &=& ||Q_k(f)||,
\end{eqnarray}
for all $k \in {\mathbb Z}_{\geq 0}$ and $f \in L^2(\So^m, d\sigma_m)\otimes \C_{m+1}$.
Again, isometricity is equivalent to the following two infinite systems of equations setting constraints
on the Laplace transform of the functions $\rho^t_m(y)$. The system coming from the $P_k$ is
\begin{equation}
\label{e-const1b}
\int_{\mathbb R} \, \rho^t_m(y) \, e^{y(2k+m+1)} \, dy = e^{tk(k+m-1)} \, , \qquad k \in {\mathbb Z}_{\geq 0} ,
\end{equation}
and the system coming from the $Q_k$ is
\begin{equation}
\label{e-const2b}
\int_{\mathbb R} \, \rho^t_m(y) \, e^{-y(2k+m-3)} \, dy = e^{tk(k+m-1)} \, , \qquad k \in {\mathbb Z}_{\geq 0} .
\end{equation}
It is easy to verify that the function $\rho^t_m$ corresponding to $\tilde \rho^t_m$ in (\ref{e-romb})
satisfies both (\ref{e-const1b}) and (\ref{e-const2b}).
\end{proof}
\begin{proof}
The proof of Theorem \ref{t-1b} is completed exactly
as that of Theorem \ref{t-1}, so we omit the details.
\end{proof}
\newpage
{\bf \large{Acknowledgements:}}
The authors were partially
supported by Macau Government FDCT through the project 099/2014/A2,
{\it Two related topics in Clifford analysis}, by the Macao Science and Technology
Development Fund, MSAR, Ref. 045/2015/A2
and by the University of Macau Research Grant MYRG115(Y1-L4)-FST13-QT.
The authors
JM and JPN were also partly supported by
FCT/Portugal through the projects UID/MAT/04459/2013 and PTDC/MAT-GEO/3319/2014.
\section{Introduction}
The problem of quickest change detection aims to detect an abrupt change in stochastic processes as soon as possible. This arises in a wide range of applications including quality control engineering \cite{lai1995sequential}, finance \cite{shiryaev2002quickest}, computer security \cite{thottan2003anomaly, cardenas2009evaluation}, condition monitoring \cite{rice2010flexible} and cognitive radio networks \cite{jayaprakasam2009sequential}. In these applications, a change of the underlying distribution usually indicates that the event we are interested in occurs, and in order to take actions, we need to detect such an occurrence as soon as possible. Generally speaking, the design of quickest change detection procedures mainly involves two performance indices: detection delay and false detection. Usually, one seeks the detection procedure that minimizes the detection delay subject to a certain false detection constraint. The mathematical characterization of these two indices and the model assumptions distinguish two problem formulations: the Bayesian one due to Shiryaev \cite{Shiryaev1963, shiryaev2007optimal} and the minimax one due to Lorden \cite{lorden1971procedures} and Pollak \cite{pollak1985optimal}. Strictly optimal or asymptotically optimal detection procedures have been established for these problems; see the surveys \cite{polunchenko2012state, veeravalli2012quickest}.
Quickest change detection with wireless sensor networks (WSNs) has attracted recent interest \cite{Veeravalli2001, mei2011quickest, banerjee2012dataBayesian, banerjee2012data, Geng2013}. The reasons are twofold: first, WSNs possess good inherent properties, such as flexibility and robustness; second, quickest change detection procedures can be carried out with WSNs in many applications, including infrastructure health monitoring, habitat monitoring and intrusion detection. The limited resources (limited energy for battery-powered sensor nodes and limited bandwidth for communication channels) associated with WSNs, however, bring new challenges. In the classical setting of quickest change detection, it is assumed that the decision maker can access the observation at each time instant and that sampling is free, whereas for detection with WSNs one has to take the energy and bandwidth constraints into consideration, and the decision maker usually can only access part of the observations. In summary, quickest change detection with WSNs needs to deal with trade-offs among three performance indices: detection delay, false detection and energy cost.
There have been several proposals on quickest change detection with energy constraints; see \cite{premkumar2008optimal, banerjee2012dataBayesian, banerjee2012data}. The existing approaches share the following two features. One is that the energy constraint is characterized by the sampling cost, i.e., the number of observations made. To meet this energy constraint, observations are taken only at certain time instants. The other is that the decision whether or not to take a sample is made at the fusion center based on the detection statistic.
To cope with the energy constraint, we consider a ``censoring'' strategy at the sensor nodes instead. Specifically, the sensor nodes take observations at each time instant, but only send those that are deemed ``informative'' to the decision center. The benefits of adopting a censoring strategy at the sensor nodes are summarized as follows.
On one hand, in most practical applications, the energy consumption of sensing is negligible compared with that of communication \cite{dargie2010fundamentals}, so it is effective to reduce the total energy consumption by reducing the number of communications. On the other hand, if the decision whether or not to take observations is made based on the detection statistic at the fusion center, feedback information from the center to the sensor nodes will be needed, which causes additional energy consumption and bandwidth cost.
In this paper, the minimax problem formulation of quickest change detection with a censoring sensor node is studied. As in the classical minimax formulation, we consider both Lorden's and Pollak's problems and seek the optimal censoring strategy and stopping time such that the detection delay is minimized subject to constraints on both the average run length (ARL) and the average energy cost before the change. The main contributions of our work are summarized as follows.
\begin{enumerate}
\item To the best of our knowledge, this paper is the first work that studies the minimax formulation of quickest change detection with a sensor that adopts censoring strategy. It is shown in our numerical example that the censoring strategy provides a very good trade-off between the detection performance and the energy constraint.
\item We show that the censoring strategy that has the maximal post-censoring Kullback-Leibler (K-L) divergence, coupled with the Cumulative Sum (CuSum) and Shiryaev-Roberts-Pollak (SRP) detection procedures, is asymptotically optimal for Lorden's and Pollak's problems, respectively, as the ARL goes to infinity. (Theorem \ref{Theorem:AsymOptCuSum} and Theorem \ref{Theorem:AsymOptSRP})
\item In general, finding the asymptotically optimal censoring strategy that maximizes the post-censoring K-L divergence can only be done numerically. The computational burden of searching over the whole admissible class is huge, especially when the dimension of the observation is high. To alleviate the computational burden, we provide two necessary conditions on the asymptotically optimal censoring strategy. One is that it should use up the available energy (Theorem \ref{Theorem:EqualGreaterKL}), the other is that it has a very special structure, i.e., the likelihood ratio of the \emph{no send} region is a single interval (Theorem \ref{Theorem:likelihoodratio}).
\end{enumerate}
The related literature is summarized as follows. The idea of detection with a censoring strategy was introduced in \cite{rago1996censoring} and later studied in \cite{appadwedula2008decentralized} and \cite{Tay2007}. The main result of \cite{rago1996censoring} and \cite{appadwedula2008decentralized} is that the likelihood ratio of the censoring region is a single interval under several different performance indices. The asymptotic detection performance of large-scale censoring networks is studied in \cite{Tay2007}. Premkumar and Kumar \cite{premkumar2008optimal} considered Bayesian quickest change detection with sleep/awake control of the sensor nodes. The energy constraint is formulated as the average number of sensors used and the problem is solved by formulating it as an infinite-horizon Markovian decision process problem. Banerjee and Veeravalli \cite{banerjee2012dataBayesian} studied a similar problem but with one sensor node. The authors provided an asymptotically optimal low-complexity stopping rule. The same authors studied the minimax problem with the same energy formulation in \cite{banerjee2012data}. They proposed a heuristic ``DE-CuSum'' (``data-efficient'' CuSum) algorithm and proved that the algorithm is asymptotically optimal. Geng and Lai \cite{Geng2013} studied minimax quickest change detection with a sensor that can be recharged with energy harvesting techniques. The authors proposed a very simple asymptotically optimal power allocation scheme.
The remainder of this paper is organized as follows. The mathematical model of the considered problem is given in Section \ref{Section:problem setup}. In Section \ref{Section:Main results}, we show the main results of this paper. First we prove that the asymptotically optimal censoring strategy for both Lorden's and Pollak's problem is the one that has the maximal post-censoring K-L divergence. Then two properties of the asymptotically optimal censoring strategy are shown, i.e., that it uses up the available energy and the likelihood ratio of the \emph{no send} region is a single interval. Numerical examples are given in Section \ref{Section:Numerical Example} to illustrate the main results. Some concluding remarks are presented in the end.
\textit{Notations}: $\mathbb{N}$, $\mathbb{N}_{+}$, $\mathbb{R}$, $\mathbb{R}_{+}$ and $\mathbb{R}_{++}$ are the set of non-negative integers, positive integers, real numbers, non-negative real numbers and positive real numbers, respectively. $k\in \mathbb{N}$ is the time index.
$\mathbf{1}_{A}$ represents the indicator function that takes value $1$ on the set $A$ and $0$ otherwise.
$\times$ stands for the Cartesian product and $\text{Pr}(\cdot)$ denotes the probability.
\section{Problem Setup} \label{Section:problem setup}
We consider the optimal censoring strategy and detection scheme for a wireless sensor system that adopts a censoring strategy. A remote sensor is deployed to take observations from the environment at each time instant and selectively send them to a center that sequentially decides whether to continue or to declare a change of the monitored environment. By ``selectively'', we mean that due to limited resources (e.g., energy and bandwidth), the sensor cannot send its observations all the time.
To make the most of the limited energy and achieve better detection performance, it is natural to come up with a strategy that only sends ``informative'' data and discards the ``less informative'' data, which is the concept of censoring.
For the detection structure, we consider the scenario corresponding to case A in \cite{veeravalli2001decentralized}, where the sensor has no local memory and does not receive feedback from the center, i.e., the sensor decides whether or not to communicate with the center based only on its current observation; see Fig. \ref{Fig:topology}. The reasons why we consider the scenario where only one sensor is used are as follows. On one hand, if we assume the observations are independent across the different sensors, the multiple-sensor case can easily be reduced to the one-sensor case for the problem we study, so for brevity of presentation, the one-sensor case is preferred. On the other hand, if the observations taken from different sensors may be correlated, the problem of quickest change detection with energy constraints will be much more complicated, which we shall leave to future work. In fact, most related literature studies the one-sensor case, such as \cite{banerjee2012dataBayesian, banerjee2012data, Geng2013, geng2013bayesian}. In the following, we introduce the mathematical problem.
\begin{figure}
\centering
\includegraphics[width=2.5in]{blockdiagram}\vspace{-2mm}
\caption{ Block diagram of the quickest change detection system. } \label{Fig:topology}
\vspace{-4mm}
\end{figure}
\subsection{Classical Minimax Quickest Change Detection}
Let $X_k$ be the observation at time $k$. Along the time horizon, a sequence of observations $\{X_k\}_{k\in\mathbb{N}_{+}}$ about the monitored environment is taken locally at the sensor. Let $\{X_k\}_{k\in\mathbb{N}_{+}}$ be a sequence of random variables defined on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Assume that before an unknown but nonrandom time instant $\nu$ ($\nu$ may be $\infty$, in which case the change never happens), the environment is ``in control''; the change event of interest happens at $\nu$, after which the system is ``out of control'' and some measures must be taken as soon as possible. Specifically, the observations at the sensor before $\nu$, $X_1,X_2,\ldots,X_{\nu-1}$, denoted by $X_1^{\nu-1}$, are i.i.d. with measure $\mathbb{P}_{\infty}$, and $X_{\nu}^{\infty}$ are i.i.d. with measure $\mathbb{P}_{1}$.
Note that $\mathbb{P}_{\nu}$ denotes the probability measure when the change happens at $\nu$. If there is no change, we denote this measure by $\mathbb{P}_{\infty}$. The expectations $\mathbb{E}_{\nu}$ and $\mathbb{E}_{\infty}$ are defined accordingly. We assume $\mathbb{P}_{\infty}$ and $\mathbb{P}_1$ are mutually locally absolutely continuous measures.
Define the filtration $\{\mathcal{F}_k\}_{k\in\mathbb{N}}$ induced by the observations $\{X_k\}_{k\in\mathbb{N}_{+}}$ as
\[\mathcal{F}_k\triangleq\sigma(X_1,X_2,\ldots,X_k), \forall k\in\mathbb{N}_{+}, \]
with the understanding that $\mathcal{F}_0$ is the trivial $\sigma$-algebra. Note that $\mathcal{F}=\vee_{k\geq 0} \mathcal{F}_k$. A random variable $T\in\mathbb{N}_{+}$ is called a stopping time if the event $\{T=k\}\in \mathcal{F}_k, \forall k\in \mathbb{N}_{+}$. The objective of classical quickest change detection is to find a stopping time $T$ that detects the change event \emph{as quickly as possible}, subject to the \emph{risk of false detection} \cite{polunchenko2012state}. For the different problem formulations (e.g., Bayesian and minimax), there are different criteria for measuring ``\emph{as quickly as possible}'' and ``\emph{risk of false detection}''.
In the minimax formulation of quickest change detection, one aims to minimize a worst-case detection delay. In the minimax setting, the risk of false detection is measured by the average run length (ARL) to false alarm $\mathbb{E}_{\infty}[T]$ \cite{lorden1971procedures}.
As the detection delay is concerned, there are mainly two criteria: Lorden's ``worst-worst case" detection delay \cite{lorden1971procedures} and Pollak's ``worst case" conditional average delay \cite{pollak1985optimal}. Given a stopping time $T$, the associated Lorden's detection delay is defined by
\begin{align}
\mathcal{D}_L(T) \triangleq \sup_{0\leq\nu<\infty} \big\{ \esssup \mathbb{E}_{\nu}[(T-\nu)^+|\mathcal{F}_{\nu}] \big\},
\end{align}
where $(T-\nu)^+ = (T-\nu)\mathbf{1}_{\{(T-\nu)\geq0\}}$. Pollak's detection delay is defined by
\begin{align}
\mathcal{D}_P(T) \triangleq \sup_{0\leq\nu<\infty} \mathbb{E}_{\nu}[(T-\nu)^+|T>\nu].
\end{align}
Denote by $\mathcal{C}_{\gamma}$ the class of stopping times that has the ARL lower bounded by $\gamma \geq 1$, i.e.,
\[\mathcal{C}_{\gamma} \triangleq \{T:\mathbb{E}_{\infty}[T]\geq \gamma\}.\]
Lorden's minimax optimization problem is to find the optimal stopping time $T^*$ such that
\begin{align}
\mathcal{D}_L(T^*)=\inf_{T\in\mathcal{C}_{\gamma}}\mathcal{D}_L(T), \:\text{for every}\: \gamma \geq 1.
\end{align}
Similarly, the Pollak's minimax optimization problem aims to find the optimal stopping time $T^*$ such that
\begin{align}
\mathcal{D}_P(T^*)=\inf_{T\in\mathcal{C}_{\gamma}}\mathcal{D}_P(T), \:\text{for every}\: \gamma \geq 1.
\end{align}
Lorden \cite{lorden1971procedures} proved that Page's CuSum detection procedure \cite{page1954continuous} is first-order asymptotically optimal as $\gamma \to \infty$. Later, Moustakides \cite{moustakides1986optimal} and Ritov \cite{ritov1990decision} proved that the CuSum procedure is in fact strictly optimal for any $\gamma>1$. The strictly optimal detection procedure for Pollak's problem remains an open problem. In the regime of asymptotic optimality, it has been proved that the CuSum algorithm is first-order asymptotically optimal \cite{lai1998information} and some variants of the Shiryaev-Roberts (SR) procedure (e.g., the SR-$r$ and SRP procedures) are third-order asymptotically optimal \cite{tartakovsky2012third}.
\subsection{Minimax Quickest Change Detection with Energy Constraint}
The classical minimax quickest change detection formulations do not consider the cost of sending observations, which is an important issue when a wireless sensor is used. In this paper, we take the energy constraint into account and study a variation of the classical minimax quickest change detection problem.
We consider a censoring strategy at the sensor's side. After taking an observation at each time instant, the sensor needs to decide whether or not to send it to the decision maker. Assume that each communication between the sensor and the center costs the sensor $1$ unit of energy. Let $\mu_k$ be the binary-valued censoring rule at time $k$ and $\vec{\mu}=\{\mu_1,\ldots,\mu_k,\ldots\}$ be the decision policy.
Note that since it is assumed that there is no feedback or memory at the sensor, the decision is made based only on the current observation,
i.e.,
\[\theta_k=\mu_k(X_k),\]
where $\theta_k = 1$ means that $X_k$ is sent to the center and $\theta_k = 0$ means that it is censored.
We consider stationary policies of the form $\vec{\mu}=\{\mu,\ldots,\mu,\ldots\}$. The reasons why we consider stationary policies are as follows. First, a stationary policy facilitates local processing and strategy implementation at the sensor nodes. Second, compared with time-varying policies, a stationary policy is easier to analyze, which can provide insights into the benefits of censoring strategies for quickest change detection. Third, a reasonable time-varying policy is one where the censoring strategy varies with the detection statistic; then the observations available at the center are correlated. We leave such complicated cases to future work. As $\vec{\mu}$ is fully specified by $\mu$, in the sequel we will use these two notations interchangeably.
We pose the energy constraint by
\begin{align}
\mathbb{E}_{\infty}[\mu(X_1)] \leq \epsilon, \label{Eqn:energyconstraint}
\end{align}
where
$0 < \epsilon \leq 1$ is an upper bound on the average number of energy units used per time instant before the change happens.
We ignore the energy cost constraint after the change event as the cost of the detection system still working after the change is already penalized by the detection delay. The parameter $\epsilon$ can be tuned to achieve desired trade-off between energy cost and detection performance.
Note that since the censoring strategy is stationary and $X_1^{\infty}$ are i.i.d under $\mathbb{P}_{\infty}$, the above energy constraint can be rewritten as
\begin{align*}
\mathbb{E}_{\infty}[\mu(X_k)] \leq \epsilon, \, \forall k\geq 1.
\end{align*}
Such an energy constraint naturally arises in wireless sensor networks and similar ones have been studied in \cite{banerjee2012data}.
In particular, the above energy constraint is equivalent to the pre-change duty cycle (PDC) introduced in \cite{banerjee2012data}. Like PDC, the energy constraint defined in \eqref{Eqn:energyconstraint} can be rewritten by
\begin{align}
\limsup_{n}\frac{1}{n}\mathbb{E}_{\infty} \left[ \sum_{k=1}^{n-1} \theta_k \right] \leq \epsilon.
\end{align}
To cope with this constraint, we adopt a censoring strategy, whereas \cite{banerjee2012data} resorts to the detection procedure and proposes a so-called ``DE-CuSum'' algorithm to detect the change event.
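As a simple illustration (the Gaussian model here is assumed purely for concreteness), if $X_k \sim \mathcal{N}(0,1)$ under $\mathbb{P}_{\infty}$ and the censoring rule is $\mu(x) = \mathbf{1}_{\{|x| > c\}}$, then
\[\mathbb{E}_{\infty}[\mu(X_1)] = 2\left(1-\Phi(c)\right),\]
where $\Phi$ is the standard normal cdf, so the constraint \eqref{Eqn:energyconstraint} is met if and only if $c \geq \Phi^{-1}(1-\epsilon/2)$.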
Let us illustrate with a simple example why the quickest change detection problem with energy constraint is interesting also from a practical viewpoint. Consider the structural health monitoring of a bridge \cite{rice2010flexible}. Wireless battery-powered sensor nodes are deployed to monitor the bridge. Based on the collected data, a person in charge aims to detect any abnormal condition (e.g., a small crack in the bridge) as soon as possible to take suitable actions. The false alarm rate should be as small as possible, as unnecessary actions are costly. Note that the reciprocal of the ARL is connected to the false alarm rate. The energy constraint \eqref{Eqn:energyconstraint} is natural as the battery-powered sensors might be difficult and costly to recharge. For the overall system design, the energy constraint can be viewed as a design parameter, which should be considered together with the likelihood of an abnormal event and the false alarm rate.
Define the filtration $\{\mathcal{F}^{\mu}_k\}_{k\in\mathbb{N}}$ induced by the observations $\{X_k\}_{k\in\mathbb{N}_{+}}$ and the decision rule $\mu$ as
\[\mathcal{F}^{\mu}_k \triangleq \sigma(\theta_1X_1,\ldots,\theta_kX_k,\theta_1,\ldots,\theta_k), \forall k\in\mathbb{N}_{+}, \]
with the understanding that $\mathcal{F}^{\mu}_0$ is the trivial $\sigma$-algebra.
Similar to the classical setup, we define the following quantities:
\begin{align}
\mathcal{D}^{\mu}_L(T) \triangleq & \sup_{0\leq\nu<\infty} \big\{ \esssup \mathbb{E}^{\mu}_{\nu}[(T-\nu)^+|\mathcal{F}^{\mu}_{\nu}] \big\}, \label{Eqn:DetectionDelayLorden} \\
\mathcal{D}^{\mu}_P(T) \triangleq & \sup_{0\leq\nu<\infty} \mathbb{E}^{\mu}_{\nu}[(T-\nu)^+|T>\nu],\\
\mathcal{C}^{\mu}_{\gamma}\triangleq & \, \{T:\mathbb{E}^{\mu}_{\infty}[T]\geq \gamma\},
\end{align}
where $\mathbb{E}^{\mu}_{\nu}$ is the expectation when the change event happens at time $\nu$ and the decision policy $\mu$ is adopted; $\mathbb{E}^{\mu}_{\infty}$, $\mathbb{P}^{\mu}_{\nu}$ and $\mathbb{P}^{\mu}_{\infty}$ are defined similarly. Denote by $\mathcal{U}_{\epsilon}$ the class of all admissible decision policies:
\[\mathcal{U}_{\epsilon} \triangleq \, \{ \mu: \mathbb{E}_{\infty}[\mu(X_1)] \leq \epsilon \}. \]
Then the two problems corresponding to the classical Lorden's and Pollak's problem we are interested in are formulated as follows:
\begin{problem} \label{Problem:Lorden}
\begin{align*}
\text{find} & \quad \mu^{*}, T^{*} \\
\text{s.t.} & \quad \mathcal{D}^{\mu^{*}}_L(T^*)=\inf_{T\in\mathcal{C}^{\mu}_{\gamma},\mu\in\mathcal{U}_{\epsilon}}\mathcal{D}^{\mu}_L(T), \:\text{for every}\: \gamma>1.
\end{align*}
\end{problem}
and
\begin{problem} \label{Problem:Pollak}
\begin{align*}
\text{find} & \quad \mu^{*}, T^{*} \\
\text{s.t.} & \quad \mathcal{D}^{\mu^{*}}_P(T^*)=\inf_{T\in\mathcal{C}^{\mu}_{\gamma},\mu\in\mathcal{U}_{\epsilon}}\mathcal{D}^{\mu}_P(T), \:\text{for every}\: \gamma>1.
\end{align*}
\end{problem}
\section{Main Results} \label{Section:Main results}
In this section, we provide solutions to the above two problems. We first prove that the censoring strategy that has the maximal post-censoring K-L divergence, coupled with the CuSum algorithm and the SRP procedure, is asymptotically optimal for Problem \ref{Problem:Lorden} and Problem \ref{Problem:Pollak}, respectively. In general, such an asymptotically optimal censoring strategy can only be computed numerically. If we search for the asymptotically optimal censoring strategy over the whole admissible class $\mathcal{U}_{\epsilon}$, the computational load will be huge, especially when the dimension of the observation is high. To alleviate this computational burden, we give two necessary conditions on the asymptotically optimal censoring strategy. One is that the asymptotically optimal censoring strategy should use up the available energy, i.e., equation \eqref{Eqn:energyconstraint} holds with equality for the asymptotically optimal censoring strategy. The other is that the asymptotically optimal censoring strategy has a special structure, that is, the likelihood ratio of the \emph{no send} region is a single interval.
\subsection{Asymptotically Optimal Censoring Strategy for Lorden's Problem}
Define a variation of likelihood ratio function as
\begin{align} \label{Eqn:likelihoodratio}
L^{\mu}(X_k,\theta_k) \triangleq \left\{
\begin{array}{ll}
(\mathrm{d}\mathbb{P}_1/\mathrm{d}\mathbb{P}_{\infty})(X_k), & \text{if } \theta_k=1,\\
\frac{\mathbb{P}^{\mu}_1\{ \theta_k=0\} }{\mathbb{P}^{\mu}_{\infty}\{ \theta_k=0\}}, & \text{if } \theta_k=0,
\end{array} \right.
\end{align}
where $\mathrm{d}\mathbb{P}_1/\mathrm{d}\mathbb{P}_{\infty}$ is the Radon-Nikodym derivative. Note that since it is assumed that $\mathbb{P}_{\infty}$ and $\mathbb{P}_1$ are mutually locally absolutely continuous, such a derivative always exists.
We introduce a statistic based on $L^{\mu}(X_k,\theta_k)$, which can be calculated recursively by
\begin{align}
S_k=&\max_{1\leq q \leq k} \, \prod_{i=q}^{k}L^{\mu}(X_i,\theta_i)\\
=&\max(S_{k-1},1)L^{\mu}(X_k,\theta_k), \forall k\in\mathbb{N}_{+},
\end{align}
with $S_0=0$.
The stopping time based on $S_k$ is given by
\begin{align}
T_{\gamma}^A=\inf\{k \geq 0 : S_k \geq A \}, \label{Eqn:StoppingtimeCUSUM}
\end{align}
where $A$ is a constant threshold such that
\begin{align}
\mathbb{E}^{\mu}_{\infty}[T_{\gamma}^A] = \gamma.
\end{align}
When defining stopping times, we adopt the convention that $\inf\{\emptyset\}=\infty$, i.e., $T_{\gamma}^A=\infty$ if the statistic $S_k$ never crosses $A$.
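To make the recursion concrete, the following minimal simulation sketch implements the statistic $S_k$ and the stopping time \eqref{Eqn:StoppingtimeCUSUM}. The Gaussian mean-shift model ($\mathbb{P}_{\infty}=\mathcal{N}(0,1)$, $\mathbb{P}_{1}=\mathcal{N}(1,1)$), the symmetric censoring rule $\mu(x)=\mathbf{1}_{\{|x|>c\}}$ and the parameter values are illustrative assumptions only; NumPy and SciPy are assumed available.
\begin{verbatim}
# Minimal simulation sketch of the censored CuSum recursion (illustrative
# assumptions: P_inf = N(0,1), P_1 = N(1,1), censoring rule mu(x)=1{|x|>c}).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
nu, A, c = 200, 1e4, 1.5     # change point, CuSum threshold, censoring level
# Likelihood ratio assigned to the "no send" event (theta_k = 0):
# P_1(|X| <= c) / P_inf(|X| <= c), cf. the definition of L^mu above.
L0 = (norm.cdf(c - 1) - norm.cdf(-c - 1)) / (norm.cdf(c) - norm.cdf(-c))

S, k = 0.0, 0                # S_0 = 0
while S < A:
    k += 1
    x = rng.normal(1.0 if k >= nu else 0.0, 1.0)
    L = np.exp(x - 0.5) if abs(x) > c else L0  # dP_1/dP_inf if sent, else L0
    S = max(S, 1.0) * L      # S_k = max(S_{k-1}, 1) * L^mu(X_k, theta_k)
print("alarm at k =", k, "; detection delay =", max(k - nu, 0))
\end{verbatim}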
\begin{lemma} \label{Lemma:optimalstopping}
Given any censoring strategy $\mu \in \mathcal{U}_{\epsilon}$, the stopping time $T_{\gamma}^A$ defined in \eqref{Eqn:StoppingtimeCUSUM} is strictly optimal for Lorden's problem, i.e., for any $\mu$,
\begin{align}
\mathcal{D}^{\mu}_L(T_{\gamma}^A)=\inf_{T\in\mathcal{C}_{\gamma}}\mathcal{D}^{\mu}_L(T), \:\text{for every}\: \gamma>1.
\end{align}
\end{lemma}
\begin{proof}
We introduce the random variable
\begin{align}
Z_k = \left\{
\begin{array}{ll}
X_k, & \text{if } \theta_k=1,\\
\wp, & \text{if } \theta_k=0,
\end{array} \right.
\end{align}
where $\wp$ is a particular symbol indicating the event that the center does not receive the data from the remote sensor. ``Receiving nothing'' can be regarded as a special observation, since the censoring strategy is observation dependent. Instead of just discarding such special observations (i.e., assigning likelihood ratio $1$ in the recursion update for Page's CuSum statistic), we assign the corresponding likelihood ratio value according to the policy used. Note that $\wp$ should be chosen such that $\mathbb{P}_1(\wp) = \mathbb{P}_{\infty}(\wp) = 0.$
Since the censoring strategy is stationary, it is easily seen that if the change event happens at $\nu$ and the particular policy $\mu$ is adopted, $Z_1^{\nu-1}$ are i.i.d. with measure $\mathbb{P}^{\mu}_{\infty}$ and $Z_{\nu}^{\infty}$ are i.i.d. with measure $\mathbb{P}^{\mu}_1$. Denote by $\{\overline{\mathcal{F}}_k\}_{k\in\mathbb{N}}$ the filtration induced by the observations $\{Z_k\}_{k\in\mathbb{N}_{+}}$, defined similarly to $\{\mathcal{F}_k\}_{k\in\mathbb{N}}$. The detection delay associated with Lorden's problem defined in \eqref{Eqn:DetectionDelayLorden} can be rewritten as
\[ \mathcal{D}^{\mu}_L(T) \triangleq \sup_{0\leq\nu<\infty} \big\{ \esssup \mathbb{E}^{\mu}_{\nu}[(T-\nu)^+|\overline{\mathcal{F}}_{\nu}] \big\}. \]
Then given any admissible censoring strategy $\mu \in \mathcal{U}_{\epsilon}$, Problem \ref{Problem:Lorden} can be written as
\begin{align*}
\text{find} & \quad T^{*} \\
\text{s.t.} & \quad \mathcal{D}^{\mu}_L(T^*)=\inf_{T\in\mathcal{C}_{\gamma}}\mathcal{D}^{\mu}_L(T), \:\text{for every}\: \gamma>1.
\end{align*}
Note that $T$ is also a stopping time with respect to $\overline{\mathcal{F}}_{k}$, so the above problem is just the classical Lorden's formulation. The CuSum procedure has been proved to be strictly optimal for Lorden's formulation \cite{moustakides1986optimal, ritov1990decision}. It is easily verified that $L^{\mu}(X_k,\theta_k)$ defined in \eqref{Eqn:likelihoodratio} is in effect the likelihood ratio function of $Z_k$. Hence the stopping time defined in \eqref{Eqn:StoppingtimeCUSUM} is indeed the Page's CuSum procedure based on observations $Z_k$. The strict optimality of $T_{\gamma}^A$ thus follows, which concludes the proof.
\end{proof}
In the following, we study the optimal censoring strategy. To avoid a degenerate problem, we assume finiteness of the K-L divergence of the local observations in the sequel. Specifically, it is assumed that
\begin{align} \label{Eqn:finiteAssumption}
0<\mathbb{D}(\mathbb{P}_1||\mathbb{P}_{\infty}) \triangleq \mathbb{E}_{k}[\ell(X_k)] < \infty,
\end{align}
where $\ell(X_k)=\ln \frac{\mathrm{d}\mathbb{P}_1}{\mathrm{d}\mathbb{P}_{\infty}}(X_k)$ is the log-likelihood ratio function.
\begin{theorem} \label{Theorem:AsymOptCuSum}
Introduce the censoring strategy
\begin{align}
\mu^{*} \triangleq \argmax_{\mu\in\mathcal{U}_{\epsilon}} \mathbb{E}^{\mu}_{k}[ \ln L^{\mu}(X_k,\theta_k) ]. \label{Eqn:optimalCensoringDef}
\end{align}
Then for any $\epsilon \in (0,1]$, the pair of censoring strategy and stopping time $(\mu^{*},T_{\gamma}^A)$ with $T_{\gamma}^A$ defined in \eqref{Eqn:StoppingtimeCUSUM} is third-order asymptotically optimal for Problem \ref{Problem:Lorden}, i.e.,
\begin{align} \label{Eqn:LordenAsymOpt}
\mathcal{D}^{\mu^{*}}_L(T_{\gamma}^A)=\inf_{T\in\mathcal{C}^{\mu}_{\gamma},\mu\in\mathcal{U}_{\epsilon}}\mathcal{D}^{\mu}_L(T)+ \smallO 1, \: \text{as}\: \gamma \to \infty.
\end{align}
\end{theorem}
\begin{proof}
The result of Lemma \ref{Lemma:optimalstopping} shows that to find the optimal (whether strictly optimal or asymptotically optimal) censoring strategy, instead of searching over all possible pairs of censoring strategy and stopping time $(\mu,T)$, we can just compare the performance of $(\mu,T_{\gamma}^{A_{\mu}})$. Here we use the notation $A_{\mu}$ instead of $A$ to highlight that the threshold parameter for CuSum depends on the specific censoring strategy being used.
From the invariance properties of K-L divergence \cite[p. 19]{kullback1978information}, no censoring strategy can increase discrimination information, i.e., the post-censoring K-L divergence cannot be larger than the K-L divergence of the local observation:
\begin{align*}
\mathbb{D}(\mathbb{P}^{\mu}_1||\mathbb{P}^{\mu}_{\infty}) \triangleq &\: \mathbb{E}^{\mu}_{k}[ \ln L^{\mu}(X_k,\theta_k) ] \\
\leq & \mathbb{D}(\mathbb{P}_1||\mathbb{P}_{\infty}) < \:\infty.
\end{align*}
From Theorem 3 of \cite{lorden1971procedures}, we can get
\begin{align}
\mathcal{D}^{\mu}_L(T_{\gamma}^{A_{\mu}})=\frac{\ln \gamma}{\mathbb{D}(\mathbb{P}^{\mu}_1||\mathbb{P}^{\mu}_{\infty}) }(1+\smallO1), \text{as}\: \gamma \to \infty.
\end{align}
Let $\mu_1,\mu_2$ be two arbitrary censoring strategies such that
\begin{align}
\mathbb{D}(\mathbb{P}^{\mu_1}_1||\mathbb{P}^{\mu_1}_{\infty}) \geq \mathbb{D}(\mathbb{P}^{\mu_2}_1||\mathbb{P}^{\mu_2}_{\infty}).
\end{align}
Then as $\gamma \to \infty$,
\begin{align} \label{Eqn:dividelessone}
\frac{\mathcal{D}^{\mu_1}_L(T_{\gamma}^{A_{\mu_1}})}{\mathcal{D}^{\mu_2}_L(T_{\gamma}^{A_{\mu_2}})}=
\frac{\mathbb{D}(\mathbb{P}^{\mu_2}_1||\mathbb{P}^{\mu_2}_{\infty})}{\mathbb{D}(\mathbb{P}^{\mu_1}_1||\mathbb{P}^{\mu_1}_{\infty})}
\leq 1,
\end{align}
where we define $\frac{0}{0}=1$. If $\mathbb{D}(\mathbb{P}^{\mu}_1||\mathbb{P}^{\mu}_{\infty})=0$, the detection problem is degenerate and the detection delay $\mathcal{D}^{\mu}_L(T_{\gamma}^{A_{\mu}})=\infty$. Hence we treat all strategies that have zero post-censoring discrimination information equally.
The equation \eqref{Eqn:LordenAsymOpt} follows directly from \eqref{Eqn:dividelessone} and the proof is thus complete.
\end{proof}
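In general, the maximizer in \eqref{Eqn:optimalCensoringDef} has no closed form. The following rough numerical sketch (with a Gaussian mean-shift model assumed purely for illustration, a \emph{no send} region of the single-interval form suggested by Theorem \ref{Theorem:likelihoodratio} below, the interval chosen to use up the energy budget as suggested by Theorem \ref{Theorem:EqualGreaterKL}, and SciPy assumed available) grid-searches the post-censoring K-L divergence:
\begin{verbatim}
# Grid search for the post-censoring K-L divergence D(P_1^mu || P_inf^mu)
# (illustrative assumptions: P_inf = N(0,1), P_1 = N(1,1), "no send" region
# [a, b] with P_inf([a, b]) = 1 - eps, i.e., the energy budget is used up).
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

eps = 0.3                              # energy budget E_inf[mu(X_1)] <= eps
f0, f1 = norm(0, 1), norm(1, 1)

def post_censoring_kl(a):
    b = f0.ppf(f0.cdf(a) + 1 - eps)    # enforce P_inf([a, b]) = 1 - eps
    g = lambda x: (f1.logpdf(x) - f0.logpdf(x)) * f1.pdf(x)
    send = quad(g, -10, a)[0] + quad(g, b, 10)[0]  # tails cut at +/-10
    p1 = f1.cdf(b) - f1.cdf(a)                     # P_1(no send)
    return send + p1 * np.log(p1 / (1 - eps))

grid = np.linspace(-3, f0.ppf(eps) - 1e-6, 200)    # keeps b well defined
a_best = max(grid, key=post_censoring_kl)
print("best a =", a_best, "; K-L =", post_censoring_kl(a_best))
\end{verbatim}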
\subsection{Asymptotically Optimal Censoring Strategy for Pollak's Problem}
Even for the classical Pollak's problem, the strictly optimal detection procedure is still an open issue. Thus, instead of looking for the optimal (or asymptotically optimal) pair of censoring strategy and detection procedure, we aim to find the optimal (or asymptotically optimal) censoring strategy given a specific detection procedure. The SR procedure and its variants, SRP and SR-$r$, have been proved to be asymptotically optimal for the classical Pollak's problem \cite{pollak1985optimal, tartakovsky2012third}. In this paper, we study the asymptotically optimal censoring strategy when the SRP procedure is being used. Note that the other two cases (when the detection procedure is SR or SR-$r$) can be studied similarly.
The SRP procedure for our problem is given by the stopping time
\begin{align}
T_{\gamma}^A=\inf\{k \geq 0 : R_k \geq A \}, \label{Eqn:StoppingtimeSRP}
\end{align}
where $A$ is a constant threshold such that
\begin{align}
\mathbb{E}^{\mu}_{\infty}[T_{\gamma}^A] = \gamma,
\end{align}
and
\begin{align}
R_k=(1+R_{k-1})L^{\mu}(X_k,\theta_k), \: k \geq 1,
\end{align}
with a random initial point $R_0\sim Q_A(x)$, where the quasi-stationary cdf $Q_A(x)$ is defined by
\begin{align} \label{Eqn:QuasiStaDistri}
Q_A(x) \triangleq \lim_{k\to \infty} \mathbb{P}^{\mu}_{\infty}(R_k \leq x | T_{\gamma}^A > k).
\end{align}
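Sampling the initial point $R_0$ from the quasi-stationary law $Q_A$ in \eqref{Eqn:QuasiStaDistri} is nontrivial in practice. As a rough illustration (with the same illustrative Gaussian model and censoring rule as in the earlier CuSum sketch), the recursion itself can be simulated with the simplification $R_0=0$, which yields the closely related SR procedure rather than SRP:
\begin{verbatim}
# Minimal sketch of the censored Shiryaev-Roberts recursion. Setting R_0 = 0
# gives the plain SR statistic; the SRP procedure would instead draw R_0 from
# the quasi-stationary law Q_A, which we do not sample here.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
nu, A, c = 200, 1e4, 1.5
L0 = (norm.cdf(c - 1) - norm.cdf(-c - 1)) / (norm.cdf(c) - norm.cdf(-c))

R, k = 0.0, 0
while R < A:
    k += 1
    x = rng.normal(1.0 if k >= nu else 0.0, 1.0)
    L = np.exp(x - 0.5) if abs(x) > c else L0
    R = (1.0 + R) * L        # R_k = (1 + R_{k-1}) * L^mu(X_k, theta_k)
print("alarm at k =", k)
\end{verbatim}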
From the classical quickest change detection theory, we know that given any censoring strategy $\mu$, the corresponding SRP is third-order asymptotically optimal, i.e.,
\begin{align}
\mathcal{D}^{\mu}_P(T_{\gamma}^A)=\inf_{T\in\mathcal{C}_{\gamma}}\mathcal{D}^{\mu}_P(T)+\smallO 1, \:\text{as}\: \gamma \to \infty.
\end{align}
The problem we are interested in is the following: given the stopping time $T_{\gamma}^A$ defined in \eqref{Eqn:StoppingtimeSRP}, what is the asymptotically optimal censoring strategy? The result is stated in the following theorem. To study the properties of the SRP procedure, in this subsection we assume that $\mathbb{E}_{k}[{\ell(X_k)}^2] < \infty$, and that there is no point mass for $\ell(X_k)$ under either $\mathbb{P}_{1}$ or $\mathbb{P}_{\infty}$, i.e.,
\begin{align}
\mathbb{P}_{1}(\ell(X_k)=t)=0, \, & \forall t\geq 0,\\
\mathbb{P}_{\infty}(\ell(X_k)=t)=0, \, & \forall t\geq 0.
\end{align}
\begin{lemma} \label{Lemma:finitesecondlikelistrategy}
$\mathbb{E}_{k}[{\ell(X_k)}^2] < \infty$ implies \[\mathbb{E}^{\mu}_{k}[ \ln^2 L^{\mu}(X_k,\theta_k)]< \infty, \forall \mu \in \mathcal{U}_{\epsilon}. \]
\end{lemma}
\begin{proof}
If $\mathbb{E}_{\infty}[\mu(X_k)] =1$, then
\[\mathbb{E}^{\mu}_{k}[ \ln^2 L^{\mu}(X_k,\theta_k)]=\mathbb{E}_{k}[{\ell(X_k)}^2]< \infty.\]
If $\mathbb{E}_{\infty}[\mu(X_k)] <1$, let $\Omega_c$ be the censoring region associated with $\mu$.
Since $\mathbb{P}_{\infty}$ and $\mathbb{P}_1$ are mutually locally absolutely continuous measures, for any $\Omega_c$,
\begin{align*}
0<\frac{\int_{\Omega_{c}}\mathrm{d}\mathbb{P}_1}{\int_{\Omega_{c}}\mathrm{d}\mathbb{P}_{\infty}}<\infty.
\end{align*}
We then obtain
\begin{align*}
&\mathbb{E}^{\mu}_{k}[ \ln^2 L^{\mu}(X_k,\theta_k)] \\
=& \int_{\Omega\backslash \Omega_{c}}\ln^2\frac{\mathrm{d}\mathbb{P}_1}{\mathrm{d}\mathbb{P}_{\infty}} \mathrm{d}\mathbb{P}_1 +
\int_{\Omega_{c}}\ln^2\frac{\int_{\Omega_{c}}\mathrm{d}\mathbb{P}_1}{\int_{\Omega_{c}}\mathrm{d}\mathbb{P}_{\infty}} \mathrm{d}\mathbb{P}_1\\
\leq & \mathbb{E}_{k}[{\ell(X_k)}^2] +\ln^2\frac{\int_{\Omega_{c}}\mathrm{d}\mathbb{P}_1}{\int_{\Omega_{c}}\mathrm{d}\mathbb{P}_{\infty}} \\
< & \infty,
\end{align*}
and the proof is complete.
\end{proof}
\begin{theorem} \label{Theorem:AsymOptSRP}
Let $\mu^{*}$ be the censoring strategy defined in \eqref{Eqn:optimalCensoringDef}. Then given the stopping time $T_{\gamma}^A$ defined in \eqref{Eqn:StoppingtimeSRP}, for any $\epsilon\in(0,1]$, $\mu^{*}$ is third-order asymptotically optimal. Specifically,
\begin{align} \label{Eqn:SRPgreaterKLopt}
\mathcal{D}^{\mu^{*}}_P(T_{\gamma}^A)=\inf_{\mu\in\mathcal{U}_{\epsilon}}\mathcal{D}^{\mu}_P(T_{\gamma}^A)+\smallO1, \:\text{as}\: \gamma\to\infty.
\end{align}
\end{theorem}
\begin{proof}
Given any censoring strategy $\mu$, since
\[\mathbb{E}_{k}[{\ell(X_k)}^2] < \infty,\]
by Lemma \ref{Lemma:finitesecondlikelistrategy}, one obtains
\[\mathbb{E}^{\mu}_{k}[ \ln^2 L^{\mu}(X_k,\theta_k)]< \infty.\]
Also as there is no point mass for $\ell(X_k)$, $\ln L^{\mu}(X_k,\theta_k)$ is non-arithmetic \footnote{A random variable $Y \in \mathbb{R}$ is said to be arithmetic if there exists constant $d>0$ such that \[\text{Pr}\{Y\in\{\ldots,-2d,-d,0,d,2d,\ldots\}\}=1.\] Otherwise it is called non-arithmetic. } for any censoring strategy. Then from \cite{tartakovsky2012third}, one obtains that
\begin{align}
\mathcal{D}^{\mu}_P(T_{\gamma}^A)=&\frac{1}{\mathbb{D}(\mathbb{P}^{\mu}_1||\mathbb{P}^{\mu}_{\infty})}(\ln A + \aleph -C_{\infty}) + \smallO 1\: \text{as}\: A \to \infty, \label{Eqn:SRPasymperDelay} \\
\mathbb{E}_{\infty}^{\mu}(T_{\gamma}^A)=&\frac{A}{\zeta}(1+\smallO 1) \: \text{as}\: A \to \infty, \label{Eqn:SRPasymperARL}
\end{align}
where $\aleph$, $C_{\infty}$ and $\zeta$ are given as follows. Let $S_n=\ln L^{\mu}(X_1,\theta_1)+ \cdots + \ln L^{\mu}(X_n,\theta_n)$ and define the one-sided stopping time by $\tau_a=\min\{n\geq 1: S_n \geq a\},$ for $a\geq 0$. Let $\kappa_a=S_{\tau_a}-a$ be the excess over the threshold $a$ at the stopping time; then $\aleph$ and $\zeta$ in \eqref{Eqn:SRPasymperDelay} and \eqref{Eqn:SRPasymperARL} are defined by
\begin{align*}
\aleph=\lim_{a\to\infty}\mathbb{E}_{1}^{\mu}[\kappa_a],
\qquad \qquad \zeta=\lim_{a\to\infty}\mathbb{E}_{1}^{\mu}[e^{-\kappa_a}].
\end{align*}
The variable $C_{\infty}$ is given by
\begin{align*}
C_{\infty}=\mathbb{E}_{\infty}^{\mu}[\ln(1+R_{\infty}+V_{\infty})],
\end{align*}
where $V_{\infty}=\sum_{i=1}^{\infty}e^{-S_i}$, and $R_{\infty}$ is a random variable that has the $\mathbb{P}_{\infty}^{\mu}$-limiting distribution of $R_n$ as $n\to\infty$, i.e.,
\[\lim_{n\to\infty}\mathbb{P}_{\infty}^{\mu}(R_n\leq x)=\mathbb{P}_{\infty}^{\mu}(R_{\infty}\leq x).\]
Obviously $0<\zeta<1$, and from renewal theory (e.g., \cite{siegmund1985sequential}), one can obtain that $0<\aleph<\infty$ and $0<C_{\infty}<\infty$ whatever the censoring strategy is. Let $A=\gamma \zeta$; since $\aleph$, $\zeta$ and $C_{\infty}$ depend only on the censoring strategy and the observation model, and are thus independent of $\gamma$, we can rewrite \eqref{Eqn:SRPasymperDelay} and \eqref{Eqn:SRPasymperARL} as
\begin{align}
\mathcal{D}^{\mu}_P(T_{\gamma}^A)=&\frac{\ln \gamma}{\mathbb{D}(\mathbb{P}^{\mu}_1||\mathbb{P}^{\mu}_{\infty})}(1 + \smallO 1) \: \text{as}\: \gamma \to \infty, \\
\mathbb{E}_{\infty}^{\mu}(T_{\gamma}^A)=&\gamma(1+\smallO 1) \: \text{as}\: \gamma \to \infty.
\end{align}
Let $\mu_1,\mu_2$ be two arbitrary censoring strategies such that
\begin{align}
\mathbb{D}(\mathbb{P}^{\mu_1}_1||\mathbb{P}^{\mu_1}_{\infty}) \geq \mathbb{D}(\mathbb{P}^{\mu_2}_1||\mathbb{P}^{\mu_2}_{\infty}).
\end{align}
Then as $\gamma \to \infty$,
\begin{align} \label{Eqn:dividelessone1}
\frac{\mathcal{D}^{\mu_1}_P(T_{\gamma}^{A_{\mu_1}})}{\mathcal{D}^{\mu_2}_P(T_{\gamma}^{A_{\mu_2}})}=&
\frac{\mathbb{D}(\mathbb{P}^{\mu_2}_1||\mathbb{P}^{\mu_2}_{\infty})}{\mathbb{D}(\mathbb{P}^{\mu_1}_1||\mathbb{P}^{\mu_1}_{\infty})}
\leq 1, \\
\frac{\mathbb{E}_{\infty}^{\mu_1}(T_{\gamma}^{A_{\mu_1}})}{\mathbb{E}_{\infty}^{\mu_2}(T_{\gamma}^{A_{\mu_2}})}=&1.
\end{align}
Equation \eqref{Eqn:SRPgreaterKLopt} follows, so the proof is complete.
\end{proof}
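To make the quantities above concrete, the following minimal simulation sketch (in Python, for the Gaussian mean-shift model used later in Section \ref{Section:Numerical Example}; the function names and the no-send interval are illustrative choices of ours, not part of the procedure's specification) runs the Shiryaev--Roberts recursion $R_{n+1}=(1+R_n)L^{\mu}(X_{n+1},\theta_{n+1})$ with a censored likelihood ratio. Consistent with the decomposition in the proof of Lemma \ref{Lemma:finitesecondlikelistrategy}, the censored likelihood ratio is taken to be the full likelihood ratio for transmitted observations and the ratio of the \emph{no send} masses otherwise; for simplicity the randomized initial point of the SRP procedure is replaced by $R_0=0$.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# Gaussian mean-shift model: f0 = N(0,1) pre-change, f1 = N(1,1) post-change.
f0, f1 = norm(0.0, 1.0), norm(1.0, 1.0)

def censored_llr(x, a, b):
    # If x falls in the no-send interval [a, b] (theta = 0), the decision
    # maker only learns that fact and uses the ratio of the no-send masses;
    # otherwise (theta = 1) it observes x and uses the full likelihood ratio.
    if a <= x <= b:
        return (f1.cdf(b) - f1.cdf(a)) / (f0.cdf(b) - f0.cdf(a))
    return f1.pdf(x) / f0.pdf(x)

def sr_stopping_time(a, b, A, nu, rng):
    # Shiryaev-Roberts recursion R_{n+1} = (1 + R_n) * L_{n+1}, started from
    # R_0 = 0, with the change occurring at time nu; stop when R_n >= A.
    R, n = 0.0, 0
    while R < A:
        n += 1
        x = rng.normal(1.0 if n >= nu else 0.0, 1.0)
        R = (1.0 + R) * censored_llr(x, a, b)
    return n

rng = np.random.default_rng(0)
# Illustrative (not optimized) no-send interval with P_infty-mass ~ 0.91.
print(sr_stopping_time(a=-1.5, b=2.0, A=1500.0, nu=1, rng=rng))
\end{verbatim}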
\subsection{Maximize K-L Divergence} \label{Section:maximizeKLDivergence}
The above theorems show that the asymptotically optimal censoring strategy for both Lorden's and Pollak's problems is the one with the maximal post-censoring K-L divergence.
In general, the optimal censoring strategy that maximizes the post-censoring K-L divergence has to be found numerically. In this subsection, we give two properties that the optimal strategy possesses. The usefulness of our results is that these two properties can be utilized to significantly reduce the computation load. Before proceeding, we introduce the following assumption.
\begin{assumption} \label{Assumption:nomasspoint}
There is no point mass for observation $X_k$ under either $\mathbb{P}_{\infty}$ or $\mathbb{P}_{1}$, i.e., both $\mathbb{P}_{\infty}$ and $\mathbb{P}_{1}$ are continuous over $\mathcal{F}$.
\end{assumption}
\begin{remark}
Under this assumption, our argument and presentation will be simplified. For scenarios where the observations have point mass, a randomized censoring strategy can be used instead. For a randomized censoring strategy, whether an observation taken at the sensor is sent to the fusion center or not depends not only on the observation itself but also on another random variable (which needs to be carefully defined to meet certain constraints). Randomization over these possible point masses can ``split'' them arbitrarily, so the problem will be reduced to one with no point mass. The following theorems can thus be extended to the point mass case easily.
\end{remark}
\begin{lemma} \label{Lemma:greaterratio}
Suppose $\mathbf{P}$ and $\mathbf{Q}$ are two probability measures on a measurable space $(\Omega,\mathcal{F})$.
Given any $A_1 \in \mathcal{F}$, define the following two quantities:
\begin{align}
|A_1|_{\mathbf{Q}\mathbf{P}} \triangleq & \frac{\int_{A_1} \mathrm{d}\mathbf{Q}}{\int_{A_1} \mathrm{d}\mathbf{P}},\\
|A_1|_{\mathbf{P}} \triangleq & \int_{A_1} \mathrm{d}\mathbf{P}.
\end{align}
Then $\forall~ b \in[0,|A_1|_{\mathbf{P}}]$, we can always find $A_2 \subseteq A_1$ such that
\begin{align}
|A_2|_{\mathbf{Q}\mathbf{P}} \geq & |A_1|_{\mathbf{Q}\mathbf{P}},\\
|A_2|_{\mathbf{P}}= & b,
\end{align}
if \begin{enumerate}
\item $\mathbf{Q}$ is absolutely continuous with respect to $\mathbf{P}$.
\item $\mathbf{P}$ is continuous.
\end{enumerate}
\end{lemma}
\begin{proof}
Since $\mathbf{Q}$ is absolutely continuous w.r.t. $\mathbf{P}$, we can obtain the Radon-Nikodym derivative as
\[y(\omega)=\frac{\mathrm{d}\mathbf{Q}}{\mathrm{d}\mathbf{P}}(\omega).\]
Then the likelihood ratio for $A_1$ can be rewritten by
\begin{align*}
|A_1|_{\mathbf{Q}\mathbf{P}} = & \frac{\int_{A_1} \mathrm{d}\mathbf{Q}}{\int_{A_1} \mathrm{d}\mathbf{P}}= \frac{\int_{A_1} y \mathrm{d}\mathbf{P}}{\int_{A_1} \mathrm{d}\mathbf{P}},
\end{align*}
and it follows that
\begin{align}
\max_{\omega \in A_1}y(\omega) \geq & |A_1|_{\mathbf{Q}\mathbf{P}}, \\
\min_{\omega \in A_1}y(\omega) \leq & |A_1|_{\mathbf{Q}\mathbf{P}}.
\end{align}
If $\max_{\omega \in A_1}y(\omega) = |A_1|_{\mathbf{Q}\mathbf{P}}$, then
\[y(\omega)=|A_1|_{\mathbf{Q}\mathbf{P}}, \: \forall \omega \in A_1, \]
and the problem becomes trivial.
In the following, we consider the case where
$\max_{\omega \in A_1}y(\omega) > |A_1|_{\mathbf{Q}\mathbf{P}}.$
Define
\[\overline{a} \triangleq \max_{\omega \in A_1}y(\omega). \]
Let
\begin{align} \label{Eqn:A2}
A_2=\{\omega \in A_1: y(\omega)\in [c,\overline{a}]\}.
\end{align}
Since $\mathbf{P}$ is continuous, we can always find an appropriate $c$ such that
\[|A_2|_{\mathbf{P}}= b.\]
We now prove that $A_2$ defined in \eqref{Eqn:A2} satisfies $|A_2|_{\mathbf{Q}\mathbf{P}} \geq |A_1|_{\mathbf{Q}\mathbf{P}}$. If $ c\geq |A_1|_{\mathbf{Q}\mathbf{P}}$,
\[|A_2|_{\mathbf{Q}\mathbf{P}}=\frac{\int_{A_2} y \mathrm{d}\mathbf{P}}{\int_{A_2} \mathrm{d}\mathbf{P}} \geq c \frac{\int_{A_2} \mathrm{d}\mathbf{P}}{\int_{A_2} \mathrm{d}\mathbf{P}} = c \geq |A_1|_{\mathbf{Q}\mathbf{P}}.\]
If $ c < |A_1|_{\mathbf{Q}\mathbf{P}}$, then, since $y(\omega)<c$ on $A_1\backslash A_2$ by \eqref{Eqn:A2}, we obtain
\[|A_1\backslash A_2|_{\mathbf{Q}\mathbf{P}}=\frac{\int_{A_1\backslash A_2} y \mathrm{d}\mathbf{P}}{\int_{A_1\backslash A_2} \mathrm{d}\mathbf{P}} \leq c < |A_1|_{\mathbf{Q}\mathbf{P}}.\]
Suppose, to the contrary, that $|A_2|_{\mathbf{Q}\mathbf{P}} < |A_1|_{\mathbf{Q}\mathbf{P}}$. It then follows that
\begin{align*}
|A_1|_{\mathbf{Q}\mathbf{P}}=&\frac{ \int_{A_2} \mathrm{d}\mathbf{Q} +\int_{A_1\backslash A_2} \mathrm{d}\mathbf{Q} }{ \int_{A_2} \mathrm{d}\mathbf{P} + \int_{A_1\backslash A_2} \mathrm{d}\mathbf{P} }\\
< &|A_1|_{\mathbf{Q}\mathbf{P}} \frac{ \int_{A_2} \mathrm{d}\mathbf{P} + \int_{A_1\backslash A_2} \mathrm{d}\mathbf{P} }{ \int_{A_2} \mathrm{d}\mathbf{P} + \int_{A_1\backslash A_2} \mathrm{d}\mathbf{P} } \\
=&|A_1|_{\mathbf{Q}\mathbf{P}},
\end{align*}
a contradiction. Thus $|A_2|_{\mathbf{Q}\mathbf{P}} \geq |A_1|_{\mathbf{Q}\mathbf{P}}$, and the proof is complete.
\end{proof}
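The constructive step of the proof, shrinking $A_1$ to a superlevel set of the Radon--Nikodym derivative while matching a prescribed $\mathbf{P}$-mass, is straightforward to carry out numerically. The following sketch is an illustration only, with the hypothetical choices $\mathbf{P}=\mathcal{N}(0,1)$, $\mathbf{Q}=\mathcal{N}(1,1)$ and $A_1=[-1,1]$, so that $y(x)=e^{x-1/2}$ is increasing and the set $A_2$ in \eqref{Eqn:A2} is an interval $[t,1]$; it locates $t$ by bisection and confirms that $|A_2|_{\mathbf{Q}\mathbf{P}}\geqslant|A_1|_{\mathbf{Q}\mathbf{P}}$.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

P, Q = norm(0.0, 1.0), norm(1.0, 1.0)  # y(x) = dQ/dP(x) = exp(x - 1/2)

def ratio(lo, hi):
    # |[lo, hi]|_{QP} = Q([lo, hi]) / P([lo, hi])
    return (Q.cdf(hi) - Q.cdf(lo)) / (P.cdf(hi) - P.cdf(lo))

lo, hi, b = -1.0, 1.0, 0.3             # A_1 = [-1, 1]; target P-mass b

# Since y is increasing, A_2 = A_1 intersected with {y >= c} is an interval
# [t, hi]; bisect on t so that P([t, hi]) = b.
t_lo, t_hi = lo, hi
for _ in range(60):
    t = 0.5 * (t_lo + t_hi)
    if P.cdf(hi) - P.cdf(t) > b:
        t_lo = t
    else:
        t_hi = t

print(ratio(lo, hi), ratio(t, hi))     # the second ratio dominates the first
\end{verbatim}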
\begin{theorem} \label{Theorem:EqualGreaterKL}
Given any censoring strategy $\underline{\mu} \in \mathcal{U}_{\epsilon}$, we can always find another strategy $\overline{\mu} \in \mathcal{U}_{\epsilon}$ that satisfies
\begin{align}
\mathbb{E}_{\infty}[\overline{\mu}(X_k)]~ =~ &\epsilon, \\
\mathbb{E}^{\overline{\mu}}_{k}[ \ln L^{\overline{\mu}}(X_k,\theta_k) ]~ \geq ~& \mathbb{E}^{\underline{\mu}}_{k}[ \ln L^{\underline{\mu}}(X_k,\theta_k) ]. \label{Eqn:BiggerK-L}
\end{align}
\end{theorem}
\begin{proof}
A censoring strategy can be fully characterised by its \emph{send} region or \emph{no send} region. Suppose strategy $\underline{\mu}$ is given by
\begin{align}
\theta_k=\left\{
\begin{array}{ll}
0, & \text{if } X_k \in \Omega_{\underline{\mu}},\\
1, & \text{otherwise},
\end{array} \right.
\end{align}
where $\Omega_{\underline{\mu}} \subseteq \Omega$ is the \emph{no send} region associated with the strategy $\underline{\mu}$.
It follows from $\underline{\mu} \in \mathcal{U}_{\epsilon}$ that
\begin{align}
\int_{\Omega_{\underline{\mu}}} \mathrm{d}\mathbb{P}_{\infty} \geq 1-\epsilon.
\end{align}
Then under Assumption \ref{Assumption:nomasspoint} and by Lemma \ref{Lemma:greaterratio}, we can always find $\Omega_{\overline{\mu}} \subseteq \Omega_{\underline{\mu}}$ such that
\begin{align}
\int_{ \Omega_{\overline{\mu}}} \mathrm{d}\mathbb{P}_{\infty} =& 1-\epsilon, \\
\frac{ \int_{\Omega_{\overline{\mu}}} \mathrm{d}\mathbb{P}_{1} }{ \int_{\Omega_{\overline{\mu}}} \mathrm{d}\mathbb{P}_{\infty} } \geq & \frac{ \int_{\Omega_{\underline{\mu}}} \mathrm{d}\mathbb{P}_{1} }{ \int_{\Omega_{\underline{\mu}}} \mathrm{d}\mathbb{P}_{\infty} } \label{Eqn:greaterratioTheo}
\end{align}
Define strategy $\overline{\mu}$ by
\begin{align}
\theta_k=\left\{
\begin{array}{ll}
0, & \text{if } X_k \in \Omega_{\overline{\mu}},\\
1, & \text{otherwise}.
\end{array} \right.
\end{align}
We obtain
\begin{align*}
&\mathbb{E}^{\underline{\mu}}_{k}[ \ln L^{\underline{\mu}}(X_k,\theta_k) ] - \mathbb{E}^{\overline{\mu}}_{k}[ \ln L^{\overline{\mu}}(X_k,\theta_k) ] \\
= & \int_{ \Omega_{\underline{\mu}} } \ln \frac{ \int_{\Omega_{\underline{\mu}}}\mathrm{d}\mathbb{P}_{1} }{ \int_{\Omega_{\underline{\mu}}} \mathrm{d}\mathbb{P}_{\infty} } \mathrm{d}\mathbb{P}_{1} +
\int_{ \Omega\backslash \Omega_{\underline{\mu}} } \ln \frac{ \mathrm{d}\mathbb{P}_{1} }{ \mathrm{d}\mathbb{P}_{\infty}} \mathrm{d}\mathbb{P}_{1} \\
&-\, \int_{ \Omega_{\overline{\mu}} } \ln \frac{ \int_{\Omega_{\overline{\mu}}}\mathrm{d}\mathbb{P}_{1} }{ \int_{\Omega_{\overline{\mu}}} \mathrm{d}\mathbb{P}_{\infty} } \mathrm{d}\mathbb{P}_{1} -
\int_{ \Omega\backslash \Omega_{\overline{\mu}} } \ln \frac{ \mathrm{d}\mathbb{P}_{1} }{ \mathrm{d}\mathbb{P}_{\infty}} \mathrm{d}\mathbb{P}_{1} \\
=& \int_{ \Omega_{\overline{\mu}} } \left[ \ln \frac{ \int_{\Omega_{\underline{\mu}}}\mathrm{d}\mathbb{P}_{1} }{ \int_{\Omega_{\underline{\mu}}} \mathrm{d}\mathbb{P}_{\infty} } - \ln \frac{ \int_{\Omega_{\overline{\mu}}}\mathrm{d}\mathbb{P}_{1} }{ \int_{\Omega_{\overline{\mu}}} \mathrm{d}\mathbb{P}_{\infty} }\right]\mathrm{d}\mathbb{P}_{1} \\
& +\, \int_{ \Omega_{\underline{\mu}}\backslash \Omega_{\overline{\mu}} } \ln \frac{ \int_{\Omega_{\underline{\mu}}}\mathrm{d}\mathbb{P}_{1} }{ \int_{\Omega_{\underline{\mu}}} \mathrm{d}\mathbb{P}_{\infty} } \mathrm{d}\mathbb{P}_{1} \\
& -\, \int_{ \Omega_{\underline{\mu}}\backslash \Omega_{\overline{\mu}} } \ln \frac{ \mathrm{d}\mathbb{P}_{1} }{ \mathrm{d}\mathbb{P}_{\infty}} \mathrm{d}\mathbb{P}_{1} \\
\leq& 0,
\end{align*}
where the inequality follows from \eqref{Eqn:greaterratioTheo} and the invariance properties of K-L divergence. The proof is thus complete.
\end{proof}
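As a quick numerical illustration of the theorem (a sketch only, using the Gaussian model $\mathbb{P}_{\infty}=\mathcal{N}(0,1)$, $\mathbb{P}_{1}=\mathcal{N}(1,1)$ of Section \ref{Section:Numerical Example}; the interval endpoints are hypothetical): shrinking an over-sized \emph{no send} interval to one of $\mathbb{P}_{\infty}$-mass exactly $1-\epsilon$, by keeping its part with the largest likelihood ratio, increases the post-censoring K-L divergence.
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

f0, f1 = norm(0.0, 1.0), norm(1.0, 1.0)

def post_censoring_kl(a, b):
    # Density part over the send region plus the single censored atom [a, b].
    integrand = lambda x: f1.pdf(x) * np.log(f1.pdf(x) / f0.pdf(x))
    left, _ = quad(integrand, -np.inf, a)
    right, _ = quad(integrand, b, np.inf)
    p1, p0 = f1.cdf(b) - f1.cdf(a), f0.cdf(b) - f0.cdf(a)
    return left + right + p1 * np.log(p1 / p0)

eps = 0.1
a1, b1 = -2.2, 2.2  # over-sized no-send interval: P_infty-mass ~ 0.97 > 1 - eps
# The likelihood ratio exp(x - 1/2) is increasing, so keep the right part
# [t, b1], chosen such that its P_infty-mass is exactly 1 - eps.
t = norm.ppf(f0.cdf(b1) - (1 - eps))
print(post_censoring_kl(a1, b1), post_censoring_kl(t, b1))
\end{verbatim}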
\begin{remark}
The intuition behind the above theorem is that the more energy the sensor uses for communication with the decision maker, the larger the K-L divergence of the available observations, and hence the better the asymptotic detection performance.
\end{remark}
In the following, we will show that the asymptotically optimal censoring strategy that maximizes the post-censoring K-L divergence has a very special structure: in terms of the likelihood ratio, the \emph{no send} region is a single interval. As the proof relies on the optimal quantization structure established in \cite{tsitsiklis1993extremal}, we first introduce the concept of a randomized likelihood-ratio quantizer (RLRQ).
\begin{definition}
Let the threshold vector be $\vec{t}=(t_1,\ldots,t_{D-1}) \in \mathbb{R}_{+}^{D-1}$ with $0 \leq t_1 \leq \cdots \leq t_{D-1} \leq \infty$ and the associated real-valued random vector be $
\vec{r}=(r_1,\ldots,r_{D-1}) \in \mathbb{R}^{D-1}$. The elements $r_1,\ldots,r_{D-1}$ are independent of each other. The intervals associated with $\vec{t}$ are defined by $I_1=(0,t_1), I_2=(t_1,t_2),\ldots,I_{D-1}=(t_{D-2},t_{D-1}), I_{D}=(t_{D-1},\infty)$.
A quantizer $\phi: \Omega \mapsto \{1,\ldots,D\}$ is a monotone RLRQ with threshold vector $\vec{t}$ and random vector $\vec{r}$ if
\begin{align*}
&\text{Pr}\left( L(\omega)\in I_d \,\, \& \,\, \phi(\omega)\neq d \right)=0, \quad \forall d, \\
&\mathbf{1}_{\{L(\omega)=t_d\,\, \& \,\, \phi(\omega)=d\,\, \& \,\, r_d\in\mathbb{R}_d \}}\\
&\,+\,\mathbf{1}_{\{L(\omega)=t_d\,\, \& \,\, \phi(\omega)=d+1\,\, \& \,\, r_d\in\mathbb{R}\backslash\mathbb{R}_d\}}=1, \quad \forall d ,
\end{align*}
where $L(\cdot)$ is the likelihood ratio function and $\mathbb{R}_d \subseteq \mathbb{R}$ is the ``selection" set, which together with the random variable $r_d$ determines the quantization output of those points that have the likelihood ratio on the boundary $t_d$.
A quantizer $\phi$ is defined to be an RLRQ if there exists a permutation map $\pi$ such that $\pi \circ \phi$ is a monotone RLRQ.
\end{definition}
\begin{remark}
With the above definition, the RLRQ reduces to the deterministic one when the likelihood ratio $L(\omega)$ belongs to the interior of the intervals. The quantizer is randomized, with the aid of the carefully designed random variable $r_d$ and the associated selection set $\mathbb{R}_d$, on the boundary $t_d$.
\end{remark}
\begin{theorem} \label{Theorem:likelihoodratio}
The following randomized likelihood-ratio-based censoring strategy can achieve the maximal post-censoring K-L divergence defined in \eqref{Eqn:optimalCensoringDef}: $\forall k$,
\begin{align} \label{Eqn:optcensor}
\theta_k=\left\{
\begin{array}{ll}
0, & \text{if } \underline{L}_c<L(X_k)<\overline{L}_{c} ,\\
1, & \text{if } L(X_k)<\underline{L}_{c}\: \text{or}\: L(X_k)>\overline{L}_{c},
\end{array} \right.
\end{align}
and if $X_k$ is on the boundary, i.e., $L(X_k)=\underline{L}_c$ or $\overline{L}_{c}$, then $\theta_k$ is determined not only by $L(X_k)$ but also by the auxiliary independent random variables $\kappa_1, \kappa_2 \in \mathbb{R}$, respectively. Specifically, when $L(X_k)=\underline{L}_c$,
\begin{align}
\theta_k=\left\{
\begin{array}{ll}
0, & \text{if } \kappa_1 \in \mathbb{R}_c^{1} ,\\
1, & \text{otherwise},
\end{array} \right.
\end{align}
and when
$L(X_k)=\overline{L}_c$,
\begin{align}
\theta_k=\left\{
\begin{array}{ll}
0, & \text{if } \kappa_2 \in \mathbb{R}_c^{2} ,\\
1, & \text{otherwise}.
\end{array} \right.
\end{align}
The threshold parameters $\underline{L}_c, \overline{L}_c$ and the censoring regions $\mathbb{R}_c^{1}, \mathbb{R}_c^{2}$ for the auxiliary random variables should be chosen such that
\begin{align*}
&\int_{L(X_1)\in(\underline{L}_c, \overline{L}_c)} \mathrm{d}\mathbb{P}_{\infty}+ \mathbb{P}_{\infty}(L(X_1)=\underline{L}_c) \text{Pr}\{\kappa_1\in\mathbb{R}_c^{1}\} \\
& +\,\mathbb{P}_{\infty}(L(X_1)=\overline{L}_c) \text{Pr}\{\kappa_2\in\mathbb{R}_c^{2}\} \,=\, 1- \epsilon. \addtag \label{Eqn:EqualCons}
\end{align*}
\end{theorem}
\begin{proof}
The equality constraint in \eqref{Eqn:EqualCons} follows directly from Theorem \ref{Theorem:EqualGreaterKL}. We thus focus on the proof that the optimal censoring strategy has the structure described in \eqref{Eqn:optcensor}.
In the setting of censoring, it is assumed that when the sensor decides to send its observation, the ``real'' observations are sent without any quantization. To prove the likelihood-ratio-based structure, we first assume that the sensor sends quantized observations instead of unquantized ones. Specifically, if the observations are deemed ``uninformative'', the sensor sends nothing; otherwise, the sensor sends one of the $D$ symbols. Mathematically, the sensor adopts the quantization rule
\[\phi: \Omega \mapsto \{0,1,\ldots,D\},\]
where if the observation is mapped to $0$, it means that the sensor sends nothing. Then we study the structure of the optimal quantization rule that maximizes the K-L divergence after quantization, which is given by
\begin{align}
\mathbb{D}^{q} \triangleq \sum_{d=0}^{D} \mathbb{P}_{1}(\phi(\omega)=d) \ln \frac{\mathbb{P}_{1}(\phi(\omega)=d)}{\mathbb{P}_{\infty}(\phi(\omega)=d)},
\end{align}
subject to
\begin{align}
\mathbb{P}_{\infty}(\phi(\omega)=0)=1-\epsilon.
\end{align}
Because of the convexity of the K-L divergence \cite[P. $32$]{cover2006elements}, under the finiteness assumption in \eqref{Eqn:finiteAssumption}, from Proposition $3.5$ in \cite{tsitsiklis1993extremal}, we know that the above optimal quantization rule has a randomized likelihood-ratio-based structure. Note that the censoring strategy can be regarded as the special quantization case where $D=\infty$. Since the randomized likelihood-ratio structure holds for every finite $D$, it carries over to the censoring strategy as well. The proof is thus complete.
\end{proof}
\begin{remark}
We assume there is no point mass for $X_k\in\Omega$ under either $\mathbb{P}_{1}$ or $\mathbb{P}_{\infty}$, but there may still exist a point mass for the likelihood ratio $L(X_k)$. We hence consider splitting those points that belong to the boundary by randomization. We should also note that for any randomized likelihood-ratio-based censoring strategy, there always exists a deterministic observation-based strategy that has the same post-censoring K-L divergence. If there is no point mass on the boundary, i.e., $\mathbb{P}_{\infty}(L(X_k)=\underline{L}_c)=0 $ and $\mathbb{P}_{\infty}(L(X_k)=\overline{L}_c)=0$, both the likelihood-ratio-based and observation-based strategies become deterministic and the optimal censoring strategy is unique; otherwise there are infinitely many observation-based strategies.
\end{remark}
\begin{remark}
Rago et al. \cite{rago1996censoring} first introduced the concept of censoring strategy. The authors proved that with several different detection performance indices, the likelihood ratio of the \emph{no send} region is one single interval. In particular, the authors stated in Theorem 3 of \cite{rago1996censoring} that to maximize the Ali-Silvey distance, the \emph{no send} region should be one single interval, which is very similar to our result.
An important limitation of \cite{rago1996censoring} is the assumption that there is no point mass for the likelihood ratio function. Our approach does not rely on such an assumption.
\end{remark}
\begin{remark}
In general, the optimal censoring region can only be obtained numerically. The likelihood-ratio-based structure coupled with the equality constraint established in Theorem \ref{Theorem:EqualGreaterKL} can significantly reduce the computation load, especially for the scenarios where the observations are of high-dimension. The search space of the optimal censoring strategy is reduced from all admissible strategies that satisfy the energy constraint defined in \eqref{Eqn:energyconstraint} to the very special class, for which we only need to determine two parameters: the upper and lower bounds of the likelihood ratio of the \emph{no send} region.
\end{remark}
\section{Numerical Examples} \label{Section:Numerical Example}
In this section, by simulations, we show that the censoring strategy has better trade-off curves between the detection performance and the energy constraint than a random policy and the DE-CuSum algorithm proposed in \cite{banerjee2012data}.
We consider the problem of mean shift detection in Gaussian noise. Specifically, we assume that before the change event happens, the observations are i.i.d. and have the distribution
$f_0\thicksim\mathcal{N}(0,1),$
whereas the observations are i.i.d. with the post-change distribution
$f_1\thicksim\mathcal{N}(1,1).$
\emph{ Example 1.} We compare the asymptotic detection performance, when the ARL goes to infinity, of a random policy with the asymptotically optimal censoring strategy proposed in Theorem \ref{Theorem:AsymOptCuSum}. The random policy has the form
\begin{align}
\theta_k = \left\{
\begin{array}{ll}
1, & \text{if } p\leq \epsilon,\\
0, & \text{otherwise},
\end{array} \right.
\end{align}
where $p$ is a random variable with a uniform distribution: $p\thicksim \text{unif(0,1)}.$
Such a random policy is very simple and is among the easiest strategies to implement locally at the sensor nodes.
For simulation, we keep the ARL around $6500$ and simulate the ``worst-worst case'' detection delay $\mathcal{D}^{\mu}_L(T)$ defined in \eqref{Eqn:DetectionDelayLorden} under various energy levels. Note that both the random policy and the censoring strategy are stationary, so the equalizer rule (e.g., \cite[Page 134]{poor2009quickest}) holds. To simulate the detection delay we can thus let the change event happen at the very beginning, i.e., $\nu=1$. The simulation result is shown in Fig. \ref{Fig:censorrandom}, from which we can see that the censoring strategy significantly outperforms the random one. In particular, even when the available average energy units of each transmission is $0.1$, there are only about $5$ extra time slots of delay compared with the case when there is no energy constraint ($\epsilon=1$).
\begin{figure}
\centering
\includegraphics[width=3in]{censorrandom}\vspace{-2mm}
\caption{ Detection delay $\mathcal{D}^{\mu}_L(T)$ of different observation transmission scheme as a function of available average energy units of each transmission $\epsilon$. } \label{Fig:censorrandom}
\vspace{-4mm}
\end{figure}
\emph{Example 2.} The DE-CuSum algorithm is now simulated under the same setting as in the first example. The parameters of DE-CuSum are chosen as follows: $h=\infty$ and the deterministic incremental parameter $\mu$ (not to be confused with the censoring-strategy notation in our paper) is approximated with the following equation
\begin{align}
\mu\approx \frac{\epsilon}{1-\epsilon}\mathbb{D}(f_0||f_1). \label{Eqn:approximuDECuSum}
\end{align}
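For the Gaussian model above, $\mathbb{D}(f_0||f_1)=1/2$, so, for instance, the energy level $\epsilon=0.1$ gives $\mu\approx (0.1/0.9)\times 0.5 \approx 0.056$, the value used in Example 3 below.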
Note that for the DE-CuSum algorithm, the sequence of information available at the decision maker is correlated, so the equalizer rule no longer holds, which makes it difficult to simulate the ``worst-worst case'' detection delay. The detection delay is then approximated by letting $\nu$ be $1,2,\ldots,10$ and taking the maximal value as the ``worst-worst case'' detection delay. Obviously, the simulated detection delay is less than or equal to the actual one associated with the DE-CuSum algorithm.
The simulation result is shown in Fig. \ref{Fig:censorDECuSum}, and we can see that while the detection delays of the two schemes are approximately the same when the available energy is large enough ($\epsilon>0.5$), the censoring strategy has considerably less detection delay than the DE-CuSum algorithm when the available energy is severely limited.
\begin{figure}
\centering
\includegraphics[width=3in]{censorDECuSum}\vspace{-2mm}
\caption{ Detection delay $\mathcal{D}^{\mu}_L(T)$ of different observation transmission scheme as a function of available average energy units of each transmission $\epsilon$. } \label{Fig:censorDECuSum}
\vspace{-4mm}
\end{figure}
\emph{Example 3.} A typical evolution of the detection statistic $S_k$ for the CuSum algorithm with the censoring strategy, the random policy and the DE-CuSum algorithm is shown in Fig. \ref{Fig:TimeSeries}. The scenario where the energy is severely limited, i.e., $\epsilon=0.1$, is simulated. From \eqref{Eqn:approximuDECuSum}, to meet the energy constraint, $\mu$ is set to be $0.056$ for the DE-CuSum algorithm. To keep the ARL of the different schemes around $6500$, the thresholds for the censoring strategy, the random policy and the DE-CuSum algorithm are set to be $690, 101$ and $98$, respectively. The change event is assumed to happen at $\nu=20$. As depicted in the figure, the censoring strategy has the least detection delay. Though this evolution is just one realization of the three algorithms, it can provide insight into why the censoring strategy outperforms the DE-CuSum algorithm. Note that there are deterministic increment periods for the DE-CuSum algorithm, i.e., when the detection statistic $S_k<0$, it increases by $\mu$ at each time instant regardless of the observations. Therefore, if the change event happens during these periods, the DE-CuSum algorithm cannot respond to the change quickly enough. What is more, when the energy is severely limited, most of the time before the change event happens the DE-CuSum algorithm will be undergoing the deterministic increment periods, so it is very likely that the change event happens during these periods. On the contrary, the censoring strategy is event-triggered. If the observation contains sufficient information indicating the change event (the observation lies outside the \emph{no send} region), it will be delivered to the decision maker in time. Even if the observation is not sent, the decision maker still obtains rough information about the observation (that its likelihood ratio belongs to the \emph{no send} region).
\begin{figure}
\centering
\includegraphics[scale=0.48]{TimeSeriesCuSum}\vspace{-1mm}
\caption{ Typical evolution of the CuSum algorithm with censoring strategy, the random policy and DE-CuSum algorithm when $\epsilon=0.1$. } \label{Fig:TimeSeries}
\vspace{-4mm}
\end{figure}
\emph{Example 4.} The censoring strategy is now coupled with the SRP detection procedure. As in the first example, the random policy is used for comparison. Note that the SRP procedure has a randomized initial point and there is no analytic expression for the underlying distribution defined in \eqref{Eqn:QuasiStaDistri}. Hence it is impossible to simulate the SRP procedure through Monte Carlo experiments. We resort to the techniques developed in \cite{moustakides2009numerical}, i.e., solving a system of integral equations, to obtain the performance metrics: the ARL and the ``worst case'' conditional average delay, $\mathcal{D}^{\mu}_P(T)$. The sample density for the integration interval $[0,A]$ is set to be $0.1$. In these scenarios, the ARL is kept around $1500$ by adjusting the threshold $A$. The simulation results are shown in Fig. \ref{Fig:SRPcensorrandom}. As depicted in the figure, the censoring strategy significantly outperforms the random policy. We should also note that the censoring strategy has a very good trade-off curve in itself: when $\epsilon=0.1$, the detection delay is only $3$ more time slots than that in the scenario where there is no energy constraint ($\epsilon=1$).
\begin{figure}
\centering
\includegraphics[width=3in]{SRPcensorrandom}\vspace{-2mm}
\caption{ Detection delay $\mathcal{D}^{\mu}_P(T)$ of different observation transmission scheme as a function of available average energy units of each transmission $\epsilon$. } \label{Fig:SRPcensorrandom}
\vspace{-4mm}
\end{figure}
In the remainder, we show how we benefit from the two conditions (the two necessary properties of the optimal censoring strategy) in developing the numerical algorithm that finds the optimal censoring strategy. Note that in our case, the likelihood ratio function $L(x)=\frac{f_1(x)}{f_0(x)}$ is continuous and monotone w.r.t. $x$, which together with the likelihood-ratio-based property implies that the optimal censoring region is a single interval not only in terms of the likelihood ratio of the observation but also in terms of the observation itself. In other words, the censoring region has the following form
\begin{align}
\theta_k = \left\{
\begin{array}{ll}
0, & \text{if } x\in[a,b_a],\\
1, & \text{otherwise},
\end{array} \right.
\end{align}
where
\begin{align}
\int_a^{b_a}f_0(x)\mathrm{d}x=1-\epsilon. \label{Eqn:ExampleEqualCons}
\end{align}
The numerical algorithm works as follows. As $F_0(-3.5)=1-F_1(3.5)\thickapprox0.0002$, the algorithm focuses on the truncated interval $[-3.5,4.5]$ instead of $\mathbb{R}$. The interval is discretized uniformly with step size $\delta=0.001.$ Given an energy constraint $\epsilon$, $a$ varies over these equidistant points, and the corresponding $b_a$ can be uniquely determined by the equation \eqref{Eqn:ExampleEqualCons}. For each candidate pair $(a,b_a)$, the associated K-L divergence is computed using Simpson's rule \cite{tallarida1987area}. Among all the candidate pairs, the one with the maximal K-L divergence is the optimal one. The total time cost of our algorithm for the cases $\epsilon=\{0.1,0.2,\ldots,1\}$ is around $0.11$ seconds. However, without these two conditions, the number of intervals constituting the censoring region could be any positive integer and the $\mathbb{P}_{\infty}$-measure of the \emph{send} region could be any real value not exceeding $\epsilon$. Therefore it would be impossible to come up with an efficient algorithm.
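A minimal sketch of this search in Python is given below; for brevity it uses a coarser step of $0.01$ for the left endpoint $a$ (the integration grid keeps $\delta=0.001$), and the function names are ours.
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.integrate import simpson

f0, f1 = norm(0.0, 1.0), norm(1.0, 1.0)
eps = 0.1
grid = np.arange(-3.5, 4.5 + 0.001, 0.001)   # truncated, discretized interval

def piece(xs):
    # integral of f1*log(f1/f0) over one component of the send region
    if xs.size < 3:
        return 0.0
    return simpson(f1.pdf(xs) * np.log(f1.pdf(xs) / f0.pdf(xs)), x=xs)

def post_censoring_kl(a, b):
    # Two send components plus the censored atom [a, b].
    p1, p0 = f1.cdf(b) - f1.cdf(a), f0.cdf(b) - f0.cdf(a)
    return piece(grid[grid <= a]) + piece(grid[grid >= b]) \
        + p1 * np.log(p1 / p0)

# For each a with F0(a) < eps, b_a solves F0(b_a) - F0(a) = 1 - eps.
candidates = [(a, norm.ppf(f0.cdf(a) + 1 - eps))
              for a in np.arange(-3.5, norm.ppf(eps), 0.01)]
a_opt, b_opt = max(candidates, key=lambda ab: post_censoring_kl(*ab))
print(a_opt, b_opt, post_censoring_kl(a_opt, b_opt))
\end{verbatim}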
\section{Conclusion and Future Work} \label{Section:Conclusion}
In this paper, we studied the problem of quickest change detection in the minimax setting (both Lorden's and Pollak's formulations) in a scenario where the observations are collected by a sensor with limited energy. To deal with the energy constraint, the sensor adopts a censoring strategy, i.e., the sensor only sends the observations that fall into a certain region to the decision maker. We proved that the censoring strategy that maximizes the post-censoring K-L divergence, coupled with the CuSum algorithm and the SRP detection procedure, is asymptotically optimal, as the ARL goes to infinity, for Lorden's and Pollak's settings, respectively. Simulation results demonstrated a considerably better performance than a random policy and the DE-CuSum algorithm. In general, the optimal censoring strategy can only be found numerically. We provided two properties of the optimal censoring strategy, which can be utilized to significantly reduce the computation load.
For future work, there are multiple interesting directions: studying whether the censoring strategy that has the maximal post-censoring K-L divergence coupled with the CuSum algorithm is strictly optimal for Lorden's problem; exploring the problem with multiple sensor nodes; and investigating time-varying censoring strategies, which may depend on the detection statistic.
\bibliographystyle{IEEETran}
\section{Introduction}We consider the class $\mathcal{A}$ of all analytic functions defined on the unit disc $\mathbb{D}:=\{z: |z|<1\}$ and normalised by the condition $f(0)=f'(0)-1=0$ as well as its subclass $\mathcal{S}$ consisting of all univalent functions. For any two subclasses $\mathcal{F}$ and $\mathcal{G}$ of $\mathcal{A}$, the $\mathcal{G}$-radius of the class $\mathcal{F}$, denoted by $R_{\mathcal{G}} (\mathcal{F})$, is the largest number $R_{\mathcal{G}} \in (0,1)$ such that $r^{-1}f(rz)\in \mathcal{G}$ for all $f\in \mathcal{F}$ and $0<r<R_{\mathcal{G}}$. In 1969, Ba\c{s}g\"{o}ze \cite{Bas} studied the radii of starlikeness and convexity for polynomial functions which are non-vanishing in the unit disc $\mathbb{D}$. This study was motivated by the work of Alexander \cite{Alex}, who showed that the radius of starlikeness, and hence the radius of univalence, for the function $f$ defined by $f(z)=zP(z)$, where $P$ is a polynomial of degree $n>0$ with zeros outside the unit disc, is $(n+1)^{-1}$. Further, Ba\c{s}g\"{o}ze \cite{Bas2} also studied radius results related to $\alpha$-spirallike and $\alpha$-convex spirallike functions of order $\lambda$ for various kinds of functions obtained from polynomials, such as $zP(z),P(z)^{\beta/n}, z(P(z))^{\beta/n}, zM(z)/N(z)$ where $P,M,\ \text{and}\ N$ are polynomials with zeros outside the unit disc. In 2000, Gangadharan et al.\ \cite{Ganga2} (see also Kwon and Owa \cite{KwanOwa}) determined the radius of $p$-valent strong starlikeness of order $\gamma$ for the function $F:\mathbb{D}\to\mathbb{C}$ defined by $F(z):=f(z) (Q(z))^{\beta/n}$, where $f$ is a $p$-valent analytic function, $Q$ has properties similar to those of the polynomials considered in the paper by Ba\c{s}g\"{o}ze, and $\beta$ is a positive real number. The present study extends this investigation to several recently studied subclasses of starlike functions defined by subordination.
For two analytic functions $f$ and $g$, $f$ is said to be subordinate to $g$, denoted by $f\prec g$, if $f=g\circ w$ for some analytic function $w:\mathbb{D}\rightarrow \mathbb{D}$, with $w(0)=0$. When the function $g$ is univalent, the subordination $f\prec g$ holds if and only if $f(0)=g(0)$ and $f(\mathbb{D})\subseteq g(\mathbb{D})$. Several subclasses of the class $\mathcal{A}$ are defined using the concept of subordination. For an analytic function $\varphi:\mathbb{D}\to\mathbb{C}$, the class $\mathcal{S}^{*}(\varphi)$ consists of all functions $f\in \mathcal{A}$ satisfying the subordination \[\dfrac{zf'(z)}{f(z)}\prec \varphi(z).\]
Shanmugam \cite{Shan} studied convolution theorems for more general classes but with the stronger assumption of convexity of $\varphi$. Ma and Minda \cite{MaMinda} later gave a unified treatment of growth, distortion, rotation and coefficient inequalities for the class $\mathcal{S}^{*}(\varphi)$ when the superordinate function $\varphi$ is a function with positive real part and $\varphi(\mathbb{D})$ is symmetric with respect to the real axis and starlike with respect to $\varphi(0)=1$. The class of starlike functions of order $\alpha$, $\mathcal{ST}(\alpha)$, is a special case when $\varphi(z)=(1+(1-2\alpha)z)/(1-z)$; for $\alpha=0$, the usual class of starlike functions is obtained. Other subclasses of the class of starlike functions can also be derived for different choices of $\varphi$. For $\varphi_{par}(z)=1+(2/\pi^2) (\log ((1+\sqrt{z})/(1-\sqrt{z})) )^2$, the class $\mathcal{S}_{p}=\mathcal{S}^{*}(\varphi_{par})$ is the class of starlike functions associated with a parabola; the image of the unit disc under the function $\varphi_{par}$ is the set $\varphi_{par}(\mathbb{D})=\{w=u+\iota v: v^2<2u-1\}=\{w: |w-1|<\RE w\}$. This class was introduced by R\o nning \cite{RonPara}. Mendiratta et al. \cite{Exp} investigated the class of starlike functions associated with the exponential function, $\mathcal{S}^{*}_\mathit{e}=\mathcal{S}^{*}(\mathit{e}^z)$. Similarly, various properties of the class of starlike functions associated with a cardioid, $\mathcal{S}^{*}_{c}=\mathcal{S}^{*}(\varphi_{c}(z))$ for $\varphi_{c}(z)=1+(4/3)z+(2/3)z^2$, are studied by Sharma et al.\ \cite{KanNavRavi}. A function $f$ belongs to $\mathcal{S}^{*}_{c}$ if $zf'(z)/f(z)$ lies in the region bounded by the cardioid $\{x+\iota y: (9x^2+9y^2-18x+5)^2 -16(9x^2+9y^2-6x+1)=0\}$. Kumar and Ravichandran \cite{KumarRavi} discussed the class $\mathcal{S}^{*}_{R}=\mathcal{S}^{*}(\psi(z))$ of starlike functions associated with a rational function for $\psi(z)=1+((z^2+kz)/(k^2-kz)),\ k=\sqrt{2}+1$. In 2020, Wani and Swaminathan \cite{LatSwami} studied the class of starlike functions associated with a nephroid domain, $\mathcal{S}^{*}_{Ne}=\mathcal{S}^{*}(\varphi_{Ne})$ with $\varphi_{Ne}(z)=1+z-z^3/3$. Goel and Kumar \cite{PriyankaSivaSig} explored various properties of the class $\mathcal{S}^{*}_{SG}=\mathcal{S}^{*}(2/(1+\mathit{e}^{-z}))$, known as the class of starlike functions associated with the modified sigmoid function. Radius problems relating to the ratio of analytic functions have recently been considered in \cite{LeckoRaviAsha,Madhuravi}. In 2020, Wani and Swaminathan \cite{LatSwami2} discussed radius problems for functions associated with the nephroid domain. Cho and others have also investigated some interesting radius problems; see \cite{ChoVirendSushRavi,EbadianCho}.
For a given function $f$ starlike of order $\alpha$, $Q$ a polynomial of degree $n$ whose zeros are outside the unit disk $\mathbb{D}$ (in other words, $Q$ is non-vanishing in $\mathbb{D}$) and $\beta$ a positive real number, we determine the $\mathcal{M}$-radius of the function $F:\mathbb{D}\to\mathbb{C}$ defined by $F(z):=f(z) (Q(z))^{\beta/n}$ for various choices of the class $\mathcal{M}$. In Section 2, we determine the radius of starlikeness of order $\lambda$ for the function $F$, and obtain, as a special case, the radius of starlikeness for $F$. This is done by studying a mapping property of the function $w:\mathbb{D}\to\mathbb{C}$ defined by $w(z):=zF'(z)/F(z)$. Indeed, we find the smallest disc containing $w(\overline{\mathbb{D}}_r)$ where $\mathbb{D}_r:=\{z\in \mathbb{C}:|z|<r\}$. This disc is used in the investigation of all the radius problems. In Sections 3-4, we respectively compute the values of the radius of starlikeness associated with the exponential function and the radius of starlikeness associated with a cardioid for the function $F$. In Sections 5-7, we determine the radius of starlikeness associated with a particular rational function, the radius of starlikeness associated with nephroid domain and the radius of starlikeness associated with modified sigmoid function for the function $F$. All the radii obtained are shown to be sharp. Several known radii results are obtained as special cases for specific values of $\alpha$ and $\beta$.
\section{Starlike functions of order $\lambda$}
For $0\leqslant \lambda <1$, the class $\mathcal{ST}(\lambda)$ of starlike functions of order $\lambda$ consists of all functions $f\in \mathcal{A}$ satisfying $\RE (zf'(z)/f(z))>\lambda$. Let $f\in \mathcal{ST}(\alpha)$ and $Q$ be a polynomial of degree $n>0$ with all of its zeros in the region $\mathbb{C}\backslash\mathbb{D}$. Since $Q$ is non-vanishing in $\mathbb{D}$, the function $F:\mathbb{D}\rightarrow \mathbb{C}$ defined by
\begin{equation}\label{eqn2.1}
F(z):=f(z) (Q(z))^{\frac{\beta}{n}},\quad \beta>0
\end{equation}is analytic in $\mathbb{D}$. We assume $Q(0)=1$ throughout this paper so that $F\in\mathcal{A}$.
The following theorem gives the $\mathcal{ST}(\lambda)$-radius for the function $F$ and it is independent of the degree of the polynomial $Q$.
\begin{theorem}\label{thm2.1}If the function $f\in \mathcal{ST}(\alpha)$, then the radius of starlikeness of order $\lambda$ for the function $F$ defined in \eqref{eqn2.1} is given by \[R_{\mathcal{ST}(\lambda)} (F)=\dfrac{2(1-\lambda)}{2-2\alpha+\beta+\sqrt{(2-2\alpha+\beta)^2+4(1-\lambda)(2\alpha+\beta-1-\lambda)}}.\]
\end{theorem}
\begin{proof}
We start by finding the disc in which $zF'(z)/F(z)$ lies for $z\in \overline{\mathbb{D}}_{r}$; then, using this disc, we determine the radius of starlikeness of order $\lambda$ for $F$.
For the function $F$ given by \eqref{eqn2.1}, a calculation shows that
\begin{equation}\label{eqn2.2}
\dfrac{zF'(z)}{F(z)}=\dfrac{zf'(z)}{f(z)}+\dfrac{\beta}{n}\dfrac{zQ'(z)}{Q(z)}.
\end{equation}
Since $f\in \mathcal{ST}(\alpha)$, it is well-known that $zf'(z)/f(z)$ has positive real part and so
\begin{equation}\label{eqn2.3a}
\left| \frac{zf'(z)}{f(z)}-\dfrac{1+(1-2\alpha)r^2}{1-r^2}\right| \leqslant \dfrac{2(1-\alpha)r}{1-r^2},\quad |z|\leqslant r.
\end{equation}
or equivalently,
$zf'(z)/f(z)$ lies in the disc $\mathbb{D} (a_f (r); c_f(r))$ for $|z|\leqslant r$ where
\begin{equation}\label{eqn2.3}
a_f(r)=\dfrac{1+(1-2\alpha)r^2}{1-r^2},\quad c_f(r)=\dfrac{2(1-\alpha)r}{1-r^2}.
\end{equation}
Let $z_{k},\ k=1,2,\ldots ,n$, denote the zeros of the polynomial $Q$; then the polynomial $Q$ is a constant multiple of $\prod_{k=1}^n (z-z_k)$ and so
\begin{equation}\label{eqn2.4}
\dfrac{zQ'(z)}{Q(z)}=\sum_{k=1}^{n}\dfrac{z}{z-z_{k}}.
\end{equation}
Since $z_{k}\in \mathbb{C}\backslash \mathbb{D}$ for every $k$, the bilinear transformation $z/(z-z_k)$ maps $\overline{\mathbb{D}}_r$ to a disc. Indeed, \cite[Lemma~3.2]{GangaRaviShan} shows that \begin{equation*}
\left|\dfrac{z}{z-z_{k}}+\dfrac{r^2}{1-r^2}\right|\leqslant \dfrac{r}{1-r^2},\quad |z|\leqslant r
\end{equation*} for every $k$ and hence, using \eqref{eqn2.4}, we have
\begin{equation}\label{eqn2.5}
\left|\dfrac{zQ'(z)}{Q(z)}+\dfrac{nr^2}{1-r^2}\right|\leqslant \dfrac{ nr}{1-r^2},\quad |z|\leqslant r.
\end{equation}
Using \eqref{eqn2.2}, we get
\begin{align}
\left|\dfrac{zF'(z)}{F(z)}-\dfrac{1-(2\alpha -1+\beta)r^2}{1-r^2}\right| &=\nonumber \left|\dfrac{zf'(z)}{f(z)}-\dfrac{1+(1-2\alpha )r^2}{1-r^2}+\dfrac{\beta}{n}\dfrac{zQ'(z)}{Q(z)}+\dfrac{\beta r^2}{1-r^2}\right|\\
&\leqslant \left|\dfrac{zf'(z)}{f(z)}-\dfrac{1+(1-2\alpha )r^2}{1-r^2}\right|+\left|\dfrac{\beta}{n}\dfrac{zQ'(z)}{Q(z)}+\dfrac{\beta r^2}{1-r^2}\right|,\label{eqn2.6}
\end{align}
for $|z|\leqslant r$.
Since $\beta\in \mathbb{R}$ is positive, using the equations \eqref{eqn2.3a} and \eqref{eqn2.5}, the inequality \eqref{eqn2.6} gives
\begin{equation}\label{eqn2.7}
\left|\dfrac{zF'(z)}{F(z)}-\dfrac{1-(2\alpha -1+\beta)r^2}{1-r^2}\right| \leqslant \dfrac{(2-2\alpha+\beta)r}{1-r^2},\quad |z|\leqslant r.
\end{equation}
Define the functions $a_F $ and $c_F$ by
\[ a_F(r):= \dfrac{1-(2\alpha -1+\beta)r^2}{1-r^2} \quad \text{and}\quad c_F(r):= \dfrac{(2-2\alpha+\beta)r}{1-r^2}. \]
so that $ zF'(z)/F(z)\in \mathbb{D}(a_F (r);c_F(r))$.
It is observed that the center $a_{F}$ is an increasing function of $r$ for $2\alpha+\beta-2<0$, and is a decreasing function of $r$ for $2\alpha+\beta-2 \geqslant 0$.
From \eqref{eqn2.7}, it follows that
\begin{align}
\RE \dfrac{zF'(z)}{F(z)} &\geqslant \notag a_{F}(r)-c_{F}(r)\\
&= \notag \dfrac{1-(2\alpha -1+\beta)r^2}{1-r^2}-\dfrac{(2-2\alpha+\beta)r}{1-r^2}\\
&=\dfrac{1-(1-2\alpha)r}{1+r}-\dfrac{\beta r}{1-r}=: \psi(r) . \label{eqn2.10}
\end{align}
The equation $\psi(r)=\lambda$ simplifies to
\begin{equation}\label{eqn2.9}
(1-2\alpha-\beta+\lambda)r^2-(2-2\alpha+\beta)r+1-\lambda=0
\end{equation} and so the smallest positive root of the equation $\psi(\alpha, \beta, r)=\lambda$
in the interval $(0,1)$ is given by \[\sigma_{0}:=\dfrac{2(1-\lambda)}{2-2\alpha+\beta+\sqrt{(2-2\alpha+\beta)^2+4(1-\lambda)(2\alpha+\beta-1-\lambda)}}.\]
It can be seen that
\begin{align}
\psi '(r)&=\nonumber - \dfrac{2(1-\alpha)(1-r)^2+\beta(1+r)^2}{(1-r^2)^2}<0,
\end{align}
which shows that $\psi$ is a decreasing function of $r\in (0,1)$. Therefore,
for $r<\sigma_{0}$, we have $ \psi(r)>\psi(\sigma_{0})=\lambda$ and so \eqref{eqn2.10} implies that $ \RE (zF'(z)/F(z))\geqslant \psi(r)>\lambda$ for all $r< \sigma_{0}$, or in other words, the radius of starlikeness of order $\lambda$ of the function $F$ is at least $\sigma_{0}$.
To show that the radius obtained is the best possible, take $f(z)=z(1-z)^{2\alpha-2}\in \mathcal{ST}(\alpha)$ and the polynomial $Q(z)=(1+z)^{n}$. For these choices of the functions $f$ and $Q$, we have
\begin{align}
F(z)&= \notag z(1-z)^{2\alpha-2} (1+z)^{\beta},\\
\intertext{which implies}
\dfrac{zF'(z)}{F(z)}&=\notag 1+\dfrac{(2-2\alpha)z}{1-z}+\dfrac{\beta z}{1+z}\\
&=\notag \dfrac{(1-2\alpha-\beta)z^2+(2-2\alpha+\beta)z+1}{1-z^2}\\
&=\lambda+ \dfrac{(1-2\alpha-\beta+\lambda)z^2+(2-2\alpha+\beta)z+1-\lambda}{1-z^2}.\label{eqn2.11}
\end{align}
Using the fact that $\sigma_{0}$ is the positive root of the equation \eqref{eqn2.9}, the equation \eqref{eqn2.11} shows that $\RE (zF'(z)/F(z))=\lambda$ for $z=-\sigma_{0}$, proving the sharpness of $\sigma_0$.
\end{proof}
For $\lambda =0$, the radius of starlikeness for the function $F$ given by \eqref{eqn2.1} is \[R_{\mathcal{ST}}(F)=\dfrac{2}{2-2\alpha+\beta+\sqrt{(2-2\alpha+\beta)^2+4(2\alpha+\beta-1)}}.\]
When $\alpha$ goes to $1$ in Theorem~\ref{thm2.1} we get that the radius of starlikeness of order $\lambda$ for the function $F(z)=z(Q(z))^{\beta/n}$, where $Q$ is a non-constant polynomial of degree $n$ non-vanishing on the unit disc and $\beta >0$, comes out to be $(1-\lambda)/(\beta+1-\lambda)$. This result for $\beta=n$ coincides with the one obtained by Ba\c{s}g\"{o}ze in \cite[Theorem~3]{Bas}. Moreover, by letting $\beta\to 0$ in Theorem~\ref{thm2.1}, we obtain the radius of starlikeness of order $\lambda$ for the class of starlike functions of order $\alpha, 0\leqslant \alpha <1$ (see \cite[p.~88]{Goodman}).
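The sharpness part of the proof is easy to check numerically. The following sketch (in Python; the parameter values are illustrative) evaluates the radius of Theorem~\ref{thm2.1} and verifies that, for the extremal choice $F(z)=z(1-z)^{2\alpha-2}(1+z)^{\beta}$, the minimum of $\RE(zF'(z)/F(z))$ over the circle $|z|=R_{\mathcal{ST}(\lambda)}(F)$ equals $\lambda$, attained at $z=-R_{\mathcal{ST}(\lambda)}(F)$.
\begin{verbatim}
import numpy as np

alpha, beta, lam = 0.25, 1.0, 0.1   # illustrative parameters

disc = (2 - 2*alpha + beta)**2 + 4*(1 - lam)*(2*alpha + beta - 1 - lam)
R = 2*(1 - lam) / (2 - 2*alpha + beta + np.sqrt(disc))

theta = np.linspace(0.0, 2.0*np.pi, 200001)
z = R * np.exp(1j * theta)
w = 1 + (2 - 2*alpha)*z/(1 - z) + beta*z/(1 + z)   # = zF'(z)/F(z)
print(R, w.real.min())   # the minimum is ~ lam, attained at z = -R
\end{verbatim}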
\section{Starlike functions associated with the exponential function}
The class $\mathcal{S}^{*}_{\mathit{e}}$ of starlike functions associated with the exponential functions consists of all the functions $f\in \mathcal{A}$ which satisfy $ zf'(z)/f(z)\prec \mathit{e}^z$. This class was introduced and studied by Mendiratta et al. \cite{Exp} in 2015. The subordination definition is equivalent to the inequality $|\log (zf'(z)/f(z))| <1$.
The main result of the section provides the $\mathcal{S}^{*}_{\mathit{e}}$ radius for the function $F$ given by \eqref{eqn2.1}.
\begin{lemma}\cite[Lemma~2.2]{Exp}\label{lem4.1}
For $1/\mathit{e}<a<\mathit{e}$, let $r_a$ be given by
\begin{align*}
r_a &=\begin{dcases}
a-\frac{1}{\mathit{e}} &\text{ if }\ \frac{1}{\mathit{e}}< a \leqslant \frac{\mathit{e}+\mathit{e}^{-1}}{2}\\
\mathit{e}-a &\text{ if }\ \frac{\mathit{e}+\mathit{e}^{-1}}{2} \leqslant a <\mathit{e}.
\end{dcases}
\end{align*}
Then, $\{w: |w-a|<r_a\} \subset \{w: |\log w|<1\}= \Omega_{\mathit{e}}$, where $\Omega_{\mathit{e}}$ is the image of the unit disc $\mathbb{D}$ under the exponential function.
\end{lemma}
\begin{theorem}\label{thm4.2}
If the function $f\in \mathcal{ST}(\alpha)$, then the radius of starlikeness associated with the exponential function for the function $F$ defined in \eqref{eqn2.1} is given by
\begin{align*}
R_{\mathcal{S}^{*}_{\mathit{e}}}(F)&= \begin{dcases}
\sigma_{0} &\text{ if }\ 2\alpha+\beta-2\geqslant 0\\
\sigma_{0} &\text{ if }\ 2\alpha+\beta-2<0\ \text{and}\ X(\alpha,\beta)\leqslant 0\\
\tilde{\sigma_{0}}&\text{ if }\ 2\alpha+\beta-2<0\ \text{and}\ X(\alpha,\beta)> 0,
\end{dcases}
\end{align*}
where
\begin{align}
\sigma_{0}=& \nonumber\dfrac{2(\mathit{e}-1)}{\mathit{e}(2-2\alpha+\beta)+\sqrt{(\mathit{e}(2-2\alpha+\beta))^2-4(\mathit{e}-1)(1-\mathit{e}(2\alpha-1+\beta))}},\\
\tilde{\sigma_{0}} =& \nonumber \dfrac{2(\mathit{e}-1)}{(2-2\alpha+\beta)+\sqrt{(2-2\alpha+\beta)^2-4(\mathit{e}-1)(2\alpha-1+\beta-\mathit{e})}}
\intertext{and}
X(\alpha,\beta) =& \label{eqn4.1} 2(2+\beta-2\alpha)(1+\mathit{e}^2-2\mathit{e}(2\alpha+\beta -1))((-2\alpha+2+\beta
)\\ &\quad \nonumber-\sqrt{(-2\alpha+2+\beta)^2-4(\mathit{e}-1)(2\alpha-1+\beta-\mathit{e})})\\
&\quad\nonumber +4(2\alpha+\beta-1-\mathit{e})(\mathit{e}^2-1)(2\alpha+\beta-2).
\end{align}
\end{theorem}
\begin{proof}
Our aim is to show that $\mathbb{D}(a_F (r);c_F(r)) \subset \Omega_{\mathit{e}}$ for all $0<r\leqslant R_{\mathcal{S}^{*}_{\mathit{e}}}(F)$. Let
\begin{align*}
\sigma_{0}:=&\dfrac{2(\mathit{e}-1)}{\mathit{e}(2-2\alpha+\beta)+\sqrt{(\mathit{e}(2-2\alpha+\beta))^2-4(\mathit{e}-1)(1-\mathit{e}(2\alpha-1+\beta))}},\intertext{and}
\tilde{\sigma_{0}} :=&\dfrac{2(\mathit{e}-1)}{(2-2\alpha+\beta)+\sqrt{(2-2\alpha+\beta)^2-4(\mathit{e}-1)(2\alpha-1+\beta-\mathit{e})}}.
\end{align*}
It is clear that $\sigma_{0}$ and $\tilde{\sigma_{0}}$ are both positive, as both $2-2\alpha+\beta$ and $\mathit{e}-1$ are positive.
For the polynomial
\begin{equation}\label{eqn4.2}
\phi(r):=(1-\mathit{e}(-1+2\alpha+\beta))r^2-\mathit{e}(2-2\alpha+\beta)r+(\mathit{e}-1)
\end{equation} obtained from the equivalent form $\phi(r)=0$ of the equation $c_{F}(r)=a_{F}(r)-1/\mathit{e}$, it is observed that $\phi(0)=\mathit{e}-1>0,\ \phi(1)=-2\mathit{e} \beta<0 $, showing that there is a zero for $\phi$ in the interval $(0,1)$, namely $\sigma_{0}$. Also, the positive root of the equation $c_{F}(r)=\mathit{e}-a_{F}(r)$ or $\psi(r)=0$ with
\begin{equation}\label{eqn4.3}
\psi(r):=(-1+2\alpha+\beta-\mathit{e})r^2-(2-2\alpha+\beta)r+(\mathit{e}-1)
\end{equation} is $\tilde{\sigma_{0}}$. To verify that $\psi$ has a zero in the interval $(0,1)$, it is seen that $\psi(0)=\mathit{e}-1>0$ and $\psi(1)=4(\alpha-1)<0$.
Case (i): $2\alpha+\beta-2\geqslant 0$. Since the center $a_{F}$ in \eqref{eqn2.7} is a decreasing function of $r$, $a_{F}(r)>a_{F}(\sigma_0)\ \text{for}\ r\in (0,\sigma_0)$. By definition, $\sigma_0$ is the solution of the equation $c_{F}(r)=a_{F}(r)-1/\mathit{e}$; also, the radius in \eqref{eqn2.7} satisfies $c_{F}(r)>0,\ r\in (0,1)$; together these imply $a_{F}(r)>a_{F}(\sigma_{0})>1/\mathit{e}\ \text{for}\ r\in (0,\sigma_{0})$. Further, $a_{F}(r)< a_{F}(0)=1<(\mathit{e}+\mathit{e}^{-1})/2\approx1.54308\ \text{for}\ r\in (0,\sigma_0)$. Thus, it is established that \[\dfrac{1}{\mathit{e}}<a_{F}(r)< \dfrac{\mathit{e}+\mathit{e}^{-1}}{2},\ \text{for}\ r\in (0,\sigma_0).\]
Applying Lemma~\ref{lem4.1}, we get $\mathbb{D}(a_F (\sigma_0);c_F(\sigma_0)) \subset \Omega_{\mathit{e}}$; that is, the radius of starlikeness associated with the exponential function for the function $F$ is at least $\sigma_{0}$.
Case (ii): $2\alpha+\beta -2<0$ and $X(\alpha,\beta)\leqslant 0$. The number \[\tilde{\sigma_{1}}=\sqrt{\dfrac{1-2\mathit{e}+\mathit{e}^2}{1+2\mathit{e}+\mathit{e}^2-4\mathit{e} \alpha-2\mathit{e} \beta}}<1\]
is the positive root of the equation $a_{F}(r)=(\mathit{e}+\mathit{e}^{-1})/2$, or equivalently, $\zeta(r)=0$ with $\zeta(r):=(1+\mathit{e}^2-2\mathit{e}(2\alpha+\beta-1))r^2+2\mathit{e}-\mathit{e}^2-1$. Here, $\zeta(0)=2\mathit{e}-\mathit{e}^2-1\approx -2.9525<0$ and $\zeta(1)=-2\mathit{e}(2\alpha+\beta-2)>0$ justify the existence of a zero for the polynomial $\zeta$ in the interval $(0,1)$. Further,
\begin{align}\label{eqn4.4}
\zeta(\tilde{\sigma_{0}}) =& \dfrac{1}{4(-1-\mathit{e}+2\alpha+\beta)^2} \big (
2(2+\beta-2\alpha)(1+\mathit{e}^2-2\mathit{e}(2\alpha+\beta -1))\\&\quad\nonumber \times((-2\alpha+2+\beta
) -\sqrt{(-2\alpha+2+\beta)^2-4(\mathit{e}-1)(2\alpha-1+\beta-\mathit{e})}) \\\nonumber&\quad{}+ 4(2\alpha+\beta-1-\mathit{e})(\mathit{e}^2-1)(2\alpha+\beta-2)\big),
\end{align}
and from this with \eqref{eqn4.1} we infer that $X(\alpha,\beta) \leqslant 0$ implies $\zeta(\tilde{\sigma_{0}}) \leqslant 0$. Also, $\zeta(0)<0$ and $\zeta(\tilde{\sigma_{0}})\leqslant 0$, along with the fact that $\zeta(\tilde{\sigma_{1}})=0$, give $\tilde{\sigma_{0}} \leqslant \tilde{\sigma_{1}}$, which implies that $a_{F}(\tilde{\sigma_{0}})\leqslant a_{F}(\tilde{\sigma_{1}})=(\mathit{e}+\mathit{e}^{-1})/2$. The application of Lemma~\ref{lem4.1} gives $\mathbb{D}(a_F (\sigma_0);c_F(\sigma_0)) \subset \Omega_{\mathit{e}}$, or in other words, the radius of starlikeness associated with the exponential function for $F$ is at least $\sigma_{0}$.
Case (iii): $2\alpha+\beta -2<0$ and $X(\alpha,\beta)> 0$. Here, following the same line of thought as in Case (ii), $\zeta(\tilde{\sigma_{0}})>0$ implies $a_{F}(\tilde{\sigma_{0}})>(\mathit{e}+\mathit{e}^{-1})/2$, and thus Lemma~\ref{lem4.1} gives the required radius to be at least $\tilde{\sigma_{0}}$.
To show that the obtained radius values are the best possible, take $f(z)=z(1-z)^{2\alpha-2}\in \mathcal{ST}(\alpha)$ and the polynomial $Q(z)=(1+z)^{n}$. These choices give the expression for $zF'(z)/F(z)$, as already shown in the proof of Theorem~\ref{thm2.1}, to be
\begin{align}
\dfrac{zF'(z)}{F(z)} &=\dfrac{(1-2\alpha-\beta)z^2+(2-2\alpha+\beta)z+1}{1-z^2}\nonumber\\
&=\mathit{e}-\dfrac{(-1+2\alpha+\beta-\mathit{e})z^2-(2-2\alpha+\beta)z+\mathit{e}-1}{1-z^2}.\label{eqn4.6}
\intertext{It is seen that \eqref{eqn4.6} can also be written as} \dfrac{zF'(z)}{F(z)}&=\dfrac{1}{\mathit{e}}+\dfrac{(1-\mathit{e}(-1+2\alpha+\beta))z^2+\mathit{e}(2-2\alpha+\beta)z+\mathit{e}-1}{\mathit{e}(1-z^2)}.\label{eqn4.7}
\end{align}
The definition of the polynomial $\phi$ in \eqref{eqn4.2} for $r=\sigma_{0}$ together with \eqref{eqn4.7} gives \[\left|\log \dfrac{(-\sigma_{0})F'(-\sigma_{0})}{F(-\sigma_{0})}\right|=\left|\log \dfrac{1}{\mathit{e}}\right|=1,\]
proving sharpness for $\sigma_{0}$.
Further, the polynomial $\psi$ in \eqref{eqn4.3} for $r=\tilde{\sigma_{0}}$ and \eqref{eqn4.6} provide
\[\left|\log \dfrac{\tilde{\sigma_{0}}F'(\tilde{\sigma_{0}})}{F(\tilde{\sigma_{0}})}\right|=\left|\log \mathit{e}\right|=1.\]
This proves sharpness for $\tilde{\sigma_{0}}$.
\end{proof}
If we let $\alpha$ go to $1$ in Theorem~\ref{thm4.2}, then the radius of starlikeness associated with the exponential function for the function $F(z)=z(Q(z))^{\beta/n}$, where $Q$ is a non-constant polynomial of degree $n$ non-vanishing on the unit disc and $\beta >0$, comes out to be $(\mathit{e}-1)/(\mathit{e}\beta+\mathit{e}-1)$. Moreover, when $\beta\to 0$ in Theorem~\ref{thm4.2}, we obtain the radius of starlikeness associated with the exponential function for the class of starlike functions of order $\alpha, 0\leqslant \alpha <1$, obtained by Mendiratta et al. in \cite[Theorem~3.4]{Exp} and also by Khatter et al. in \cite[Theorem~2.17 (2)]{KhatSivaRaviExpo} for $\mathcal{S}^{*}_{0,\mathit{e}}$ with $A=1-2\alpha\ \text{and}\ B=-1$.
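For instance, for $\alpha=1/2$ and $\beta=1$ we have $2\alpha+\beta-2=0$, so the first case applies and $R_{\mathcal{S}^{*}_{\mathit{e}}}(F)=\sigma_{0}=(\mathit{e}-1)/(\mathit{e}+\sqrt{\mathit{e}^2+(\mathit{e}-1)^2})\approx 0.2896$.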
\section{Starlike functions associated with a cardioid}
Sharma et al. \cite{KanNavRavi} studied the class $\mathcal{S}^{*}_{c}=\mathcal{S}^{*}(\varphi_{c}),\ \varphi_{c}(z)=(1+(4/3)z+(2/3)z^2)$ of starlike functions associated with a cardioid and proved the following lemma.
\begin{lemma}\cite[lemma~2.5]{KanNavRavi} \label{lem5.1}
For $1/3<a<3$,
\begin{align*}
r_a &=
\begin{dcases}
\frac{3a-1}{3} & \text{ if }\ \frac{1}{3}<a\leq \frac{5}{3}\\
3-a & \text{ if } \ \frac{5}{3}\leq a<3.
\end{dcases}
\end{align*}
Then $\{w: |w-a|<r_a\} \subset \Omega _c$. Here $\Omega_c$ is the region bounded by the cardioid $\{x+\iota y: (9x^2+9y^2-18x+5)^2 -16(9x^2+9y^2-6x+1)=0\}$.
\end{lemma}
The following theorem gives the radius of starlikeness associated with a cardioid for the function $F$ given in \eqref{eqn2.1}.
\begin{theorem}\label{thm5.2}
If the function $f\in \mathcal{ST}(\alpha)$, then the $\mathcal{S}^{*}_{c}$-radius for the function $F$ defined in \eqref{eqn2.1} is given by
\begin{align*}
R_{\mathcal{S}^{*}_{c}}(F)&= \begin{dcases}
\sigma_{0} &\text{ if }\ 2\alpha+\beta-2\geqslant 0\\
\sigma_{0} &\text{ if }\ 2\alpha+\beta-2<0\ \text{and}\ X(\alpha,\beta)\leqslant 0\\
\tilde{\sigma_{0}}&\text{ if }\ 2\alpha+\beta-2<0\ \text{and}\ X(\alpha,\beta)> 0,
\end{dcases}
\end{align*}
where
\begin{align}
\sigma_{0}=&\nonumber \dfrac{4}{3(2-2\alpha+\beta)+\sqrt{(6-6\alpha+3\beta)^2+8(6\alpha+3\beta-4)}},\\
\tilde{\sigma_{0}} =& \nonumber\dfrac{4}{(2-2\alpha+\beta)+\sqrt{(2-2\alpha+\beta)^2-8(2\alpha+\beta-4)}}
\intertext{and}
X(\alpha,\beta) =& \label{eqn5.1} 2(8-6\alpha-3\beta)(6\alpha-6-3\beta)(6\alpha-6-3\beta \\&\quad\nonumber+ \sqrt{(6-6\alpha+3\beta)^2+8(6\alpha+3\beta-4)})\\&\quad\nonumber+ 48(6\alpha+3\beta-4)(2-2\alpha-\beta).
\end{align}
\end{theorem}
\begin{proof}
Consider the disc mentioned in \eqref{eqn2.7}; it needs to be shown that this disc lies in the region $\Omega_{c}$ for all $0<r\leqslant R_{\mathcal{S}^{*}_{c}}(F)$. Take the equations $c_{F}(r)=a_{F}(r)-1/3$ and $c_{F}(r)=3-a_{F}(r)$, which are equivalent to $\phi(r)=0\ \text{and}\ \psi(r)=0$ respectively, with the corresponding polynomials $\phi$ and $\psi$ given by
\begin{align}
\phi(r):&=(3(2\alpha-1+\beta)-1)r^2+3(2-2\alpha+\beta)r-2,\label{eqn5.2}\intertext{and}
\psi(r):&= (2\alpha-1+\beta-3)r^2-(2-2\alpha+\beta)r+2.\label{eqn5.3}
\end{align}
For the polynomial $\phi$, $\phi(0)=-2<0$ and $\phi(1)=6\beta>0$; thus $\phi$ has a zero in the interval $(0,1)$, denoted by $\sigma_{0}$. Also, considering the polynomial $\psi$, it is seen that $\psi(0)=2>0$ and $\psi(1)=4(\alpha-1)<0$; let the zero of $\psi$ in the interval $(0,1)$ be denoted by $\tilde{\sigma_{0}}$. Then we obtain
\begin{align*}
\sigma_{0}:=&\dfrac{4}{3(2-2\alpha+\beta)+\sqrt{(6-6\alpha+3\beta)^2+8(6\alpha+3\beta-4)}},\intertext{and}
\tilde{\sigma_{0}} :=&\dfrac{4}{(2-2\alpha+\beta)+\sqrt{(2-2\alpha+\beta)^2-8(2\alpha+\beta-4)}}.
\end{align*}
Here, it is evident from their values that both $\sigma_{0}$ and $\tilde{\sigma_{0}}$ are indeed positive.
Case (i): $2\alpha+\beta-2 \geqslant 0$. First it is shown that the center $a_{F}$ in \eqref{eqn2.7} satisfies
\begin{equation}\label{eqn5.4}
\dfrac{1}{3}<a_{F}(r)< \dfrac{5}{3},\ r\in (0,\sigma_{0}).
\end{equation}
The fact that the center is a decreasing function of $r$ implies $a_{F}(r)>a_{F}(\sigma_{0})\ \text{for}\ r\in (0,\sigma_0)$. Also, $\sigma_{0}$ is the root of the equation $a_{F}(r)-1/3=c_{F}(r)$, which together with the fact that the radius satisfies $c_{F}(r)>0,\ r\in (0,1)$, gives $a_{F}(r)>a_{F}(\sigma_{0})>1/3\ \text{for}\ r\in (0,\sigma_{0})$. Further, $a_{F}(r)<a_{F}(0)=1<5/3,\ r\in (0,\sigma_0)$. This proves \eqref{eqn5.4}, and an application of Lemma~\ref{lem5.1} shows that $\mathbb{D}(a_F (\sigma_{0});c_F(\sigma_{0})) \subset \Omega_{c}$; that is, the radius of starlikeness associated with a cardioid for the function $F$ is at least $\sigma_{0}$.
Case (ii): $2\alpha+\beta-2<0\ \text{and}\ X(\alpha,\beta)\leqslant 0$. The equation $a_{F}(r)=5/3$ from Lemma~\ref{lem5.1} takes the form $\zeta(r)=0$, with $\zeta(r):=(8-6\alpha-3\beta)r^2-2$. Then, it is seen that $\zeta(0)=-2<0$ and $\zeta(1)=-3(2\alpha+\beta-2)>0$. This shows that $\zeta$ has a zero in the interval $(0,1)$, denoted by $\tilde{\sigma_{1}}$; then
Also since
\begin{align}
\zeta(\sigma_{0}) =&\label{eqn5.4a} \dfrac{1}{4(-4+6\alpha+3\beta)^2}(2(8-6\alpha-3\beta)(6\alpha-6-3\beta)(6\alpha-6-3\beta \\
&\quad{}\nonumber+ \sqrt{(6-6\alpha+3\beta)^2 +8(6\alpha+3\beta-4)}) \\&\quad\nonumber+48(6\alpha+3\beta-4)(2-2\alpha-\beta)),
\end{align}
from \eqref{eqn5.1} and \eqref{eqn5.4a} it is evident that $X(\alpha,\beta)\leqslant 0$ is equivalent to saying $\zeta(\sigma_{0})\leqslant 0$. The facts that $\zeta(0)<0$, $\zeta(\sigma_{0})\leqslant 0$ and $\zeta(\tilde{\sigma_{1}})=0$ together imply that $\sigma_{0} \leqslant \tilde{\sigma_{1}}$. Further, $\sigma_{0}\leqslant \tilde{\sigma_{1}}$ implies that $a_{F}(\sigma_{0})\leqslant a_{F}(\tilde{\sigma_{1}})=5/3$, and thus, using Lemma~\ref{lem5.1}, $\mathbb{D}(a_F (\sigma_{0});c_F(\sigma_{0})) \subset \Omega_{c}$. This proves that the required radius value is at least $\sigma_{0}$.
Case (iii): $2\alpha+\beta-2<0\ \text{and}\ X(\alpha,\beta)> 0$. On similar lines as in Case (ii), $X(\alpha,\beta)>0$ implies $\zeta(\sigma_{0})>0$, which gives $a_{F}(\sigma_{0})>5/3$; this in turn, after another application of Lemma~\ref{lem5.1}, shows that the radius of starlikeness associated with a cardioid for the function $F$ is at least $\tilde{\sigma_{0}}$.
To verify the sharpness of the obtained radius values, take $f(z)=z/(1-z)^{2-2\alpha}\in \mathcal{ST}(\alpha)$ and the polynomial $Q$ as $Q(z)=(1+z)^{n}$. Thus, the expression for $zF'(z)/F(z)$, as seen in the proof of Theorem~\ref{thm2.1}, becomes
\begin{align}
\dfrac{zF'(z)}{F(z)}&= \dfrac{(1-2\alpha-\beta)z^2+(2-2\alpha+\beta)z+1}{1-z^2}.\label{eqn5.5}
\end{align}
The polynomial $\phi$ in \eqref{eqn5.2} gives that for $r=\sigma_{0}$,
\begin{equation}
3((1-2\alpha-\beta)r^2-(2-2\alpha+\beta)r+1)=1-r^2. \label{eqn5.6}
\end{equation}
Thus, the sharpness for $\sigma_{0}$ is proved by using \eqref{eqn5.5} and \eqref{eqn5.6}, which give that $zF'(z)/F(z)=1/3=\varphi_{c}(-1)$ for $z=-\sigma_{0}$. Also, the polynomial $\psi$ in \eqref{eqn5.3} for $r=\tilde{\sigma_{0}}$ gives
\begin{equation}
(1-2\alpha-\beta)r^2+(2-2\alpha+\beta)r+1=3(1-r^2). \label{eqn5.7}
\end{equation}
Thus, using \eqref{eqn5.7} in \eqref{eqn5.5} it is seen that $\tilde{\sigma_{0}}F'(\tilde{\sigma_{0}})/F(\tilde{\sigma_{0}})=3=\varphi_{c}(1)$. This proves the sharpness of $\tilde{\sigma_{0}}$.
\end{proof}
If we let $\alpha$ go to $1$ in Theorem~\ref{thm5.2}, then we obtain that the radius of starlikeness associated with a cardioid for the function $F(z)=z(Q(z))^{\beta/n}$, where $Q$ is a non-constant polynomial of degree $n$ non-vanishing on the unit disc and $\beta >0$, comes out to be $2/(2+3\beta)$. Moreover, by letting $\beta\to 0$ in Theorem~\ref{thm5.2}, we get the radius of starlikeness associated with a cardioid (\cite[Theorem~4.7]{KanNavRavi} with $A=1-2\alpha\ \text{and}\ B=-1$) for the class of starlike functions of order $\alpha$, $0\leqslant \alpha <1$.
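As a quick consistency check of the first limit: at $\alpha=1$ we have $2\alpha+\beta-2=\beta>0$, so Case (i) applies, and the expression for $\sigma_{0}$ reduces to
\begin{equation*}
\sigma_{0}\big|_{\alpha=1}=\dfrac{4}{3\beta+\sqrt{9\beta^{2}+24\beta+16}}=\dfrac{4}{3\beta+(3\beta+4)}=\dfrac{2}{2+3\beta},
\end{equation*}
since $9\beta^{2}+24\beta+16=(3\beta+4)^{2}$.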
\section{Starlike functions associated with a rational function}
Kumar and Ravichandran \cite{KumarRavi} introduced the class of starlike functions associated with the rational function $\varphi_{R}(z)=1+((z^2+kz)/(k^2-kz)),\ \text{with}\ k=\sqrt{2}+1$. This class is represented by $\mathcal{S}^{*}_{R}=\mathcal{S}^{*}(\varphi_{R}(z))$. They also showed the following result, which is used in finding the $\mathcal{S}^{*}_{R}$ radius for the function $F$ defined in \eqref{eqn2.1}.
\begin{lemma}\cite[Lemma~2.2]{KumarRavi} \label{lem6.1}
For $2(\sqrt{2}-1)<a<2,$
\begin{align*}
r_a &=
\begin{dcases}
a-2(\sqrt{2}-1) & \text{ if }\ 2(\sqrt{2}-1)<a\leq \sqrt{2}\\
2-a & \text{ if } \ \sqrt{2}\leq a<2.
\end{dcases}
\end{align*}
Then $\{w: |w-a|<r_a\} \subset \varphi_{R}(\mathbb{D}).$
\end{lemma}
\begin{theorem}\label{thm6.2}
If the function $f\in \mathcal{ST}(\alpha)$, then the $\mathcal{S}^{*}_{R}$ radius for the function $F$ defined in \eqref{eqn2.1} is given by
\begin{align*}
R_{\mathcal{S}^{*}_{R}}(F)&= \begin{dcases}
\sigma_{0} &\text{ if }\ 2\alpha+\beta-2\geqslant 0\\
\sigma_{0} &\text{ if }\ 2\alpha+\beta-2<0\ \text{and}\ X(\alpha,\beta)\leqslant 0\\
\tilde{\sigma_{0}}&\text{ if }\ 2\alpha+\beta-2<0\ \text{and}\ X(\alpha,\beta)> 0,
\end{dcases}
\end{align*}
where
\begin{align}
\sigma_{0}=&\nonumber\dfrac{2(3-2\sqrt{2})}{(2-2\alpha+\beta)+\sqrt{(-2+2\alpha-\beta)^2-4(3-2\sqrt{2})(2\sqrt{2}-1-2\alpha-\beta)}},\\
\tilde{\sigma_{0}} =&\nonumber\dfrac{2}{(2-2\alpha+\beta)+\sqrt{(-2+2\alpha-\beta)^2-4(2\alpha-3+\beta)}}
\intertext{and}
X(\alpha,\beta) =& \label{eqn6.1}2(2\alpha-2-\beta)(1+\sqrt{2}-2\alpha-\beta)((2\alpha-2-\beta)\\
&\quad \nonumber+\sqrt{(2\alpha-2-\beta)^2-4(3-2\sqrt{2})(2\sqrt{2}-1-2\alpha-\beta)}) \\
&\quad \nonumber
+4(1-2\sqrt{2}+2\alpha+\beta) ((3-2\sqrt{2})(1+\sqrt{2}-2\alpha-\beta) \\
&\quad \nonumber +(1-\sqrt{2})(1-2\sqrt{2}+2\alpha+\beta)).
\end{align}
\end{theorem}
\begin{proof}
We show that the disc mentioned in \eqref{eqn2.7} satisfies $\mathbb{D}(a_F (r);c_F(r)) \subset \varphi_{R}(\mathbb{D})$ for all $0<r\leq R_{\mathcal{S}^{*}_{R}}(F)$. Lemma~\ref{lem6.1} gives that the two possible values of the radius are the smallest positive roots of the equations $c_{F}(r)=a_{F}(r)-2(\sqrt{2}-1)$ and $c_{F}(r)=2-a_{F}(r)$; which are equivalent to $\phi(r)=0$ and $\psi(r)=0$ respectively, where the polynomials in $r$ are of the form
\begin{align}
\phi(r):&=(2(\sqrt{2}-1)-(2\alpha-1+\beta))r^2-(2-2\alpha+\beta)r+3-2\sqrt{2},\label{eqn6.2}\intertext{and}
\psi(r):&=(2\alpha-3+\beta)r^2-(2-2\alpha+\beta)r+1\label{eqn6.3}
\end{align}
respectively. The fact that both the polynomials $\phi$ and $\psi$ possess zeros in the interval $(0,1)$ can be easily verified as $\phi(0)=3-2\sqrt{2}>0 \ \text{and}\ \phi(1)=-2\beta <0$, also, $\psi(0)=1>0,\ \psi(1)=4(\alpha-1)<0$. The respective positive zeros of $\phi$ and $\psi$, denoted by $\sigma_{0}$ and $\tilde{\sigma_{0}}$, are given by
\begin{align*}
\sigma_{0}:=&\dfrac{2(3-2\sqrt{2})}{(2-2\alpha+\beta)+\sqrt{(-2+2\alpha-\beta)^2-4(3-2\sqrt{2})(2\sqrt{2}-1-2\alpha-\beta)}},\intertext{and}
\tilde{\sigma_{0}} :=&\dfrac{2}{(2-2\alpha+\beta)+\sqrt{(-2+2\alpha-\beta)^2-4(2\alpha-3+\beta)}}.
\end{align*}
Case (i): $2\alpha+\beta-2 \geqslant 0$. By using the fact that the center $a_{F}(r)$ is a decreasing function of $r$, it will be proved that
\begin{equation}\label{eqn6.4}
2(\sqrt{2}-1)<a_{F}(r)\leq \sqrt{2},\ r\in (0,\sigma_{0}).
\end{equation}
This, after the application of Lemma~\ref{lem6.1}, will directly imply $\mathbb{D}(a_F (\sigma_{0});c_F(\sigma_{0})) \subset \varphi_{R}(\mathbb{D})$, that is, the required radius is at least $\sigma_{0}$. So, to prove \eqref{eqn6.4}, observe that $ a_{F}(r)>a_{F}(\sigma_{0})$ for $r\in (0,\sigma_0)$, and $\sigma_{0}$ is the positive root of the equation $c_{F}(r)=a_{F}(r)-2(\sqrt{2}-1)$; also, since the radius $c_{F}(r)$ in \eqref{eqn2.7} is positive for all $r\in (0,1)$, $a_{F}(\sigma_{0})-2(\sqrt{2}-1)>0$. Lastly, $a_{F}(r)<a_{F}(0)=1<\sqrt{2}$ for $r\in (0,\sigma_0)$, thus proving \eqref{eqn6.4}, and also the required result for this case.
Case (ii): $2\alpha+\beta-2<0\ \text{and}\ X(\alpha,\beta)\leqslant 0$. Here, again from the Lemma~\ref{lem6.1}, consider the equation $a_{F}(r)=\sqrt{2}$, which gets simplified to $\zeta(r)=0$, with $\zeta(r)=(\sqrt{2}+1-2\alpha-\beta)r^2+1-\sqrt{2}$. Then, for the polynomial $\zeta$, the positive root is denoted by $\tilde{\sigma_{1}}$, where
\begin{equation*}
\tilde{\sigma_{1}}=\sqrt{\dfrac{\sqrt{2}-1}{\sqrt{2}+1-2\alpha-\beta}}.
\end{equation*}
It is observed that $\zeta(1)=-(2\alpha+\beta -2)>0$ and $\zeta(0)=1-\sqrt{2}<0$, justifying the fact that $\tilde{\sigma_{1}} \in (0,1)$. This also gives that $\sigma_{0} \leqslant \tilde{\sigma_{1}}$ if and only if $\zeta(\sigma_{0})\leqslant 0$. It is seen that
\begin{align}
\zeta(\sigma_{0})=&\label{eqn6.4a}\dfrac{1}{4(2\sqrt{2}-1-2\alpha-\beta)^2}\big( 2(2\alpha-2-\beta)(1+\sqrt{2}-2\alpha-\beta)((2\alpha-2-\beta)\\
&\quad \nonumber+\sqrt{(2\alpha-2-\beta)^2-4(3-2\sqrt{2})(2\sqrt{2}-1-2\alpha-\beta)}) \\
&\quad \nonumber
+4(1-2\sqrt{2}+2\alpha+\beta) ((3-2\sqrt{2})(1+\sqrt{2}-2\alpha-\beta) \\
&\quad \nonumber +(1-\sqrt{2})(1-2\sqrt{2}+2\alpha+\beta))\big),
\end{align}
so, combining \eqref{eqn6.1} and \eqref{eqn6.4a}, $X(\alpha,\beta)\leqslant 0$ is the same as saying $\zeta(\sigma_{0})\leqslant 0$. Thus, in this case, $\sigma_{0} \leqslant \tilde{\sigma_{1}}$, which implies $a_{F}(\sigma_{0})\leqslant a_{F}(\tilde{\sigma_{1}})=\sqrt{2}$, and now Lemma~\ref{lem6.1} gives that the radius of starlikeness associated with the rational function for the function $F$ is at least $\sigma_{0}$.
Case (iii): $2\alpha+\beta-2<0\ \text{and}\ X(\alpha,\beta)> 0$. Following similar arguments as in Case (ii), $\zeta(\sigma_{0})>0$ gives $a_{F}(\sigma_{0})> \sqrt{2}$, and then Lemma~\ref{lem6.1} implies that the required radius is at least $\tilde{\sigma_{0}}$.
Take $f(z)=z/(1-z)^{2-2\alpha}\in \mathcal{ST}(\alpha)$ and the polynomial $Q$ as $Q(z)=(1+z)^{n}$. Using these choices for the function $f$ and the polynomial $Q$, the expression for $zF'(z)/F(z)$, as in the proof of Theorem~\ref{thm2.1}, becomes
\begin{align}
\dfrac{zF'(z)}{F(z)}=& \dfrac{(1-2\alpha-\beta)z^2+(2-2\alpha+\beta)z+1}{1-z^2}.\label{eqn6.5}
\end{align}
The polynomials $\phi$ and $\psi$ in \eqref{eqn6.2}, \eqref{eqn6.3} for $r=\sigma_{0}$ and $r=\tilde{\sigma_{0}}$ respectively imply that
\begin{align}
(1-2\alpha-\beta)r^2-(2-2\alpha+\beta)r+1 &=2(\sqrt{2}-1)(1-r^2), \label{eqn6.6}\intertext{and}
(1-2\alpha-\beta)r^2 +(2-2\alpha+\beta)r+1 &=2(1-r^2).\label{eqn6.7}
\end{align}
Now, observe that using \eqref{eqn6.6} and putting $z=-\sigma_{0}$ in \eqref{eqn6.5}, it is obtained that \[
\dfrac{(-\sigma_{0})F'(-\sigma_{0})}{F(-\sigma_{0})}= 2(\sqrt{2}-1)=\varphi_{R}(-1),\]
which proves the sharpness for the radius $\sigma_{0}$. Also, considering \eqref{eqn6.7} and replacing $z$ by $\tilde{\sigma_{0}}$ in \eqref{eqn6.5}, it is seen that $zF'(z)/F(z)$ assumes the value $2=\varphi_{R}(1)$, thus proving the sharpness for $\tilde{\sigma_{0}}$.
\end{proof}
When $\alpha$ goes to $1$ in Theorem~\ref{thm6.2}, we get that the radius of starlikeness associated with a rational function for the function $F(z)=z(Q(z))^{\beta/n}$, where $Q$ is a non-constant polynomial of degree $n$ non-vanishing on the unit disc and $\beta >0$, comes out to be $(3-2\sqrt{2})/(3-2\sqrt{2}+\beta)$. Further, when $\beta\to 0$, we obtain the radius of starlikeness associated with a rational function for the class of starlike functions of order $\alpha$, $0\leqslant \alpha <1$, obtained by Kumar and Ravichandran in \cite[Theorem~3.2]{KumarRavi} (when $A=1-2\alpha\ \text{and}\ B=-1$).
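As before, the first limit can be checked directly: at $\alpha=1$ the radicand in $\sigma_{0}$ becomes
\begin{equation*}
\beta^{2}+4(3-2\sqrt{2})\beta+4(3-2\sqrt{2})^{2}=\left(\beta+2(3-2\sqrt{2})\right)^{2},
\end{equation*}
so that $\sigma_{0}\big|_{\alpha=1}=2(3-2\sqrt{2})/\big(2\beta+2(3-2\sqrt{2})\big)=(3-2\sqrt{2})/(3-2\sqrt{2}+\beta)$.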
\section{Starlike functions associated with a nephroid domain}
Wani and Swaminathan \cite{LatSwami} in 2020 studied the class of starlike functions associated with a nephroid domain $\mathcal{S}^{*}_{Ne}=\mathcal{S}^{*}(\varphi_{Ne})$ with $\varphi_{Ne}(z)=1+z-z^3/3$. The function $\varphi_{Ne}$ maps the unit disc onto the interior of the nephroid, a 2-cusped curve, \[\left((u-1)^2+v^2-\dfrac{4}{9}\right)^3-\dfrac{4v^2}{3}=0.\]
In this section, the radius of starlikeness associated with the nephroid is discussed for the function $F$ defined in \eqref{eqn2.1}, using the following lemma due to Wani and Swaminathan.
\begin{lemma}\cite[Lemma~2.2]{LatSwami2} \label{lem7.1}
For $1/3<a<5/3,$
\begin{align*}
r_a &=
\begin{dcases}
a-\frac{1}{3} & \text{ if }\ \frac{1}{3}<a\leq 1\\
\frac{5}{3}-a & \text{ if } \ 1\leq a<\frac{5}{3}.
\end{dcases}
\end{align*}
Then $\{w: |w-a|<r_a\} \subset \Omega_{Ne}$, where $\Omega_{Ne}$ is the region bounded by the nephroid, that is \[\Omega_{Ne}:=\left\{\left((u-1)^2+v^2-\dfrac{4}{9}\right)^3-\dfrac{4v^2}{3}<0\right\}.\]
\end{lemma}
\begin{theorem}\label{thm7.2}
If the function $f\in \mathcal{ST}(\alpha)$, then the radius of starlikeness associated with a nephroid domain for the function $F$ defined in \eqref{eqn2.1} is given by
\begin{align*}
R_{\mathcal{S}^{*}_{Ne}}(F)&= \begin{dcases}
\sigma_{0} &\text{ if }\ 2\alpha+\beta-2\geqslant 0\\
\tilde{\sigma_{0}} &\text{ if }\ 2\alpha+\beta-2<0,
\end{dcases}
\end{align*}
where
\begin{align*}
\sigma_{0}&=\dfrac{4}{3(2-2\alpha+\beta)+\sqrt{9(-2+2\alpha-\beta)^2-8(4-6\alpha-3\beta)}},\intertext{and}
\tilde{\sigma_{0}} &=\dfrac{4}{3(2-2\alpha+\beta)+\sqrt{9(-2+2\alpha-\beta)^2-8(6\alpha-8+3\beta)}}.
\end{align*}
\end{theorem}
\begin{proof}
The proof aims to show that the disc in \eqref{eqn2.7} satisfies the following condition:\[\mathbb{D}(a_F (r);c_F(r)) \subset \Omega_{Ne},\quad 0<r\leqslant R_{\mathcal{S}^{*}_{Ne}}(F).\]
Let \[\sigma_{0}:=\dfrac{4}{3(2-2\alpha+\beta)+\sqrt{9(-2+2\alpha-\beta)^2-8(4-6\alpha-3\beta)}}.\]
The number $\sigma_{0}$ is the positive solution to the equation $c_{F}(r)=a_{F}(r)-1/3$, which transforms into $\phi(r)=0$ with the polynomial $\phi$ in $r$ given by
\begin{equation}\label{eqn7.1}
\phi(r):=(4-6\alpha-3\beta)r^2-3(2-2\alpha+\beta)r+2.
\end{equation}
For the polynomial $\phi$, $\phi(0)=2>0$ and $\phi(1)=-6\beta <0$ justifying the existence of a zero for $\phi$ in the interval $(0,1)$.
The number \[\tilde{\sigma_{0}}: =\dfrac{4}{3(2-2\alpha+\beta)+\sqrt{9(-2+2\alpha-\beta)^2-8(6\alpha-8+3\beta)}},\]
is the positive zero of the polynomial $\psi$ in $r$ given by
\begin{equation}\label{eqn7.2}
\psi(r):=(6\alpha-8+3\beta)r^2-3(2-2\alpha+\beta)r+2.
\end{equation}
Here, the equation $\psi(r)=0$ is the simplified form of the equation $c_{F}(r)=5/3-a_{F}(r)$. The polynomial $\psi$ indeed has a zero in the interval $(0,1)$ as $\psi(0)=2>0$ and $\psi(1)=12(\alpha-1)<0$.
It is known that the center $a_{F}$ in \eqref{eqn2.7} has the property that $a_{F}(r)\leqslant 1$ for $2\alpha+\beta-2\geqslant 0$; and $a_{F}(r)>1$ for $2\alpha+\beta-2<0$.
Case (i): $2\alpha+\beta-2\geqslant 0$. In this case, the center $a_{F}(r)\leqslant 1$, thus Lemma~\ref{lem7.1} implies that $\mathbb{D}(a_F (\sigma_{0});c_F(\sigma_{0})) \subset \Omega_{Ne}$. This proves that the $\mathcal{S}^{*}_{Ne}$ radius for the function $F$ is at least $\sigma_{0}$.
To verify the sharpness of $\sigma_{0}$, take $f(z)=z/(1-z)^{2-2\alpha}\in \mathcal{ST}(\alpha)$ and the polynomial $Q$ as $Q(z)=(1+z)^{n}$. Using these choices for the function $f$ and the polynomial $Q$, the expression for $zF'(z)/F(z)$, as in the proof of Theorem~\ref{thm2.1}, is
\begin{align}
\dfrac{zF'(z)}{F(z)}&= \dfrac{(1-2\alpha-\beta)z^2+(2-2\alpha+\beta)z+1}{1-z^2}.\label{eqn7.3}
\end{align}
The polynomial $\phi$ in \eqref{eqn7.1} provides that for $r=\sigma_{0}$,
\begin{equation}\label{eqn7.4}
3((1-2\alpha-\beta)r^2-(2-2\alpha+\beta)r+1)=(1-r^2).
\end{equation}
Thus, using \eqref{eqn7.4} and setting $z=-\sigma_{0}$, \eqref{eqn7.3} gives $((-\sigma_{0})F'(-\sigma_{0}))/F(-\sigma_{0})=1/3=\varphi_{Ne}(-1)$. This proves the sharpness for $\sigma_{0}$.
Case (ii): $2\alpha+\beta -2<0$. Here, it is known that $a_{F}(r)>1$; so Lemma~\ref{lem7.1} directly gives that the required radius is at least the positive solution of the equation $c_{F}(r)=5/3-a_{F}(r)$, that is, $\tilde{\sigma_{0}}$.
To verify sharpness in this case, take $f(z)=z/(1-z)^{2-2\alpha}\in \mathcal{ST}(\alpha)$ and the polynomial $Q$ as $Q(z)=(1+z)^{n}$. These choices transform $zF'(z)/F(z)$ into the form given in \eqref{eqn7.3}. The polynomial $\psi$ given in \eqref{eqn7.2} implies that for $r=\tilde{\sigma_{0}}$,
\begin{equation}\label{eqn7.5}
3((1-2\alpha-\beta)r^2+(2-2\alpha+\beta)r+1)=5(1-r^2).
\end{equation}
Thus, the radius $\tilde{\sigma_{0}}$ is the best possible, since an application of \eqref{eqn7.5} in \eqref{eqn7.3} gives that the expression for $zF'(z)/F(z)$ takes the value $5/3=\varphi_{Ne}(1)$ for $z=\tilde{\sigma_{0}}$. This proves the sharpness for the radius $\tilde{\sigma_{0}}$.
\end{proof}
When $\alpha$ goes to $1$ in Theorem~\ref{thm7.2}, we get that the radius of starlikeness associated with a nephroid domain for the function $F(z)=z(Q(z))^{\beta/n}$, where $Q$ is a non-constant polynomial of degree $n$ non-vanishing on the unit disc and $\beta >0$, comes out to be $2/(2+3\beta)$.
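Indeed, at $\alpha=1$ we have $2\alpha+\beta-2=\beta>0$, so the radius is $\sigma_{0}$, and the radicand reduces to $9\beta^{2}+8(2+3\beta)=(3\beta+4)^{2}$, giving
\begin{equation*}
\sigma_{0}\big|_{\alpha=1}=\dfrac{4}{3\beta+(3\beta+4)}=\dfrac{2}{2+3\beta}.
\end{equation*}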
When $\beta\to 0$ in Theorem~\ref{thm7.2}, we obtain the radius of starlikeness associated with a nephroid domain for the class of starlike functions of order $\alpha$, $0\leqslant \alpha <1$, obtained by Wani and Swaminathan \cite[Theorem~3.1(ii)]{LatSwami2} when $A=1-2\alpha\ \text{and}\ B=-1$.
\section{Starlike functions associated with modified sigmoid function}
In 2020, Goel and Kumar \cite{PriyankaSivaSig} introduced the class $\mathcal{S}^{*}_{SG}$ of functions mapping $\mathbb{D}$ onto the domain $\Delta_{SG}=\{w: |\log (w/(2-w))|< 1\}$, with $\mathcal{S}^{*}_{SG}=\mathcal{S}^{*}(2/(1+\mathit{e}^{-z}))$. They also proved the following lemma.
\begin{lemma}\cite[Lemma~2.2]{PriyankaSivaSig} \label{lem8.1}
Let $2/(1+\mathit{e})<a<2\mathit{e}/(1+\mathit{e})$. If
\[r_a=\dfrac{\mathit{e}-1}{\mathit{e}+1}-|a-1|,\]
then $\{w: |w-a|<r_a\} \subset \Delta_{SG}$.
\end{lemma}
In the next result, we find the radius of starlikeness associated with modified sigmoid function for the function $F$ defined in \eqref{eqn2.1}.
\begin{theorem}\label{thm8.2}
If the function $f\in \mathcal{ST}(\alpha)$, then the $\mathcal{S}^{*}_{SG}$ radius for the function $F$ defined in \eqref{eqn2.1} is given by
\begin{align*}
R_{\mathcal{S}^{*}_{SG}}(F)&= \begin{dcases}
\sigma_{0} &\text{ if }\ 2\alpha+\beta-2\geqslant 0\\
\tilde{\sigma_{0}} &\text{ if }\ 2\alpha+\beta-2<0,
\end{dcases}
\end{align*}
where
\begin{align*}
\sigma_{0}&=2(\mathit{e}-1)\big((2-2\alpha+\beta)(\mathit{e}+1) \\&\quad \nonumber+\sqrt{((\mathit{e}+1)(2-2\alpha+\beta))^2
-4(\mathit{e}-1)(3+\mathit{e}-2\alpha-2\alpha\mathit{e}-\beta-\beta\mathit{e})}\big)^{-1},\intertext{and}
\tilde{\sigma_{0}} &=2(\mathit{e}-1)\big((2-2\alpha+\beta)(\mathit{e}+1)\\
&\quad \nonumber +\sqrt{((\mathit{e}+1)(2-2\alpha+\beta))^2
+4(\mathit{e}-1)(1+3\mathit{e}-2\alpha-2\alpha\mathit{e}-\beta-\beta\mathit{e})}\big)^{-1}.
\end{align*}
\end{theorem}
\begin{proof}
We will show that $\mathbb{D}(a_F (r);c_F(r)) \subset \Delta_{SG}\ \text{for}\ 0<r\leqslant R_{\mathcal{S}^{*}_{SG}}(F)$.
Case (i): $2\alpha+\beta-2 \geqslant 0$. It is known that in this case, the center $a_{F}$ satisfies the inequality $a_{F}(r)\leqslant 1$. Consequently, the equation $c_{F}(r)= ((\mathit{e}-1)/(\mathit{e}+1))-|a_{F}(r)-1|$ becomes $c_{F}(r)=((\mathit{e}-1)/(\mathit{e}+1))-1+a_{F}(r)$. This equation is simplified into the form $\phi(r)=0$, with
\begin{equation}\label{eqn8.1}
\phi(r):= ((2-2\alpha-\beta)(\mathit{e}+1)-(\mathit{e}-1))r^2-(2-2\alpha+\beta)(\mathit{e}+1)r+(\mathit{e}-1).
\end{equation}
The polynomial $\phi$ has a zero in the interval $(0,1)$ since $\phi(0)=\mathit{e}-1>0$ and $\phi(1)=-2\beta(\mathit{e}+1)<0$, and this positive zero is
\begin{align*}
\sigma_{0}&=2(\mathit{e}-1)\big((2-2\alpha+\beta)(\mathit{e}+1) \\&\quad \nonumber+\sqrt{((\mathit{e}+1)(2-2\alpha+\beta))^2
-4(\mathit{e}-1)(3+\mathit{e}-2\alpha-2\alpha\mathit{e}-\beta-\beta\mathit{e})}\big)^{-1}.
\end{align*}
Now, to verify the sharpness of the radius $\sigma_{0}$, we take $f(z)=z/(1-z)^{2-2\alpha}\in \mathcal{ST}(\alpha)$ and the polynomial $Q$ as $Q(z)=(1+z)^{n}$. Using these choices for the function $f$ and the polynomial $Q$, the expression for $zF'(z)/F(z)$, as seen in the proof of Theorem~\ref{thm2.1}, is obtained to be
\begin{align}
\dfrac{zF'(z)}{F(z)}=&\label{eqn8.2} \dfrac{(1-2\alpha-\beta)z^2+(2-2\alpha+\beta)z+1}{1-z^2},
\intertext{which implies that}
2-\dfrac{zF'(z)}{F(z)} =&\label{eqn8.3} \dfrac{(-3+2\alpha+\beta)z^2-(2-2\alpha+\beta)z+1}{1-z^2}.
\intertext{Thus, \eqref{eqn8.2} and \eqref{eqn8.3} give that}
\left(\dfrac{zF'(z)}{F(z)}\right)\left( 2-\dfrac{zF'(z)}{F(z)}\right)^{-1} =&\label{eqn8.4}\dfrac{(1-2\alpha-\beta)z^2+(2-2\alpha+\beta)z+1}{(-3+2\alpha+\beta)z^2-(2-2\alpha+\beta)z+1}.
\end{align}
Further, the polynomial $\phi$ in \eqref{eqn8.1} gets reduced to
\begin{equation}\label{eqn8.5}
((1-2\alpha-\beta)r^2-(2-2\alpha+\beta)r+1)\mathit{e}=(-3+2\alpha+\beta)r^2+(2-2\alpha+\beta)r+1
\end{equation}
for $r=\sigma_{0}$. Thus using \eqref{eqn8.5} in \eqref{eqn8.4} for $z=-\sigma_{0}$, it is seen that \[\left|\log\left( \left(\dfrac{zF'(z)}{F(z)}\right)\left( 2-\dfrac{zF'(z)}{F(z)}\right)^{-1}\right)\right|=\left|\log \dfrac{1}{\mathit{e}}\right|=1,\] thus proving the sharpness for $\sigma_{0}$.
Case (ii): $2\alpha+\beta-2<0$. Here, the center $a_{F}(r)>1$, which implies that the equation $c_{F}(r)= ((\mathit{e}-1)/(\mathit{e}+1))-|a_{F}(r)-1|$ converts to $c_{F}(r)=((\mathit{e}-1)/(\mathit{e}+1))+1-a_{F}(r)$. This is equivalent to $\psi(r)=0$ for
\begin{equation}\label{eqn8.6}
\psi(r):= ((2-2\alpha-\beta)(\mathit{e}+1)+(\mathit{e}-1))r^2+(2-2\alpha+\beta)(\mathit{e}+1)r-(\mathit{e}-1).
\end{equation}
The number
\begin{align*}
\tilde{\sigma_{0}} &=2(\mathit{e}-1)\big((2-2\alpha+\beta)(\mathit{e}+1)\\
&\quad \nonumber+\sqrt{((\mathit{e}+1)(2-2\alpha+\beta))^2
+4(\mathit{e}-1)(1+3\mathit{e}-2\alpha-2\alpha\mathit{e}-\beta-\beta\mathit{e})}\big)^{-1}
\end{align*}
is the smallest positive zero of the polynomial $\psi$, and the observations $\psi(0)=-(\mathit{e}-1)<0,\ \text{and}\ \psi(1)=4(1-\alpha)(\mathit{e}+1)>0$ justify the existence of a zero in $(0,1)$.
To verify that the radius $\tilde{\sigma_{0}}$ is the best possible, take the values of the function $f$ and the polynomial $Q$ same as in Case (i); thus the expression for $(zF'(z)/F(z))/(2-(zF'(z)/F(z)))$ is the same as given in \eqref{eqn8.4}. The polynomial $\psi$ in \eqref{eqn8.6} gives
\begin{equation}\label{eqn8.7}
(1-2\alpha-\beta)r^2+(2-2\alpha+\beta)r+1=((-3+2\alpha+\beta)r^2-(2-2\alpha+\beta)r+1)\mathit{e}
\end{equation}
for $r=\tilde{\sigma_{0}}$. Thus, putting $z=\tilde{\sigma_{0}}$ in \eqref{eqn8.4} and then using \eqref{eqn8.7}, it is obtained that
\[\left|\log\left( \left( \dfrac{\tilde{\sigma_{0}}F'(\tilde{\sigma_{0}})}{F(\tilde{\sigma_{0}})}\right)
\left(2-\dfrac{\tilde{\sigma_{0}}F'(\tilde{\sigma_{0}})}{F(\tilde{\sigma_{0}})}\right)^{-1}\right) \right|=\left|\log \mathit{e}\right|=1.\] This proves sharpness for $\tilde{\sigma_{0}}$.
\end{proof}
When $\alpha$ goes to $1$ in Theorem~\ref{thm8.2}, we get that the radius of starlikeness associated with the modified sigmoid function for the function $F(z)=z(Q(z))^{\beta/n}$, where $Q$ is a non-constant polynomial of degree $n$ non-vanishing on the unit disc and $\beta >0$, is $(\mathit{e}-1)/(\mathit{e}-1+\beta(\mathit{e}+1))$. When $\beta\to 0$ in Theorem~\ref{thm8.2}, we obtain the radius of starlikeness associated with the modified sigmoid function for the class of starlike functions of order $\alpha$, $0\leqslant \alpha <1$. The radius of parabolic starlikeness and the radii of other related notions of starlikeness, including the one related to the lemniscate of Bernoulli, can be investigated.
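The first limit again follows by completing a square: at $\alpha=1$ the radicand in $\sigma_{0}$ equals $((\mathit{e}+1)\beta)^{2}+4(\mathit{e}-1)\big((\mathit{e}-1)+\beta(\mathit{e}+1)\big)=\big((\mathit{e}+1)\beta+2(\mathit{e}-1)\big)^{2}$, so that
\begin{equation*}
\sigma_{0}\big|_{\alpha=1}=\dfrac{2(\mathit{e}-1)}{2(\mathit{e}+1)\beta+2(\mathit{e}-1)}=\dfrac{\mathit{e}-1}{\mathit{e}-1+\beta(\mathit{e}+1)}.
\end{equation*}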
\label{sec:intro}
Cosmic shear, the weak gravitational lensing effect imprinted on distant galaxies by the large-scale structure (LSS), is one of the primary science goals for EUCLID and Rubin-LSST. The blueprint for these missions has been set by current stage-3 surveys, including the Kilo-Degree Survey \citep[KiDS]{kuijken_fourth_2019,asgari_kids-1000_2020}, the Dark Energy Survey \citep[DES]{des_collaboration_dark_2017,gatti_dark_2020} and the Subaru Hyper Suprime-Cam \citep[HSC]{hamana_cosmological_2020}, yielding tight constraints on the matter distribution in the late Universe.
The cosmic shear signal is estimated by measuring the coherent distortion of background galaxies. Since the intrinsic ellipticity of galaxies is much larger than the lensing effect, millions of galaxies are required to measure a significant signal. This renders a complete spectroscopic survey unfeasible. Hence, one has to rely on redshift estimates from photometry. In order to interpret the observed ellipticity correlations, the photometric redshifts have to be calibrated. There are different approaches to the calibration procedure on the market. These include the calibration with a spectroscopic reference sample (possibly with re-weighting) \citep[e.g.][]{lima_estimating_2008,newman_calibrating_2008,matthews_reconstructing_2010,masters_mapping_2015,bonnett_redshift_2016,mcleod_joint_2017,hildebrandt_kidsviking-450_2020,wright_photometric_2020,myles_dark_2021}, using photometry measurements in conjunction with clustering measurements of tracer populations \citep[e.g.][]{sanchez_redshift_2019,busch_testing_2020,alarcon_redshift_2020} and self-organising maps \citep{wright_photometric_2020}. It is also possible to partially self-calibrate the photometric redshifts in weak lensing data \citep[e.g.][]{schaan_photo-z_2020}. In order to account for general shapes of the source-redshift distributions (SRDs), different mixture models have been employed \citep[see for example][]{rau_estimating_2020}. These Gaussian process models are non-parametric, but they are by definition non-linear, which makes their implementation in cosmology pipelines in general very difficult. \citet{stolzner_self-calibration_2021} used linear fit parameters to circumvent this problem and self-calibrate the data, as these can be implemented in existing pipelines very easily.
Currently it is best practice to propagate the redshift uncertainty in the SRDs by introducing shift parameters in the mean of the distribution \citep{hildebrandt_kids-1000_2020,hikage_cosmology_2019,abbott_dark_2022}. As the sensitivity of surveys rises, however, the requirements on the SRD uncertainties become more stringent as well. Therefore, the contributions from higher-order cumulants of the SRD become important. As discussed above, previous works have focused on Gaussian mixture models to self-calibrate the cosmic shear measurement. In this paper we investigate the general sensitivity of the lensing power spectrum to perturbations in the SRD. In particular we calculate the functional derivative of the cosmic shear angular power spectrum with respect to the SRD at a particular co-moving distance. This can then be mapped to a total error in the cosmic shear power spectrum if a perturbation in the SRD in a co-moving interval is applied. We take the constraint of the normalisation of the SRD into account when calculating the functional derivative. Therefore we can take arbitrary perturbations to the SRDs (subject to some underlying covariance) and propagate them into the $C_\ell$ of cosmic shear. This allows us to estimate the difference in $\chi^2$ induced by the uncertainty in the SRD, without having to run thousands of realizations of the analysis pipeline used. By using a Fisher matrix for the cosmological parameters, this $\Delta\chi^2$ can then be mapped to potential biases in cosmological parameters. Here we study a rather idealised scenario by working in Fourier space, assuming a Gaussian likelihood and ignoring intrinsic alignments. The method, however, easily generalises and including these effects is straightforward.
We structure the paper as follows: In \Cref{sec:methodology} we briefly review cosmic shear basics and introduce the methodology used by calculating the functional derivative of the weak lensing angular power spectrum. The results are presented in \Cref{sec_results}, where we apply the procedure to a survey with EUCLIDs specifications and to KiDS-VIKING-450 (KV450). We conclude in \Cref{sec:conclusions}. In the appendices we also investigate the possibility of an Edgeworth expansion of the SRD (\Cref{ssec:edgeworth expansion}), discuss photometric galaxy clustering (\Cref{sec:photometric_clustering}), the distribution of the mean and standard deviation of the SRD in \Cref{sec:m_and_v_kv450},
the general relationship to observables (\Cref{sec:observables}), the functional derivative of the non-Limber projection in \Cref{sec:nonlimber} and intrinsic alignments (\Cref{sec:intrinsics}).
\section{Methodology}
\label{sec:methodology}
In this section we present the basic methodology of our analysis. In particular we describe the basics of cosmic shear and derive the functional derivative of the lensing angular power spectrum with respect to the SRDs.
\subsection{Cosmic shear basics}
The equation for the cosmic shear power spectrum in tomographic bins $i$ and $j$ in the Limber projection is \citep{limber_analysis_1954,loverde_extended_2008}
\begin{equation}
\label{eq:limber}
C^{\kappa_i\kappa_j}_\ell = \int_0^{\chi_\mathrm{H}}\frac{\mathrm{d}\chi}{\chi^2} W^{(i)}_\kappa(\chi)W^{(j)}_\kappa(\chi) P_\delta\left(\frac{\ell + 0.5}{\chi},\chi\right)\;,
\end{equation}
where $P_\delta$ is the matter power spectrum, for which we use the emulated spectrum from \citet{mead_accurate_2015}. $W^{(i)}_\kappa (\chi)$ is the lensing weight of the $i$-th tomographic bin as given by:
\begin{equation}
\label{eq:weight}
W^{(i)}_\kappa(\chi) =\frac{3\Omega_\mathrm{m0}}{2\chi_\mathrm{H}^2}\frac{\chi}{a(\chi)}\int_\chi^{\chi_\mathrm{H}}\mathrm{d}{\chi^\prime} n^{(i)}_\mathrm{s}({\chi^\prime})\frac{{\chi^\prime}-\chi}{{\chi^\prime}}\;.
\end{equation}
Here $\chi$ is the co-moving distance, $a$ the scale factor, $\Omega_{\mathrm{m}0}$ the matter density parameter today, $\chi_\mathrm{H}$ the Hubble radius and $n^{(i)}_\mathrm{s}$ is the SRD in the $i$-th tomographic bin which builds on photo-$z$ measurements and its calibration. It is normalized in each tomographic bin such that
\begin{equation}
\label{eq:srd_norm}
\int \mathrm{d}z\; n^{(i)}_\mathrm{s}(z) = 1 = \int\mathrm{d}\chi \;n^{(i)}_\mathrm{s}(z(\chi))\frac{\mathrm{d}z}{\mathrm{d}\chi}\equiv \int \mathrm{d}\chi \; n^{(i)}_\mathrm{s}(\chi)\;.
\end{equation}
Since photo-$z$ is just an estimate of the true redshift, the estimated source-redshift distribution, $n^{(i)}_\mathrm{s}$, is not exactly known.
Here we investigate two approaches:
\begin{enumerate}
\item [i)] Use functional derivatives to evaluate the change of the lensing power spectrum when perturbing the $n^{(i)}_\mathrm{s}$ at different redshifts. Given specific survey settings and precision goals, limits on the allowed change of the $n^{(i)}_\mathrm{s}$ can be determined, which in turn can be mapped to changes in the cumulants or moments of the underlying distribution (see \Cref{ssec:functional_derivative}).
\item [ii)] We expand the underlying source-redshift distribution in an asymptotic Edgeworth series and investigate the requirements on the cumulants directly in a Fisher analysis. The second approach is not feasible for realistic SRDs (see \Cref{ssec:edgeworth expansion}).
\end{enumerate}
\subsection{Functional derivative of the lensing power spectrum}
\label{ssec:functional_derivative}
Here we wish to investigate the sensitivity of the weak lensing power spectrum to the full shape of the source-redshift distribution using functional derivatives. In particular we start by perturbing $n^{(i)}_\mathrm{s}(\chi(z))$ at a certain redshift $z_0$, such that $\chi_0 = \chi(z_0) $. The corresponding perturbed lensing weight is thus
\begin{equation}
\label{eq:delta_weight}
\Delta W^{(i)}_\kappa(\chi, \chi_0) = \frac{\delta W^{(i)}_\kappa(\chi) }{\delta n^{(i)}_\mathrm{s}(\chi_0)}\Delta n^{(i)}_\mathrm{s}(\chi_0)\;.
\end{equation}
This expression evaluates how the lensing weight changes if the source-redshift distribution is perturbed by an amount $\Delta n^{(i)}_\mathrm{s}$ at the co-moving distance $\chi_0$ corresponding to the redshift $z_0$.
Ultimately, we are interested in the change of the lensing power spectrum, \Cref{eq:limber}. First, applying the Leibniz rule gives
\begin{equation}
\label{eq:derivative_cell}
\begin{split}
\frac{\delta C^{(ij)}_\ell}{\delta n^{(a)}(\chi_0)} = & \; \int\mathrm{d}x \frac{\delta C^{(ij)}_\ell}{\delta W^{(a)}(x)} \frac{\delta W^{(a)}(x)}{\delta n^{(a)}(\chi_0)} \\= & \; \int\mathrm{d}x \frac{\delta W^{(a)}(x)}{\delta n^{(a)}(\chi_0)}\frac{P_\delta\left(\frac{\ell + 0.5}{x},x\right)}{x^2}\left(W^{(j)}(x)\delta^\mathrm{D}_{ia} + W^{(i)}(x)\delta^\mathrm{D}_{ja}\right) \;.
\end{split}
\end{equation}
The missing ingredient is the functional derivative of the lensing kernel, for which we find
\begin{equation}
\frac{\delta W^{(i)}(x) }{\delta n^{(j)}_\mathrm{s}(\chi_0)} = \frac{3\Omega_\mathrm{m0}}{2\chi_\mathrm{H}^2}\frac{x}{a(x)} \frac{\chi_0-x}{\chi_0}\delta^\mathrm{D}_{ij}\Theta(\chi_0-x)\;.
\end{equation}
$\Theta(x)$ is the Heaviside function to ensure that the functional derivative vanishes if the SRD is perturbed outside the integration bounds.
Using \Cref{eq:delta_weight} and \Cref{eq:derivative_cell} we can write the change in angular power spectrum $\Delta C^{(ij)}_\ell ({\chi^\prime})$ due to a change in the source-redshift distribution at co-moving distance $\chi_0$ as
\begin{equation}
\begin{split}
\Delta C^{(ij)}_{\ell,a} (\chi_0)
\equiv & \ \frac{\delta C^{(ij)}_\ell}{\delta n^{(a)}(\chi_0)} \Delta n^{(a)}(\chi_0)\\
= & \ \frac{3 \Omega_\mathrm{m0}}{2\chi_\mathrm{H}^2}\Delta n (\chi_0) \int \frac{\mathrm{d}x}{a(x)x}\frac{\chi_0-x}{\chi_0} P_\delta\left(\frac{\ell + 0.5}{x},x\right)\\ &\times
\left(W^{(j)}(x)\delta^\mathrm{D}_{ia} + W^{(i)}(x)\delta^\mathrm{D}_{ja}\right)\;.
\end{split}
\end{equation}
Integrating the perturbed lensing spectrum then gives the total perturbation:
\begin{equation}
\label{eq:integrated_error}
\Delta C^{(ij)}_{\ell,a} \equiv\int\mathrm{d}\chi_0 \Delta C^{(ij)}_{\ell,a} (\chi_0)\;.
\end{equation}
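In practice, \Cref{eq:integrated_error} is evaluated as a double Riemann sum over a co-moving grid. The following minimal \textsc{Python} sketch illustrates this step; the array names, the scale-factor interpolator \texttt{a\_of\_chi} and the matter power spectrum interpolator \texttt{pk} are placeholders for the corresponding pipeline objects and are not part of any specific code.
\begin{verbatim}
import numpy as np

def delta_cl_integrated(ell, chi, w_j, dn_i, a_of_chi, pk,
                        omega_m0, chi_h):
    # Integrated perturbation of C_l^(ij) from a perturbation dn_i
    # of the SRD in bin a = i; the symmetric j-term is added
    # analogously.
    dchi = np.gradient(chi)
    pref = 3.0 * omega_m0 / (2.0 * chi_h**2)
    # P_delta((l+1/2)/x, x) / (a(x) x), evaluated on the grid
    integrand = pk((ell + 0.5) / chi, chi) / (a_of_chi(chi) * chi)
    total = 0.0
    for chi0, dn0, d0 in zip(chi, dn_i, dchi):
        mask = chi < chi0              # Heaviside(chi0 - x)
        kern = (chi0 - chi[mask]) / chi0
        total += pref * dn0 * d0 * np.sum(
            kern * integrand[mask] * w_j[mask] * dchi[mask])
    return total
\end{verbatim}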
So far we have treated the function $n^{(i)}(z)$ as being completely free. However, the functional derivative needs to respect the constraint given in \Cref{eq:srd_norm}, thus limiting the possible variations of $n^{(i)}(z)$. The normalization condition itself is again a functional and we write
\begin{equation}
N[n^{(i)}_\mathrm{s}] \coloneqq 1 -\int \mathrm{d}z\; n^{(i)}_\mathrm{s}(z) = 0\;,
\end{equation}
this constraint can be implemented by first defining
\begin{equation}
n^{(i)}_\mathrm{s}(z) \coloneqq\frac{f(z)}{\int \mathrm{d}x^\prime f(x^\prime)}
\end{equation}
which will be normalized by construction. $n^{(i)}_\mathrm{s}(z)$ is a functional of $f$ and we can now evaluate the functional derivative of $C[n[f]]$ as an unconstrained derivative but evaluated at $f=n$. To avoid clutter we ignore the sub- and superscripts in this part
\begin{equation}
\left(\frac{\delta C[n[f]]}{\delta f(x)}\right)\Bigg|_{f=n} = \int \mathrm{d}x^\prime \frac{\delta C[n]}{\delta n(x^\prime)}\frac{\delta n(x^\prime)}{\delta f(x)}\Bigg|_{f=n}\;.
\end{equation}
With
\begin{equation}
\frac{\delta n(x^\prime)}{\delta f(x)} = \frac{\delta_\mathrm{D}(x^\prime - x)}{\int\mathrm{d}y\;f(y)} - \frac{f(x^\prime)}{\left(\int\mathrm{d}y\;f(y)\right)^2}\;,
\end{equation}
one finds
\begin{equation}
\begin{split}
\frac{\delta C[n]}{\delta_1 n(x)} \equiv & \ \left(\frac{\delta C[n[f]]}{\delta f(x)}\right)\Bigg|_{f=n} = \frac{\delta C[n]}{\delta n(x)} - \int \mathrm{d}y \; \frac{\delta C[n]}{\delta n(y)}n(y)\;,
\end{split}
\end{equation}
where the variation $\delta_1$ indicates that the normalization is kept fixed. This is a very intuitive expression: the first term evaluates the standard functional derivative, while the second term corrects this variation to respect the normalization.
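As a quick consistency check, the constrained derivative is orthogonal to pure rescalings of $n$:
\begin{equation*}
\int\mathrm{d}x\,\frac{\delta C[n]}{\delta_1 n(x)}\,n(x) = \int\mathrm{d}x\,\frac{\delta C[n]}{\delta n(x)}\,n(x)-\int\mathrm{d}y\,\frac{\delta C[n]}{\delta n(y)}\,n(y)\int\mathrm{d}x\,n(x)=0\,,
\end{equation*}
using $\int\mathrm{d}x\,n(x)=1$. A perturbation proportional to $n$ itself would only change the normalization, and it correctly drops out of the constrained variation at first order.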
\begin{figure}
\centering
\includegraphics[width = .45\textwidth]{allowed_perturbations.pdf}
\caption{Allowed perturbation for EUCLID to the SRD of the ten tomographic source bins. Solid lines show the fiducial SRD, while the bands show the allowed perturbation to it.}
\label{fig:allowed_ecl}
\end{figure}
\subsection{Fisher forecast}
The next step is to set some requirement on the lensing power spectra. Here we will look at the difference in the $\chi^2$, assuming a Gaussian likelihood and thus setting a lower limit on the required accuracy of $n^{(i)}_\mathrm{s}(z)$. For modes $\boldsymbol{a}_{\ell m}$ with zero mean and covariance $\boldsymbol{C}_\ell$, the $\Delta\chi^2$ between multipoles $\ell_\mathrm{min}$ and $\ell_\mathrm{max}$ can be written as
\begin{equation}
\label{eq:delta_chi2}
\Delta\chi^2 (\ell_\mathrm{min},\ell_\mathrm{max}) = f_\mathrm{sky}\sum_{\ell = \ell_\mathrm{min}}^{\ell_\mathrm{max}}\frac{2\ell+1}{2}\mathrm{tr}\left( \Delta \boldsymbol{C}_\ell \boldsymbol{C}^{-1}_\ell\Delta \boldsymbol{C}_\ell \boldsymbol{C}^{-1}_\ell\right)\;,
\end{equation}
note that $\boldsymbol{C}_\ell$ is the matrix with the components $C^{(ij)}_\ell$. The factor $f_\mathrm{sky}$ takes into account the observed sky fraction.
Using \Cref{eq:integrated_error} we rewrite the previous equation as a Riemann sum
\begin{equation}
\begin{split}
\Delta\chi^2 (\ell_\mathrm{min},\ell_\mathrm{max}) = f_\mathrm{sky}& \sum_{\ell = \ell_\mathrm{min}}^{\ell_\mathrm{max}}\frac{2\ell+1}{2}\\
&\times\sum_{r,s,i,j}\mathrm{tr}\left(\frac{\delta\boldsymbol{C}_\ell}{\delta_1 n^{(i)}(\chi_r)}\boldsymbol{C}^{-1}_\ell\frac{\delta\boldsymbol{C}_\ell}{\delta_1 n^{(j)}(\chi_s)} \boldsymbol{C}^{-1}_\ell\right)\\ & \times \mathcal{D}\chi_r\mathcal{D}\chi_s \Delta n^{(i)}(\chi_r)\Delta n^{(j)}(\chi_s)\;,
\end{split}
\end{equation}
with the measure $\mathcal{D}\chi_r$.
If we define the Fisher matrix in this case as:
\begin{equation}
\label{eq:fisher_matrix}
F_{\alpha\beta} = f_\mathrm{sky} \sum_{\ell = \ell_\mathrm{min}}^{\ell_\mathrm{max}}\frac{2\ell+1}{2}\mathrm{tr}\left( \frac{\delta\boldsymbol{C}_\ell}{\delta_1 n_\alpha } \boldsymbol{C}^{-1}_\ell\frac{\delta\boldsymbol{C}_\ell}{\delta_1 n_\beta} \boldsymbol{C}^{-1}_\ell\right)\mathcal{D}\chi_{r(\alpha)}\mathcal{D}\chi_{s(\beta)}\;,
\end{equation}
where we labeled $n^{(i)}(\chi_r) \to n_{\alpha}$, we recover the difference in $\chi^2$ as a scalar product on the finite-dimensional Hilbert space of shifts in the redshift distribution, where the Fisher matrix acts as a norm-inducing metric:
\begin{equation}
\label{eq:chi2}
\Delta\chi^2 = F({\boldsymbol{\Delta n}},{\boldsymbol{\Delta n}})\equiv {\boldsymbol{\Delta n}}^T{\boldsymbol{F}}{\boldsymbol{\Delta n}}\;,
\end{equation}
where $\boldsymbol{\Delta n}$ is the vector containing shifts of the components $n_\alpha$.
The Fisher matrix, \Cref{eq:fisher_matrix}, describes how well the shifts $n_\alpha$ can be determined by a measurement of the angular power spectra $\boldsymbol{C}_\ell$ given certain survey settings. Clearly, if one tried to measure all possible perturbations, neighbouring $\delta n(\chi)$ would be strongly correlated. This is, however, not the question we would like to ask in this work. Instead, we want to look at the situation where we allow any perturbation $\boldsymbol{\Delta n}$, irrespective of the correlation. Therefore, turning this argument around, we only use the diagonal part of the Fisher matrix.
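Numerically, evaluating \Cref{eq:chi2} with precomputed functional derivatives is a simple quadratic form; a minimal sketch (with \texttt{F} the Fisher matrix of \Cref{eq:fisher_matrix} and \texttt{dn} the stacked perturbations $n_\alpha$, both hypothetical array names) is:
\begin{verbatim}
import numpy as np

def delta_chi2(F, dn, diagonal_only=True):
    # Delta chi^2 = dn^T F dn; with diagonal_only the perturbations
    # are treated as uncorrelated, as described in the text.
    if diagonal_only:
        return float(np.sum(np.diag(F) * dn**2))
    return float(dn @ F @ dn)
\end{verbatim}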
Lastly, one should note that the functional derivative is strictly defined as a limiting process for infinitesimally small perturbations to the function at hand. The relation in general can be non-linear, but as long as relative perturbations to the function are small with respect to unity, these non-linear contributions are sub-dominant. Especially for surveys with tight requirements on the SRDs this is essentially always fulfilled.
\begin{figure}
\centering
\includegraphics[width = 0.45\textwidth]{moments_ecl.pdf}
\caption{Allowed relative change in per-cent of the central moment of the SRD in each tomographic bin. The changes are calculated from the perturbed SRD distributions as shown in \Cref{fig:allowed_ecl}.}
\label{fig:moments_euclid}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = .45\textwidth]{allowed_perturbations_kv450.pdf}
\caption{Allowed perturbation for KV450 to the SRD for the 5 tomographic source bins. Solid lines show the fiducial SRD, while the bands show the allowed perturbation to it.}
\label{fig:allowed_kv450}
\end{figure}
\section{Results}
\label{sec_results}
\subsection{Allowed Perturbations to the Source Redshift Distribution}
First we will look at the allowed perturbations to the SRD by allowing for a total $\Delta\chi^2$ of unity, corresponding to a one $\sigma$ shift of a linear model parameter. Clearly, there are many different solutions $\boldsymbol{\Delta n}$ that satisfy $\Delta\chi^2 = 1$ subject to \Cref{eq:chi2}. To show the structure of the Fisher matrix we therefore distribute the allowed $\Delta\chi^2$ per $\Delta n_\alpha$ equally.
We will assume EUCLID specifications for the survey as given in \citet{blanchard_euclid_2020}, with $n_\mathrm{tomo}= 10$ tomographic bins and a sky fraction of $0.3$.
Furthermore, we will collect multipoles between $\ell_\mathrm{min} = 10$ and $\ell_\mathrm{max} = 3000$. We then calculate the diagonal Fisher matrix from \Cref{eq:fisher_matrix} and distribute the errors equally as described above. This results in a possible realisation of $\boldsymbol{\Delta n}$ yielding $\Delta\chi^2 = 1$ subject to the constraint \Cref{eq:srd_norm}.
\Cref{fig:allowed_ecl} shows the resulting perturbed SRDs. The solid lines show the fiducial SRD, while the shaded areas show the allowed perturbations that do not cause a bias of more than 1 $\sigma$ for a linear model parameter. Lastly, the tomographic bin index is shown as a colour bar. The general trend is very clear: the allowed perturbations become very large within a small interval $\Delta\chi$ around the mean of the distributions. For most tomographic bins this coincides with the peak of the distribution as they are very close to Gaussian. Only for the first and the last bin these spikes are a bit offset since the distributions are a bit more asymmetric. This already confirms that the most important part about the SRDs in cosmic shear measurements is to calibrate the mean redshift of each tomographic bin very well. Furthermore, we observe that the spikes tend to be narrower at higher redshifts, indicating that the uncertainty on the mean of the SRD is more important at higher redshifts. We want to stress again that this is just one realization of $\boldsymbol{\Delta n}$ that produces a $\Delta\chi^2 = 1$, but by distributing the errors equally, it is possible to see which perturbations the final measurement is most sensitive to. The quoted uncertainties should, however, not be taken at face value; they are extreme values and only indicate a general trend.
Next, we use the perturbed SRDs to calculate their central moments $\mu_n$:
\begin{equation}
\mu_n \coloneqq E[(X-E[X])^n] = \int p(x)(x-\mu)^n\mathrm{d}x\;,
\end{equation}
for a probability distribution function $p(x)$ with mean $\mu$. The perturbed SRDs are used to calculate the change in the central moments relative to the fiducial SRD. \Cref{fig:moments_euclid} shows the resulting relative change for all tomographic bins as a function of the order of the central moment. Clearly, the first moment is the most important, and while the second one still needs to be known at the 10$\%$ level, all higher-order moments are essentially unimportant. This is of course reminiscent of the behaviour observed in \Cref{fig:allowed_ecl}, where the perturbations are such that they essentially fix the mean. It is entirely possible that we alter the shape of the distribution in a different way but still achieve the desired accuracy.
Nonetheless, the results show that for the SRD for cosmic shear only the mean redshift and the width are important, with the former influencing the result far more strongly (by more than an order of magnitude). In \Cref{sec:m_and_v_kv450} we sample from the allowed changes in the SRD and show the relative difference of the first two moments to illustrate their scatter.
\begin{figure}
\centering
\includegraphics[width = 0.45\textwidth]{kv450_dchi2.pdf}
\caption{$\Delta\chi^2$ for $10^6$ realisations of $\boldsymbol{\Delta n}$ from the $\boldsymbol{C}^\mathrm{KV450}_{n(\chi)}$. We also show the 50, 68 and 95 percentiles.
}
\label{fig:dchi2_kv450}
\end{figure}
\subsection{Propagating Redshift Errors}
In this section we will revisit the KV450 data for the SRD \citep{hildebrandt_kidsviking-450_2020}. This data set is used since it includes a covariance matrix from the direct calibration (DIR). For the clustering redshifts \citep{busch_testing_2020} or the self-organising maps \citep{wright_photometric_2020} no bootstrap covariance was estimated so far.
For completeness the allowed perturbations are shown in \Cref{fig:allowed_kv450}. Due to the lower signal-to-noise ratio of the measurement, the allowed perturbations are much larger than in the previous case. The features, however, are very similar.
Since we are expressing everything in co-moving distance, the covariance matrix needs to be transformed accordingly. Let $\boldsymbol{C}^\mathrm{KV450}_{n(z)}$ be the covariance matrix in $n(z)$ space, the transformed covariance is then
\begin{equation}
\boldsymbol{C}^\mathrm{KV450}_{n(\chi)} = \boldsymbol{J}^T\boldsymbol{C}^\mathrm{KV450}_{n(z)}\boldsymbol{J}\;,
\end{equation}
where $\boldsymbol{J}$ is the Jacobian with components $J^{i}_{\;j} = \delta^{i}_j\mathrm{d}z/\mathrm{d}\chi$. Alternatively, the Fisher matrix of the SRD perturbations can be expressed in redshift space by the inverse transform.
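In code, this transformation is a simple similarity transform with a diagonal Jacobian; a minimal sketch, assuming a flat background so that $\mathrm{d}z/\mathrm{d}\chi=H(z)/c$ on the grid (array names hypothetical):
\begin{verbatim}
import numpy as np

# dz_dchi: dz/dchi = H(z)/c evaluated on the redshift grid
J = np.diag(dz_dchi)
C_chi = J.T @ C_z @ J    # DIR covariance mapped to n(chi) space
\end{verbatim}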
Perturbations $\boldsymbol{\Delta n}$ are now sampled from $\boldsymbol{C}^\mathrm{KV450}_{n(\chi)}$ and propagated to obtain $\Delta\chi^2$ according to \Cref{eq:delta_chi2}. If the redshift errors as given in $\boldsymbol{C}^\mathrm{KV450}_{n(\chi)}$ are sufficiently small not to produce a significant bias in the cosmological parameters such as $S_8$, we expect most realisations \citep[i.e. 68$\%$][]{hildebrandt_kidsviking-450_2020} to yield $\Delta\chi^2 < 1$. \Cref{fig:dchi2_kv450} shows the resulting distribution in $\Delta\chi^2$ for the $10^6$ realizations of $\boldsymbol{\Delta n}$ for KV450. The vertical dashed lines show the 50th, 68th and 95th percentiles. It is clear from this plot that the precision of the SRD used in KV450 is high enough to not yield any spurious detection in the final parameter constraints, since the 68th percentile is still well below unity.
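The sampling and propagation step can be sketched as follows (again with hypothetical array names; \texttt{F} denotes the Fisher matrix of the SRD perturbations):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
dn = rng.multivariate_normal(np.zeros(C_chi.shape[0]), C_chi,
                             size=10**6)
dchi2 = np.einsum('sa,ab,sb->s', dn, F, dn)  # one value per sample
p50, p68, p95 = np.percentile(dchi2, [50, 68, 95])
\end{verbatim}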
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{heeatmap.pdf}
\caption{The black histogram shows the induced shifts by the photo-$z$ uncertainty in the $\Omega_\mathrm{m0}$-$\sigma_8$-plane, derived from the $\Delta\chi^2$ of \Cref{fig:dchi2_kv450}. In red we show the contour from the Fisher matrix for KV450 enclosing the 1$\sigma$ confidence interval.}
\label{fig:kv_450_bias}
\end{figure}
One could now further propagate these uncertainties into cosmological parameters using the corresponding Fisher matrix. For a given shift in the SRD $\boldsymbol{\Delta n}$, the corresponding shifts in the cosmological parameters, $\boldsymbol{\Delta\theta}$ can be calculated:
\begin{equation}
\Delta\theta^{i} = - (F^{-1})^{i}_{\;\alpha}F^{\alpha}_{\;\beta}\Delta n^\beta\;,
\end{equation}
where Greek indices run over the perturbations in the SRD, while Latin indices label cosmological parameters. Here we assumed the sum convention. $F^{i}_{\;\alpha}$ hence is the mixed pseudo Fisher matrix:
\begin{equation}
F^{i}_{\;\alpha} = -E\left[\frac{\partial\ln L}{\partial\theta_i }\frac{\delta\ln L}{\delta n^\alpha }\mathcal{D}\chi_{r(\alpha)}\right]
\end{equation}
and its inverse is a pseudo-inverse. Since the inversion of this matrix is not necessarily stable, we choose to go another route here. Since the distribution of $\Delta\chi^2$ is known, we are interested in samples of cosmological parameters with the same $\Delta\chi^2$ with respect to the best fit value. For a Gaussian posterior in one dimension this would amount to a distribution such that the absolute value of each sample is fixed to $\sqrt{\Delta\theta^2}$. We sample from a standard Gaussian distribution and modify its width by $\sqrt{\Delta\theta^2}$. This Gaussian is then mapped into the frame of the cosmological parameters under consideration via the Cholesky decomposition of the Fisher matrix of the cosmological parameters. In \Cref{fig:kv_450_bias} we apply this procedure to the $\Delta\chi^2$ distribution of KV450 (\Cref{fig:dchi2_kv450}). Each dot represents one sample of the $\Delta\chi^2$ distribution with its value shown as a colour bar. It can be seen as the geodesic distance to the fiducial value for the cosmological parameters in the parameter manifold \citep{giesel_information_2021}. The red contours depict the expected $1,2,3\sigma$ confidence regions from the Fisher forecast for KV450. Since in the original analysis more than the two parameters used here were varied, we re-scale the $\Delta\chi^2$ accordingly, in particular by the $\chi^2$ quantile function $\chi^2_k(p)$, where $k = 10$ is the number of parameters in the actual analysis \citep{Hildebrandt:2016iqg} and $p = 0.68$. This is done in order to obtain a fair comparison.
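A sketch of this sampling step, assuming \texttt{F\_param} is the Fisher matrix of the cosmological parameters and \texttt{dchi2} the array of $\Delta\chi^2$ samples (the function name, the variable names and the exact rescaling convention are ours and purely illustrative):
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def parameter_shifts(dchi2, F_param, k=10, p=0.68, seed=0):
    # Map Delta chi^2 samples to shifts in the parameter plane via
    # the Cholesky factor of the parameter covariance (schematic).
    rng = np.random.default_rng(seed)
    scale = dchi2 / chi2.ppf(p, df=k)  # rescale by the quantile fn
    L = np.linalg.cholesky(np.linalg.inv(F_param))
    g = rng.standard_normal((dchi2.size, F_param.shape[0]))
    g *= (np.sqrt(scale) / np.linalg.norm(g, axis=1))[:, None]
    return g @ L.T   # e.g. shifts in (Omega_m0, sigma_8)
\end{verbatim}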
It is clear from the plot that all samples for the photometric redshift distribution lie well within the $1\sigma$ contour. Furthermore, it should be noted that we are considering a very idealised forecast with two free parameters and no systematics here. The procedure, however, can be generalized to any number of parameters. Furthermore, one can apply the same analysis to a full Monte-Carlo-Markov-Chain (MCMC) by matching those samples which are $\Delta\chi^2$ away from the maximum likelihood of the MCMC.
Lastly, the samples from \Cref{fig:kv_450_bias} can be mapped to $S_8 = \sigma_8\sqrt{\Omega_\mathrm{m0}/0.3}$. \Cref{fig:s8} shows the resulting histogram of the scatter due to the photo-$z$ uncertainties. Comparing this to $\Delta S_8 = 0.076$ at $68\%$ confidence \citep{hildebrandt_kidsviking-450_2020} shows that the scatter induced by the redshift uncertainties, as sampled from the KV450 SRD covariance, has a small effect on the overall error budget. In \citet{Hildebrandt:2016iqg} a Fisher matrix method for the shifts of the mean of the SRDs was investigated as a source of systematics, with results similar to the ones presented here. The main difference between the two methods is that we allow for general perturbations to the redshift distribution (provided their correlation is given). Generalizing the procedure in \citet{Hildebrandt:2016iqg} to moments higher than the variance is bound to fail (see \Cref{ssec:edgeworth expansion}). However, we would also conclude that even for EUCLID, the analysis of the first two moments is probably sufficient.
In \cref{sec:m_and_v_kv450} the mean and standard deviation of each SRD in the five tomographic bins are shown for the realisations used in this section, as sampled from the DIR covariance matrix. \Cref{fig:momentskv450} shows a very similar behaviour to what we found in \cref{fig:moments_euclid}: the mean scatters less at higher redshifts, while the standard deviation scatters roughly equally for most of the bins.
We close the section with a general discussion about the usage of $\Delta\chi^2$ versus uncertainties in the parameters directly. It is in general advantageous to make accuracy assessments for the SRD using the $\Delta\chi^2$ and not by inverting the Fisher matrix for the parameters of interest to obtain the shift values for those. The reason for this is that $\Delta\chi^2$ is an invariant quantity, while shifts in parameter space depend on the specific model choice. The only caveat in the $\Delta\chi^2$ is that the number of parameters must be taken into account; this is, however, much easier than calculating the Fisher matrix.
\begin{figure}
\centering
\includegraphics[width = .45\textwidth]{kv450_S8.pdf}
\caption{Induced scatter on the $S_8 = \sigma_8\sqrt{\Omega_\mathrm{m0}/0.3}$ parameter. This is directly derived from the samples of \Cref{fig:kv_450_bias}. The scatter is roughly 15 per-cent of the statistical error budget reported in \citet{Hildebrandt:2016iqg,hildebrandt_kidsviking-450_2020}.}
\label{fig:s8}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
In this paper we have analysed the dependence of the cosmic shear angular power spectrum on the SRD. This has been done by employing functional derivatives of the cosmic shear $C_\ell$ with respect to the SRD at a fixed co-moving distance $\chi_0$. By integrating over the introduced error we estimated the $\Delta\chi^2$ introduced by arbitrary uncertainties in the SRD. We applied our method to a cosmic shear survey with EUCLID specifications and to KV450, for which a covariance of the SRD estimate is available. Our main findings can be summarised as follows:
\begin{enumerate}
\item Allowed perturbations of the SRD are such that they preserve the mean of the underlying distribution. If they do, they can be rather large, even for a survey like EUCLID. This is in line with the common practice of using only shifted means of the underlying redshift distribution.
\item In order to achieve the accuracy required for EUCLID, the mean of the redshift distribution needs to be determined to between 1 and 0.01 per-cent, depending on the tomographic bin under consideration. The variance of the SRD is still important at the 10 per-cent level. There is still some sensitivity left in the skewness, but all other moments are not relevant.
\item We performed a simplistic analysis of the KV450 SRDs to check whether they fulfill the requirements and found that the uncertainties, in this very idealised scenario, only yield biases of up to $1\sigma$ in the final constraints. In a full analysis, this bias would be even smaller, thus confirming the redshift calibration used in KV450.
\item Even for EUCLID it is most likely not necessary to investigate moments of the redshift distribution $n>2$. This conclusion could change for different settings and self-calibration methods.
\item The procedure outlined here has the advantage of being computationally very cheap, since the functional derivatives only need to be computed once. It is then only a matter of sampling from the underlying SRD covariance and propagating these perturbations with the previously calculated functional derivative. It is hence not necessary to push thousands of realisations of the SRD through the analysis pipeline.
\end{enumerate}
The method outlined here can thus be used to analyse whether a perturbation in the SRD still fulfills the requirements of a given experiment so that no biases of model parameters are introduced. It allows for arbitrary perturbations to the SRD without requiring a fit to the actual distribution. We intend to apply the presented method to the updated SRDs of KiDS in the future.
For the interested reader, the appendices \Cref{ssec:edgeworth expansion}--\Cref{sec:nonlimber} discuss various aspects of the analysis which could be refined in future work. In particular we look at the Edgeworth expansion of the SRD in \Cref{ssec:edgeworth expansion}, i.e.\ an expansion in the cumulants of the underlying SRDs. However, we find that, even for a realistic setting, the Edgeworth expansion cannot reproduce the original SRDs if cumulants $n>2$ are considered.
{\bf Data Availability}: The data underlying this article will be shared on reasonable request to the corresponding author.
\section*{Acknowledgments}
RR would like to thank Hendrik Hildebrandt and Bj\"orn Malte Sch\"afer for insightful discussions and comments on the manuscript.
RR is supported by the European Research Council (Grant No. 770935).
\bibliographystyle{mnras}
\section{Conclusion}
We conduct comprehensive studies on adversarial network pruning. The contributions are three-fold: First, we give a new explanation of the connection between robustness and network sparsity, which is supported by much empirical evidence.
Second, we demonstrate the efficacy of training networks with robustness via our proposed algorithms, including one-shot pruning and searching for the `winning ticket.'
Third, we discover a new adversarial training strategy to achieve sparsity and large capacity at the same time for robustness.
\section{Introduction}
It is widely recognized that deep neural networks (DNNs) are usually over-parameterized, and network pruning has been adopted to remove insignificant weights from a large neural network without hurting the accuracy. Despite its success, pruning strategies have rarely been discussed in the adversarial learning setting, where the network is trained against adversarial examples and the robustness of the network is as important as accuracy.
It is unclear what pruning methods are effective and which factors are critical for retaining model robustness. Believing that the inherited model weights may not be effective in preserving network accuracy \cite{DBLP:journals/corr/abs-1903-12561, DBLP:conf/iclr/LiuSZHD19}, \citet{DBLP:journals/corr/abs-1903-12561} propose a concurrent adversarial training and weight pruning framework to seek a compressed robust model. \citet{gui2019model} further incorporates pruning and several other techniques into a unified optimization framework to preserve high robustness while achieving a high compression ratio. However, the conventional three-stage `training--pruning--fine-tuning' pipeline has not been closely examined in the adversarial context. More crucially, it is unclear which components in the network pruning methods are critical to preserving model performance. To this end, we design a comprehensive set of experiments to answer these questions.
Although several adversarial pruning methods have been proposed, there is still a lack of a theoretical foundation to explain the working mechanism behind those methods. In fact, there are seemingly contradictory opinions on the robustness of pruned networks: \citet{DBLP:conf/iclr/MadryMSTV18} suggests network capacity is crucial to robustness, and a wider network is more likely to obtain higher accuracy and robustness than a simple network. In contrast, \citet{guo2018sparse} theoretically proves that an appropriately higher weight sparsity implies stronger robustness on naturally trained models. Other theories such as the `Lottery Ticket Hypothesis' \cite{DBLP:conf/iclr/FrankleC19} point out that a subnetwork extracted from a large network can always achieve comparable performance with the original one in the natural setting. However, it remains unknown if the hypothesis holds true for adversarially robust networks. We are motivated to explore how adversarial pruning affects the intrinsic characteristics of the network and its impact on model robustness.
In this study, we find that the robustness of the model improves as its weights become sparser. We show that weight sparsity not only includes the traditional $L_{0}$-sparsity, {\em i.e.}, the number of parameters retained, but also a weight distribution closer to zero, represented generally by the $L_{p}$ norm of the weights. Both forms of sparsity can lead to robustness improvement, which is verified theoretically and experimentally.
By extensive experiments on a variety of state-of-the-art pruning methods, models, and datasets, we also demonstrate that a pruned network inheriting weights from a large robust network achieves higher robustness than a network with the same structure but randomly initialized weights. Moreover, weight inheritance implicitly produces sparser weight distributions on adversarially pruned models.
Inspired by the connection between model sparsity and robustness, we propose a new adversarial training strategy called {\em Inverse Weights Inheritance}: by inheriting weights from a pruned model, a large network can achieve higher robustness than being adversarially trained from scratch. The pruned model can be the `winning ticket' of the large network, as we verify that `Lottery Ticket Hypothesis' \cite{DBLP:conf/iclr/FrankleC19} holds true in the adversarial learning context. The performance results of our proposed training strategy corroborate that sparse weights and high capacity are not contradictory, but contribute joint efforts to model robustness.
The contributions of the paper can be summarized as follows. {\em First,} we establish the theoretical connection between network robustness and sparsity. {\em Second,} through comprehensive experiments, we find that weights inheritance and adversarial training are important in adversarial pruning, and implicitly provide weights sparsity. {\em Finally,} we propose a new adversarial training strategy that achieves improved robustness.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{figures/nparam0.pdf}
\caption{The relation between parameter numbers and adversarial robustness in our approach and the state-of-the-art methods on different architectures. The dotted lines represent the baselines of three base (large) models. Models residing at the upper left corner have higher adversarial accuracies and smaller sizes. All models are adversarially trained by PGD with $ \epsilon = 8/255 $ and 10 steps, and evaluated by PGD attack of $ \epsilon = 8/255 $ and 100 steps on CIFAR10. We also mark the results of \citet{Guo2020} and \citet{gui2019model} by stars under the same settings. Our experiments show that adversarial pruning methods are effective in obtaining networks with fewer parameters and comparable or even better robustness than the baselines.}
\label{fig:nparam}
\end{figure}
\iffalse
Different from natural settings, model performance is evaluated by both clean accuracy (accuracy on the clean data) and robustness (accuracy on the adversarial data) in adversarial settings. Moreover, as adversarial training introduces significant computation overhead, there is a pressing need for pruning at the earlier stages. Some preliminary efforts in this aspect have been made:
\cite{wang2018adversarial} indicates that when pruning is applied, robustness drops earlier than clean accuracy.
\cite{DBLP:journals/corr/abs-1903-12561} strengthens the benefit of inherited weights in adversarial settings by showing that a small network can preserve performance through concurrently pruning and training.
\cite{zhao2018compress} investigates the transferability of adversarial examples between uncompressed and compression models, and finds adversarial examples at high sparsity are marginally less transferable in natural settings.
In this work, we theoretically and empirically reveal properties of network pruning in adversarial settings, which may or may not hold in natural settings. Based on extensive experiments on a variety of state-of-the-art pruning methods, models, and datasets, we have the following interesting findings. {\em First}, `winning ticket' exists in robust models, and can be found efficiently by iterative pruning, {\em i.e.}, `lottery ticket hypothesis' \cite{DBLP:conf/iclr/FrankleC19} holds true in adversarial settings.
{\em Second}, we demonstrate the inherited weights by automatic pruning imply higher weights sparsity and lead to higher robustness than the same small networks trained from scratch. We even observe an improvement in robustness after pruning, which is not observed in \cite{wang2018adversarial,gui2019model,ye2018rethinking}. On the contrary, inherited weights by predefined structured pruning \cite{DBLP:conf/iclr/0022KDSG17} does not enhance performance on robust models, sharing a similar conclusion in the natural setting \cite{DBLP:conf/iclr/LiuSZHD19}.
{\em Third}, we reveal that models with weights distribution closer to zero imply higher robustness theoretically and empirically by providing a new adversarial training method called {\em Inverse Weights Inheritance}. Higher $L_0$-sparsity, namely, network pruning can be viewed as a special case to force smaller weights distribution. Thus a large network has the capability to attain higher accuracy and robustness than the pruned ones.
\fi
\section{Proof of Theorem 3.2}
Let us denote the layer-wise activation outputs as $ a_0=x $ and $ a_j = \sigma \left( W_j^T a_{j-1} \right) $ for $ 1 \le j \le d-1 $. We also define $ D_j $ as
\begin{equation}
\label{equation:activation}
D_j\left(x\right) := \mathrm{diag} \left( 1_{W_j[:,1]^T a_{j-1}>0}, \dots, 1_{W_j[:,n_j]^T a_{j-1}>0} \right)
\end{equation}
which is a diagonal matrix whose $i$-th diagonal entry takes value one when the $i$-th activation of the $j$-th layer is nonzero. Note that if $D_j(x)=0^{n_j \times n_j}$ for some $x$ and some $j \in \{1,\dots,d-2\}$, then $D_{d-1}(x)=0^{n_{d-1} \times n_{d-1}}$ must hold. In this setting, we have Lemma \ref{lemma:lipschitz_sparsity_with_D} proved by \cite{guo2018sparse}.
\begin{lemma}
\label{lemma:lipschitz_sparsity_with_D}
(A local Lipschitz constant for ReLU networks \cite{guo2018sparse}) Letting $\frac{1}{p}+\frac{1}{q}=1$, for any $x \in \mathbb{R}^n $, $k \in \{1,...,c\}$ and $q \in \{1,2\}$, the local Lipschitz constant of function $ g_{\hat{y}}\left(x\right)-g_k\left(x\right) $ satisfies
\begin{equation}
\label{equation:lipschitz_sparsity_with_D}
L^k_{q,x} \le \left\Vert w_{\hat{y}}-w_k \right\Vert_q\sup_{x'\in B_p\left(x,R\right)}\prod_{j=1}^{d-1}
\left( {\left\Vert D_j\left(x'\right) \right\Vert_p} {\left\Vert W_j \right\Vert_p} \right)
\end{equation}
where all the matrix norms are induced norms.
\end{lemma}
In our settings, the network is pruned so as to avoid layer removal, and thereby for any $j \in [1, d-1]$, $W_{j}$ cannot be an all-zero matrix. We next show that for any $j \in [0, d-1]$, $a_{j}$ cannot be an all-zero vector. We prove this by contradiction. Assume that for some layer $j$, $a_{j}$ is an all-zero vector. Then $D_{j}(x) = 0^{n_j \times n_j}$, which yields $D_{d-1}(x)=0^{n_{d-1} \times n_{d-1}}$, and the prediction score $g_k(x)$ will be zero for every $k$ since
\begin{equation}
\label{equation:perceptron}
g_k\left(x_i\right) = w_k^T \sigma \left(W_{d-1}^T \sigma \left( ... \sigma \left( W_1^T x_i\right) \right) \right).
\end{equation}
It is impossible for a well-trained network to make all-zero predictions, which leads to a contradiction. Hence the assumption is false and $a_{j}$ cannot be an all-zero vector for any $j\in [0, d-1]$. This implies that, for any $j\in [1, d-1]$, at least one entry of $ D_{j}(x) $ takes the value $1$. By the definition of matrix induced norms, we have that for any $x$ and the particular values of $p$ below,
\begin{equation}
\label{equation:L_2_norm}
\left\Vert D_j\left(x\right) \right\Vert_2 := \sigma_{\max}(D_j) = 1
\end{equation}
\begin{equation}
\label{equation:L_inf_norm}
\left\Vert D_j\left(x\right) \right\Vert_\infty := \max_{1\le i \le n_j}\sum_{m=1}^{n_j}\vert D_j[i,m]\vert = 1
\end{equation}
where $\left\Vert D_j\left(x\right) \right\Vert_2$ is the spectral norm, i.e., the greatest singular value of $D_j\left(x\right)$. Then for $p = 2$ and $p = \infty$, the right side of Eq.~\eqref{equation:lipschitz_sparsity_with_D} is independent of the input $ x $, and we obtain the relation between the Lipschitz constant and weight sparsity described by Theorem 3.2.
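As a quick sanity check of this last step, the following NumPy sketch (our illustration, not part of the original experiments) confirms that the induced $2$-norm and $\infty$-norm of any nonzero $\{0,1\}$-valued diagonal matrix equal one:
\begin{verbatim}
import numpy as np

# Sanity check (illustrative): a diagonal 0/1 "activation pattern" matrix
# with at least one active unit has induced 2-norm and inf-norm equal to 1.
rng = np.random.default_rng(0)
for _ in range(5):
    pattern = rng.integers(0, 2, size=16)
    pattern[rng.integers(16)] = 1            # guarantee one active unit
    D = np.diag(pattern.astype(float))
    spectral = np.linalg.norm(D, ord=2)      # largest singular value
    row_sum = np.linalg.norm(D, ord=np.inf)  # max absolute row sum
    assert np.isclose(spectral, 1.0) and np.isclose(row_sum, 1.0)
print("||D||_2 = ||D||_inf = 1 for all sampled activation patterns")
\end{verbatim}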
\section{Related Work}
\subsubsection{Adversarial Training.} Adversarial training and its variants are proposed to improve network robustness against adversarial examples \cite{DBLP:conf/iclr/KurakinGB17}. \citet{DBLP:conf/iclr/MadryMSTV18} motivates projected gradient descent (PGD) as a universal `first-order adversary,' and optimizes the saddle point formulation to train a robust network. \citet{DBLP:journals/corr/abs-1905-09747} observes robustness can transfer between networks by knowledge distillation, and such transfer can even improve the robustness of the student network. Following the convention, we adopt $L_\infty\text{-PGD}$ attack \cite{DBLP:conf/iclr/MadryMSTV18}, {\em i.e.,} the strongest attack utilizing the local first-order information of the network, both in adversarial training strategy and the robustness evaluations.
\subsubsection{Network Pruning Methods.} Network pruning methods related to this paper can be divided into two categories: structured pruning and unstructured pruning. Structured pruning prunes a network at the level of filters \cite{DBLP:conf/ijcai/2018,DBLP:conf/iclr/0022KDSG17,DBLP:conf/iccv/LuoWL17}, channels \cite{DBLP:conf/iccv/LiuLSHYZ17} or columns \cite{wen2016learning}, depending on their respective importance.
The importance of a filter or a channel can be determined by the norm of the weights \cite{DBLP:conf/iclr/0022KDSG17} or the channel scaling factor \cite{ye2018rethinking,DBLP:conf/iccv/LiuLSHYZ17} (sometimes the scaling factor in batch normalization layers). Unstructured pruning \cite{lecun1990optimal,hassibi1993second} prunes at the level of individual weights, e.g., according to the Hessian matrix of the loss function. \citet{DBLP:journals/corr/HanPTD15} proposes to prune weights with small magnitude, and the compression ratio is further enhanced in \citet{DBLP:journals/corr/HanMD15} by quantization and Huffman coding. By incorporating non-negative stochastic gates, \citet{louizos2017learning} turns network pruning into an optimization problem with $ L_0 $-norm regularization. We pick representative structured and unstructured pruning methods to implement in our experiments.
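As an illustration of the magnitude criterion of \citet{DBLP:journals/corr/HanPTD15}, a minimal sketch (ours, with an assumed pruning ratio, not the authors' implementation) that zeroes out the smallest-magnitude weights of a tensor could look as follows:
\begin{verbatim}
import torch

def magnitude_mask(weight: torch.Tensor, ratio: float) -> torch.Tensor:
    """Return a 0/1 mask keeping the (1 - ratio) largest-magnitude weights."""
    k = int(ratio * weight.numel())
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

w = torch.randn(64, 128)
mask = magnitude_mask(w, ratio=0.8)  # prune 80% of the weights
w_pruned = w * mask                  # pruned weights are frozen at zero
\end{verbatim}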
\subsubsection{Network Pruning in Adversarial Context.} Network pruning in adversarial context has been recently discussed in search of small and robust models \cite{wang2018adversarial,zhao2018compress,DBLP:journals/corr/abs-1903-12561,Sehwag2019}.
Several frameworks \cite{rakin2019robust,Madaan2019,gui2019model} have been proposed to adversarially train a neural network while constraining its size by pruning and/or quantization.
However, these works do not answer which pruning factors are important for robust networks, nor which pruning methods are effective.
The `lottery ticket hypothesis' \cite{DBLP:conf/iclr/FrankleC19} shows the existence of a sparse subnetwork (or `winning ticket') in a randomly initialized network that can reach comparable performance with the large network. Nevertheless, \citet{DBLP:journals/corr/abs-1903-12561} argues against the existence of `winning ticket' in adversarial settings. On the other hand, \citet{Cosentino2019} manages to acquire adversarial winning tickets of simple models without harming model robustness.
\citet{li2020towards} further proposes an optimized learning rate schedule to boost the search performance of lottery tickets, while demonstrating why \citet{DBLP:journals/corr/abs-1903-12561} fails to find them.
\citet{DBLP:conf/iclr/LiuSZHD19} claims that for network pruning in the natural setting, weights inherited by unstructured and predefined structured pruning may not be useful, as they may trap the pruned network in bad local minima. We show with experiments that weight inheritance improves robustness in the adversarial setting; we conjecture that this is because inverse weights inheritance embraces larger networks during training, which helps the optimization escape local minima and achieve better generalization performance. \citet{hein2017formal} proposes a formal guarantee of adversarial robustness in terms of the local Lipschitz constant. By building a bridge between the local Lipschitz constant and weight sparsity, \citet{guo2018sparse} considers that an appropriately higher weight sparsity on naturally trained networks implies higher robustness. \citet{dinh2020sparsity} also finds that an adversarially trained network with a sparser weight distribution tends to be more robust, such as EnResNet20 \cite{wang2019resnets}. Different from compression, \citet{dhillon2018stochastic} proposes dynamic sparsity as an approach to improve robustness. By supplementing the concept of `sparsity,' we provide empirical evidence of the link between robustness and sparsity, as well as training strategies to boost robustness.
\subsection{Inverse Weights Inheritance}
According to our experimental results on one-shot adversarial pruning, it seems that networks with smaller capacities (higher $L_0$-sparsity) can also have equivalent or even higher accuracy and robustness than large networks. This appears to be contradictory to the conclusion in \citet{DBLP:conf/iclr/MadryMSTV18} that classifying examples in a robust way requires the model to have a larger capacity, as the decision boundary is more complicated. We thus ask: {\em can a network be sparse and have a large capacity at the same time?} As we analyze, it is indeed possible to obtain such networks with superior performance.
We introduce a new training strategy called {\em inverse weights inheritance} (\textbf{IWI}), which is inspired by Thm.~\ref{theorem:lipschitz_sparsity} and adversarial network pruning results. By the strategy, a large network acquires sparse weights distribution by inheriting weights from a small robust network, which is pruned from the same large network in the first place and is adversarially trained. Alg.~\ref{alg:iwi} gives an example of using the lottery ticket to obtain such a small network. For a fair comparison, we train the base networks with Stop-C and Stop-E (240 epochs) and report the one with higher performance. To train the large network with inherited weights, we first run Alg.~\ref{alg:lottery_ticket} to obtain the `winning ticket' and then train the `winning ticket' (a small network) for 120 epochs. Then the weights of the trained `winning ticket' are loaded back to the large network to train for another 45 epochs (Stop-E) or until convergence (Stop-C). In Table~\ref{table:iwi}, the large network with inherited weights not only outperforms the `winning ticket' but also exceeds the base network.
To find out the reason, we measure the weight distributions of each network and partial results are given in Fig.~\ref{fig:weight_distribution}. It is clear that, with inherited weights as initialization, the distribution of the final weights for the large networks is sparser (closer to zero) than those with random initialization, which is in accord with Thm.~\ref{theorem:lipschitz_sparsity}. The results suggest that for networks with the same structure, IWI implicitly finds sparse weights distribution for the large networks, and the network can achieve an improved level of clean and adversarial accuracies. Moreover, it is evident that those networks are sparse and have large capacities at the same time.
Beyond the performance boost, IWI also accelerates the adversarial training process, mainly due to the lower expense of adversarially training a small network, and the fewer training epochs required after the large network inherits the weights. Details can be found in the supplementary material. We have also tried other methods, such as using an additional regularization term to impose sparsity in large networks, but they failed. Interested readers may refer to the supplementary material for more details. A minimal sketch of the IWI weight hand-off is given below.
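The sketch below shows only the weight hand-off; the adversarial training loops and the lottery-ticket search (Alg.~\ref{alg:lottery_ticket}) are as described above, and the two-layer model here is merely a stand-in:
\begin{verbatim}
import torch

# Illustrative IWI hand-off: both networks share one architecture, so the
# sparse ticket's state dict can initialize the large network directly.
def make_net():
    return torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(),
                               torch.nn.Linear(64, 10))

large, ticket = make_net(), make_net()

# ... adversarially train `ticket` under its pruning mask (120 epochs) ...

# The large network inherits the sparse weights as initialization and then
# continues adversarial training (45 epochs or until convergence).
large.load_state_dict(ticket.state_dict())
\end{verbatim}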
\iffalse
\YE{Above the performance boost and acceleration for adversarial training, we also found that the pruned network obtained by IWI was more robust than the state-of-the-art methods for model compression for adversarial robustness---adversarial training model compression method (ATMC, \cite{gui2019model}). We have achieved 71.14\% adversarial accuracy against the PGD-7 attack which is more than 30\% higher than the results reported in \cite{gui2019model}. This shows a practical application for lottery ticket hypothesis to compress models without hurting their adversarial robustness.}
\fi
\subsection{Implementation Details}
In this part, we describe the implementation details in examining adversarially robust network pruning. To obtain objective results, we mostly follow the experimental settings in previous works \cite{DBLP:conf/iclr/LiuSZHD19,DBLP:conf/icml/YangZXK19,DBLP:journals/corr/abs-1903-12561,DBLP:conf/icml/ZhangZ19}. Our experiments are carried out with PyTorch 1.0 on NVIDIA GeForce 2080 Ti GPUs.
\subsubsection{Datasets and Networks.} For the fairness of the results, we conduct experiments on CIFAR-10, Tiny-ImageNet, and CIFAR-100, which are representatives for small-scale datasets, large-scale datasets and datasets somewhere in between. Three state-of-the-art network architectures are chosen: VGG \cite{DBLP:journals/corr/SimonyanZ14a}, ResNet \cite{DBLP:conf/cvpr/HeZRS16}, and DenseNet \cite{DBLP:conf/cvpr/HuangLMW17} as the base large networks. A DenseNet-BC with depth $40$ and growth rate $k=12$ is also used.
\subsubsection{One-Shot Pruning Methods.} We pick four representative and intrinsically different pruning methods: Global Unstructured Pruning (\textbf{GUP}) \cite{DBLP:conf/iclr/FrankleC19}, Local Unstructured Pruning (\textbf{LUP}) \cite{DBLP:journals/corr/HanMD15}, Filter Pruning (\textbf{FP}) \cite{DBLP:conf/iclr/0022KDSG17} and Network Slimming (\textbf{NS}) \cite{DBLP:conf/iccv/LiuLSHYZ17}. LUP and GUP are unstructured pruning, whereas FP and NS are structured pruning. Both GUP and NS prune globally according to the importance of weights or channels across all convolutional layers, while LUP and FP prune an identical percentage of weights or filters per layer locally. FP is a predefined pruning method while GUP, LUP and NS are automatic pruning methods where the structure is determined by the pruning algorithm at runtime.
We conduct these pruning methods in a one-shot manner that removes the parameters in one step, followed by retraining to convergence. For all pruning methods, we re-implement each to achieve comparable performance with that reported in the current literature. For FP in ResNet, we conduct it on every two consecutive convolutional layers and skip the shortcuts according to \cite{DBLP:conf/iccv/LuoWL17}; it is not available on DenseNet, as pruning one filter would lead to input channel changes in all subsequent layers \cite{DBLP:conf/iclr/0022KDSG17,DBLP:conf/iclr/LiuSZHD19}. For NS, the highest pruning ratio is selected according to the maximum channel pruning ratio to avoid the removal of layers \cite{DBLP:conf/iccv/LiuLSHYZ17}. A sketch of the local versus global magnitude criteria is given below.
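To make the local/global distinction concrete, the following mask-based sketch (our illustration, not the exact implementation) contrasts the two magnitude criteria:
\begin{verbatim}
import torch

def local_masks(model, ratio):
    """LUP-style: prune `ratio` of the weights in each conv layer separately."""
    masks = {}
    for name, m in model.named_modules():
        if isinstance(m, torch.nn.Conv2d):
            w = m.weight.detach().abs().flatten()
            k = max(1, int(ratio * w.numel()))
            threshold = w.kthvalue(k).values
            masks[name] = (m.weight.abs() > threshold).float()
    return masks

def global_threshold(model, ratio):
    """GUP-style: one magnitude threshold shared by all conv layers."""
    w = torch.cat([m.weight.detach().abs().flatten()
                   for m in model.modules()
                   if isinstance(m, torch.nn.Conv2d)])
    return w.kthvalue(max(1, int(ratio * w.numel()))).values
\end{verbatim}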
\input{tables/advcom_three_datasets100}
\subsubsection{Adversarial Training and Evaluation.} We employ the widely used $ L_\infty\text{-PGD} $ adversary with $ \epsilon = 8/255 $ and step size $ 2/255 $ in our experiments. Following recent works \cite{Guo2020}, we utilize $10$ iterations for adversarial training, and evaluate robustness with $100$ iterations. For all training runs, we adopt an SGD optimizer with momentum of $ 0.9 $ and weight decay of $ 5\times 10^{-4} $. The batch sizes for CIFAR-10 and CIFAR-100 are both $128$. On Tiny-ImageNet, the batch size is $128$ for ResNet18 and $32$ for DenseNet121, following \citet{DBLP:conf/icml/ZhangZ19} and \citet{DBLP:conf/icml/YangZXK19}. The distortion bound of adversarial examples \cite{bastani2016measuring, salman2019convex} also serves as a robustness metric, which is estimated by searching for the minimum PGD $\epsilon$ that crafts a valid adversarial image on a given batch. We report the average of distortion bounds across all samples. A sketch of the attack is shown below.
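A standard sketch of the $L_\infty$-PGD adversary with the stated hyperparameters follows; the random start and the clamping of inputs to $[0,1]$ are common conventions we assume here:
\begin{verbatim}
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L_inf-PGD with random start; inputs are assumed to lie in [0, 1]."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
\end{verbatim}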
\subsubsection{Stopping Criteria.} Typically, it is not well-defined how to train models to `full convergence' when a stepwise decaying learning rate schedule is applied. Hence we adopt two stopping criteria indicating models have been sufficiently trained, for ease of comparison. \textbf{Stop-E} denotes that the network is trained for a fixed number of epochs. For CIFAR-10, CIFAR-100, and Tiny-ImageNet, we set the start learning rate to be $0.1$, $0.1$, and $0.01$, respectively. The learning rate is divided by $ 10 $ for every $1/3$ of the total epochs. \textbf{Stop-C} monitors the validation loss changes to automatically adjust the learning rate. For example, if we define the patience to be $ 10 $ epochs and the relative threshold to be $ 10^{-5} $, the learning rate only decays when the average validation loss does not decrease by more than $ 0.001\%$ for $10$ consecutive epochs. Models stop training after $2$ learning rate decays; a sketch is given below.
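A runnable sketch of Stop-C using PyTorch's plateau scheduler; the synthetic loss sequence and the placeholder model merely stand in for the real validation loop:
\begin{verbatim}
import torch

model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.1, patience=10,
    threshold=1e-5, threshold_mode='rel')

decays, last_lr, epoch = 0, optimizer.param_groups[0]['lr'], 0
while decays < 2:
    # Synthetic validation loss that plateaus after epoch 30.
    val_loss = 1.0 / (1 + 0.01 * min(epoch, 30))
    scheduler.step(val_loss)
    epoch += 1
    lr = optimizer.param_groups[0]['lr']
    if lr < last_lr:               # count learning rate decays
        decays, last_lr = decays + 1, lr
print(f"stopped after {epoch} epochs at lr={lr:.4f}")
\end{verbatim}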
\subsection{Sparsity and Robustness}
In this section, we theoretically prove that a sparser weight distribution indicates an improved level of robustness. In the theoretical derivation, we assume DNNs with ReLU activation functions, but the conclusion can be generalized to a variety of models, as we verify by experiments.
We focus on nonlinear DNNs with ReLU activation functions for classification tasks as an example to study the connection between sparsity and robustness. Let us consider a multi-layer perceptron $g\left(\cdot\right)$ trained with a labeled training dataset $\{\left(x_i, y_i\right)\}$. The $j$-th layer of the network is parameterized by a weight matrix $ W_j \in \mathbb{R}^{n_{j-1}\times n_j} $, and $ w_k = W_d[:,k]$ represents the weights associated with the $k$-th class in the final layer. $ \sigma $ denotes the ReLU function. Then the prediction score of $x_i$ for class $k$ can be denoted as
\begin{equation}
\label{equation:perceptron}
g_k\left(x_i\right) = w_k^T \sigma \left(W_{d-1}^T \sigma \left( ... \sigma \left( W_1^T x_i \right) \right) \right).
\end{equation}
Let $\hat{y}=\arg\max_{k\in\{1,...,c\}}g_k\left(x\right)$ denote the class with the highest prediction score. Assuming the classifier is Lipschitz continuous, the local Lipschitz constant of the function $g_{\hat{y}}\left(x\right)-g_k\left(x\right)$ over the neighborhood of $x$ is defined as $ L^k_{q,x} = \max_{x' \in B_p\left(x,R\right)} \Vert \nabla g_{\hat{y}}(x')-\nabla g_k(x') \Vert_q$, where $B_p\left(x,R\right)$ denotes a ball centered at $x$ with radius $R$ under the $L_p$ norm. Previous works \cite{hein2017formal,guo2018sparse} have associated robustness with the local Lipschitz constant by the following theorem:
\begin{theorem}
\label{theorem:robustness_guarantee}
\cite{hein2017formal,guo2018sparse} Let $\hat{y}=\arg\max_{k\in\{1,...,c\}}g_k\left(x\right)$ and $\frac{1}{p}+\frac{1}{q}=1$. For any perturbation $\delta_x\in B_p\left(0,R\right)$, $p \in \mathbb{R}^+$ and a set of Lipschitz continuous functions $\{g_k:\mathbb{R}^n\mapsto\mathbb{R}\}$, the classification decision on $x'$ will not change with
\begin{equation}
\label{equation:robustness_guarantee}
\left\Vert\delta_x\right\Vert_p \le \min\left\{ \min_{k\ne\hat{y}} \frac{g_{\hat{y}}\left(x\right)-g_k\left(x\right)}{L^k_{q,x}},\, R\right\},
\end{equation}
where $ L^k_{q,x} = \max_{x' \in B_p\left(x,R\right)} \Vert \nabla g_{\hat{y}}(x')-\nabla g_k(x') \Vert_q$.
\end{theorem}
Eqn.~\eqref{equation:robustness_guarantee} clearly depicts the relation between robustness and the local Lipschitz constant: a smaller $L^k_{q,x}$ represents a higher level of robustness, as a larger distortion can be tolerated without changing the prediction. \citet{guo2018sparse} further gives the relation between the local Lipschitz constant and the weights. We further deduce that the relation satisfies the following theorem:
\begin{theorem}
\label{theorem:lipschitz_sparsity}
(The robustness and weights distribution of ReLU networks.) Letting $\frac{1}{p}+\frac{1}{q}=1$, for any $x \in \mathbb{R}^n $, $k \in \{1,...,c\}$ and $q \in \{1,2\}$, the local Lipschitz constant of function $ g_{\hat{y}}\left(x\right)-g_k\left(x\right) $ satisfies
\begin{equation}
\label{equation:lipschitz_sparsity}
L^k_{q,x} \le \left\Vert w_{\hat{y}}-w_k \right\Vert_q
\prod_{j=1}^{d-1}\left({\left\Vert W_j \right\Vert_p} \right).
\end{equation}
\end{theorem}
Note that the local Lipschitz constant is upper bounded by the product of the $L_{p}$-norms of the weight matrices. That is to say, if $\left\Vert W_j \right\Vert_p$ is small, $L^k_{q,x}$ is constrained to be small, leading to a higher level of robustness. The proof of Thm.~\ref{theorem:lipschitz_sparsity} is omitted here due to the space constraint, and we refer readers to the supplementary document for the detailed proof.
We have at least two interpretations of Thm.~\ref{theorem:lipschitz_sparsity}: if we let $p=0$, the right side of Eqn.~\eqref{equation:lipschitz_sparsity} is bounded by the number of non-zero weights of the model, and hence the lower the proportion of non-zero weights, the more robust the model is. On the other hand, a smaller value of $\left\Vert W_j \right\Vert_p$ suggests the distribution of weights is closer to zero. This indicates that if a model has a weight distribution closer to zero, it may be more robust than other models with the same structure. We will respectively show how the two points are supported by the experimental results; a toy numerical illustration is sketched below.
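For instance, the following toy sketch (illustrative only, not the paper's evaluation code) evaluates the bound of Eqn.~\eqref{equation:lipschitz_sparsity} for a random two-layer ReLU network with $p=q=2$ and shows that shrinking the weights shrinks the bound:
\begin{verbatim}
import torch

torch.manual_seed(0)
W1 = torch.randn(32, 64)   # hidden layer weights
W2 = torch.randn(64, 10)   # output layer; columns are the class weights w_k

def lipschitz_bound(W1, W2, y_hat=0, k=1):
    """||w_yhat - w_k||_q * prod_j ||W_j||_p with p = q = 2 (here d = 2)."""
    w_diff = W2[:, y_hat] - W2[:, k]
    return (w_diff.norm(2) * torch.linalg.matrix_norm(W1, ord=2)).item()

print("original bound    :", lipschitz_bound(W1, W2))
print("weights scaled 0.5:", lipschitz_bound(0.5 * W1, 0.5 * W2))
\end{verbatim}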
\subsection{Lottery Tickets in Adversarial Settings}
We investigate whether, in a randomly-initialized large network, there exists a subnetwork achieving comparable robustness to the large one, which is also known as the `winning ticket' in \citet{DBLP:conf/iclr/FrankleC19} in the natural setting. More specifically, we perform Alg.~\ref{alg:lottery_ticket} to find the `winning ticket' in the adversarial setting; a minimal sketch is given below. A discussion of hyperparameters can be found in the supplementary material.
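A minimal sketch of the iterative search (global magnitude pruning with weight rewinding as in \citet{DBLP:conf/iclr/FrankleC19}; the adversarial training routine \texttt{adv\_train} is passed in as an assumed callable):
\begin{verbatim}
import copy
import torch

def find_winning_ticket(model, adv_train, rounds=5, ratio=0.2):
    """Iterative global magnitude pruning with rewinding to initialization."""
    init_state = copy.deepcopy(model.state_dict())
    params = dict(model.named_parameters())
    masks = {n: torch.ones_like(p) for n, p in params.items()
             if p.dim() > 1}                      # skip biases / BN params
    for _ in range(rounds):
        adv_train(model, masks)                   # PGD training under masks
        # Global magnitude criterion over all still-alive weights.
        alive = torch.cat([params[n].detach().abs().flatten()[m.flatten() > 0]
                           for n, m in masks.items()])
        threshold = alive.kthvalue(int(ratio * alive.numel())).values
        for n, m in masks.items():
            masks[n] = m * (params[n].detach().abs() > threshold).float()
        model.load_state_dict(init_state)         # rewind surviving weights
    return masks
\end{verbatim}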
The results on CIFAR-10 and CIFAR-100 are displayed in Table~\ref{table:lottery_ticket}. We mark the results with comparable performance to the base large networks in bold. On ResNet18 and VGG16 trained on CIFAR-10, no noticeable performance degradation occurs when the pruning ratio is as high as $ 80\% $. This is slightly different from pruning on natural models \cite{DBLP:conf/iclr/FrankleC19}, where accuracies do not drop until the pruning ratio reaches around $ 88.2\% $ and $ 92 \% $ on ResNet18 and VGG16, respectively. We think the difference may be explained by the more complicated decision boundary of a robust model (in theory, a model with a higher Rademacher complexity is needed to achieve adversarial robustness), and hence its `winning ticket' requires a higher capacity.
To better understand lottery tickets in adversarial settings, we compare the weight distributions of the one-shot pruned model and the winning ticket at the same pruning ratio. Fig.~\ref{fig:lottery_gup} illustrates an example of two models pruned at the same pruning ratio by GUP and Alg.~\ref{alg:lottery_ticket}, respectively, on CIFAR10, with adversarial accuracy 47.09\% versus 47.36\% on ResNet18, and 44.36\% versus 45.15\% on VGG16.
As we observe, whereas GUP models tend to have a flatter distribution, which is consistent with \citet{DBLP:journals/corr/abs-1903-12561}, the winning tickets have more near-zero valued weights, indicating a higher level of sparsity. Thus we conclude that preferable adversarial robustness can be achieved in the lottery ticket setting.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{figures/lottery_gup.pdf}
\caption{Weights distribution example of the pruned network obtained by one-shot \textbf{GUP} and \textbf{adversarial lottery} at the same pruning ratio. Note that we have a logarithmic y-axis such that the near-zero values are highly dense in the upper part of the figure. The distribution indicates that the adversarial winning tickets have higher sparsity than corresponding GUP pruned models. }
\label{fig:lottery_gup}
\end{figure}
\subsubsection{Comparison with previous results.}
\citet{DBLP:journals/corr/abs-1903-12561} argues against the existence of the `winning ticket' in adversarial settings. Nevertheless, through experiments we show that the `winning ticket' exists in adversarial settings and can be obtained efficiently with a few rounds of pruning and less retraining. Our conclusion is different mostly because we search for the `winning ticket' by iterative global unstructured pruning as in \citet{DBLP:conf/iclr/FrankleC19}, while \citet{DBLP:journals/corr/abs-1903-12561} uses a layer-wise pruning method. As indicated in \citet{DBLP:conf/iclr/FrankleC19}, layers with fewer parameters may become bottlenecks under a layer-wise pruning method, and thus winning tickets fail to emerge. We also compare our work with \citet{li2020towards}, and find that the few-shot pruning in \citet{li2020towards} does not outperform the iterative pruning results in our setting.
We also plot the results in Table \ref{table:advcom_three_datasets} and Table \ref{table:lottery_ticket} by showing the relation between the number of parameters of the pruned models and the adversarial accuracy in Fig.~\ref{fig:nparam}. By comparing with recent works including RobNet \cite{Guo2020} and ATMC \cite{gui2019model} under the same training and testing metrics, which are PGD-10 and PGD-100, respectively, we demonstrate that our approach is able to acquire smaller networks with robustness comparable to the original dense models through adversarial network pruning, and that it is effective across different model structures including ResNet, VGG, and DenseNet.
\section{Study of Robustness and Network Pruning}
\label{section:theory}
\input{story/theory}
\section{Performance Evaluation}
\label{section:experiment}
\input{story/implement}
\input{story/advprune}
\input{story/lottery}
\input{story/iwi}
\subsection{Adversarial Network Pruning Improves Robustness by Imposing Higher Sparsity}
\label{section:story_1}
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/weight_distribution_reset_inherit_resnet18_half.pdf}
\caption{Weights distribution of the pruned network adversarially trained with \textbf{inherited weights} or \textbf{randomly initialized} weights. In general, networks with inherited weights from automatic pruning methods including LUP, GUP, NS have an equivalent or higher sparsity than their counterparts with randomly initialized weights. FP has lower sparsity than FP-rand.}
\label{fig:weight_distribution_reset_inherit}
\end{figure*}
Although Thm.~\ref{theorem:lipschitz_sparsity} establishes a preliminary link between sparsity and robustness, it does not tell us how to achieve sparsity, and therefore robustness, in practice. An intuitive way is to prune a network to reduce the number of non-zero weights of the model, which is also done in \cite{guo2018sparse} but only in the natural setting. We show in the following that pruning also works in the adversarial setting. Beyond that, we find that adversarial retraining after pruning mostly improves robustness, at a sparser weight distribution than models with the same structure.
We first adversarially train each base network until reaching the state-of-the-art clean and adversarial accuracy, and then prune each network by different means.
\iffalse
When networks are pruned, we immediately test their accuracies on clean and adversarial samples. The results on CIFAR-10 and Tiny-ImageNet are shown in Table~\ref{table:accuracy_without_retrain2}. Please refer to the supplementary material for the results on CIFAR-100. Compared to the performance on the base network (marked under the model name), the pruned network only moderately suffers from accuracy loss when the pruning ratio is not very high, and the pruning methods are automatic, {\em i.e.,} LUP, GUP, and NS. Note that those methods automatically extract a subnetwork without altering the weights, resulting in a higher $L_0$ sparsity compared to base models. Considering pruning reduces model capacity, the mild performance loss is reasonable.
\fi
Although pruning is a promising way to introduce sparsity, it does not always result in robust models. We hence impose adversarial retraining on pruned networks to enhance robustness. The results are provided in Table~\ref{table:advcom_three_datasets}. Since there is a tradeoff between accuracy and robustness \cite{DBLP:conf/icml/ZhangYJXGJ19}, and some models tend to sacrifice one for the other, we choose to report the performance where the sum of adversarial accuracy and clean accuracy is the highest. The distortion bound is also reported for a complete view. We refer readers to the supplementary material for further discussions on the results.
\input{tables/compare_prune_and_scratch100}
Most networks in Table~\ref{table:advcom_three_datasets} obtain higher accuracy and robustness than pruning without retraining, and a large proportion of them can achieve better performance than the base networks. Specifically, LUP and NS only suffer notable performance degradation at high pruning ratios, whereas GUP maintains remarkably high performance across all pruning ratios. FP cannot preserve network performance well.
To see whether the weights inherited from a large network help the pruned network converge, we conduct a series of comparison experiments, as shown in Table~\ref{table:compare_prune_and_scratch}. Compared to FP, FP-rand initializes a small network with the same structure as the corresponding pruned network. For automatic pruning methods including LUP, GUP, and NS, we re-use the pruned network structure with re-initialized weights. As we found, compared with FP-rand, FP provides little or no improvement with the inherited weights. On the contrary, automatic pruning with inherited weights almost always performs better than that with randomly initialized weights.
Although Table~\ref{table:advcom_three_datasets} and Table~\ref{table:compare_prune_and_scratch} experimentally identify effective methods and factors for gaining robustness in pruned networks, it still remains unclear how they relate to sparsity. Interestingly, by examining the weight distributions after adversarial retraining, we found that most automatically pruned networks with inherited weights have similar or higher sparsity than those with randomly initialized weights, as shown by the examples in Fig.~\ref{fig:weight_distribution_reset_inherit}, while the networks pruned by predefined pruning (FP) show the opposite trend. This can be explained by Thm.~\ref{theorem:lipschitz_sparsity}, since a weight distribution closer to zero implies higher robustness. Therefore, weight inheritance and adversarial retraining implicitly provide a way to obtain sparse networks; a simple sparsity probe is sketched below.
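A simple probe of this `closer to zero' notion of sparsity (the threshold $\tau$ is an arbitrary choice for illustration, and the small model is a placeholder):
\begin{verbatim}
import torch

def near_zero_fraction(model, tau=1e-2):
    """Fraction of weights with |w| < tau: a crude proxy for how much of
    the weight distribution is concentrated near zero."""
    w = torch.cat([p.detach().abs().flatten()
                   for p in model.parameters() if p.dim() > 1])
    return (w < tau).float().mean().item()

model = torch.nn.Sequential(torch.nn.Linear(100, 50), torch.nn.ReLU(),
                            torch.nn.Linear(50, 10))
print(f"near-zero fraction: {near_zero_fraction(model):.3f}")
\end{verbatim}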
\subsubsection{Comparison with previous results.}
We also compare our conclusion with previous works and summarize the differences as follows. We find that weights inherited by automatic pruning (LUP, GUP, NS) provide better initialization for small networks, while predefined pruning does not. \citet{DBLP:conf/iclr/LiuSZHD19} argues that weights inherited from structured pruning have little impact on the performance of the pruned network. While the experiments on FP agree with this conclusion, those on NS do not. \citet{wang2018adversarial} also suggests inherited weights are important to preserving network accuracy and robustness in adversarial settings, but does not discuss the working mechanism behind it.
\section{Proof of Theorem 3.2}
Let us denote layer-wise activation output as $ a_0=x $ and $ a_j = \sigma \left( W_j^T a_{j-1} \right) $ for $ 0 \le j \le d-1 $. We also define $ D_j $ as
\begin{equation}
\label{equation:activation}
D_j\left(x\right) := diag \left( 1_{W_j[:,1]^T a_{j-1}>0}, ..., 1_{W_j[:,n_j]^T a_{j-1}>0} \right)
\end{equation}
which is a diagonal matrix whose entries will take value one when the activation is nonzero within $j$-th layer. Note that if for any $x$ and any $j \in \{1,...,d-2\}$, $D_j(x)=0^{n_j \times n_j}$, $D_{d-1}(x)=0^{n_{d-1} \times n_{d-1}}$ must hold. In this setting, we have Lemma \ref{lemma:lipschitz_sparsity_with_D} proved by \cite{guo2018sparse}.
\begin{lemma}
\label{lemma:lipschitz_sparsity_with_D}
(A local Lipschitz constant for ReLU networks \cite{guo2018sparse}) Letting $\frac{1}{p}+\frac{1}{q}=1$, for any $x \in \mathbb{R}^n $, $k \in \{1,...,c\}$ and $q \in \{1,2\}$, the local Lipschitz constant of function $ g_{\hat{y}}\left(x\right)-g_k\left(x\right) $ satisfies
\begin{equation}
\label{equation:lipschitz_sparsity_with_D}
L^k_{q,x} \le \left\Vert w_{\hat{y}}-w_k \right\Vert_q\sup_{x'\in B_p\left(x,R\right)}\prod_{j=1}^{d-1}
\left( {\left\Vert D_j\left(x'\right) \right\Vert_p} {\left\Vert W_j \right\Vert_p} \right)
\end{equation}
where all the matrix norms are induced norms.
\end{lemma}
In our settings, the network is pruned to avoid layer removal and thereby for any $j \in [0, d-1]$, $W_{j}$ cannot be all-zero matrices. Then we prove for any $j \in [0, d-1]$, there must be at least one $a_{j}$ which is not an all-zero matrix. We prove that by contradiction. Assume that for layer $j$, $a_{j}$ is an all-zero matrix. Then $D_{j}(x) = 0^{n_{d-1} \times n_{d-1}}$ which yields $D_{d-1}(x)=0^{n_{d-1} \times n_{d-1}}$ and the prediction score $g_k(x)$ will be all-zero for any $k$ since
\begin{equation}
\label{equation:perceptron}
g_k\left(x_i\right) = w_k^T \sigma \left(W_{d-1}^T \sigma \left( ... \sigma \left( W_1^T x_i\right) \right) \right).
\end{equation}
It is impossible for a well-trained network to make all-zero predictions, which leads to a contradiction. Hence the assumption is not true and $a_{j}, \forall j\in [0, d-1] $ cannot be an all-zero matrix. The conclusion implies that, for any $j\in [0, d-1]$, there must be at least one entry in $ D_{j}(x) $ taking the value $1$. By the definition of matrix induced norms, we have that for any $x$ and particular values of $p$,
\begin{equation}
\label{equation:L_2_norm}
\left\Vert D_j\left(x\right) \right\Vert_2 := \sigma_{\max}(D_j) = 1
\end{equation}
\begin{equation}
\label{equation:L_inf_norm}
\left\Vert D_j\left(x\right) \right\Vert_\infty := \max_{1\le i \le n_j}\sum_{m=1}^{n_j}\vert D_j[i,m]\vert = 1
\end{equation}
where $\left\Vert D_j\left(x\right) \right\Vert_2$ is the spectral norm, the greatest singular value of $D_j\left(x\right)$. Then for $p = 2$ and $\infty$, the right side of Eq.~\eqref{equation:lipschitz_sparsity_with_D} is independent of input $ x $ and we have the relation between the Lipschitz constant and weight sparsity described by Theorem 3.2.
\section{Related Work}
\subsubsection{Adversarial Training.} Adversarial training and its variants are proposed to improve network robustness against adversarial examples \cite{DBLP:conf/iclr/KurakinGB17}. \citet{DBLP:conf/iclr/MadryMSTV18} motivates projected gradient descent (PGD) as a universal `first-order adversary,' and optimizes the saddle point formulation to train a robust network. \citet{DBLP:journals/corr/abs-1905-09747} observes robustness can transfer between networks by knowledge distillation, and such transfer can even improve the robustness of the student network. Following the convention, we adopt $L_\infty\text{-PGD}$ attack \cite{DBLP:conf/iclr/MadryMSTV18}, {\em i.e.,} the strongest attack utilizing the local first-order information of the network, both in adversarial training strategy and the robustness evaluations.
\subsubsection{Network Pruning Methods.} Network pruning methods related to this paper can be divided into two categories: structured pruning and unstructured pruning. Structured pruning prunes a network at the level of filters \cite{DBLP:conf/ijcai/2018,DBLP:conf/iclr/0022KDSG17,DBLP:conf/iccv/LuoWL17}, channels \cite{DBLP:conf/iccv/LiuLSHYZ17} or columns \cite{wen2016learning}, depending on their respective importance.
The importance of a filter or a channel can be determined by the norm of the weights \cite{DBLP:conf/iclr/0022KDSG17} or the channel scaling factor \cite{ye2018rethinking,DBLP:conf/iccv/LiuLSHYZ17} (sometimes the scaling factor in batch normalization layers). The unstructured pruning \cite{lecun1990optimal,hassibi1993second} prunes at the level of individual weight according to the Hessian matrix of the loss function. \citet{DBLP:journals/corr/HanPTD15} proposes to prune weights with small magnitude, and the compression ratio is further enhanced in \citet{DBLP:journals/corr/HanMD15} by quantization and Huffman coding. By incorporating non-negative stochastic gates, \citet{louizos2017learning} turns network pruning into an optimization problem with $ L_0 $-norm regularization. We pick representative structured and unstructured pruning methods to implement in our experiments.
\subsubsection{Network Pruning in Adversarial Context.} Network pruning in adversarial context has been recently discussed in search of small and robust models \cite{wang2018adversarial,zhao2018compress,DBLP:journals/corr/abs-1903-12561,Sehwag2019}.
Several frameworks \cite{rakin2019robust,Madaan2019,gui2019model} have been proposed to adversarially train a neural network while constraining its size by pruning and/or quantization.
However, these works do not answer which pruning factors are important for robust networks, nor which pruning methods are effective.
The `lottery ticket hypothesis' \cite{DBLP:conf/iclr/FrankleC19} shows the existence of a sparse subnetwork (or `winning ticket') in a randomly initialized network that can reach comparable performance with the large network. Nevertheless, \citet{DBLP:journals/corr/abs-1903-12561} argues against the existence of `winning ticket' in adversarial settings. On the other hand, \citet{Cosentino2019} manages to acquire adversarial winning tickets of simple models without harming model robustness.
\citet{li2020towards} further proposed an optimized learning rate schedule to boost the searching performance of lottery tickets, while demonstrating why \citet{DBLP:journals/corr/abs-1903-12561} fails to find them.
\citet{DBLP:conf/iclr/LiuSZHD19} claims that for network pruning in the natural setting, weights inherited by unstructured and predefined structured pruning may not be useful, as it may trap the pruned network to bad local minima. We show with experiments that weight inheritance improves the robustness in the adversarial setting, which we conjecture that this is because the inverse weights inheritance embrace larger networks during training, which can help jump out of local minima and achieve better generalization performance. \citet{hein2017formal} proposes a formal guarantee of adversarial robustness in terms of the local Lipschitz constant. By building a bridge between the local Lipschitz constant and weight sparsity, \citet{guo2018sparse} considers that an appropriately higher weight sparsity on naturally trained networks implies higher robustness. \citet{dinh2020sparsity} also finds an adversarially trained network with sparser weights distribution tends to be more robust, such as EnResNet20 \cite{wang2019resnets}. Different from compression, \citet{dhillon2018stochastic} proposes dynamic sparsity as an approach to improve robustness. By supplementing the concept of `sparsity,' we found empirical evidence of the link between robustness and sparsity, as well as training strategies to boost robustness.
\subsection{Inverse Weights Inheritance}
According to our experimental results in one-shot adversarial pruning, it seems that networks with smaller capacities (higher $L_0$-sparsity) can also have an equivalent or even higher accuracy and robustness than large networks. This appears to be contradictory to the conclusion in \citet{DBLP:conf/iclr/MadryMSTV18} that classifying examples in a robust way requires the model to have a larger capacity, as the decision boundary is more complicated. We ask the question that, {\em can a network be sparse and have larger capacity at the same time?} As we analyze, it is indeed possible to have such networks with superior performance.
We introduce a new training strategy called {\em inverse weights inheritance} (\textbf{IWI}), which is inspired by Thm.~\ref{theorem:lipschitz_sparsity} and adversarial network pruning results. By the strategy, a large network acquires sparse weights distribution by inheriting weights from a small robust network, which is pruned from the same large network in the first place and is adversarially trained. Alg.~\ref{alg:iwi} gives an example of using the lottery ticket to obtain such a small network. For a fair comparison, we train the base networks with Stop-C and Stop-E (240 epochs) and report the one with higher performance. To train the large network with inherited weights, we first run Alg.~\ref{alg:lottery_ticket} to obtain the `winning ticket' and then train the `winning ticket' (a small network) for 120 epochs. Then the weights of the trained `winning ticket' are loaded back to the large network to train for another 45 epochs (Stop-E) or until convergence (Stop-C). In Table~\ref{table:iwi}, the large network with inherited weights not only outperforms the `winning ticket' but also exceeds the base network.
To find out the reason, we measure the weight distributions of each network and partial results are given in Fig.~\ref{fig:weight_distribution}. It is clear that, with inherited weights as initialization, the distribution of the final weights for the large networks is sparser (closer to zero) than those with random initialization, which is in accord with Thm.~\ref{theorem:lipschitz_sparsity}. The results suggest that for networks with the same structure, IWI implicitly finds sparse weights distribution for the large networks, and the network can achieve an improved level of clean and adversarial accuracies. Moreover, it is evident that those networks are sparse and have large capacities at the same time.
Beyond performance boost, IWI also accelerates the adversarial training process, mainly due to the lower expense of adversarially training a small network, and less training epochs required after the large network inheriting weights. Details can be found in the supplementary material. We have also tried other methods, such as using an additional regularization term to impose sparsity in large networks, but it failed. Interested readers may refer to the supplementary material for more details.
\iffalse
\YE{Above the performance boost and acceleration for adversarial training, we also found that the pruned network obtained by IWI was more robust than the state-of-the-art methods for model compression for adversarial robustness---adversarial training model compression method (ATMC, \cite{gui2019model}). We have achieved 71.14\% adversarial accuracy against the PGD-7 attack which is more than 30\% higher than the results reported in \cite{gui2019model}. This shows a practical application for lottery ticket hypothesis to compress models without hurting their adversarial robustness.}
\fi
\subsection{Implementation Details}
In this part, we describe the implementation details in examining adversarially robust network pruning. To obtain objective results, we mostly follow the experimental settings in previous works \cite{DBLP:conf/iclr/LiuSZHD19,DBLP:conf/icml/YangZXK19,DBLP:journals/corr/abs-1903-12561,DBLP:conf/icml/ZhangZ19}. Our experiments are carried out with PyTorch 1.0 on NVIDIA GeForce 2080 Ti GPUs.
\subsubsection{Datasets and Networks.} For fairness of the results, we conduct experiments on CIFAR-10, Tiny-ImageNet, and CIFAR-100, which are representative of small-scale datasets, large-scale datasets, and datasets in between, respectively. Three state-of-the-art network architectures are chosen as the base large networks: VGG \cite{DBLP:journals/corr/SimonyanZ14a}, ResNet \cite{DBLP:conf/cvpr/HeZRS16}, and DenseNet \cite{DBLP:conf/cvpr/HuangLMW17}. A DenseNet-BC with depth $40$ and growth rate $k=12$ is also used.
\subsubsection{One-Shot Pruning Methods.} We pick four representative and intrinsically different pruning methods: Global Unstructured Pruning (\textbf{GUP}) \cite{DBLP:conf/iclr/FrankleC19}, Local Unstructured Pruning (\textbf{LUP}) \cite{DBLP:journals/corr/HanMD15}, Filter Pruning (\textbf{FP}) \cite{DBLP:conf/iclr/0022KDSG17} and Network Slimming (\textbf{NS}) \cite{DBLP:conf/iccv/LiuLSHYZ17}. LUP and GUP are unstructured pruning, whereas FP and NS are structured pruning. Both GUP and NS prune globally according to the importance of weights or channels across all convolutional layers, while LUP and FP prune an identical percentage of weights or filters per layer locally. FP is a predefined pruning method while GUP, LUP and NS are automatic pruning methods where the structure is determined by the pruning algorithm at runtime.
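To illustrate the difference between local and global unstructured pruning, the following minimal sketch uses the \texttt{torch.nn.utils.prune} utilities (available in PyTorch versions later than the 1.0 used in our experiments); the toy model and the $80\%$ ratio are illustrative only.
\begin{verbatim}
import copy
import torch.nn as nn
import torch.nn.utils.prune as prune

def convs(model):
    return [m for m in model.modules() if isinstance(m, nn.Conv2d)]

base = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(),
                     nn.Conv2d(16, 32, 3))

# LUP: prune an identical fraction of weights in every layer
lup = copy.deepcopy(base)
for m in convs(lup):
    prune.l1_unstructured(m, name="weight", amount=0.8)

# GUP: rank weights across all layers, prune the globally smallest
gup = copy.deepcopy(base)
prune.global_unstructured([(m, "weight") for m in convs(gup)],
                          pruning_method=prune.L1Unstructured,
                          amount=0.8)
\end{verbatim}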
We conduct these pruning methods in a one-shot manner, removing the parameters in a single step followed by retraining to convergence. For all pruning methods, we re-implement each to achieve performance comparable to that reported in the current literature. For FP in ResNet, we prune every two consecutive convolutional layers and skip the shortcuts, following \cite{DBLP:conf/iccv/LuoWL17}; FP is not applicable to DenseNet, as pruning one filter would lead to input channel changes in all subsequent layers \cite{DBLP:conf/iclr/0022KDSG17,DBLP:conf/iclr/LiuSZHD19}. For NS, the highest pruning ratio is selected according to the maximum channel pruning ratio to avoid the removal of entire layers \cite{DBLP:conf/iccv/LiuLSHYZ17}.
\input{tables/advcom_three_datasets100}
\subsubsection{Adversarial Training and Evaluation.} We employ the widely used $ L_\infty\text{-PGD} $ adversary with $ \epsilon = 8/255, \text{step size} = 2/255 $ in our experiments. Following recent works \cite{Guo2020}, we use $10$ iterations for adversarial training and evaluate robustness with $100$ iterations. For all trainings, we adopt an SGD optimizer with momentum of $ 0.9 $ and weight decay of $ 5\times 10^{-4} $. The batch sizes for CIFAR-10 and CIFAR-100 are both $128$. On Tiny-ImageNet, the batch size is $128$ for ResNet18 and $32$ for DenseNet121, following \citet{DBLP:conf/icml/ZhangZ19} and \citet{DBLP:conf/icml/YangZXK19}. The distortion bound of adversarial examples \cite{bastani2016measuring, salman2019convex} also serves as a robustness metric; it is estimated by searching for the minimum PGD $\epsilon$ that crafts a valid adversarial image on a given batch. We report the average of the distortion bounds across all samples.
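A minimal sketch of the $L_\infty$-PGD adversary with the parameters above (with a random start, which we assume here; our exact implementation may differ in minor details):
\begin{verbatim}
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, iters=10):
    # iters=10 for adversarial training, iters=100 for evaluation
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()            # ascend the loss
            delta.clamp_(-eps, eps)                       # stay in eps-ball
            delta.copy_(torch.clamp(x + delta, 0, 1) - x) # valid image range
        delta.grad.zero_()
    return (x + delta).detach()
\end{verbatim}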
\subsubsection{Stopping Criteria.} Typically, it is not well defined how to train models to `full convergence' when a stepwise-decaying learning rate schedule is applied. Hence we adopt two stopping criteria indicating that models have been sufficiently trained, for ease of comparison. \textbf{Stop-E} denotes that the network is trained for a fixed number of epochs. For CIFAR-10, CIFAR-100, and Tiny-ImageNet, we set the starting learning rate to $0.1$, $0.1$, and $0.01$, respectively. The learning rate is divided by $ 10 $ after every $1/3$ of the total epochs. \textbf{Stop-C} monitors the validation loss changes to automatically adjust the learning rate. For example, if we define the patience to be $ 10 $ epochs and the relative threshold to be $ 10^{-5} $, the learning rate only decays when the average validation loss does not decrease by more than $ 0.001\%$ for $10$ consecutive epochs. Models stop training after $2$ learning rate decays.
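Stop-C can be realised, for instance, with PyTorch's \texttt{ReduceLROnPlateau} scheduler; the following sketch mirrors the example values above (patience $10$, relative threshold $10^{-5}$):
\begin{verbatim}
from torch.optim.lr_scheduler import ReduceLROnPlateau

def make_stop_c_scheduler(optimizer):
    # decay the LR by 10x when the average validation loss has not
    # improved by more than a 1e-5 relative threshold for 10 epochs
    return ReduceLROnPlateau(optimizer, mode="min", factor=0.1,
                             patience=10, threshold=1e-5,
                             threshold_mode="rel")
\end{verbatim}
Calling \texttt{scheduler.step(val\_loss)} once per epoch implements the criterion; training stops after the learning rate has decayed twice.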
\subsection{Sparsity and Robustness}
In this section, we theoretically prove that a sparser weights distribution indicates an improved level of robustness. In the theoretical derivation, we assume DNNs with ReLU activation functions, but the conclusion can be generalized to a variety of models, as we verify by experiments.
We focus on nonlinear DNNs with ReLU activation functions for classification tasks as an example to study the connection between sparsity and robustness. Let us consider a multi-layer perceptron $g\left(\cdot\right)$ trained with labeled training datasets $\{\left(x_i, y_i\right)\}$. Each layer of the network is parameterized by a weight matrix $ W_d \in \mathbb{R}^{n_{d-1}\times n_d} $ and $ w_k = W_d[:,k]$ represents the weights associated with the $k$-th class in the final layer. $ \sigma $ denotes the ReLU function. Then the prediction scores of $x_i$ for class $k$ can be denoted as
\begin{equation}
\label{equation:perceptron}
g_k\left(x_i\right) = w_k^T \sigma \left(W_{d-1}^T \sigma \left( ... \sigma \left( W_1^T x_i \right) \right) \right).
\end{equation}
Let $\hat{y}=\arg\max_{k\in\{1,...,c\}}g_k\left(x\right)$ denote the class with the highest prediction score. Assuming the classifier is Lipschitz continuous, the local Lipschitz constant of the function $g_{\hat{y}}\left(x\right)-g_k\left(x\right)$ over the neighborhood of $x$ is defined as $ L^k_{q,x} = \max_{x' \in B_p\left(x,R\right)} \Vert \nabla g_{\hat{y}}(x')-\nabla g_k(x') \Vert_q$, where $B_p\left(x,R\right)$ denotes a ball centered at $x$ with radius $R$ under the $L_p$ norm. Previous works \cite{hein2017formal,guo2018sparse} have associated robustness with the local Lipschitz constant by the following theorem:
\begin{theorem}
\label{theorem:robustness_guarantee}
\cite{hein2017formal,guo2018sparse} Let $\hat{y}=\arg\max_{k\in\{1,...,c\}}g_k\left(x\right)$ and $\frac{1}{p}+\frac{1}{q}=1$. For any perturbation $\delta_x\in B_p\left(0,R\right)$, $p \in \mathbb{R}^+$ and a set of Lipschitz continuous functions $\{g_k:\mathbb{R}^n\mapsto\mathbb{R}\}$, the classification decision on $x' = x + \delta_x$ will not change as long as
\begin{equation}
\label{equation:robustness_guarantee}
\left\Vert\delta_x\right\Vert_p \le \min\left\{ \min_{k\ne\hat{y}} \frac{g_{\hat{y}}\left(x\right)-g_k\left(x\right)}{L^k_{q,x}},\; R\right\},
\end{equation}
where $ L^k_{q,x} = \max_{x' \in B_p\left(x,R\right)} \Vert \nabla g_{\hat{y}}(x')-\nabla g_k(x') \Vert_q$.
\end{theorem}
Eqn.~\eqref{equation:robustness_guarantee} clearly depicts the relation between robustness and the local Lipschitz constant --- a smaller $L^k_{q,x}$ represents a higher level of robustness, as a larger distortion can be tolerated without changing the prediction. \citet{guo2018sparse} further gives the relation between the local Lipschitz constant and the weights. We further deduce that the relation satisfies the following theorem:
\begin{theorem}
\label{theorem:lipschitz_sparsity}
(The robustness and weights distribution of ReLU networks.) Letting $\frac{1}{p}+\frac{1}{q}=1$, for any $x \in \mathbb{R}^n $, $k \in \{1,...,c\}$ and $q \in \{1,2\}$, the local Lipschitz constant of function $ g_{\hat{y}}\left(x\right)-g_k\left(x\right) $ satisfies
\begin{equation}
\label{equation:lipschitz_sparsity}
L^k_{q,x} \le \left\Vert w_{\hat{y}}-w_k \right\Vert_q
\prod_{j=1}^{d-1}\left({\left\Vert W_j \right\Vert_p} \right).
\end{equation}
\end{theorem}
Note that the local Lipschitz constant is upper bounded by the product of the $L_{p}$-norms of the weight matrices. That is to say, if $\left\Vert W_j \right\Vert_p$ is small, $L^k_{q,x}$ is constrained to be small, leading to a higher level of robustness. The proof of Thm.~\ref{theorem:lipschitz_sparsity} is omitted here due to space constraints, and we refer readers to the supplementary document for the detailed proof.
We have at least two interpretations of Thm.~\ref{theorem:lipschitz_sparsity}: if we let $p=0$, Eqn.~\eqref{equation:lipschitz_sparsity} is bounded by the number of non-zero weights of the model, and hence the fewer the non-zero weights (i.e., the higher the $L_0$-sparsity), the smaller the bound and the more robust the model tends to be. On the other hand, a smaller value of $\left\Vert W_j \right\Vert_p$ suggests that the distribution of weights is closer to zero. This indicates that if a model has a weights distribution closer to zero, it may be more robust than other models with the same structure. We show in turn how the two points are supported by the experimental results.
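For the case $q=p=2$, the upper bound in Eqn.~\eqref{equation:lipschitz_sparsity} can be evaluated directly from the weight matrices, as in the following sketch (the toy MLP is illustrative; \texttt{ord=2} gives the spectral norm):
\begin{verbatim}
import torch
import torch.nn as nn

def lipschitz_upper_bound(mlp, y_hat, k):
    # the theorem's bound for q = p = 2: ||w_yhat - w_k||_2 times the
    # product of the spectral norms of the hidden weight matrices;
    # nn.Linear stores W^T, so rows of the last layer are class weights
    linears = [m for m in mlp if isinstance(m, nn.Linear)]
    hidden, last = linears[:-1], linears[-1]
    bound = torch.linalg.norm(last.weight[y_hat] - last.weight[k])
    for lin in hidden:
        bound = bound * torch.linalg.matrix_norm(lin.weight, ord=2)
    return bound

mlp = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
print(lipschitz_upper_bound(mlp, y_hat=3, k=7))
\end{verbatim}
Sparser weight matrices shrink these norms and hence the bound.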
\subsection{Lottery Tickets in Adversarial Settings}
We investigate whether a randomly-initialized large network contains a subnetwork that achieves robustness comparable to the large one, known as the `winning ticket' in \citet{DBLP:conf/iclr/FrankleC19} for the natural setting. More specifically, we perform Alg.~\ref{alg:lottery_ticket} to find the `winning ticket' in the adversarial setting. A discussion of hyperparameters can be found in the supplementary material.
The results on CIFAR-10 and CIFAR-100 are displayed in Table~\ref{table:lottery_ticket}. We mark the results with performance comparable to the base large networks in bold. On ResNet18 and VGG16 trained on CIFAR-10, no noticeable performance degradation occurs even when the pruning ratio is as high as $ 80\% $. This is slightly different from pruning on natural models \cite{DBLP:conf/iclr/FrankleC19}, where accuracies do not drop until the pruning ratio reaches around $ 88.2\% $ and $ 92\% $ on ResNet18 and VGG16, respectively. We think the difference may be explained by the more complicated decision boundary of a robust model (in theory, a model with a higher Rademacher complexity is needed to achieve adversarial robustness), and hence its `winning ticket' requires a higher capacity.
To better understand lottery tickets in adversarial settings, we compare the weights distributions of a one-shot pruned model and the winning ticket at the same pruning ratio. Fig.~\ref{fig:lottery_gup} shows an example of two models pruned at the same pruning ratio by GUP and by Alg.~\ref{alg:lottery_ticket} on CIFAR-10, with adversarial accuracies of 47.09\% versus 47.36\% on ResNet18 and 44.36\% versus 45.15\% on VGG16, respectively.
As we observe, whereas GUP models tend to have a flatter distribution, consistent with \citet{DBLP:journals/corr/abs-1903-12561}, the winning tickets have more near-zero valued weights, indicating a higher level of sparsity. We thus conclude that preferable adversarial robustness can be achieved in the lottery ticket setting.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{figures/lottery_gup.pdf}
\caption{Weights distribution example of the pruned network obtained by one-shot \textbf{GUP} and \textbf{adversarial lottery} at the same pruning ratio. Note that we have a logarithmic y-axis such that the near-zero values are highly dense in the upper part of the figure. The distribution indicates that the adversarial winning tickets have higher sparsity than corresponding GUP pruned models. }
\label{fig:lottery_gup}
\end{figure}
\subsubsection{Comparison with previous results.}
\citet{DBLP:journals/corr/abs-1903-12561} argues against the existence of `winning tickets' in adversarial settings. Nevertheless, through experiments we show that `winning tickets' exist in adversarial settings and can be obtained efficiently with a few rounds of pruning and less retraining. Our conclusion differs mostly because we search for the `winning ticket' by iterative global unstructured pruning as in \citet{DBLP:conf/iclr/FrankleC19}, while \citet{DBLP:journals/corr/abs-1903-12561} uses a layer-wise pruning method. As indicated in \citet{DBLP:conf/iclr/FrankleC19}, layers with fewer parameters may become bottlenecks under a layer-wise pruning method, and thus winning tickets fail to emerge. We also compare our work with \citet{li2020towards}, and find that the few-shot pruning in \citet{li2020towards} does not outperform iterative pruning results in our setting.
We also plot the results of Table \ref{table:advcom_three_datasets} and Table \ref{table:lottery_ticket} in Fig.~\ref{fig:nparam}, showing the adversarial accuracy against the number of parameters of the pruned models. Comparing with recent works, including RobNet \cite{Guo2020} and ATMC \cite{gui2019model}, under the same training and testing metrics (PGD-10 and PGD-100, respectively), we demonstrate that our approach is able to acquire, through adversarial network pruning, smaller networks with robustness comparable to the original dense models, and that it is effective across current model structures including ResNet, VGG, and DenseNet.
\section{Study of Robustness and Network Pruning}
\label{section:theory}
\input{story/theory}
\section{Performance Evaluation}
\label{section:experiment}
\input{story/implement}
\input{story/advprune}
\input{story/lottery}
\input{story/iwi}
\subsection{Adversarial Network Pruning Improves Robustness by Imposing Higher Sparsity}
\label{section:story_1}
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/weight_distribution_reset_inherit_resnet18_half.pdf}
\caption{Weights distribution of the pruned network adversarially trained with \textbf{inherited weights} or \textbf{randomly initialized} weights. In general, networks with inherited weights from automatic pruning methods including LUP, GUP, NS have an equivalent or higher sparsity than their counterparts with randomly initialized weights. FP has lower sparsity than FP-rand.}
\label{fig:weight_distribution_reset_inherit}
\end{figure*}
Although Thm.~\ref{theorem:lipschitz_sparsity} establishes a preliminary link between sparsity and robustness, it does not tell us how to achieve sparsity, and therefore robustness, in practice. An intuitive way is to prune a network to reduce the number of non-zero weights of the model, as done in \cite{guo2018sparse} but only in the natural setting. We show in the following that pruning also works in the adversarial setting. Beyond that, we find that adversarial retraining after pruning mostly improves robustness, yielding a sparser weights distribution than models with the same structure.
We first adversarially train each base network until reaching the state-of-the-art clean and adversarial accuracy, and then prune each network by different means.
\iffalse
When networks are pruned, we immediately test their accuracies on clean and adversarial samples. The results on CIFAR-10 and Tiny-ImageNet are shown in Table~\ref{table:accuracy_without_retrain2}. Please refer to the supplementary material for the results on CIFAR-100. Compared to the performance on the base network (marked under the model name), the pruned network only moderately suffers from accuracy loss when the pruning ratio is not very high, and the pruning methods are automatic, {\em i.e.,} LUP, GUP, and NS. Note that those methods automatically extract a subnetwork without altering the weights, resulting in a higher $L_0$ sparsity compared to base models. Considering pruning reduces model capacity, the mild performance loss is reasonable.
\fi
Although pruning is a promising way to introduce sparsity, it does not always yield robust models. We hence impose adversarial retraining on pruned networks to enhance robustness. The results are provided in Table~\ref{table:advcom_three_datasets}. Since there is a tradeoff between accuracy and robustness \cite{DBLP:conf/icml/ZhangYJXGJ19}, and some models tend to sacrifice one for the other, we choose to report the performance where the sum of adversarial accuracy and clean accuracy is the highest. The distortion bound is also reported for a complete view. We refer readers to the supplementary material for further discussions of the results.
\input{tables/compare_prune_and_scratch100}
Most networks in Table~\ref{table:advcom_three_datasets} obtain higher accuracy and robustness than pruning without retraining, and a large proportion of them achieve better performance than the base networks. Specifically, LUP and NS only suffer notable performance degradation at high pruning ratios, whereas GUP maintains remarkably high performance across all pruning ratios. FP cannot preserve network performance well.
To see whether the weights inherited from a large network help the pruned network converge, we conduct a series of comparison experiments, as shown in Table~\ref{table:compare_prune_and_scratch}. Compared to FP, FP-rand initializes a small network with the same structure as the corresponding pruned network. For automatic pruning methods including LUP, GUP, and NS, we re-use the pruned network structure with re-initialized weights. We find that, compared with FP-rand, FP provides little or no improvement with the inherited weights. On the contrary, automatic pruning with inherited weights almost always performs better than that with randomly initialized weights.
Although Table~\ref{table:advcom_three_datasets} and Table~\ref{table:compare_prune_and_scratch} experimentally identify effective methods and factors for gaining robustness in pruned networks, it still remains unclear how this relates to sparsity. Interestingly, by examining the weights distribution after adversarial retraining, we find that most automatically pruned networks with inherited weights have similar or higher sparsity than those with randomly initialized weights, as shown by the examples in Fig.~\ref{fig:weight_distribution_reset_inherit}, while the networks pruned by predefined pruning (FP) show the opposite trend. This can be explained by Thm.~\ref{theorem:lipschitz_sparsity}, since a weights distribution closer to zero implies higher robustness. Therefore, weight inheritance and adversarial retraining implicitly provide a way to obtain sparse networks.
\subsubsection{Comparison with previous results.}
We also compare our conclusion with previous works and summarize the differences as follows. We find that weights inherited via automatic pruning (LUP, GUP, NS) provide better initialization for small networks, while those from predefined pruning do not. \citet{DBLP:conf/iclr/LiuSZHD19} argues that weights inherited from structured pruning have little impact on the performance of the pruned network. While our experiments on FP agree with this conclusion, those on NS do not. \citet{wang2018adversarial} also suggests that inherited weights are important for preserving network accuracy and robustness in adversarial settings, but does not discuss the working mechanism behind it.
\section{Introduction}
At the electron-proton collider HERA charm quarks are predominantly
produced via boson gluon fusion, $\gamma g \rightarrow c \bar{c}$,
where the photon is emitted from the incoming lepton and the
gluon originates from the proton. The cross section is largest
for photoproduction, i.e.~for photons with negative four-momentum squared
(virtuality) $Q^2 \simeq 0\ {\rm GeV}^2$. In addition to
hard direct scattering off the photon,
processes have to be considered in which
the partonic structure of the photon is resolved.
The charm quark mass provides a hard scale which
justifies the applicability of perturbative QCD (pQCD).
Previous measurements of the photoproduction of charm quarks at HERA
cover inclusive \dstar\ meson
production~\cite{zeusdstar98,h1dstar98,h1dstar06}, production
of \dstar\ mesons with associated dijets~\cite{zeusdstar98,zeusdstarjets03,
zeusdstarjets05,h1dstar06} and heavy quark production
using events with a \dstar\ meson and a muon~\cite{h1dstarmu}.
In this paper, single and double differential cross sections are presented
for the inclusive production of \dstar\ mesons and the production
of two jets with one of the jets containing the \dstar\ meson.
They are compared to leading and next-to-leading order pQCD predictions
using different hadronisation models.
Compared to the previous H1 analysis of inclusive \dstar\
photoproduction~\cite{h1dstar06}, a seven times larger signal sample
is analysed here.
Studying events in which at least two jets could be reconstructed,
with one of the jets
containing the \dstar\ meson, allows further investigations of the
details of the heavy quark production process.
The jets are measured down to transverse momenta of $\ptj = 3.5\ {\rm GeV}$.
While the jet containing the \dstar\ meson originates from a charm or
anticharm quark produced in the hard subprocess, the
non-\dstar -tagged jet, referred to as {\it \otherj}, can result from
either the other heavy quark or a light parton (e.g.~a gluon).
Correlations between the two jets are studied using variables which are
sensitive to higher order effects and to the
longitudinal as well as to the transverse momentum components of the partons
entering the hard scattering process.
\section{QCD Calculations}
The data presented in this analysis are compared with Monte Carlo
simulations based on leading order (LO) matrix elements
supplemented by parton showers and with next-to-leading
order (NLO) calculations. The calculations are performed using either the
collinear factorisation or the $k_t$-factorisation approach.
The collinear factorisation makes use of the DGLAP~\cite{DGLAP} evolution
equations, while in $k_t$-factorisation the CCFM~\cite{CCFM} evolution
equations are employed. In the collinear approach transverse momenta obtained
through the initial
state QCD evolution are neglected and the transverse momenta are
generated in the hard scattering process. Effects from the non-vanishing
transverse momenta of the gluons enter only at the NLO level. In the
$k_t$-factorisation ansatz the transverse momenta of incoming
gluons, $k_t$, are already included at leading order both in the
off-shell matrix element and the $k_t$-dependent unintegrated gluon
density~\cite{uPDF}. Corrections appearing only at higher order
in collinear factorisation are hence partially included at LO in the
$k_t$-factorisation approach.
For charm quark photoproduction two classes of processes occur,
the direct-photon and the resolved-photon processes. In the direct
processes the photon emitted from the beam lepton enters directly the hard
interaction,
whereas in the resolved processes the photon acts as the source of incoming
partons, one of which takes part in the hard interaction. The distinction
between these two classes depends on the factorisation scheme and the
order in which the calculation is performed.
The production of heavy quarks is calculated either in the massive
scheme, where heavy quarks are produced only perturbatively
via boson gluon fusion, or in the massless scheme, where
heavy quarks are treated as massless partons. These two schemes are expected to
be appropriate in different regions of phase space~\cite{Tung}:
the massive scheme is expected to be reliable when the transverse momentum
$p_{T}$ of the heavy quark is of similar size compared to the charm
mass $m_{c}$, whereas the massless scheme is expected to be valid
for $p_{T} \gg m_{c}$.
In the general-mass variable-flavour-number scheme (GMVFNS) a smooth
transition from the massive to the massless scheme is
provided.
The structure of the proton and of the photon are described by
parton distribution functions (PDFs), that have been determined by
fits to data in various heavy flavour schemes and at different orders
of pQCD.
Monte Carlo (MC) generators are used to simulate detector
effects in order to determine the acceptance and the efficiency for
selecting events and to estimate the systematic uncertainties
associated with the measurement. The generated events are passed
through a detailed simulation of the detector response based on the GEANT
simulation program~\cite{geant} and are processed
using the same reconstruction and analysis chain as is used for the data.
The following two MC generators are used:
\begin{description}
\item[\PYTHIA:]
The MC program \PYTHIA~\cite{pythia} is based on LO QCD matrix elements
with leading-log parton showers in the collinear factorisation approach.
\PYTHIA\ includes both direct photon gluon fusion and resolved-photon
processes. In the resolved-photon processes either a charm quark or a
gluon from the photon enters the hard scattering.
In the inclusive mode of \PYTHIA\ used here charm quarks are treated
as massless partons in all steps of the calculation in both types of processes.
The hadronisation process is simulated using the Lund string fragmentation
model~\cite{lundstring}. The Bowler fragmentation model~\cite{bowler} is
applied to fragment the charm quark into a \dstar\ meson.
The longitudinal part of the fragmentation is reweighted
to the parameterisation by Kartvelishvili et al.~\cite{kartvelish}, which
depends on a single parameter $\alpha$.
The latter is set to the values
determined by H1~\cite{Aaron:2008tt}, which depend on the centre-of-mass
energy squared of the hard subprocess $\hat{s}$ (see table~\ref{fragpar});
an illustrative numerical sketch of this parameterisation is given after
this list.
The proton structure is described by the
PDF set CTEQ6L~\cite{cteq6l}.
For the photon the PDF set GRV-G LO~\cite{grvlo} is used.
\renewcommand{\arraystretch}{1.15}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|l||c||c|c||c|c|}
\hline
\multicolumn{6}{|c|}{ }\\[-0.4cm]
\multicolumn{6}{|c|}{\bf Fragmentation parameter \boldmath$\alpha$\unboldmath}\\[0.1cm]\hline
& & \multicolumn{2}{c||}{\bf \PYTHIA} & \multicolumn{2}{c|}{\bf \CASCADE}\\\hline
& $\shat_{threshold}$ &
$\alpha$ for & $\alpha$ for &
$\alpha$ for & $\alpha$ for \\
& $[{\rm GeV}^2]$ &
$\shat < \shat_{threshold}$ & $\shat \ge \shat_{threshold}$ &
$\shat < \shat_{threshold}$ & $\shat \ge \shat_{threshold}$ \\ \hline\hline
Central value & $70$ & $10.3$ & $4.4$ & $8.4$ & $4.5$ \\ \hline
Variations & $70$ & $\phantom{1}8.7$ & $3.9$ & $7.3$ & $3.9$ \\
& $70$ & $12.2$ & $5.0$ & $9.8$ & $5.1$ \\
& $50$ & $10.3$ & $4.4$ & $8.4$ & $4.5$ \\
& $90$ & $10.3$ & $4.4$ & $8.4$ & $4.5$ \\ \hline
\end{tabular}
\caption{\label{fragpar}
Fragmentation parameters $\alpha$ in the Kartvelishvili
parameterisation used in the MC simulations.
In the two regions of the invariant mass squared of the
$c\bar{c}$ pair, $\shat$, separated by the boundary $\shat_{threshold}$,
two different values of $\alpha$ are used.
}
\end{center}
\end{table}
\renewcommand{\arraystretch}{1.0}
\item[\CASCADE:]
The \CASCADE~\cite{cascade} MC program is used for simulating events
based on LO QCD calculations in the
$k_t$-factorisation approach. The direct boson gluon fusion process is
implemented using off-shell matrix elements and incoming gluons which can
have non-vanishing transverse momenta. Higher order QCD corrections are
simulated with initial state parton showers applying the CCFM
evolution~\cite{CCFM}. The unintegrated PDFs of the proton from
set A0~\cite{a0} are used. The
hadronisation of partons is performed with the Lund string model as
implemented in \PYTHIA. For the fragmentation of the charm quarks into
\dstar\ mesons the same reweighting procedure to the
parameterisation of Kartvelishvili et al.~is applied as in the case
of \PYTHIA.
\end{description}
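As referenced in the \PYTHIA\ description above, the following is a minimal numerical sketch (in Python) of the Kartvelishvili et al.\ parameterisation and of the $\hat{s}$-dependent choice of $\alpha$, using the central \PYTHIA\ values from table~\ref{fragpar}. In the actual reweighting each event receives a weight proportional to the ratio of this density to the density used in the generation; that generated (Bowler) density is omitted here.
\begin{verbatim}
import numpy as np

def kartvelishvili(z, alpha):
    # Kartvelishvili et al. density f(z) ~ z**alpha * (1 - z),
    # normalised numerically on (0, 1)
    grid = np.linspace(1e-4, 1.0 - 1e-4, 2000)
    norm = np.trapz(grid**alpha * (1.0 - grid), grid)
    return z**alpha * (1.0 - z) / norm

def alpha_pythia(shat, threshold=70.0):
    # central PYTHIA values of the fragmentation-parameter table:
    # alpha depends on the invariant mass squared of the c-cbar pair
    return 10.3 if shat < threshold else 4.4
\end{verbatim}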
For the comparison of data with NLO predictions, calculations
based on the massive approach and the general mass variable flavor number
scheme are used.
The uncertainties of the calculations are estimated by varying the charm
mass, $m_c$, the factorisation scale, $\mu_f$, and the renormalisation
scale, $\mu_r$. The detailed
settings are given in table~\ref{Tab_NLO_parameters}.
For the comparison in the \dstarDj\ sample only MC@NLO is used since it
provides a full hadronisation of the final state.
\renewcommand{\arraystretch}{1.15}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|l||c|c c||c|c c||c|c c|}
\hline
& \multicolumn{3}{c||}{\bf FMNR} & \multicolumn{3}{c||}{\bf GMVFNS} &
\multicolumn{3}{c|}{\bf MC@NLO} \\[0.1cm]\hline
Parameter & Central & \multicolumn{2}{c||}{Variations}
& Central & \multicolumn{2}{c||}{Variations}
& Central & \multicolumn{2}{c|}{Variations} \\\hline\hline
Charm mass $m_c/{\rm GeV}$ & $1.5$ & $1.3$ & $1.7$
& $1.5$ & & & $1.5$ & $1.3$ & $1.7$ \\
Renorm. Scale $\mu_{r}/m_T$ & $1$ & $0.5$ & $2$
& $1$ & $0.5$ & $2$ & $1$ & $0.5$ & $2$ \\
Fact. Scale $\mu_{f}/m_T$ & $2$ & $1$ & $4$
& $1$ & $0.5$ & $2$ & $2$ & $1$ & $4$ \\
\hline
\end{tabular}
\caption{
Parameters and variations used in the NLO calculations of FMNR~\cite{Frixione,FMNR},
GMVFNS~\cite{Kramer, Kniehl2009} and MC@NLO~\cite{mcatnlo_hera}.
}
\label{Tab_NLO_parameters}
\end{center}
\end{table}
\renewcommand{\arraystretch}{1.0}
\begin{description}
\item[FMNR:]
The FMNR program~\cite{Frixione,FMNR} is based on an NLO
calculation in the massive scheme in the collinear approach.
The resolved and direct processes are calculated
separately. The program provides weighted parton level events with
two or three outgoing partons, i.e.~a charm quark pair and possibly one
additional light parton.
The fragmentation of a charm quark to a \dstar\ meson is treated by
a downscaling of the three-momentum of the quark in the charm-anticharm
rest frame according to the Peterson fragmentation function with a
parameter value of $\epsilon=0.035$.
The PDF sets HERAPDF1.0\footnote{The HERAPDF1.0 set
was determined from inclusive deep-inelastic scattering data
from the H1 and ZEUS experiments in
the GMVFNS. It has been checked that the difference to a PDF set
determined in the massive scheme, CTEQ5F3~\cite{cteq5f3}, is
significantly smaller than the effect of the
variations considered for the systematic uncertainty of the FMNR
predictions.}~\cite{Herapdf} for the proton
and GRV-G HO~\cite{grvlo} for the photon are used. For the strong coupling,
the five-flavour QCD scale
$\Lambda^{(5)}_{QCD}$ is set to $0.2626\ {\rm GeV}$. The charm mass is set to
$m_{c} = 1.5\ {\rm GeV}$ and varied by $\pm 0.2\ {\rm GeV}$
for an uncertainty estimate.
This variation covers the central value for the pole mass
of the charm quark~\cite{pdg10}.
The renormalisation and factorisation scale are set to
$\mu_{r} = m_{T}$ and $\mu_{f} = 2 \cdot m_{T}$ with $m_{T}$ being
the transverse mass,
defined as $m_T^2 = m_c^2+(p_{T,c}^2+p_{T,\bar{c}}^2)/2$,
with $p_{T,c}$ and $p_{T,\bar{c}}$ denoting the transverse momenta of
the charm and anticharm quark, respectively.
In order to estimate the uncertainties related to missing higher orders, the
renormalisation and factorisation scales are varied by a factor $2$ up and
down. Each variation is done independently, leading to six variations in total.
The resulting uncertainties are added in quadrature separately for positive
and negative deviations to obtain the total uncertainties; a short sketch
of this combination is given after this list.
\item[GMVFNS:]
A next to leading order cross section prediction for direct and resolved
contributions to the cross section has been provided in the
GMVFNS~\cite{Kramer, Kniehl2009}. The
transition from the charm quark to the \dstar\ meson is given by the
KKKS fragmentation function which takes
DGLAP evolution and finite-mass effects into account~\cite{KKK06}.
The parton contents of the proton and of the photon are described by
the PDF sets HERAPDF1.0~\cite{Herapdf} and AFG04~\cite{afg04}, respectively.
The charm mass is set to $m_{c} = 1.5\ {\rm GeV}$, and the
renormalisation and factorisation scales are chosen to be
$\mu_{r} = \mu_{f} = m_{T}$. The uncertainties related to
missing higher orders are estimated by varying
the renormalisation scale,
the factorisation scale for the initial state and the factorisation scale
for the final state independently by a factor $2$ up and down
while satisfying the condition that the ratio of any
of the two scales is $1/2$, $1$ or $2$. This leads to $14$ independent
variations.
The maximum and minimum values found by this procedure are used to
determine the systematic uncertainty~\cite{Kniehl2009}.
\item[MC@NLO:] \label{mcatnlo-description}
In the MC@NLO framework~\cite{mcatnlo}, predictions for heavy
flavour production at HERA~\cite{mcatnlo_hera} are provided which
combine an NLO calculation in the massive approach with
parton showers and hadronisation.
The direct and resolved part of the cross section are
calculated separately. MC@NLO uses parton showers with angular ordering to
simulate higher order contributions
and the cluster fragmentation as implemented in HERWIG~\cite{herwig}.
A factor of $1.34$ is applied to the MC@NLO predictions in order
to correct the $c \rightarrow \dstar$ branching
fraction in HERWIG to the experimental value~\cite{gladilin}.
The PDF sets HERAPDF1.0~\cite{Herapdf} for the proton
and GRV-G HO~\cite{grvlo} for the photon are used.
For an estimation of the uncertainty, the charm mass and the
renormalisation and factorisation scales are varied separately, and
the resulting uncertainties
are added in quadrature.
\end{description}
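As referenced in the FMNR description above, the combination of independent scale variations into asymmetric uncertainties can be sketched as follows (Python; the numerical values in the example are purely illustrative):
\begin{verbatim}
import numpy as np

def asymmetric_quadrature(nominal, variations):
    # sum positive and negative deviations from the nominal cross
    # section in quadrature separately
    d = np.asarray(variations) - nominal
    up = np.sqrt(np.sum(d[d > 0.0] ** 2))
    down = np.sqrt(np.sum(d[d < 0.0] ** 2))
    return up, down

# six independent scale variations around an illustrative nominal value
print(asymmetric_quadrature(26.0, [30.1, 22.4, 28.3, 24.9, 35.5, 21.7]))
\end{verbatim}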
\section{H1 Detector}
A detailed description of the H1 detector can be found
elsewhere~\cite{h1detector}.
Only the components essential to the present analysis are described here.
The origin of the H1 coordinate system is the nominal $ep$ interaction point.
The positive $z$-axis (forward direction) is defined by the direction of the
proton beam. Transverse momenta are measured in the $x$--$y$ plane.
Polar~($\theta$) and~azimuthal~($\varphi$) angles are measured with respect to
this reference system. The pseudorapidity is defined as
$\eta = - \ln{\tan(\theta/2)}$.
Charged particles are measured within the central tracking detector (CTD)
in the pseudorapidity range $-1.74 < \eta < 1.74$. The CTD comprises two
large cylindrical jet chambers (inner CJC1 and outer CJC2) and the silicon
vertex detector~\cite{cst}. The CJCs are separated by a drift chamber which
improves the $z$ coordinate reconstruction. A multiwire proportional
chamber mainly used for triggering~\cite{mwpc} is situated inside the
CJC1. These detectors are arranged concentrically around the interaction
region in a solenoidal magnetic field of \mbox{$1.16\ {\rm T}$}. The
trajectories of the charged particles are measured with a transverse
momentum resolution of $\sigma(p_T)/p_T \approx 0.5\% \, p_T/{\rm GeV}
\oplus 1.5\%$~\cite{ctdresolution}. The CJCs also provide a measurement
of the specific ionisation
energy loss ${\rm d}E/{\rm d}x$ of charged particles.
The interaction vertex is reconstructed from CTD tracks.
The CTD also provides trigger information based on track segments
measured in the CJCs~\cite{ftt}. At the first two levels of this
fast track trigger (FTT) tracks are reconstructed online from the
track segments in the CJCs. At the third level of the FTT invariant
masses of combinations of tracks are calculated~\cite{nik,andy}.
Charged and neutral particles are measured with the liquid argon (LAr)
calorimeter, which surrounds the tracking chambers. It covers the range
$-1.5 < \eta < 3.4$ with full azimuthal acceptance. Electromagnetic shower
energies are measured with a precision of
$\sigma(E)/E = 12\% / \sqrt{E/{\rm GeV}} \oplus 1\%$ and hadronic energies
with $\sigma(E)/E = 50\% / \sqrt{E/{\rm GeV}} \oplus 2\%$, as determined
in test beam measurements~\cite{h1testbeam}.
A lead-scintillating fibre calorimeter (SpaCal)~\cite{spacal} covering
the backward region $-4.0<\eta<-1.4$ completes the measurement of charged
and neutral particles. For electrons a relative energy resolution of
$\sigma(E)/E = 7\% / \sqrt{E/{\rm GeV}} \oplus 1\%$ is reached, as determined
in test beam measurements~\cite{spacaltestbeam}.
The hadronic final state is reconstructed using an energy flow algorithm
which combines charged particles measured in the CTD with information from
the SpaCal and LAr calorimeters~\cite{hadroo2}.
The luminosity determination is based on the measurement of the
Bethe-Heitler process $ep \rightarrow ep\gamma$ where the photon is
detected in a calorimeter located at $z= -104\ {\rm m}$ downstream of the
interaction region in the electron beam direction.
\section{Event Selection and Reconstruction}
The data sample was recorded in the years 2006 and 2007, when
electrons with an energy of $27.6\ {\rm GeV}$ collided with protons
with an energy of $920\ {\rm GeV}$.
Photoproduction events are selected by requiring that no isolated high
energy electromagnetic cluster, consistent with a signal from a scattered
electron, is detected in the calorimeters. This limits the photon
virtuality to $Q^2 < 2\ {\rm GeV}^2$.
\subsection{\boldmath Inclusive \dstar\ Sample}
The triggering of the events relies on the reconstruction of the final
state particles originating from the \dstar\ decay. For this purpose all
three levels of the FTT are used. At the first level, where tracks
are reconstructed only in the transverse plane,
the selection criteria are based on track multiplicities
above certain transverse momentum thresholds. These conditions
are refined on the second level, and on the third level invariant masses
and charge combinations consistent with the decay channel
$\dstarpm\rightarrow D^{0}\pi^{\pm}_{slow} \rightarrow
K^{\mp}\pi^{\pm}\pi^{\pm}_{slow}$ are required~\cite{andy}.
Three trigger conditions with different thresholds for the
transverse momentum of the \dstar\ candidate are used. The analysis
is therefore performed in three separate $p_T(\dstar)$ regions
corresponding to the different luminosities:
${\cal L}=30.7\ {\rm pb}^{-1}$ for $1.8 \le p_T(\dstar)< 2.5\ {\rm GeV}$,
${\cal L}=68.2\ {\rm pb}^{-1}$ for $2.5 \le p_T(\dstar)< 4.5\ {\rm GeV}$, and
${\cal L}=93.4\ {\rm pb}^{-1}$ for $p_T(\dstar)\ge 4.5\ {\rm GeV}$.
The requirement that all decay particles have to be in the acceptance
of the CJC limits the analysis to central rapidities for the
\dstar\ meson $|\eta(\dstar)|<1.5$ and photon-proton centre-of-mass
energies in the range $100 < W_{\gamma p}< 285\ {\rm GeV}$.
The $\gamma p$ centre-of-mass energy is reconstructed using the
Jacquet-Blondel method~\cite{jacquetblondel}:
$W_{\gamma p} = \sqrt{y_{JB} \ s}$ with
$y_{JB} = \sum_{HFS} (E-p_z)_i / (2 \ E_e)$, where $s$ and $E_e$ denote
the square of the $ep$ centre-of-mass energy and the energy of the incoming
electron, respectively, and the sum $\sum_{HFS}$ runs over the energy $E$
and the longitudinal momentum $p_z$ of all final state particles.
The \dstar\ inelasticity \zDs, which corresponds to the fraction of
photon energy transferred to the \dstar\ meson in the proton rest frame,
is defined by $\zDs = P \cdot p(\dstar)/(P\cdot q)$,
with $P$, $p(\dstar)$ and $q$ denoting the four-momenta of the incoming
proton, the \dstar\ meson and the exchanged photon, respectively. It is
reconstructed as $\zDs= (E-p_{z})_{\dstar}/(2 \ y_{JB} \ E_{e})$.
The inelasticity distribution is sensitive to the kinematics of the
production mechanism and to the $c\rightarrow \dstar$ fragmentation function.
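Schematically, the reconstruction of $y_{JB}$, $W_{\gamma p}$ and \zDs\ from the $(E, p_z)$ of the hadronic final state objects reads as follows (a Python sketch; lepton and proton masses are neglected in $s \approx 4 E_e E_p$):
\begin{verbatim}
import numpy as np

E_E, E_P = 27.6, 920.0     # beam energies [GeV]
S = 4.0 * E_E * E_P        # ep centre-of-mass energy squared

def y_jb(hfs):
    # Jacquet-Blondel y from (E, pz) pairs of final state objects
    return sum(e - pz for e, pz in hfs) / (2.0 * E_E)

def w_gamma_p(hfs):
    return np.sqrt(y_jb(hfs) * S)

def z_dstar(e_ds, pz_ds, hfs):
    # D* inelasticity z = (E - pz)_{D*} / (2 y_JB E_e)
    return (e_ds - pz_ds) / (2.0 * y_jb(hfs) * E_E)
\end{verbatim}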
The \dstar\ meson is detected via the decay channel
$\dstarpm\rightarrow D^{0}\pi^{\pm}_{slow}
\rightarrow K^{\mp}\pi^{\pm}\pi^{\pm}_{slow}$ with a branching fraction of
${\cal BR}= (2.63 \pm 0.04)\%$~\cite{pdg10}. The tracks of the decay
particles are reconstructed using the CTD information. The invariant mass
of the $K^{\mp}\pi^{\pm}$ system is required to be consistent with the nominal
$D^{0}$ mass~\cite{pdg10} within $\pm 80\ {\rm MeV}$.
The signal to background ratio is improved by applying a loose particle
identification criterion to the kaon candidates based on the measurement
of the specific energy loss, ${\rm d}E/{\rm d}x$, in the CTD.
In addition the background is reduced by
a cut on the fraction of the transverse momentum carried by the \dstar\
with respect to the scalar sum of transverse energies of the hadronic
final state, excluding the forward region ($\theta < 10^\circ$).
This fraction is required to be
$p_{T}(\dstar)/(\sum_{HFS}^{\theta > 10^\circ} E_{T,i})>0.1$.
This criterion accounts for the
harder fragmentation of charm compared to light flavours.
The \dstarpm\ candidates are selected using the mass difference
method~\cite{deltammethod}. In figure~\ref{signal}a) the distribution
of the mass difference $\Delta M = m(K\pi \pi_{slow})-m(K\pi)$ of the
final \dstar\ candidates is shown. A clear peak is observed around
the nominal value of $\Delta M = 145.4 \ {\rm MeV}$~\cite{pdg10}.
The wrong charge combinations, defined as $K^{\pm}\pi^\pm \pi_{slow}^{\mp}$
with $K^\pm \pi^\pm$ pairs in the accepted $D^0$ mass range, are used to
constrain the shape of the combinatorial background in the signal region.
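Schematically, the mass difference is computed from the four-vectors of the three decay tracks (a Python sketch):
\begin{verbatim}
import numpy as np

def inv_mass(*fourvecs):
    # invariant mass of the summed four-vectors (E, px, py, pz) [GeV]
    e, px, py, pz = np.sum(fourvecs, axis=0)
    return np.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

def delta_m(k, pi, pi_slow):
    # Delta M = m(K pi pi_slow) - m(K pi); the signal peaks near
    # 145.4 MeV, wrong-charge combinations model the background shape
    return inv_mass(k, pi, pi_slow) - inv_mass(k, pi)
\end{verbatim}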
The number of reconstructed \dstar\ mesons $N(\dstar)$ is extracted in
each analysis bin by a log-likelihood fit simultaneously to the right charge
and the wrong charge $\Delta M$ distribution. For the signal which has
a tail towards larger $\Delta M$ values the asymmetric Crystal Ball
function~\cite{Gaiser} is used. The shape of the background is
parametrised with the Granet function~\cite{Granet}.
The fit is performed in the RooFit framework~\cite{Verkerke}.
The fit to the inclusive data sample yields $8232 \pm 164$ \dstar\ mesons.
To improve the convergence of the fit in each analysis bin,
the parameters describing the asymmetry of the Crystal Ball function
are fixed to the values found by the fit to the complete data set. The
width of the peak varies with the \dstar\ kinematics and is therefore left free.
More details can be found in~\cite{thesis_eva}.
\subsection{\boldmath \dstarDj\ Sample}
For the selection of the \dstar\ meson in the \dstarDj\ sample, the
requirements are the same as for the inclusive \dstar\ sample, except that
the requirement on the specific energy loss ${\rm d}E/{\rm d}x$ is removed,
and the cut on $p_T(\dstar)$ is increased to $2.1\ {\rm GeV}$
because of large backgrounds at small transverse momenta.
Jets are defined by the inclusive
$k_t$-algorithm~\cite{jetKT93} in the energy re\-com\-bina\-tion
scheme with jet size
$\Delta R = \sqrt{(\Delta \eta)^2 + (\Delta \varphi)^2} = 1$
where $\Delta \varphi$ is expressed in radians.
The jet algorithm is applied in the laboratory frame to all reconstructed
particles of the hadronic final state.
To prevent the decay particles of the \dstar\ candidate from being
attributed to different jets,
the \dstar\ candidate is used as a single particle in the jet algorithm,
replacing its decay products.
In this way the jet containing the \dstar\ meson (\dstarjet) is unambiguously
defined for each \dstar\ candidate.
In events which contain more than one \dstar\ candidate, the jet algorithm
is run separately for each candidate, and all candidates for which the
dijet selection criteria are fulfilled enter the $\Delta M$ distribution.
The pseudorapidity of the \dstarjet\ is restricted to the same range as is used
for the \dstar\ meson, $|\eta(\dstarjet)| < 1.5$.
In addition to the \dstarjet\ a second jet is required. Both jets have to
satisfy $\ptj > 3.5\ {\rm GeV}$.
If there is more than one jet that does not contain the \dstar\ meson,
the one with the highest \ptj\ is chosen as the \otherj.
The pseudorapidity of the \otherj\ has to be in the range
$-1.5 < \eta(\otherj) < 2.9$.
The invariant mass $M_{jj}$ of the \dstarjet\
and the \otherj is required to satisfy $M_{jj} > 6\ {\rm GeV} $ in order
to select jets from the partons originating from the hard interaction.
More details on the selection of the \dstarDj\ sample
can be found in~\cite{thesis_zlatka}.
The number of \dstar\ mesons in the dijet sample is extracted from the
$\Delta M$ distribution of the \dstar\ candidates with the same procedure
as used for the inclusive \dstar\ measurement.
The $\Delta M$ distribution for the selected events in the dijet sample
is shown in figure~\ref{signal}b). The fit yields a signal of
$3937 \pm 114$ \dstar\ mesons.
The kinematic
range of the inclusive \dstar\ measurement and of the \dstarDj\
measurement are summarised in table~\ref{tab:kinrange}.
\renewcommand{\arraystretch}{1.15}
\begin{table}[h]
\centering
\begin{tabular}{|l|l|}
\hline
\multicolumn{2}{|c|}{\bf \boldmath inclusive \dstar\ meson and \dstarDj\ production} \\
\hline
Photon virtuality & $Q^{2} < 2\ {\rm GeV}^{2}$\\
$\gamma p$ centre-of-mass energy & $100 < W_{\gamma p}< 285\ {\rm GeV}$ \\
Pseudorapidity of \dstarpm & $|\eta(\dstar)| < 1.5$ \\
\hline \hline
\multicolumn{2}{|c|}{\bf \boldmath inclusive \dstar\ meson production} \\
\hline
Transverse momentum of \dstarpm & $p_{T}(\dstar) > 1.8\ {\rm GeV}$ \\
\hline \hline
\multicolumn{2}{|c|}{\bf \boldmath \dstarDj\ production} \\
\hline
Transverse momentum of \dstarpm & $p_{T}(\dstar) > 2.1\ {\rm GeV}$ \\
Transverse momentum of \dstarjet & $p_{T}(\dstarjet) > 3.5\ {\rm GeV}$ \\
Pseudorapidity of \dstarjet & $|\eta(\dstarjet)| < 1.5$ \\
Transverse momentum of \otherj & $p_{T}(\otherj) > 3.5\ {\rm GeV}$ \\
Pseudorapidity of \otherj & $-1.5 < \eta(\otherj) < 2.9$ \\
Dijet invariant mass $M_{jj}$ & $M_{jj} > 6\ {\rm GeV} $\\
\hline
\end{tabular}
\caption{Definition of the kinematic range of the measurements.
\label{tab:kinrange}
}
\end{table}
\renewcommand{\arraystretch}{1.0}
\section{Cross Section Determination and Systematic Errors}
The bin averaged visible differential cross section with respect to a
variable $Y$ (with bin width $\Delta Y$) is calculated according to
\begin{equation}
\frac{{\rm d}\sigma_{vis}(ep\rightarrow e\ \dstar + X)}{{\rm d}Y} =
\frac{N(\dstar)(1-r)}{\Delta Y \cdot {\cal L}\cdot {\cal BR} \cdot \epsilon }
\label{eq:crossec}
\end{equation}
where ${\cal L}$ is the integrated luminosity, ${\cal BR}$ is the
branching ratio of the analysed decay chain
$\dstarpm\rightarrow D^{0}\pi^{\pm}_{slow}
\rightarrow K^{\mp}\pi^{\pm}\pi^{\pm}_{slow}$
and $(1-r)$ a correction factor
to account for reflections from other $D^0$ decays. The efficiency $\epsilon$
includes the detector acceptance, trigger and reconstruction efficiencies
and migrations between bins.
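For illustration, equation~(\ref{eq:crossec}) translates directly into the following Python sketch; the numerical values in the example are invented and do not correspond to any measured bin:
\begin{verbatim}
def visible_cross_section(n_dstar, r, delta_y, lumi, br, eff):
    # bin-averaged d(sigma)/dY: signal yield corrected for reflections,
    # divided by bin width, luminosity, branching ratio and efficiency
    return n_dstar * (1.0 - r) / (delta_y * lumi * br * eff)

# e.g. 500 D* mesons, r = 3.8%, unit bin width, 93.4 pb^-1,
# BR = 2.63%, efficiency 25%  ->  cross section in pb (illustrative)
print(visible_cross_section(500, 0.038, 1.0, 93.4, 0.0263, 0.25))
\end{verbatim}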
The contributions of \dstar\ mesons originating from beauty
production and from gluon splitting in light flavour production are not
subtracted. They are estimated from MC predictions to be below $2\%$.
The systematic uncertainties are determined in each bin separately
and are summarised in table \ref{tab:sysError} for the total cross section.
They are divided into uncertainties which are considered to be
uncorrelated between the bins and uncertainties which change the
cross section normalisation in all bins.
The numbers
for the uncertainties listed below are given in per cent
of the cross section values.
The following uncorrelated systematic uncertainty sources are considered:
\begin{description}
\item[Trigger Efficiency:]
The simulation of the FTT is verified by a comparison to data in a
sample of \dstar\ mesons in deep-inelastic scattering triggered by
the scattered electron. For the total inclusive \dstar\ sample the
efficiency agrees within a relative uncertainty of $7.5\%$.
This is one of the dominant systematic uncertainties. For the
\dstarDj\ sample the trigger efficiency is higher, leading to a smaller
uncertainty of $3.1\%$ for the total cross section.
\item[Signal Extraction:]
For the determination of the uncertainty of the signal fit, different
parameterisations for the signal and background functions are used.
The resulting uncertainty amounts to $1.5\%$.
\item[\boldmath $D^0$ mass cut:]
The loss of \dstar\ mesons due to the $D^0$ mass cut is compared between
data and simulation as a function of the \dstar\ transverse momentum, assuming
a Gaussian resolution for the $D^0$ mass reconstruction.
They agree within $2\%$, which is assigned as uncertainty.
\item[Reflections:]
The amount of reflections $r$ from decay modes of the $D^0$ meson other
than $D^0 \rightarrow K^\mp \pi^{\pm}$ amounts to $3.8\%$ in the
simulation~\cite{h1dstardis}. It is independent of kinematic quantities
within $1\%$, which is used as systematic uncertainty.
\item[Background from deep inelastic scattering:]
The background originating from deep inelastic scattering events is
estimated with the RAPGAP~\cite{rapgap} MC generator. It is found to
be below $1\%$, which is not subtracted but treated as an uncertainty.
\item[\boldmath ${\rm d}E/{\rm d}x$ cut:]
The efficiency of the cut on the ${\rm d}E/{\rm d}x$ likelihood of the
kaon candidate is studied for data and MC simulation in bins of the
transverse momentum of the \dstar\ meson. The relative difference of $1.5\%$
is corrected for in the MC sample.
An uncertainty of $0.5\%$ is assigned, covering the possible
$p_{T}(\dstar)$ dependence of this correction.
\item[Hadronic energy scale:]
The energy scale of the hadronic final state has an uncertainty of $2\%$
leading to an uncertainty of the cross section of $0.6\%$ in the inclusive
\dstar\ sample and of $2.0\%$ in the \dstarDj\ sample.
\item[Model:]
For the determination of the cross section the \PYTHIA\ and \CASCADE\
predictions are reweighted to describe the data distributions where
necessary. For the
correction of the data the efficiency from the \PYTHIA\
MC is used. The difference to the efficiency from \CASCADE\ is taken as
a systematic uncertainty.
It amounts to $2\%$ ($1.5\%$) for the total inclusive \dstar\ (\dstarDj )
cross section.
\item[Fragmentation:]
The $\alpha$ parameter of the Kartvelishvili function and the position of
the $\hat{s}$ threshold are varied within the values given in
table~\ref{fragpar} resulting in an uncertainty of $2.5\%$ ($2.0\%$)
for the total inclusive \dstar\ (\dstarDj ) cross section.
\end{description}
The following normalisation uncertainties are considered:
\begin{description}
\item[Track finding efficiency:]
The systematic uncertainty on the track efficiency of $4.1\%$ per \dstar~meson
arises from two contributions: (i) The comparison of the track finding efficiency
in data and simulation leads to an uncertainty of $2\%$ for the slow pion track
and $1\%$ for the tracks of the $D^0$ decay particles, and the uncertainty
is assumed to be
correlated between the decay particles; (ii) the efficiency with which
a track can be fitted to the event vertex
leads to a systematic error of $1\%$ per \dstar~meson.
The uncertainty on the track finding efficiency is considered to be half
correlated between the bins of the measurement.
\item[Luminosity:]
The uncertainty on the luminosity measurement for the data sample used
in this analysis amounts to $5\%$.
\item[Branching Ratio:]
The uncertainty due to the \dstar\ branching ratio is $1.5\%$~\cite{pdg10}.
\end{description}
All sources of systematic errors are added in quadrature resulting in
a systematic uncertainty of
$10.9\%$ ($8.5\%$)
for the total cross section of
the inclusive \dstar\ (\dstarDj ) production.
\begin{table}[htd]
\begin{center}
\begin{tabular}{|l|r|r|}
\hline
Uncertainty source & \dstar & \dstarDj \\ \hline
\multicolumn{3}{|l|}{Uncorrelated uncertainties } \\ \hline
Trigger efficiency & $7.5\%$ & $3.1\%$\\
Signal extraction & $1.5\%$ & $1.5\%$ \\
$D^0$ meson mass cut & $2.0\%$ & $2.0\%$ \\
Reflections & $1.0\%$ & $1.0\%$ \\
Background from deep-inelastic scattering & $1.0\%$ & $1.0\%$ \\
${\rm d}E/{\rm d}x$ cut & $0.5\%$ & $-$ \\
Hadronic energy scale & $0.6\%$ & $2.0\%$ \\
Model & $2.0\%$ & $1.5\%$ \\
Fragmentation & $2.5\%$ & $2.0\%$ \\ \hline
Track finding efficiency (half) & $2.9\%$ & $2.9\%$ \\
Total uncorrelated & $9.2\%$ & $6.0\%$ \\ \hline \hline
\multicolumn{3}{|l|}{Normalisation uncertainties} \\ \hline
Track finding efficiency (half) & $2.9\%$ & $2.9\%$ \\
Luminosity & $5.0\%$ & $5.0\%$ \\
Branching ratio & $1.5\%$ & $1.5\%$ \\ \hline
Total normalisation & $6.0\%$ & $6.0\%$ \\ \hline \hline
Total & $10.9\%$ & $8.5\%$ \\ \hline
\end{tabular}
\end{center}
\caption{Summary of all sources of systematic uncertainties and their
effect on the
total \dstar\ and the \dstarDj\ production cross section with the breakdown into sources leading to
bin-to-bin uncorrelated uncertainties and sources leading to
normalisation uncertainties.
}
\label{tab:sysError}
\end{table}
\section{\boldmath Results for Inclusive \dstar\ Meson Production}
The total visible cross section for \dstar~meson photoproduction is
measured to be:
\begin{equation}
\sigma_{vis}(e p \rightarrow e \ \dstar + X)=
41.1 \pm 0.8~({\rm stat.}) \pm 3.6~({\rm unc. sys.})\pm 2.7~({\rm norm.})~{\rm nb}
\end{equation}
in the kinematic range defined in table~\ref{tab:kinrange}. The corresponding
predictions from \PYTHIA\ and \CASCADE\ amount to $43.7~{\rm nb}$ and
$32.9~{\rm nb}$, respectively. Due to the fact that these predictions are based
on leading order matrix elements the uncertainty on the normalisation
of the cross sections is large, and is not quantified here.
The NLO calculations predict
$26\,^{+ 13}_{-\ \,8}~{\rm nb}$ for FMNR,
$37\,^{+28}_{-14}~{\rm nb}$ for GMVFNS and
$30\,^{+ 6}_{-\ 7}~{\rm nb}$ for MC@NLO.
The measured single differential cross sections as a function of the transverse
momentum \ptds\ and the pseudorapidity \etads\ of the \dstar~meson,
the photon-proton centre-of-mass energy $W_{\gamma p}$ and the
\dstar\ inelasticity \zDs\ are presented in table~\ref{tab:cssd} and
in figures~\ref{singlediff}
and~\ref{singlediff_NLO}. The data are compared to \PYTHIA, \CASCADE\ and
the NLO predictions of FMNR, GMVFNS and MC@NLO.
Since all the predictions have large normalisation uncertainties,
the normalised ratio \rnorm\ of theory to data is shown in order to
compare the shape of the various predictions to the data. \rnorm\ is
defined as
\begin{equation}
\rnorm = \frac{\dfrac{1}{\sigma_{\rm vis}^{\rm calc}}\cdot
\dfrac{{\rm d}\sigma^{\rm calc}}{{\rm d}Y}}
{\dfrac{1}{\sigma_{\rm vis}^{\rm data}}\cdot
\dfrac{{\rm d}\sigma^{\rm data}}{{\rm d}Y}}
\end{equation}
where $\sigma_{\rm vis}^{\rm calc}$ ($\sigma_{\rm vis}^{\rm data}$) and
${{\rm d}\sigma^{\rm calc}}/{{\rm d}Y}$
(${{\rm d}\sigma^{\rm data}}/{{\rm d}Y}$)
are the total and differential cross
section of the model under consideration (of the data), respectively,
and $Y$ denotes any measured variable.
In this ratio the normalisation uncertainties of the data (luminosity,
branching ratio and half of the tracking uncertainty) cancel.
Similarly, uncertainty sources of the NLO predictions altering only the
normalisation do not affect \rnorm, since for each variation the
total and the differential cross sections are varied simultaneously.
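Per measured bin, the evaluation of \rnorm\ is straightforward (Python sketch):
\begin{verbatim}
def r_norm(dsigma_calc, sigma_calc, dsigma_data, sigma_data):
    # normalised theory/data ratio: a shape comparison in which
    # overall normalisation uncertainties cancel
    return (dsigma_calc / sigma_calc) / (dsigma_data / sigma_data)
\end{verbatim}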
The single differential cross sections are compared to the predictions
of the LO MC simulations
in figure~\ref{singlediff}. The steep decrease of the cross section
with increasing transverse momentum \ptds\ is
reasonably reproduced by \PYTHIA,
while \CASCADE\ falls slightly more slowly than the data.
Both MC simulations describe the shape of the observed \etads\
distribution within uncertainties.
The cross section decreases as a function of the $\gamma p$ centre-of-mass
energy $W_{\gamma p}$, as expected from the photon flux in the equivalent
photon approximation~\cite{epa}. \CASCADE\ predicts a smaller fraction
of \dstar~mesons being produced at small inelasticities \zDs, similar to what
has been observed in deep inelastic scattering at HERA~\cite{h1dstardis}.
All distributions are reasonably well described by \PYTHIA.
A comparison of the single differential cross sections to the predictions
of the NLO calculations is shown in figure~\ref{singlediff_NLO}.
For all measured quantities the precision of the measurement presented here
is much better than the estimated uncertainty of the NLO calculations.
The uncertainty of the NLO predictions is dominated by the variation
of the renormalisation scale $\mu_r$, which has a large effect on the
absolute cross section, while the differences in the shapes tend to be
smaller. Within these large theoretical uncertainties, both the FMNR
and GMVFNS predictions agree
with the measured cross section as a function of \ptds, while the
MC@NLO underestimates the data at small \ptds. The \ptds\ shape
is best described by the GMVFNS calculation, while FMNR and MC@NLO
predict a harder spectrum than observed in data as can be seen
in the ratio \rnorm.
The underestimation of the low \ptds\ region by the central FMNR and MC@NLO
predictions results
in a low normalisation in the other distributions.
The shape of the \etads\ distribution is reasonably well described by all
NLO calculations.
All three NLO calculations give a rather precise prediction of the shape
of the $W_{\gamma p}$ distribution, which describes the measurement.
Given the large uncertainties the predictions for the \zDs\ distribution
agree with the data, although when using the central parameter
settings for the calculations they differ in shape with respect to data.
Previous H1 and ZEUS analyses of \dstar\ meson
photoproduction~\cite{zeusdstar98,h1dstar06}, albeit in
different kinematic ranges in the photon virtuality $Q^2$ and the
photon-proton centre-of-mass energy $W_{\gamma p}$, lead to similar
conclusions: while all predictions
give a good description of the $W_{\gamma p}$ distribution, differences
between data and theoretical predictions are observed for variables sensitive
to the quantities of the outgoing charm quark.
In order to investigate the correlation between pseudorapidity and
transverse momentum, a double differential measurement in \ptds\ and \etads\
is performed (table~\ref{tab:csdd}). The cross sections of the
leading order MCs \PYTHIA\ and
\CASCADE\ in the three \ptds\ regions shown in figure~\ref{ddiff} reflect the
different \ptds\ dependences seen in figure~\ref{singlediff}.
Both models are in broad agreement with the data.
The comparison of the NLO calculations with the data in
figure~\ref{ddiff_NLO} leads to similar conclusions as for the LO MC
programs.
\section{\boldmath Results for \dstar\ Tagged Dijet Production}
The integrated \dstarDj\
cross section in the visible range
given in table~\ref{tab:kinrange} is measured to be
\begin{equation}
\label{eq:totXsecDstarjet}
\sigma_{vis}(ep \to e \ \dstarjet +\mbox{\otherj} + X) =
9.68 \pm 0.28~({\rm stat.}) \pm 0.51~({\rm unc. sys.})
\pm 0.64~({\rm norm.})~{\rm nb}.
\end{equation}
The corresponding predictions from \PYTHIA, \CASCADE\ and MC@NLO amount
to $8.9~{\rm nb}$, $8.1~{\rm nb}$ and $7.1\,^{+2.5}_{-1.8}~{\rm nb}$,
respectively.
In the common range of transverse momentum, $\ptds>2.1\ {\rm GeV}$, the
ratio of the \dstarDj\ to the inclusive \dstar\ cross section is
$0.304 \pm 0.013 \pm 0.031$, compared to $0.271$ and $0.311$
for \PYTHIA\ and \CASCADE, respectively.
MC@NLO predicts a ratio of $0.309\,^{+0.019}_{-0.040}$.
The bin averaged differential cross sections for the \dstarDj\ production
as a function
of the transverse momentum $\pt$ and the pseudorapidity $\eta$ of both the
\dstarjet\ and the \otherj\ are
listed in table~\ref{tab:diffXsecdijet}
and shown in figures~\ref{fig:dstardijet} and~\ref{fig:dstardijet-mcatnlo}.
On average, the \otherj\ is more forward than the \dstarjet\ not only due
to the larger measurement range in $\eta$, but also within the common region
of $-1.5 < \eta < 1.5$. This behaviour
is consistent with the expectation that the \otherj\ does not always originate from
a charm quark. This observation confirms the
result of the previous H1 analysis of \dstarDj\ photoproduction~\cite{h1dstar06}
with improved precision.
In figure~\ref{fig:dstardijet} the measurements are compared to the \PYTHIA\
and the \CASCADE\ predictions. The shapes of the distributions are described
well by both models.
In figure~\ref{fig:dstardijet-mcatnlo} the measurements are compared to the
predictions of MC@NLO. At low transverse momenta of both the \dstarjet\ and the
\otherj, the predictions lie significantly below the measurement. This
results in a smaller total visible cross section which is also observed in
the $\eta$ distribution.
The uncertainty band of the MC@NLO prediction includes both variation of the
charm mass and variations of the factorisation and renormalisation scales as
described in section~\ref{mcatnlo-description}.
In order to investigate further the charm production dynamics, several
variables related to the structure of the hadronic final state are studied.
The correlation between the jets in the longitudinal and transverse
directions is experimentally assessed by the difference in pseudorapidity
$\Delta \eta=\eta(\otherj)-\eta(\dstarjet)$ and
in the azimuthal angle \dphidsj\ between the \dstarjet\ and
the \otherj. The amount of QCD radiation in addition to
the two leading jets is investigated using the mass variable
$M_X = \sqrt{(P + q - (j_1 + j_2 ))^2}$ with $P$, $q$, $j_1$ and $j_2$ being
the four-vectors of the initial proton, the exchanged photon, the \dstarjet\
and the \otherj, respectively. In direct photon processes without
radiation, $M_X$ is expected to be close to the
proton mass, whereas resolved processes as well as additional QCD radiation
will increase $M_X$.
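As an illustration of this definition, the following minimal Python sketch computes $M_X$ from four-vectors in the $(E,p_x,p_y,p_z)$ convention; the numerical values are invented for the example and do not correspond to measured events.
\begin{verbatim}
import math

def minkowski_mass(E, px, py, pz):
    """Invariant mass sqrt(E^2 - |p|^2), metric (+,-,-,-)."""
    m2 = E**2 - (px**2 + py**2 + pz**2)
    return math.sqrt(max(m2, 0.0))

def m_x(P, q, j1, j2):
    """M_X = sqrt((P + q - (j1 + j2))^2) for (E,px,py,pz) four-vectors."""
    rem = [P[i] + q[i] - j1[i] - j2[i] for i in range(4)]
    return minkowski_mass(*rem)

# Invented four-vectors in GeV: proton, photon and the two jets.
P  = (920.0, 0.0, 0.0, 920.0)
q  = (10.0, 0.0, 0.0, -10.0)
j1 = (8.0, 5.0, 0.0, 3.0)
j2 = (7.0, -5.0, 0.0, 2.0)
print(m_x(P, q, j1, j2))   # ~ 135 GeV for these values
\end{verbatim}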
The fraction $x_\gamma$ of the longitudinal photon momentum
entering the hard scattering process can be used to distinguish
direct and resolved processes:
in collinear factorisation at LO a resolved photon process is characterised by
$x_\gamma < 1$, while a direct process has $x_\gamma = 1$.
In the \dstarDj\ sample, $x_\gamma$ is approximated by
\begin{equation}
\xgjj = \frac{\sum_{jets} (E -p_z)_i}{\sum_{HFS} (E-p_z)_j}.
\end{equation}
The sum in the numerator runs over the particles in the two selected jets,
whereas the sum in the denominator contains all reconstructed
particles of the hadronic final state.
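A minimal sketch of this observable is given below, representing jet and hadronic-final-state particles simply as $(E,p_z)$ pairs; the numbers are invented for illustration.
\begin{verbatim}
def x_gamma_jj(jet_particles, hfs_particles):
    """x_gamma^jj = sum_jets(E - p_z) / sum_HFS(E - p_z).

    Both arguments are lists of (E, p_z) pairs; the particles of the
    two jets are a subset of the hadronic final state (HFS).
    """
    num = sum(E - pz for E, pz in jet_particles)
    den = sum(E - pz for E, pz in hfs_particles)
    return num / den

jets = [(8.0, 3.0), (7.0, 2.0)]          # particles in the two jets
hfs  = jets + [(3.0, 2.5), (2.0, 1.8)]   # plus the rest of the HFS
print(x_gamma_jj(jets, hfs))             # < 1: resolved-like event
\end{verbatim}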
In table~\ref{tab:diffXsecdijetcorr} and
figures~\ref{fig:dstardijet-corr} and~\ref{fig:dstardijet-corr-mcatnlo} the
bin averaged differential cross sections for the \dstarDj\ production as a
function of the difference in pseudorapidity
$\Delta \eta$ and in azimuthal
angle \dphidsj\ between the \otherj\ and the \dstarjet, the mass $M_X$
and $\xgjj$ are presented.
The cross section as a function of $\Delta \eta$ is not symmetric because
the \otherj\ is on average more forward than the \dstarjet.
The shape in $\Delta \eta$ is
reasonably well described by all QCD calculations.
The cross section as a function of \dphidsj\ shows a significant
contribution away from the back-to-back configuration at
$|\Delta \varphi| \simeq 180 \grad$. Such a configuration can be described
by models which include significant contributions
from higher order QCD radiation or a transverse momentum of the gluon in the
initial state.
Whereas \PYTHIA\ predicts a too small relative contribution of these
configurations, \CASCADE\ overestimates them.
The prediction from MC@NLO, shown in
figure~\ref{fig:dstardijet-corr-mcatnlo}b), agrees
well in shape with the measurement.
The cross section as a function of the invariant mass $M_X$ is reasonably
well described by the predictions of \CASCADE\ and \PYTHIA\ in the region
of $M_X < 120$~GeV, whereas the measured cross section is larger than the
predictions for the highest $M_X$ bin. The large $M_X$ region is correlated
with the region of small $\xgjj$, where also the predictions are below the
measurement.
MC@NLO predicts a different shape for $M_X$ and
is not able to describe the shape of the $\xgjj$ distribution.
The \dphidsj\ dependence of the cross sections in two regions of $\xgjj$
is presented in table~\ref{tab:diffXsecXgammaDphi}
and in figure~\ref{fig:dstardijet-corr-2d}. \PYTHIA\ is in agreement
with the data.
\CASCADE\ overestimates the contribution from small \dphidsj\ in both
$\xgjj$ regions. MC@NLO describes the shape well in the region of small
$\xgjj$, where resolved photon processes are enhanced, but is too low in
normalisation. At large $\xgjj$ values MC@NLO predicts the size of the cross section
correctly, but overestimates the contribution from small $|\Delta \varphi|$.
The cross sections for \dstarDj\ production show that
in general both hard partons in the final state can be described reasonably
well by the QCD predictions, while the details
and especially the correlations between the \dstarjet\ and the
\otherj\ are not described very well by these theoretical calculations.
\section{Conclusions}
The production of \dstar\ mesons in the photoproduction regime is
investigated with the H1 detector
at HERA with a seven times larger signal sample compared to the previous
H1 measurement. The events containing \dstar\ mesons were triggered by
the tracks of the decay particles in the channel
$D^{*\pm}\rightarrow D^{0}\pi^{\pm}_{slow} \rightarrow
K^{\mp}\pi^{\pm}\pi^{\pm}_{slow}$. Single and double differential cross
sections are measured, and the results
are compared to leading order QCD models provided by the MC simulation
programs \PYTHIA\ and \CASCADE\ and to the next-to-leading order pQCD calculations
FMNR, GMVFNS and MC@NLO.
The precision of the cross section measurements far exceeds the
predictive power of the NLO theories. The shapes of the differential
cross sections, however, are less sensitive to the theoretical uncertainties,
and generally show reasonable agreement with the data.
The cross section for \dstarDj\ production is measured and compared to
predictions of \PYTHIA, \CASCADE\ and MC@NLO.
The results are consistent with the expectation that the non-\dstar -jet
can originate not only from a charm quark but also from a light parton.
Significant contributions from higher order QCD radiation or transverse
momenta of the partons in the initial state are needed
to describe the cross section away from the back-to-back configuration
between the \dstarjet\ and \otherj\ at
$\dphidsj \simeq 180 \grad$.
The cross sections as a function of the transverse momentum and
the pseudorapidity
of the \dstarjet\ and the \otherj\ are reasonably well described by
the predictions. However,
significant differences are observed in the description of
some variables related to the structure of the hadronic final state,
such as \dphidsj, $M_X$ and \xgjj .
\section*{Acknowledgements}
We are grateful to the HERA machine group whose outstanding
efforts have made this experiment possible.
We thank the engineers and technicians for their work in constructing and
maintaining the H1 detector, our funding agencies for
financial support, the
DESY technical staff for continual assistance
and the DESY directorate for support and for the
hospitality which they extend to the non-DESY
members of the collaboration.
\chapter{Analyzing the Protocols}
\label{chapter:3}
Computer networks or simply networks are the main means of information
sharing and communication in today's IT infrastructure. Certain
protocols are executed to facilitate communication in
networks. However, such networks are mostly insecure, and the
communication needs to be protected against attackers that may
influence network traffic, and against communication parties that might be either dishonest or compromised by attackers.
Cryptographic security protocols form an essential ingredient of
network communications by ensuring secure communication over insecure
networks. These protocols use cryptographic primitives to support
certain security properties, but ensuring these properties requires a
lot more effort. Despite the relatively small size of security
protocols, it is very hard to design them correctly, and their analysis
is very complicated. One of the most well-known examples is the
Needham-Schroeder protocol \cite{nee:sch}, which was proven secure by using BAN
logic \cite{ban}. Seventeen years later, G. Lowe \cite{lowe:96,lowe:2} found a flaw by
using the automatic tool FDR. The flaw was not detected in the
original proof because of different assumptions on the intruder
model. The fact that this new attack had escaped the attention of the experts was an indication of the underestimation of the
complexity of protocol analysis. This example has shown that protocol
analysis is critical for assessing the security of such cryptographic
protocols.
In this report, we present our approach for protocol
analysis together with a real example where we find an important flaw
in a contemporary wireless sensor network security protocol. We start
by modelling protocols using a specific process algebraic formalism
called \LYSA process calculus. We then apply an analysis based on a
special program analysis technique called control flow analysis. We
apply this technique to the ZigBee-2007 End-to-End Application Key Establishment Protocol and with the help of the analysis discover
an unknown flaw. Finally we suggest a fix for the protocol, and verify
that the fix works by using the same technique.
\section{An Overview of the Analysis Method}
Static program analysis, in essence, examines a program statically,
before any attempt of execution. Although the finite amount of
resources may limit the information or the answers to important
questions, the approximation based approach of static program analysis makes
it preferable on the area of protocol analysis. Instead of facing
undecidability problem, this technique sacrifices precision and
gives approximate answers about a property of a certain program, or a
piece of code, or a protocol as in our case. However, this loss of
precision does not mean that we are missing the flaws, it merely
means that the analysis results may include false positives, such as a
bug or a flaw that the program does not contain.
Static program analysis was originally developed for code generation
and optimising compilers \cite{Lowrey:Medlock,Busam:Englund}. Nevertheless, the analysis
technique has recently been directed to the field of
security. Encouraging results have been obtained by the use of this
approach where safe approximations to the set of values or behaviours
arising during protocol runs can be predicted.
Control flow analysis of processes formalised in the \LYSA process
calculus successfully computes an over-approximation of the run-time
behaviour of a protocol \cite{bod:2, bod:1}. This method is actually the
protocol analysis method that we present in this report.
The roadmap of the analysis method is given in Fig. \ref{fig:flow},
and we will present the steps of this roadmap in the following sections.
\begin{figure}[!t]
\centering
\includegraphics[width=3.5in]{fig1}
\caption{The Roadmap of the Analysis}
\label{fig:flow}
\end{figure}
\section{Modelling in \LYSA Process Calculus}
\label{lys}
The first step in the protocol analysis is to formalise the protocol
narration into a model that is suitable for the analysis.
In our case, we formalize the protocols using the \LYSA process calculus \cite{bod:1}.
\LYSA is based on the $\pi$-calculus \cite{mil} and incorporates cryptographic operations using ideas from the Spi-calculus \cite{aba:gor}.
However, \LYSA differs from the Spi/$\pi$-calculus in two respects.
First, \LYSA has one global ether instead of channels.
The reason for this difference is that in typical networking implementations (e.g. Ethernet-based, wireless, etc.)
anyone can eavesdrop or act as an active attacker, which does not correspond to channel-based communication.
The second difference is the use of pattern matching in the tests of the expressions associated with input and decryption.
Although \LYSA is a very powerful process calculus which also supports
asymmetric encryption, digital signatures, etc., in order to make it
simple we only illustrate the \emph{symmetric} fragment.
The symmetric fragment suffices to prove our claims in the example
where we present the flaw discovery, since the protocol under analysis is designed
for symmetric encryption only.
The reader interested in further details including the asymmetric fragment may refer to \cite{bod:1}.
In \LYSA, we have terms ($E$) that consist of names (keys, nonces, messages, etc.), variables, and the compositions of them using symmetric encryption.
The syntax of terms is shown in Table \ref{tab:terms}.
In the case of encryption, the tuples of terms $E_1,\ldots,E_k$ are encrypted under a term $E_0$ which actually represents an encryption key.
Note that an assumption of perfect cryptography is adopted, which means that \emph{decryption with the correct key} is the only inverse function of encryption.
The \emph{annotation} inside brackets at the end of the encryption will be explained later in this section.
\begin{table
\caption{\LYSA Terms - Symmetric Fragment}
\label{tab:terms}
\centering
\begin{tabular}{rlll}
\hline
\multicolumn{2}{l}{$E$ ::=} \\
& $x$ & $\mathrm{variable}$ \\
$\PAR$ & $n$ & $\mathrm{name}$ \\
$\PAR$ & $\ANSENC{E_1,\ldots,E_k}{E_0}{\ell}{\mathcal{L}}$ & $\mathrm{symmetric~encryption}$ \\
\hline
\end{tabular}
\end{table}
The syntax of the processes ($P$), which is mostly similar to the polyadic Spi-calculus \cite{aba:gor}, is shown in Table \ref{tab:processes}.
At this point, we skip the simpler constructs in the table and explain the two more interesting and complicated ones: the output and input processes.
The output process $\OUT{E_1,\ldots,E_k}.P$ sends the $k$-tuple $E_1,\ldots,E_k$ to the network and continues as process $P$.
Similarly, the input process $(E_1,\ldots,E_j ; x_{j+1},\ldots,x_k).P$ receives a $k$-tuple $E'_1,\ldots,E'_k$ and if conditions are satisfied, removes the $k$-tuple from the network.
Here, the input operation uses pattern matching which will only succeed if the prefix of the input message matches the terms specified before the semi-colon.
In a simple manner, we can say that for some input $E'$ the input process ($E$;$x$).$P$ means that
if $E'$ can be split into two parts such that the first part pairwise matches the values $E$, then the remaining part of the input is bound to the variables $x$.
As can be seen in Table \ref{tab:processes}, the number of elements in $E'$ is $k$, which is the total number of elements in $E$ and $x$.
This kind of pattern matching is also used in decryption.
\begin{table*
\caption{\LYSA Processes - Symmetric Fragment}
\label{tab:processes}
\centering
\begin{tabular}{lll}
\hline
\multicolumn{2}{l}{$P$ ::=} \\
& $0$ & $\mathrm{nil}$ \\
$\PAR$ & $P_1 \PAR P_2 $ & $\mathrm{parallel~comp.}$ \\
$\PAR$ & !$P $ & $\mathrm{replication}$\\
$\PAR$ & $\NEW{n} P $ & $\mathrm{restriction}$\\
$\PAR$ & $\OUT{E_1,\ldots,E_k}.P $ & $\mathrm{output}$\\
$\PAR$ & $(E_1,\ldots,E_j ; x_{j+1},\ldots,x_k).P $ & $\mathrm{input}$\\
$\PAR$ & $\ANSDEC{E}{E_1,\ldots,E_j}{x_{j+1},\ldots,x_k}{E_0}{\ell}{\mathcal{L}} P $ & $\mathrm{symm.~decryption}$\\
\hline
\end{tabular}
\end{table*}
\textbf{Example 1.a} \textit{The example \LYSA code below is a \textbf{new} (created - restriction) encryption key ($K$) followed by an \textbf{output} which includes three plaintext elements ($A$, $B$, $K_A$) and an encrypted element ($\SENC{K}{K_A}{}{}$).}
\begin{center}$\NEW{K}\OUT{A,B, K_A, \SENC{K}{K_A}{}{} }$\end{center}
\textbf{Example 1.b} \textit{The example \LYSA code below is an \textbf{input} that binds the last two elements of the input to the variables $x_{KA}$ and $x$ as long as the first two elements are $A$ and $B$.}
\begin{center}$(A, B ; x_{KA}, x)$\end{center}
\textbf{Example 1.c} \textit{The example \LYSA code below is a \textbf{decryption} that decrypts the value bound to variable $x$ using the encryption key bound to variable $x_{KA}$ and binds the resulting plaintext value to the variable $x_K$. Note that this decryption always succeeds without any need of pattern matching, as long as the correct key exists in the receiver.}
\begin{center}$\SDEC{x}{}{x_K}{x_{KA}}{}$\end{center}
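To make the matching semantics of Examples 1.b and 1.c concrete, the following is a minimal Python sketch of \LYSA-style input matching; the tuple representation of messages and the function name are our own illustration and not part of the \LYSA-tool.
\begin{verbatim}
def match_input(pattern, variables, message):
    """LySa-style input (E1,...,Ej; x_{j+1},...,x_k): succeed iff
    the message prefix equals the pattern, then bind the remaining
    elements to the variables.  Decryption works analogously on the
    plaintext of a ciphertext.  Returns bindings or None on failure.
    """
    j, k = len(pattern), len(pattern) + len(variables)
    if len(message) != k or list(message[:j]) != list(pattern):
        return None
    return dict(zip(variables, message[j:]))

# (A, B; xKA, x) receiving the tuple <A, B, KA, {K}_KA>:
msg = ("A", "B", "KA", ("enc", ("K",), "KA"))
print(match_input(("A", "B"), ("xKA", "x"), msg))
# -> {'xKA': 'KA', 'x': ('enc', ('K',), 'KA')}
\end{verbatim}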
In order to describe the \emph{message authentication} intentions of the protocols, we also have \emph{annotation}s for origin and destination.
Encryptions can be annotated with fixed labels called \textit{crypto-point}s that define their positions in the process,
and with \textit{assertion}s that specify the origin and destination of encrypted messages.
A crypto-point \(\ell\) is an element of some set \(\mathcal{C}\) and used when encryptions/decryptions occur.
The \LYSA term for encryption:
\begin{center} $\ANSENC{E_1,\ldots,E_k}{E_0}{\ell}{\mathcal{L}}$ \end{center}
means that the encryption happened at crypto-point \(\ell\) and the assertion [dest \(\mathcal{L}\)] means that
corresponding (valid) decryption is \emph{to} happen at a crypto-point that belongs to the set $\mathcal{L}$ such that \(\mathcal{L} \subseteq \mathcal{C}\).
Similarly, in the \LYSA term for decryption:
\begin{center} $\ANSDEC{E}{E_1,\ldots,E_j}{x_{j+1},\ldots,x_k}{E_0}{\ell}{\mathcal{L}} P$ \end{center}
[orig \(\mathcal{L}\)] specifies the crypto-points \(\mathcal{L} \subseteq \mathcal{C}\) that $E$ is allowed to have been encrypted.
\textbf{Example 2} \textit{The example \LYSA code below is the \textbf{composition} of the three separate parts in Example 1, and the necessary \textbf{annotations} in such a way that now we have two separate processes running in \textbf{parallel}.}
\begin{tabular}{ll}
& \\
/* a */ & $\NEW{K}\OUT{A,B, K_A, \ANSENC{K}{K_A}{\ell_A}{\{\mathcal{\ell_B}\}} }.0$ \\
& $\PAR$ \\
/* b */ & $(A, B ; x_{KA}, x).$ \\
/* c */ & $\ANSDEC{x}{}{x_K}{x_{KA}}{\ell_B}{\{\mathcal{\ell_A}\}}.0$ \\
& \\
\end{tabular}
\textit{The example we constructed step by step is actually the \LYSA model of the single-message protocol below:}
\begin{center} \textbf{1. A \(\rightarrow\) B:} KA, \{K\}$_{KA}$ \end{center}
\textit{The upper part (line a) of the parallel composition is the code for principal A, and the lower part (lines b and c) is for principal B. In this example, annotations state that the encryption at crypto-point $\ell_A$ is intended to be decrypted only at $\ell_B$. In a corresponding manner, the decryption at $\ell_B$ should originate from the encryption at $\ell_A$.}
\subsection{Specifying Protocols in \LYSA}
In the beginning, we have a protocol narration like the one in Table \ref{tab:protocolnarration1}.
Then we extend the narration to specify the internal actions to be performed by the principals when receiving those messages.
The reason for this kind of extension or conversion is to completely state the actions internal to the principals, which are normally left implicit in the narration of security protocols.
\begin{table
\renewcommand{\arraystretch}{1.3}
\renewcommand{\tabcolsep}{0.02cm}
\caption{Extended Protocol Narration - Case 1}
\label{tab:xpn}
\begin{tabular}{lllll}
\hline
\textbf{1. A} &\textbf{\(\rightarrow\)} &\textbf{ } &:& A, TC, \{TC, AppKey, B\}$_{KA}$ \\
& & & & [\textnormal{dest TC}] \\
\textbf{1'. } &\textbf{\(\rightarrow\)} &\textbf{TC}&:& $x_{initiator}$, $x_{TC}$, $x_{message}$ \\
& & & & [\textnormal{check } $x_{TC}$=TC]\\
\textbf{1''.} & &\textbf{TC}&:& \textnormal{decrypt $x_{message}$ as }\\
& & & & \{$x'_{TC}$, $x_{keytype}$, $x_{responder}$\}$_{KA}$\\
& & & & [\textnormal{orig }$x_A$][\textnormal{check } $x'_{TC}$=TC, $x_{keytype}$=AppKey]\\
& & & & \\
\textbf{2. TC}&\textbf{\(\rightarrow\)} & &: & [\textnormal{new} LK]\\
& & && TC, $x_{initiator}$, \\
& & && \{$x_{initiator}$,AppLK,$x_{responder}$,TRUE,LK\}$_{KA}$ \\
& & && [\textnormal{dest }$x_{initiator}$] \\
\textbf{2'. } &\textbf{\(\rightarrow\)} &\textbf{A} &:& $y_{TC}$, $y_A$, $y_{message}$ \\
& & & & [\textnormal{check } $y_{TC}$=TC, $y_A$=A]\\
\textbf{2''.} & &\textbf{A} &:& \textnormal{decrypt $y_{message}$ as } \\
& & & & \{$y'_A$, $y_{keytype}$, $y_B$, $y_{bool}$, $y_{LK}$\}$_{KA}$\\
& & & & [\textnormal{orig TC}][\textnormal{check } $y'_A$=A, $y_{keytype}$=AppLK]\\
& & & & [\textnormal{check } $y_B$=B, $y_{bool}$=TRUE] \\
\hline
\end{tabular}
\end{table}
As an example, the extended protocol narration of the first two messages of Case 1 is given in Table \ref{tab:xpn}; the remaining messages are omitted due to the lack of space.
For each message in the original protocol narration, we have an output message $n$ and an input message $n'$ in the extended protocol narration.
Input message $n'$ presents the variable (those written in \emph{italics}) bindings and necessary checks in the receiver side.
If a variable is a ciphertext and the receiver has the correct encryption key, then we have another message (i.e. $n''$) for each of those variables.
In addition, we explicitly write the internal actions as annotations between square brackets, in order to bridge the gap between informal and formal specification of the protocol.
Note that when analysing protocols we add an extra message to the end, where a principal attempts to communicate with the other through the new shared key, LK. For example, the message
\begin{center} \textbf{1. B \(\rightarrow\) A:} \{MSG\}$_{LK}$ \end{center}
does not change the protocol nor bring any additional cost to the implementations; it is just a sample message that will be sent using the new LK, and
thus it eases the validation, which is done by checking the attacker's knowledge.
In the next phase, we convert the extended protocol narration into a \LYSA model.
We use the \LYSA syntax that we explained earlier in this section and configure the necessary settings.
As an example, a regular \LYSA model of the protocol that we have used
to demonstrate extended protocol conversion is given in Table \ref{tab:lysaC1}.
Further details of specifying protocols in \LYSA are present in \cite{bod:1}.
\begin{table
\caption{LySa Model - Case 1}
\label{tab:lysaC1}
\centering
\begin{tabular}{l@{}l}
\hline
& $\LET{X}{\mathbf{N}\;\LFONT{s.t.}\;\CANON{\mathbf{N}} = \{1,2,3\}}\IN$\\
& $\INEW{i\in X}{\mathit{KA}_{i}}$ $\INEW{j\in X}{\mathit{KB}_{j}}$\\
& $\IPAR{i\in X}\IPAR{j\in X\cup \{ 0 \}} !$\\
\textbf{1} & $\OUT{A_{i}, \mathit{TC}, \SENC{\mathit{TC}, \mathit{AppKey}, B_{j}}{\mathit{KA}_{i}}\ATDEST{\mathit{a1}_{i j}}{\{\mathit{tc1}_{i j}\}}}.$\\
\textbf{2'} & $\INP{\mathit{TC}, A_{i}}{\mathit{y}_{i j}}.$\\
\textbf{2''} & $\SDEC{\mathit{y}_{i j}}{A_{i}, \mathit{AppLK}, B_{j}, \mathit{TRUE}}{\mathit{xLK}_{i j}}{\mathit{KA}_{i}}$\\
& $\ATORIG{\mathit{a2}_{i j}}{\{\mathit{tc2}_{i j}\}}\IN$\\
\textbf{4'} & $\INP{B_{j}, A_{i}}{\mathit{y2}_{i j}}.$\\
\textbf{4'' } & $\SDEC{\mathit{y2}_{i j}}{}{\mathit{xmsg}_{i j}}{\mathit{xLK}_{i j}}\ATORIG{\mathit{a4}_{i j}}{\{\mathit{b4}_{i j}\}}\IN$ $\NIL$\\
& $\PAR$\\
& $\IPAR{j\in X}\IPAR{i\in X\cup \{ 0 \}} !$\\
\textbf{3'} & $\INP{\mathit{TC}, B_{j}}{\mathit{z}_{i j}}.$\\
\textbf{3''} & $\SDEC{\mathit{z}_{i j}}{B_{j}, \mathit{AppLK}, A_{i}, \mathit{FALSE}}{\mathit{yLK}_{i j}}{\mathit{KB}_{j}}$\\
& $\ATORIG{\mathit{b3}_{i j}}{\{\mathit{tc3}_{i j}\}}\IN$\\
\textbf{4} & $\NEW{\mathit{MSG}_{i j}} \OUT{B_{j}, A_{i}, \SENC{\mathit{MSG}_{i j}}{\mathit{yLK}_{i j}}\ATDEST{\mathit{b4}_{i j}}{\{\mathit{a4}_{i j}\}}}.$ $\NIL$\\
& $\PAR$\\
& $\IPAR{i\in X\cup \{ 0 \}}\IPAR{j\in X\cup \{ 0 \}} !$\\
\textbf{1'} & $\INP{A_{i}, \mathit{TC}}{\mathit{x}_{i j}}.$\\
\textbf{1''} & $\SDEC{\mathit{x}_{i j}}{\mathit{TC}, \mathit{AppKey}, B_{j}}{}{\mathit{KA}_{i}}$\\
& $\ATORIG{\mathit{tc1}_{i j}}{\{\mathit{a1}_{i j}\}}\IN$\\
\textbf{2} & $\NEW{\mathit{LK}_{i j}} \langle\mathit{TC}, A_{i}, \SENC{A_{i}, \mathit{AppLK}, B_{j}, \mathit{TRUE}, \mathit{LK}_{i j}}{\mathit{KA}_{i}}$\\
& $\ATDEST{\mathit{tc2}_{i j}}{\{\mathit{a2}_{i j}\}}\rangle.$\\
\textbf{3} & $\langle\mathit{TC}, B_{j}, \SENC{B_{j}, \mathit{AppLK}, A_{i}, \mathit{FALSE}, \mathit{LK}_{i j}}{\mathit{KB}_{j}}$\\
& $\ATDEST{\mathit{tc3}_{i j}}{\{\mathit{b3}_{i j}\}}\rangle.$ $\NIL$\\
\hline
\end{tabular}
\end{table}
\section{Static Program Analysis}
\label{zig:static}
Static Analysis is a formal method that enables the security analysis of cryptographic communication protocols which are modelled as \LYSA processes.
Messages communicated on the network are tracked with the possible values of the variables in the protocol.
Besides, the potential violations of the destination/origin annotations are also recorded.
The aim of static analysis is to efficiently compute the safe approximations to the behaviour of the models without actually running them.
In Fig. \ref{fig:stat} we can see the approximation approach.
In general, it is impossible to compute the precise answer so we make a choice between over-approximation and under-approximation.
Static analysis over-approximates the set of possible operations that the \LYSA process describes.
The nature of over-approximation may cause the analysis to investigate a trace which can never occur in an actual run.
However, over-approximation is needed to make a safe approximation, since under-approximation could miss some traces.
\begin{figure}[!htp]
\centering
\includegraphics[width=3.5in]{fig2}
\caption{Static Analysis}
\label{fig:stat}
\end{figure}
\subsection{Analysis Method}
The static analysis we use in this study is specified as a Flow Logic
\cite{bod:2,bod:1}, which is based on the control flow analysis and
the data flow analysis techniques that allow us to make it fully
automatic \cite{nie:nie:han}.
Control flow analysis is a program analysis technique that is used to
compute approximations of the result of a program execution without
running the program. Such an analysis helps us in determining the sets
of values that may be generated by communication using a specific
protocol, which is beneficial for validating certain security
properties. Especially when used in conjunction with a model of
possible malicious activity (i.e. attacker), the analysis provides a
safe approximation of all events that may happen.
Flow Logic is a notational style for specifying analyses across
programming paradigms, introduced by Nielson, Nielson
\cite{flow:1,flow:2,flow:3}, and with Hankin
\cite{nie:nie:han}.
By abstracting from domain
specific formalisms and instead using standard mathematical notations, the Flow
Logic constitutes a meta-language that can present an analysis without requiring
additional knowledge about particular formalisms. Deriving an analysis estimate
from the resulting analysis specification is then left as a separate activity, usually
involving orthogonal considerations and tools.
This approach allows the designer to focus on the specification of analyses without
making compromises dictated by implementation considerations. Similarly,
implementation is simplified and improved, as the implementer is always free to
choose the best available tool. In the next sections, we will present
control flow analysis of \LYSA in the style of flow logic.
The control flow analysis that we use in protocol analysis is specified using the flow logic framework as a predicate
\begin{center}$\JUDGE{\rho, \kappa, \psi}{P}$\end{center}
that holds precisely when $\rho $, $\kappa$, and $\psi $ form an analysis result that correctly
describes the behaviour of the process $P$.
The main components of the analysis are:
\begin{itemize}
\item \emph{The variable environment $\rho$,} an over-approximation of the potential values of each variable that it may be bound to.
\item \emph{The network component $\kappa$,} an over-approximation of the set of messages that can be communicated over the network.
\item \emph{The error component $\psi$,} the set of error messages in the form $(\ell,\ell')$, indicating that something encrypted at $\ell$ was unexpectedly decrypted at $\ell'$.
\end{itemize}
The analysis is given as judgments of the form $\JUDGE{\rho, \kappa, \psi}{P}$, which express that $\rho, \kappa, \psi$ constitute a valid analysis for the process $P$. We also need to introduce the auxiliary judgment $\JUDGE{\rho}{E \LFONT{ : } \vartheta}$ at this point. It expresses that $\vartheta$, a set of values, is an acceptable estimate of the values that the term $E$ may evaluate to in the abstract environment $\rho$.
\label{lys:can}To keep the analysis component finite, we partition all the names that are generated by a \LYSA process into finitely many equivalence classes.
A \emph{canonical value} is a representative for each of these equivalence classes.
Names from the same equivalence class are assigned a common \emph{canonical name} and instead of the actual names, we use the names of those equivalence classes.
For example, the canonical representative of a name $n$ is denoted by $\lfloor n \rfloor$.
Since they allow us to analyse an infinite number of principals, canonical values are an important analysis element \cite{buc:nie}.
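As a small illustration, the sketch below collapses indexed names to canonical representatives; the (base, index) encoding of names is our own simplification.
\begin{verbatim}
def canonical(name):
    """Map a concrete name to its equivalence-class representative
    by forgetting the session index, e.g. K_7 |-> |K|."""
    base, _index = name
    return (base, None)

# Infinitely many session keys K_1, K_2, ... share one canonical name:
print(canonical(("K", 1)) == canonical(("K", 2)))   # -> True
\end{verbatim}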
The analysis of terms is listed in Table \ref{tab:analysisterm}.
The rule for analysing names $\LFONT{(AName)}$ states that $\vartheta$ is an acceptable estimate for a name $n$ if the canonical representative of $n$ belongs to $\vartheta$.
The rule for analysing variables $\LFONT{(AVar)}$ states that $\vartheta$ is an acceptable estimate for a variable $x$ if it is a superset of $\rho (\lfloor x \rfloor)$.
The rule for analysing symmetric encryption $\LFONT{(AEnc)}$ finds the set $\vartheta_i$ for each term $E_i$, collects all k-tuples of values $(V_0,\ldots,V_k)$ taken from $\vartheta_0 \times \ldots \times \vartheta_k$ into values of the form $\ANSENC{V_1,\ldots,V_k}{V_0}{\ell}{\mathcal{L}}$ and requires that these values belong to $\vartheta$.
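These three rules can be read as a recursive evaluation of terms into sets of abstract values; the following is a minimal Python sketch of that reading (computing a least solution rather than checking an arbitrary estimate), with our own tuple encoding of terms.
\begin{verbatim}
from itertools import product

def eval_term(term, rho):
    """Least set of abstract values a LySa term may evaluate to.

    Terms: ("name", n) | ("var", x) | ("enc", (E1,...,Ek), E0, l, L)
    rho maps canonical variables to sets of values.
    """
    kind = term[0]
    if kind == "name":                       # (AName)
        return {term[1]}
    if kind == "var":                        # (AVar)
        return set(rho.get(term[1], set()))
    if kind == "enc":                        # (AEnc)
        _, args, key, l, L = term
        parts = [eval_term(key, rho)] + [eval_term(a, rho) for a in args]
        return {("enc", vs[1:], vs[0], l, L) for vs in product(*parts)}
    raise ValueError(kind)

rho = {"xKA": {"KA"}}
t = ("enc", (("name", "K"),), ("var", "xKA"), "lA", frozenset({"lB"}))
print(eval_term(t, rho))
# -> {('enc', ('K',), 'KA', 'lA', frozenset({'lB'}))}
\end{verbatim}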
\begin{table*
\caption{Analysis for Terms, $\JUDGE{\rho}{E \LFONT{ : } \vartheta}$}
\label{tab:analysisterm}
\centering
\begin{tabular}{lc}
\hline
\LFONT{(AName)} & \INFERENCE{\lfloor n \rfloor \in \vartheta}{\JUDGE{\rho}{n \LFONT{ : } \vartheta}} \\
& \\
\LFONT{(AVar)} & \INFERENCE{\rho (\lfloor x \rfloor) \subseteq \vartheta}{\JUDGE{\rho}{x \LFONT{ : } \vartheta}} \\
& \\
\multirow{2}{*}{\LFONT{(AEnc)}} & $\JUDGE{\wedge_{i=0}^k \rho}{E_i \LFONT{ : } \vartheta_i} \quad \wedge$ \\
& \INFERENCE{\forall V_0, V_1,\ldots,V_k \LFONT{ : } \wedge_{i=0}^k V_i \in \vartheta_i \quad \Rightarrow \quad \ANSENC{V_1,\ldots,V_k}{V_0}{\ell}{\mathcal{L}} \in \vartheta}{\JUDGE{\rho}{\ANSENC{E_1,\ldots,E_k}{E_0}{\ell}{\mathcal{L}} \LFONT{ : }\vartheta}} \\
\hline
\end{tabular}
\end{table*}
The analysis of processes is listed in Table \ref{tab:analysisproce}. The idea of the analysis is very similar to the analysis of terms; therefore, instead of explaining all the rules, we explain only one interesting rule.
The rule for analysing output $\LFONT{(AOut)}$ uses the analysis for terms to find the estimate $\vartheta_i$ for each term $E_i$ and requires that all k-tuples of values $\OUT{V_1,\ldots,V_k {}{} }$ taken from $\vartheta_1 \times \ldots \times \vartheta_k$ are in $\kappa$ (i.e. they may flow on the network). The rule also requires that the components $\rho, \kappa, \psi$ compose a valid analysis for process $P$.
\begin{table*
\caption{Analysis for Processes, $\JUDGE{(\rho , \kappa)}{P \LFONT{ : } \psi}$}
\label{tab:analysisproce}
\centering
\begin{tabular}{lc}
\hline
\LFONT{(ANil)} & \JUDGE{(\rho , \kappa)}{0 \LFONT{ : } \psi} \\
& \\
\LFONT{(APar)} & \INFERENCE{\JUDGE{(\rho , \kappa)}{P_1 \LFONT{ : } \psi} \quad \wedge \quad \JUDGE{(\rho , \kappa)}{P_2 \LFONT{ : } \psi}}{\JUDGE{(\rho , \kappa)}{P_1 \PAR P_2 \LFONT{ : } \psi}} \\
& \\
\LFONT{(ARep)} & \INFERENCE{\JUDGE{(\rho , \kappa)}{P \LFONT{ : } \psi}}{\JUDGE{(\rho , \kappa)}{\LFONT{!} P \LFONT{ : } \psi}} \\
& \\
\LFONT{(ANew)} & \INFERENCE{\JUDGE{(\rho , \kappa)}{P \LFONT{ : } \psi}}{\JUDGE{(\rho , \kappa)}{\NEW{n} P \LFONT{ : } \psi}} \\
& \\
\multirow{3}{*}{\LFONT{(AOut)}} & $\JUDGE{\wedge_{i=1}^k \rho}{E_i \LFONT{ : } \vartheta_i} \quad \wedge $ \\
& $\JUDGE{(\rho , \kappa)}{P \LFONT{ : } \psi}$\\
& $\INFERENCE{\forall V_1,\ldots,V_k \LFONT{ : } \wedge_{i=1}^k V_i \in \vartheta_i \quad \Rightarrow \quad \OUT{V_1,\ldots,V_k {}{} } \in \kappa \quad \wedge \quad}{\JUDGE{(\rho,\kappa)}{\OUT{E_1,\ldots,E_k {}{} }. P \LFONT{ : } \psi}}$ \\
& \\
\multirow{3}{*}{\LFONT{(AIn)}} & $\JUDGE{\wedge_{j=1}^k \rho}{E_i \LFONT{ : } \vartheta_i} \quad \wedge \quad$ \\
& $\JUDGE{(\rho , \kappa)}{P \LFONT{ : } \psi}$\\
& $\INFERENCE{\forall V_1,\ldots,V_k \in \kappa \LFONT{ : } \wedge_{j=1}^k V_i \in \vartheta_i \quad \Rightarrow \quad \wedge_{i=j+1}^k V_i \in \rho (\lfloor x_i \rfloor) \quad \wedge \quad}{\JUDGE{(\rho , \kappa)}{\INP{E_1,\ldots,E_j}{x_{j+1},\ldots,x_k}. P \LFONT{ : } \psi}}$ \\
& \\
\multirow{5}{*}{\LFONT{(ADec)}} & $\JUDGE{\rho}{E \LFONT{ : } \vartheta} \quad \wedge \quad$ \\
& $\JUDGE{\forall \wedge_{i=0}^j \rho}{E_i \LFONT{ : } \vartheta_i} \quad \wedge \quad$ \\
& $\JUDGE{(\rho , \kappa)}{P \LFONT{ : } \psi}$ \\
& $((\ell \notin \mathcal{L}' \vee \ell ' \notin \mathcal{L}) \Rightarrow (\ell , \ell ') \in \psi) \quad \wedge \quad$ \\
& $\INFERENCE{\forall \ANSENC{V_1,\ldots,V_k}{V_0}{\ell}{\mathcal{L}} \in \vartheta \LFONT{ : } \wedge_{i=0}^j V_i \in \vartheta_i \quad \Rightarrow \quad \wedge_{i=j+1}^k V_i \in \rho (\lfloor x_i \rfloor) \quad \wedge \quad}{\JUDGE{(\rho , \kappa)}{\ANSDEC{E}{E_1,\ldots,E_j}{x_{j+1},\ldots,x_k}{E_0}{\ell '}{\mathcal{L}} P \LFONT{ : } \psi}}$ \\
\hline
\end{tabular}
\end{table*}
\textbf{Example 3} \textit{Static analysis of the \LYSA model given in Example 2 will lead to the following results:}\\
\begin{tabular}{ll}
& \\
& $\OUT{A,B, K_A, \ANSENC{K}{K_A}{\ell_A}{\mathcal{\ell_B}} } \in \kappa$ \\
& $K_A \in \rho(x_{KA})$ \\
& $\ANSENC{K}{K_A}{\ell_A}{\mathcal{\ell_B}} \in \rho(x)$ \\
& $K \in \rho(x_K)$ \\
& \\
\end{tabular}
\textit{Looking at the results above, it is easy to see that the first line is related to \textbf{line a} in Example 2. Likewise, the next two lines are derived from \textbf{line b}, and the last line is derived from \textbf{line c} in Example 2. Note that \textit{how the analysis works} is not the subject of this report; see \cite{bod:1} for \textit{how} Example 2 leads to Example 3.}
\subsection{Attacker Model}
In practice, network protocols are vulnerable to attacks.
Unfortunately, it is even easier to attack wireless networks, since any computer within range that is equipped with a wireless client card can pick up the signal and access the data.
In this study, \LYSA processes are analysed in parallel with the Dolev-Yao attacker \cite{dol:yao}.
The operations that this attacker model can perform are listed below,
but first we have to introduce new canonical (see Section~\ref{lys:can}) names and variables for the attacker.
All the canonical names of the attacker are mapped to $n_\bullet$ and all the canonical variables of the attacker are mapped to $z_\bullet$.
We also have \(\ell\)$_\bullet$ which is a crypto-point in the attacker.
The descriptions of the Dolev-Yao conditions are:
\begin{itemize}
\item The attacker initially has the knowledge of the canonical name $n_\bullet$ and all free names of the process $P$ but he can improve his knowledge by eavesdropping on all messages sent on the network.
\item The attacker can improve his knowledge by decrypting messages with the keys he already knows. Unless the intended recipient of the message was an attacker, an error (\(\ell\),\(\ell\)$_\bullet$) should be added to the error component $\psi$ which means that something encrypted at \(\ell\) was actually decrypted by the attacker at \(\ell\)$_\bullet$.
\item The attacker can construct new encryptions using the keys he already knows. If such a message is received and decrypted by a principal, then an error (\(\ell\)$_\bullet$,\(\ell\)) should be added to the error component $\psi$,
which means that something encrypted by the attacker at \(\ell\)$_\bullet$ was decrypted by a process $P$ at \(\ell\).
\item The attacker can send messages on the network using his knowledge and thus forge new communications.
\end{itemize}
These conditions enable the attacker to establish scenarios including eavesdropping, modification, man-in-the-middle and replay attacks.
The soundness of the Dolev-Yao condition is proved in \cite{bod:1}.
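As an illustration of the eavesdropping and decryption conditions, the following minimal Python sketch saturates the attacker's knowledge; the encoding of ciphertexts is our own, and constructing and sending new messages are left out for brevity.
\begin{verbatim}
def dolev_yao_closure(initial, traffic):
    """Saturate the attacker knowledge: the attacker learns all
    traffic and repeatedly opens ("enc", payload, key) ciphertexts
    whose key it already knows.  Constructing new encryptions and
    sending messages (the remaining conditions) are omitted here.
    """
    know = set(initial) | set(traffic)
    changed = True
    while changed:
        changed = False
        for v in list(know):
            if isinstance(v, tuple) and v[0] == "enc" and v[2] in know:
                for part in v[1]:
                    if part not in know:
                        know.add(part)
                        changed = True
    return know

# The attacker sees <KA, {K}_KA> on the ether and thus learns K:
traffic = ["KA", ("enc", ("K",), "KA")]
print("K" in dolev_yao_closure({"n_bullet"}, traffic))   # -> True
\end{verbatim}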
As shown in Fig. \ref{fig:flow}, the \LYSA model of a protocol is analysed in parallel with the attacker model and processed by the \LYSA-tool (see Section \ref{zig:formal}) which implements the static analysis.
The results of the analysis are used to validate destination/origin authentication and confidentiality properties of the protocols.
If no violation is detected, namely the error component $\psi$ is empty, then it is guaranteed that the protocol satisfies the destination/origin authentication properties.
Furthermore, the potential values that are learned by the attacker help us in validating the confidentiality properties.
The details as well as the proof of the soundness of the analysis are presented in \cite{bod:2}.
\textbf{Example 4} \textit{In Example 3, we analysed Example 2 in an attack-free setting. Now we add the attacker model and get the following results in addition to the results in Example 3. Since the attacker is able to learn everything sent on the network we have:}
\begin{center}$K_A,\ANSENC{K}{K_A}{\ell_A}{\mathcal{\ell_B}} \in \rho(z_\bullet)$ \end{center}
\textit{Therefore, the attacker can decrypt the encrypted part of the message which leads to the violation:}
\begin{center}($\ell_A, \ell_\bullet$) $\in \psi$\end{center}
\textit{Thus we conclude that the encryption at crypto-point $\ell_A$ which was intended to be decrypted at $\ell_B$ can be decrypted by the attacker and hence the example protocol is flawed.}
\section{Application on ZigBee Wireless Sensor Networks}
In this section, we present an application of the analysis method that
we have explained so far \cite{yuk:nie:2009}. This application has many features that make
it interesting. First of all, it pinpoints a previously undiscovered and
non-trivial flaw in a
real cryptographic security protocol. Another key issue is that the
protocol is used in one of the latest wireless sensor network
standards, ZigBee, which is promising and emerging in the sensor
networks field. The protocol includes components
that are known to be secure when used individually, some
of which are industry standards, such as SKKE, which we explain in
more detail. Still, we show that combining individually secure
components is not sufficient for guaranteeing security
properties. The last feature of this application is that we not only use
protocol analysis to discover flaws but also to verify our fix
proposals.
\subsection{ZigBee-2007 End-to-End Application Key Establishment Protocol}
\label{zig:keyest}
ZigBee is a fairly new but promising Wireless Personal Area Network (WPAN) standard for wireless sensor
networks
that have very low resource requirements. In parallel with this, the devices that operate in
ZigBee networks have limited resources in terms of memory, processor,
storage, power, etc. Therefore
implementing the security guarantees is a great challenge and the
verification of the security properties is of paramount importance.
We start by presenting the key points that are necessary for a clear understanding of the development, and we omit all the details which are not directly relevant to this study. For a detailed survey on ZigBee security we suggest \cite{Yuksel:Nielson}, and surely the ultimate source is the ZigBee documentation \cite{ZigBee:2007, ZigBee:Stack, ZigBeePRO:Stack, ZigBee:HA, ZigBee:SE}, which is a rather difficult read with hundreds of pages including references to several other standards.
\begin{figure}[!htp]
\centering
\includegraphics[width=3.3in]{fig3}
\caption{ZigBee Network Model}
\label{fig:znet}
\end{figure}
End-to-End Application Key Establishment is the protocol to be used when establishing a Link Key (\textbf{LK}) between two ZigBee devices,
which are running in \emph{High Security Mode} (which was called \emph{Commercial Mode} in the previous standard, ZigBee-2006 \cite{ZigBee:2006}).
We will call the devices the initiator (\textbf{A}) and the responder (\textbf{B}).
Note that there is also a Trust Center (\textbf{TC}), which shares a pairwise secret key with each principal in the network.
TC is actually an application that runs on a preferably more powerful ZigBee device referred to as \emph{ZigBee Coordinator} which is unique in the network;
whereas the remaining devices might be of type \emph{ZigBee Router} or \emph{ZigBee End Device}, as shown in Fig. \ref{fig:znet}.
For a better understanding we should mention that for two ZigBee devices to establish a secure communication,
they must share a symmetric key (LK) which they either \emph{receive} from a trusted server (TC)
or \emph{create mutually} using a temporary key received from the trusted server.
\begin{figure}[!htp]
\centering
\includegraphics[width=3.3in]{fig4}
\caption{ZigBee-2007 End-to-End Application Key Establishment Scenarios}
\label{fig:scena}
\end{figure}
The scenarios of End-to-End Application Key Establishment are visualized in Fig. \ref{fig:scena}.
The solid lines represent the already secure communication paths, labeled by corresponding symmetric encryption keys.
The dashed lines represent the resulting secure communication paths after a successful protocol run, again labeled by corresponding encryption keys.
Finally, the dotted lines are the messages in the protocol labeled by their sequence numbers and the encryption keys they deliver.
ZigBee-2007 End-to-End Application Key Establishment Protocol has two different cases according to the configuration of TC; we will call them \emph{Case 1} and \emph{Case 2}.
In Case 1, TC creates the LK itself and sends it to each principal.
Therefore, the initiator and the responder have no role in the creation of the LK.
In Case 2, TC creates a temporary shared key called \emph{Master Key} (\textbf{MK}) and sends it to each principal.
Using this MK, A and B initiate a Symmetric-Key Key Establishment (\textbf{SKKE}) procedure to establish an LK.
This case allows principals to create an LK \emph{mutually}.
SKKE is actually a key agreement scheme employed in the ZigBee End-to-End Application Key Establishment mechanism, and its components are defined in the ANSI X9.63-2001 standard \cite{ansi:x963}.
At the end of (a successful run of) either case, two ZigBee devices will be able to establish secure communication using their pairwise encryption key, LK.
\subsubsection{Case 1}
In Case 1, the initiator begins the procedure of establishing an LK with the responder by sending TC the first message, \textbf{request key},
which includes \emph{destination address} (\begin{footnotesize}=TC\end{footnotesize}), \emph{requested key type} (\begin{footnotesize}=Application Key\end{footnotesize}), and \emph{partner address} (\begin{footnotesize}=B\end{footnotesize}).
Then TC creates an LK for two principals, and sends it to each principal in two similar \textbf{transport key} messages.
Since TC is configured to send an LK directly in this case, the \emph{key type} value in the last two messages will be Application Link Key (AppLK).
The only difference between these two messages is a boolean value that indicates the initiator
(\begin{footnotesize}TRUE\end{footnotesize}: message recipient is the initiator, \begin{footnotesize}FALSE\end{footnotesize}: message recipient is the responder), and also the principals' addresses.
All the messages in this case are encrypted with the sender/receiver principal's key that is shared with TC (assuming that the security suite is \emph{Encryption-only}).
The type of this key can either be Trust Center Link Key (\textbf{TCLK}) or Trust Center Master Key (\textbf{TCMK}), as defined in the ZigBee specification \cite{ZigBee:2007},
but for simplicity we will call it \emph{KA} for principal A, and \emph{KB} for principal B.
The protocol narration of Case 1 is given in Table \ref{tab:protocolnarration1}.
\begin{table
\caption{Protocol Narration - Case 1}
\label{tab:protocolnarration1}
\centering
\begin{tabular}{l}
\hline
\textbf{1. A\(\rightarrow\)TC:} \{TC, AppKey, B\}$_{KA}$ \\
\textbf{2. TC\(\rightarrow\)A:} \{A, AppLK, B, TRUE, LK\}$_{KA}$ \\
\textbf{3. TC\(\rightarrow\)B:} \{B, AppLK, A, FALSE, LK\}$_{KB}$ \\
\hline
\end{tabular}
\end{table}
\subsubsection{Case 2}
In Case 2, the first three messages are almost the same as in Case 1,
except that in this case TC is configured to send MK, and therefore the key type is the Application Master Key (AppMK).
The rest of the messages are between the initiator and the responder.
In the fourth message, \textbf{establish key}, A sends B his request to start SKKE.
The values, False and Zero, indicate that there is no \emph{parent} (router, TC, etc.), and no \emph{parent address}, respectively.
The fifth message is the response of B to A's SKKE request.
Note that these two messages are encrypted by MK, which was received in the previous two messages.
The remaining four messages are actually the SKKE protocol itself.
Messages 6 and 7 include the \emph{challenges} (NA, NB) of the principals.
Messages 8 and 9 are the complex messages which can be computed by both parties to verify each other.
A and B create two \emph{message authentication codes} (\textbf{MAC})
using their knowledge; moreover, the MAC key itself is a \emph{hash} (H) of another MAC which they produce using the same knowledge \cite{HMAC}.
After the verification, the new LK will be H(MAC\{A,B,NA,NB\}$_{MK}$, 2), which is a minor variation of the MAC key that was used in the last two messages.
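A minimal sketch of this key derivation is shown below; we instantiate the MAC as HMAC-SHA256 and the hash H as SHA-256 purely for illustration, whereas the ZigBee specification defines its own concrete MAC and hash functions.
\begin{verbatim}
import hashlib, hmac

def mac(key: bytes, *parts: bytes) -> bytes:
    """MAC over the concatenated parts (HMAC-SHA256 as a stand-in)."""
    return hmac.new(key, b"".join(parts), hashlib.sha256).digest()

def h(data: bytes, suffix: int) -> bytes:
    """Hash of data followed by a one-byte suffix (SHA-256 stand-in)."""
    return hashlib.sha256(data + bytes([suffix])).digest()

MK, A, B, NA, NB = b"master-key", b"A", b"B", b"nonce-A", b"nonce-B"

shared   = mac(MK, A, B, NA, NB)   # MAC{A,B,NA,NB}_MK
mac_key  = h(shared, 1)            # key of messages 8 and 9
link_key = h(shared, 2)            # the new LK

msg8 = mac(mac_key, b"\x03", A, B, NA, NB)   # MAC{3,A,B,NA,NB}
msg9 = mac(mac_key, b"\x02", B, A, NB, NA)   # MAC{2,B,A,NB,NA}
\end{verbatim}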
The protocol narration of Case 2 is given in Table \ref{tab:protocolnarration2}.
\begin{table
\caption{Protocol Narration - Case 2}
\label{tab:protocolnarration2}
\centering
\begin{tabular}{l}
\hline
\textbf{1. A\(\rightarrow\)TC:} \{TC, AppKey, B\}$_{KA}$ \\
\textbf{2. TC\(\rightarrow\)A:} \{A, AppMK, B, TRUE, MK\}$_{KA}$ \\
\textbf{3. TC\(\rightarrow\)B:} \{B, AppMK, A, FALSE, MK\}$_{KB}$ \\
\textbf{4. A\(\rightarrow\)B:} \{B, FALSE, Zero, SKKE\}$_{MK}$ \\
\textbf{5. B\(\rightarrow\)A:} \{A, TRUE\}$_{MK}$ \\
\textbf{6. A\(\rightarrow\)B:} \{NA\}$_{MK}$ \\
\textbf{7. B\(\rightarrow\)A:} \{NB\}$_{MK}$ \\
\textbf{8. A\(\rightarrow\)B:} MAC\{3,A,B,NA,NB\}$_{H(MAC\{A,B,NA,NB\}_{MK},1)}$ \\
\textbf{9. B\(\rightarrow\)A:} MAC\{2,B,A,NB,NA\}$_{H(MAC\{A,B,NA,NB\}_{MK},1)}$ \\
\hline
\end{tabular}
\end{table}
\subsection{The Flaw}
\label{flaw}
In wireless networks, it is easy to intercept, forge and inject messages.
Without any formal analysis, an experienced eye can see that all the messages in ZigBee-2007 End-to-End Application Key Establishment Protocol can be replayed
when the same long-term encryption keys (KA, KB) are still being used.
The reason is the lack of \emph{freshness} elements like nonces, timestamps, etc.
This flaw can lead to serious replay attacks, denial of service (DoS) attacks, etc.
Even worse, when an old session key is compromised, an attacker can decrypt all the messages by replaying that old session key. In other words, lack of freshness can cause failures in \emph{authenticity} (in the case that principals accept an old session key from a rogue TC) and \emph{confidentiality} (in the case that principals start using a compromised session key).
As can be seen in the narration of the protocol, no freshness indicator is used in the distribution of either LK (in Case 1) or MK (in Case 2, the first three messages).
Therefore, all the messages can be replayed.
Replay of a message that includes a key is very critical.
An attacker can store a message including a key from a previous run of this protocol, and then send the old message to make principals communicate using this old key.
If the old key is compromised, then the attacker will be able to decrypt all the messages between two victim principals.
The significance of the security risk that is caused by this flaw may require more explanation.
Indeed, the flaw does not disclose any session key but allows the reuse of a former key.
Besides, brute-force attacks or other types of known cryptographic
attacks for obtaining the key do not seem practical for the current
specification (i.e. the keys are 128 bits long).
However, disclosure of a key might still be possible without dealing with cryptography, and reuse of an old session key can cause serious risks.
An example scenario is given below:
\textbf{Scenario 1} \textit{A and B established a link key, and had secure communication with the help of that pairwise key.
Then B left the network and disclosed the key, which might be by means
of hardware (e.g. local key extraction from the chipset such as connecting a debugger, erasing the chip, then freely reading the contents of RAM),
or software (e.g. a bug in the implementation that discloses the key
after the session expires or terminates with the natural assumption
that a new session key will be used for a future session) defects. If
B rejoins the network and runs the key establishment protocol with A (no
matter which case or security level is chosen), the disclosed key may
be replayed by the attacker who can decrypt all the communication using
the disclosed key.}
In the ZigBee Specification, the notion of \emph{frame counter} is emphasized as the freshness protection.
This approach is not a strong one for several reasons.
First of all, a frame counter uses incrementing values rather than random values and rejects frames with a smaller counter value.
Second, regardless of the length (which is 32 bits in ZigBee), it is easy to saturate a frame counter. As indicated in \cite{Sastry:Wagner}, if an adversary forges a frame with the maximum value (\emph{i.e. 0xFFFFFFFF}), any further frame will be rejected.
In addition, using counters is not a novel approach, since in such layered architectures the lower layers already use similar counters.
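The following minimal Python sketch illustrates the acceptance rule of such a counter and the saturation problem described above; the check is our simplification of the specification.
\begin{verbatim}
MAX_COUNTER = 0xFFFFFFFF   # 32-bit frame counter

class Receiver:
    """Accept a frame only if its counter exceeds the last one seen."""
    def __init__(self):
        self.last = -1

    def accept(self, counter: int) -> bool:
        if counter <= self.last:
            return False           # stale or replayed frame
        self.last = counter
        return True

r = Receiver()
print(r.accept(1), r.accept(2))    # -> True True
print(r.accept(MAX_COUNTER))       # forged maximal frame -> True
print(r.accept(3))                 # every honest frame is now rejected
\end{verbatim}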
\subsubsection{Flaw in Case 1}
The attack scenario for Case 1 is given in Table \ref{tab:att1}. The first run (messages 1 to 3) is an old run which is intercepted by an attacker. Here, it is appropriate to mention that LK is used like a session key and KA/KB are used like master keys. Therefore, KA and KB are possibly the same in two different runs. The second run in the attack scenario (messages 1' to 3') is initiated regularly, but the last two messages are replayed by the attacker using the messages that are captured from the old run.
Furthermore, the attacker does not necessarily need to wait for a message like 1' since he can already replay it, too.
\begin{table
\caption{Attack Scenario - Case 1}
\label{tab:att1}
\centering
\begin{tabular}{l}
\hline
\textbf{1. A\(\rightarrow\)TC:} \{TC, AppKey, B\}$_{KA}$ \\
\textbf{2. TC\(\rightarrow\)A:} \{A, AppLK, B, TRUE, LK\}$_{KA}$ \\
\textbf{3. TC\(\rightarrow\)B:} \{B, AppLK, A, FALSE, LK\}$_{KB}$ \\
\hline
\textbf{1'. A\(\rightarrow\)TC:} \{TC, AppKey, B\}$_{KA}$ \\
\textbf{2'. M(TC)\(\rightarrow\)A:} \{A, AppLK, B, TRUE, LK\}$_{KA}$ \\
\textbf{3'. M(TC)\(\rightarrow\)B:} \{B, AppLK, A, FALSE, LK\}$_{KB}$ \\
\hline
\end{tabular}
\end{table}
\subsubsection{Flaw in Case 2}
The attack for Case 1 is also possible for Case 2, in which MK is sent without any freshness indicator.
Even though LK is created mutually by the use of SKKE in Case 2,
a compromised old MK that is replayed to principals before SKKE will allow an attacker to create the LK as well.
The attack scenario for Case 2 is given in Table \ref{tab:att2}.
The first run (messages 1 to 9) is the old run and it is sufficient for an attacker to capture messages 2 and 3.
Then the attacker replays these messages in the new run (messages 1' to 9').
Although the nonces used in SKKE (exchanged in messages 6 and 7) are different,
as long as MK is compromised the attacker can decrypt these messages and learn the nonces as well.
As a result, the attacker can still compute the new LK which is actually H(MAC\{A,B,NA',NB'\}$_{MK}$, 2) (see Section \ref{zig:keyest}).
Therefore, we may conclude that the flaw is critical in both cases.
\begin{table
\caption{Attack Scenario - Case 2}
\label{tab:att2}
\centering
\begin{tabular}{l}
\hline
\textbf{1. A\(\rightarrow\)TC:} \{TC, AppKey, B\}$_{KA}$ \\
\textbf{2. TC\(\rightarrow\)A:} \{A, AppMK, B, TRUE, MK\}$_{KA}$ \\
\textbf{3. TC\(\rightarrow\)B:} \{B, AppMK, A, FALSE, MK\}$_{KB}$ \\
\textbf{4. A\(\rightarrow\)B:} \{B, FALSE, Zero, SKKE\}$_{MK}$ \\
\textbf{5. B\(\rightarrow\)A:} \{A, TRUE\}$_{MK}$ \\
\textbf{6. A\(\rightarrow\)B:} \{NA\}$_{MK}$ \\
\textbf{7. B\(\rightarrow\)A:} \{NB\}$_{MK}$ \\
\textbf{8. A\(\rightarrow\)B:} MAC\{3,A,B,NA,NB\}$_{H(MAC\{A,B,NA,NB\}_{MK},1)}$ \\
\textbf{9. B\(\rightarrow\)A:} MAC\{2,B,A,NB,NA\}$_{H(MAC\{A,B,NA,NB\}_{MK},1)}$ \\
\hline
\textbf{1'. A\(\rightarrow\)TC:} \{TC, AppKey, B\}$_{KA}$ \\
\textbf{2'. M(TC)\(\rightarrow\)A:} \{A, AppMK, B, TRUE, MK\}$_{KA}$ \\
\textbf{3'. M(TC)\(\rightarrow\)B:} \{B, AppMK, A, FALSE, MK\}$_{KB}$ \\
\textbf{4'. A\(\rightarrow\)B:} \{B, FALSE, Zero, SKKE\}$_{MK}$ \\
\textbf{5'. B\(\rightarrow\)A:} \{A, TRUE\}$_{MK}$ \\
\textbf{6'. A\(\rightarrow\)B:} \{NA'\}$_{MK}$ \\
\textbf{7'. B\(\rightarrow\)A:} \{NB'\}$_{MK}$ \\
\textbf{8'.\hspace{0.02cm}A\(\rightarrow\)B:}\hspace{0.02cm}MAC\{3,A,B,NA',NB'\}$_{H(MAC\{A,B,NA',NB'\}_{MK},1)}$ \\
\textbf{9'.\hspace{0.02cm}B\(\rightarrow\)A:}\hspace{0.02cm}MAC\{2,B,A,NB',NA'\}$_{H(MAC\{A,B,NA',NB'\}_{MK},1)}$ \\
\hline
\end{tabular}
\end{table}
\subsection{Proposed Fixed Protocols}
\label{fix}
We propose fixed protocols that use nonces to ensure the freshness of the messages and, at the same time, of the keys.
We make use of the vital principles defined in \cite{Abadi:Needham}.
The narrations of our proposed solution are given in Table \ref{tab:fix1} and Table \ref{tab:fix2} for Case 1 and Case 2, respectively.
In Case 1, we added the nonce of the initiator (\textbf{NA}) to the first two messages.
This will ensure that when receiving the second message, A will believe that she is communicating with the TC who knows her nonce and also her private key.
Note that message 1 can still be replayed but it will be ignored if A does not verify message 2.
We inserted two more messages before the last message, so that we use nonces of the TC (\textbf{NTC}) and the responder (\textbf{NB}) to avoid replay attacks.
This will ensure that when receiving the fifth message, B will believe that he is communicating with TC who knows his nonce.
Also note that message 3 can still be replayed, but the run will be abandoned if B does not verify message 5.
\begin{table
\caption{Proposed Fix - Case 1}
\label{tab:fix1}
\centering
\begin{tabular}{l}
\hline
\textbf{1. A\(\rightarrow\)TC:} \{TC, AppKey, B, NA\}$_{KA}$ \\
\textbf{2. TC\(\rightarrow\)A:} \{A, AppLK, B, TRUE, NA, LK\}$_{KA}$ \\
\textbf{3. TC\(\rightarrow\)B:} \{B, A, NTC\}$_{KB}$ \\
\textbf{4. B\(\rightarrow\)TC:} \{TC, A, NTC, NB\}$_{KB}$ \\
\textbf{5. TC\(\rightarrow\)B:} \{B, AppLK, A, FALSE, NB, LK\}$_{KB}$ \\
\hline
\end{tabular}
\end{table}
Our solution is also applicable to the leaked MK problem in Case 2.
Similar to our solution for Case 1, we replace the first three messages of Case 2 with five messages, which are also given in Table \ref{tab:fix2}.
To avoid confusion with the nonces used in SKKE, the nonces we added in Case 2 are called \textbf{preNA} and \textbf{preNB}.
\begin{table}
\caption{Proposed Fix - Case 2}
\label{tab:fix2}
\centering
\begin{tabular}{l}
\hline
\textbf{1. A\(\rightarrow\)TC:} \{TC, AppKey, B, preNA\}$_{KA}$ \\
\textbf{2. TC\(\rightarrow\)A:} \{A, AppMK, B, TRUE, preNA, MK\}$_{KA}$ \\
\textbf{3. TC\(\rightarrow\)B:} \{B, A, NTC\}$_{KB}$ \\
\textbf{4. B\(\rightarrow\)TC:} \{TC, A, NTC, preNB\}$_{KB}$ \\
\textbf{5. TC\(\rightarrow\)B:} \{B, AppMK, A, FALSE, preNB, MK\}$_{KB}$ \\
\textbf{6. A\(\rightarrow\)B:} \{B, FALSE, Zero, SKKE\}$_{MK}$ \\
\textbf{7. B\(\rightarrow\)A:} \{A, TRUE\}$_{MK}$ \\
\textbf{8. A\(\rightarrow\)B:} \{NA\}$_{MK}$ \\
\textbf{9. B\(\rightarrow\)A:} \{NB\}$_{MK}$ \\
\textbf{10. A\(\rightarrow\)B:} MAC\{3,A,B,NA,NB\}$_{H(MAC\{A,B,NA,NB\}_{MK},1)}$ \\
\textbf{11. B\(\rightarrow\)A:} MAC\{2,B,A,NB,NA\}$_{H(MAC\{A,B,NA,NB\}_{MK},1)}$ \\
\hline
\end{tabular}
\end{table}
The fix that we propose is a mechanism that suffices to repair the flaws in the original protocol.
There might be other ways to fix it, but this is a solution that simply
works and has been proven (by formal verification) to be secure.
Obviously, the proposed solution comes at a particular
cost: the
number of messages in each protocol is increased by two, and the use of
nonces is required. Transmitting more messages means more power
consumption, but for security-critical applications (e.g. in Smart
Energy, Commercial Building Automation, etc.)
this kind of fix, which ensures that TC is authenticated to both A and B (i.e. the new LK is not replayed),
is necessary, so the additional messages are inevitable.
Besides, the original protocol in Case 2 already has nine messages (whereas the primitive version, Case 1, has only three), which shows that in order to
be sound, ZigBee may need longer protocols for the same purpose.
The use of nonces is not a new cost either, since nonces are already present in SKKE, which is employed by Case 2.
However, due to a design mistake in the wrapping protocol, freshness is preserved
only for SKKE and not for the protocol itself.
As we mentioned before, the flaw in End-to-End Application Key Establishment protocol may be visible to an experienced eye but to claim that a fix is flawless, verification using formal methods is crucial.
\emph{Static analysis with \LYSA} is one of the methods that can be used, which has many advantages such as scalability and the guarantee of termination.
\subsection{Formal Verification Details}
\label{zig:formal}
Analysing security protocols without any formal verification method is
not a reliable way to find flaws, nor to guarantee that there are no
flaws. To make our assertions and arguments sound, we use static
analysis to analyse the protocols.
To remain finite, this method computes over-approximations rather than
exact answers, and may therefore lead to false positives. However, when the analysis reports that the protocol is error-free, then it really is. In other words, no simulation or further verification is necessary when a protocol successfully passes the static analysis.
The base protocols in Section \ref{zig:keyest} are modelled in the \LYSA process calculus and analysed using the \LYSA-tool\footnote{http://www.imm.dtu.dk/English/Research/Language-Based\_Technology/Research/LySa.aspx}. The results support our claims in Section \ref{flaw}: the base protocols are prone to replay attacks, which cause serious problems in the case of a leaked key.
The proposed protocols in Section \ref{fix} are modelled and analysed in the same way as the base protocols. The result is successful, namely the proposed protocols do not have any flaws.
The settings that we use to implement the \LYSA model and verify in the \LYSA-tool are listed below:
\begin{itemize}
\item we check for the origin and destination addresses in each message (by adding them as prefixes such as in IPv4 or IPv6)
\item we have the necessary annotations for the encryptions and decryptions
\item we allow legitimate attackers in addition to the illegitimate attackers (by adding appropriate zero indices, namely the attacker also shares a master key with TC)
\item we model three groups of (infinitely many) principals so that we can model man-in-the-middle attacks
\item we add an extra message that is encrypted using the session key (to see whether a compromised key can be used)
\item we check that all the fields in the messages have proper values (by pattern matching), except the session keys, which are newly created (and bound to variables in inputs)
\end{itemize}
To distinguish between old and new rounds of the protocol, we apply a new technique in \LYSA:
we add round indicators to the end of the pattern-matched fields in messages and match on them so that old runs are told apart from the current one.
Using this technique, we can investigate replay attacks successfully.
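The idea can be pictured outside \LYSA\ syntax as well. The following Python fragment is purely conceptual, with hypothetical field names, and shows how a trailing round indicator makes an old-run message fail the pattern match:
\begin{verbatim}
# Conceptual sketch (not LySa syntax): each pattern-matched field set
# carries a trailing round indicator, so a replayed old-run message
# fails the match against the current round.
CURRENT_ROUND = 2

def accept(message):
    *fields, round_tag = message
    return round_tag == CURRENT_ROUND

fresh  = ("A", "B", "NA'", CURRENT_ROUND)  # current run
replay = ("A", "B", "NA", 1)               # captured in an earlier run

assert accept(fresh)
assert not accept(replay)
\end{verbatim}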
\section{Conclusion}
Analysing protocols is not a trivial issue, and in this work we
presented an analysis method with a detailed application to a new, so-called
advanced security protocol that uses secure components.
This approach has solid benefits, mainly:
\begin{itemize}
\item \emph{solutions always exist and are computed in low polynomial
time.} This is an important advantage, because approaches based on
model checking cannot always guarantee termination and are, besides,
prone to the state-space explosion problem. Moreover, the analysis is
correct with respect to a formal operational semantics, which may be hard
to establish in other approaches such as those based on modal
logics of belief (BAN), where the completeness property does not
generally hold.
\end{itemize}
However, those benefits come with a particular cost:
\begin{itemize}
\item \emph{lack of traces and counter-examples.} Due to the nature of
the analysis, no trace and no counter-example is produced to
help flaw discovery. As a result of the over-approximation, false
positives may occur, and manual inspection is required to match the
reported violations to actual flaws.
\end{itemize}
Another thing we presented was the use of protocol analysis in
suggesting a secured version of a flawed protocol. Fixing flaws
and proposing secure protocols is another non-trivial job. In this manner, we
made use of the prudent engineering practices of Abadi and Needham, and
benefited from fruitful discussions with Gavin Lowe. One of the points we
emphasized was the importance of freshness, and of the
proper usage of freshness indicators such as nonces, challenges, etc.
We can recapitulate it as: encryption is not synonymous with security,
and its improper use can lead to errors. Proper use should be
verified by protocol analysis methods that focus on the relevant security
properties. Along the way in this study, we discovered and documented general
guidelines on how to use static analysis for protocol
validation. We do believe that such studies are necessary in order
to standardise protocols that live up to their stated
expectations.
\chapter{Preface}
This report provides detailed documentation on the application of
static program analysis to the key establishment protocols of the
ZigBee wireless sensor network standard. The approach presented in this
report is within the scope of the SENSORIA (\emph{Software Engineering for
Service-Oriented Overlay Computers}) project, and will form a
preliminary version of one of the chapters of my PhD dissertation. The
discovered flaw and the proposed secure protocols were recently
published in a conference paper (see reference \cite{yuk:nie:2009})
and have also been accepted for journal publication.
\paragraph{Acknowledgement}
This work has been partially supported by EU-FETPI Global Computing
Project IST-2005-16004 SENSORIA.
I would like to thank Hanne Riis Nielson and Flemming Nielson for the
supervision and invaluable contribution in this work, and also for providing
me with this opportunity to work on an international research
project.
I would also like to thank Gavin Lowe for kind discussions and
feedback on fixing the flaw in the key establishment
protocol.
\vspace{5mm} \mbox{}\hfill
\begin{minipage}[t]{180mm}
Kongens Lyngby, February 2010
\\ \\
Ender Y\"uksel
\end{minipage}
\section{Introduction}
\vspace*{-1mm}
\label{sec:intro}
One of the most challenging issues in particle and nuclear physics is
to unravel the origin of
baryon forces
based on the fundamental theory, quantum chromodynamics (QCD).
Precise information on nuclear and hyperon forces
serves as a key ingredient to calculate the properties of nuclei and dense matter
and the structure of neutron stars.
While so-called realistic nuclear forces
have been obtained using experimental scattering phase shifts,
their connection to QCD is yet to be established.
Hyperon forces suffer from large uncertainties,
since scattering experiments with hyperon(s) are very difficult
due to the short lifetime of hyperons.
Under these circumstances, it is most desirable
to carry out the first-principles calculations of baryon forces
by lattice QCD.
We study baryon forces using a novel theoretical framework,
HAL QCD method,
in which the interaction kernels (so-called ``potentials'')
are determined from Nambu-Bethe-Salpeter (NBS) correlators
on a lattice~\cite{Ishii:2006ec, HALQCD:2012aa, Aoki:2012tk}.
(For the application of obtained lattice baryon forces to
the equation of state of dense matter, the structure of neutron stars
and the properties of nuclei, see Refs.~\cite{Inoue:2013nfe, McIlroy:2017ssf}.)
The significant advantage of HAL QCD method over the traditional approach
(so-called ``direct'' calculations~\cite{Yamazaki:2015asa, Orginos:2015aya, Berkowitz:2015eaa})
is that the baryon-baryon interactions can be extracted
without relying on the ground state saturation~\cite{HALQCD:2012aa}.
In fact, since typical excitation energy in multi-baryon systems
is one to two orders of magnitude smaller than ${\cal O}(\Lambda_{\rm QCD})$ due to
the existence of elastic excited states,
the results from the ``direct'' method~\cite{Yamazaki:2015asa, Orginos:2015aya, Berkowitz:2015eaa},
which rely on the ground state saturation,
become generally unreliable~\cite{Iritani:2016jie, Iritani:2016xmx}.
In this paper, we report the latest lattice QCD results for the
baryon forces obtained at (almost) physical quark masses,
updating our previous results~\cite{Doi:2015oha}.
We first give a brief overview of the theoretical framework
and then present numerical results for
two-$\Xi$ ($\Xi\Xi$) forces ($S=-4$) and two-nucleon ($NN$) forces ($S=0$)
in parity-even channel.
Central forces are extracted in $^1S_0$ channel,
and central and tensor forces are obtained in $^3S_1$-$^3D_1$ coupled channel analysis.
The results for other baryon forces in the same lattice setup are presented
in Refs.~\cite{Ishii:lat2016}.
\vspace*{-1mm}
\section{Formalism}
\vspace*{-1mm}
\label{sec:formalism}
The key quantity in the HAL QCD method is
the equal-time NBS wave function.
In the case of a $NN$ system, for instance, it is defined by
$
\phi_W^{NN}(\vec{r}) \equiv
1/Z_N \cdot
\langle 0 | N(\vec{r},0) N(\vec{0},0) | NN, W \rangle_{\rm in} ,
$
where
$N$ is the nucleon operator
with its wave-function renormalization factor $\sqrt{Z_N}$
and
$|NN, W \rangle_{\rm in}$ denotes the asymptotic in-state of the $NN$ system
at the total energy of $W = 2\sqrt{k^2+m_N^2}$
with
the asymptotic momentum $k$,
and we consider the elastic region, $W < W_{\rm th} = 2m_N + m_\pi$.
The most important property of the NBS wave function is that
the asymptotic behavior at $r \equiv |\vec{r}| \rightarrow \infty$
is given by
$
\phi_W^{NN} (\vec{r}) \propto
\sin(kr-l\pi/2 + \delta_W^l) / (kr),
$
where
$\delta_W^l$ is the scattering phase shift
with the orbital angular momentum $l$.
Exploiting this feature,
one can define (non-local) $NN$ potential, $U^{NN}(\vec{r},\vec{r}')$,
which is faithful to the phase shifts
through the Schr\"odinger equation~\cite{Ishii:2006ec, HALQCD:2012aa, Aoki:2012tk},
$
(E_W^{NN} - H_0) \phi_W^{NN}(\vec{r})
=
\int d\vec{r'} U^{NN}(\vec{r},\vec{r'}) \phi_W^{NN}(\vec{r'}) ,
$
where
$H_0 = -\nabla^2/(2\mu)$ and
$E_W^{NN} = k^2/(2\mu)$ with the reduced mass $\mu = m_N/2$.
It has been also proven that
$U^{NN}(\vec{r},\vec{r'})$ can be constructed
as to be energy-independent~\cite{Ishii:2006ec,Aoki:2012tk}.
Generally speaking, the NBS wave function
can be extracted from the
four-point correlator,
$
G^{NN} (\vec{r},t)
\equiv
\sum_{\vec{R}}
\langle 0 |
(N(\vec{R}+\vec{r}) N (\vec{R}))(t)\
\overline{(N N)}(t=0)
| 0 \rangle ,
$
by isolating the contribution from each energy eigenstate
(most typically by the ground state saturation with $t \rightarrow \infty$).
Such a procedure, however, is practically almost impossible
due to the existence of nearby elastic scattering states.
In fact, the typical excitation energy is
as small as ${\cal O}(1)-{\cal O}(10)$ MeV,
as estimated from empirical binding energies and/or
the discretization of the spectrum in a finite volume, $\sim (2\pi/L)^2 / m_N$.
Correspondingly, the ground state saturation
requires
$t \mathrel{\mathpalette\gl@align>} {\cal O}(10)-{\cal O}(100)$ fm,
which is far beyond reach, considering that
the signal-to-noise ratio is exponentially suppressed in $t$.
\begin{figure}[t]
\begin{minipage}{0.48\textwidth}
\begin{center}
\vspace*{-4mm}
\includegraphics[angle=0,width=0.85\textwidth]{figs/XiXi/pot/pot_2b.IS10_cen_____.CG05.t_all.gimp.eps}
\vspace*{-2mm}
\caption{
\label{fig:pot:XiXi:1S0:cen}
$\Xi\Xi$ central force $V_C(r)$ in $^1S_0$ $(I=1)$ channel
obtained at $t = 15-18$.
}
\end{center}
\end{minipage}
\hfill
\begin{minipage}{0.48\textwidth}
\begin{center}
\vspace*{-4mm}
\includegraphics[angle=0,width=0.85\textwidth]{figs/XiXi/phase/phase.fit.t_016-019.r_00.0-20.0.eps}
\vspace*{-2mm}
\caption{
\label{fig:phase:XiXi:1S0:cen}
Phase shifts in $\Xi\Xi (^1S_0)$ $(I=1)$ channel
obtained at $t = 15-18$.
}
\end{center}
\end{minipage}
\end{figure}
Recently, the breakthrough on this issue was achieved
by the time-dependent HAL QCD method~\cite{HALQCD:2012aa}.
The crucial point is that,
since $U^{NN}(\vec{r},\vec{r'})$ is energy-independent,
one can extract the signal thereof
even from elastic excited states.
More specifically,
the following ``time-dependent'' Schr\"odinger equation holds
even without the ground state saturation,
\begin{eqnarray}
\left(
- \frac{\partial}{\partial t}
+ \frac{1}{4m_N} \frac{\partial^2}{\partial t^2}
- H_0
\right)
R^{NN}(\vec{r},t)
=
\int d\vec{r'} U^{NN}(\vec{r},\vec{r'}) R^{NN}(\vec{r'},t) ,
\label{eq:Sch_2N:tdep}
\end{eqnarray}
where
$R^{NN}(\vec{r},t) \equiv G^{NN} (\vec{r},t) e^{2m_Nt}$.
While it is still necessary to suppress the contaminations from inelastic states,
this can be achieved under a much milder condition,
$t \mathrel{\mathpalette\gl@align>} (W_{\rm th} - W)^{-1} \sim {\cal O}(1)$ fm.
This is in contrast to the direct calculations,
which inevitably rely on the ground state saturation.
Note that while ``plateau-like'' structures in the effective energy shifts often appear at $t\sim {\cal O}(1)$ fm
and are customarily used in the previous direct calculations~\cite{Yamazaki:2015asa, Orginos:2015aya, Berkowitz:2015eaa},
they cannot be distinguished from the fake plateaux (called ``mirage'' in Ref.~\cite{Iritani:2016jie})
and thus are generally unreliable~\cite{Iritani:2016jie, Iritani:2016xmx}.
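In practice, Eq.~(\ref{eq:Sch_2N:tdep}) is evaluated numerically. As a rough illustration (not the production HAL QCD code), the leading-order local potential of the velocity expansion can be read off from finite differences of $R^{NN}(\vec{r},t)$; the Python sketch below assumes $R$ is stored as an array indexed as \texttt{R[t,x,y,z]} in lattice units:
\begin{verbatim}
# Rough sketch in lattice units (a = 1), assuming R[t, x, y, z]:
# at leading order of the velocity expansion,
#   V(r) = [ -dR/dt + (1/4m_N) d2R/dt2 + (1/2mu) lap R ] / R .
import numpy as np

def potential(R, t, m_N):
    mu  = m_N / 2.0                       # reduced mass of the NN pair
    dR  = (R[t + 1] - R[t - 1]) / 2.0     # central time derivative
    d2R = R[t + 1] - 2.0 * R[t] + R[t - 1]
    lap = sum(np.roll(R[t], s, axis=ax)   # 3D discrete Laplacian
              for ax in range(3) for s in (+1, -1)) - 6.0 * R[t]
    return (-dR + d2R / (4.0 * m_N) + lap / (2.0 * mu)) / R[t]
\end{verbatim}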
The time-dependent HAL QCD method is also essential to study coupled channel systems.
Coupled channel effects play an important role in hyperon forces,
e.g., in the $\Lambda N$-$\Sigma N$ system and $\Lambda \Lambda$-$N \Xi$-$\Sigma \Sigma$ system.
The master formula
for a coupled channel system is given by~\cite{Aoki:2011gt}
\begin{eqnarray}
\biggl( -\frac{\partial}{\partial t} - H^{\alpha}_{0} \biggr)
R^{\alpha \beta}(\vec{r}, t)
=
\sum_{\gamma} \Delta^{\alpha \gamma}
\int d\vec{r'} U^{\alpha \gamma}(\vec{r},\vec{r'})
R^{\gamma \beta}(\vec{r'}, t) ,
\label{t-dep_local}
\end{eqnarray}
where $\alpha, \beta, \gamma$ denote labels for channels,
$R^{\alpha\beta}$ normalized four-point correlator in $\alpha$ ($\beta$) channel at the sink (source),
$H^{\alpha}_{0}=-\nabla^{2}/2\mu^{\alpha}$ with the reduced mass $\mu^{\alpha}=m^{\alpha}_{1} m^{\alpha}_{2} /(m^{\alpha}_{1} + m^{\alpha}_{2})$,
$\Delta^{\alpha \gamma}=e^{(m_{1}^{\alpha}+m_{2}^{\alpha}) t}/e^{(m_{1}^{\gamma}+m_{2}^{\gamma}) t}$,
and subscripts $1, 2$ labels for (two) hadrons in the channel.
For simplicity,
relativistic correction terms
are omitted.
The advantage of the coupled channel approach in HAL QCD method
is also shown in the study of the tetraquark candidate, $Z_c(3900)$~\cite{Ikeda:2016zwx}.
The computational challenge in lattice QCD for multi-baryon systems
is that enormous computational resources are required to calculate the correlators.
The reasons are that
(i) the number of Wick contractions
grows factorially with the mass number $A$, and
(ii) the number of color/spinor contractions grows exponentially for larger $A$.
On this point, we recently developed a novel algorithm
called the unified contraction algorithm (UCA),
in which the two contractions (i) and (ii) are unified in a systematic way~\cite{Doi:2012xd}.
This algorithm significantly reduces the computational cost
and plays a crucial role in our simulations.
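To illustrate the factorial growth in (i), a minimal Python count of the naive number of Wick contractions, taken here as $N_u!\,N_d!$ for $N_u$ up and $N_d$ down quarks (the actual UCA cost is far smaller):
\begin{verbatim}
# Naive number of Wick contractions, N_u! * N_d!, for light nuclei
# (deuteron, 3He, 4He); UCA removes most of this redundancy.
from math import factorial

for name, (Nu, Nd) in {"2H": (3, 3), "3He": (5, 4), "4He": (6, 6)}.items():
    print(name, factorial(Nu) * factorial(Nd))
\end{verbatim}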
\vspace*{-1mm}
\section{Lattice QCD setup}
\vspace*{-1mm}
\label{sec:setup}
$N_f = 2+1$ gauge configurations are generated on the $96^4$ lattice
with the Iwasaki gauge action at $\beta = 1.82$ and
nonperturbatively ${\cal O}(a)$-improved Wilson quark action with $c_{sw} = 1.11$
and APE stout smearing with $\alpha = 0.1$, $n_{\rm stout} = 6$.
About 2000 trajectories are generated after the thermalization,
and preliminary studies show that $a^{-1} \simeq 2.333$ GeV ($a \simeq 0.0846$ fm)
and $m_\pi \simeq 146$ MeV, $m_K \simeq 525$ MeV.
The lattice size, $La \simeq 8.1$ fm, is sufficiently large
to accommodate two baryons on a box.
For further details on the gauge configuration generation,
see Ref.~\cite{Ishikawa:2015rho}.
The measurements of NBS correlators are performed at the unitary point,
where the block solver~\cite{Boku:2012zi} is used for the quark propagator
and unified contraction algorithm~\cite{Doi:2012xd} is used for the contraction.
The computation for the measurements (including I/O)
achieves $\sim$ 25\% efficiency, or $\sim$ 65 TFlops sustained on 2048 nodes of K computer.
For two-octet baryon forces, we calculate all 52 channels relevant in the parity-even channel.
We employ wall quark source with Coulomb gauge fixing,
where
the periodic (Dirichlet) boundary condition is used for spatial (temporal) directions
and forward and backward propagations are averaged to reduce the statistical fluctuations.
We pick one configuration every five trajectories,
and we make use of the rotation symmetry to increase the statistics.
The total statistics in this report amounts to
414 configurations $\times$ 4 rotations $\times$ 48 wall sources,
binned by 46 configurations.
Baryon forces are determined in $^1S_0$ and $^3S_1$-$^3D_1$ channels.
We perform the velocity expansion~\cite{Aoki:2012tk} in terms of
the non-locality of potentials,
and obtain the leading order potentials, i.e., central and tensor forces.
In this preliminary analysis shown below,
the term which corresponds to the relativistic effects
($\partial^2 / \partial t^2$-term in Eq.~(\ref{eq:Sch_2N:tdep}))
is neglected.
\vspace*{-1mm}
\section{$\Xi\Xi$ systems ($S=-4$ channel)}
\vspace*{-1mm}
\label{sec:XiXi}
\begin{figure}[t]
\begin{minipage}{0.48\textwidth}
\begin{center}
\vspace*{-4mm}
\includegraphics[angle=0,width=0.85\textwidth]{figs/XiXi/pot/pot_2b.IS01_cen_Jz0_.CG05.t_all.gimp.eps}
\vspace*{-2mm}
\caption{
\label{fig:pot:XiXi:3S1:cen}
$\Xi\Xi$ central force $V_C(r)$ in $^3S_1$-$^3D_1$ $(I=0)$ channel
obtained at $t = 15-18$.
}
\end{center}
\end{minipage}
\hfill
\begin{minipage}{0.48\textwidth}
\begin{center}
\vspace*{-4mm}
\includegraphics[angle=0,width=0.85\textwidth]{figs/XiXi/pot/pot_2b.IS01_ten_Jz0_.CG05.t_all.gimp.eps}
\vspace*{-2mm}
\caption{
\label{fig:pot:XiXi:3S1:ten}
$\Xi\Xi$ tensor force $V_T(r)$ in $^3S_1$-$^3D_1$ $(I=0)$ channel
obtained at $t = 15-18$.
}
\end{center}
\end{minipage}
\end{figure}
Let us first consider the $\Xi\Xi$ system in $^1S_0$ (iso-triplet) channel.
This channel belongs to the 27-plet
in flavor SU(3) classification
as does the $NN (^1S_0)$ system.
Therefore,
the $\Xi\Xi (^1S_0)$ interaction serves as a good ``doorway'' to probe
the $NN (^1S_0)$ interaction.
In addition,
since the strong attraction in $NN (^1S_0)$ makes a ``dineutron'' nearly bound,
it has been attracting interest whether
the 27-plet interaction with the SU(3) breaking effects
forms a bound $\Xi\Xi (^1S_0)$ state or not~\cite{Haidenbauer:2014rna}.
In Fig.~\ref{fig:pot:XiXi:1S0:cen},
we show the lattice QCD results
for the central force $V_C(r)$
in the $\Xi\Xi (^1S_0)$ channel.
We observe a clear signal of
the mid- and long-range attraction as well as the repulsive core at short-range,
resembling the phenomenological potential in $NN(^1S_0)$ system.
Within statistical fluctuations,
the results are found to be consistent with each other in the range $t = 15-18$,
which suggests that the contaminations from inelastic excited states are suppressed
and higher-order terms in the velocity expansion are small.
Shown in Fig.~\ref{fig:phase:XiXi:1S0:cen}
are the
corresponding
phase shifts in terms of the center-of-mass energy.
The results indicate that the interaction is strongly attractive at low energies
while it is not sufficient to form a bound $\Xi\Xi (^1S_0)$ state.
It is desirable to examine this observation in experiments by, e.g., heavy-ion collisions.
We next consider the $\Xi\Xi$ system in $^3S_1$-$^3D_1$ (iso-singlet) channel.
This channel belongs to the 10-plet in flavor SU(3),
a unique representation with hyperon degrees of freedom.
By solving the coupled channel Schr\"odinger equation with
NBS correlators, we determine the central and tensor forces.
In Figs.~\ref{fig:pot:XiXi:3S1:cen} and ~\ref{fig:pot:XiXi:3S1:ten},
we show the central and tensor forces, respectively.
For the central force,
we observe the strong repulsive core,
which
can be understood from the viewpoint of
the quark Pauli blocking effect~\cite{Aoki:2012tk, Oka:1986fr}.
There also exists an indication of a weak attraction at mid range which
may reflect the effect of small attractive one-pion exchange potential (OPEP).
We observe that the $\Xi\Xi$ tensor force (Fig.~\ref{fig:pot:XiXi:3S1:ten})
has the opposite sign and is weaker
compared to
the $NN$ tensor force (Fig.~\ref{fig:pot:NN:3S1:ten}).
This can be understood from phenomenological
one-boson exchange potentials,
in which $\eta$ gives a weaker, positive tensor force
and $\pi$ gives a much weaker, negative tensor force,
with flavor SU(3) meson-baryon couplings
together with the $F/D$ ratio of the SU(6) quark model.
Further studies with larger statistics are currently underway.
\vspace*{-1mm}
\section{$NN$ systems ($S=0$ channel)}
\vspace*{-1mm}
\label{sec:NN}
\begin{figure}[t]
\begin{minipage}{0.48\textwidth}
\begin{center}
\vspace*{-4mm}
\includegraphics[angle=0,width=0.85\textwidth]{figs/NN/pot/pot_2b.IS01_ten_Jz0_.CG05.t_all.gimp.eps}
\vspace*{-2mm}
\caption{
\label{fig:pot:NN:3S1:ten}
$NN$ tensor force $V_T(r)$ in $^3S_1$-$^3D_1$ $(I=0)$ channel
obtained at $t = 7-10$,
together with bare OPEP
and AV18 phenomenological potential.
}
\end{center}
\end{minipage}
\hfill
\begin{minipage}{0.48\textwidth}
\begin{center}
\vspace*{-9mm}
\includegraphics[angle=0,width=0.85\textwidth]{figs/NN/pot/pot_2b.IS01_cen_Jz0_.CG05.t_all.gimp.eps}
\vspace*{-2mm}
\caption{
\label{fig:pot:NN:3S1:cen}
$NN$ central force $V_C(r)$ in $^3S_1$-$^3D_1$ $(I=0)$ channel
obtained at $t = 7-10$.
}
\end{center}
\end{minipage}
\end{figure}
Let us begin with the $^3S_1$-$^3D_1$ (iso-singlet) channel.
In Fig.~\ref{fig:pot:NN:3S1:ten}, we show the tensor force $V_T(r)$ obtained at $t = 7-10$.
A strong tensor force with the long-range tail is clearly visible.
Compared to the lattice tensor forces obtained with heavier quark masses~\cite{Aoki:2012tk},
the range of the interaction is found to be longer.
We also make a qualitative comparison with the bare OPEP and the phenomenological tensor force (AV18)
in Fig.~\ref{fig:pot:NN:3S1:ten},
while a quantitative comparison requires a study in terms of phase shifts,
which will be presented elsewhere.
The tail structures are found to be rather similar among these three potentials.
The overall behaviors, including the suppression at short range
compared to the bare OPEP, are also similar between the lattice potentials and AV18.
Since it is the tensor force which plays the most crucial role in
the binding of deuteron,
this is a very
encouraging
result.
In order to suppress contaminations from inelastic states,
it is desirable to take larger $t$ by increasing the statistics, which is in progress.
In Fig.~\ref{fig:pot:NN:3S1:cen}, we show the central force $V_C(r)$ in $^3S_1$-$^3D_1$ channel.
While the results suffer from much larger statistical fluctuations,
the repulsive core at short-range is clearly observed
and mid- and long-range attraction is obtained as well.
We then consider the $^1S_0$ (iso-triplet) channel.
Shown in Fig.~\ref{fig:pot:NN:1S0:cen} is the obtained central force $V_C(r)$ for $NN(^1S_0)$.
As is the case for the central force in $^3S_1$-$^3D_1$ channel,
the results suffer from large statistical fluctuations.
Yet, the repulsive core at short-range is observed
and mid- and long-range attraction is obtained as well.
In Fig.~\ref{fig:pot:NNvsXiXi:1S0:cen}, we compare
$NN(^1S_0)$ at $t=9$ (red) and $\Xi\Xi(^1S_0)$ at $t=18$ (blue).
As noted before, both channels
belong to the 27-plet in flavor SU(3) classification,
and the difference dictates the SU(3) breaking effect.
We observe that the repulsive core in $NN(^1S_0)$ is more enhanced
than that in $\Xi\Xi(^1S_0)$,
which
can be understood from the one-gluon exchange picture.
\begin{figure}[t]
\begin{minipage}{0.48\textwidth}
\begin{center}
\vspace*{-4mm}
\includegraphics[angle=0,width=0.85\textwidth]{figs/NN/pot/pot_2b.IS10_cen_____.CG05.t_all.gimp.eps}
\vspace*{-2mm}
\caption{
\label{fig:pot:NN:1S0:cen}
$NN$ central force $V_C(r)$ in $^1S_0$ $(I=1)$ channel
obtained at $t = 7-10$.
}
\end{center}
\end{minipage}
\hfill
\begin{minipage}{0.48\textwidth}
\begin{center}
\vspace*{-4mm}
\includegraphics[angle=0,width=0.85\textwidth]{figs/NN.XiXi/pot/pot_2b.IS10_cen_____.CG05.t_all.gimp.eps}
\vspace*{-2mm}
\caption{
\label{fig:pot:NNvsXiXi:1S0:cen}
Comparison between
central forces in $NN(^1S_0)$ at $t=9$ (red) and $\Xi\Xi(^1S_0)$ at $t=18$ (blue).
}
\end{center}
\end{minipage}
\end{figure}
\vspace*{-1mm}
\section{Summary}
\vspace*{-1mm}
\label{sec:summary}
We have reported the lattice QCD studies
for baryon interactions
with (almost) physical quark masses,
$m_\pi \simeq 146$ MeV and $m_K \simeq 525$ MeV
on a large lattice box $(96 a)^4 \simeq (8.1 {\rm fm})^4$.
Baryon forces have been calculated from
NBS
correlators
in the (time-dependent) HAL QCD method.
Preliminary results for $\Xi\Xi$ and $NN$ interactions have been presented.
In the $\Xi\Xi (^1S_0)$ channel, a strong attraction is obtained,
although it is not strong enough to form a bound state.
In the $\Xi\Xi$ ($^3S_1$-$^3D_1$) channel,
we have observed the strong repulsive core in the central force.
Tensor force is found to be weak and have an opposite sign compared to the $NN$ tensor force.
For $NN$ forces,
a clear signal for the strong tensor force has been obtained.
Repulsive cores as well as mid- and long-range attractions have been observed
in central forces in both $^1S_0$ and $^3S_1$-$^3D_1$ channels.
The repulsive core in $NN(^1S_0)$ is found to be stronger than that in $\Xi\Xi(^1S_0)$.
These observations have interesting physical implications
from the point of view of quark Pauli blocking effect
and phenomenological models of baryon forces.
Studies in terms of phase shifts with increased statistics are in progress.
\vspace*{-1mm}
\section*{Acknowledgments}
\vspace*{-1mm}
We thank members of PACS Collaboration for the gauge configuration generation.
The lattice QCD calculations have been performed on the K computer at RIKEN, AICS
(hp120281, hp130023, hp140209, hp150223, hp150262, hp160211),
HOKUSAI FX100 computer at RIKEN, Wako (G15023, G16030)
and HA-PACS at University of Tsukuba (14a-20, 15a-30).
We thank ILDG/JLDG~\cite{conf:ildg/jldg}
which serves as an essential infrastructure in this study.
This work is supported in part by
MEXT Grant-in-Aid for Scientific Research (JP15K17667),
SPIRE (Strategic Program for Innovative REsearch) Field 5 project
and
"Priority Issue on Post-K computer" (Elucidation of the Fundamental Laws and Evolution of the Universe).
\vspace*{-1mm}
\section{Introduction}
Non-spherical rigid molecules may show orientationally ordered phases,
which are often referred to as liquid crystalline (LC) phases.
Among all rigid molecules, (nonpolar) rod-like molecules have attracted the most
interest. Rod-like molecules exhibit several LC phases experimentally,
including nematic, smectic A and smectic C. The shape of rigid molecules can be
more complex, which may induce richer phase behaviors. In the recent two
decades, a novel type of molecule has occupied a place in
liquid crystal research. Molecules of this type can be represented by two
rigid rods connected end to end at a fixed angle, and are thus called bent-core
molecules. Bent-core molecules break the rotational symmetry of rod-like
molecules, and have been shown to exhibit numerous new liquid crystalline phases\cite{JJAP}.
A few rigid molecules of other architectures have also been studied, and the
experimental results indicate more complex phases\cite{NT}.
To describe LC phases, orientation-dependent variables must be
included in the free energy.
\begin{equation}\label{FreeEng0}
F[f]=F_0+k_BT\left(\int_{S^2}\d\bm{m} f(\bm{m})\log f(\bm{m}) +
\frac{c}{2} \int\int_{S^2\times S^2} \d\bm{m}\d\bm{m'}f(\bm{m})G(\bm{m},\bm{m'})f(\bm{m'})
\right)
\end{equation}
where $f(\bm{m})$ is orientational probability density function,
$G(\bm{m},\bm{m'})$ is kernel function, and $c$ is concentration.
Two of the most well-known $G$ are Onsager potential\cite{Ons}
\begin{equation}\label{Onsager}
2L^2D|\bm{m}\times\bm{m'}|
\end{equation}
for a rod of length $L$ and diameter $D$, and Maier-Saupe potential\cite{M_S}
\begin{equation}\label{MS}
C|\bm{m}\times\bm{m'}|^2=C-C(\bm{m}\cdot\bm{m'})^2
\end{equation}
with $C$ related to $L$, $D$ and temperature $T$.
The second moment of $\bm{m}$,
\begin{equation}
S=\left<\bm{mm}\right>,
\end{equation}
captures this order: its top eigenvalue is defined as the order parameter of the uniaxial nematic phase\cite{LiqCryst},
which is the only spatially homogeneous phase found for rod-like molecules
other than the isotropic phase.
The free energy (\ref{FreeEng0}) is a natural extension of virial expansion of spheres.
When handling a generic rigid molecule, a three-dimensional rotation
$P\in SO_3$ is necessary to describe its orientation. Thus we need to
substitute $\bm{m}$ with $P$ in (\ref{FreeEng0}). Kernel function $G(P,P')$
can be deduced from pairwise interaction of molecules.
Different phases correspond to different local minima of the free energy.
However, it is obscure to distinguish phases with the probability density
function $f$. One always wants to seek a few order parameters to classify them,
like the eigenvalues of $\left<\bm{mm}\right>$ for nematic phase of rods.
In the existing approaches, order parameters are usually chosen first,
and models at different levels are constructed for these order parameters.
For example, in \cite{OrdPar, BiLand} Landau-type free energies are constructed
for molecules with $C_{2v}$ and $D_{2h}$ symmetries.
In \cite{Bi1} a molecular theory for $D_{2h}$ symmetry is discussed, and four order parameters
are proposed. The kernel function there is a polynomial of
$\bm{m}_i\cdot\bm{m'}_j$. When solving the model, further assumptions are made
to deduce the equations for the four order parameters.
The purpose of this article is to present a procedure for reducing $f$ to
a few order parameters.
In this procedure, the symmetries of molecules play a key role. These symmetries
are inherited by the kernel function $G$ and the probability density function $f$,
and they make it possible to reduce the configuration
space. As an example, the reduction derives (\ref{FreeEng0}) for molecules
with axial symmetry.
The next step is to look for a good approximation of $G$.
Here we are partially inspired by some ideas in \cite{Bi1}.
We will prove that
$G$ is a function of $\bar{P}=P^{-1}P'=(p_{ij})_{3\times 3}$, and approximate $G$
by a polynomial of the $p_{ij}$. The advantage of a polynomial approximation is
that the Euler-Lagrange equation for $f$ can be replaced by self-consistent
equations for several moments of the $\bm{m}_i$ that fully characterize the system.
The symmetries of $G$ determine the form of the approximate kernel function.
In other words, the symmetries of $G$ determine the candidate moments.
A truncation within the remaining terms follows, which
relies on intuition from experiments and simulations. Maier-Saupe potential
is obtained spontaneously after this step for molecules with $D_{\infty h}$
symmetry, and the form of the approximation is derived for molecules with $C_{2v}$
symmetry.
The coefficients of the polynomial approximation of $G$ are determined by
molecular parameters and temperature. Analysing the impact of these
coefficients may further reveal some properties of the chosen moments.
Analysis of this type has been done for rods\cite{AxiSymMS,AxiSym2,AxiSym3} and polar rods\cite{Dipol}.
We will present some results on molecules with $C_{2v}$ symmetry.
The analysis enables us to identify the independent
variables among these moments, which are chosen as order parameters.
From the implications of the analysis and the experimental results,
we predict that five order parameters are enough for bent-core molecules.
The rest of this paper is organized as follows. Sec.\ref{Model} describes
the density functional theory of generic rigid molecules:
the construction of the kernel function $G$ is discussed, some simple properties
are presented, and the excluded-volume potential is derived for two types of
molecules with $C_{2v}$ symmetry.
In Sec.\ref{Sy}, we analyze the symmetry properties of $G$ and $f$ and
describe the reduction of the configuration space.
Sec.\ref{Trunc} shows the derivation of the equations for the moments and
the screening of the moments by the symmetries of the kernel function.
Sec.\ref{Ord} is dedicated to the analysis of the impact of the coefficients
in the polynomial approximation of $G$.
In Sec.\ref{Con}, we draw conclusions and propose some prospective problems.
\section{Modelling of rigid molecules of arbitrary shape}\label{Model}
This section presents the density functional theory of rigid molecules.
We start from a general formulation, then deduce the free energy for spatially
homogeneous phases.
A three-dimensional rigid molecule might be chiral, leading to two possible
configurations that cannot coincide through proper rotation.
In this work we simply deal with systems with single chirality.
Systems with mixed chirality can be treated by regarding two kinds of chirality
as different molecules.
\subsection{Representation of the configuration of rigid molecules}
We choose a reference point $\hat{O}$ on the rigid molecule and a body-fixed orthogonal
basis $\bm{m}_1,\bm{m}_2,\bm{m}_3$. The configuration of the molecule is
determined by the position of $\hat{O}$ and the orientation of $\bm{m}_i$. In a
space-fixed orthogonal coordinate system $(O;\bm{e}_1,\bm{e}_2,\bm{e}_3)$, they
can be expressed in terms of $\bm{x}_0=\overrightarrow{O\hat{O}}$
and a three-dimensional proper rotation $P\in SO_3$.
In the language of matrix, $P$ is orthogonal such that
\begin{equation}\label{RotP}
\left(
\bm{m}_1,
\bm{m}_2,
\bm{m}_3
\right)
=
\left(
\bm{e}_1,
\bm{e}_2,
\bm{e}_3
\right)P.
\end{equation}
The elements of $P^T=(m_{ij})$ is given by
$$
m_{ij}=\bm{m}_i\cdot\bm{e}_j.
$$
The position of a fixed point on the molecule is represented by its coordinates
in the body-fixed coordinate system $(O;\bm{m}_1,\bm{m}_2,\bm{m}_3)$:
$$
\bm{\hat{x}}=\left(
\begin{array}{c}
\hat{x}_1\\
\hat{x}_2\\
\hat{x}_3
\end{array}
\right).
$$
Its location in space, expressed by its coordinates in $(O;\bm{e}_1,\bm{e}_2,\bm{e}_3)$, is
$$
\bm{x}=\left(
\begin{array}{c}
x_1\\
x_2\\
x_3
\end{array}
\right)
=P\bm{\hat{x}}+\bm{x}_0.
$$
Every $P\in SO_3$ has a representation by Euler angles $\alpha, \beta, \gamma$:
\begin{eqnarray}
&&P(\alpha,\beta,\gamma)\nonumber\\
&=&\left(
\begin{array}{ccc}
\cos\alpha &\quad -\sin\alpha\cos\gamma &\quad\sin\alpha\sin\gamma\\
\sin\alpha\cos\beta &\quad\cos\alpha\cos\beta\cos\gamma-\sin\beta\sin\gamma &
\quad -\cos\alpha\cos\beta\sin\gamma-\sin\beta\cos\gamma\\
\sin\alpha\sin\beta &\quad\cos\alpha\sin\beta\cos\gamma+\cos\beta\sin\gamma &
\quad -\cos\alpha\sin\beta\sin\gamma+\cos\beta\cos\gamma
\end{array}
\right),\label{EulerRep}
\end{eqnarray}
with
$$
\alpha\in [0,\pi],\ \beta,\gamma\in [0,2\pi).
$$
The uniform probability measure on $SO_3$ is given by
$$
\d\nu=\frac{1}{8\pi^2}\sin\alpha\d\alpha\d\beta\d\gamma.
$$
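For later numerical evaluation of orientational averages, rotations can be drawn from this measure by noting that $\cos\alpha$ is uniform on $[-1,1]$ while $\beta$ and $\gamma$ are uniform on $[0,2\pi)$. A small Python sketch (names illustrative):
\begin{verbatim}
# Sampling SO_3 uniformly in the Euler-angle convention above:
# cos(alpha) uniform on [-1, 1]; beta, gamma uniform on [0, 2 pi).
import numpy as np

rng = np.random.default_rng(0)

def sample_euler(n):
    alpha = np.arccos(rng.uniform(-1.0, 1.0, n))
    beta  = rng.uniform(0.0, 2.0 * np.pi, n)
    gamma = rng.uniform(0.0, 2.0 * np.pi, n)
    return alpha, beta, gamma

# Sanity check: <cos^2(alpha)> = 1/3 for the uniform measure.
a, _, _ = sample_euler(200_000)
assert abs(np.mean(np.cos(a) ** 2) - 1.0 / 3.0) < 5e-3
\end{verbatim}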
\subsection{Density functional theory}
We start from the extension of the virial expansion that includes
both spatial and orientational inhomogeneity.
\begin{equation}\label{FreeEng1}
\begin{split}
F[f]&=F_0+\frac{k_BT}{V}
\left[\int \d\nu\d\bm{x} f(\bm{x},P)\log f(\bm{x},P)\right. \\
&\left.+\frac{1}{2}\int\d\nu(P)\d\bm{x}\d\nu(P')\d\bm{x'}
f(\bm{x},P)G(\bm{x},P,\bm{x'},P')f(\bm{x'},P')\right].
\end{split}
\end{equation}
The probability density function $f$ agrees with the concentration $c$:
$$
\frac{1}{V}\int\d\bm{x}\int\d\nu f(\bm{x},P)=c.
$$
Virial expansion is appropriate for small concentration.
Corrections for large concentration have also been discussed, such as
in \cite{Largec}.
The kernel function in (\ref{FreeEng1}) is given by Mayer function\cite{Mayer}
\begin{equation}\label{VirExp}
G(\bm{x},P,\bm{x'},P')=1-\exp\left(-U(\bm{x},P,\bm{x'},P')/k_BT\right)
\end{equation}
where $U$ is pairwise interaction.
Many types of interaction could appear in $U$. But here we model the molecule
as a combination of spheres with the same diameter $D$ and assume that $U$
consists of the sum of the interactions of all pairs of spheres.
Suppose that the distribution of their centers is given by $\rho(\bm{\hat{x}})$
in the body-fixed coordinate system, then $U$ can be written as
\begin{equation}\label{Interaction}
U(\bm{x},P,\bm{x'},P')=\int\d\bm{\hat{x}}\d\bm{\hat{x}'}
V_0\Big(|(P\bm{\hat{x}}+\bm{x})-(P'\bm{\hat{x}'}+\bm{x'})|\Big)
\rho(\bm{\hat{x}})\rho(\bm{\hat{x}'})
\end{equation}
where $V_0(r)$ is the potential of a single pair of spheres.
It can take hardcore potential
\begin{equation}\label{hardcore}
V_0(r)=\left\{
\begin{array}{cc}
\infty,&r\le D\\
0,&r>D
\end{array}
\right.
,
\end{equation}
or Lennard-Jones potential
\begin{equation}\label{LJ}
V_0(r)=4\epsilon\left[
\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^6\right].
\end{equation}
In (\ref{LJ}), $\sigma$ is a function of $D$. Some other types of interaction between spheres
can also be incorporated in $V_0$, such as the electrostatic potential for charged
molecules.
Independent of $V_0$, kernel function $G$ has the following properties:
\begin{propos}\label{Inv0}
\begin{enumerate}
\item
$G(\bm{x},P,\bm{x'},P')$ remains unchanged when switching $(\bm{x},P)$ and
$(\bm{x'}, P')$:
\begin{equation}\label{InvSwap}
G(\bm{x},P,\bm{x'},P')=G(\bm{x'},P',\bm{x},P).
\end{equation}
\item
$G(\bm{x},P,\bm{x'},P')$ depends only on $\bm{x-x'}$ when $P$ and $P'$
are fixed:
\begin{equation}\label{Inv1}
G(\bm{x},P,\bm{x'},P')=G(\bm{x-x'},P,P').
\end{equation}
\item
$G(\bm{x},P,\bm{x'},P')$ is invariant when two molecules rotate together:
\begin{equation}\label{Inv2}
G\big(T(\bm{x-x'}),TP,TP'\big)=G(\bm{x-x'},P,P'),\ \forall T\in SO_3.
\end{equation}
\end{enumerate}
\end{propos}
\begin{proof}
From (\ref{VirExp}) it is sufficient to show
\begin{eqnarray*}
&&U(\bm{x},P,\bm{x'},P')=U(\bm{x'},P',\bm{x},P)=U(\bm{x-x'},P,P'),\\
&&U\big(T(\bm{x-x'}),TP,TP'\big)=U(\bm{x-x'},P,P').
\end{eqnarray*}
The former is obvious from the definition (\ref{Interaction}) of $U$. For the
latter,
\begin{eqnarray*}
U\big(T(\bm{x-x'}),TP,TP'\big)&=&\int\d\bm{\hat{x}}\d\bm{\hat{x}'}
V_0\Big(\left|(TP\bm{\hat{x}}+T\bm{x})-(TP'\bm{\hat{x}'}+T\bm{x'})\right|\Big)
\rho(\bm{\hat{x}})\rho(\bm{\hat{x}'})\\
&=&\int\d\bm{\hat{x}}\d\bm{\hat{x}'}
V_0\Big(|T|\cdot\left|(P\bm{\hat{x}}+\bm{x})-(P'\bm{\hat{x}'}+\bm{x'})\right|\Big)
\rho(\bm{\hat{x}})\rho(\bm{\hat{x}'})\\
&=&U(\bm{x-x'},P,P').
\end{eqnarray*}
\end{proof}
Now we deduce the free energy of spatially homogeneous phases, namely
$$
f(\bm{x},P)=c\tilde{f}(P)
$$
where $\tilde{f}(P)$ is a density function on $SO_3$. Define homogeneous kernel
function as
\begin{equation}\label{HomG}
\tilde{G}(P,P')=\int\d\bm{x'} G(\bm{x-x'},P,P').
\end{equation}
It is well-defined because the integration on the right side is
invariant with $\bm{x}$:
\begin{eqnarray*}
\int\d\bm{x'} G(\bm{x-x'},P,P')&=&\int\d(\bm{x'-x}) G(\bm{x-x'},P,P')\\
&=&\int\d\bm{x'} G(\bm{0-x'},P,P'),\qquad \forall \bm{x}\in \mathbb{R}^3.
\end{eqnarray*}
Applying Proposition \ref{Inv0} to $\tilde{G}$, we have
\begin{propos}\label{RelP}
$\tilde{G}(P,P')$ satisfies
$$
\tilde{G}(P,P')=\tilde{G}(P',P)
$$
and
$$
\tilde{G}(P,P')=\tilde{G}(TP,TP').
$$
By setting $T=P^{-1}$, we know that $\tilde{G}(P,P')$ is a function of
$\bar{P}=P^{-1}P'$, which is denoted by $\tilde{G}(\bar{P})$.
\end{propos}
\begin{proof}
Using (\ref{InvSwap}) and (\ref{Inv2}), we get
\begin{eqnarray*}
\tilde{G}(P',P)&=&\int\d\bm{x'}G(\bm{x-x'},P',P)\\
&=&\int\d\bm{x'}G(\bm{x'-x},P,P')=\int\d\bm{x'}G(\bm{-x-x'},P,P')\\
&=&\tilde{G}(P,P'), \\\\
\tilde{G}(TP,TP')&=&\int\d\bm{x'}G\big(T(\bm{x-x'}),TP,TP'\big)\\
&=&\int\d\bm{x'}G(\bm{x-x'},P,P')=\tilde{G}(P,P').
\end{eqnarray*}
\end{proof}
The rest of the paper will focus on spatially homogeneous phases.
For convenience we write $f(P)$ and $G(P,P')$ instead of
$\tilde{f}(P)$ and $\tilde{G}(P,P')$.
Then the free energy (\ref{FreeEng1}) becomes
\begin{eqnarray}
\frac{F[f]}{c}&=&\frac{F_0}{c}+k_BT\log c+k_BT
\left[\int \d\nu f(P)\log f(P)\right. \nonumber\\
&&\left.+\frac{c}{2}\int\d\nu(P)\d\nu(P')
f(P)G(P,P')f(P')\right]\label{FreeEngN}
\end{eqnarray}
with the normalization condition of $f(P)$
\begin{equation}\label{Consrv}
\int\d\nu f(P)=1.
\end{equation}
For rod-like and bent-core molecules, the sphere centers lie on a curve.
They can be viewed as either discretely or continuously distributed on the
curve, which means
$$
\rho(\bm{\hat{x}})=\sum_{j=0}^N\delta(\hat{\bm{x}}-\bm{\tilde{r}}_j),
$$
or
$$
\rho(\bm{\hat{x}})=\int\d s \delta\big(\bm{\hat{x}}-\bm{\tilde{r}}(s)\big).
$$
In the discrete version, a rod-like molecule is modelled by
\begin{equation}\label{DisR}
\bm{\tilde{r}}_j=Ls_j\bm{m}_1,\ s_j=\frac{j}{N}-\frac{1}{2},\ L=(N-1)r_0,
\end{equation}
and a bent-core molecule is modelled by
\begin{equation}\label{DisB}
\bm{\tilde{r}}_j=L(\frac{1}{2}-|s_j|)\cos\frac{\theta}{2}\bm{m}_1 +Ls_j
\sin\frac{\theta}{2}\bm{m}_2,\ s_j=\frac{j}{N}-\frac{1}{2},\ L=(N-1)r_0,
\end{equation}
where $N$ is even. In the continum version, a rod-like molecule is modelled by
\begin{equation}\label{ContR}
\bm{\tilde{r}}(s)=Ls\bm{m}_1,\ s\in[-\frac{1}{2},\frac{1}{2}],
\end{equation}
and a bent-core molecule is modelled by
\begin{equation}\label{ContB}
\bm{\tilde{r}}(s)=L(\frac{1}{2}-|s|)\cos\frac{\theta}{2}\bm{m}_1
+Ls\sin\frac{\theta}{2}\bm{m}_2,\ s\in[-\frac{1}{2},\frac{1}{2}].
\end{equation}
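For concreteness, the discrete center distributions (\ref{DisR}) and (\ref{DisB}) translate directly into code; the following Python sketch builds the centers in the body frame ($N$ must be even for the bent-core molecule):
\begin{verbatim}
# Body-frame sphere centers for the discrete rod and bent-core models
# above; rows are centers, columns are the m1, m2, m3 components.
import numpy as np

def rod_centers(N, r0):
    L = (N - 1) * r0
    s = np.arange(N + 1) / N - 0.5
    return np.outer(L * s, [1.0, 0.0, 0.0])

def bent_core_centers(N, r0, theta):   # N even
    L = (N - 1) * r0
    s = np.arange(N + 1) / N - 0.5
    x = L * (0.5 - np.abs(s)) * np.cos(theta / 2.0)   # m1 component
    y = L * s * np.sin(theta / 2.0)                   # m2 component
    return np.stack([x, y, np.zeros_like(s)], axis=1)
\end{verbatim}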
\begin{figure}
\centering
\includegraphics[width=0.87\textwidth,keepaspectratio]{molecules.pdf}
\caption{Different rigid molecules. From left to right: rod,
bent-core molecule, spherotriangle, spherocuboid.}\label{molecules}
\end{figure}
Both the discrete and continuous versions have the same symmetry: rod-like
molecules have $D_{\infty h}$ symmetry, and bent-core molecules have $C_{2v}$
symmetry. Another example of $C_{2v}$ symmetry is isosceles spherotriangles
(by sphero-A, we refer to the {\it Minkowski sum} of A and a sphere):
\begin{equation}
\rho(\bm{\hat{x}})=\int_T \d S \delta(\bm{\hat{x}}-\bm{r})
\end{equation}
where $T$ is an isosceles triangle that lies in plane $O\bm{m}_1\bm{m}_2$.
Spherocuboids, which possess $D_{2h}$ symmetry, are considered in \cite{Bi1}.
The distribution of sphere centers is given by
\begin{equation}
\rho(\bm{\hat{x}})=\int_C \d V \delta(\bm{\hat{x}}-\bm{r})=\chi_{\{\bm{\hat{x}}\in C\}}
\end{equation}
where $C$ is a cuboid with edges parallel to $\bm{m}_i$.
The molecules mentioned above are drawn in Fig.\ref{molecules}.
Next we compute $G(P,P')$ for the hardcore potential
(\ref{hardcore}). Notice that $G(\bm{x},P,\bm{x'},P')$ equals $1$ if the two
molecules overlap and $0$ otherwise. Thus by (\ref{HomG}), $G(P,P')$ is precisely
the excluded-volume potential, and the problem reduces to finding this
volume. Onsager potential (\ref{Onsager}) is an approximation of the excluded
volume for rod-like molecules. A similar approach has been carried out for
spherozonotopes by B. M. Mulder\cite{Mulder} based on the {\it Steiner formula}
\cite{Convex}. Here we present the calculation of the excluded volume of
spherotriangles.
Denote the sets of sphere centers of the two molecules as $T_1$ and $T_2$.
The excluded region of the two molecules is $K+B_D$, where $K=T_1-T_2$ and
$B_D$ is a ball of radius $D$ (two spheres of diameter $D$ overlap exactly
when their centers are within distance $D$). Here $A-B$ represents
$$
A-B=\{a-b|a\in A, b\in B\}.
$$
When $K$ is convex,
the measure of $K+B_D$ is expressed by the Steiner formula as a polynomial in $D$.
Here we write down the three-dimensional case
\begin{equation}\label{Steiner}
V(K+B_D)=V_3(K) + DV_2(K) + \pi D^2V_1(K) + \frac{4}{3}\pi D^3,
\end{equation}
where $V_3$ is the volume and $V_2$ is the surface area of $K$(see p.210 of
\cite{Convex}). $V_1$ is the {\it mean width} of $K$ (for the definition,
see p.42 of \cite{Convex}). For a polytope, $V_1$ is written as
\begin{equation}\label{ExtRep}
V_1=\sum_{e}\gamma(e,K)l(e).
\end{equation}
The sum is taken over all the edges of $K$. $l(e)$ is the length of edge, and
$\gamma(e,K)$ represents the external angle at $e$. It is defined as
$$
\gamma(e,K)=\frac{\theta}{2\pi},
$$
where $\theta$ is the angle between outward normal vectors of two faces that
share edge $e$.
For a polytope $K$, each of the four terms in (\ref{Steiner}) represents a part
of $K+B_D$: $K$ itself; prisms of height $D$ growing outward from each face; circular-sector
cylinders along each edge; and spherical corner regions at each vertex.
$T_1=\triangle OAB$ and $-T_2=\triangle O'A'B'$.
Details are left to Appendix. Denote the edges of triangles as
$$
\overrightarrow{AO}=\bm{a},\ \overrightarrow{OB}=\bm{b},\ \overrightarrow{BA}=\bm{c},\
\overrightarrow{A'O'}=\bm{a'},\ \overrightarrow{O'B'}=\bm{b'},\ \overrightarrow{B'A'}=\bm{c'}.
$$
We have
\begin{align*}
&V_3(K)=
\frac{1}{2}\Big(|\bm{a}\times\bm{b}\cdot\bm{a'}|+|\bm{a}\times\bm{b}\cdot\bm{b'}|
+|\bm{a}\times\bm{b}\cdot\bm{c'}|+|\bm{a'}\times\bm{b'}\cdot\bm{a}|
+|\bm{a'}\times\bm{b'}\cdot\bm{b}|+|\bm{a'}\times\bm{b'}\cdot\bm{c}|\Big).\\
&V_1(K)=\frac{1}{2}\Big(|\bm{a}|+|\bm{b}|+|\bm{c}|+|\bm{a'}|+|\bm{b'}|+|\bm{c'}|\Big).
\end{align*}
The expression of $V_2$ depends on the relative orientation of two molecules.
Assume that
\begin{eqnarray*}
(\bm{a}\times\bm{b}\cdot\bm{a'})(\bm{a}\times\bm{b}\cdot\bm{b'})\ge 0, \\
(\bm{a'}\times\bm{b'}\cdot\bm{a})(\bm{a'}\times\bm{b'}\cdot\bm{b})\ge 0.
\end{eqnarray*}
Otherwise we could rotate the notations of the edges.
If $(\bm{c}\times\bm{c'}\cdot\bm{a})(\bm{c}\times\bm{c'}\cdot\bm{a'})>0$, then
$$
V_2(K)=|\bm{a}\times \bm{b}|+|\bm{a'}\times \bm{b'}|
+|\bm{a}\times \bm{a'}|+|\bm{a}\times \bm{b'}|+|\bm{b}\times \bm{a'}|
+|\bm{b}\times \bm{b'}|+|\bm{c}\times \bm{c'}|.
$$
If $(\bm{c}\times\bm{c'}\cdot\bm{a})(\bm{c}\times\bm{c'}\cdot\bm{a'})<0$, then
$$
V_2(K)=|\bm{a}\times \bm{b}|+|\bm{a'}\times \bm{b'}|
+|\bm{c}\times \bm{a'}|+|\bm{c}\times \bm{b'}|
+|\bm{a}\times \bm{c'}|+|\bm{b}\times \bm{c'}|.
$$
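For numerical use, the formulas above translate directly into code. A minimal Python transcription follows (a sketch only, assuming the edge vectors are already labelled so that the stated sign conventions hold; \texttt{a2, b2, c2} stand for $\bm{a'},\bm{b'},\bm{c'}$):
\begin{verbatim}
# Excluded volume of two spherotriangles via the Steiner pieces above:
#   V(K + B_D) = V3 + D*V2 + pi*D^2*V1 + (4/3)*pi*D^3.
import numpy as np

def excluded_volume(a, b, c, a2, b2, c2, D):
    trip = lambda u, v, w: abs(np.dot(np.cross(u, v), w))
    area = lambda u, v: np.linalg.norm(np.cross(u, v))

    V3 = 0.5 * (trip(a, b, a2) + trip(a, b, b2) + trip(a, b, c2)
                + trip(a2, b2, a) + trip(a2, b2, b) + trip(a2, b2, c))
    V1 = 0.5 * sum(np.linalg.norm(e) for e in (a, b, c, a2, b2, c2))

    cc = np.cross(c, c2)                  # case split for V2
    if np.dot(cc, a) * np.dot(cc, a2) > 0:
        V2 = (area(a, b) + area(a2, b2) + area(a, a2) + area(a, b2)
              + area(b, a2) + area(b, b2) + area(c, c2))
    else:
        V2 = (area(a, b) + area(a2, b2) + area(c, a2) + area(c, b2)
              + area(a, c2) + area(b, c2))

    return V3 + D * V2 + np.pi * D**2 * V1 + 4.0 / 3.0 * np.pi * D**3
\end{verbatim}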
When $K$ is not convex, $V(K+B_D)$ does not have a general formula. In the Appendix
we give an expression for the excluded volume of bent-core molecules.
The excluded-volume potential fails to contain the temperature $T$ in the
kernel function, which makes it insufficient for studying thermotropic LCs.
In the next two sections we present a systematic procedure for constructing
an approximation of the kernel function that retains the temperature.
Symmetries play an important role throughout the procedure.
\section{Symmetries of kernel function and reduction of configuration space}
\label{Sy}
In this section we study the symmetry properties of $G$ and $f$ inherited from
molecular symmetries. That a rigid molecule is symmetric under $T\in SO_3$ means
\begin{equation}\label{Symm0}
\rho(T\bm{\hat{x}})=\rho(\bm{\hat{x}}).
\end{equation}
Denote by $\H$ a subgroup of $SO_3$ that leaves the molecule invariant.
We start from two fundamental theorems.
\begin{thm}
\label{TInv}
If $T\in\H$, then
\begin{equation}\label{SymmG}
G(PT,P')=G(P,P'T)=G(P,P').
\end{equation}
\end{thm}
\proof
From (\ref{Symm0}) and (\ref{Interaction}), $U$ is symmetric under $T$:
\begin{eqnarray*}
U(\bm{x},PT,\bm{x'},P')&=&\int\d\bm{\hat{x}}\d\bm{\hat{x}'}
V_0\Big(\left|(PT\bm{\hat{x}}+\bm{x})-(P'\bm{\hat{x}'}+\bm{x'})\right|\Big)
\rho(\bm{\hat{x}})\rho(\bm{\hat{x}'})\\
&=&\int\d(T\bm{\hat{x}})\d\bm{\hat{x}'}
V_0\Big(\left|(P(T\bm{\hat{x}})+\bm{x})-(P'\bm{\hat{x}'}+\bm{x'})\right|\Big)
\rho(T\bm{\hat{x}})\rho(\bm{\hat{x}'})\\
&=&\int\d\bm{\hat{x}}\d\bm{\hat{x}'}
V_0\Big(\left|(P\bm{\hat{x}}+\bm{x})-(P'\bm{\hat{x}'}+\bm{x'})\right|\Big)
\rho(\bm{\hat{x}})\rho(\bm{\hat{x}'})\\
&=&U(\bm{x},P,\bm{x'},P').
\end{eqnarray*}
By (\ref{VirExp}), we have
\begin{eqnarray*}
G(\bm{x-x'},P,P')&=&1-\exp\big(-U(\bm{x},P,\bm{x'},P')\big)\\
&=&1-\exp\big(-U(\bm{x},PT,\bm{x'},P')\big)\\
&=&G(\bm{x-x'},PT,P').
\end{eqnarray*}
Hence by (\ref{HomG}), we get
\begin{eqnarray*}
G(P,P')&=&\int\d\bm{x'} G(\bm{x-x'},P,P')\\
&=&\int\d\bm{x'} G(\bm{x-x'},PT,P')=G(PT,P').
\end{eqnarray*}
The other equality in (\ref{SymmG}) could be obtained similarly.
\qed
\begin{thm}
\label{SymP}
If a molecule has a symmetry plane with unit normal vector $\bm{\hat{k}}$,
then
\begin{equation}\label{SymPlane}
G(J\bar{P}J)=G(\bar{P})
\end{equation}
where $J$ is the rotation around $\bm{\hat{k}}$ by $\pi$.
\end{thm}
\begin{proof}
Assume that the symmetry plane contains $\hat{O}$, otherwise we shift the body-fixed
coordinate system to meet this requirement. Now we have
\begin{equation}\label{SymPlane0}
\rho\big(\bm{\hat{x}}-2(\bm{\hat{k}}\cdot\bm{\hat{x}})\bm{\hat{k}}\big)=\rho(\bm{\hat{x}}).
\end{equation}
Note that
\begin{eqnarray*}
J\bm{\hat{x}}&=&J\big(\bm{\hat{x}}-(\bm{\hat{k}}\cdot\bm{\hat{x}})\bm{\hat{k}}\big)
+(\bm{\hat{k}}\cdot\bm{\hat{x}})J\bm{\hat{k}}\\
&=&-\big(\bm{\hat{x}}-(\bm{\hat{k}}\cdot\bm{\hat{x}})\bm{\hat{k}}\big)
+(\bm{\hat{k}}\cdot\bm{\hat{x}})\bm{\hat{k}}\\
&=&-\bm{\hat{x}}+2(\bm{\hat{k}}\cdot\bm{\hat{x}})\bm{\hat{k}}.
\end{eqnarray*}
Substituting it into (\ref{SymPlane0}), we get
$$
\rho(-J\bm{\hat{x}})=\rho(\bm{\hat{x}}).
$$
Therefore
\begin{eqnarray*}
&&U(\bm{x},PJ,\bm{x'},P'J)\\
&=&\int\d\bm{\hat{x}}\d\bm{\hat{x}'}
V_0\Big(\left|(PJ\bm{\hat{x}}+\bm{x})-(P'J\bm{\hat{x}'})+\bm{x'}\right|\Big)
\rho(\bm{\hat{x}})\rho(\bm{\hat{x}'})\\
&=&\int\d(-J\bm{\hat{x}})\d(-J\bm{\hat{x}'})
V_0\Big(\left|\big(P(-J\bm{\hat{x}})-\bm{x}\big)-\big(P'(-J\bm{\hat{x}'}\big)-\bm{x'})\right|\Big)
\rho(-J\bm{\hat{x}})\rho(-J\bm{\hat{x}'})\\
&=&\int\d\bm{\hat{x}}\d\bm{\hat{x}'}
V_0\Big(\left|(P\bm{\hat{x}}-\bm{x})-(P'\bm{\hat{x}'}-\bm{x'})\right|\Big)
\rho(\bm{\hat{x}})\rho(\bm{\hat{x}'})\\
&=&U(-\bm{x},P,-\bm{x'},P').
\end{eqnarray*}
Similar to Theorem \ref{TInv}, we have
$$
G(\bm{x-x'},P,P')=G(\bm{x'-x},PJ,P'J).
$$
Thus
\begin{eqnarray*}
G(P,P')&=&\int\d\bm{x'} G(\bm{x-x'},P,P')\\
&=&\int\d\bm{x'} G(\bm{x'-x},PJ,P'J)=G(PJ,P'J).
\end{eqnarray*}
\end{proof}
The local minima of the free energy (\ref{FreeEngN}) satisfy the Euler-Lagrange
equation
\begin{equation}\label{EL}
\lambda+\log f(P) + c\int\d\nu(P')G(P,P')f(P')=0
\end{equation}
where $\lambda$ is a Lagrange multiplier that ensures the normalization of $f$.
Denote
\begin{equation}\label{DefU}
W(P)=c\int\d\nu(P')G(P,P')f(P').
\end{equation}
The solution of (\ref{EL}) has the form
\begin{equation}\label{Boltz}
f(P)=C\exp\big(-W(P)\big).
\end{equation}
\begin{thm}\label{fInv}
If $T\in \H$, the solutions of (\ref{EL}) satisfy
\begin{equation}\label{fInv1}
f(P)=f(PT).
\end{equation}
\end{thm}
\proof
Substitute $P$ with $PT$ in (\ref{EL}). By Theorem \ref{TInv}, $G(PT,P')
=G(P,P')$, thus
$$
\lambda+\log f(PT)=-\int\d\nu(P')G(PT,P')f(P')
=-\int\d\nu(P')G(P,P')f(P')=\lambda+\log f(P).
$$
\qed
With the symmetries of $G$ and $f$, the configuration space can be
reduced. Theorems \ref{TInv} and \ref{fInv} indicate that $G$ is
a function of $\H \bar{P}\H$, and $f$ is a function of $P\H$. Note that the cosets
of the subgroup $\H$ form a partition of $SO_3$. This allows us to define
$\Omega=\{P\H | P\in SO_3\}$ as the new configuration space, where $f$ and $G$
are well-defined on $\Omega$ and $\Omega\times\Omega$, respectively.
If we denote the probability space on $SO_3$ as $(SO_3, \mathcal{F}, \nu)$, then the new
probability space $(\Omega, \mathcal{F}_{\H},\nu_{\H})$ is defined as follows:
\begin{eqnarray*}
&&\mathcal{F}_{\H}=\{\mathscr{A}\H |\mathscr{A}\subset SO_3\}\bigcap\mathcal{F}, \\
&&\nu_{\H}(\mathscr{A}\H)=\nu(\mathscr{A}\H).
\end{eqnarray*}
It can be proved that for any $\mathcal{F}_{\H}$ measurable function $h$,
$$
\int_{\Omega}h\d\nu_{\H}=\int_{SO_3}\tilde{h}\d\nu
$$
where $\tilde{h}$ is defined as $\tilde{h}(P)=h(P\H)$. Hence the free energy could be rewritten as
\begin{eqnarray*}
\frac{F}{c}&=&\frac{F_0}{c}+k_BT\log c
+ k_BT\left[\int_{\Omega}f(P\H)\ln f(P\H)\d\nu_{\H}\right.\\
&&\left.+\frac{c}{2}\int_{\Omega\times\Omega}
f(P\H)G(\H\bar{P}\H)f(P'\H)\d\nu_{\H}(P\H)\d\nu_{\H}(P'\H)\right]
\end{eqnarray*}
with the normalization condition $\int_{\Omega}f(P\H)\d\nu_{\H}=1$.
The above process reduces the configuration space of molecules with
$C_{\infty}$ symmetry to $S^2$.
\begin{thm}
For molecules with $C_{\infty}$ symmetry, the configuration space is reduced
to $S^2$ with the uniform probablity measure
$$
\d\nu_{\H}=\frac{\sin\alpha}{4\pi}\d\alpha\d\beta.
$$
\end{thm}
\proof
Because $\H$ consists of all the rotations around $\bm{m}_1$,
$$
P(\alpha,\beta,\gamma)\H=\big\{P(\alpha,\beta,\theta)|\theta\in[0,2\pi)\big\}.
$$
So we could select $P(\alpha,\beta,0)$ as the representative element of $P\H$,
and $\Omega$ becomes
$$
\Omega=\big\{P(\alpha,\beta,0)|\alpha\in[0,\pi],\beta\in[0,2\pi)\big\}=S^2.
$$
Hence any $\mathscr{A}\H\in\mathcal{F}$ equals $\mathscr{A}$ if $\mathscr{A}$
consists of some $P(\cdot,\cdot,0)$. The measure on the reduced configuration
space is given by
\begin{eqnarray*}
\nu_{\H}(\mathscr{A}\H)&=&\nu(\mathscr{A}\H)\\
&=&\int_{\mathscr{A}\H} \frac{\sin\alpha}{8\pi^2}\d\alpha\d\beta\d\gamma
\\
&=&\int_{\mathscr{A}}\frac{\sin\alpha}{4\pi}\d\alpha\d\beta
\int_0^{2\pi}\frac{1}{2\pi}\d\gamma\\
&=&\int_{\mathscr{A}}\frac{\sin\alpha}{4\pi}\d\alpha\d\beta.
\end{eqnarray*}
Thus $\d\nu_{\H}=\frac{\sin\alpha}{4\pi}\d\alpha\d\beta$ is the
uniform measure on $S^2$.
\qed
\section{Polynomial approximation of kernel function}
\label{Trunc}
In this section we describe the construction of approximate kernel function.
We aim to use a polynomial of nine elements of $\bar{P}$
$$
\bar{P}=(\bm{m}_i\cdot\bm{m'}_j)_{3\times 3}\triangleq(p_{ij})_{3\times 3}
$$
as the approximation.
This form of approximation reduces the Euler-Lagrange equation (\ref{EL}) to
a few self-consistent equations about moments. Suppose that $G$ has a term
$$
C(\bm{m}_{\sigma_1}\cdot\bm{m}_{\sigma'_1})\ldots
(\bm{m}_{\sigma_n}\cdot\bm{m}_{\sigma'_n})
=C(\bm{m}_{\sigma_1}\ldots\bm{m}_{\sigma_n})
:(\bm{m}_{\sigma'_1}\ldots\bm{m}_{\sigma'_n}).
$$
It corresponds to a term
\begin{eqnarray*}
&&C\int\d\nu(P)\d\nu(P')f(P)(\bm{m}_{\sigma_1}\ldots\bm{m}_{\sigma_n})
:(\bm{m}_{\sigma'_1}\ldots\bm{m}_{\sigma'_n})f(P')\\
&=&C\left(\int\d\nu(P)f(P)\bm{m}_{\sigma_1}\ldots\bm{m}_{\sigma_n}\right):
\left(\int\d\nu(P')(\bm{m}_{\sigma'_1}\ldots\bm{m}_{\sigma'_n})f(P')\right)\\
&=&C\left<\bm{m}_{\sigma_1}\ldots\bm{m}_{\sigma_n}\right>:
\left<\bm{m}_{\sigma'_1}\ldots\bm{m}_{\sigma'_n}\right>
\end{eqnarray*}
in the free energy. And $W(P)$ must be of the form
$$
W(P)=\sum C\left<\bm{m}_{\sigma_1}\ldots\bm{m}_{\sigma_n}\right>
:\bm{m'}_{\sigma'_1}\ldots\bm{m'}_{\sigma'_n}.
$$
Denote by $\mathcal{M}$ the set of moments that appear in the free energy.
This formula indicates that $W(P)$ is determined by the value of moments in
$\mathcal{M}$. On the other hand, the moments can be calculated with (\ref{Boltz}) by
\begin{equation}\label{SelfC}
\left<\bm{m}_{\sigma_1}\ldots\bm{m}_{\sigma_n}\right>=
C\int\d\nu(P')\bm{m'}_{\sigma_1}\ldots\bm{m'}_{\sigma_n}\exp\big(-W(P')\big).
\end{equation}
Notice that the right side is a function of the moments.
Applying this formula to all the moments in $\mathcal{M}$, we obtain a group of
self-consistent equations about these moments. So we only need to solve
the moments in $\mathcal{M}$ instead of $f$.
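To make the structure of these self-consistent equations concrete, the sketch below iterates the simplest single-moment case, anticipating the Maier-Saupe form derived below: for a uniaxial $\left<\bm{m}_1\bm{m}_1\right>$, $W$ reduces (up to an additive constant) to $\kappa S\cos^2\alpha$ with $S=(3\left<\cos^2\alpha\right>-1)/2$, where the prefactor $\kappa$ absorbs $c$, $c_2$ and our normalization conventions and is therefore illustrative rather than canonical:
\begin{verbatim}
import numpy as np

def order_parameter(kappa, n=4000, iters=400):
    # fixed-point iteration for S = (3<cos^2 a> - 1)/2 under
    # f(a) ~ exp(-kappa * S * cos(a)^2); kappa absorbs all prefactors
    dx = 2.0 / n
    x = -1.0 + dx * (np.arange(n) + 0.5)   # midpoint grid in x = cos(a)
    b = 0.9                                # ordered initial guess
    for _ in range(iters):
        S = 0.5 * (3.0 * b - 1.0)
        w = np.exp(-kappa * S * x**2)
        b = float(np.sum(x**2 * w) / np.sum(w))  # dx cancels in the ratio
    return 0.5 * (3.0 * b - 1.0)

print(order_parameter(kappa=-10.0))  # ordered (nematic) branch, S > 0
print(order_parameter(kappa=-1.0))   # relaxes to the isotropic branch, S -> 0
\end{verbatim}
The iteration illustrates how the moment equations replace solving for $f$ directly: for sufficiently negative $\kappa$ it settles on an ordered branch $S>0$, while for weak coupling it relaxes to the isotropic solution.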
Next we will deduce the form of the polynomial approximation of the kernel function
from its symmetries.
The Maier-Saupe potential will be derived naturally from this analysis.
The nine elements of $\bar{P}$ are not independent. The third column is
uniquely determined by the other two columns:
$$
p_{i3}=p_{i+1,1}p_{i+2,2}-p_{i+1,2}p_{i+2,1},\ i=1,2,3.
$$
Therefore $G$ can be expressed by a function of six variables
$$
p_{ij},\quad i=1,2,3,\ j=1,2.
$$
\begin{propos}\label{SymP2}
If a molecule has reflection symmetry, and the symmetry plane is perpendicular
to $\bm{m}_3$,
then $G$ depends only on the following four elements of $\bar{P}$:
$$
p_{ij},\quad i,j=1,2.
$$
\end{propos}
\begin{proof}
When $p_{ij}$ $(i,j=1,2)$ are given properly, $(p_{31},p_{32})$ can take two
possible pairs of values: $(y_1,y_2)$ and $(-y_1,-y_2)$, which satisfy
\begin{eqnarray*}
&&p_{11}^2+p_{21}^2+y_1^2=p_{12}^2+p_{22}^2+y_2^2=1, \\
&&p_{11}p_{12}+p_{21}p_{22}+y_1y_2=0.
\end{eqnarray*}
Using Theorem \ref{SymP}, we have
$$
G(J\bar{P}J)=G(\bar{P})
$$
with
\begin{equation}\label{m3}
J=\left(
\begin{array}{ccc}
-1&\ 0&\ 0\\
0&\ -1&\ 0\\
0&\ 0&\ 1
\end{array}
\right).
\end{equation}
Note that $J\bar{P}J$ leaves $p_{ij}$ $(i,j=1,2)$ unchanged,
but changes the sign of $p_{31},p_{32}$; thus
$$
G(p_{11},p_{21},y_1,p_{12},p_{22},y_2)=G(p_{11},p_{21},-y_1,p_{12},p_{22},-y_2)
=G(p_{11},p_{21},p_{12},p_{22}).
$$
\end{proof}
Next we examine the properties of $G$ when $\H$ contains a rotation of an angle
$\theta$ around $\bm{m}_1$. The matrix that represents this rotation is
\begin{equation}
J_{\theta}=\left(
\begin{array}{ccc}
1&\ 0&\ 0\\
0&\ \cos\theta&\ -\sin\theta\\
0&\ \sin\theta&\ \cos\theta
\end{array}
\right).
\end{equation}
Direct computation gives
\begin{equation}\label{Jtheta}
J_{\theta}\bar{P}(\alpha,\beta,\gamma)=\bar{P}(\alpha,\beta+\theta,\gamma),\quad
\bar{P}(\alpha,\beta,\gamma)J_{\theta}=\bar{P}(\alpha,\beta,\gamma+\theta)
\end{equation}
where $\bar{P}(\alpha,\beta,\gamma)$ is the representation of $\bar{P}$
by Euler angles.
\begin{propos}\label{Rot}
If $J_{\theta}\in\H$, then
\begin{equation}\label{Rot0}
G\big(\bar{P}(\alpha,\beta,\gamma)\big)=G\big(\bar{P}(\alpha,\beta+\theta,\gamma)\big)
=G\big(\bar{P}(\alpha,\beta,\gamma+\theta)\big).
\end{equation}
\end{propos}
\proof
Using Theorem \ref{TInv}, we have
\begin{eqnarray*}
G(\bar{P})=G(\bar{P}J_{\theta}),\qquad
G(\bar{P})=G(J_{\theta}\bar{P}).
\end{eqnarray*}
Along with (\ref{Jtheta}), we obtain (\ref{Rot0}).
\qed
In the following theorem, $\bm{m}_1$ always coincides with the rotational axis.
\begin{thm}\label{SymB}
\begin{enumerate}
\item
For a molecule with $C_{2v}$ symmetry, $G$ is a function of $p_{11},p_{12},p_{21},p_{22}$, with
\begin{equation}\label{Sym_m1}
G(p_{11},p_{12},p_{21},p_{22})=G(p_{11},-p_{12},p_{21},-p_{22})
=G(p_{11},p_{12},-p_{21},-p_{22}).
\end{equation}
\item
For a molecule with $C_{\infty}$ symmetry, $G$ is a function of
$p_{11}=\bm{m}_1\cdot\bm{m'}_1$. If the molecule has $D_{\infty h}$ symmetry,
$G$ is a function of $|p_{11}|$.
\end{enumerate}
\end{thm}
\proof
\begin{enumerate}
\item
Theorem \ref{TInv} gives $G(\bar{P})=G(J_{\pi}\bar{P})=G(\bar{P}J_{\pi})$.
By Proposition \ref{SymP2}, (\ref{Sym_m1}) holds.
\item
Axial symmetry means that Proposition \ref{Rot} is valid for arbitrary
$\theta$. Therefore
\begin{equation}\label{AxiSym1}
G\big(\bar{P}(\alpha,\beta,\gamma)\big)=G\big(\bar{P}(\alpha,0,0)\big)
=G(\cos\alpha)=G(p_{11}).
\end{equation}
As in Theorem \ref{SymP}, we suppose that the plane contains $\hat{O}$.
Note that the rotation of $\pi$ around $\bm{m}_3$, represented by $J$ defined
in (\ref{m3}), is contained in $D_{\infty h}$. By Theorem \ref{TInv},
$G(\bar{P}J)=G(\bar{P})$. We deduce from (\ref{AxiSym1}) that
$$
G(\bar{P})=G(p_{11})=G(-p_{11})=G(|p_{11}|).
$$
\end{enumerate}
\qed
With the above discussion, we are able to construct polynomial approximations
for molecules with different symmetries. We start from the approximate
kernel function of molecules with $D_{\infty h}$ symmetry.
In Theorem \ref{SymB} we have proven that
$G=G(|\bm{m}_1\cdot\bm{m'}_1|)$. Its approximation should be a polynomial of
$\bm{m}_1\cdot\bm{m'}_1$ without odd-degree terms. Therefore it is at least
quadratic, which coincides with the form of Maier-Saupe potential:
$$
G=c_0+c_2(\bm{m}_1\cdot\bm{m'}_1)^2.
$$
The above form indicates that $\mathcal{M}=\big\{\left<\bm{m}_1\bm{m}_1\right>\big\}$.
When a molecule has only $C_{\infty}$ symmetry, the odd-degree terms in $p_{11}$
no longer vanish. The quadratic approximation becomes
\begin{equation}\label{PolRod}
G=c_0+c_1(\bm{m}_1\cdot\bm{m'}_1)+c_2(\bm{m}_1\cdot\bm{m'}_1)^2,
\end{equation}
which is discussed in \cite{Dipol}. When this kernel is used,
$\mathcal{M}=\big\{\left<\bm{m}_1\right>,\ \left<\bm{m}_1\bm{m}_1\right>\big\}$.
Now we turn to the approximations of the kernel function for molecules with
$C_{2v}$ symmetry, including bent-core molecules and spherotriangles.
By Proposition \ref{SymP2}, an approximation of $G$ is a polynomial of four
variables $p_{11},p_{12},p_{21},p_{22}$. Then by Proposition \ref{RelP}, it is
symmetric with respect to $p_{12}$ and $p_{21}$, which means
$$
G=G(p_{11},p_{22},p_{12}+p_{21},p_{12}p_{21}).
$$
Using (\ref{Sym_m1}), we are able to determine the form of the polynomial.
The quadratic approximation is written as
\begin{equation}\label{QuaApp}
G=c_0+c_1p_{11}+c_2p_{11}^2+c_3p_{22}^2+c_4(p_{12}^2+p_{21}^2).
\end{equation}
The cubic approximation is written as
\begin{equation}\label{CubApp}
G=c_0+c_1p_{11}+c_2p_{11}^2+c_3p_{22}^2+c_4(p_{12}^2+p_{21}^2)
+c_5p_{11}^3+c_6p_{11}p_{22}^2+c_7p_{11}(p_{12}^2+p_{21}^2)+c_8p_{12}p_{21}p_{22}.
\end{equation}
For quadratic approximation,
$$
\mathcal{M}=\big\{\left<\bm{m}_1\right>,
\left<\bm{m}_1\bm{m}_1\right>, \left<\bm{m}_2\bm{m}_2\right>\big\};
$$
for cubic approximation,
$$
\mathcal{M}=\big\{\left<\bm{m}_1\right>,
\left<\bm{m}_1\bm{m}_1\right>, \left<\bm{m}_2\bm{m}_2\right>,
\left<\bm{m}_1\bm{m}_1\bm{m}_1\right>,\left<\bm{m}_1\bm{m}_2\bm{m}_2\right>\big\}.
$$
From the above discussion we know that the form of the polynomial approximation
is determined by molecular symmetries.
The coefficients $c_i$ can be calculated by projecting $G$ to the space
spanned by all the polynomials of the given form. If the approximation has
the form
$$
\sum_i c_iq_i(\bar{P}),
$$
then the coefficients $c_i$ are determined by
$$
\sum_i \left[\int_{SO_3}\d \nu(\bar{P})q_i(\bar{P})q_j(\bar{P})\right]c_i
=\int_{SO_3}\d \nu(\bar{P})G(\bar{P};\Theta)q_j(\bar{P}).
$$
In the above, $\Theta$ is a set that consists of temperature and a group of
molecular parameters. The formula reveals that these coefficients are functions
of $\Theta$. Generally speaking, as temperature is included in $\Theta$, the
approximate $G$ is able to describe both lyotropic and thermotropic liquid
crystals.
The projection of the Onsager potential onto span$\{1,p_{11}^2\}$ gives
\begin{equation}
c_2=-\frac{15\pi}{32}cL^2D.
\end{equation}
As a constant difference in $G$ does not affect the solution, $c_0$ is ignored.
It is easy to see that $c_2$ is proportional to a single effective parameter $cL^2D$.
The projection of the excluded-volume potential of spherocuboids is derived by
R. Rosso and E. G. Virga in \cite{quadproj}. In the Appendix, we discuss the
projection of the excluded-volume potential of isosceles spherotriangles onto the
space of quadratic approximations. Suppose that the apex angle is $\theta$ and
the lateral sides have length $L/2$. The results are
\begin{eqnarray}
c_2&=&-\frac{15}{64}cL^3\sin\theta\cos^2\frac{\theta}{2}
-\frac{15\pi}{128}cL^2D\cos^4\frac{\theta}{2},\label{c_2}\\
c_3&=&-\frac{15}{64}cL^3\sin\theta\sin\frac{\theta}{2}(1+\sin\frac{\theta}{2})
-\frac{15\pi}{128}cL^2D\sin^2\frac{\theta}{2}(1+\sin\frac{\theta}{2})^2, \label{c_3}\\
c_4&=&-\frac{15}{128}cL^3\sin\theta(1+\sin\frac{\theta}{2})
-\frac{15\pi}{128}cL^2D\cos^2\frac{\theta}{2}\sin\frac{\theta}{2}
(1+\sin\frac{\theta}{2}). \label{c_4}
\end{eqnarray}
And $c_1$ is proportional to $cL^2D$ with
$$
c_1=\frac{3}{8}cL^2DK(\theta),
$$
where $K(\theta)$ is a function of $\theta$ defined in (\ref{Ktheta}).
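For orientation, the coefficients (\ref{c_2})--(\ref{c_4}) are easily evaluated numerically; the sketch below tabulates them as functions of the molecular parameters (the sample values of $\theta,L,D,c$ are arbitrary):
\begin{verbatim}
import numpy as np

def quad_coeffs(theta, L, D, c):
    # c2, c3, c4 of the quadratic kernel, from (c_2)-(c_4)
    s, h, k = np.sin(theta), np.sin(theta / 2), np.cos(theta / 2)
    c2 = -15/64*c*L**3*s*k**2 - 15*np.pi/128*c*L**2*D*k**4
    c3 = (-15/64*c*L**3*s*h*(1 + h)
          - 15*np.pi/128*c*L**2*D*h**2*(1 + h)**2)
    c4 = (-15/128*c*L**3*s*(1 + h)
          - 15*np.pi/128*c*L**2*D*k**2*h*(1 + h))
    return c2, c3, c4

print(quad_coeffs(theta=2.0, L=1.0, D=0.1, c=5.0))
\end{verbatim}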
\section{Further analysis and the choice of order parameters}\label{Ord}
In the previous section we selected some moments and reduced the density
functional theory to a group of equations about them. Those equations usually
imply some properties of the moments, which can help us to choose independent
components of the moments as order parameters. Here we try to extract these
properties. Some of them depend on the values of the coefficients. As the
coefficients are determined by molecular parameters and temperature,
these properties reveal the impact of those quantities.
When $G$ takes the Maier-Saupe potential, the only moment in $\mathcal{M}$ is
$\left<\bm{m}_1\bm{m}_1\right>$. It can be diagonalized by selecting
axes along its eigenvectors. Its trace equals $1$, leaving only two degrees
of freedom. These two degrees of freedom can be further reduced to
one by the proof of the uniaxial property\cite{AxiSymMS,AxiSym2,AxiSym3}.
\begin{thm}[Axial symmetry of the solution with Maier-Saupe potential]
If $G$ takes the Maier-Saupe potential (\ref{MS}), every solution of (\ref{EL}) is
axially symmetric:
\begin{equation}\label{Uniaxi}
f=f(\bm{m}_1\cdot\bm{n})=C\exp(-\eta(\bm{m}_1\cdot\bm{n})^2).
\end{equation}
\end{thm}
When $\mathcal{M}$ contains more than one moment, there are usually some relations between
them. In \cite{Dipol} the following conclusion is shown,
which reduces the number of order parameters for polar rods to $3$.
\begin{thm}\label{m1_0_org}
When $G$ takes (\ref{PolRod}), we have
\begin{enumerate}
\item $\left<\bm{m}_1\right>$ is parallel to one of the eigenvectors of
$\left<\bm{m}_1\bm{m}_1\right>$.
\item If $-c_1\le 1$ in (\ref{PolRod}), $\left<\bm{m}_1\right>=0$.
\end{enumerate}
\end{thm}
From now on we will focus on the kernel (\ref{QuaApp}):
$$
G=c_1p_{11}+c_2p_{11}^2+c_3p_{22}^2+c_4(p_{12}^2+p_{21}^2)
$$
where $c_0$ is set to zero, for it does not affect the solutions.
$W(P)$ is written as
\begin{eqnarray*}
W(P)&=&c_1\left<\bm{m}_1\right>\cdot\bm{m}_1
+\big(2c_2\left<\bm{m}_1\bm{m}_1\right>+c_4\left<\bm{m}_2\bm{m}_2\right>\big)
:\bm{m}_1\bm{m}_1\\
&&+\big(2c_3\left<\bm{m}_2\bm{m}_2\right>+c_4\left<\bm{m}_1\bm{m}_1\right>\big)
:\bm{m}_2\bm{m}_2\\
&=&c_1\left<\bm{m}_1\right>\cdot\bm{m}_1+W_1(P).
\end{eqnarray*}
Write down the components of $\bm{m}_i$ as
$$
(\bm{m}_1,\bm{m}_2,\bm{m}_3)=(\bm{e}_1,\bm{e}_2,\bm{e}_3)\left(
\begin{array}{ccc}
m_{11}&m_{21}&m_{31}\\
m_{12}&m_{22}&m_{32}\\
m_{13}&m_{23}&m_{33}
\end{array}
\right).
$$
Recalling the equality (\ref{RotP}), we know that $m_{ij}$ are the elements of
$P$. The next theorem contains a direct extension of the second part of Theorem
\ref{m1_0_org}, and discusses the relationship between the axes of the three moments
$\left<\bm{m}_{1}\right>,\left<\bm{m}_{1}\bm{m}_{1}\right>,
\left<\bm{m}_{2}\bm{m}_{2}\right>$.
\begin{thm}\label{MoB}
\begin{enumerate}
\item If $-c_1\le 1$, $\left<\bm{m}_1\right>=0$.
\item If $\left<\bm{m}_1\bm{m}_1\right>$ and $\left<\bm{m}_2\bm{m}_2\right>$
can be diagonalized simultaneously, $\left<\bm{m}_1\right>$ is parallel to one of
the eigenvectors of $\left<\bm{m}_1\bm{m}_1\right>$.
\item If $c_1\ge -1$ and $c_4^2=c_2c_3$, $\left<\bm{m}_1\bm{m}_1\right>$ and
$\left<\bm{m}_2\bm{m}_2\right>$ can be diagonalized simultaneously by the axes
of $\left<d_1\bm{m}_1\bm{m}_1+d_2\bm{m}_2\bm{m}_2\right>$, where
$$
c_2=\pm d_1^2,\ c_3=\pm d_2^2,\ c_4=d_1d_2.
$$
\item If $\left<d_1\bm{m}_1\bm{m}_1+d_2\bm{m}_2\bm{m}_2\right>$ is uniaxial, and
$\left<\bm{m}_1\right>$ is parallel to the axis, then both
$\left<\bm{m}_1\bm{m}_1\right>$ and $\left<\bm{m}_2\bm{m}_2\right>$ are
uniaxial.
\end{enumerate}
\end{thm}
\proof
\begin{enumerate}
\item
Set $J=\mbox{diag}(-1,1,-1)$. It is easy to verify that
$$
W_1(PJ)=W_1(P).
$$
The self-consistent equation of $\bm{m}_1$ yields
\begin{eqnarray*}
\left<\bm{m}_{1}\right>
&=&\frac{2\int \d\nu \bm{m}_{1}
\exp\big(-W_1(P)-c_1\left<\bm{m}_{1}\right>\cdot\bm{m}_{1}\big)}
{2\int\d\nu\exp\big(-W_1(P)-c_1\left<\bm{m}_{1}\right>\cdot\bm{m}_{1}\big)}\\
&=&\frac{\int \d\nu \bm{m}_{1}\big[
\exp\big(-W_1(P)-c_1\left<\bm{m}_{1}\right>\cdot\bm{m}_{1}\big)
-\exp\big(-W_1(JP)+c_1\left<\bm{m}_{1}\right>\cdot\bm{m}_{1}\big)\big]}
{\int\d\nu\big{[}\exp(-W_1(P)-c_1\left<\bm{m}_{1}\right>\cdot\bm{m}_{1})
+\exp(-W_1(JP)+c_1\left<\bm{m}_{1}\right>\cdot\bm{m}_{1})\big{]}}\\
&=&\frac{\int \d\nu \bm{m}_{1}\exp\big(-W_1(P)\big)
\sinh\big(-c_1\left<\bm{m}_{1}\right>\cdot\bm{m}_{1}\big)}
{\int \d\nu \exp(-W_1(P))
\cosh\big(-c_1\left<\bm{m}_{1}\right>\cdot\bm{m}_{1}\big)}.
\end{eqnarray*}
Therefore
\begin{equation}\label{m1m}
|\left<\bm{m}_{1}\right>|^2
=\frac{\int \d\nu \exp(-W_1(P))\left<\bm{m}_{1}\right>\cdot\bm{m}_{1}
\sinh\big(-c_1\left<\bm{m}_{1}\right>\cdot\bm{m}_{1}\big)}
{\int \d\nu \exp(-W_1(P))
\cosh\big(-c_1\left<\bm{m}_{1}\right>\cdot\bm{m}_{1}\big)}.
\end{equation}
If $-c_1\le 0$, then
$$
\left<\bm{m}_{1}\right>\cdot\bm{m}_{1}
\sinh(-c_1\left<\bm{m}_{1}\right>\cdot\bm{m}_{1})\le 0,
$$
which yields $|\left<\bm{m}_{1}\right>|^2\le 0$.
If $0<-c_1\le 1$, using $x\tanh(x)<x^2$ for $x\ne 0$, we get
\begin{eqnarray*}
&&\left<\bm{m}_{1}\right>\cdot\bm{m}_{1}
\sinh\big(-c_1\left<\bm{m}_{1}\right>\cdot\bm{m}_{1}\big)\\
&=& -c_1\left<\bm{m}_{1}\right>\cdot\bm{m}_{1}
\tanh\big(-c_1\left<\bm{m}_{1}\right>\cdot\bm{m}_{1}\big)
\cosh\big(-c_1\left<\bm{m}_{1}\right>\cdot\bm{m}_{1}\big)\\
&<&-c_1\big(\left<\bm{m}_{1}\right>\cdot\bm{m}_{1}\big)^2
\cosh\big(-c_1\left<\bm{m}_{1}\right>\cdot\bm{m}_{1}\big)\\
&\le& |\left<\bm{m}_{1}\right>|^2|\bm{m}_{1}|^2
\cosh\big(-c_1\left<\bm{m}_{1}\right>\cdot\bm{m}_{1}\big)\\
&=&|\left<\bm{m}_{1}\right>|^2
\cosh\big(-c_1\left<\bm{m}_{1}\right>\cdot\bm{m}_{1}\big).
\end{eqnarray*}
If $\left<\bm{m}_1\right>\ne 0$,
we substitute the above inequality into (\ref{m1m}) and get
$|\left<\bm{m}_{1}\right>|^2<|\left<\bm{m}_{1}\right>|^2$, which is a contradiction.
\item
Select coordinate axes that diagonalize $\left<\bm{m}_1\bm{m}_1\right>$ and
$\left<\bm{m}_2\bm{m}_2\right>$. Now $W_1(P)$ is of the form
$$
W_1(P)=\sum_{i=1,2,j=1,2,3} c_{ij}m_{ij}^2.
$$
Set
\begin{equation}\label{Js}
J_1=\mbox{diag}(-1,1,1),\ J_2=\mbox{diag}(1,-1,1),\ J_3=\mbox{diag}(1,1,-1),
\end{equation}
the form of $W_1(P)$ indicates that
\begin{equation}\label{Ws}
W_1(P)=W_1(J_1PJ_3)=W_1(J_2PJ_3)=W_1(J_1J_2P).
\end{equation}
If $\left<\bm{m}_1\right>$ is not parallel to any one of the axes, at least
two of its components are nonzero.
Suppose $r_1=\left<m_{11}\right>\ne 0$, $r_2=\left<m_{12}\right>\ne 0$ and
denote $r_3=\left<m_{13}\right>$.
By part 1 of the current theorem, $-c_1>1$.
Thus $x\sinh(-c_1x)>0$ for $x\ne 0$.
Using the self-consistent equation of $\left<m_{11}m_{12}\right>$, we get
\begin{eqnarray*}
&&4r_1r_2\left<m_{11}m_{12}\right>\\
&=&\frac{4}{Z}\int \d\nu
r_1r_2m_{11}m_{12}\exp\big{(}-W_1(P)-c_1(r_1m_{11}+r_2m_{12}+r_3m_{13})\big{)}\\
&=&\frac{1}{Z}\int \d\nu r_1r_2m_{11}m_{12}\exp(-c_1r_3m_{13})
\Big{[}\exp\big(-W_1(P)-c_1(r_1m_{11}+r_2m_{12})\big)\\
&&-\exp\big(-W_1(J_1PJ_3)-c_1(-r_1m_{11}+r_2m_{12})\big)\\
&&-\exp\big(-W_1(J_2PJ_3)-c_1(r_1m_{11}-r_2m_{12})\big)\\
&&+\exp\big(-W_1(J_1J_2P)-c_1(-r_1m_{11}-r_2m_{12})\big)\Big{]}\\
&=&\frac{1}{Z}\int \d\nu r_1r_2m_{11}m_{12}\exp(-W_1(P)-c_1r_3m_{13})
\sinh (-c_1r_1m_{11})\sinh(-c_1r_2m_{12})\\
&>&0.
\end{eqnarray*}
This inequality violates the diagonalization of $\left<\bm{m}_1\bm{m}_1\right>$.
\item
From $c_4^2=c_2c_3$, we can write
$$
c_2p_{11}^2+c_3p_{22}^2+c_4(p_{12}^2+p_{21}^2)=\pm
(d_1\bm{m}_1\bm{m}_1+d_2\bm{m}_2\bm{m}_2):(d_1\bm{m'}_1\bm{m'}_1+d_2\bm{m'}_2\bm{m'}_2).
$$
Without loss of generality, the sign on the right side is assumed positive.
Because $c_1\ge -1$, part 1 gives $\left<\bm{m}_1\right>=0$. Then $W$ reduces to
\begin{eqnarray*}
W(P)=W_1(P)=\left<d_1\bm{m}_1\bm{m}_1+d_2\bm{m}_2\bm{m}_2\right>:(d_1\bm{m}_1\bm{m}_1+d_2\bm{m}_2\bm{m}_2).
\end{eqnarray*}
This would enable us to select coordinate axes according to the axes of
$\left<d_1\bm{m}_1\bm{m}_1\right.+\left.d_2\bm{m}_2\bm{m}_2\right>$.
We show that
$\left<\bm{m}_1\bm{m}_1\right>$ and $\left<\bm{m}_2\bm{m}_2\right>$ are
diagonalized as well. In other words, we need to show that the off-diagonal
elements are zero. Let $J_1,J_2,J_3$ be defined as in (\ref{Js}).
Equation (\ref{Ws}) still holds for $W_1$. Therefore
\begin{eqnarray*}
&&4\left<m_{11}m_{12}\right>\\
&=&\frac{4}{Z}\int \d\nu m_{11}m_{12}\exp(-W_1(P))\\
&=&\frac{1}{Z}\int \d\nu m_{11}m_{12}
\Big[\exp\big(-W_1(P)\big)-\exp\big(-W_1(J_1PJ_3)\big)\\
&&-\exp\big(-W_1(J_2PJ_3)\big)+\exp\big(-W_1(J_1J_2P)\big)\Big]\\
&=&0.
\end{eqnarray*}
The vanishing of the other off-diagonal elements can be obtained similarly.
\item
First we choose axes that diagonalize $\left<d_1\bm{m}_1\bm{m}_1
+d_2\bm{m}_2\bm{m}_2\right>$ with diagonal elements $b_1,b_2,b_3$.
The uniaxiality requires that two of them are equal. Assume $b_1=b_2$,
then $\left<m_{11}\right>=\left<m_{12}\right>=0$.
Thereby $W$ is simplified to
\begin{eqnarray*}
W(P)&=&d_1(b_1(m_{11}^2+m_{12}^2)+b_3m_{13}^2)+d_2(b_1(m_{21}^2+m_{22}^2)+b_3m_{23}^2)+r_{13}m_{13}\\
&=&(d_1+d_2)b_1+d_1(b_3-b_1)m_{13}^2+d_2(b_3-b_1)m_{23}^2+r_{13}m_{13}
\end{eqnarray*}
where $r_{13}=\left<m_{13}\right>$. Set
$$
J=\left(
\begin{array}{ccc}
0&\ -1&\ 0\\
1&\ 0&\ 0\\
0&\ 0&\ 1
\end{array}
\right).
$$
Using $W(P)=W(JP)$, we get
\begin{eqnarray*}
\left<m_{12}^2\right>&=&\frac{1}{Z}\int \d\nu m_{12}^2\exp\big(-W(P)\big)\\
&=&\frac{1}{Z}\int \d\nu m_{11}^2\exp\big(-W(JP)\big)\\
&=&\frac{1}{Z}\int \d\nu m_{11}^2\exp\big(-W(P)\big)\\
&=&\left<m_{11}^2\right>.
\end{eqnarray*}
\end{enumerate}
\qed
We tend to believe that $\left<\bm{m}_1\bm{m}_1\right>$ and
$\left<\bm{m}_2\bm{m}_2\right>$ can be diagonalized simultaneously.
The results of Theorem \ref{MoB} would reduce the degrees of freedom
of order parameters of bent-core molecules to $5$. We choose coordinate axes as
eigenvectors of $\left<\bm{m}_1\bm{m}_1\right>$ as well as those of
$\left<\bm{m}_2\bm{m}_2\right>$, then $\left<\bm{m}_1\right>$ is parallel to one
of the eigenvectors of $\left<\bm{m}_1\bm{m}_1\right>$. Because the traces of
$\left<\bm{m}_1\bm{m}_1\right>$ and $\left<\bm{m}_2\bm{m}_2\right>$ equal
$1$, each of the two second moments contributes two degrees of freedom. Finally
$\left<\bm{m}_1\right>$ contributes one, making five in total.
Finally, we should point out that the set of order parameters should be decided
by results of experiments and simulations so as to be able to distinguish
different phases. The same criterion should also determine where to
truncate the polynomial approximation of $G$.
Rod-like molecules exhibit only uniaxial nematics.
As we have described, the Maier-Saupe potential is a polynomial approximation of
$G$ truncated at second order. With a thorough analysis, the number of order
parameters is reduced to one. Therefore the Maier-Saupe potential proves to be the
most concise model of rod-like molecules that covers the experimental results.
Up to now, the spatially homogeneous phases of bent-core molecules have been restricted to
uniaxial or biaxial nematics, without any observation of polar order.
This seems to indicate that approximating $G$ with quadratic
polynomials is sufficient, which contradicts what is proposed in \cite{OrdPar}. It
will also be interesting to see whether any phases with polar order appear.
\section{Conclusion and outlook}\label{Con}
A generic modelling procedure is proposed for rigid molecules of arbitrary
shape. The modelling of kernel function incorporates pairwise interaction.
We show that the symmetries of a molecule determine the reduced configuration
space and the form of the polynomial approximations of $G$, with coefficients
depending on temperature and molecular parameters.
An approximate kernel is deduced for molecules with $C_{2v}$ symmetry.
By approximating $G$ with polynomial, the system is reduced to a
group of equations about moments of body-fixed axes. Some properties of these
moments are studied for molecules with $C_{2v}$ symmetry,
and the number of order parameters is predicted for bent-core molecules.
The prediction needs to be verified by results of simulations and comparison to
experiments. Moreover, it remains unknown whether there are some general
relationships between the moments.
A clear understanding of them would help us to find out a
minimal complete set of order parameters.
\section{Appendix}
\subsection{The excluded-volume potential of spherotriangles}
\subsubsection{The calculation of excluded volume}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth,keepaspectratio]{tri.pdf}
\caption{Two possible geometries of $K$}\label{tri}
\end{figure}
Here we calculate the excluded-volume potential of two
spherotriangles $T_1+B_{D/2}$ and $T_2+B_{D/2}$.
The excluded region can be represented by $K+B_D$ where $K=T_1-T_2$.
By (\ref{Steiner}), we need to calculate $V_3,\ V_2,\ V_1$ for $K$.
Denote the vertices of $T_1$ as $OAB$ that lie in plane $\pi$,
and those of $-T_2$ as $O'A'B'$ that lie in plane $\pi'$.
$K$ is a polytope, for it is the convex hull of nine points
$$
\{O,A,B\}+\{O',A',B'\}.
$$
The edges of two triangles are denoted as
$$
\overrightarrow{AO}=\bm{a},\ \overrightarrow{OB}=\bm{b},\ \overrightarrow{BA}=\bm{c},\
\overrightarrow{A'O'}=\bm{a'},\ \overrightarrow{O'B'}=\bm{b'},\ \overrightarrow{B'A'}=\bm{c'}.
$$
If $\pi$ and $\pi'$ are not parallel, we can label the vertices such
that the plane $\pi+O'-O$ separates $A'$ and $B'$, and the plane $\pi'+O-O'$
separates $A$ and $B$, namely
\begin{eqnarray*}
(\bm{m}_3\cdot\bm{a'})(\bm{m}_3\cdot\bm{b'})\ge 0,\\
(\bm{m'}_3\cdot\bm{a})(\bm{m'}_3\cdot\bm{b})\ge 0.
\end{eqnarray*}
If the intersection of two triangles $T_1$ and $T_2+O-O'$ is not empty, which
indicates
$$
(\bm{m}_3\cdot\bm{c'})(\bm{m'}_3\cdot\bm{c})<0,
$$
$K$ is drawn in the left part of Fig.\ref{tri};
and if it is empty, which indicates
$$
(\bm{m}_3\cdot\bm{c'})(\bm{m'}_3\cdot\bm{c})>0,
$$
$K$ is drawn in the right part of Fig.\ref{tri}; from now on we may assume
that $O=O'$. The notation $P_{AA'}$ denotes the point located at
$O+\overrightarrow{OA}+\overrightarrow{O'A'}$, etc.
When $\pi$ and $\pi'$ are parallel, we can label the vertices such that
$T_2$ intersects with $\angle AOB$ or its vertical angle.
First we calculate $V_3(K)$. For the case on the left part of Fig.\ref{tri},
$K$ can be divided into the prisms
$$
AP_{AA'}P_{AB'}-BP_{BA'}P_{BB'},\ A'P_{AA'}P_{BA'}-OAB,\ OAB-B'P_{AB'}P_{BB'},
$$
or
$$
A'P_{AA'}P_{BA'}-B'P_{AB'}P_{BB'},\ AP_{AA'}P_{AB'}-OA'B',\ OA'B'-BP_{BA'}P_{BB'}.
$$
Thus
$$
V_3=|\bm{a}\times\bm{b}\cdot\bm{a'}|+|\bm{a}\times\bm{b}\cdot\bm{b'}|
+|\bm{a'}\times\bm{b'}\cdot\bm{c}|
=|\bm{a}\times\bm{b}\cdot\bm{c'}|+|\bm{a'}\times\bm{b'}\cdot\bm{a}|
+|\bm{a'}\times\bm{b'}\cdot\bm{b}|.
$$
For the case on the right, $K$ can be divided into the prisms
$$
AP_{AA'}P_{AB'}-BP_{BA'}P_{BB'},\ A'P_{AA'}P_{BA'}-B'P_{AB'}P_{BB'},
$$
or
$$
AP_{AA'}P_{AB'}-OA'B',\ A'P_{AA'}P_{BA'}-OAB,\ OAB-B'P_{AB'}P_{BB'},\ OA'B'-BP_{BA'}P_{BB'}.
$$
Thus
$$
V_3=|\bm{a}\times\bm{b}\cdot\bm{a'}|+|\bm{a}\times\bm{b}\cdot\bm{b'}|
+|\bm{a'}\times\bm{b'}\cdot\bm{a}|+|\bm{a'}\times\bm{b'}\cdot\bm{b}|
=|\bm{a}\times\bm{b}\cdot\bm{c'}|+|\bm{a'}\times\bm{b'}\cdot\bm{c}|.
$$
For both cases, we have
\begin{equation}
V_3(K)=
\frac{1}{2}\Big(|\bm{a}\times\bm{b}\cdot\bm{a'}|+|\bm{a}\times\bm{b}\cdot\bm{b'}|
+|\bm{a}\times\bm{b}\cdot\bm{c'}|+|\bm{a'}\times\bm{b'}\cdot\bm{a}|
+|\bm{a'}\times\bm{b'}\cdot\bm{b}|+|\bm{a'}\times\bm{b'}\cdot\bm{c}|\Big).
\end{equation}
Next we calculate $V_1(K)$. Each edge of $K$ is parallel to one of the six edges
of $T_1$ and $T_2$. As an example, we describe the contribution to $V_1$ of the
edges parallel to $\bm{a}$. Since the relevant faces contain one of those edges, their
outward normal vectors lie in a plane perpendicular to $\bm{a}$.
For the case on the left, there are three edges parallel to $\bm{a}$:
$$
A'P_{AA'},\ OA,\ B'P_{AB'}.
$$
As their lengths all equal $|\bm{a}|$, we only need to calculate the sum of the
external angles, which is
$$
\frac{1}{2\pi}(\angle \left<\bm{n}_{A'P_{AA'}P_{BA'}},\bm{n}_{OAP_{AA'}A'}\right>
+\angle \left<\bm{n}_{OAP_{AA'}A'},\bm{n}_{OAP_{AB'}B'}\right>
+\angle \left<\bm{n}_{OAP_{AB'}B'},\bm{n}_{B'P_{AB'}P_{BB'}}\right>).
$$
Note that $\bm{n}_{A'P_{AA'}P_{BA'}}$ and $\bm{n}_{B'P_{AB'}P_{BB'}}$ are opposite, and
the four vectors
$$
\bm{n}_{A'P_{AA'}P_{BA'}},\ \bm{n}_{OAP_{AA'}A'},\
\bm{n}_{OAP_{AB'}B'},\ \bm{n}_{B'P_{AB'}P_{BB'}}
$$
are sequentially arranged. Thus the three angles add up to $\pi$,
and the sum of the external angles equals $\frac{1}{2}$.
For the case on the right, there are two edges parallel to $\bm{a}$:
$$
A'P_{AA'},\ B'P_{AB'}.
$$
Again we only need the sum of the external angles:
$$
\frac{1}{2\pi}
(\angle \left<\bm{n}_{A'P_{AA'}P_{BA'}},\bm{n}_{A'P_{AA'}P_{AB'}B'}\right>
+\angle \left<\bm{n}_{A'P_{AA'}P_{AB'}B'},\bm{n}_{B'P_{AB'}P_{BB'}}\right>)
=\frac{1}{2}.
$$
Therefore the sum of the external angles at the edges parallel to $\bm{a}$ is always
$\frac{1}{2}$. The same calculation can be done for the other five edge directions, leading to
\begin{equation}
V_1(K)=\frac{1}{2}\Big(|\bm{a}|+|\bm{b}|+|\bm{c}|+|\bm{a'}|+|\bm{b'}|+|\bm{c'}|\Big).
\end{equation}
The expression of $V_2(K)$ differs between the two cases in Fig.\ref{tri}.
The faces of $K$ always contain four triangles $\triangle AP_{AA'}P_{AB'},
\triangle BP_{BA'}P_{BB'}, \triangle A'P_{AA'}P_{BA'}$ and $\triangle B'P_{AB'}P_{BB'}$.
The other faces are parallelograms. For the case on the left, they are
$$
OAP_{AA'}A',\ OBP_{BA'}A',\ OAP_{AB'}B',\ OBP_{BB'}B',\ P_{AA'}P_{AB'}P_{BB'}P_{BA'}.
$$
Thus
$$
V_2(K)=|\bm{a}\times \bm{b}|+|\bm{a'}\times \bm{b'}|
+|\bm{a}\times \bm{a'}|+|\bm{a}\times \bm{b'}|+|\bm{b}\times \bm{a'}|
+|\bm{b}\times \bm{b'}|+|\bm{c}\times \bm{c'}|.
$$
For the case on the right, they are
$$
ABP_{BA'}P_{AA'},\ ABP_{BB'}P_{AB'},\ A'B'P_{AB'}P_{AA'},\ A'B'P_{BB'}P_{BA'}.
$$
Thus
$$
V_2(K)=|\bm{a}\times \bm{b}|+|\bm{a'}\times \bm{b'}|
+|\bm{c}\times \bm{a'}|+|\bm{c}\times \bm{b'}|
+|\bm{a}\times \bm{c'}|+|\bm{b}\times \bm{c'}|.
$$
We point out that
\begin{equation}\label{PntRef}
V_2(T_1-T_2)+V_2(T_1+T_2)=
\sum_{\bm{e}\in\{\bm{a,b,c}\},\bm{e'}\in\{\bm{a',b',c'}\}}|\bm{e}\times\bm{e'}|
+2\big(|\bm{a}\times\bm{b}|+|\bm{a'}\times\bm{b'}|\big).
\end{equation}
In fact, when $T_2$ is substituted with $-T_2$,
$\bm{a'},\bm{b'},\bm{c'}$ convert into $-\bm{a'},-\bm{b'},-\bm{c'}$.
So $(\bm{m}_3\cdot\bm{a'})(\bm{m}_3\cdot\bm{b'})$ and
$(\bm{m'}_3\cdot\bm{a})(\bm{m'}_3\cdot\bm{b})$ remain unchanged, while
$(\bm{m}_3\cdot\bm{c'})(\bm{m'}_3\cdot\bm{c})$ alters its sign.
This means that one of $T_1-T_2$ and $T_1+T_2$ corresponds to the case in the
left, while the other corresponds to the case in the right. Therefore
(\ref{PntRef}) holds.
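The case analysis above translates directly into a small routine. The following sketch computes $V_3$, $V_2$ and $V_1$ for a pair of triangles, assuming the vertices have already been labeled so that the separation conditions hold, in which case the sign of $(\bm{m}_3\cdot\bm{c'})(\bm{m'}_3\cdot\bm{c})$ selects the case:
\begin{verbatim}
import numpy as np

def tri_functionals(O, A, B, Op, Ap, Bp):
    # V3, V2, V1 of K = T1 - T2; vertices assumed admissibly labeled
    a, b, c = O - A, B - O, A - B
    ap, bp, cp = Op - Ap, Bp - Op, Ap - Bp
    n, npr = np.cross(a, b), np.cross(ap, bp)
    m3, m3p = n / np.linalg.norm(n), npr / np.linalg.norm(npr)

    V1 = 0.5 * sum(np.linalg.norm(v) for v in (a, b, c, ap, bp, cp))
    V3 = 0.5 * (sum(abs(np.dot(n, e)) for e in (ap, bp, cp))
                + sum(abs(np.dot(npr, e)) for e in (a, b, c)))

    area = lambda u, v: np.linalg.norm(np.cross(u, v))
    if np.dot(m3, cp) * np.dot(m3p, c) < 0:   # triangles overlap: left case
        V2 = (area(a, b) + area(ap, bp) + area(a, ap) + area(a, bp)
              + area(b, ap) + area(b, bp) + area(c, cp))
    else:                                     # empty intersection: right case
        V2 = (area(a, b) + area(ap, bp) + area(c, ap) + area(c, bp)
              + area(a, cp) + area(b, cp))
    return V3, V2, V1
\end{verbatim}
These three functionals are then combined with the ball radius through (\ref{Steiner}) to obtain the excluded volume.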
The excluded volume of rods can be recovered by taking congruent degenerate triangles
$\triangle OAB,\ \triangle O'A'B'$ with $\angle AOB=\pi$.
In this case, $\bm{c}=L\bm{m}$, $V_3=0$ and $V_1=2L$.
$$
V_2=|\bm{c}\times \bm{a'}|+|\bm{c}\times \bm{b'}|
+|\bm{a}\times \bm{c'}|+|\bm{b}\times \bm{c'}|
=|\bm{c}\times (\bm{a'}+\bm{b'})|
+|(\bm{a}+\bm{b})\times \bm{c'}|
=2|\bm{c}\times \bm{c'}|.
$$
Hence
$$
V=2L^2D|\bm{m}\times \bm{m'}|+2\pi LD^2+\frac{4}{3}\pi D^3
$$
which differs from Onsager's form only by a constant.
\subsubsection{Quadratic projection of the excluded-volume potential}
The above derivation of the excluded volume is valid for any pair of triangles.
Now we suppose that $T$ is isosceles with apex angle $\theta$ and
lateral sides of length $L/2$. The two triangles are given by $T_1=PT$ and $T_2=P'T$.
The unit vectors along the edges of two triangles are written as follows.
\begin{eqnarray*}
\bm{e}_a=\frac{\bm{a}}{|\bm{a}|}=\bm{m}_1\cos\frac{\theta}{2}+\bm{m}_2\sin\frac{\theta}{2},\
\bm{e}_b=\frac{\bm{b}}{|\bm{b}|}=-\bm{m}_1\cos\frac{\theta}{2}+\bm{m}_2\sin\frac{\theta}{2},\
\bm{e}_c=\frac{\bm{c}}{|\bm{c}|}=-\bm{m}_2,\\
\bm{e'}_a=\frac{\bm{a'}}{|\bm{a'}|}=\bm{m'}_1\cos\frac{\theta}{2}+\bm{m'}_2\sin\frac{\theta}{2},\
\bm{e'}_b=\frac{\bm{b'}}{|\bm{b'}|}=-\bm{m'}_1\cos\frac{\theta}{2}+\bm{m'}_2\sin\frac{\theta}{2},\
\bm{e'}_c=\frac{\bm{c'}}{|\bm{c'}|}=-\bm{m'}_2
\end{eqnarray*}
with $|\bm{a}|=|\bm{b}|=|\bm{a'}|=|\bm{b'}|=L/2$
and $|\bm{c}|=|\bm{c'}|=L\sin\frac{\theta}{2}$.
We aim to project $V$ onto the space spanned by
$$
Q=\{1, p_{11}, p_{11}^2, p_{12}^2,p_{21}^2, p_{22}^2\}.
$$
Note that the following functions in span$\{Q\}$ are mutually orthogonal:
$$
1,p_{11}, \frac{1}{2}(3p_{11}^2-1), \sqrt{3}(p_{12}^2+\frac{1}{2}(p_{11}^2-1)),
\sqrt{3}(p_{21}^2+\frac{1}{2}(p_{11}^2-1)),
2p_{22}^2+(p_{12}^2+p_{21}^2)+\frac{1}{2}p_{11}^2-\frac{3}{2}.
$$
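This mutual orthogonality can be verified by Monte-Carlo averaging: for independent Haar-distributed $P,P'$, the relative rotation $\bar{P}$ is itself Haar-distributed, so uniformly random rotations sample $\d\nu(\bar{P})$; a sketch:
\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation

P = Rotation.random(200_000).as_matrix()   # Haar samples of SO_3
p11, p12, p21, p22 = P[:, 0, 0], P[:, 0, 1], P[:, 1, 0], P[:, 1, 1]

f = [np.ones_like(p11),
     p11,
     0.5 * (3 * p11**2 - 1),
     np.sqrt(3) * (p12**2 + 0.5 * (p11**2 - 1)),
     np.sqrt(3) * (p21**2 + 0.5 * (p11**2 - 1)),
     2 * p22**2 + p12**2 + p21**2 + 0.5 * p11**2 - 1.5]

gram = np.array([[np.mean(fi * fj) for fj in f] for fi in f])
print(np.round(gram, 3))   # off-diagonal entries vanish up to MC noise
\end{verbatim}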
We focus on the even-order terms first. Let
\begin{eqnarray*}
k_0&=&\int \d\nu(\bar{P})V(\bar{P}),\\
k_1&=&\int \d\nu(\bar{P})V(\bar{P})p_{11}^2,\\
k_2&=&\int \d\nu(\bar{P})V(\bar{P})p_{12}^2=\int \d\nu(\bar{P})V(\bar{P})p_{21}^2,\\
k_3&=&\int \d\nu(\bar{P})V(\bar{P})p_{22}^2.
\end{eqnarray*}
The even-order part of the projection is written as
\begin{align*}
5&\left[(\frac{3}{2}k_1-\frac{1}{2}k_0)(\frac{3}{2}p_{11}^2-\frac{1}{2})
+3(k_2+\frac{1}{2}(k_1-k_0))(p_{12}^2+p_{21}^2+p_{11}^2-1)\right.\\
&\left.+4(k_3+k_2+\frac{1}{4}k_1-\frac{3}{4}k_0)(p_{22}^2
+\frac{1}{2}(p_{12}^2+p^2_{21})+\frac{1}{4}p_{11}^2-\frac{3}{4}) \right].
\end{align*}
By comparing the coefficients, we have
\begin{eqnarray}
c_2&=&5(4k_1+4k_2+k_3-3k_0),\label{cc2}\\
c_3&=&5(k_1+4k_2+4k_3-3k_0), \label{cc3}\\
c_4&=&5(2k_1+5k_2+2k_3-3k_0).\label{cc4}
\end{eqnarray}
In the above, $k_0,k_1,k_2,k_3$ can be evaluated analytically. We use the
notation $p_{ij}(\bar{P})$ to represent the $(i,j)$ element of $\bar{P}$.
First we point out that
\begin{eqnarray}
&&\int_{SO_3}\d\nu(\bar{P})V_2(\bar{P})p^2_{ij}(\bar{P})\nonumber\\
&=&\int_{SO_3}\d\nu(\bar{P})p_{ij}^2(\bar{P})
\left(|\bm{a}\times\bm{b}|+|\bm{a'}\times\bm{b'}|+
\sum_{\bm{e}\in\{\bm{a,b,c}\},\bm{e'}\in\{\bm{a',b',c'}\}}
\frac{1}{2}|\bm{e}\times\bm{e'}|\right).\label{cent}
\end{eqnarray}
In fact, $V_2(\bar{P})=V_2(PT-P'T)$.
Let $J=\mbox{diag}(-1,-1,1)$, then $JT=-T$.
Thereby
$$V_2(\bar{P}J)=V_2(PT-P'JT)=V_2(PT+P'T).$$
By (\ref{PntRef}) we have
$$
V_2(\bar{P})+V_2(\bar{P}J)=
2\big(|\bm{a}\times\bm{b}|+|\bm{a'}\times\bm{b'}|\big)+
\sum_{\bm{e}\in\{\bm{a,b,c}\},\bm{e'}\in\{\bm{a',b',c'}\}}|\bm{e}\times\bm{e'}|.
$$
Meanwhile $p^2_{ij}(\bar{P}J)=p^2_{ij}(\bar{P})$, therefore (\ref{cent}) holds.
We need to calculate the terms like
\begin{equation}\label{cos}
\int_{SO_3}\d\nu(\bar{P})p_{ij}^2|\bm{a}\times\bm{b}\cdot\bm{a'}|
=\frac{1}{8}L^3\sin\theta\int_{SO_3}\d\nu(\bar{P})p_{ij}^2|\bm{m}_3\cdot
\bm{e'}_a|
\end{equation}
and
\begin{equation}\label{sin}
\int_{SO_3}\d\nu(\bar{P})p_{ij}^2|\bm{a}\times\bm{a'}|
=\frac{1}{4}L^2\int_{SO_3}\d\nu(\bar{P})p_{ij}^2
|\bm{e}_a\times\bm{e'}_a|.
\end{equation}
We describe the strategy to compute integrals
$$
\int_{SO_3}\d\nu(\bar{P})p_{ij}^2|\bm{e}\times\bm{e'}|,\quad
\int_{SO_3}\d\nu(\bar{P})p_{ij}^2|\bm{e}\cdot\bm{e'}|,
$$
where $\bm{e},\bm{e'}$ are unit vectors. The following formula is needed.
\begin{equation}
\int_{SO_3}\d\nu(\bar{P})f(\bar{P})=\int_{SO_3}\d\nu(\bar{P})f(R_1^{-1}\bar{P}R_2),
\quad \forall R_1,R_2\in SO_3.
\end{equation}
Choose $R_1$ and $R_2$ such that
$$
R_1\bm{e}=\bm{m}_1,\quad R_2\bm{e'}=\bm{m'}_1.
$$
The integral is rewritten as
\begin{eqnarray*}
\int_{SO_3}\d\nu(\bar{P})p_{ij}^2|\bm{e}\times\bm{e'}|&=&
\int_{SO_3}\d\nu(\bar{P})p^2_{ij}(R_1^{-1}\bar{P}R_2)|\bm{m}_1\times\bm{m'}_1|\\
&=&\int^{\pi}_0 \d\alpha\int^{2\pi}_0\d\beta\int^{2\pi}_0\d\gamma
\frac{\sin\alpha}{8\pi^2} |\sin\alpha|Q(\alpha,\beta,\gamma),
\end{eqnarray*}
in which $Q$ is a trigonometric polynomial of $\alpha,\beta,\gamma$.
When the cross product is replaced by the dot product, $|\sin\alpha|$ is
replaced by $|\cos\alpha|$.
We compute (\ref{cos}) as an example. Define $R_1$ and $R_2$ by
\begin{align*}
R_1\bm{m}_1&=-\bm{m}_3,&R_1\bm{m}_2&=\bm{m}_2,&R_1\bm{m}_3&=\bm{m}_1,\\
R_2\bm{m'}_1&=\bm{m'}_1\cos\frac{\theta}{2}-\bm{m'}_2\sin\frac{\theta}{2},&
R_2\bm{m'}_2&=\bm{m'}_2\cos\frac{\theta}{2}+\bm{m'}_1\sin\frac{\theta}{2},&
R_2\bm{m'}_3&=\bm{m'}_3.
\end{align*}
Then we have
$$
p_{11}(R_1^{-1}\bar{P}R_2)=-p_{31}\cos\frac{\theta}{2}+p_{32}\sin\frac{\theta}{2}.
$$
Hence
\begin{eqnarray*}
&&\int_{SO_3}\d\nu(\bar{P})p_{11}^2|\bm{m}_3\cdot
(\bm{m'}_1\cos\frac{\theta}{2}+\bm{m'}_2\sin\frac{\theta}{2})|\\
&=&\int_{SO_3}\d\nu(\bar{P})(-p_{31}\cos\frac{\theta}{2}+p_{32}
\sin\frac{\theta}{2})^2|\bm{m}_1\cdot\bm{m'}_1|\\
&=&\int_0^{\pi}\d\alpha\int_0^{2\pi}\d\beta\int_{0}^{2\pi}
\d\gamma\frac{\sin\alpha}{8\pi^2}|\cos\alpha|\\
&&\big(-\sin\alpha\sin\beta\cos\frac{\theta}{2}+\sin\frac{\theta}{2}
(\cos\alpha\sin\beta\cos\gamma+\cos\beta\sin\gamma)\big)^2\\
&=&\frac{1}{8}\cos^2\frac{\theta}{2}+\frac{3}{16}\sin^2\frac{\theta}{2}.
\end{eqnarray*}
The other terms can be handled similarly. All the results are listed in
Table~\ref{projcoe} at the end of the article.
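The worked example above is easy to confirm by brute-force quadrature over the Euler angles; a sketch (midpoint rule at an illustrative resolution):
\begin{verbatim}
import numpy as np

def I_num(theta, n=160):
    a = (np.arange(n) + 0.5) * np.pi / n        # alpha midpoints
    b = (np.arange(n) + 0.5) * 2 * np.pi / n    # beta midpoints
    g = (np.arange(n) + 0.5) * 2 * np.pi / n    # gamma midpoints
    A, B, G = np.meshgrid(a, b, g, indexing="ij", sparse=True)
    integrand = (np.sin(A) / (8 * np.pi**2) * np.abs(np.cos(A))
                 * (-np.sin(A) * np.sin(B) * np.cos(theta / 2)
                    + np.sin(theta / 2) * (np.cos(A) * np.sin(B) * np.cos(G)
                                           + np.cos(B) * np.sin(G)))**2)
    return integrand.sum() * (np.pi / n) * (2 * np.pi / n)**2

theta = 0.7
print(I_num(theta))                                          # quadrature
print(np.cos(theta/2)**2 / 8 + 3 * np.sin(theta/2)**2 / 16)  # closed form
\end{verbatim}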
By collecting those results, we get
\begin{eqnarray*}
2k_0&=&\frac{1}{4}cL^3\sin\theta(1+\sin\frac{\theta}{2})
+\frac{\pi}{4}cL^2D(1+2\sin\frac{\theta}{2}+\sin^2\frac{\theta}{2})+3C,\\
2k_1&=&\frac{1}{32}cL^3\sin\theta(2\cos^2\frac{\theta}{2}
+3\sin^2\frac{\theta}{2}+3\sin\frac{\theta}{2})
+\frac{\pi}{64}cL^2D\Big[4\cos^4\frac{\theta}{2}+5\sin^4\frac{\theta}{2}\\
&&+12\sin^2\frac{\theta}{2}\cos^2\frac{\theta}{2}+2\sin\frac{\theta}{2}
(6\cos^2\frac{\theta}{2}+5\sin^2\frac{\theta}{2})+5\sin^2\frac{\theta}{2}\Big]+C,\\
2k_2&=&\frac{1}{64}cL^3\sin\theta(5+5\sin\frac{\theta}{2})
+\frac{\pi}{64}cL^2D\Big[6\cos^4\frac{\theta}{2}+6\sin^4\frac{\theta}{2}\\
&&+9\sin^2\frac{\theta}{2}\cos^2\frac{\theta}{2}+\sin\frac{\theta}{2}
(9\cos^2\frac{\theta}{2}+12\sin^2\frac{\theta}{2})+6\sin^2\frac{\theta}{2}\Big]+C,\\
2k_3&=&\frac{1}{32}cL^3\sin\theta(3\cos^2\frac{\theta}{2}
+2\sin^2\frac{\theta}{2}+2\sin\frac{\theta}{2})
+\frac{\pi}{64}cL^2D\Big[5\cos^4\frac{\theta}{2}+4\sin^4\frac{\theta}{2}\\
&&+12\sin^2\frac{\theta}{2}\cos^2\frac{\theta}{2}+2\sin\frac{\theta}{2}
(6\cos^2\frac{\theta}{2}+4\sin^2\frac{\theta}{2})+4\sin^2\frac{\theta}{2}\Big]+C,
\end{eqnarray*}
where $C$ is a constant
$$
3C=\frac{1}{4}L^2D\sin\theta+D^2L(1+\sin\frac{\theta}{2})+\frac{4}{3}\pi D^3.
$$
By (\ref{cc2})-(\ref{cc4}), we get (\ref{c_2})-(\ref{c_4}).
The computation of $c_1$ is complicated. Note that $V_3$ does not contribute to
$c_1$. In fact, it is obvious that $V_3(\bar{P}J)=V_3(\bar{P})$ and
$p_{11}(\bar{P}J)=-p_{11}(\bar{P})$, which yield
$$
\int_{SO_3}\d\nu(\bar{P})V_3p_{11}=0.
$$
Therefore
$$
c_1=\frac{3}{8}cDL^2K(\theta).
$$
Denote
\begin{align*}
I_{aa}&=|\bm{e}_a\times\bm{e'}_a|, &
I_{ab}&=|\bm{e}_a\times\bm{e'}_b|, &
I_{ac}&=2\sin\frac{\theta}{2}|\bm{e}_a\times\bm{e'}_c|, \\
I_{ba}&=|\bm{e}_b\times\bm{e'}_a|, &
I_{bb}&=|\bm{e}_b\times\bm{e'}_b|, &
I_{bc}&=2\sin\frac{\theta}{2}|\bm{e}_b\times\bm{e'}_c|, \\
I_{ca}&=2\sin\frac{\theta}{2}|\bm{e}_c\times\bm{e'}_a|, &
I_{cb}&=2\sin\frac{\theta}{2}|\bm{e}_c\times\bm{e'}_b|, &
I_{cc}&=4\sin^2\frac{\theta}{2}|\bm{e}_c\times\bm{e'}_c|.
\end{align*}
$K(\theta)$ is written as
\begin{equation}
\begin{split}
K(\theta)&=\int_{SO_3}\d\nu(\bar{P})p_{11}\\
&\left[(I_{aa}+I_{ab}+I_{ba}+I_{bb}+I_{cc})
\chi_{\{ (\bm{m}_3\cdot\bm{a'})(\bm{m}_3\cdot\bm{b'})> 0,
(\bm{m'}_3\cdot\bm{a})(\bm{m'}_3\cdot\bm{b})> 0,
(\bm{m}_3\cdot\bm{c'})(\bm{m'}_3\cdot\bm{c})<0\}}\right.
\\
&+(I_{ac}+I_{bc}+I_{ca}+I_{cb})
\chi_{\{ (\bm{m}_3\cdot\bm{a'})(\bm{m}_3\cdot\bm{b'})> 0,
(\bm{m'}_3\cdot\bm{a})(\bm{m'}_3\cdot\bm{b})> 0,
(\bm{m}_3\cdot\bm{c'})(\bm{m'}_3\cdot\bm{c})>0\}}
\\
&+(I_{ab}+I_{ac}+I_{bb}+I_{bc}+I_{ca})
\chi_{\{ (\bm{m}_3\cdot\bm{b'})(\bm{m}_3\cdot\bm{c'})> 0,
(\bm{m'}_3\cdot\bm{a})(\bm{m'}_3\cdot\bm{b})> 0,
(\bm{m}_3\cdot\bm{a'})(\bm{m'}_3\cdot\bm{c})<0\}}
\\
&+(I_{aa}+I_{ba}+I_{cb}+I_{cc})
\chi_{\{ (\bm{m}_3\cdot\bm{b'})(\bm{m}_3\cdot\bm{c'})> 0,
(\bm{m'}_3\cdot\bm{a})(\bm{m'}_3\cdot\bm{b})> 0,
(\bm{m}_3\cdot\bm{a'})(\bm{m'}_3\cdot\bm{c})>0\}}
\\
&+(I_{ac}+I_{aa}+I_{bc}+I_{ba}+I_{cb})
\chi_{\{ (\bm{m}_3\cdot\bm{c'})(\bm{m}_3\cdot\bm{a'})> 0,
(\bm{m'}_3\cdot\bm{a})(\bm{m'}_3\cdot\bm{b})> 0,
(\bm{m}_3\cdot\bm{b'})(\bm{m'}_3\cdot\bm{c})<0\}}
\\
&+(I_{ab}+I_{bb}+I_{cc}+I_{ca})
\chi_{\{ (\bm{m}_3\cdot\bm{c'})(\bm{m}_3\cdot\bm{a'})> 0,
(\bm{m'}_3\cdot\bm{a})(\bm{m'}_3\cdot\bm{b})> 0,
(\bm{m}_3\cdot\bm{b'})(\bm{m'}_3\cdot\bm{c})>0\}}
\\
&+(I_{ba}+I_{bb}+I_{ca}+I_{cb}+I_{ac})
\chi_{\{ (\bm{m}_3\cdot\bm{a'})(\bm{m}_3\cdot\bm{b'})> 0,
(\bm{m'}_3\cdot\bm{b})(\bm{m'}_3\cdot\bm{c})> 0,
(\bm{m}_3\cdot\bm{c'})(\bm{m'}_3\cdot\bm{a})<0\}}
\\
&+(I_{bc}+I_{cc}+I_{aa}+I_{ab})
\chi_{\{ (\bm{m}_3\cdot\bm{a'})(\bm{m}_3\cdot\bm{b'})> 0,
(\bm{m'}_3\cdot\bm{b})(\bm{m'}_3\cdot\bm{c})> 0,
(\bm{m}_3\cdot\bm{c'})(\bm{m'}_3\cdot\bm{a})>0\}}
\\
&+(I_{bb}+I_{bc}+I_{cb}+I_{cc}+I_{aa})
\chi_{\{ (\bm{m}_3\cdot\bm{b'})(\bm{m}_3\cdot\bm{c'})> 0,
(\bm{m'}_3\cdot\bm{b})(\bm{m'}_3\cdot\bm{c})> 0,
(\bm{m}_3\cdot\bm{a'})(\bm{m'}_3\cdot\bm{a})<0\}}
\\
&+(I_{ba}+I_{ca}+I_{ab}+I_{ac})
\chi_{\{ (\bm{m}_3\cdot\bm{b'})(\bm{m}_3\cdot\bm{c'})> 0,
(\bm{m'}_3\cdot\bm{b})(\bm{m'}_3\cdot\bm{c})> 0,
(\bm{m}_3\cdot\bm{a'})(\bm{m'}_3\cdot\bm{a})>0\}}
\\
&+(I_{bc}+I_{ba}+I_{cc}+I_{ca}+I_{ab})
\chi_{\{ (\bm{m}_3\cdot\bm{c'})(\bm{m}_3\cdot\bm{a'})> 0,
(\bm{m'}_3\cdot\bm{b})(\bm{m'}_3\cdot\bm{c})> 0,
(\bm{m}_3\cdot\bm{b'})(\bm{m'}_3\cdot\bm{a})<0\}}
\\
&+(I_{bb}+I_{cb}+I_{ac}+I_{aa})
\chi_{\{ (\bm{m}_3\cdot\bm{c'})(\bm{m}_3\cdot\bm{a'})> 0,
(\bm{m'}_3\cdot\bm{b})(\bm{m'}_3\cdot\bm{c})> 0,
(\bm{m}_3\cdot\bm{b'})(\bm{m'}_3\cdot\bm{a})>0\}}
\\
&+(I_{ca}+I_{cb}+I_{aa}+I_{ab}+I_{bc})
\chi_{\{ (\bm{m}_3\cdot\bm{a'})(\bm{m}_3\cdot\bm{b'})> 0,
(\bm{m'}_3\cdot\bm{c})(\bm{m'}_3\cdot\bm{a})> 0,
(\bm{m}_3\cdot\bm{c'})(\bm{m'}_3\cdot\bm{b})<0\}}
\\
&+(I_{cc}+I_{ac}+I_{ba}+I_{bb})
\chi_{\{ (\bm{m}_3\cdot\bm{a'})(\bm{m}_3\cdot\bm{b'})> 0,
(\bm{m'}_3\cdot\bm{c})(\bm{m'}_3\cdot\bm{a})> 0,
(\bm{m}_3\cdot\bm{c'})(\bm{m'}_3\cdot\bm{b})>0\}}
\\
&+(I_{cb}+I_{cc}+I_{ab}+I_{ac}+I_{ba})
\chi_{\{ (\bm{m}_3\cdot\bm{b'})(\bm{m}_3\cdot\bm{c'})> 0,
(\bm{m'}_3\cdot\bm{c})(\bm{m'}_3\cdot\bm{a})> 0,
(\bm{m}_3\cdot\bm{a'})(\bm{m'}_3\cdot\bm{b})<0\}}
\\
&+(I_{ca}+I_{aa}+I_{bb}+I_{bc})
\chi_{\{ (\bm{m}_3\cdot\bm{b'})(\bm{m}_3\cdot\bm{c'})> 0,
(\bm{m'}_3\cdot\bm{c})(\bm{m'}_3\cdot\bm{a})> 0,
(\bm{m}_3\cdot\bm{a'})(\bm{m'}_3\cdot\bm{b})>0\}}
\\
&+(I_{cc}+I_{ca}+I_{ac}+I_{aa}+I_{bb})
\chi_{\{ (\bm{m}_3\cdot\bm{c'})(\bm{m}_3\cdot\bm{a'})> 0,
(\bm{m'}_3\cdot\bm{c})(\bm{m'}_3\cdot\bm{a})> 0,
(\bm{m}_3\cdot\bm{b'})(\bm{m'}_3\cdot\bm{b})<0\}}
\\
&\left.+(I_{cb}+I_{ab}+I_{bc}+I_{ba})
\chi_{\{ (\bm{m}_3\cdot\bm{c'})(\bm{m}_3\cdot\bm{a'})> 0,
(\bm{m'}_3\cdot\bm{c})(\bm{m'}_3\cdot\bm{a})> 0,
(\bm{m}_3\cdot\bm{b'})(\bm{m'}_3\cdot\bm{b})>0\}}\right].
\end{split}
\label{Ktheta}
\end{equation}
\subsection{The excluded-volume potential of bent-core molecules}
Denote by $\bm{p}_1$ and $\bm{p}_2$ the two vectors along the arms of the molecule.
The excluded region of two molecules is the union of four spheroparallelograms
$$
V_{ij}=O\bm{p}_i\bm{p'}_j+B_D,\qquad i,j=1,2.
$$
Thus the excluded volume can be written as
$$
V=\sum|V_{ij}|-\sum|V_{ij}\cap V_{i'j'}|+\sum|V_{ij}\cap V_{i'j'}\cap V_{i''j''}|-|V_{11}\cap V_{12}\cap V_{21}\cap V_{22}|.
$$
We already know that
$$
|V_{ij}|=2L^2D|\bm{p}_i\times\bm{p'}_j|+2\pi LD^2+\frac{4}{3}\pi D^3.
$$
So we only need to compute the volumes of the intersections above.
When calculating the volume of a region $U$, we can write
$$
|U|=\int\d x\d y m(\Omega(x,y))
$$
where $m(\cdot)$ denotes the measure of a set and
$\Omega(x,y)=\{z|(x,y,z)\in U\}$.
Because $V_{ij}$ is convex, $\Omega(x,y)$ is an interval
$[l_{ij}(x,y)$, $u_{ij}(x,y)]$ for $U=V_{ij}$. Thus
\begin{eqnarray*}
|V_{ij}\cap V_{i'j'}|&=&\int\d x\d y \big[\min\{u_{ij},u_{i'j'}\}-\max\{l_{ij},l_{i'j'}\}\big]^+,\\
|V_{ij}\cap V_{i'j'}\cap V_{i''j''}|&=&\int\d x\d y \big[\min\{u_{ij},u_{i'j'},u_{i''j''}\}-\max\{l_{ij},l_{i'j'},l_{i''j''}\}\big]^+,\\
|V_{11}\cap V_{12}\cap V_{21}\cap V_{22}|&=&\int\d x\d y
\big[\min\{u_{11},u_{12},u_{21},u_{22}\}-\max\{l_{11},l_{12},l_{21},l_{22}\}\big]^+,
\end{eqnarray*}
where $x^+=\max\{x,0\}$.
Now the problem reduces to computing $l_{ij}(x,y)$ and $u_{ij}(x,y)$.
Put one molecule in the plane $xOy$ with the arrowhead at $O$ and $\bm{m}_1$
along $-x$. Then $\bm{p}_{1,2}=L(\cos\frac{\theta}{2},\pm\sin\frac{\theta}{2},0)$.
We describe how to compute $u(x,y)$ of the spheroparallelogram $O\bm{p}_1\bm{
p'}_1+B_D$, where $\bm{p'}_1/L=(p,q,r)$ is a unit vector. A spheroparallelogram
consists of a parallelepiped, four half cylinders at the edges of the parallelogram,
and four corners, each of which is enclosed by two planes and a sphere.
Classify $u(x,y)$ into three cases by where $(x,y,u(x,y))$ lies: the
parallelepiped; one of the four half cylinders; one of the four spheres.
For the first case, the distance of $(x,y,u(x,y))$ to the plane
$O\bm{p}_1\bm{p'}_1$ equals $D$. The normal vector of $O\bm{p}_1\bm{p'}_1$ is
$$
\bm{p}_1\times\bm{p'}_1=(A,B,C)=(r\sin\frac{\theta}{2},-r\cos\frac{\theta}{2},
q\cos\frac{\theta}{2}-p\sin\frac{\theta}{2}).
$$
Hence
$$
u(x,y)=\frac{D\sqrt{A^2+B^2+C^2}}{|C|}-\frac{Ax+By}{C}.
$$
For the second case, the distance between $(x,y,u(x,y))$ and the axis of one of
the four half cylinders equals $D$. Thereby $u(x,y)$ is the larger root of
$$
(x-x_0)^2+(y-y_0)^2+\big(u(x,y)-z_0\big)^2-\big[a(x-x_0)+b(y-y_0)+c(u(x,y)-z_0)\big]^2=D^2.
$$
In the above, $(x_0,y_0,z_0)$ is any point on the axis, which may take $O$ or
$\Big(L(\cos\frac{\theta}{2}+p),L(\sin\frac{\theta}{2}+q),Lr\Big)$;
$(a,b,c)$ is the unit vector along the axis, which may take
$(\cos\frac{\theta}{2},\sin\frac{\theta}{2},0)$ or $(p,q,r)$.
For the third case, $u(x,y)$ is the larger root of
$$
(x-x_0)^2+(y-y_0)^2+(z-z_0)^2=D^2
$$
where $(x_0,y_0,z_0)$ is one of the four vertices of the parallelogram
$O\bm{p}_1\bm{p'}_1$.
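These three branches are straightforward to implement; the sketch below evaluates the upper envelope $u(x,y)$ in each case (the lower envelope $l(x,y)$ takes the smaller roots; names and argument conventions are illustrative):
\begin{verbatim}
import numpy as np

def u_plane(x, y, A, B, C, D):
    # face case: distance D from the plane A x + B y + C z = 0 (C != 0)
    return D * np.sqrt(A**2 + B**2 + C**2) / abs(C) - (A * x + B * y) / C

def u_cylinder(x, y, x0, y0, z0, a, b, c, D):
    # half-cylinder case: distance D from the axis through (x0, y0, z0)
    # with unit direction (a, b, c); larger root of the quadratic in z,
    # assuming the axis is not vertical (1 - c^2 > 0)
    dx, dy = x - x0, y - y0
    p = a * dx + b * dy
    A2, B2 = 1 - c**2, -2 * c * p
    C2 = dx**2 + dy**2 - p**2 - D**2
    return z0 + (-B2 + np.sqrt(B2**2 - 4 * A2 * C2)) / (2 * A2)

def u_sphere(x, y, x0, y0, z0, D):
    # corner case: larger root on the sphere of radius D about a vertex
    return z0 + np.sqrt(D**2 - (x - x0)**2 - (y - y0)**2)
\end{verbatim}
Each branch is valid only where the corresponding square root is real and the point projects onto the matching region of Fig.\ref{region}.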
\begin{figure}
\centering
\includegraphics[width=0.55\textwidth,keepaspectratio]{region.pdf}
\caption{Integration regions, divided into three cases. }\label{region}
\end{figure}
It remains to specify the regions of the three cases. In Fig.\ref{region},
they are coloured red, white and blue, respectively. The whole region contains
all points whose distance to the central parallelogram (drawn with a
dotted line in Fig.\ref{region}), which is spanned by
$\overrightarrow{OA}=(L\cos\frac{\theta}{2},L\sin\frac{\theta}{2})$
and $\overrightarrow{OB}=(Lp,Lq)$, is no more than $D$.
It consists of the central parallelogram, four rectangles at each edge and
four sectors at each corner. The red region is the projection to plane $xOy$
of the parallelogram $O_1\bm{p}_1\bm{p'}_1$ with
$$
\overrightarrow{OO_{1}}=D\mbox{sgn}C\frac{\bm{p}_1\times\bm{p'}_1}
{|\bm{p}_1\times\bm{p'}_1|}.
$$
It is obtained by shifting the central
parallelogram along $\overrightarrow{OO'}$, where $O'$ is located at
$$
\frac{D\mbox{sgn}(q\cos\frac{\theta}{2}-p\sin\frac{\theta}{2})}
{\sqrt{1-(p\cos\frac{\theta}{2}+q\sin\frac{\theta}{2})^2}}
(r\sin\frac{\theta}{2},-r\cos\frac{\theta}{2}).
$$
Two of the four white regions are rectangles.
The other two white regions are enclosed by two line segments and two
elliptical arcs. Each elliptical arc connects a vertex of the red region and
a vertex of the rectangles on the boundary, such as $O'S$. It is the projection
of a curve to $xOy$. The curve is part of the intersection of the sphere
$$
(x-x_0)^2+(y-y_0)^2+(z-z_0)^2=D^2
$$
and the plane
$$
p(x-x_0)+q(y-y_0)+r(z-z_0)=0
$$
where $(x_0,y_0,z_0)$ is one of the vertices of $O\bm{p}_1\bm{p'}_1$.
By eliminating $z$, we get the equation of the curve:
$$
\big[p(x-x_0)+q(y-y_0)\big]^2+r^2\big[(x-x_0)^2+(y-y_0)^2\big]=r^2D^2.
$$
The blue regions are those enclosed by a line segment, an elliptical arc
defined above, and a circular arc on the boundary.
\textbf{Acknowledgements.}
P. Zhang is partly supported by NSF of China under Grant 50930003 and 21274005.
\section{Introduction}
The large gauge hierarchy between the 4D Planck scale,
\begin{equation}
\Lambda_P^{(4)} = \sqrt{\frac{1}{8 \pi G_N^{(4)}}} \simeq 2.4 \times 10^{18} \ \mathrm{GeV},
\end{equation}
where $G_N^{(4)}$ is the 4D gravitational Newton constant, and the measured Higgs boson mass $m_h \simeq 125 \ \mathrm{GeV}$, is one of the internal puzzles of the Standard Model (SM) of particle physics \cite{Donoghue2014} when coupled to gravity. If the Higgs boson is described by an elementary scalar field which is not protected by a symmetry, the radiative corrections to $m^2_h$ are quadratically sensitive to mass scales of new degrees of freedom. Such degrees of freedom are expected at least at scales where gravitational self interactions become strong, which is the 4D Planck scale $\Lambda_P^{(4)}$ in the usual setup. Then the measured $m_h$ implies an incredible fine tuning between the Higgs boson bare mass and the radiative corrections: this is the naturalness problem of the Higgs boson mass due to the large hierarchy between the 4D Planck scale $\Lambda_P^{(4)}$ and the ElectroWeak (EW) scale (usually called the gauge hierarchy problem) \cite{Wilson:1970ag,Susskind:1978ms,tHooft:1979rat}. A possibility to solve this issue is to embed the SM into a theory where the true Planck scale, i.e. the scale where gravitational self interactions become strong and new degrees of freedom are expected, is in the $1-10$ TeV range. However, this does not solve the question how a UltraViolet (UV) completion of
quantum gravity is achieved.
In 1998, N.~Arkani-Hamed, S.~Dimopoulos and G.R.~Dvali (ADD) proposed in Ref.~\cite{ArkaniHamed:1998rs} to compactify $q \in \mathbb{N}^*$ flat spacelike extra dimensions on a $q$-dimensional compact space $\mathcal{C}_q$ of volume $\mathcal{V}_q$ with a factorizable spacetime geometry $\mathcal{M}_4 \times \mathcal{C}_q$, where $\mathcal{M}_4$ is the 4D Minkowski spacetime. The 4D Planck scale $\Lambda_P^{(4)}$ is just an effective scale given by the relation
\begin{equation}
\left[\Lambda_P^{(4)}\right]^2 = \left[ \Lambda_P^{(4+q)} \right]^{q+2} \, \mathcal{V}_q \, ,
\label{ADD_formula}
\end{equation}
involving the $(4+q)$D Planck scale
\begin{equation}
\Lambda_P^{(4+q)} = \left[ \dfrac{1}{8 \pi G^{(4+q)}_N} \right]^{1/(q+2)},
\end{equation}
where $G_N^{(4+q)}$ is the $(4+q)$D gravitational Newton constant. $\Lambda_P^{(4+q)}$ is the real scale at which gravity becomes strongly coupled, so it is the true cut-off of the QFT\footnote{To be more precise, if the UV completion is perturbative, the UV gravitational degrees of freedom appear at a scale $\Lambda_{UV}<\Lambda_P^{(4+q)}$. The cut-off of the EFT is $\Lambda_{UV}$ and not $\Lambda_P^{(4+q)}$. For example, in perturbative string theory, the string scale is lower than the higher-dimensional Planck scale. To simplify the discussion, we ignore this possibility here.}, and this model solves the gauge hierarchy problem if $\Lambda_P^{(4+q)} \sim \mathcal{O}(1)$ TeV with a large compactified volume $\mathcal{V}_q$. In ADD models, the SM fields must be localized on a 3-brane, in contrast to gravity which is a property of $(4+q)D$ spacetime in general relativity. At large distances between two test masses on the 3-brane, gravity appears as a feebly coupled theory, because gravitational fluxes spread into the large volume $\mathcal{V}_q$ of the bulk. Quickly \cite{ArkaniHamed:1998nn}, it was realized that the Kaluza-Klein (KK) modes of fields propagating into this large volume $\mathcal{V}_q$ have couplings to the SM fields suppressed by $\sqrt{\mathcal{V}_q}$. One can thus build natural models of feebly interacting particles, i.e. particles which have a tiny coupling constant with the SM, like right-handed neutrinos \cite{Dienes:1998sb,ArkaniHamed:1998vp,Dvali:1999cn}, axions \cite{Chang:1999si,Dienes:1999gw}, dark photons \cite{ArkaniHamed:1998nn}, etc. In Ref.~\cite{ArkaniHamed:1998nn}, ADD proposed a simple toroidal compactification $\mathcal{C}_q = \left(\mathcal{R}_1\right)^q$, where $\mathcal{R}_1$ is the circle of radius $R$, and $\mathcal{V}_q = (2 \pi R)^q$. The bulk fields, like the graviton, generate a tower of KK modes with a uniform mass gap of $1/R$. For a benchmark value $\Lambda_P^{(4+q)} = 1$ TeV, the $(4+q)$D Planck length is $\ell_P^{(4+q)} = 1/ \Lambda_P^{(4+q)} \simeq 2 \times 10^{-19} \ \mathrm{m}$, and one gets Tab.~\ref{table} and Fig.~\ref{graph} from Eq.~\eqref{ADD_formula}.
\begin{table}[h]
\begin{center}
\begin{tabular}{c||c|c|c}
$q$ & $R$ (m) & $R/\ell_P^{(4+q)}$ & $M_{KK}$ (eV) \\
\hline \hline
1 & $2 \times 10^{11}$ & $9 \times 10^{29}$ & $1 \times 10^{-18}$ \\
2 & $8 \times 10^{-5}$ & $4 \times 10^{14}$ & $3 \times 10^{-3}$ \\
4 & $2 \times 10^{-12}$ & $8 \times 10^{6}$ & $1 \times 10^5$ \\
6 & $4 \times 10^{-15}$ & $2 \times 10^{4}$ & $5 \times 10^7$ \\
22 & $8 \times 10^{-19}$ & $4$ & $3 \times 10^{11}$
\end{tabular}
\caption{$R$, $R/\ell_P^{(4+q)}$ and $M_{KK}$ as a function of $q$ for $\Lambda_P^{(4+q)} = 1 \ \mathrm{TeV}$.}
\label{table}
\end{center}
\end{table}
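Tab.~\ref{table} follows directly from Eq.~(\ref{ADD_formula}) with $\mathcal{V}_q=(2\pi R)^q$; the sketch below reproduces it, using $\hbar c \simeq 1.97 \times 10^{-16}$~GeV$\cdot$m to convert $\mathrm{GeV}^{-1}$ into meters:
\begin{verbatim}
import math

hbar_c = 1.9732705e-16           # GeV * m
M4, M = 2.4e18, 1.0e3            # Lambda_P^(4), Lambda_P^(4+q) in GeV
lP = hbar_c / M                  # (4+q)D Planck length in meters
for q in (1, 2, 4, 6, 22):
    R = (M4 / M)**(2.0 / q) / (2 * math.pi * M) * hbar_c   # meters
    MKK = hbar_c / R * 1e9                                 # 1/R in eV
    print(f"q={q}: R={R:.0e} m, R/lP={R/lP:.0e}, M_KK={MKK:.0e} eV")
\end{verbatim}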
Motivated by UV completions in superstring/M-theory \cite{Antoniadis:1998ig,Ibanez:2012zz} requiring 10/11 spacetime dimensions, most of the efforts concentrated on $q \leq 7$. The compactification radius $R$ must be stabilized at a large value compared to $\ell_P^{(4+q)}$, which reintroduces a geometrical hierarchy \cite{ArkaniHamed:1998nn} with a low KK mass gap: too light KK-gravitons are constrained by astrophysics, cosmology and collider physics\footnote{For a review of the constraints on the simplest ADD model with toroidal compactification, c.f. Ref.~\cite{Pomarol:2018oca}. \label{note_const_ADD}}. When one probes gravitational Newton's law at large [small] distances with respect to $R$, gravity appears 4D [$(4+q)$D]. The case $q=1$ is excluded because it leads to a modification of 4D gravitational Newton's law at the scale of the solar system. ADD's proposal is thus often associated with the Large Extra Dimensions (LEDs) paradigm, and is just a reformulation of the gauge hierarchy problem.
In the literature, there are interesting propositions to stabilize such large compactification radii \cite{ArkaniHamed:1998kx, ArkaniHamed:1999dz, Mazumdar:2001ya, Carroll:2001ih, Albrecht:2001cp, Antoniadis:2002gw, Peloso:2003nv}. To circumvent this geometrical hierarchy problem, a solution can be to abandon the framework of UV completions by superstring/M-theory, and to increase the number of extra dimensions. This possibility was mentioned first in Ref.~\cite{ArkaniHamed:1998nn}. Fig.~\ref{graph} shows that, for sufficiently large $q$, the remaining hierarchy disappears: $R \sim \ell_P^{(4+q)}$. A possible trail to UV complete this kind of model could be via Loop Quantum Gravity (LQG) \cite{Rovelli:2014ssa} since, a priori, LQG does not fix the number of spatial dimensions. First attempts to add spacelike extra dimensions to LQG were made in Refs.~\cite{Bodendorfer:2011nv, Bodendorfer:2011nw, Bodendorfer:2011nx, Bodendorfer:2011ny, Bodendorfer:2013jba, Bodendorfer:2011pb, Bodendorfer:2011pc, Bodendorfer:2011hs, Bodendorfer:2011xe, Thurn:2013lko}.
\begin{figure}[h]
\begin{center}
\includegraphics[height=8cm]{graph.pdf}
\end{center}
\caption{Graph of $R/\ell_P^{(4+q)}$ as a function of $q$ for $\Lambda_P^{(4+q)} = 1 \ \mathrm{TeV}$.}
\label{graph}
\end{figure}
The most popular way to overcome the geometrical hierarchy problem is the warped extra dimension scenario proposed in 1999 by L.~Randall and R.~Sundrum (RS) in Ref.~\cite{Randall:1999ee}, known as the RS1 model. A less known approach is the compactification of $q \geq 2$ spacelike extra dimension on a compact hyperbolic manifold with a large genus (number of holes) proposed in 2000 in Ref.~\cite{Kaloper:2000jb} (see also Ref.~\cite{Orlando:2010kx}).
The goal of the present work is to discuss another compactification geometry which may solve the geometrical hierarchy problem in ADD-like models. In 2005, H.D.~Kim proposed in Ref.~\cite{Kim:2005aa} to realize ADD's idea by compactifying a LED on a 1D singular variety: a metric graph\footnote{Metric graphs have interesting applications in physics, chemistry and mathematics (c.f. Ref.~\cite{Kuchment2002} for a short review). A 2D QFT on a star graph background was developed in Refs.~\cite{Bellazzini:2006jb, Bellazzini:2006kh, Bellazzini:2008mn,Bellazzini:2008cs}.}, like a star or a rose with respectively $N$ leaves/petals of equal length/circumference $\ell$. The reader can find a mathematical introduction to the spectral analysis of differential operators defined on metric graphs, the so-called quantum graphs, in Ref.~\cite{Kuchment2014}. In Ref.~\cite{Kim:2005aa}, it was shown that, for large $N$, one can build a phenomenologically viable model with only a single LED which gives sizable submillimeter deviations from the Newtonian law of gravity in tabletop experiments. The KK mass scale is $M_{KK} = 1/\ell \sim \mathcal{O}(10-100)$ meV. Here, we want to push the concept further and we take $\ell$ close to $\ell_P^{(5)}$ for large $N$, so $M_{KK} = 1/\ell \sim \mathcal{O}(0.1-1)$ TeV, which does not reintroduce a scale hierarchy and evades all constraints on traditional ADD models (with a compactification on a low dimensional torus) from submillimeter tests of Newtonian gravity, astrophysics and cosmology. The integer $N$ is radiatively stable so the scenario solves completely the naturalness problem of the Higgs mass under the hypothesis of an exact global permutation symmetry between the leaves/petals. Ref.~\cite{Kim:2005aa} gives no information on the way to embed the SM fields into the proposed geometry. Are they bulk or brane-localized fields? In this work, we will see that the SM fields must be localized on a 3-brane and we find it particularly interesting to localize them on the junction (central vertex) of the star/rose graph.
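For orientation, if the total length $N\ell$ of the star (the natural volume of a star with $N$ leaves of length $\ell$) plays the role of $\mathcal{V}_1$ in Eq.~(\ref{ADD_formula}), the required number of leaves follows immediately; a rough estimate under this assumption:
\begin{verbatim}
M4, M5 = 2.4e18, 1.0e3      # Lambda_P^(4), Lambda_P^(5) in GeV
ell = 1.0 / M5              # leaf length at the 5D Planck length, in GeV^-1
N = M4**2 / (M5**3 * ell)   # from M4^2 = M5^3 * (N * ell)
print(f"N ~ {N:.1e}")       # ~ 6e30 leaves, with M_KK = 1/ell = 1 TeV
\end{verbatim}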
In the context of the compactification of an extra dimension on a metric graph, the star graph is the most popular \cite{Kim:2005aa, Cacciapaglia:2006tg, Bechinger:2009qk, Abel:2010kw, Law:2010pv, Fonseca:2019aux}, mainly because, with AdS$_5$ leaves, these effective 5D braneworlds capture the low energy behavior of models with warped throats, arising in flux compactification in type IIB superstring theory \cite{Verlinde:1999fy, Klebanov:2000hb, Giddings:2001yu, Dimopoulos:2001ui, Dimopoulos:2001qd, Kachru:2003aw, Cacciapaglia:2006tg}, when one integrates out the modes associated to the transverse dimensions of the throats. In this work, we study a spacelike extra dimension compactified on a flat star/rose graph with identical leaves/petals by adopting a bottom-up approach: we are completely agnostic about the origin of this curious geometry in a UV theory like string theories, LQG, etc.
The authors of Ref.~\cite{Cacciapaglia:2006tg} analyzed a Klein-Gordon field, a Dirac field, a Maxwell field and Einsteinian gravity propagating in an extra dimension compactified on a star with $N$ leaves of different lengths. For that purpose, they define a copy of the same 5D field on each leaf. The copies are connected at the junction of the star through brane-localized interactions and the continuity of the metric. Of course, a different 5D field on each leaf is not equivalent to only one 5D field defined on the whole star graph. In order to recover only one zero mode propagating on the whole star, they add brane-localized mass terms and take the limit of infinite masses such that $N-1$ zero modes decouple from the Effective Field Theory (EFT). However, the meaning of an infinite brane-localized mass term is not clear when the cut-off of the EFT is not so far above the KK scale $M_{KK} = 1/\ell$. That is why we choose in this work the more straightforward approach of Refs.~\cite{Kim:2005aa, Fujimoto:2019fzb} where a 5D field is defined on the whole metric graph from the start. For that purpose, one needs a distribution theory on the star/rose graph allowing one to define a Lagrangian with possible field discontinuities at the junction. Instead of Schwartz's distribution theory \cite{Schwartz1, Schwartz2}, it is more appropriate to use a generalization of Kurasov's one \cite{KURASOV1996297}. In Section~\ref{spacetime_geom}, we give the definitions of a star/rose graph and introduce the elements of the distribution theory we need.
The KK mass spectrum and wave functions of a 5D massless real scalar field on a star/rose graph were studied in Ref.~\cite{Kim:2005aa}. In Section~\ref{KG_field}, we generalize it by adding a 5D mass to the scalar field. Besides, we clarify the method and hypotheses of this previous study, especially the hypothesis of continuity of the scalar field across the central vertex of the star/rose graph.
Recently, a 5D Dirac field with a compactification on a flat rose graph was considered in Ref.~\cite{Fujimoto:2019fzb}. They took petals of possibly different circumferences and included a 5D Dirac mass for the fermion. In this framework, they considered the rose graph as a master quantum graph since one can reduce it to a star graph by a suitable choice of junction conditions. They studied the general mathematical properties of the junction conditions for the rose graph and classified them. Their work was restricted to the analysis of the zero modes only: the KK mass spectrum and wavefunctions of the excited modes were not considered. They determined the number of zero mode solutions for each type of boundary conditions in their classification. Their work was motivated by the future goals of generating three fermion generations and the features of the flavor sector of the SM fermions from the zero modes of only one generation of 5D fermions. In Section~\ref{Dirac_field}, we study the particular case of a 5D massless Dirac field on a star/rose graph with identical leaves/petals. We use a different approach compared to the one of Ref.~\cite{Fujimoto:2019fzb}. Instead of imposing arbitrary junction conditions, we keep only the natural junction conditions at the vertices, i.e. the junction conditions for which the variation of the action at the vertices vanishes for arbitrary field variations \cite{Hilbert, Csaki:2003dt, Cheng:2010pt, Angelescu:2019viv, Nortier:2020xms}. Indeed, we prefer junction conditions originating from the variation of the action (and thus of the fields) at the vertices. We will see that the natural junction conditions depend only on the hypothesis of the continuity of the fields at the junction. In this approach, we need the Henningson-Sfetsos (HS) boundary terms for 5D fermions \cite{Henningson:1998cd, Mueck:1998iz, Arutyunov:1998ve, Henneaux:1998ch, Contino:2004vy, vonGersdorff:2004eq, vonGersdorff:2004cg, Angelescu:2019viv, Nortier:2020xms} whose importance was stressed recently in Refs.~\cite{Angelescu:2019viv, Nortier:2020xms}. Besides, we do not restrict ourselves to the study of the zero modes only; we determine the KK mass spectrum and wavefunctions of all KK modes.
In Section~\ref{ADD_star_rose}, we propose a model to reduce the gravity scale to the TeV scale with a large compactified volume, but with EW and KK scales which coincide. The SM fields are localized on the 3-brane at the central vertex of the star/rose, and we compute their couplings to spinless KK-gravitons. We find that the results are very different from standard ADD models in the literature, due to the very specific features of the rose/star graph with identical leaves/petals. We also discuss briefly what kind of physics is expected in the Planckian and trans-Planckian regime of the model, the possibility of a hidden sector made of KK-gravitons and of a dark matter candidate: a stable black hole remnant \cite{Koch:2005ks, Dvali:2010gv, Bellagamba:2012wz, Alberghi:2013hca}, the Planckion \cite{Treder:1985kb, Dvali:2016ovn}.
In Section~\ref{Dirac_Neutrinos}, we revisit the models of Refs.~\cite{Dienes:1998sb, ArkaniHamed:1998vp, Dvali:1999cn}, which generate small Dirac neutrino masses with right-handed neutrinos identified with the zero modes of gauge singlet fermions propagating in a large compactified volume, by adapting this idea to our spacetime geometries. We consider a toy model with only one generation of neutrinos. We use alternatively the zero mode approximation and the exact treatment of the brane-localized Yukawa coupling among the SM Higgs field, the 5D neutrino and the 4D left-handed neutrino of the SM particle content. For this exact treatment of a brane-localized Yukawa interaction, we use the 5D method that we developed in Refs.~\cite{Angelescu:2019viv, Nortier:2020xms} with other authors. We find that a large number of KK-neutrinos are sterile and are part of the hidden sector of the proposed models.
We conclude and propose some perspectives in Section~\ref{conclusion_star_rose}. In Appendix~\ref{conventions}, we give our conventions for the 5D Minkowski metric, the Dirac matrices and spinors.
\section{Star \& Rose Graphs}
\label{spacetime_geom}
\subsection{Geometries}
In this subsection, we define the geometries on which we compactify. The reader is referred to Chapter 1 of Ref.~\cite{Kuchment2014} for basic definitions, vocabulary and properties of metric and quantum graphs which we will use in what follows.
\paragraph{$N$-star --}
The $N$-star $\mathcal{S}_N$ (c.f. Fig.~\ref{star_rose_graph}) is defined as the flat equilateral star graph with $N$ bonds directed from 1 vertex of degree $N$ to $N$ vertices of degree 1. It is a flat 1D space of volume $L = N \ell$ obtained by gluing $N$ intervals (the leaves) of length $\ell$ at a common boundary $J$ (the junction). The $i^\text{th}$ leaf ends at the opposite side of the junction $J$: the boundary $B_i$. $\mathcal{S}_N$ is symmetric under the group $\Sigma_N$, which is the set of all permutations of the $N$ leaves. For example, $\mathcal{S}_1$ is the interval of length $\ell$, $\mathcal{S}_2$ is the interval of length $2\ell$ symmetric under a reflection ($\Sigma_2 \simeq \mathbb{Z}_2$) with respect to its midpoint $J$, and $\mathcal{S}_3$ is a claw. The pair of coordinates $(y, i) \in [0,\ell] \times \llbracket 1, N \rrbracket$ is assigned to every point of the $i^\text{th}$ leaf with the identification:
\begin{equation}
\forall (i, j) \in \llbracket 1, N \rrbracket^2, \ i \neq j, \ (0, i) \sim (0, j) \, .
\end{equation}
\paragraph{$N$-rose --}
The $N$-rose $\mathcal{R}_N$ (c.f. Fig.~\ref{star_rose_graph}) is defined as the flat equilateral rose graph (also called rhodonea or bouquet of circles), with $N$ directed loops (1 vertex of degree $2N$). It is a flat 1D space of volume $L=N\ell$ obtained by gluing both boundaries of each of $N$ intervals (the petals) of length $\ell = 2 \pi R$ at a single point $V$ (the vertex/junction), so that each petal is a circle of radius $R$. $\mathcal{R}_N$ is symmetric under the group $\Sigma_N$, which is the set of all permutations of the $N$ petals. For example, $\mathcal{R}_1$ is a circle, $\mathcal{R}_2$ a lemniscate, $\mathcal{R}_3$ a trifolium, and $\mathcal{R}_4$ a quadrifolium. The pair of coordinates $(y, i) \in [0, \ell] \times \llbracket 1, N \rrbracket$ is assigned to every point of the $i^\text{th}$ petal, with the identifications:
\begin{equation}
\forall i \in \llbracket 1, N \rrbracket, \ (0, i) \sim (\ell, i) \, ,
\end{equation}
and
\begin{equation}
\forall (i, j) \in \llbracket 1, N \rrbracket^2, \ i \neq j, \, (0, i) \sim (0, j) \, .
\end{equation}
\begin{figure}[h]
\begin{center}
\includegraphics[height=7cm]{star_rose.pdf}
\end{center}
\caption{Embeddings of a 5-star $\mathcal{S}_5$ (on the left) and of a 5-rose $\mathcal{R}_5$ (on the right) in $\mathbb{R}^2$.}
\label{star_rose_graph}
\end{figure}
\subsection{Distribution Theory on a Star/Rose Graph}
In order to study a field theory on $\mathcal{K}_N = \mathcal{S}_N$ or $\mathcal{R}_N$ with localized interactions at the vertices, one needs to define a distribution theory on these metric graphs. To be general, we allow the test functions to be discontinuous at the junction. The usual Schwartz distribution theory \cite{Schwartz1, Schwartz2} is thus not suitable, and one should instead consider a generalization to metric graphs of Kurasov's distribution theory \cite{KURASOV1996297}. To the best of our knowledge, such a generalization to metric graphs was not considered in the literature. We will thus define in this subsection the objects we need for our study.
\paragraph{Function on $\mathcal{K}_N$ --}
A complex function $f$ on the metric graph $\mathcal{K}_N$ is defined as:
\begin{equation}
f: \left\{
\begin{array}{ccc}
[0, \ell] \times \llbracket 1, N \rrbracket & \rightarrow & \mathbb{C} \, , \\
(y, i) & \mapsto & f(y, i) \, .
\end{array}
\right.
\end{equation}
For each $i \in \llbracket 1, N \rrbracket$, we define a function:
\begin{equation}
f_i: \left\{
\begin{array}{ccc}
[0, \ell] & \rightarrow & \mathbb{C} \, , \\
y & \mapsto & f_i(y) \equiv f(y, i) \, .
\end{array}
\right.
\end{equation}
\begin{itemize}
\item $f$ is continuous/differentiable at $(y,i)=(y_0,i_0)$ if $f_{i_0}$ is continuous/differentiable at $y=y_0$. The derivative of $f$ at $(y,i)=(y_0,i_0)$ is $\partial_y f(y,i) \equiv \partial_y f_i(y)$.
\item $f$ is continuous across the junction if
\begin{equation}
\forall (i,j), \ f(0,i)=f(0,j) \, .
\end{equation}
If it is not the case, $f$ is discontinuous/multivalued at the junction.
\end{itemize}
\paragraph{Test function on $\mathcal{K}_N$ --}
The set of test functions $\mathcal{T}$ is the set of all complex functions $\varphi$ on $\mathcal{K}_N$ such that the functions $\varphi_i$ are infinitely differentiable bounded functions on $[0, \ell]$. We stress that a function $\varphi \in \mathcal{T}$ and/or its derivatives can be discontinuous at the junction.
\paragraph{Distribution --}
A distribution $D \in \mathcal{T}'$ is a linear form on $\mathcal{T}$:
\begin{equation}
\forall \varphi \in \mathcal{T} \, , \ D[\varphi] \equiv \sum_{i=1}^N D_i[\varphi_i] \, ,
\end{equation}
where for every compact set $\mathfrak{B}_i \subset [0, \ell]$, there exist constants $C_i$ and $m_i$ such that
\begin{equation}
\forall \varphi \in \mathcal{T} \, , \ \text{supp}(\varphi_i) \subset \mathfrak{B}_i \, , \ \left| D_i[\varphi_i] \right| \leq C_i \sum_{\alpha_i\leq m_i} \sup \left| \partial_y^{\alpha_i} \varphi_i (y) \right| \, .
\end{equation}
\paragraph{Regular distribution --}
For any integrable complex function $f$ on $\mathcal{K}_N$, one can define a regular distribution $\widetilde{f} \in \mathcal{T}'$ such that
\begin{equation}
\forall \varphi \in \mathcal{T} \, , \ \widetilde{f}[\varphi] \equiv \sum_{i=1}^N \widetilde{f}_i[\varphi_i] \ \ \ \text{with} \ \ \ \widetilde{f}_i[\varphi_i] \equiv \int_{0}^{\ell} dy \ f(y, i) \, \varphi(y, i) \, .
\end{equation}
A distribution which is not regular is singular.
\paragraph{Product of distributions --}
If $D \in \mathcal{T}'$ and $f \in \mathcal{T}$, one can define the product $f D$ as
\begin{equation}
(fD)[\varphi] \equiv D[f \varphi] \, .
\end{equation}
If $\widetilde{f}$ is the regular distribution associated to $f$, the product $\widetilde{f} D$ is defined as
\begin{equation}
\widetilde{f}D \equiv fD \, .
\end{equation}
\paragraph{Dirac distribution --}
The Dirac distribution on $\mathcal{K}_N$ centered at $(y_0,i_0)$ is the singular distribution $\delta_{y_0,i_0}$ defined as
\begin{equation}
\forall \varphi \in \mathcal{T} \, , \ \delta_{y_0,i_0}[\varphi] \equiv \varphi(y_0, i_0) \, .
\end{equation}
We want to build a Dirac-like distribution $\delta_{J/V}$ centered at $J/V$ to localize interactions at the junction. Consider the $N$-star $\mathcal{S}_N$. Let $\eta$ be an infinitely differentiable real function on $\mathcal{S}_N$ such that
\begin{equation}
\forall y \in [0, \ell] \, , \ \forall (i, j) \in \llbracket 1, N \rrbracket^2 \, , \ \eta(y,i) = \eta(y,j) \, ,
\label{def_eta}
\end{equation}
and
\begin{equation}
\sum_{i=1}^N \int_0^\ell dy \ \eta(y,i) = 1 \, .
\label{norm_eta}
\end{equation}
We define $\eta_\epsilon$:
\begin{equation}
\eta_\epsilon (y,i) = \dfrac{1}{\epsilon} \, \eta \left( \dfrac{y}{\epsilon}, i \right) \, ,
\label{dilat_eta}
\end{equation}
with $\epsilon > 0$, and we associate the regular distribution $\widetilde{\eta_\epsilon}$ to it. The Dirac distribution $\delta_J$ at the junction $J$ is defined as the weak limit:
\begin{equation}
\delta_J \equiv \lim_{\epsilon \to 0} \widetilde{\eta_\epsilon} \, .
\end{equation}
We have
\begin{align}
\forall \varphi \in \mathcal{T} \, , \ \delta_J[\varphi]
&= \lim_{\epsilon \to 0} \sum_{i=1}^N \int_0^\ell dy \ \eta_\epsilon (y,i) \, \varphi (y,i) \, , \nonumber \\
&= \lim_{\epsilon \to 0} \sum_{i=1}^N \int_0^\ell dy \ \eta (y,i) \, \varphi ( \epsilon y,i) \, , \nonumber \\
&= \sum_{i=1}^N \varphi(0,i) \int_0^\ell dy \ \eta(y,i) \, , \nonumber \\
&= \dfrac{1}{N} \, \sum_{i=1}^N \varphi(0,i) \, .
\end{align}
We conclude that
\begin{equation}
\delta_J = \dfrac{1}{N} \, \sum_{i=1}^N \delta_{0,i} \, .
\end{equation}
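In particular, if the test function $\varphi$ is continuous across the junction, one recovers the action of a standard Dirac distribution centered at $J$:
\begin{equation}
\delta_J[\varphi] = \dfrac{1}{N} \, \sum_{i=1}^N \varphi(0,i) = \varphi(0,1) \, .
\end{equation}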
In the same way, one can build a Dirac distribution $\delta_V$ at the vertex $V$ of the $N$-rose $\mathcal{R}_N$ such that
\begin{equation}
\delta_V \equiv \lim_{\epsilon \to 0} \widetilde{\eta_\epsilon} \, ,
\end{equation}
where $\eta_\epsilon$ is defined as in Eq.~\eqref{dilat_eta} and $\eta$ is an infinitely differentiable real function on $\mathcal{R}_N$ such that
\begin{equation}
\forall y \in [0, \ell] \, , \ \forall (i, j) \in \llbracket 1, N \rrbracket^2 \, , \ \eta(y,i)=\eta(y,j) \ \text{and} \ \eta(y-\ell,i)=\eta(y-\ell,j) \, ,
\end{equation}
and normalized as in Eq.~\eqref{norm_eta}. Then,
\begin{equation}
\delta_V = \dfrac{1}{2N} \, \sum_{i=1}^N \left( \delta_{0,i} + \delta_{\ell,i} \right) \, .
\end{equation}
We have thus defined a Dirac distribution centered at the junction $J/V$ which acts on test functions possibly discontinuous at $J/V$.
\paragraph{Distributional derivative --} In Kurasov's distribution theory, one defines a distributional derivative in the same way as in Schwartz's distribution theory. The distributional derivative $\partial_y D$ of a distribution $D \in \mathcal{T}'$ is defined by
\begin{equation}
\forall \varphi \in \mathcal{T} \, , \ \partial_y D [\varphi] = - D [\partial_y \varphi] \, .
\end{equation}
The derivative of a regular distribution $\widetilde{f} \in \mathcal{T}'$ is thus
\begin{equation}
\partial_y \widetilde{f} = \left\{ \partial_y f \right\} + \sum_{i=1}^N \left( \delta_{0, i} - \delta_{\ell, i} \right) f \, ,
\end{equation}
where $\left\{ \partial_y f \right\}$ is the regular distribution associated with the derivative $\partial_y f$. As in Kurasov's original distribution theory, the distributional derivative does not coincide with the derivative defined in the classical sense. For instance, the distributional derivative of the regular distribution associated with a constant function is not zero. For the unit function $\mathbf{1}: (y, i) \mapsto 1$, we have
\begin{equation}
\partial_y \widetilde{\mathbf{1}} = \sum_{i=1}^N \left( \delta_{0, i} - \delta_{\ell, i} \right) \, .
\end{equation}
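One can check this result directly from the definition of the distributional derivative:
\begin{equation}
\partial_y \widetilde{\mathbf{1}} \, [\varphi] = - \widetilde{\mathbf{1}} \, [\partial_y \varphi] = - \sum_{i=1}^N \int_0^\ell dy \ \partial_y \varphi(y,i) = \sum_{i=1}^N \left[ \varphi(0,i) - \varphi(\ell,i) \right] \, ,
\end{equation}
which is indeed the action of $\sum_{i=1}^N \left( \delta_{0, i} - \delta_{\ell, i} \right)$.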
Instead, it would be more natural to define the distributional derivative as
\begin{equation}
\forall \varphi \in \mathcal{T} \, , \ \partial_y D [\varphi] = - D [\partial_y \varphi] - \sum_{i=1}^N \left[ \left( \delta_{0, i} - \delta_{\ell, i} \right) D \right] [\varphi] \, .
\end{equation}
However, in this case, a distributional derivative of the Dirac distributions $\delta_{0/\ell, i}$ and $\delta_{J/V}$ would involve squared Dirac distributions, which are not defined. Therefore, the price to pay in order to define a useful distributional derivative is to have extra boundary terms at the vertices, compared to the traditional distributional derivative of a regular distribution in Schwartz's distribution theory. One can thus define the $n^\text{th}$ derivative of the Dirac distribution $\delta_{y_0, i_0}$ as
\begin{equation}
\partial_y^n \delta_{y_0, i_0} [\varphi] = (-1)^n \, \partial_y^n \varphi(y_0, i_0) \, .
\end{equation}
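For instance, by linearity, the first derivative of the Dirac distribution at the junction of the $N$-star acts as
\begin{equation}
\partial_y \delta_J \, [\varphi] = - \dfrac{1}{N} \, \sum_{i=1}^N \partial_y \varphi(0, i) \, .
\end{equation}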
\paragraph{Moment expansion --}
We will adapt the moment expansion \cite{Estrada:1994} of Schwartz's distribution theory to our case. Consider the $N$-star. The Taylor series of a test function $\varphi \in \mathcal{T}$ is
\begin{equation}
\forall y \in [0, \ell], \forall i \in \llbracket 1, N \rrbracket, \ \varphi(y,i) = \sum_{n=0}^{+\infty} \partial_y^n \varphi(0,i) \, \dfrac{y^n}{n!} \, .
\end{equation}
Then, the action of the previous regular distribution $\widetilde{\eta}$ is
\begin{equation}
\widetilde{\eta} [\varphi] = \sum_{i=1}^N \int_0^\ell dy \ \eta(y,i) \sum_{n=0}^{+\infty} \partial_y^n \varphi(0,i) \, \dfrac{y^n}{n!} \, .
\end{equation}
We define the $n^\text{th}$ moment of the function $\eta$ as
\begin{equation}
\mu_n = \widetilde{\eta} [y^n] = \sum_{i=1}^N \int_0^\ell dy \ \eta(y,i) \, y^n \, .
\end{equation}
Thus,
\begin{align}
\widetilde{\eta} [\varphi] &= \left( \sum_{n=0}^{+\infty} \sum_{i=1}^N \dfrac{(-1)^n \, \mu_n}{N n!} \, \partial_y^n \delta_{0,i} \right) [\varphi] \, , \nonumber \\
&= \left( \sum_{n=0}^{+\infty} \dfrac{(-1)^n \, \mu_n}{n!} \, \partial_y^n \delta_J \right) [\varphi] \, .
\end{align}
A similar result is obtained with the $N$-rose. We define the moment expansion of $\widetilde{\eta}$ by
\begin{equation}
\widetilde{\eta} = \sum_{n=0}^{+\infty} \dfrac{(-1)^n \, \mu_n}{n!} \, \partial_y^n \delta_{J/V} \, .
\end{equation}
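As a consistency check (on the $N$-star, for instance), the moments of the rescaled function $\eta_\epsilon$ of Eq.~\eqref{dilat_eta} are $\widetilde{\eta_\epsilon} \, [y^n] = \epsilon^n \mu_n$, with $\mu_0 = 1$ from the normalization \eqref{norm_eta}, so
\begin{equation}
\widetilde{\eta_\epsilon} = \sum_{n=0}^{+\infty} \dfrac{(-\epsilon)^n \, \mu_n}{n!} \, \partial_y^n \delta_{J} \ \underset{\epsilon \, \to \, 0}{\longrightarrow} \ \delta_{J} \, ,
\end{equation}
in agreement with the weak limit which defines $\delta_{J}$.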
\subsection{Star/Rose Extra Dimension}
We want to study a field theory on the flat factorizable geometry $\mathcal{M}_4 \times \mathcal{K}_N$, with $\mathcal{K}_N = \mathcal{S}_N$ or $\mathcal{R}_N$. The coordinates can be split as $(z^M, i) = (x^\mu, y, i)$, where $x^\mu$ are the coordinates of $\mathcal{M}_4$. One has $M \in \llbracket 0, 4 \rrbracket$, and $\mu \in \llbracket 0, 3 \rrbracket$. The junction $J/V$ and the boundaries $B_i$ break explicitly the 5D Lorentz-Poincaré symmetries down to the 4D ones, but the 5D symmetries are still preserved locally in the bulk, in the same way as for orbifold fixed points. The junction and boundaries are thus 3-branes where one can localize 4D fields and brane-localized kinetic and/or interaction terms for the bulk fields. The 3-branes at the boundaries are called $B_i$-branes, and the 3-brane at the junction is called $J/V$-brane for $\mathcal{K}_N = \mathcal{S}_N/\mathcal{R}_N$.
One can consider 5D fields which propagate only in one petal/leaf, as in Refs.~\cite{Cacciapaglia:2006tg, Bechinger:2009qk, Abel:2010kw, Law:2010pv, Fonseca:2019aux}, or 5D fields which propagate in the whole star/rose graph. In the latter case, it is straightforward to generalize our discussion of functions defined on $\mathcal{K}_N$ to 5D fields. The 5D fields can be discontinuous at the junction and thus multivalued at this point. This can be interpreted from an EFT point of view (which is always the relevant one in any realistic model with interactions): the value of the field at the point $(x^\mu, 0, i)$ is the value in the neighborhood of the $J/V$-brane but outside its core, whose microscopic description is beyond the range of validity of the EFT. One should thus think of the point $(0,i)$ of the graph as $(0^+, i)$. Therefore, the fact that the field is multivalued at the junction is not a problem. It is convenient to define the field theory within a distributional approach with respect to the coordinates $(y, i)$: the 5D fields are functions of $x^\mu$ but linear forms acting on test functions of $(y,i)$.
\section{5D Klein-Gordon Field on a Star/Rose Graph}
\label{KG_field}
\subsection{Klein-Gordon Equation \& Junction/Boundary Conditions}
\label{KG_field_star}
We study a 5D real scalar field $\Phi$ of mass dimension 3/2 and of mass $M_\Phi$ defined on $\mathcal{M}_4 \times \mathcal{K}_N$. The 5D fields $\Phi_i$ are supposed to be smooth functions on the interval $[0, \ell]$. We associate to $\Phi$ a regular distribution $\widetilde{\Phi}$. The Lagrangian $\widetilde{\mathcal{L}_\Phi}$ describing the dynamics of $\Phi$ is defined at the level of distributions. The action is
\begin{equation}
S_\Phi = \int d^4x \ \widetilde{\mathcal{L}_\Phi} [\mathbf{1}] \, ,
\end{equation}
with the unit test function $\mathbf{1}: (y, i) \mapsto 1$, and
\begin{equation}
\widetilde{\mathcal{L}_\Phi} = - \dfrac{1}{2} \, \widetilde{\Phi} \Box_5 \widetilde{\Phi} - \dfrac{1}{4} \, \widetilde{\Phi}^2 \, \partial_y \sum_{i=1}^N \left( \delta_{\ell, i} - \delta_{0, i} \right) - \dfrac{M_\Phi^2}{2} \, \widetilde{\Phi}^2 \, ,
\label{L_Phi_star}
\end{equation}
with $M_\Phi^2 \geq 0$. We do not include brane-localized kinetic/mass/interaction terms, and the boundary terms are chosen to have Neumann-like conditions at the junction and boundaries. The action reduces to the standard form for a Klein-Gordon field:
\begin{equation}
S_\Phi = \int d^4x \ \sum_{i=1}^N \int_0^\ell dy \left( \dfrac{1}{2} \, \partial^M \Phi \partial_M \Phi - \dfrac{M_\Phi^2}{2} \, \Phi^2 \right) \, ,
\label{S_Phi_star}
\end{equation}
where the boundary terms coming from the distributional derivatives cancel each other.
$\Phi$ can be continuous or discontinuous across the $J/V$-brane. This feature depends on the microscopic structure of the $J/V$-brane in the UV completion. We apply Hamilton's principle to the action $S_\Phi$, with arbitrary variations $\delta \Phi$ of $\Phi$ in the bulk and on the branes. The $\delta \Phi$'s inherit the (dis)continuity properties of $\Phi$ across the $J/V$-brane, and we extract the junction and boundary conditions from integrals over total derivatives. We get the Klein-Gordon equation:
\begin{equation}
\left( \partial^M \partial_M + M_\Phi^2 \right) \Phi \left( x^\mu, y, i \right) = 0 \, ,
\label{KG_Phi_star}
\end{equation}
Neumann boundary conditions on the $B_i$-branes:
\begin{equation}
\partial_{y} \Phi \left( x^\mu, \ell, i \right) = 0 \, ,
\label{BCs_Phi_star}
\end{equation}
and the junction condition depends on the (dis)continuity of $\Phi$:
\begin{itemize}
\item If $\Phi$ is allowed to be discontinuous across the junction, we get Neumann junction conditions:
\begin{equation}
\partial_y \Phi(x^\mu, 0/\ell, i) = 0 \, .
\end{equation}
We say that the $J/V$-brane is \textit{airtight} to the field $\Phi$, which means that the spectrum is equivalent to the one obtained by disconnecting the $N$ bonds at the vertex $J/V$ into $N$ disjoint intervals. A brane which is airtight to the field behaves like a boundary for this field.
\item If we impose to $\Phi$ to be continuous across the junction, we get a Neumann-Kirchhoff junction condition:
\begin{equation}
\left\{
\begin{array}{rcl}
\displaystyle{\sum_{i=1}^N \partial_{y} \Phi \left( x^\mu, 0, i \right) = 0} & \text{for} & \mathcal{K}_N = \mathcal{S}_N \, , \\ \\
\displaystyle{\sum_{i=1}^N \left[ \partial_{y} \Phi \left( x^\mu, y, i \right) \right]_{y=0}^{\ell} = 0} & \text{for} & \mathcal{K}_N = \mathcal{R}_N \, ,
\end{array}
\right.
\label{JC_Phi_star}
\end{equation}
with $\left[ g(y) \right]_{y=a}^b = g(b) - g(a)$. When $\Phi$ is continuous across the junction, the leaves/petals communicate through the $J/V$-brane which is thus not airtight.
\end{itemize}
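These junction and boundary conditions can be read off from the vertex/boundary terms in the variation of the action \eqref{S_Phi_star}, which take the form (up to an overall sign fixed by the metric convention of Appendix~\ref{conventions}):
\begin{equation}
\delta S_\Phi \supset - \int d^4x \ \sum_{i=1}^N \left[ \partial_y \Phi \left( x^\mu, y, i \right) \, \delta \Phi \left( x^\mu, y, i \right) \right]_{y=0}^{\ell} \, .
\end{equation}
At the $B_i$-branes, the $\delta \Phi(x^\mu, \ell, i)$'s are arbitrary, which imposes the Neumann conditions \eqref{BCs_Phi_star}. At the junction, the $\delta \Phi$'s at the attachment points are either independent (discontinuous case), which imposes the Neumann conditions there, or equal to a common value (continuous case), which imposes only the Kirchhoff condition \eqref{JC_Phi_star}.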
\subsection{Kaluza-Klein Dimensional Reduction}
\label{KK_scalar_20}
We will not study here the KK dimensional reduction when $\Phi$ is allowed to be discontinuous across the junction, since it reduces to a 5D scalar field on $N$ disjoint intervals. The case of a 5D scalar field on an interval is very well known in the literature \cite{Dobrescu:2008zz}. In the following, we focus on the continuous case.
\subsubsection{Separation of Variables}
We perform the KK dimensional reduction of the 5D field theory to an effective 4D one in terms of KK degrees of freedom. A general 5D field $\Phi$ can be expanded as
\begin{equation}
\Phi \left( x^\mu, y, i \right) = \sum_b \ \sum_{n_b} \ \sum_{d_b} \phi^{(b, \, n_b, \, d_b)} \left( x^\mu \right) \, f_\phi^{(b, \, n_b, \, d_b)} \left( y, i \right) \, .
\label{KK_Phi_star}
\end{equation}
We label each KK mode solution by a triplet $(b, n_b, d_b)$, where:
\begin{itemize}
\item $b$ labels the different KK towers for the same 5D field which are defined by different mass spectra (see below);
\item $n_b$ labels the levels in the KK tower $b$;
\item $d_b$ labels the degenerate modes for each KK level $(b, n_b)$. We choose the notation $d_b$, instead of the more appropriate one $d(b, n_b)$, to simplify the notation: we will see that each KK level of a given tower $b$ has the same degeneracy, so there is no ambiguity.
\end{itemize}
The 5D equation \eqref{KG_Phi_star} splits into the Klein-Gordon equations for the 4D fields $\phi^{(b, \, n_b, \, d_b)}$:
\begin{equation}
\left( \partial^\mu \partial_\mu + \left[ m_\phi^{(b, \, n_b)} \right]^2 \right) \phi^{(b, \, n_b, \, d_b)} (x^\mu) = 0 \, ,
\label{KG_KK-Phi_star}
\end{equation}
with
\begin{equation}
\left[ m_\phi^{(b, \, n_b)} \right]^2 = M_\Phi^2 + \left[ k_\phi^{(b, \, n_b)} \right]^2 \, ,
\label{m_func_k}
\end{equation}
and the differential equations for the wave functions $f_\phi^{(b, \, n_b, \, d_b)}$:
\begin{equation}
\left( \partial_{y}^2 + \left[ k_\phi^{(b, \, n_b)} \right]^2 \right) f_\phi^{(b, \, n_b, \, d_b)} \left( y, i \right) = 0 \, ,
\label{wave_eq_Phi_star}
\end{equation}
where $\left[ m_\phi^{(b, \, n_b)} \right]^2 \geq 0$ is the mass squared of the KK modes $\phi^{(b, \, n_b, \, d_b)}$, and $\left[ k_\phi^{(b, \, n_b)} \right]^2 \in [0, \ + \infty)$ is an eigenvalue of the operator $-\partial_{y}^2$ on $\mathcal{K}_N$ associated with the eigenfunctions $f_\phi^{(b, \, n_b, \, d_b)}$. The orthonormalization conditions for the wave functions $f_\phi^{(b, \, n_b, \, d_b)}$ are
\begin{equation}
\sum_{i=1}^N \int_0^\ell dy \ f_\phi^{(b, \, n_b, \, d_b)}(y, i) \, f_\phi^{(b', \, n'_{b'}, \, d'_{b'})}(y, i) = \delta^{bb'} \, \delta^{n_{b} n'_{b'}} \, \delta^{d_{b} d'_{b'}} \, .
\label{norm_wave_Phi_star}
\end{equation}
The conditions on the 5D field $\Phi$ on the 3-branes are naturally transposed to conditions on the KK wave functions $f_\phi^{(b, \, n_b, \, d_b)}$.
\subsubsection{Zero Modes}
We are looking for zero mode solutions ($b=0$, $n_0 = 0$, $k_\phi^{(0, \, 0)}=0$) of Eq.~\eqref{wave_eq_Phi_star}. For both compactifications on $\mathcal{S}_N$ and $\mathcal{R}_N$, there is only one zero mode ($d_0 \in \{1\}$) whose wave function is flat, such that:
\begin{equation}
f_\phi^{(0, \, 0, \, 1)} (y, i) = \sqrt{\dfrac{1}{N \ell}} \, .
\label{zero_mode_sol_Phi_2}
\end{equation}
\subsubsection{Excited Modes}
\subsubsection*{\boldmath \textcolor{black}{$a)$ $N$-Star}}
The general solutions of Eq.~\eqref{wave_eq_Phi_star}, satisfying the Neumann boundary conditions \eqref{BCs_Phi_star}, are of the form:
\begin{equation}
f_\phi^{(b, \, n_b, \, d_b)} (y, i) = A_i^{(b, \, n_b, \, d_b)} \, \cos \left[ k_\phi^{(b, \, n_b)} \, (y-\ell) \right] \, ,
\end{equation}
with $A_i^{(b, \, n_b, \, d_b)} \in \mathbb{R}$. The continuity condition on the wave functions at the $J$-brane gives
\begin{equation}
\forall (i,j) \, , \ A_i^{(b, \, n_b, \, d_b)} \, \cos \left[ k_\phi^{(b, \, n_b)} \, \ell \right] = A_j^{(b, \, n_b, \, d_b)} \, \cos \left[ k_\phi^{(b, \, n_b)} \, \ell \right] \, ,
\label{cont_phi_star_20}
\end{equation}
which leads to two kinds of excited KK modes: the KK wave functions $f_\phi^{(b, \, n_b, \, d_b)}$ can vanish or not on the $J$-brane.
\subsubsection*{\boldmath \textcolor{black}{\textit{First case: $f_\phi^{(b, \, n_b, \, d_b)}(0,i)=0$}}}
Eq.~\eqref{cont_phi_star_20} gives the KK mass spectrum
\begin{equation}
\cos \left[ k_\phi^{(b, \, n_b)} \, \ell \right] = 0 \ \ \ \underset{b=1}{\Longrightarrow} \ \ \ k_\phi^{(1, \, n_1)} = \left( n_1 + \dfrac{1}{2} \right) \dfrac{\pi}{\ell} \, , \ n_1 \in \mathbb{N} \, ,
\label{mass_spect_Phi_star_2}
\end{equation}
which defines the KK tower $b=1$. The Neumann-Kirchhoff junction condition \eqref{JC_Phi_star} implies
\begin{equation}
\sum_{i=1}^N A_i^{(1, \, n_1, \, d_1)} = 0 \, .
\label{eq_Ai_20}
\end{equation}
Each KK level $(1, n_1)$ is thus $N-1$ times degenerate ($d_1 \in \llbracket 1, N-1 \rrbracket$) and the KK wave functions are
\begin{equation}
f_\phi^{(1, \, n_1, \, d_1)} (y, i) = \epsilon^{(d_1)}_i \sqrt{\dfrac{2}{\ell}} \, \cos \left[ k_\phi^{(1, \, n_1)} \, (y-\ell) \right] \, ,
\end{equation}
with the $(N-1)$-vector basis:
\begin{align}
\overrightarrow{\epsilon^{(1)}} &= \dfrac{1}{\sqrt{2}} \left( 1, -1, 0, \cdots, 0 \right) \, , \nonumber \\
\overrightarrow{\epsilon^{(2)}} &= \dfrac{1}{\sqrt{6}} \left( 1, 1, -2, 0, \cdots, 0 \right) \, , \nonumber \\
\vdots \nonumber \\
\overrightarrow{\epsilon^{(N-1)}} &= \dfrac{1}{\sqrt{N(N-1)}} \left( 1, 1, \cdots, 1, -(N-1) \right) \, .
\label{basis_epsi}
\end{align}
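One can check that these vectors satisfy
\begin{equation}
\overrightarrow{\epsilon^{(d_1)}} \cdot \overrightarrow{\epsilon^{(d'_1)}} = \delta^{d_1 d'_1} \ \ \ \text{and} \ \ \ \sum_{i=1}^N \epsilon^{(d_1)}_i = 0 \, ,
\end{equation}
i.e. they form an orthonormal basis of the hyperplane of $\mathbb{R}^N$ orthogonal to the direction $(1, \cdots, 1)/\sqrt{N}$, which guarantees both Eq.~\eqref{eq_Ai_20} and the orthonormalization conditions \eqref{norm_wave_Phi_star}.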
\subsubsection*{\boldmath \textcolor{black}{\textit{Second case: $f_\phi^{(b, \, n_b, \, d_b)}(0,i) \neq 0$}}}
We have $\cos \left[ k_\phi^{(b, \, n_b)} \, \ell \right] \neq 0$ so Eq.~\eqref{cont_phi_star_20} gives
\begin{equation}
\forall (i, j) \, , \ A_i^{(b, \, n_b, \, d_b)} = A_j^{(b, \, n_b, \, d_b)} \equiv A^{(b, \, n_b, \, d_b)} \, .
\label{eq_A_20}
\end{equation}
The Kirchhoff junction condition \eqref{JC_Phi_star} leads to the KK mass spectrum
\begin{equation}
\sin \left[ k_\phi^{(b, \, n_b)} \, \ell \right] = 0 \ \ \ \underset{b=2}{\Longrightarrow} \ \ \ k_\phi^{(2, \, n_2)} = n_2 \, \dfrac{\pi}{\ell} \, , \ n_2 \in \mathbb{N}^* \, ,
\label{mass_spect_Phi_star_1}
\end{equation}
which defines the KK tower with $b=2$ whose KK levels are not degenerate ($d_2 \in \{1\}$). The KK wave functions are
\begin{equation}
f_\phi^{(2, \, n_2, \, 1)} (y, i) = \sqrt{\dfrac{2}{N\ell}} \, \cos \left[ k_\phi^{(2, \, n_2)} \, (y-\ell) \right] \, .
\end{equation}
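As a cross-check, for $N=1$ the tower $b=1$ is absent ($d_1$ runs over an empty set) and one recovers from the tower $b=2$, together with the zero mode, the usual KK spectrum $k_\phi = n \, \pi/\ell$, $n \in \mathbb{N}$, of a real scalar field on an interval of length $\ell$ with Neumann boundary conditions at both ends.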
\subsubsection*{\boldmath \textcolor{black}{$b)$ $N$-Rose}}
Again, to satisfy the continuity condition at the $V$-brane, the KK wave functions $f_\phi^{(b, \, n_b, \, d_b)}$ can vanish or not at the vertex.
\subsubsection*{\boldmath \textcolor{black}{\textit{First case: $f_\phi^{(b, \, n_b, \, d_b)}(0,i)=0$}}}
The general solutions of Eq.~\eqref{wave_eq_Phi_star} with $f_\phi^{(b, \, n_b, \, d_b)}(0,i)=0$ are of the form:
\begin{equation}
f_\phi^{(b, \, n_b, \, d_b)} (y, i) = A_i^{(b, \, n_b, \, d_b)} \, \sin \left[ k_\phi^{(b, \, n_b)} \, y \right] \, ,
\label{gen_prof_rose_50}
\end{equation}
with $A_i^{(b, \, n_b, \, d_b)} \in \mathbb{R}$. The periodicity condition for each petal at the vertex gives $\sin \left[ k_\phi^{(b, \, n_b)} \, \ell \right] = 0$. Moreover, the Neumann-Kirchhoff junction condition \eqref{JC_Phi_star} implies
\begin{equation}
\left( \cos \left[ k_\phi^{(b, \, n_b)} \, \ell \right] - 1 \right) \sum_{i=1}^N A_i^{(b, \, n_b, \, d_b)} = 0 \, .
\label{JC_phi_30}
\end{equation}
\newpage
There are thus two possibilities:
\begin{itemize}
\item First possibility:\\
The KK mass spectrum is
\begin{equation}
\left\{
\begin{array}{rcl}
\cos \left[ k_\phi^{(b, \, n_b)} \, \ell \right] &\neq& 1 \\
\sin \left[ k_\phi^{(b, \, n_b)} \, \ell \right] &=& 0
\end{array}
\right.
\ \ \ \underset{b=1}{\Longrightarrow} \ \ \ k_\phi^{(1, \, n_1)} = (2n_1+1) \, \dfrac{\pi}{\ell} \, , \ n_1 \in \mathbb{N} \, ,
\label{mass_spect_Phi_rose_3}
\end{equation}
and defines the KK tower $b=1$. From the conditions \eqref{JC_phi_30}, we get Eq.~\eqref{eq_Ai_20} so $d_1 \in \llbracket 1, N-1 \rrbracket$ and the KK wave functions are
\begin{equation}
f_\phi^{(1, \, n_1, \, d_1)} (y, i) = \epsilon^{(d_1)}_i \sqrt{\dfrac{2}{\ell}} \, \sin \left[ k_\phi^{(1, \, n_1)} \, y \right] \, ,
\end{equation}
with the $(N-1)$-vector basis \eqref{basis_epsi}.
\item Second possibility:\\
The KK mass spectrum is
\begin{equation}
\left\{
\begin{array}{rcl}
\cos \left[ k_\phi^{(b, \, n_b)} \, \ell \right] &=& 1 \\
\sin \left[ k_\phi^{(b, \, n_b)} \, \ell \right] &=& 0
\end{array}
\right.
\ \ \ \underset{b=2}{\Longrightarrow} \ \ \ k_\phi^{(2, \, n_2)} = 2n_2 \, \dfrac{\pi}{\ell} \, , \ n_2 \in \mathbb{N}^* \, ,
\label{mass_spect_Phi_rose_2}
\end{equation}
which satisfies Eq.~\eqref{JC_phi_30} and defines the KK tower $b=2$. We get $d_2 \in \llbracket 1, N \rrbracket$ and the KK wave functions are
\begin{equation}
f_\phi^{(2, \, n_2, \, d_2)} (y, i) = \eta^{(d_2)}_i \sqrt{\dfrac{2}{\ell}} \, \sin \left[ k_\phi^{(2, \, n_2)} \, y \right] \, ,
\label{wave_funct_1}
\end{equation}
with the $N$-vector basis:
\begin{align}
\overrightarrow{\eta^{(1)}} &= \left( 1, 0, \cdots, 0 \right) \, , \nonumber \\
\overrightarrow{\eta^{(2)}} &= \left( 0, 1, 0, \cdots, 0 \right) \, , \nonumber \\
\vdots \nonumber \\
\overrightarrow{\eta^{(N)}} &= \left( 0, \cdots, 0, 1 \right) \, .
\label{basis_eta}
\end{align}
\end{itemize}
\subsubsection*{\boldmath \textcolor{black}{\textit{Second case}: $f_\phi^{(b, \, n_b, \, d_b)}(0,i) \neq 0$}}
The KK mass spectrum is the same as in Eq.~\eqref{mass_spect_Phi_rose_2}. Each KK level $(b, n_b)$ is thus degenerate with the $N$ KK modes of the level $(2, n_2)$ so $d_2 \in \llbracket 1, N+1 \rrbracket$: we label the KK modes with non-vanishing wave functions at the junction by the triplet $(2, n_2, N+1)$. The KK wave functions are
\begin{equation}
f_\phi^{(2, \, n_2, \, N+1)} (y, i) = \sqrt{\dfrac{2}{N \ell}} \, \cos \left[ k_\phi^{(2, \, n_2)} \, y \right] \, .
\label{wave_phi_rose_interm}
\end{equation}
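As a cross-check, for $N=1$ the tower $b=1$ is absent and the remaining levels $k_\phi^{(2, \, n_2)} = 2 n_2 \, \pi / \ell$ are twice degenerate ($d_2 \in \{1, 2\}$), with the sine and cosine wave functions of Eqs.~\eqref{wave_funct_1} and \eqref{wave_phi_rose_interm}: one recovers the usual KK spectrum of a real scalar field on a circle of circumference $\ell$.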
\vspace{1cm}
Finally, we stress that all KK towers labeled by $b$ are present in the spectrum: they do not correspond to different models. The 5D field $\Phi$ has one zero mode of mass $M_\Phi$ in both geometries ($\mathcal{K}_N = \mathcal{S}_N$ or $\mathcal{R}_N$) and excited modes. For a massless 5D field ($M_\Phi=0$), the mass gap between the KK modes is of the order of $1/\ell$. Some of the KK modes have wave functions which vanish at the junction. We will see a physical application of these results in Subsection~\ref{pheno_star_rose_graviton}, where we will study a toy model of a 5D spinless graviton, which is just a real scalar field with $M_\Phi = 0$. The zero mode is thus identified with the 4D massless graviton (where we do not take into account the spin).
\section{5D Massless Dirac Field on a Star/Rose Graph}
\label{Dirac_field}
\subsection{Dirac-Weyl Equations \& Junction/Boundary Conditions}
We study a 5D massless Dirac field
\begin{equation}
\Psi =
\begin{pmatrix}
\Psi_L \\
\Psi_R
\end{pmatrix}
\end{equation}
of mass dimension 2 defined on $\mathcal{M}_4 \times \mathcal{K}_N$, where the fields $\Psi_L$ and $\Psi_R$ describe fermion fields of left and right-handed 4D chirality respectively. To the function $\Psi$, we associate the regular distribution $\widetilde{\Psi}$. The action is
\begin{equation}
S_\Psi = \int d^4x \ \widetilde{\mathcal{L}_\Psi} [\mathbf{1}] \, ,
\end{equation}
with the Lagrangian
\begin{equation}
\widetilde{\mathcal{L}_\Psi}
= \dfrac{i}{2} \, \bar{\widetilde{\Psi}} \Gamma^M \overleftrightarrow{\partial_M} \widetilde{\Psi} + \sum_{i=1}^N \dfrac{s_i}{2} \, \bar{\widetilde{\Psi}} \widetilde{\Psi} \, \left( \delta_{0,i} - \delta_{\ell,i} \right) \, ,
\end{equation}
where $\bar{\widetilde{\Psi}} = \widetilde{\Psi}^\dagger \Gamma^0$, $\overleftrightarrow{\partial_M} = \vec{\partial_M} - \overleftarrow{\partial_M}$ and $\Gamma^M = \left( \gamma^\mu, i \gamma^5 \right)$ are the 5D Dirac matrices\footnote{Our conventions for the Dirac algebra are given in Appendix~\ref{conventions}.}.
We include the HS boundary terms at the vertices with $s_i=\pm 1$. The relative sign between the HS terms at $(y,i)=(0,i)$ and $(y,i)=(\ell,i)$ is chosen in order to allow the existence of zero modes \cite{Angelescu:2019viv, Nortier:2020xms}. If we flip the sign of $s_i$, we exchange the features of the left and right-handed KK modes. In what follows, we choose $s_i = 1$.
The action can be written as
\begin{equation}
S_\Psi = \int d^4x \sum_{i=1}^N \left\{ \left( \int_{0}^{\ell} dy \ \dfrac{i}{2} \, \bar{\Psi} \Gamma^M \overleftrightarrow{\partial_M} \Psi \right) - \left[ \dfrac{1}{2} \, \bar{\Psi} \Psi \right]_{y=0}^{\ell} \right\} \, ,
\label{S_Psi_star}
\end{equation}
where the boundary terms coming from the distributional derivatives cancel each other. The conserved Noether current associated to the symmetry $U(1): \ \Psi \ \mapsto \ e^{-i \alpha} \Psi$, with $\alpha \in \mathbb{R}$, is
\begin{equation}
j_\Psi^M = \bar{\Psi} \Gamma^M \Psi \ \ \ \text{with} \ \ \ \partial_M j_\Psi^M = 0 \, .
\end{equation}
Current conservation requires a Kirchhoff condition for the current at the junction:
\begin{equation}
\left\{
\begin{array}{rcl}
\displaystyle{\sum_{i=1}^N j_\Psi^M \left( x^\mu, 0, i \right) \overset{!}{=} 0} & \text{for} & \mathcal{K}_N = \mathcal{S}_N \, , \\ \\
\displaystyle{\sum_{i=1}^N \left[ j_\Psi^M \left( x^\mu, y, i \right) \right]_{y=0}^{\ell} \overset{!}{=} 0} & \text{for} & \mathcal{K}_N = \mathcal{R}_N \, .
\end{array}
\right.
\end{equation}
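In a Weyl basis where $\bar{\Psi} \gamma^5 \Psi = \Psi_L^\dagger \Psi_R - \Psi_R^\dagger \Psi_L$ (c.f. Appendix~\ref{conventions}), the fifth component of the current is
\begin{equation}
j_\Psi^4 = i \, \bar{\Psi} \gamma^5 \Psi = i \left( \Psi_L^\dagger \Psi_R - \Psi_R^\dagger \Psi_L \right) \, .
\end{equation}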
For the component $M = 4$ one gets at the junction:
\begin{equation}
\left\{
\begin{array}{rcl}
\displaystyle{\sum_{i=1}^N \left. \left( \Psi_L^\dagger \Psi_R - \Psi_R^\dagger \Psi_L \right) \right|_{y=0} \overset{!}{=} 0} & \text{for} & \mathcal{K}_N = \mathcal{S}_N \, , \\ \\
\displaystyle{\sum_{i=1}^N \left[ \Psi_L^\dagger \Psi_R - \Psi_R^\dagger \Psi_L \right]_{y=0}^{\ell} \overset{!}{=} 0} & \text{for} & \mathcal{K}_N = \mathcal{R}_N \, .
\end{array}
\right.
\label{Kir_current_psi}
\end{equation}
We apply Hamilton's principle to the action $S_\Psi$, with arbitrary variations of the fields $\delta \Psi_{L/R}$ in the bulk and on the branes. We get the massless Dirac-Weyl equations for the 5D fields $\Psi_{L/R}$:
\begin{equation}
\left \{
\begin{array}{r c l}
i \sigma^\mu \partial_\mu \Psi_R (x^\mu, y, i) + \partial_y \Psi_L (x^\mu, y, i) &=& 0 \, ,
\\ \vspace{-0.2cm} \\
i \bar{\sigma}^\mu \partial_\mu \Psi_L (x^\mu, y, i) - \partial_y \Psi_R (x^\mu, y, i) &=& 0 \, .
\end{array}
\right.
\label{Dirac_Psi_star}
\end{equation}
Therefore, when the fields are on-shell, $\Psi_L$ and $\Psi_R$ are not independent, so the junction/boundary conditions must not overconstrain $\Psi_L$ and $\Psi_R$ at the same point \cite{Henneaux:1998ch, Contino:2004vy, Nortier:2020xms}. The addition of the HS terms guarantees that only $\Psi_L$ is constrained on the branes by the minimization of the action \cite{Henneaux:1998ch, Contino:2004vy, Angelescu:2019viv, Nortier:2020xms}. We get the Dirichlet boundary conditions at the $B_i$-branes:
\begin{equation}
\Psi_L \left( x^\mu, \ell, i \right) = 0 \ \text{(for $\mathcal{K}_N = \mathcal{S}_N$)} \, .
\label{D_psi_B}
\end{equation}
The fields $\Psi_L$ and $\Psi_R$ can be taken independently (dis)continuous across the junction, and $\delta \Psi_{L/R}$ inherits the (dis)continuity properties of $\Psi_{L/R}$. One can explore the different possibilities of junction conditions for $\Psi_L$, depending on the (dis)continuity of the fields at the junction, summarized in Tab.~\ref{table_junction}:
\begin{itemize}
\item Case 1: If both $\Psi_L$ and $\Psi_R$ are allowed to be discontinuous, one gets Dirichlet junction conditions:
\begin{equation}
\left\{
\begin{array}{l}
\Psi_L(x^\mu, 0, i) = 0 \ \text{for $\mathcal{K}_N = \mathcal{S}_N$ or $\mathcal{R}_N$,} \\
\Psi_L(x^\mu, \ell, i) = 0 \ \text{for $\mathcal{K}_N = \mathcal{R}_N$,}
\end{array}
\right.
\label{D_junction_psi}
\end{equation}
which correspond to a 5D field $\Psi$ defined on $N$ disjoint intervals. The spectrum of a 5D fermion on an interval is well known in the literature \cite{Csaki:2003sh}. The $J/V$-brane is airtight, which is illustrated by the fact that each incoming or outgoing current at the $J/V$-brane vanishes, like at a boundary \eqref{Kir_current_psi}. There is a chiral zero mode (here right-handed) in each leaf/petal and a KK tower of vector-like fermions with a mass gap of $\pi/\ell$. If one generation of the SM fermion sector propagates on a $3$-star/$3$-rose, it is possible to generate three generations at the level of zero modes with an airtight $J/V$-brane. The mechanism is the same as in Refs.~\cite{Fujimoto:2012wv, Fujimoto:2013ki, Fujimoto:2014fka, Fujimoto:2014pra, Fujimoto:2017lln, Fujimoto:2019lbo} with point interactions along an interval/circle to generate several zero modes from a unique discontinuous 5D fermion field.
\item Case 2: If we impose that both $\Psi_L$ and $\Psi_R$ are continuous, we obtain no additional boundary condition at the $V$-brane, but a Dirichlet condition \eqref{D_junction_psi} for $\Psi_L$ at the $J$-brane.
\item Case 3: If we impose only the continuity on $\Psi_L$, there is a Dirichlet condition \eqref{D_junction_psi} for $\Psi_L$ at the $J/V$-brane.
\item Case 4: If we impose only the continuity on $\Psi_R$, Hamilton's principle gives a Kirchhoff junction condition for $\Psi_L$ at the $J/V$-brane:
\begin{equation}
\left\{
\begin{array}{rcl}
\displaystyle{\sum_{i=1}^N \Psi_L \left( x^\mu, 0, i \right) = 0} & \text{for} & \mathcal{K}_N = \mathcal{S}_N \, , \\ \\
\displaystyle{\sum_{i=1}^N \left[ \Psi_L \left( x^\mu, y, i \right) \right]_{y=0}^{\ell} = 0} & \text{for} & \mathcal{K}_N = \mathcal{R}_N \, ,
\end{array}
\right.
\label{JC_Psi_star}
\end{equation}
which solves the current condition \eqref{Kir_current_psi}.
\end{itemize}
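The structure of all these cases can be understood from the vertex/boundary terms in the variation of the action \eqref{S_Psi_star}: with the HS terms included, one finds schematically (up to overall factors and signs fixed by the conventions of Appendix~\ref{conventions})
\begin{equation}
\delta S_\Psi \supset - \int d^4x \ \sum_{i=1}^N \left[ \delta \Psi_R^\dagger \, \Psi_L + \Psi_L^\dagger \, \delta \Psi_R \right]_{y=0}^{\ell} \, .
\end{equation}
Only $\delta \Psi_R$ appears, so Hamilton's principle constrains $\Psi_L$ alone: independent $\delta \Psi_R$'s at a vertex (discontinuous $\Psi_R$) impose the Dirichlet conditions \eqref{D_psi_B} and \eqref{D_junction_psi}, while a common $\delta \Psi_R$ at the junction (continuous $\Psi_R$) imposes only the Kirchhoff condition \eqref{JC_Psi_star}.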
For the physical applications in this article, we will add brane-localized terms at the junction for $\Psi_R$, which are in general incompatible with the continuity of $\Psi_L$ \cite{Angelescu:2019viv, Nortier:2020xms}. Since we will not consider airtight branes, we impose only that $\Psi_R$ is continuous at the junction in what follows.
\begin{table}[h]
\begin{center}
\begin{tabular}{l||c|c}
$N$-star & $\Psi_L$ continuous & $\Psi_L$ discontinuous \\
\hline \hline
$\Psi_R$ continuous & Dirichlet & Kirchhoff \\
$\Psi_R$ discontinuous & Dirichlet & Dirichlet
\end{tabular}
\vspace{0.5cm}
\begin{tabular}{l||c|c}
$N$-rose & $\Psi_L$ continuous & $\Psi_L$ discontinuous \\
\hline \hline
$\Psi_R$ continuous & None & Kirchhoff \\
$\Psi_R$ discontinuous & Dirichlet & Dirichlet
\end{tabular}
\caption{The different possibilities of junction conditions for $\Psi_L$, depending on the (dis)continuity of $\Psi_{L/R}$ at the junction. ``None'' means that Hamilton's principle yields no additional condition at the $V$-brane.}
\label{table_junction}
\end{center}
\end{table}
\subsection{Kaluza-Klein Dimensional Reduction}
\label{KK_Drac_20}
\subsubsection{Separation of Variables}
In order to perform the KK dimensional reduction of the 5D field theory, we use the same method as in Subsection~\ref{KK_scalar_20} for the scalar field, with the same system of labels for the KK modes. We expand the 5D fields $\Psi_{L/R}$ as
\begin{equation}
\left\{
\begin{array}{rcl}
\Psi_{L} \left( x^\mu, y, i \right) &=& \displaystyle{\sum_b \ \sum_{n_b} \ \sum_{d_b} \psi_L^{(b, \, n_b, \, d_b)} \left( x^\mu \right) \, f_L^{(b, \, n_b, \, d_b)} \left( y, i \right)} \, , \\
\Psi_{R} \left( x^\mu, y, i \right) &=& \displaystyle{\sum_b \ \sum_{n_b} \ \sum_{d_b} \psi_R^{(b, \, n_b, \, d_b)} \left( x^\mu \right) \, f_R^{(b, \, n_b, \, d_b)} \left( y, i \right)} \, ,
\end{array}
\right.
\label{KK_Psi_star_1}
\end{equation}
where $\psi_{L/R}^{(b, \, n_b, \, d_b)}$ are 4D Weyl fields and $f_{L/R}^{(b, \, n_b, \, d_b)}$ are wave functions defined on $\mathcal{K}_N$. The 5D equations \eqref{Dirac_Psi_star} split into Dirac-Weyl equations for the 4D fields $\psi_{L/R}^{(b, \, n_b, \, d_b)}$:
\begin{equation}
\left \{
\begin{array}{r c l}
i \sigma^\mu \partial_\mu \psi_R^{(b, \, n_b, \, d_b)} (x^\mu) - m_\psi^{(b, \, n_b)} \, \psi_L^{(b, \, n_b, \, d_b)} (x^\mu) &=& 0 \, ,
\\ \vspace{-0.2cm} \\
i \bar{\sigma}^\mu \partial_\mu \psi_L^{(b, \, n_b, \, d_b)} (x^\mu) - m_\psi^{(b, \, n_b)} \, \psi_R^{(b, \, n_b, \, d_b)} (x^\mu) &=& 0 \, ,
\end{array}
\right.
\label{Dirac_KK-Psi_star}
\end{equation}
and the differential equation for the wave functions $f_{L/R}^{(b, \, n_b, \, d_b)}$:
\begin{equation}
\forall y \neq 0 \, , \ \left \{
\begin{array}{r c l}
\partial_y f_R^{(b, \, n_b, \, d_b)} (y, i) - m_\psi^{(b, \, n_b)} \, f_L^{(b, \, n_b, \, d_b)} (y, i) &=& 0 \, ,
\\ \vspace{-0.2cm} \\
\partial_y f_L^{(b, \, n_b, \, d_b)} (y, i) + m_\psi^{(b, \, n_b)} \, f_R^{(b, \, n_b, \, d_b)} (y, i) &=& 0 \, ,
\end{array}
\right.
\label{wave_eq_Psi_star}
\end{equation}
where $m_\psi^{(b, \, n_b)}$ is the mass of the KK modes $(b, \, n_b, \, d_b)$. The wave functions $f_{L/R}^{(b, \, n_b, \, d_b)}$ are orthonormalized with the conditions
\begin{equation}
\sum_{i=1}^N \int_0^\ell dy \ \left[ f_{L/R}^{(b, \, n_b, \, d_b)}(y, i) \right]^* \, f_{L/R}^{(b', \, n'_{b'}, \, d'_{b'})}(y, i) = \delta^{bb'} \, \delta^{n_{b} n'_{b'}} \, \delta^{d_{b} d'_{b'}} \, .
\label{orthonorm_KK-Phi_star}
\end{equation}
The conditions on the 5D field $\Psi_{L/R}$ at the vertices are naturally transposed to conditions on the KK wave functions $f_{L/R}^{(b, \, n_b, \, d_b)}$.
\subsubsection{Zero Modes}
We are looking for zero mode solutions ($b=0$, $n_0=0$, $m_\psi^{(0, \, 0)}=0$) of Eq.~\eqref{wave_eq_Psi_star} for which the first order differential equations are decoupled. For both compactifications on $\mathcal{S}_N$ and $\mathcal{R}_N$, there is only one right-handed zero mode ($d_0 \in \{1\}$). Its wave function is continuous across the $J/V$-brane and flat:
\begin{equation}
f_R^{(0, \, 0, \, 1)} (y, i) = \sqrt{\dfrac{1}{N \ell}} \, .
\label{zero_mode_star_psi_20}
\end{equation}
For the left-handed zero modes, it is necessary to distinguish between the compactification on $\mathcal{S}_N$ and $\mathcal{R}_N$.
\subsubsection*{\boldmath \textcolor{black}{$a)$ $N$-Star}}
There is no left-handed zero mode for $\mathcal{K}_N = \mathcal{S}_N$.
The theory is thus chiral at the level of the zero mode, which generalizes the well-known result of the particular case of a compactification on the interval $\mathcal{S}_1$. The compactification on a star graph is thus very interesting, since it allows one to build models where the SM fields propagate in the extra dimension: the SM particles are identified with the zero modes of the 5D fields. In Section~\ref{Dirac_Neutrinos}, we will identify the right-handed neutrinos with the zero modes of 5D Dirac fields coupled to brane-localized 4D left-handed neutrinos. The goal is to propose a toy model to obtain small Dirac neutrino masses.
\subsubsection*{\boldmath \textcolor{black}{$b)$ $N$-Rose}}
For $\mathcal{K}_N = \mathcal{R}_N$, we have $N$ degenerate left-handed zero modes ($d_0 \in \llbracket 1, N \rrbracket$). The theory is vector-like at the level of the zero modes, which generalizes the result of the compactification on a circle $\mathcal{R}_1$ in the literature. Therefore, with the compactification on a rose graph and without an airtight $V$-brane, one cannot build models with the SM fields propagating in the extra dimension, unless one is able to propose a mechanism which generates chirality by giving a mass to the mirror partners of the SM fermions. If this is possible, one recovers the three SM generations by taking $N=3$. The KK wave functions $f_L^{(0, \, 0, \, d_0)}$ are flat in each petal and discontinuous across the $V$-brane (except for $\mathcal{K}_N = \mathcal{R}_1$, the circle, where they can be taken continuous):
\begin{equation}
f_L^{(0, \, 0, \, d_0)} (y, i) = \eta_i^{(d_0)} \sqrt{\dfrac{1}{\ell}} \, ,
\label{zero_mode_f_L}
\end{equation}
with the $N$-vector basis \eqref{basis_eta}.
\subsubsection{Excited Modes}
\label{excited_modes_fermion}
We are looking for massive KK modes ($m_\psi^{(b, \, n_b)} \neq 0$). The coupled first order differential equations \eqref{wave_eq_Psi_star} can be decoupled into second order ones:
\begin{equation}
\left( \partial_{y}^2 + \left[m_\psi^{(b, \, n_b)}\right]^2 \right) f_{L/R}^{(b, \, n_b, \, d_b)} \left( y, i \right) = 0 \, .
\label{eq_Psi_wave_2nd}
\end{equation}
The KK wave functions $f_{R}^{(b, \, n_b, \, d_b)}$ are continuous across the junction. In the same way as for the scalar field, it is necessary to distinguish between the cases where the $f_{R}^{(b, \, n_b, \, d_b)}$'s vanish or not at the junction. One can follow the same method as in Subsection~\ref{KK_scalar_20}. We will not repeat all the details here, since there is no major technical difference. We summarize the results in what follows.
\newpage
\subsubsection*{\boldmath \textcolor{black}{$a)$ $N$-Star}}
\subsubsection*{\boldmath \textcolor{black}{\textit{First case: $f_R^{(b, \, n_b, \, d_b)}(0,i)=0$}}}
\label{1st_case_psi_star}
The KK mass spectrum is
\begin{equation}
m_\psi^{(1, \, n_1)} = \left( n_1 + \dfrac{1}{2} \right) \dfrac{\pi}{\ell} \, , \ n_1 \in \mathbb{N} \, ,
\label{mass_spect_Psi_star_2}
\end{equation}
and defines the KK tower $b=1$. Each KK level is $N-1$ times degenerate ($d_1 \in \llbracket 1, N-1 \rrbracket$) and the KK wave functions are
\begin{equation}
\left\{
\begin{array}{rcl}
f_L^{(1, \, n_1, \, d_1)} (y, i) & = & - \epsilon^{(d_1)}_i \sqrt{\dfrac{2}{\ell}} \, \sin \left[ m_\psi^{(1, \, n_1)} \, (y-\ell) \right] \, , \\
f_R^{(1, \, n_1, \, d_1)} (y, i) & = & \epsilon^{(d_1)}_i \sqrt{\dfrac{2}{\ell}} \, \cos \left[ m_\psi^{(1, \, n_1)} \, (y-\ell) \right] \, ,
\end{array}
\right.
\label{Psi_free_fR=0}
\end{equation}
with the $(N-1)$-vector basis \eqref{basis_epsi}. The $f_L^{(1, \, n_1, \, d_1)}$'s are discontinuous across the $J$-brane (except for $\mathcal{K}_N = \mathcal{S}_1$, the interval, where they are taken continuous).
\subsubsection*{\boldmath \textcolor{black}{\textit{Second case: $f_R^{(b, \, n_b, \, d_b)}(0,i) \neq 0$}}}
The KK mass spectrum is
\begin{equation}
m_\psi^{(2, \, n_2)} = n_2 \, \dfrac{\pi}{\ell} \, , \ n_2 \in \mathbb{N}^* \, ,
\label{mass_spect_Psi_star_1}
\end{equation}
which is not degenerate ($d_2 \in \{1\}$) and defines the KK tower $b=2$. The KK wave functions are
\begin{equation}
\left\{
\begin{array}{rcl}
f_L^{(2, \, n_2, \, d_2)} (y, i) &=& - \sqrt{\dfrac{2}{N \ell}} \, \sin \left[ m_\psi^{(2, \, n_2)} \, (y-\ell) \right] \, , \\
f_R^{(2, \, n_2, \, d_2)} (y, i) &=& \sqrt{\dfrac{2}{N \ell}} \, \cos \left[ m_\psi^{(2, \, n_2)} \, (y-\ell) \right] \, ,
\end{array}
\right.
\end{equation}
where the $f_L^{(2, \, n_2, \, d_2)}$'s can be taken continuous across the $J$-brane.
\subsubsection*{\boldmath \textcolor{black}{$b)$ $N$-Rose}}
\subsubsection*{\boldmath \textcolor{black}{\textit{First case: $f_R^{(b, \, n_b, \, d_b)}(0,i)=0$}}}
\label{1st_case_psi_rose}
There are two possibilities:
\begin{itemize}
\item First possibility: \\
The KK mass spectrum is
\begin{equation}
m_\psi^{(1, \, n_1)} = (2n_1+1) \, \dfrac{\pi}{\ell} \, , \ n_1 \in \mathbb{N} \, ,
\label{mass_spect_Psi_rose_3}
\end{equation}
and defines the KK tower $b=1$ with $d_1 \in \llbracket 1, N-1 \rrbracket$. The KK wave functions are
\begin{align}
\left\{
\begin{array}{rcl}
f_L^{(1, \, n_1, \, d_1)} (y, i) &=& \epsilon^{(d_1)}_i \sqrt{\dfrac{2}{\ell}} \, \cos \left[ m_\psi^{(1, \, n_1)} \, y \right] \, , \\
f_R^{(1, \, n_1, \, d_1)} (y, i) &=& \epsilon^{(d_1)}_i \sqrt{\dfrac{2}{\ell}} \, \sin \left[ m_\psi^{(1, \, n_1)} \, y \right] \, ,
\end{array}
\right.
\label{wave_funct_Psi_rose_fR(0)=0}
\end{align}
with the $(N-1)$-vector basis \eqref{basis_epsi}. The $f_L^{(1, \, n_1, \, d_1)}$'s are discontinuous across the $V$-brane.
\item Second possibility:\\
The KK mass spectrum is
\begin{equation}
m_\psi^{(2, \, n_2)} = 2n_2 \, \dfrac{\pi}{\ell} \, , \ n_2 \in \mathbb{N}^* \, ,
\label{mass_spect_Psi_rose_bis}
\end{equation}
and defines the KK tower $b=2$ with $d_2 \in \llbracket 1, N \rrbracket$. The KK wave functions are
\begin{equation}
\left\{
\begin{array}{rcl}
f_L^{(2, \, n_2, \, d_2)} (y, i) &=& \eta^{(d_2)}_i \sqrt{\dfrac{2}{\ell}} \, \cos \left[ m_\psi^{(2, \, n_2)} \, y \right] \, , \\
f_R^{(2, \, n_2, \, d_2)} (y, i) &=& \eta^{(d_2)}_i \sqrt{\dfrac{2}{\ell}} \, \sin \left[ m_\psi^{(2, \, n_2)} \, y \right] \, ,
\end{array}
\right.
\label{wave_funct_psi_1_rose}
\end{equation}
with the $N$-vector basis \eqref{basis_eta}. The $f_L^{(2, \, n_2, \, d_2)}$'s are discontinuous across the $V$-brane (except for $\mathcal{K}_N = \mathcal{R}_1$, the circle, where they can be taken continuous).
\end{itemize}
\subsubsection*{\boldmath \textcolor{black}{\textit{Second case}: $f_R^{(b, \, n_b, \, d_b)}(0,i) \neq 0$}}
The KK mass spectrum is the same as in Eq.~\eqref{mass_spect_Psi_rose_bis}. Each KK level $(b, n_b)$ is thus degenerate with the $N$ KK modes of the level $(2, n_2)$ so $d_2 \in \llbracket 1, N+1 \rrbracket$: we label the KK modes with non-vanishing wave functions at the junction by the triplet $(2, n_2, N+1)$. The KK wave functions are
\begin{equation}
\left\{
\begin{array}{rcl}
f_L^{(2, \, n_2, \, N+1)} (y, i) &=& - \sqrt{\dfrac{2}{N \ell}} \, \sin \left[ m_\psi^{(2, n_2)} \, y \right] \, , \\
f_R^{(2, \, n_2, \, N+1)} (y, i) &=& \sqrt{\dfrac{2}{N \ell}} \, \cos \left[ m_\psi^{(2, n_2)} \, y \right] \, ,
\end{array}
\right.
\label{wave_func_psi_rose_period}
\end{equation}
where the $f_L^{(2, \, n_2, \, N+1)}$'s can be taken continuous across the $V$-brane.
\vspace{1cm}
Like for the scalar field, all KK towers labeled by $b$ are present in the spectrum. Each excited KK level is vector-like for both compactifications, and the mass gap between the KK modes is of the order of $1/\ell$.
\section{A Low 5D Planck Scale with a Star/Rose Extra Dimension}
\label{ADD_star_rose}
In this section, we propose an ADD model with brane-localized 4D SM fields where gravity propagates in a large star/rose extra dimension with large $N$ and a natural value for $\ell$.
\subsection{Lowering the Gravity Scale}
With a LED compactified on a metric graph $\mathcal{K}_N = \mathcal{S}_N$ or $\mathcal{R}_N$, one can obtain a low 5D Planck scale. In this case, Eq.~\eqref{ADD_formula} gives
\begin{equation}
\left[\Lambda_P^{(4)}\right]^2 = L \, \left[ \Lambda_P^{(5)} \right]^{3} \, , \ \ \ L = N \ell \, .
\label{ADD_formula_star}
\end{equation}
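As a numerical illustration, taking the reduced 4D Planck scale $\Lambda_P^{(4)} \simeq 2.4 \times 10^{18}$ GeV and $\Lambda_P^{(5)} \simeq 1$ TeV, Eq.~\eqref{ADD_formula_star} gives
\begin{equation}
L = \dfrac{\left[\Lambda_P^{(4)}\right]^2}{\left[ \Lambda_P^{(5)} \right]^{3}} \simeq 6 \times 10^{27} \ \text{GeV}^{-1} \simeq 1 \times 10^{12} \ \text{m} \, .
\end{equation}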
To solve the gauge hierarchy problem under the hypothesis of an exact global $\Sigma_N$ symmetry (see Section~\ref{conclusion_star_rose} for a discussion when this hypothesis is relaxed), we choose $\Lambda_P^{(5)} \simeq 1$ TeV, obtained with $L \simeq 1 \times 10^{12}$ m as estimated above. In order to be in the EFT regime, i.e. below the 5D Planck scale, we need $\ell > \ell_P^{(5)}$, with $\ell_P^{(5)}=1/\Lambda_P^{(5)} \simeq 2 \times 10^{-19}$ m. In practice, $\ell/\ell_P^{(5)} \simeq 10$ with a large $N \simeq 6 \times 10^{29}$ should be enough, which gives a KK mass scale near the EW scale: $M_{KK} = 1/\ell \simeq 100$ GeV. Such heavy KK-gravitons completely evade the constraints from submillimeter tests of the 4D gravitational Newton's law, stellar physics and cosmology (c.f. footnote \ref{note_const_ADD} p.~\pageref{note_const_ADD}). If one allows for $1\%$ of fine-tuning for $m_h$ by pushing $\Lambda_P^{(5)}$ up to 10 TeV with $N \simeq 6 \times 10^{27}$, one can even have $M_{KK} \simeq 1$ TeV. Moreover, if the concepts of space and volume still make sense at the Planck scale $\Lambda_P^{(5)}$, by taking $\ell \simeq \ell_P^{(5)}$ and $N \simeq 6 \times 10^{30}$ to get $\Lambda_P^{(5)} \simeq 1$ TeV, there is no tower of KK-gravitons in the EFT; instead, the first experimental hints for a low 5D Planck scale are strongly coupled quantum gravity phenomena near $\Lambda_P^{(5)}$.

Such a large $N$ seems puzzling at first glance: the reader may wonder whether our proposition is just a reformulation of the gauge hierarchy into the question of why $N$ is large. However, in the EFT, $N$ is a conserved integer, so it is stable under radiative corrections; it does not need to be dynamically stabilized and has no preferred value. When $\ell \gg \ell_P^{(5)}$, the models proposed in this article are formulated in the context of EFTs defined on a classical background spacetime $\mathcal{M}_4 \times \mathcal{K}_N$, where the number of leaves/petals $N$ is fixed by the definition of the model, even in the presence of gravity. Possibly, $N$ becomes a dynamical quantity in a theory of Planckian gravity involving a quantum spacetime. The situation is somewhat similar to other beyond-SM scenarios in the literature which aim to solve the hierarchy problem:
\begin{itemize}
\item The model of Ref.~\cite{Kaloper:2000jb}, where $q \geq 2$ spacelike dimensions are compactified on a compact hyperbolic manifold of genus $g$ (the number of holes) and of volume $\mathcal{V}_q$. A compact hyperbolic manifold has two length scales: a curvature radius $R_c$ and a linear size $L \sim R_c \log (g)$. For $L \gg R_c/2$, we have
\begin{equation}
\mathcal{V}_q \sim R_c^q \exp \left[ (q-1) \dfrac{L}{R_c} \right] \, .
\end{equation}
For $q=3$, $\Lambda_P^{(7)} \simeq 1$ TeV, and $R_c \sim 1/\Lambda_P^{(7)}$, the formula gives $L \sim 35 R_c$ so the number of holes is very large: $g \sim e^{35} \sim 10^{15}$.
\newpage
\item The model of Refs.~\cite{ArkaniHamed:1998kx, Corley:2001rt}, where $q$ spacelike extra dimensions are stabilized by a large number $N$ of branes with inter-brane forces, forming a brane crystal. The number of branes required to lower the gravity scale is
\begin{equation}
N \sim \dfrac{1}{\alpha^q} \left[ \dfrac{\Lambda_P^{(4)}}{\Lambda_P^{(4+q)}} \right]^2 \simeq \dfrac{10^{30}}{\alpha^q} \, ,
\label{crystal_brane}
\end{equation}
for $\Lambda_P^{(4+q)} \simeq 1$ TeV, where $\alpha$ is a parameter which controls the inter-brane distance. One should have $\alpha \simeq 10$ in order to be in the regime where general relativity is valid and also to avoid a new fine tuning.
\item The model proposed by G.R.~Dvali~et al. in Refs.~\cite{Dvali:2007hz,Dvali:2007wp,Dvali:2008fd,Dvali:2008jb,Dvali:2009fw,Dvali:2009ne,Dvali:2019ewm} involving a large number $N_p$ of particle species. This is a 4D model where the scale $\Lambda_G$ at which gravity becomes strongly coupled is given by
\begin{equation}
\Lambda_G = \dfrac{\Lambda_P^{(4)}}{\sqrt{N_p}} \, .
\label{N-species}
\end{equation}
This effect can be understood perturbatively as the result of the radiative corrections of the $N_p$ particle species to the graviton propagator. Then, $\Lambda_G \simeq 1$ TeV for $N_p \simeq 6 \times 10^{30}$. From Eqs.~\eqref{ADD_formula_star} and \eqref{N-species}, it is a curious coincidence that $N_p=N=\left(\Lambda_P^{(4)}/\Lambda_P^{(5)}\right)^2 \simeq 6 \times 10^{30}$ when $\ell=\ell_P^{(5)}$. One can make the same remarks for the number of branes in the brane crystal model when $\alpha = 1$ in Eq.~\eqref{crystal_brane}, which means that the inter-brane distance is the fundamental Planck length.
\item The model of $N$-naturalness proposed by N.~Arkani-Hamed~et al. in Ref.~\cite{Arkani-Hamed:2016rle}. There are $N$ SM-like sectors which are mutually non-interacting. The Higgs mass parameter squared $\mu_H^2$ takes values between $- \Lambda_H^2$ and $\Lambda_H^2$, where $\Lambda_H$ is the scale, common to the $N$ sectors, which cuts off the quadratic divergences to $\mu_H^2$. Then, for a wide range of $\mu_H^2$ distributions, one expects that some sectors are accidentally tuned at the $1/N$ level, such that $|\mu_H^2| \sim \Lambda_H^2/N$. The sector with the smallest non-zero Vacuum Expectation Value (VEV) is identified with our sector. When $\Lambda_H \gg \Lambda_{EW}$, $N$ thus has to be large in order to have $|\mu_H| \sim 100$ GeV. There is no need for new physics at the TeV scale!
\end{itemize}
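As a numerical cross-check of our benchmark values, take as an assumed input the reduced 4D Planck scale $\Lambda_P^{(4)} \simeq 2.4 \times 10^{18}$ GeV and use the 5D volume relation $\left[\Lambda_P^{(4)}\right]^2 = \left[\Lambda_P^{(5)}\right]^3 L$ with $L = N \ell$ (c.f. Eq.~\eqref{ADD_formula_star}). For $M_{KK} = 1/\ell \simeq 100$ GeV and $\Lambda_P^{(5)} \simeq 1$ TeV,
\begin{equation*}
N = \dfrac{L}{\ell} = \left[ \dfrac{\Lambda_P^{(4)}}{\Lambda_P^{(5)}} \right]^2 \dfrac{M_{KK}}{\Lambda_P^{(5)}} \simeq \left( \dfrac{2.4 \times 10^{18}}{10^3} \right)^2 \times \dfrac{10^2}{10^3} \simeq 6 \times 10^{29} \, ,
\end{equation*}
and $L = N \ell \simeq 6 \times 10^{27} \ \mathrm{GeV}^{-1} \simeq 1 \times 10^{12}$ m, in agreement with the values quoted above.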
The only quantity which needs to be dynamically stabilized is the leaf (petal) length (circumference) $\ell$; otherwise the radion, i.e. the scalar field which represents the fluctuations of $\ell$, remains massless and conflicts with the null result from the search for a new force of infinite range. Moreover, if only the graviton propagates into the extra dimension, the bosonic quantum loops are known to render the extra dimension unstable, causing it to shrink to a point \cite{Appelquist:1982zs, Appelquist:1983vs}. On the one hand, if $\ell \gtrsim \mathcal{O}(10) \times \ell_P^{(5)}$, the corrections of Planckian gravity can safely be neglected (EFT regime), and it is important to add a field theoretical mechanism to the model to stabilize $\ell$. On the other hand, if $\ell \sim \ell_P^{(5)}$, one expects $\mathcal{O}(1)$ corrections from Planckian gravity, and one needs a complete theory of gravity to formulate the model and to address its stabilization.
In the EFT regime, we have supposed the existence of an exact global $\Sigma_N$ symmetry of the $N$-star/rose, which is not realistic since gravity is supposed to break global symmetries (see Section~\ref{conclusion_star_rose} for a discussion of the impact on the resolution of the gauge hierarchy problem). This is reminiscent of the 5D models with a Universal Extra Dimension (UED) compactified on an interval symmetric under an exact $\mathbb{Z}_2$ reflection with respect to its midpoint \cite{Appelquist:2000nn}. One can also mention the model of Ref.~\cite{Agashe:2007jb} where two identical slices of AdS$_5$ are glued to a common UV-brane. The extra dimension of these models can be stabilized by the dynamics of additional bulk fields at the quantum level by a balance between the contribution of bosonic and fermionic loops \cite{Ponton:2001hq}, or at the classical level by a potential for a scalar field as in the Goldberger-Wise mechanism originally proposed for the RS1 model \cite{Goldberger:1999uk, DeWolfe:1999cp, Csaki:2000zn}. In Ref.~\cite{Law:2010pv}, it is shown how to stabilize an $N$-star $\mathcal{S}_N$ with the potential of a different scalar field in each leaf; these $N$ scalar fields are related by the $\Sigma_N$ symmetry. One can apply these mechanisms here with $N$ large. Other stabilization mechanisms were proposed in Ref.~\cite{ArkaniHamed:1998kx}; in particular, it is possible to stabilize an extra dimension compactified on a circle with the help of a complex scalar field with a topologically conserved winding number. As it is possible to stabilize one petal by this mechanism, one can repeat it with a different scalar field in each petal of the $N$-rose $\mathcal{R}_N$. Both for $\mathcal{S}_N$ and $\mathcal{R}_N$, the $N$ scalar fields meet only at the junction $J/V$ and we assume that they interact only through gravity, like the $N$ copies of the SM in Refs.~\cite{Dvali:2007wp, Dvali:2009ne}. Therefore, the picture reduces to stabilizing $N$ independent leaves/petals, which is a simplified version of the mechanism in Ref.~\cite{Law:2010pv}. If the bulk fields in all leaves/petals have an exact global $\Sigma_N$ symmetry, the geometrical $\Sigma_N$ symmetry is preserved (see Section~\ref{conclusion_star_rose} for a discussion when it is not the case).
\subsection{Embedding the Standard Model Fields}
Up to this point, we have not specified how we embed the SM fields into the proposed spacetime geometries. On the one hand, in the ADD-like models in the literature, the SM fields must be localized on a 3-brane, while gravity and possibly other exotic fields propagate in the bulk \cite{ArkaniHamed:1998rs,Antoniadis:1998ig,ArkaniHamed:1998nn,ArkaniHamed:1998vp,ArkaniHamed:1998kx,ArkaniHamed:1998sj,Berezhiani:1998wt}. On the other hand, in the RS1-like models, one can allow some or all SM fields to propagate in the extra dimension \cite{Davoudiasl:1999tf, Pomarol:1999ad, Grossman:1999ra, Chang:1999nh, Gherghetta:2000qt, Davoudiasl:2000wi, Luty:2004ye, Davoudiasl:2005uu, Cacciapaglia:2006mz}. One reason is that the KK scales in the ADD and RS1 models are respectively below and above the TeV scale. The absence of any discovery of low-mass KK excitations of the SM particles rules out an ADD model with bulk SM fields. In the case of an ADD model with a star extra dimension, where it is possible to embed a chiral model at the zero mode level, one can naively think that it allows the SM fields to propagate in the bulk with a KK mass scale $M_{KK} = 1/\ell \sim \mathcal{O}(1)$ TeV. However, one should also consider the magnitude of the couplings of the zero mode gauge bosons. In the RS1 model with a 5D gauge field, the 5D gauge coupling $g_{RS}^{(5)}$ (of mass dimension $-1/2$) is related to the zero mode gauge coupling $g_{RS}^{(4)}$ by the relation \cite{Pomarol:1999ad}:
\begin{equation}
g_{RS}^{(4)} = \dfrac{g_{RS}^{(5)}}{\sqrt{L_{RS}}} \, ,
\label{gauge_RS}
\end{equation}
where $L_{RS}$ is the proper length of the warped extra dimension. $L_{RS}$ is not large compared to the 5D Planck length $\ell_P^{(5)}$, so for a natural gauge coupling
\begin{equation}
g_{RS}^{(5)} \sim \sqrt{\ell_P^{(5)}} \Rightarrow g_{RS}^{(4)} \sim \mathcal{O}(1) \, ,
\end{equation}
which is the right order of magnitude for a SM gauge coupling. However, in the ADD model with a $(4+q)$D gauge field, the higher-dimensional gauge coupling $g_{ADD}^{(4+q)}$ (of mass dimension $-q/2$) is related to the zero mode gauge coupling $g_{ADD}^{(4)}$ by the relation \cite{Csaki:2004ay}:
\begin{equation}
g_{ADD}^{(4)} = \dfrac{g_{ADD}^{(4+q)}}{\sqrt{\mathcal{V}_q}} \, .
\label{gauge_ADD}
\end{equation}
With a natural value for the gauge coupling
\begin{equation}
g_{ADD}^{(4+q)} \sim \left[ \Lambda_P^{(4+q)} \right]^{-q/2} \, ,
\end{equation}
one obtains with Eq.~\eqref{ADD_formula}:
\begin{equation}
g_{ADD}^{(4)} \sim \dfrac{1}{\sqrt{\mathcal{V}_q \left[ \Lambda_P^{(4+q)}\right]^q}} = \dfrac{\Lambda_P^{(4+q)}}{\Lambda_P^{(4)}} \sim 10^{-16} \, ,
\end{equation}
so $g_{ADD}^{(4)}$ is a very tiny coupling and cannot be identified with a SM gauge coupling. This result depends only on the volume of the compactified space, i.e. on the hierarchy between the 4D and $(4+q)$D Planck scales. It is still valid for the geometries considered in this article, where $\mathcal{V}_1 = L$. Therefore, the gauge coupling argument is much stronger than the KK mass scale argument for ruling out bulk SM fields: it applies to every compactified geometry one can imagine to realize an ADD model.
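For definiteness (taking, as an assumed input, the reduced 4D Planck scale $\Lambda_P^{(4)} \simeq 2.4 \times 10^{18}$ GeV),
\begin{equation*}
g_{ADD}^{(4)} \sim \dfrac{\Lambda_P^{(4+q)}}{\Lambda_P^{(4)}} \simeq \dfrac{10^3 \ \mathrm{GeV}}{2.4 \times 10^{18} \ \mathrm{GeV}} \simeq 4 \times 10^{-16} \, ,
\end{equation*}
to be compared with the measured SM gauge couplings of order $0.1$--$1$.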
After this discussion, it is clear that in the case of a star/rose extra dimension with a large $N$, even if $M_{KK} = 1/\ell \gtrsim 1$ TeV, the SM fields must be localized on a 3-brane, like in the other ADD models in the literature. Consider a 5D EFT with a brane. The cut-off in the bulk and the 3-brane thickness are denoted by $\Lambda$ and $\epsilon$, respectively. There are two cases~\cite{delAguila:2006atw}:
\begin{itemize}
\item The fat brane ($\epsilon > 1/\Lambda$): its microscopic description is within the range of validity of the 5D EFT. Usually, a fat brane is a topological defect \cite{Akama:1982jy, Rubakov:1983bb} and it is necessary to provide a field theoretical mechanism to trap the zero modes of 5D fields of various spins in the neighborhood of the brane \cite{Visser:1985qm, Jackiw:1975fn, Dvali:1996xe, Dubovsky:2001pe, Ohta:2010fu}. Topological defects were the first prototypes of braneworlds in the literature and were chosen by ADD to trap the SM fields in their first article on LEDs \cite{ArkaniHamed:1998rs}. It is also possible to localize the zero modes of 5D fields towards orbifold fixed points or spacetime boundaries with large 5D masses or brane-localized kinetic terms \cite{Dvali:2000rx, Dubovsky:2001pe, Fichet:2019owx}. Irrespective of the trapping mechanism of the fat brane, we speak of quasi-localized 5D fields.
\item The thin brane ($\epsilon \leq 1/\Lambda$): its microscopic description is outside the range of validity of the 5D EFT. A thin brane is described in the EFT by an infinitely thin hypersurface where 4D fields are strictly localized \cite{Sundrum:1998sj, Csaki:2004ay, Fichet:2019owx}. The trapping mechanism of the fields is relegated to the UV completion. This case became popular when it was realized that 4D fields can live in the worldvolume of solitonic objects in some UV completions, like D-brane stacks in superstring theories where matter fields are described by open strings attached to them \cite{Polchinski:1996na, Bachas:1998rg, Johnson:2003gi}. In EFTs, orbifold fixed points, spacetime boundaries or metric graph vertices are perfect candidates for thin branes. One can also obtain a thin brane by integrating out the width of a fat brane: one gets an EFT with a cut-off equal to the inverse of the brane width, and 4D fields (the zero modes of the quasi-localized 5D fields of the UV completion) strictly localized on the thin brane (depending on the quasi-localization mechanism, the excited KK-modes do not necessarily decouple \cite{Fichet:2019owx}). Soon after the theoretical discovery of D-branes, physicists explored the new possibilities offered by thin branes \cite{Antoniadis:1998ig, ArkaniHamed:1998nn, Randall:1999ee, Randall:1999vf, Lykken:1999nb, Kogan:1999wc, Gregory:2000jc, Dvali:2000hr}.
\end{itemize}
For the models studied in this article, if one considers quasi-localized 5D SM fields on the $J/V$-branes (fat branes), one has a problem. Indeed, consider the $N$-star $\mathcal{S}_N$, the fat brane has a thickness $\epsilon > \ell_P^{(5)}$ extended into each leaf, so a zero mode gauge coupling $g_4$ is related to the 5D gauge coupling $g_5 \sim \sqrt{\ell_P^{(5)}}$ as
\begin{equation}
g_4 \sim \dfrac{g_5}{\sqrt{N \epsilon}} \lesssim \mathcal{O} \left( \dfrac{1}{\sqrt{N}} \right) \, ,
\label{gauge_fat_J}
\end{equation}
similar to Eqs.~\eqref{gauge_RS}-\eqref{gauge_ADD}. As $N$ is large, the model will suffer from the same problem as for bulk SM fields in ADD models: the gauge couplings of the zero modes are too suppressed to match the values measured in experiments. The same problem arises with a fat $V$-brane in the case of the $N$-rose $\mathcal{R}_N$. However, there is no problem with quasi-localized SM fields on a fat $B_i$-brane where
\begin{equation}
g_4 \sim \dfrac{g_5}{\sqrt{\epsilon}} \lesssim \mathcal{O} (1) \, .
\end{equation}
Therefore, we will consider only 4D SM fields localized on thin $J/V$-branes: the SM gauge fields do not arise from the limit of quasi-localized 5D fields which propagate into the leaves/petals, so the gauge couplings are not suppressed by $\sqrt{N}$. Moreover, spacetime symmetries allow us to localize 4D degrees of freedom exactly on the 3-branes located at the vertices. However, there are various arguments that gravity implies the existence of a minimal length scale in Nature of the order of the fundamental Planck length (see Ref.~\cite{Hossenfelder:2012jw} for a review). Therefore, in a UV completion including gravity, the singular feature of the junction should be regularized. In Ref.~\cite{EXNER200577}, it is shown that the spectrum of a quantum graph can arise in the thin limit of a ``graph-like manifold''. Therefore, the metric graph structure of our extra dimension could emerge from a UV completion where the internal space is a $q$D graph-like manifold with $q-1$ transverse dimensions of 5D Planck size, and where the vertex at the junction is regularized \cite{Cacciapaglia:2006tg}. After integrating out these transverse dimensions, one is left with only one extra dimension compactified on a metric graph. Concerning the UV origin of the brane-localized 4D SM fields, we adopt a bottom-up approach where we do not assume a specific UV completion. We stress only that it is crucial that this UV completion does not rely on quasi-localized higher-dimensional fields propagating into the leaves/petals. In fact, one can imagine a UV-completion with a $q$D graph-like manifold and a fat $J/V$-brane made of $q$D quasi-localized fields. However, the wave functions of the zero modes must be highly peaked inside the protrusion \cite{Kuchment2002} at the vertex $J/V$, i.e. they must decrease quickly inside the vertex protrusion such that they are suppressed at least by $1/\sqrt{N}$ at the entrance of a leaf/petal to avoid the problem of Eq.~\eqref{gauge_fat_J}. One can also imagine another UV-completion: if it is possible to generate graph-like 6D manifolds in superstring theories, the 4D SM fields may live in the worldvolume of a D-brane stack at the regularized $J/V$ vertex.
It is interesting to notice that if one localizes the SM fields on the $B_i$-branes of a star extra dimension, one has $N$ copies of the SM fields, if one does not want to explicitly break the $\Sigma_N$ symmetry of $\mathcal{S}_N$. Then, if one softly breaks this symmetry only by the mass term of the Higgs field, one can hope to be able to realize the $N$-naturalness idea proposed in Ref.~\cite{Arkani-Hamed:2016rle}, with the reheaton (the field which populates the Universe after inflation by decaying into SM particles and possibly other fields) as a bulk field. The Higgs mass parameter is
\begin{equation}
|\mu_H| \sim \dfrac{\Lambda_P^{(5)}}{N} \, ,
\end{equation}
where a first $1/\sqrt{N}$ factor comes from the effect of the $N$ copies of the SM fields coupled to gravity (c.f.~\eqref{N-species}), and a second one comes from the uniform distribution of the parameters $\mu_H^2$ discussed in Ref.~\cite{Arkani-Hamed:2016rle}. Although there are $N$ copies of the SM fields, Ref.~\cite{Arkani-Hamed:2016rle} gives two explicit models where the reheaton decays preferentially into our SM sector, which makes a large number of SM sectors compatible with the constraints from cosmology. Applied to our setup, it is thus possible to have a higher $\Lambda_P^{(5)}$ with no new physics at the TeV scale, which could explain the null result from the search for beyond-SM particles at the Large Hadron Collider (LHC, $\sqrt{s}=13$~TeV). This idea needs to be investigated more deeply in the future. Here, we will consider only one copy of the SM fields, and localize them on the $J/V$-brane. The gauge hierarchy is generated by the volume of the compactified extra dimension, with strongly coupled quantum gravity effects accessible to the LHC ($\sqrt{s}=14$~TeV) or a future hadronic collider.
\subsection{Phenomenology}
\label{pheno_star_rose_graviton}
\subsubsection{Kaluza-Klein Gravitons}
\label{KK-graviton}
In an ADD model, gravity and possibly other exotic fields propagate into the extra dimensions. It is crucial for our proposition to have an idea of the implications of gravitons propagating into the bulk. The KK dimensional reduction of a 5D graviton \cite{Csaki:2004ay} leads to a tower of KK-gravitons with a zero mode, and one massless graviscalar (the radion). A massless graviphoton is also present if there is no boundary for the extra dimension (present for the $N$-rose $\mathcal{R}_N$ and absent for the $N$-star $\mathcal{S}_N$). As the existence of KK-gravitons can have important phenomenological effects, one has to extract their KK mass spectrum and their couplings to the SM fields. By a suitable gauge choice, the Euler-Lagrange equations for a 5D massless graviton reduce to Klein-Gordon equations. One can thus study a 5D massless real scalar field coupled minimally to the energy momentum tensor of the SM to obtain the KK mass spectrum and the couplings of spinless KK-gravitons. This 5D spinless graviton $\Phi$ couples to the energy momentum sources through the effective metric:
\begin{equation}
g_{\mu \nu} = \left( 1 + \dfrac{\Phi}{2\left[ \Lambda_P^{(5)} \right]^{3/2}} \right) \eta_{\mu \nu} \, .
\end{equation}
The metric $g_{\mu \nu}$ and thus $\Phi$ have to be continuous at the junction. We focus on the compactification on $\mathcal{S}_N$. The case of $\mathcal{R}_N$ is very similar. The coupling of the 5D spinless graviton to the energy momentum tensor
\begin{equation}
T^{\mu \nu} = \left. \dfrac{2}{\sqrt{|g|}} \dfrac{\delta S_{SM}}{\delta g_{\mu \nu}} \right|_{g_{\mu\nu} = \eta_{\mu\nu}}
\end{equation}
of the 4D SM fields (of action $S_{SM}$) localized on the $J$-brane is
\begin{equation}
\left(\dfrac{1}{2\left[ \Lambda_P^{(5)} \right]^{3/2}} \, \widetilde{\Phi} \, T^\mu_\mu \, \delta_J\right)[\mathbf{1}] = \int d^4x \sum_{i=1}^N \ \dfrac{1}{2N\left[ \Lambda_P^{(5)} \right]^{3/2}} \, \Phi (x^\mu, 0, i) \, T^\mu_\mu \, .
\label{int_Phi-SM_star_1}
\end{equation}
One can use the KK decomposition of Subsection~\ref{KK_scalar_20} with $M_\Phi = 0$ and treat the brane-localized interactions with the SM fields as a perturbation. The zero mode is identified with the 4D massless graviton. We denote by $n_*$ the number of KK-modes which couple to the SM fields below the cut-off $\Lambda_P^{(5)}$. Only the KK-gravitons with a wave function which does not vanish on the $J$-brane couple to the SM fields. The interaction term \eqref{int_Phi-SM_star_1} gives
\begin{equation}
\int d^4x \left[ \dfrac{1}{2\Lambda_P^{(4)}} \, \phi^{(0, \, 0, \, 1)} (x^\mu) \, T^\mu_\mu + \dfrac{\sqrt{2}}{2\Lambda_P^{(4)}} \, \sum_{n_2 = 1}^{n_*} (-1)^{n_2} \, \phi^{(2, \, n_2, \, 1)}(x^\mu) \, T^\mu_\mu \right] \, ,
\label{int_Phi-SM_star_2}
\end{equation}
where
\begin{equation}
n_* \sim \dfrac{\Lambda_P^{(5)} \, \ell}{\pi} \, .
\label{n_max}
\end{equation}
The KK modes whose wave functions do not vanish at $y=0$ couple individually to the energy momentum tensor of the SM with a coupling suppressed by $\Lambda_P^{(4)}$: they are thus very feebly coupled, and the probability $P_1$ to emit a single KK-graviton is proportional to its coupling squared:
\begin{equation}
P_1 \propto \left[ \dfrac{E}{\Lambda_P^{(4)}} \right]^2 \, ,
\label{P_1}
\end{equation}
where $E$ is the energy of matter originating from $T_\mu^\mu$ in Eq.~\eqref{int_Phi-SM_star_2}. We compare two benchmark scenarios with $\Lambda_P^{(5)} \simeq 1$ TeV:
\paragraph{Benchmark scenario \#1: $N = 1$.}
This case is the traditional situation of ADD models in the literature with only one extra dimension. From Eq.~\eqref{ADD_formula_star}, we have $M_{KK} = 1/\ell \sim \mathcal{O}(10^{-18})$ eV, which is excluded by the success of 4D gravitational Newton's law at the scale of the solar system. Eqs.~\eqref{ADD_formula_star} and \eqref{n_max} give
\begin{equation}
n_* \sim \dfrac{1}{\pi} \left[ \dfrac{\Lambda_P^{(4)}}{\Lambda_P^{(5)}} \right]^2 \sim 10^{30} \, ,
\end{equation}
so we have a large number of KK-gravitons below the cut-off. At colliders with a center of mass energy which reaches $\Lambda_P^{(5)}$, the probability to produce any one of the $n_*$ gravitons then becomes
\begin{equation}
P_* = n_* \, P_1 \propto \left[ \dfrac{E}{\Lambda_P^{(5)}} \right]^2 \, .
\label{P*ADD}
\end{equation}
where we used Eqs.~\eqref{ADD_formula_star}, \eqref{n_max} and \eqref{P_1}.
This last result is also valid in more realistic models with more than one extra dimension which successfully pass the submillimeter tests of 4D gravitational Newton's law. The KK tower can thus be probed and constrained at the LHC ($\sqrt{s}=13$~TeV), c.f. Ref.~\cite{Pomarol:2018oca}.
\paragraph{Benchmark scenario \#2: $N \simeq 6 \times 10^{29}$.} In this case, the large volume in Eq.~\eqref{ADD_formula_star} is generated by a large $N$ and $M_{KK} = 1/\ell \simeq 100$ GeV. Thus only a few KK modes couple to the SM fields: $n_* \simeq 3$ from Eq.~\eqref{n_max}, and $P_* \sim P_1$ at the LHC. The KK tower is therefore completely invisible in current experiments, and the compactification on $\mathcal{S}_N$ can circumvent the current LHC constraints on the KK-gravitons of traditional ADD models.
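Explicitly, the counting behind this benchmark follows from Eq.~\eqref{n_max} with $\ell = 1/M_{KK}$:
\begin{equation*}
n_* \sim \dfrac{\Lambda_P^{(5)}}{\pi \, M_{KK}} \simeq \dfrac{10^3 \ \mathrm{GeV}}{\pi \times 10^2 \ \mathrm{GeV}} \simeq 3 \, .
\end{equation*}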
However, these results follow from the zero-thickness brane hypothesis. How are they modified by a brane width in the UV completion? Indeed, we have already discussed that one expects the singular behavior of the junction to be softened in a UV-completion including gravity. After integrating out the UV degrees of freedom, one is left with an effective brane form factor as in Ref.~\cite{Kiritsis:2001bc} to model the brane width\footnote{We stress that this effective brane form factor has nothing to do with the wave function of the zero mode of quasi-localized 5D fields on the $J$-brane but is related to the UV description of the brane. We have already discussed that the brane-localized SM fields are 4D degrees of freedom in the EFT.}. It is a function $\mathcal{B}_J(y)$ rapidly decreasing over a distance $\ell_P^{(5)}$ and normalized such that
\begin{equation}
\sum_{i=1}^N \int_0^\ell dy \ \mathcal{B}_J(y) = 1 \, .
\end{equation}
One can perform a moment expansion of $\mathcal{B}_J(y)=\Lambda_P^{(5)} b \left( \Lambda_P^{(5)} y \right)$, where $b(y)$ is an intermediate function defined for convenience, such that
\begin{equation}
\widetilde{\mathcal{B}_J} = \sum_{n=0}^{+ \infty} \dfrac{b_n}{\left[\Lambda_P^{(5)}\right]^{n}} \, \partial_y^n \delta_J \, ,
\label{asymp_exp}
\end{equation}
with
\begin{equation}
b_n = \dfrac{(-1)^n}{n!} \int_0^\ell dy \ y^n \, b(y) \, .
\end{equation}
The action describing the interaction between the spinless graviton and the SM fields is
\begin{align}
&\int d^4x \left( \dfrac{1}{2 \left[ \Lambda_P^{(5)} \right]^{3/2}} \, \widetilde{\Phi} \, T^\mu_\mu \, \widetilde{\mathcal{B}_J} \right) [\mathbf{1}] \nonumber \\
&=\int d^4x \ \left(\sum_{n=0}^{+\infty}\dfrac{b_n}{2 \left[ \Lambda_P^{(5)} \right]^{n + 3/2}} \, \widetilde{\Phi} \, T^\mu_\mu \, \partial_y^n \delta_J\right)[\mathbf{1}] \nonumber \\
&= \int d^4x \sum_{n=0}^{+\infty} \sum_{i=1}^N \ \dfrac{(-1)^n \, b_n}{2N\left[ \Lambda_P^{(5)} \right]^{n+3/2}} \, \partial_y^n \Phi (x^\mu, 0, i) \, T^\mu_\mu \, .
\end{align}
One can naively think that the large number of KK-gravitons which do not couple to the SM fields through the operator \eqref{int_Phi-SM_star_1} will have non-vanishing couplings to the SM via the higher-dimensional operators. Then, one expects that $P_*$ is less suppressed than in Eq.~\eqref{P_1}. However, this is not the case. Indeed, by using the equations for the wave functions \eqref{wave_eq_Phi_star}, one can show that
\begin{equation}
\forall l \geq 1 \left\{
\begin{array}{l c l}
\partial_y^{2l} f_\phi^{(b, \, n_b, \, d_b)} (0, i) &=& (-1)^l \, \left[ k_\phi^{(b, \, n_b)} \right]^{2l} f_\phi^{(b, \, n_b, \, d_b)} (0, i) \, , \\ \\
\partial_y^{2l+1} f_\phi^{(b, \, n_b, \, d_b)} (0, i) &=& (-1)^l \, \left[ k_\phi^{(b, \, n_b)} \right]^{2l} \partial_y f_\phi^{(b, \, n_b, \, d_b)} (0, i) \, .
\end{array}
\right.
\end{equation}
Therefore, for $n$ even, again only the tower $b=2$ contributes, with an extra suppression factor $\left[ M_{KK}/\Lambda_P^{(5)} \right]^n$. For $n$ odd, the Neumann-Kirchhoff junction conditions \eqref{JC_Phi_star} imply that these operators vanish. We conclude that even when we take into account the brane width in the UV, the KK-graviton towers are still invisible at the LHC ($\sqrt{s}=14$~TeV). The KK-gravitons with $b \neq 2$ constitute a hidden sector.
Note also that this important feature of the $N$-star compactification is valid only if the SM fields are localized on the $J$-brane. If one localizes them on one of the $B_i$-branes instead, they also couple to the KK-gravitons whose wave functions vanish at the $J$-brane. One can easily show that this crucial difference implies that $P_*$ is of the same form as in Eq.~\eqref{P*ADD}, as in the case of standard ADD models, and one will be able to constrain this scenario at the LHC.
\subsubsection{Ultraviolet Gravitational Objects}
Black holes are expected to appear near the cut-off scale $\Lambda_P^{(5)}$, when the coupling to 5D gravitons becomes non-perturbative. However, in the case of the benchmark scenario \#2, we saw that the coupling of the energy-momentum tensor of the SM to the linear superposition of KK-gravitons is suppressed by $\Lambda_P^{(4)}$ instead of $\Lambda_P^{(5)}$, so one expects that the couplings of the brane-localized SM fields to the tower of KK-gravitons remain perturbative well above $\Lambda_P^{(5)}$, questioning the possibility of producing black holes in trans-Planckian collisions of SM particles. However, once the linear superposition of KK-gravitons with a trans-Planckian energy leaves the $J/V$-brane, where it was perturbatively produced through SM fields in a trans-Planckian collision, it will interact with all the KK-gravitons, including those whose wave functions vanish on the $J/V$-brane. This last process is non-perturbative above $\Lambda_P^{(5)}$ and will produce a black hole. Near this threshold, the black holes are dominated by quantum corrections; one then speaks of Quantum Black Holes (QBHs) \cite{Rizzo:2006zb, Alberghi:2006km, Meade:2007sz, Casadio:2008qy, Calmet:2008dg, Gingrich:2009hj, Gingrich:2010ed, Dvali:2010gv, Nicolini:2011nz, Mureika:2011hg, Calmet:2012fv, Kilic:2012wp, Belyaev:2014ljc, Arsene:2016kvf}, which need a complete theory of quantum gravity to be described. Besides, the lightest QBH, the Planckion \cite{Treder:1985kb, Dvali:2016ovn}, is the last stage of the evaporation of a semi-classical black hole by Hawking radiation. In some models, this black hole remnant \cite{Koch:2005ks, Dvali:2010gv, Bellagamba:2012wz, Alberghi:2013hca} is stable and one can speculate that it constitutes a part of dark matter \cite{Conley:2006jg, Dvali:2010gv, Nakama:2018lwy}. There is also a large number of KK-gravitons below the TeV scale whose wave functions vanish on the $J/V$-brane, where the SM fields are localized (c.f. Subsection~\ref{KK_scalar_20}): these KK-gravitons interact only with gravity in the bulk, and constitute a natural hidden sector which could be populated by black hole evaporation during the early Universe.
\section{Toy Model of Small Dirac Neutrino Masses}
\label{Dirac_Neutrinos}
\subsection{Zero Mode Approximation}
It is known from Refs.~\cite{Dienes:1998sb, ArkaniHamed:1998vp, Dvali:1999cn} that if the left-handed neutrinos, localized on the SM brane, interact with gauge singlet neutrinos propagating in a bulk compactified on an internal torus $\left( \mathcal{R}_1 \right)^q$ of radius $R$ and large volume $\mathcal{V}_q$, one can get small Dirac masses for the neutrinos. With one left-handed neutrino and one gauge singlet neutrino (without BLKTs), the Dirac mass is \cite{ArkaniHamed:1998vp}:
\begin{equation}
m_\nu \simeq \dfrac{\left| y_\nu^{(4+q)} \right| v}{\sqrt{2 \mathcal{V}_q}} \, ,
\ \ \ \mathcal{V}_q = (2 \pi R)^q \, ,
\label{m_nu_1}
\end{equation}
where $v$ is the SM Higgs field VEV, and $y_\nu^{(4+q)}$ is the $(4+q)$D Yukawa coupling of mass dimension $-q/2$. Eq.~\eqref{m_nu_1} is valid if one can use the zero mode approximation, i.e. neglect the mixing between the zero mode and the KK-excitations of the bulk gauge singlet neutrino:
\begin{equation}
\dfrac{\left| y_\nu^{(4+q)} \right| v}{\sqrt{2\mathcal{V}_q}} \ll M_{KK} \equiv \dfrac{1}{R} \, .
\end{equation}
For a natural value
\begin{equation}
\left| y_\nu^{(4+q)} \right| \sim \left[ \ell_P^{(4+q)} \right]^{q/2} \, ,
\end{equation}
with $\Lambda_P^{(4+q)} \sim \mathcal{O}(1) \ \rm{TeV}$, one has, with Eq.~\eqref{ADD_formula},
\begin{equation}
m_\nu \sim \dfrac{v}{\sqrt{2\mathcal{V}_q \left[\Lambda_P^{(4+q)}\right]^q}} = \dfrac{v \Lambda_P^{(4+q)}}{\sqrt{2}\Lambda_P^{(4)}} \sim \mathcal{O}(0.1) \ \rm{meV} \, ,
\end{equation}
which is a good order of magnitude for the neutrino masses.
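Numerically, taking $v \simeq 246$ GeV and the reduced 4D Planck scale $\Lambda_P^{(4)} \simeq 2.4 \times 10^{18}$ GeV as assumed inputs,
\begin{equation*}
m_\nu \sim \dfrac{v \, \Lambda_P^{(4+q)}}{\sqrt{2} \, \Lambda_P^{(4)}} \simeq \dfrac{(246 \ \mathrm{GeV}) \times (10^3 \ \mathrm{GeV})}{\sqrt{2} \times 2.4 \times 10^{18} \ \mathrm{GeV}} \simeq 7 \times 10^{-14} \ \mathrm{GeV} \simeq 0.07 \ \mathrm{meV} \, .
\end{equation*}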
We want to see if it is possible to build such a model for an LED compactified on the metric graph $\mathcal{K}_N$. For the compactification on a star/rose graph, one takes a 4D left-handed neutrino $\nu_L$ of mass dimension 3/2 localized on the $J/V$-brane, and a 5D gauge singlet neutrino $\Psi$ of mass dimension 2 propagating into the bulk. The action of the model is
\begin{align}
S_{\nu} &= S_\Psi + \int d^4x \left( \widetilde{\mathcal{L}_\nu} + \widetilde{\mathcal{L}_{\Psi \nu}} \right) \delta_{J/V}[\mathbf{1}] \, , \nonumber \\
&= S_\Psi + \int d^4x \left( \mathcal{L}_\nu + \left. \widetilde{\mathcal{L}_{\Psi \nu}} \right|_{y=0} \right) \, .
\label{S_nu}
\end{align}
The free action $S_\Psi$ is given by Eq.~\eqref{S_Psi_star}, and
\begin{equation}
\mathcal{L}_\nu = \dfrac{i}{2} \, \nu_L^\dagger \bar{\sigma}^\mu \overleftrightarrow{\partial_\mu} \nu_L \, .
\end{equation}
The brane-localized mass term is
\begin{equation}
\widetilde{\mathcal{L}_{\Psi \nu}} = - \dfrac{y_\nu^{(5)} v}{\sqrt{2}} \, \nu_L^\dagger \widetilde{\Psi_R} + \rm{H.c.} \, ,
\label{nu_Yuk}
\end{equation}
where $y_\nu^{(5)}$ can be taken real since a phase shift of the Yukawa coupling can be compensated by a phase shift of the field $\nu_L$. We have imposed that the leptonic number $L$ is conserved, so $U(1)_L$ is a symmetry of the model: in this way, bulk and brane-localized Majorana mass terms for the neutrino fields are not allowed. We have also assumed the absence of a bulk Dirac mass term to simplify the discussion. By adopting a perturbative approach, where $\widetilde{\mathcal{L}_{\Psi \nu}}$ is treated as a perturbation, one can perform the KK dimensional reduction of Section~\ref{Dirac_field}. In the regime where we can use the zero mode approximation, i.e. when
\begin{equation}
\dfrac{y_\nu^{(5)} v}{\sqrt{2L}} \ll M_{KK} \equiv \dfrac{1}{\ell} \, ,
\end{equation}
we get a mass term for the zero mode neutrino:
\begin{equation}
-m_\nu \, \nu_L^\dagger \psi_R^{(0, \, 0, \, 1)} + \rm{H.c.} \, ,
\end{equation}
with
\begin{equation}
m_\nu = \dfrac{y_\nu^{(5)} v}{\sqrt{2}} \, f_R^{(0, \, 0, \, 1)}(0)
= \dfrac{y_\nu^{(5)} v}{\sqrt{2L}} \, .
\label{m_nu_2}
\end{equation}
For a natural value
\begin{equation}
y_\nu^{(5)} \sim \sqrt{\ell_P^{(5)}} \, ,
\end{equation}
with $\Lambda_P^{(5)} \simeq 1 \ \rm{TeV}$, one has, from Eqs.~\eqref{ADD_formula_star}, \eqref{m_nu_1} and \eqref{m_nu_2},
\begin{equation}
m_\nu \sim \dfrac{v}{\sqrt{2L \Lambda_P^{(5)}}} = \dfrac{v \Lambda_P^{(5)}}{\sqrt{2}\Lambda_P^{(4)}} \sim \mathcal{O}(0.1) \ \rm{meV} \, .
\end{equation}
As $m_\nu \ll M_{KK} \equiv 1/\ell$ for the benchmark scenario \#2, the zero mode approximation is justified.
However, within the perturbative approach, we find that the $N$ left-handed zero modes of Section~\ref{Dirac_field} for the $N$-rose $\mathcal{R}_N$ do not get masses from the brane-localized mass term \eqref{nu_Yuk}. They remain massless and do not mix with the left-handed neutrino localized on the $V$-brane: they are sterile neutrinos which do not participate in neutrino oscillations. However, they are coupled to gravity and may have an impact on the cosmological history. As our model requires a large $N$, it appears to be ruled out by cosmological constraints which are sensitive to the number of light fermionic degrees of freedom. Even with a brane-localized reheaton, which does not couple to the modes with discontinuous wave functions like the $N$ left-handed zero modes, mini-black hole evaporation should produce them in the early Universe. A solution could be to add a new ingredient to the model to give a mass to these $N$ zero modes. A priori, our toy model is thus interesting only for the compactification on a star graph.
\subsection{Exact Treatment}
\label{exact_treatment}
\subsubsection{Euler-Lagrange Equations \& Junction/Boundary Conditions}
In this subsection, we take the effect of the brane-localized mass term $\widetilde{\mathcal{L}_{\Psi \nu}}$ on the KK mass spectrum exactly into account with the 5D method of Refs.~\cite{Angelescu:2019viv, Nortier:2020xms}. From Hamilton's principle applied to the action $S_\nu$ \eqref{S_nu}, we get the Euler-Lagrange equations: Eq.~\eqref{Dirac_Psi_star} and
\begin{equation}
i \bar{\sigma}^\mu \partial_\mu \nu_L(x^\mu) - M \, \Psi_R(x^\mu, 0, i) = 0 \, ,
\label{nu_ELE_5D}
\end{equation}
with
\begin{equation}
M = \dfrac{y^{(5)}_\nu v}{\sqrt{2}} \, .
\end{equation}
We get also a Kirchhoff junction condition for the left-handed field on the $J/V$-brane:
\begin{equation}
\left\{
\begin{array}{rcl}
\displaystyle{\sum_{i=1}^N \Psi_L (x^\mu, 0, i) = M \, \nu_L(x^\mu)} & \text{for} & \mathcal{K}_N = \mathcal{S}_N \, , \\
\displaystyle{\sum_{i=1}^N \left[ \Psi_L (x^\mu, y, i) \right]_{y=0}^{\ell} = M \, \nu_L(x^\mu)} & \text{for} & \mathcal{K}_N = \mathcal{R}_N \, ,
\end{array}
\right.
\label{Kir_junc_mix_Psi}
\end{equation}
and Dirichlet boundary conditions \eqref{D_psi_B} for the left-handed field on the $B_i$-branes.
\subsubsection{Separation of Variables}
We want to solve the field equations by separation of variables and sum over all linearly independent solutions. We thus write the KK decomposition \eqref{KK_Psi_star_1} and expand $\nu_L$ as a linear superposition of the left-handed KK modes which are mass eigenstates:
\begin{equation}
\nu_L(x^\mu) = \sum_{b} \ \sum_{n_b} \sum_{d_b} a^{(b, \, n_b, \, d_b)} \, \psi_L^{(b, \, n_b, \, d_b)} \left( x^\mu \right) \, ,
\label{nu_L_sep}
\end{equation}
with $a^{(b, \, n_b, \, d_b)} \in \mathbb{C}$. Indeed, the brane-localized mass term induces a mixing between the field $\nu_L$ and the KK modes of $\Psi_L$ obtained in Section~\ref{Dirac_field}. Here, we expand the fields $\nu_L$ and $\Psi_L$ in the same KK basis spanned by the $\psi_L^{(b, \, n_b, \, d_b)}$'s (the basis of the mass eigenstates). The reader can follow the discussion between Eqs.~\eqref{KK_Psi_star_1} and \eqref{orthonorm_KK-Phi_star}; we use the same notations, but we stress that, in Section~\ref{Dirac_field}, $\psi_L^{(b, \, n_b, \, d_b)}$ is an element of the KK basis without a brane-localized mass term, whereas here it is an element of the KK basis including the effect of the brane-localized mass term. Besides, the orthonormalization conditions \eqref{orthonorm_KK-Phi_star} for the functions $f_{L/R}^{(b, \, n_b, \, d_b)} \neq 0$ are replaced by
\begin{equation}
\left\{
\begin{array}{rcl}
\displaystyle{\left[a^{(b, \, n_b, \, d_b)}\right]^* a^{(b', \, n'_{b'}, \, d'_{b'})} + \sum_{i=1}^N \int_0^\ell dy \ \left[ f_{L}^{(b, \, n_b, \, d_b)}(y, i) \right]^* \, f_{L}^{(b', \, n'_{b'}, \, d'_{b'})}(y, i)} &=& \delta^{bb'} \, \delta^{n_{b} n'_{b'}} \, \delta^{d_{b} d'_{b'}} \, , \\
\displaystyle{\sum_{i=1}^N \int_0^\ell dy \ \left[ f_{R}^{(b, \, n_b, \, d_b)}(y, i) \right]^* \, f_{R}^{(b', \, n'_{b'}, \, d'_{b'})}(y, i)} &=& \delta^{bb'} \, \delta^{n_{b} n'_{b'}} \, \delta^{d_{b} d'_{b'}} \, .
\end{array}
\right.
\label{norm_wave_Psi_star_nu}
\end{equation}
The conditions on the 5D fields $\Psi_{L/R}$ become conditions on the KK wave functions $f_{L/R}^{(b, \, n_b, \, d_b)}$ by using Eq.~\eqref{KK_Psi_star_1}. There is a new Kirchhoff junction condition on the $J/V$-brane from Eq.~\eqref{Kir_junc_mix_Psi}:
\begin{equation}
\left\{
\begin{array}{rcl}
\displaystyle{\sum_{i=1}^N f_L^{(b, \, n_b, \, d_b)} (0, i) = a^{(b, \, n_b, \, d_b)} \, M} & \text{for} & \mathcal{K}_N = \mathcal{S}_N \, , \\
\displaystyle{\sum_{i=1}^N \left[ f_L^{(b, \, n_b, \, d_b)} (y, i) \right]_{y=0}^{\ell} = a^{(b, \, n_b, \, d_b)} \, M } & \text{for} & \mathcal{K}_N = \mathcal{R}_N \, .
\end{array}
\right.
\label{JC_nu_Psi_wave_1}
\end{equation}
Moreover, Eq.~\eqref{nu_ELE_5D}, with Eqs.~\eqref{Dirac_KK-Psi_star}, \eqref{KK_Psi_star_1} and \eqref{nu_L_sep}, gives:
\begin{equation}
a^{(b, \, n_b, \, d_b)} \, m_\psi^{(b, \, n_b)} = M \, f_R^{(b, \, n_b, \, d_b)}(0, i) \, .
\label{eq_a}
\end{equation}
For $m_\psi^{(b, \, n_b)} \neq 0$, Eqs.~\eqref{JC_nu_Psi_wave_1} and \eqref{eq_a} together lead to:
\begin{equation}
\left\{
\begin{array}{rcl}
\displaystyle{\sum_{i=1}^N f_L^{(b, \, n_b, \, d_b)} (0, i) = \dfrac{M^2}{m_\psi^{(b, \, n_b)}} \, f_R^{(b, \, n_b, \, d_b)}(0, i)} & \text{for} & \mathcal{K}_N = \mathcal{S}_N \, , \\
\displaystyle{\sum_{i=1}^N \left[ f_L^{(b, \, n_b, \, d_b)} (y, i) \right]_{y=0}^{\ell} = \dfrac{M^2}{m_\psi^{(b, \, n_b)}} \, f_R^{(b, \, n_b, \, d_b)}(0, i) } & \text{for} & \mathcal{K}_N = \mathcal{R}_N \, .
\end{array}
\right.
\label{JC_nu_Psi_wave_2}
\end{equation}
\subsubsection{Kaluza-Klein Mode Analysis on the Star Graph}
We give here only the KK mode analysis of the $N$-star $\mathcal{S}_N$, since the $N$-rose $\mathcal{R}_N$ compactification should be incompatible with cosmology without additional assumptions. For completeness, we give the KK-mode analysis on $\mathcal{R}_N$ in Appendix~\ref{neutrino_rose_app}.
\vspace{1cm}
There are no zero modes ($b=0$, $n_0=0$, $m_\psi^{(0, \, 0)}=0$) with $M \neq 0$ for $\mathcal{K}_N = \mathcal{S}_N$. Let us look at massive KK modes ($m_\psi^{(b, \, n_b)} \neq 0$). The coupled first order differential equations \eqref{wave_eq_Psi_star} can be decoupled into second order ones: Eq.~\eqref{eq_Psi_wave_2nd}. The KK wave functions $f_{R}^{(b, \, n_b, \, d_b)}$ are still continuous across the junction. We will use the same method as in Subsections~\ref{KK_scalar_20} and \ref{KK_Drac_20}. We give the results in what follows.
\subsubsection*{\boldmath \textcolor{black}{\textit{First case: $f_R^{(b, \, n_b, \, d_b)}(0,i)=0$}}}
The results are identical to the ones in Subsection~\ref{excited_modes_fermion}, Paragraph ``First case: $f_R^{(b, \, n_b, \, d_b)}(0, i)=0$'' of a) p.~\pageref{1st_case_psi_star}. This condition gives the mass spectrum \eqref{mass_spect_Psi_star_2} which defines the KK tower $b=1$. We have $a^{(1, \, n_1, \, d_1)} = 0$ from Eq.~\eqref{eq_a} so the left-handed modes do not mix with $\nu_L$: they are completely sterile, interact only with gravity, and are thus part of the hidden sector of the model.
\subsubsection*{\boldmath \textcolor{black}{\textit{Second case: $f_R^{(b, \, n_b, \, d_b)}(0,i) \neq 0$}}}
The KK mass spectrum is given by the transcendental equation:
\begin{equation}
m_\psi^{(2, \, n_2)} \, \tan \left[ m_\psi^{(2, \, n_2)} \, \ell \right] = \dfrac{M^2}{N} \, , \ \ \ n_2 \in \mathbb{N} \, ,
\end{equation}
whose solutions $m_\psi^{(2, \, n_2)}$ define the KK tower $b=2$ and are not degenerate ($d_2 \in \{1\}$). We have $a^{(2, \, n_2, \, 1)} \neq 0$ from Eq.~\eqref{eq_a} so the left-handed modes mix with $\nu_L$. The lightest mode $(2, 0, 1)$ is identified with the neutrino we observe in Nature\footnote{Of course, we observe three generations of neutrinos in Nature and here we consider a toy model with only one generation.}. In the decoupling limit $\ell \rightarrow 0$, the mass of this mode is given by Eq.~\eqref{m_nu_2} in the zero mode approximation. Indeed, in this limit the excited KK modes decouple and their mixing with the lightest massive mode $(2, 0, 1)$ goes to zero. The KK wave functions are
\begin{equation}
\left\{
\begin{array}{rcl}
f_L^{(2, \, n_2, \, 1)} (y, i) &=& - \left[ \dfrac{N\ell}{2} + \dfrac{M^2(2N-1)}{2 \left(\left[m_\psi^{(2, \, n_2)}\right]^2 + \left[ \dfrac{M^2}{N} \right]^2 \right)} \right]^{-1/2} \, \sin \left[ m_\psi^{(2, \, n_2)} \, (y-\ell) \right] \, , \\ \\
f_R^{(2, \, n_2, \, 1)} (y, i) &=& \left[ \dfrac{N\ell}{2} + \dfrac{M^2(2N-1)}{2 \left(\left[m_\psi^{(2, \, n_2)}\right]^2 + \left[ \dfrac{M^2}{N} \right]^2 \right)} \right]^{-1/2} \, \cos \left[ m_\psi^{(2, \, n_2)} \, (y-\ell) \right] \, ,
\end{array}
\right.
\end{equation}
where the $f_L^{(2, \, n_2, \, 1)}$'s are discontinuous across the $J$-brane (except for $\mathcal{K}_N = \mathcal{S}_1$, the interval, where they are taken continuous). This discontinuity is sourced by the brane-localized interaction.
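One can make the decoupling statement above explicit by expanding the transcendental mass equation for the lightest mode: for $m_\psi^{(2, \, 0)} \, \ell \ll 1$, one has $\tan \left[ m_\psi^{(2, \, 0)} \, \ell \right] \simeq m_\psi^{(2, \, 0)} \, \ell$, so
\begin{equation*}
\left[ m_\psi^{(2, \, 0)} \right]^2 \ell \simeq \dfrac{M^2}{N} \ \Rightarrow \ m_\psi^{(2, \, 0)} \simeq \dfrac{M}{\sqrt{N \ell}} = \dfrac{y_\nu^{(5)} v}{\sqrt{2L}} \, ,
\end{equation*}
which indeed recovers Eq.~\eqref{m_nu_2}.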
\vspace{1cm}
In a nutshell, the massive KK modes still have a mass gap of order $1/\ell$. Only the KK modes whose right-handed Weyl spinors have non-vanishing wave functions at the junction in the absence of brane-localized Yukawa couplings (c.f. Subsection~\ref{KK_Drac_20}) are affected by the addition of the brane-localized SM left-handed neutrino $\nu_L$ and Higgs field. The KK masses are shifted and the wave functions of the left-handed Weyl spinors become discontinuous at the junction\footnote{In the literature, it is known that an interaction localized on a brane away from a boundary or at a fixed point of the orbifolds $\mathcal{R}_1/\mathbb{Z}_2$ and $\mathcal{R}_1/(\mathbb{Z}_2 \times \mathbb{Z}_2')$ implies a discontinuity for a 5D fermion field \cite{Bagger:2001qi, Csaki:2003sh, Casagrande:2008hr, Casagrande:2010si, Carena:2012fk, Malm:2013jia, Barcelo:2014kha}.}. This result is easy to understand when the Yukawa interaction with the VEV of the Higgs field is treated as a perturbation: only the modes with non-vanishing wave functions at the junction in Subsection~\ref{KK_Drac_20} have non-vanishing matrix elements. These modes mix with the SM left-handed neutrino. The other ones are completely sterile and interact only through gravity.
\section{Conclusion \& Perspectives}
\label{conclusion_star_rose}
In this work, we have studied the possibility of compactifying a large spacelike extra dimension on a star/rose graph with identical leaves/petals. In Section~\ref{spacetime_geom}, we have adapted Kurasov's distribution theory to a star/rose graph. In this way, we have defined a rigorous framework to build a field theory on these geometries.
In Sections~\ref{KG_field} and \ref{Dirac_field}, we have worked out the KK decomposition of a Klein-Gordon and Dirac field respectively. Our main contributions, compared to the previous articles \cite{Kim:2005aa, Fujimoto:2019fzb}, are discussions concerning the different possibilities for the continuity of the fields at the junction and the impact on the KK-spectrum. In particular, we have pointed out the case of an airtight brane (when the off-shell fields are allowed to be discontinuous), which is equivalent to $N$ disconnected bonds. Moreover, we have studied for the first time the KK-modes of a massless 5D Dirac fermion propagating into the whole star/rose graph. We have also discussed the chirality of the zero modes. For both bosonic and fermionic massless fields, the KK scale is given by the inverse of the leaf/petal length/circumference.
One can realize a large compactified volume with a high KK scale for a large number of small leaves/petals. This possibility has been investigated in Section~\ref{ADD_star_rose} in order to lower the gravity scale to the TeV scale and solve the gauge hierarchy problem under the hypothesis of an exact global $\Sigma_N$ symmetry. Moreover, we have shown that if the SM fields are localized on the 3-brane at the junction of the star/rose graph, they couple to only a few modes of the whole tower of KK-gravitons, even when a UV brane thickness is taken into account. The couplings of the SM fields to this KK tower are suppressed by the large 4D Planck scale instead of the 5D one at the TeV scale: the KK-gravitons are thus completely invisible at the LHC ($\sqrt{s}=14$~TeV). This result is in sharp contrast to standard ADD models in the literature with compactification on a torus or its orbifolds, where the SM fields couple to the whole tower of KK-gravitons, which implies couplings suppressed only by the 5D Planck scale (near a TeV), which one can constrain at the LHC.
Besides KK-gravitons, our proposition can still be probed at hadronic colliders through the search for strongly coupled phenomena induced by gravity, like QBHs at the LHC ($\sqrt{s}=14$~TeV) or semi-classical black holes at the Future Circular Collider proton-proton (FCC pp, $\sqrt{s}=100$~TeV). The absence of a theory of Planckian gravity makes it difficult to give precise predictions concerning the production and decay of QBHs or other exotic states near the Planck scale. It is thus delicate to translate the LHC data ($\sqrt{s}=13$~TeV) into constraints on the 5D Planck scale of our model and to estimate the degree of fine tuning which remains to accommodate an EW scale at 100 GeV.
Finally, in Section~\ref{Dirac_Neutrinos} we have proposed to realize in our scenario a toy model of small Dirac neutrino masses. For that purpose, we have considered only one generation of neutrinos coupled to one gauge singlet fermion in the bulk. The large compactified volume suppresses the 5D Yukawa coupling of order unity, and we have been able to reproduce the right order of magnitude for the mass of the neutrinos, with a model which also accommodates a 5D Planck scale at the TeV scale. This kind of model was discussed previously only with a toroidal/orbifold compactification, and the adaptation to a star/rose graph is new. The model is realistic only for the compactification on a star graph since the rose graph predicts a large number of massless left-handed sterile neutrinos incompatible with cosmology. Moreover, we have found that our models have a hidden sector consisting of secluded KK-gravitons and sterile KK-neutrinos, possibly populated during the early Universe by the decays of mini-black holes in the bulk. The Planckion could also be a candidate for dark matter.
Our effective model (as well as the traditional UED models) has a global $\Sigma_N$ symmetry which acts on the geometry. One of the Swampland conjectures is the absence of global symmetries in a complete theory of quantum gravity (see Refs.~\cite{Brennan:2017rbf, Palti:2019pca} for reviews on the Swampland program). If true, there is no global $\Sigma_N$ symmetry and one has a different leaf/petal size $\ell_i$ for each $i$. One can define the average leaf/petal size as
\begin{equation}
\langle \ell \rangle = \dfrac{1}{N} \sum_i \ell_i \, ,
\end{equation}
such that Eq.~\eqref{ADD_formula_star} is still valid but with $L = N \langle \ell \rangle$. The mass spectrum must be studied numerically. In general, the mass scale of the lightest KK-mode is given by the inverse of the largest $\ell_i$. If the $\ell_i$'s are incommensurate (i.e. $\forall i \neq j, \ \ell_i/\ell_j \notin \mathbb{Q}$), the KK-spectrum is chaotic \cite{PhysRevLett.79.4794, Kottos_1999, Gaspard2000a, Dabaghian2001a, PhysRevLett.88.044101, kottos2005quantum, Cacciapaglia:2006tg}, and there are no KK modes whose wave functions vanish at the $J$-brane. Moreover, for different $\ell_i$'s there are more KK-modes below the cut-off $\Lambda_P^{(5)}$ which couple to the $J$-brane: $P_*$ is thus less suppressed than for identical $\ell_i$'s, such that one is in an intermediate situation between the benchmark scenarios \#1 and \#2.
In a realistic model including gravitational effects, instead of an exact global $\Sigma_N$ symmetry, one should consider a geometry with an approximate one. In the case of Goldberger-Wise (GW) mechanisms within each leaf, the $\ell_i$'s would depend on the mass parameter of the GW scalar field within each leaf. Identical leaf lengths (exact $\Sigma_N$ symmetry) correspond to $\Sigma_N$ symmetric mass parameters within all leaves. The question is then to what extent quantum gravity effects affect classically $\Sigma_N$ symmetric mass parameters. If these modifications remain within $\sim 10\%$ for all leaves, i.e. the effects of gravity can be considered as a perturbation of the geometry with an exact $\Sigma_N$ symmetry, the hierarchy problem could be considered as solved, with the common scale being given by $1/\langle \ell \rangle$. On the other hand, if the modified leaf lengths follow a statistical distribution (like a Gaussian) around a central value $\langle \ell \rangle$, the hierarchy problem can be considered as solved only if this distribution is extremely narrow, such that the number of very light KK-gravitons remains compatible with present constraints, see Subsection~\ref{KK-graviton}. In the absence of a concrete theory of quantum gravity, it is impossible to answer these questions. Therefore, the toy model of this article with an exact global $\Sigma_N$ symmetry should not be considered as a viable solution of the gauge hierarchy problem but as a scenario within which (among others) a solution of the hierarchy problem may be possible.
We also want to discuss some perspectives for future investigations. As a follow-up of the present work, it would be important to study the unitarity constraints on our models, since a low gravity scale is known to need a UV completion at a scale lower than the higher-dimensional Planck scale in standard ADD models with a toroidal compactification \cite{Atkins:2010re, Antoniadis:2011bi}. It is also important to see how the mechanism to produce small neutrino masses is influenced by adding bulk and brane-localized Majorana mass terms, in the way of Ref.~\cite{Dienes:1998sb}. Moreover, strongly coupled gravity at the TeV scale may generate dangerous brane-localized higher-dimensional operators inducing proton decay, large Majorana neutrino masses and Flavor Changing Neutral Currents (FCNCs) \cite{Antoniadis:1998ig}. Without knowledge of the UV completion, we cannot know if these operators are naturally suppressed. If this is not the case, the natural values of their Wilson coefficients are suppressed only by the TeV scale, and one has to add new ingredients to the scenarios to forbid them, like gauging some global symmetries of the SM such as the baryon and lepton numbers \cite{Perez:2015rza} and other flavor symmetries \cite{Berezhiani:1998wt, ArkaniHamed:1998sj, ArkaniHamed:1999yy}.
Beyond the motivations for the present work, we stress that the 5D background geometries that we studied here can be used in general to generate feebly coupled interactions. Indeed, the couplings of the whole KK tower of a 5D field, coupled to SM fields localized at the junction of the star/rose graph, are in general suppressed by the square root of the compactified volume. One can easily imagine how this can be used to build consistent models of axions and dark matter with order one 5D couplings. Moreover, a 5D field has KK modes whose wave functions vanish at the junction where the SM is localized: they are thus good candidates for a hidden sector. The star graph compactification with a small number of leaves can also be used to build models of 5D SM fields, as the theory is chiral at the level of zero modes for 5D fermions. Generating $N$ fermion zero modes from only one 5D fermion propagating into a star/rose graph with $N$ leaves/petals and an airtight brane is interesting from the point of view of flavor physics. Moreover, it would be interesting to see if one can implement a 5D supersymmetric field theory or 5D supergravity on the star/rose background. In every scenario, it is important to investigate different possibilities of field-theoretical mechanisms to stabilize the leaf/petal length scale.
\acknowledgments
I would like to thank Ulrich Ellwanger for encouragement to develop my ideas of model building with an extra dimension compactified on a
star/rose graph and for reviewing the manuscript. Thanks to Sylvain
Fichet, Ruifeng Leng, Grégory Moreau, Jérémie Quevillon and Robin Zegers for useful discussions. This research was supported by the IDEX Paris-Saclay, the collège doctoral of the Université Paris-Saclay and the Université Paris-Sud.
Optimal scaling of the ADMM method for distributed quadratic programming was addressed. In particular, a class of distributed quadratic problems was cast as equality-constrained quadratic problems, to which the scaled ADMM method is applied. For this class of problems, the network-constrained scaling corresponds to the usual step-size constant and the edge weights of the communication graph. Under mild assumptions on the communication graph, analytical expressions for the optimal convergence factor and the optimal step-size were derived in terms of the spectral properties of the graph. Supposing the optimal step-size is chosen, the convergence factor is further minimized by optimally choosing the edge weights. Our results were illustrated in numerical examples and significant performance improvements over state-of-the-art techniques were demonstrated. As future work, we plan to extend the results to a broader class of distributed quadratic problems.
\section{The ADMM method}\label{sec:background}
The ADMM algorithm solves problems of the form
\begin{align}\label{eq:constrained problem}
\begin{array}[c]{ll}
\underset{x,\,z}{\mbox{minimize}} & f(x)+g(z)\\
\mbox{subject to} & Ex+Fz - h = 0
\end{array}
\end{align}
where $f$ and $g$ are convex functions, $x\in {\mathbf R}^n$, $z\in {\mathbf R}^m$, and $h\in {\mathbf R}^p$. Moreover, $E\in {\mathbf R}^{p\times n}$ and $F\in {\mathbf R}^{p\times m}$ have full column rank; see~\cite{Boyd11} for a detailed review.
The method is based on the \emph{augmented Lagrangian}
\begin{align}
\label{eq:augmented_Lagrangian}
L_{\rho}(x,z,\mu) &= f(x)+g(z) + (\rho/2)\Vert Ex+Fz - h \Vert_2^2 \\
&+ \mu^{\top}(Ex+Fz - h) \nonumber
\end{align}
and performs sequential minimization of the $x$ and $z$ variables, followed by a dual variable update:
\begin{align}
x^{k+1} &= \underset{x}{\operatorname{argmin}}\, L_{\rho}(x,z^{k}, \mu^{k}) \nonumber\\
z^{k+1} &= \underset{z}{\operatorname{argmin}}\, L_{\rho}(x^{k+1}, z, \mu^k) \label{eqn:admm_iterations}\\
\mu^{k+1} &= \mu^{k} + \rho(Ex^{k+1}+Fz^{k+1} - h). \nonumber
\end{align}
\iffalse
It is often convenient to express the iterations in terms of the scaled dual variable $\mu = \rho u$, leading to iterations
\begin{align}
x^{k+1} &= \underset{x}{\operatorname{argmin}} f(x)+ (\rho/2)\Vert Ex+Fz^k -h + u^k\Vert_2^2\nonumber \\
z^{k+1} &= \underset{z}{\operatorname{argmin}} g(z) + (\rho/2)\Vert Ex^{k+1}+Fz - h +u^{k}\Vert_2^2 \label{eqn:admm_scaled}\\
u^{k+1} &= u^{k} + Ex^{k+1} + Fz^{k+1} - h\nonumber
\end{align}
\fi
These iterations indicate that the method is particularly useful when the $x$- and $z$-minimizations can be carried out efficiently (e.g.~admit closed-form expressions). One advantage of the method is that there is only a single algorithm parameter, $\rho$, and under rather mild conditions, the method can be shown to converge for all values of the parameter; see, e.g.,~\cite{Boyd11}.
However, $\rho$ has a direct impact on the convergence speed of the algorithm, and inadequate tuning of this parameter may render the method very slow. In the remaining parts of this paper, we will derive explicit expressions for the step-size $\rho$ that minimizes the convergence factor~\eqref{eq:convergence_factor} for some particular classes of problems.
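As a concrete illustration of the iterations \eqref{eqn:admm_iterations}, the following minimal Python/NumPy sketch applies them to a toy equality-constrained QP with $f(x) = \frac{1}{2}x^{\top}Px + q^{\top}x$ and $g(z) = \frac{1}{2}z^{\top}Sz$; the problem data ($P$, $S$, $E$, $F$, $h$) and the step-size $\rho = 1$ are illustrative choices, not taken from this paper.
\begin{verbatim}
import numpy as np

# Toy instance of the equality-constrained problem above:
# f(x) = 0.5 x'Px + q'x, g(z) = 0.5 z'Sz, subject to Ex + Fz = h.
P = np.diag([1.0, 2.0, 3.0]); q = np.array([1.0, -1.0, 0.5])
S = np.eye(3)
E = np.eye(3); F = -np.eye(3); h = np.zeros(3)   # enforces x = z

rho = 1.0                          # the single ADMM step-size parameter
x = np.zeros(3); z = np.zeros(3); mu = np.zeros(3)
for k in range(200):
    # x-update: minimize L_rho over x (closed form for quadratic f)
    x = np.linalg.solve(P + rho * E.T @ E,
                        -(q + E.T @ (mu + rho * (F @ z - h))))
    # z-update: minimize L_rho over z (closed form for quadratic g)
    z = np.linalg.solve(S + rho * F.T @ F,
                        -F.T @ (mu + rho * (E @ x - h)))
    # dual update on the constraint residual
    mu = mu + rho * (E @ x + F @ z - h)

print(x, z)   # both approach -(P + S)^{-1} q = [-0.5, 0.333..., -0.125]
\end{verbatim}
Here the constraint enforces $x = z$, so both variables converge to $-(P+S)^{-1}q$; how quickly they do so depends on $\rho$, which is precisely the tuning question addressed in the following sections.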
\section{Distributed Quadratic Programming as a Consensus Problem}
\todo[inline]{As discussed earlier, one could show how (unconstrained, or locally equality constrained?) QP problems solved distributively reduce to consensus problems on the shared variables with (block-)diagonal Hessian matrices.\\
This would be a nice motivation for the consensus analysis that follows.}
Consider an unconstrained quadratic programming problem of the form
\begin{align*}
\begin{array}[c]{ll}
\mbox{minimize} & \frac{1}{2}x^\top Q x + q^\top x,
\end{array}
\end{align*}
where $Q$ has the following structure
\begin{align*}
Q&=\begin{bmatrix}
Q_{11} & 0 & \cdots & 0 &Q_{s1}\\
0 & Q_{22} & & &Q_{s2}\\
\vdots & & \ddots & &\vdots\\
0 & & & Q_{nn} &Q_{sn}\\
Q_{1s} & Q_{2s} & \cdots & Q_{ns} &Q_{ss}
\end{bmatrix}\\
q^\top &=
\begin{bmatrix}
q_1^\top & \dots & q_s^\top
\end{bmatrix}
\end{align*}
with $Q_{ss} \in \mathbf{R}$ for simplicity (i.e. a scalar shared block). The optimization problem is almost decoupled and can be solved in a distributed fashion by solving
\begin{align*}
\begin{array}[c]{ll}
\mbox{minimize} & \sum_{i=1}^n
f_i(x_i, y_{(i,s)})\\
\mbox{subject to} & y_{(i,s)} = y_{(j,s)},\quad \forall i\neq j,
\end{array}
\end{align*}
with
\[
f_i(x_i, y_{(i,s)})= \frac{1}{2}
\begin{bmatrix}
x_i\\
y_{(i,s)}
\end{bmatrix}^\top
\begin{bmatrix}
Q_{ii} & Q_{is}\\
Q_{si} & Q_{ss}
\end{bmatrix}
\begin{bmatrix}
x_i \\
y_{(i,s)}
\end{bmatrix}
+
\begin{bmatrix}
q_i \\
q_{is}
\end{bmatrix}^\top
\begin{bmatrix}
x_i \\
y_{(i,s)}
\end{bmatrix}
\]
Since the unshared variables $x_i$ are unconstrained, one can minimize over them analytically for fixed shared variables, yielding
\[
f_i(y_{(i,s)})= \frac{1}{2} y_{(i,s)}^\top\left(Q_{ss} - Q_{si}Q_{ii}^{-1}Q_{is}\right)y_{(i,s)} + (q_s - Q_{si}Q_{ii}^{-1}q_{i})^\top y_{(i,s)}.
\]
Hence the optimization problem can be rewritten as
\begin{align*}
\begin{array}[c]{ll}
\mbox{minimize} & \sum_{i=1}^n f_i(y_{(i,s)}) \\
\mbox{subject to} & y_{(i,s)} = y_{(j,s)},\quad \forall i\neq j
\end{array}
\end{align*}
which reduces to a weighted consensus problem on the shared variable $y_s$.
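As a sanity check of this elimination step, the following Python/NumPy snippet (with illustrative random data, writing the shared-variable linear term as $q_s$ as in the reduced expression above) verifies numerically that the partially minimized objective coincides with the Schur-complement form; the constant $-\frac{1}{2}q_i^{\top}Q_{ii}^{-1}q_i$, dropped above since it does not affect the minimizer, is kept here so that the two values match exactly.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
ni = 4                                  # size of the unshared block x_i
A = rng.standard_normal((ni, ni))
Qii = A @ A.T + ni * np.eye(ni)         # positive definite local block
Qis = rng.standard_normal((ni, 1)); Qss = np.array([[5.0]])
qi = rng.standard_normal(ni); qs = np.array([0.7])
y = np.array([1.3])                     # fixed value of the shared variable

# minimize over x_i for fixed y: x_i* = -Qii^{-1} (Qis y + qi)
xi = -np.linalg.solve(Qii, Qis @ y + qi)
full = (0.5 * xi @ Qii @ xi + xi @ (Qis @ y)
        + 0.5 * y @ Qss @ y + qi @ xi + qs @ y)

# Schur-complement (reduced) objective, constant term included
Ss = Qss - Qis.T @ np.linalg.solve(Qii, Qis)
qr = qs - Qis.T @ np.linalg.solve(Qii, qi)
red = 0.5 * y @ Ss @ y + qr @ y - 0.5 * qi @ np.linalg.solve(Qii, qi)
assert np.isclose(full, red)
\end{verbatim}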
\todo[inline]{Seems that it reduces to a ``global'' consensus problem only when there is a single shared variable (scalar or vector). If there are multiple shared variables that are not constrained to be equal to each other, say $y_{(1,s_1)} = y_{(2,s_1)}$ and $y_{(2,s_2)} = y_{(3,s_2)}$, then one has multiple ``local'' consensus problems.}
\section{Optimal convergence factor for consensus algorithms}
\label{sec:consensus}
The ADMM method has also been used as a basis for distributed optimization and consensus algorithms on graphs. In this section, we formulate the consensus problem as a distributed QP and present explicit formulas for the optimal step-size and convergence factor in particular cases, including average consensus.
Let ${\mathcal{G}}(\mathcal{V},\mathcal{E},\mathcal{W})$ be a connected undirected weighted graph with vertex set $\mathcal{V}$, edge set $\mathcal{E}$, and edge weights $\mathcal{W}$. Each vertex $i\in {\mathcal V}$ represents an agent, and an edge $(i,j)\in {\mathcal E}$ means that agents $i$ and $j$ can exchange information. We let $d_i$ denote the weighted degree of vertex $i$, \emph{i.e.} the sum of the edge weights that are incident on $i$. Each agent $i$ holds a value $y_i$ and it only coordinates with its neighbors ${\mathcal N}_i=\{j\neq i \vert (i,j)\in \mathcal{E}\}$ to compute the network-wide optimal QP solution
\[\bar{x}=\underset{x\in\mathbf{R}}{\mbox{argmin}} \frac{1}{2}\sum_{i\in \mathcal{V}} \varpi_i (x - y_i)^2, \]
where $\varpi_i>0$ is a weighting variable. In particular, for the average consensus we have $\varpi_i = 1$ for all $i\in\mathcal{V}$.
\subsection{Consensus with edge variables}
Constraints must be imposed on the distributed problem so that consensus is achieved. One such way is to enforce all pairs of nodes connected by an edge to have the same value, i.e.\ $x_i = x_j$ for all $(i,j)\in\mathcal{E}$, resulting in $Bx = 0$, where $B$ is the node-to-edge incidence matrix under an arbitrary orientation of the edges. To include this constraint in the ADMM formulation, the auxiliary variable $z_{(i,j)}$ is created for each edge $(i,j)$, with $z_{(i,j)} = z_{(j,i)}$, and the problem is formulated as
\begin{equation*}
\begin{array}[c]{ll}
\mbox{minimize} &\frac{1}{2}\sum_{i\in \mathcal{V}} \varpi_i (x_i - y_i)^2\\
\mbox{subject to} & x_i = z_{(i,j)},\quad\forall i\in \mathcal{V},\; \forall (i,j)\in\mathcal{E}.
\end{array}
\end{equation*}
Decomposing the incidence matrix as $B= B_I + B_O$, where $[B_I]_{ij} = 1$ ($[B_O]_{ij} = 1$) if and only if node $j$ is the head (tail) of edge $e_i$, the consensus problem can be rewritten as
\begin{equation*}
\begin{array}[c]{ll}
\mbox{minimize} &\frac{1}{2}x^\top Q x - q^\top x\\
\mbox{subject to} &
\begin{bmatrix}
R_OB_O \\
R_IB_I
\end{bmatrix}x - \begin{bmatrix}
R_O \\
R_I
\end{bmatrix}z = 0,
\end{array}
\end{equation*}
where $Q=\mbox{diag}([\varpi_1\,\dots\,\varpi_n])$, $q^\top = [\varpi_1 y_1\,\dots\, \varpi_n y_n]$, and $W_I=R_I^\top R_I$ and $W_O = R_O^\top R_O$ are non-negative diagonal matrices corresponding to the edge weights along the in- and out-directions. Since the graph is assumed to be undirected, we let $W_I=W_O = W$.
\todo[inline]{It should be possible to motivate that, for a full column-rank matrix $E$, there exists a matrix $T$ so that $TE = [I\; \star]^\top$. This would then enable one to choose $R$ so that $\bar{E} = RTE$ and $\bar{E}^\top \bar{E} = Q$.\\
In any case, for the consensus with edge weights $T$ is clear and preserves the problem decomposability. Since the graph is connected, the rows of $E = [B_O^\top \; B_I^\top]^\top$ can be reordered so that it results in $\bar{E} = [I\; \star]^\top$.}
As derived in the previous section, the ADMM iterations can be written in matrix form as~\eqref{eq:ADMM_eq_matrix_x}.
Since $\Pi_{\mathcal{R}(\bar{F})} = W_O + W_I = 2W$, $\bar{E}^\top \Pi_{\mathcal{R}(\bar{F})}\bar{E} = \frac{1}{2}(B_I+B_O)^\top W (B_I+B_O) = \frac{1}{2}(D+A)$, and $\bar{E}^\top \bar{E} = B_O^\top WB_O + B_I^\top W B_I = D$, we have
\begin{align}\label{eq:Consensus_linearMatrix_draft}
\begin{bmatrix}
x^{k+1}\\
x^{k}
\end{bmatrix}
&=
\begin{bmatrix}
\rho(Q+\rho D)^{-1}A + I & - \frac{\rho}{2}(Q+\rho D)^{-1}(D+A) \\
I & 0
\end{bmatrix}
\begin{bmatrix}
x^{k}\\
x^{k-1}
\end{bmatrix}.
\end{align}
The following result readily follows.
\begin{thm}
\label{thm:optimal_rho_lambda_consensus_edge_draft}
Suppose $W$ is chosen so that $D=\kappa Q$ for some $\kappa>0$, and let $\{\lambda_i\}$ be the set of ordered generalized eigenvalues of $(A,\;D)$ for which $\lambda_1 \leq \dots \leq \lambda_n=1$. The optimal $\bar\lambda \in \left[\lambda_1,\; \lambda_{n-1} \right]$ and $\rho$ that jointly maximize and minimize $\vert \phi_{2n-1} \vert$, respectively, are given by
\begin{equation*}
\begin{aligned}
\bar\lambda^\star &=\left\{
\begin{array}{ll}
\lambda_{n-1} &, \; \lambda_{n-1}\geq 0\\
\lambda_{1} &,\; \lambda_{n-1} < 0.
\end{array}
\right.\\
\rho^\star &= \left\{
\begin{array}{ll}
\frac{1}{\kappa\sqrt{1-\lambda_{n-1}^2}} &, \; \lambda_{n-1}\geq 0\\
\frac{1}{\kappa} &,\; \lambda_{n-1} < 0.
\end{array}
\right.
\end{aligned}
\end{equation*}
Furthermore, the corresponding convergence factor is described by
\begin{equation*}
\begin{aligned}
\vert \phi_{2n-1} \vert = \left\{
\begin{array}{ll}
\frac{1}{2}\left(1+\frac{\lambda_{n-1}}{1+\sqrt{1-\lambda_{n-1}^2}}\right) &, \; \lambda_{n-1}\geq 0\\
\frac{1}{2} &,\; \lambda_{n-1} < 0.
\end{array}
\right.
\end{aligned}
\end{equation*}
\end{thm}
\subsubsection{Optimizing the Network-Constrained Scaling}
Up to now, we have tuned the distributed consensus problem by optimizing a single penalty constant $\rho$ to achieve the best explicit convergence factor $\vert \phi_{2n-1}\vert$, as described in Theorem~\ref{thm:optimal_rho_lambda_consensus_edge_draft}. Given that, for the optimal step-size $\rho^\star$, the convergence factor $\vert \phi_{2n-1}\vert$ only depends on $\lambda_{n-1}$, another possibility for further reducing $\vert \phi_{2n-1}\vert$ is to minimize the second largest generalized eigenvalue of $(A,D)$, $\lambda_{n-1}$, with respect to the edge weights $\mathcal{W}$. Below we show how this optimization can be formulated.
\begin{definition}
\label{def:consensus_Generalized}
Define $\mathcal{A}$ as the span of real symmetric matrices with sparsity pattern induced by $\mathcal{G}$, i.e.
\begin{align*}
\mathcal{A} = \{S \in \mathcal{S}^n \vert S_{ij}=0 \,\mbox{if} \, i \neq j \, \mbox{and}\, (i,j)\,\not\in \mathcal{E} \}.
\end{align*}
\end{definition}
\begin{thm}\label{thm:consensus_weight_optimization}
The second largest generalized eigenvalue of $(A,D)$, $\lambda_{n-1}$, can be minimized by solving
\begin{align}
\label{eq:consensus_weight_optimization_draft}
\begin{array}{ll}
\underset{A}{\mbox{minimize}} & \lambda \\
\mbox{subject to} & A \in \mathcal{A},\\
& D = \mbox{diag}(A\textbf{1}_n),\\
& A - \lambda D - \textbf{1}_n\textbf{1}_n^\top \prec 0,\\
& D \succ 0,\\
& D+A \succeq 0,
\end{array}
\end{align}
where $\lambda^\star = \lambda_{n-1}^\star$ is the second largest generalized eigenvalue of the optimal pair $(A^\star,D^\star)$.
\end{thm}
\todo[inline]{This formulation needs to be updated. For instance, the weights need to be positive, otherwise they do not fit the ADMM formulation.}
\todo[inline]{Unless we apply the transformation that relaxes the constraint that $D$ should be regular, this constraint needs to be included above.}
\subsection{Consensus with node variables}
An alternative way to enforce consensus is for each node to create a copy of its current value $x_i$ and force all the copies from neighboring nodes to be the same, i.e.\ $x_i = z_i$ for all $i\in\mathcal{V}$ and $z_i = z_j$ for all $(i,j)\in\mathcal{E}$. The corresponding optimization problem is formulated as
\begin{equation*}
\begin{array}[c]{ll}
\mbox{minimize} &\frac{1}{2}x^\top Q x - q^\top x\\
\mbox{subject to} &
\begin{bmatrix}
R_n \\
R_eB_I
\end{bmatrix}x - \begin{bmatrix}
R_n \\
R_e B_O
\end{bmatrix}z = 0,
\end{array}
\end{equation*}
where $W=R_e^\top R_e$ and $W_n = R_n^\top R_n$ are non-negative diagonal matrices corresponding to the edge and node weights, respectively. Similarly as before, the ADMM iterations can be written in matrix form as~\eqref{eq:ADMM_eq_matrix_x}.
Since $\bar{E}^\top \Pi_{\mathcal{R}(\bar{F})}\bar{E} = (W_n + B_I^\top W B_O)(W_n + B_O^\top W B_O)^{-1}(W_n + B_O^\top W B_I) = A(D_O)^{-1}A$ and $\bar{E}^\top \bar{E} =W_n + B_I^\top W B_I = D_I$, we have
\begin{align}\label{eq:Consensus_linearMatrix_node}
\begin{bmatrix}
x^{k+1}\\
x^{k}
\end{bmatrix}
&=
\begin{bmatrix}
\rho(Q+\rho D_I)^{-1}(2AD_O^{-1}A - D_I) + I & - \rho(Q+\rho D_I)^{-1}AD_O^{-1}A \\
I & 0
\end{bmatrix}
\begin{bmatrix}
x^{k}\\
x^{k-1}
\end{bmatrix}.
\end{align}
\todo[inline]{Include theorem about the optimal convergence.}
\todo[inline]{Comment on the fact that $AD_O^{-1}A$ corresponds to the 2nd power of the graph, i.e. nodes become connected to second-hop neighbors, and relate to improved convergence.}
\todo[inline]{Highlight that the convergence factor is lower-bounded by 0.5, as opposed to what is stated in other work.}
\section{ADMM for distributed quadratic programming}\label{sec:distributed_QP}
We are now ready to develop optimal scalings of the ADMM iterations for distributed quadratic programming. Specifically, we will consider a scenario where $n$ agents collaborate to minimize an objective function of the form
\begin{align}~\label{eq:Distributed_QP}
\begin{array}[c]{ll}
\underset{\eta}{\mbox{minimize}} & \frac{1}{2}\eta^\top \bar{Q} \eta + \bar{q}^\top \eta,
\end{array}
\end{align}
where $\eta=[\eta_1^\top\,\dots\,\eta_{n}^\top\,\eta_{s}]^\top$ and $\eta_i\in\mathbf{R}^{n_i}$ represents the private decisions of the agent $i$, $\eta_s\in\mathbf{R}$ is a shared decision among all agents, and $\bar{Q}$ has the structure
\begin{align}\label{eq:Qbar}
\bar{Q}&=\begin{bmatrix}
Q_{11} & 0 & \cdots & 0 &Q_{s1}\\
0 & Q_{22} & & &Q_{s2}\\
\vdots & & \ddots & &\vdots\\
0 & & & Q_{nn} &Q_{sn}\\
Q_{1s} & Q_{2s} & \cdots & Q_{ns} &Q_{ss}
\end{bmatrix}\\
\bar{q}^\top &=
\begin{bmatrix}
q_1^\top & \dots & q_s^\top
\end{bmatrix}
\end{align}
Here, $Q_{ss} \in \mathbf{R}$ for simplicity, $Q_{ii}\succ 0$, and $Q_{si} = Q^\top_{is} \in\mathbf{R}^{n_i}$. Such structured cost functions are common in optimization for interconnected systems. For instance, state estimation in electric power networks~\cite{Gomez2011_DSE} gives rise to exactly this sparsity structure: for a network with the physical structure depicted in Fig.~\ref{fig:graph_coup} and a scalar $\eta_s$, one obtains a matrix $\bar{Q}$ as in~\eqref{eq:Qbar}.
\begin{figure}[h]
\centering
\subfigure[Coupling graph.]
{\includegraphics[width=0.5\hsize]{Figures/couplings_graph.eps}
\label{fig:graph_coup}}
\hspace{10pt}
\subfigure[Communication graph.]
{\includegraphics[width=0.35\hsize]{Figures/communication_graph_v2.eps}
\label{fig:graph_comm}}
\caption{The cost coupling resulting in $\bar{Q}$ as outlined in \eqref{eq:Qbar}. In (a) each agent $i\neq s$ represents a large area of the power network, while node $s$ corresponds to the connection point between all the areas. In (b) the agents from different areas need to jointly minimize~\eqref{eq:Distributed_QP} constrained by the communication network.}
\label{fig:graphs}
\end{figure}
The optimization problem is almost decoupled, except for the shared variable $\eta_{s}$, and can be solved in a distributed fashion by
introducing copies of the shared variable $x_{(i,s)}=\eta_{s}$ to each agent and solving the optimization problem
\begin{align*}
\begin{array}[c]{ll}
\underset{\{\eta_i\}, \{x_{(i,s)}\} }{\mbox{minimize}} & \sum_{i=1}^n
f_i(\eta_i, x_{(i,s)})\\
\mbox{subject to} & x_{(i,s)}=x_{(j,s)}, \quad \forall\, i,j,\; i\neq j
\end{array}
\end{align*}
with
\begin{align*}
f_i(\eta_i, x_{(i,s)}) &\triangleq \frac{1}{2}
\begin{bmatrix}
\eta_i\\
x_{(i,s)}
\end{bmatrix}^\top
\begin{bmatrix}
Q_{ii} & Q_{is}\\
Q_{si} & \alpha_iQ_{ss}
\end{bmatrix}
\begin{bmatrix}
\eta_i \\
x_{(i,s)}
\end{bmatrix}\\
&+
\begin{bmatrix}
q_i \\
\alpha_iq_{s}
\end{bmatrix}^\top
\begin{bmatrix}
\eta_i \\
x_{(i,s)}
\end{bmatrix},
\end{align*}
where $\alpha_i>0$ indicates how the cost associated with $\eta_s$ is distributed among the copies $x_{(i,s)}$, with $\sum_{i=1}^n \alpha_i= 1$.
Since the private variables $\eta_i$ are unconstrained, one can solve for them analytically with respect to the shared variables, yielding
\begin{align*}
f_i(x_{(i,s)}) &\triangleq \frac{1}{2}x_{(i,s)}^\top \hat{Q}_i x_{(i,s)} + \hat{q}_i^\top x_{(i,s)},\\
\hat{Q}_i &=\left(Q_{ss}\alpha_i - Q_{is}Q_{ii}^{-1}Q_{si}\right),\\
\hat{q}_i & = (q_s\alpha_i - Q_{si}Q_{ii}^{-1}q_{i}).
\end{align*}
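This elimination step is straightforward to implement. The following Python/NumPy sketch (our own illustration; the splitting $\{\alpha_i\}$ is assumed given) computes $\hat{Q}_i$ and $\hat{q}_i$ from the blocks of $\bar{Q}$ and $\bar{q}$:
\begin{verbatim}
import numpy as np

def reduce_agent(Q_ii, Q_si, Q_ss, q_i, q_s, alpha_i):
    # Schur complement elimination of the private variables eta_i:
    #   Q_hat_i = alpha_i*Q_ss - Q_is Q_ii^{-1} Q_si
    #   q_hat_i = alpha_i*q_s  - Q_is Q_ii^{-1} q_i
    w = np.linalg.solve(Q_ii, Q_si)  # Q_ii^{-1} Q_si, with Q_si of shape (n_i,)
    Q_hat = alpha_i * Q_ss - Q_si @ w
    q_hat = alpha_i * q_s - w @ q_i  # uses the symmetry of Q_ii
    return Q_hat, q_hat
\end{verbatim}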
When $\bar{Q}$ is positive definite, there exists a set $\{\alpha_i\}$ such that each $f_i(x_{(i,s)})$ is convex, as stated in the following result.
\begin{lem}\label{lem:DQP_alphas}
For $\bar{Q}\succ 0$, there exist $\{\alpha_i\}$ such that $\sum_{i=1}^n \alpha_i= 1$ and $\hat{Q}_i > 0$ for all $i=1,\dots,n$.
\end{lem}
\begin{proof}
See the appendix.
\end{proof}
Hence the optimization problem can be rewritten as
\begin{align}\label{eq:dqp_as_consensus}
\begin{array}[c]{ll}
\underset{\{x_{(i,s)}\}}{\mbox{minimize}} & \sum_{i=1}^n f_i(x_{(i,s)}) \\
\mbox{subject to} & x_{(i,s)} = x_{(j,s)},\quad \forall\,i,j,\; i\neq j
\end{array}
\end{align}
which reduces to an agreement, or consensus, problem on the shared variable $x_s$ between all the nodes $i\neq s$ depicted in Fig.~\ref{fig:graph_comm}.
Each agent $i$ holds a local copy of the shared variable $x_i\triangleq x_{(i,s)}$ and it only coordinates with its neighbors ${\mathcal N}_i$ to compute the network-wide optimal solution to the agreement problem~\eqref{eq:dqp_as_consensus}.
The constraints imposed by the graph can be formulated in different ways, for instance by assigning auxiliary variables to each edge or node~\cite{Italian}. The former is illustrated next.
\subsection{Enforcing agreement with edge variables}\label{subsec:Edge_variable}
Constraints must be imposed on the distributed problem so that consensus is achieved. One such way is to enforce all pairs of nodes connected by an edge to have the same value,~i.e.~$x_i = x_j$ for all $(i,j)\in\mathcal{E}$. To include this constraint in the ADMM formulation, the auxiliary variable $z_{(i,j)}$ is created for each edge $(i,j)$, with $z_{(i,j)} = z_{(j,i)}$, and the problem is formulated as
\begin{equation*}
\begin{array}[c]{ll}
\underset{\{x_i\},\{z_{(i,j)}\}}{\mbox{minimize}} &\sum_{i\in \mathcal{V}} f_i(x_i)\\
\mbox{subject to} & x_i = z_{(i,j)},\quad\forall i\in \mathcal{V},\; \forall (i,j)\in\mathcal{E}\\
& z_{(i,j)} = z_{(j,i)},\quad \forall (i,j)\in\mathcal{E}.
\end{array}
\end{equation*}
Consider an arbitrary direction for each edge $e_i \in {\mathcal E}$. Now decompose the incidence matrix as $B= B_I + B_O$, where $[B_I]_{ij} = 1$ ($[B_O]_{ij} = 1$) if, and only if, node $j$ is the head (tail) of the edge $e_i = (j,k)$. The optimization problem can be rewritten as
\begin{equation}\label{eq:consensus_problem}
\begin{array}[c]{ll}
\underset{x,z}{\mbox{minimize}} &\frac{1}{2}x^\top Q x - q^\top x\\
\mbox{subject to} &
\begin{bmatrix}
RB_O \\
RB_I
\end{bmatrix}x - \begin{bmatrix}
R \\
R
\end{bmatrix}z = 0,
\end{array}
\end{equation}
where $Q=\mbox{diag}([\hat{Q}_1\,\dots\,\hat{Q}_n])$, $q^\top = -[\hat{q}_1\,\dots\, \hat{q}_n]$, and $W= R^\top R$ is the non-negative diagonal matrix of edge-weights.
\begin{assumption}
\label{assump:Graph_Connected}
The graph ${\mathcal{G}}(\mathcal{V},\mathcal{E},\mathcal{W})$ is connected.
\end{assumption}
As derived in the previous section, the ADMM iterations can be written in matrix form as~\eqref{eq:ADMM_eq_matrix_x}.
Since $\Pi_{\mathcal{R}(\bar{F})} =2W$, $\bar{E}^\top \Pi_{\mathcal{R}(\bar{F})}\bar{E} = \frac{1}{2}(B_I+B_O)^\top W (B_I+B_O) = \frac{1}{2}(D+A)$, and $\bar{E}^\top \bar{E} = B_O^\top WB_O + B_I^\top W B_I = D$, we have
\begin{align}\label{eq:Consensus_linearMatrix}
\begin{aligned}
M_{11} &= \rho(Q+\rho D)^{-1}A + I, \\
M_{12} &= - \frac{\rho}{2}(Q+\rho D)^{-1}(D+A).
\end{aligned}
\end{align}
The main result in this paper is stated below and, for given $W=R^\top R$, it explicitly characterizes the optimal $\rho$ solving Problem~\ref{prob:optimal_scaling} and the corresponding convergence factor of~\eqref{eq:ADMM_eq_matrix_x} with $M_{11}$ and $M_{12}$ derived in~\eqref{eq:Consensus_linearMatrix}.
\begin{thm}
\label{thm:optimal_rho_lambda_consensus_edge}
Suppose $W\succeq 0$ is chosen so that $\mathcal{G}$ is connected and $D=\kappa Q$ for $\kappa>0$. Let $\{\lambda_i\}$ be the set of ordered generalized eigenvalues of $(A,\;D)$ for which $\lambda_1 \leq \dots \leq \lambda_n=1$. The optimal step-size $\rho^\star$ that minimizes the convergence factor $\phi^\star$ is
\begin{equation*}
\begin{aligned}
\rho^\star &= \left\{
\begin{array}{ll}
\frac{1}{\kappa \sqrt{1-\lambda_{n-1}^2}} &, \; \lambda_{n-1}\geq 0\\
\frac{1}{\kappa} &,\; \lambda_{n-1} < 0.
\end{array}
\right.
\end{aligned}
\end{equation*}
Furthermore, the corresponding convergence factor is
\begin{equation*}
\begin{aligned}
\phi^\star&=\vert \phi_{2n-1} \vert = \left\{
\begin{array}{ll}
\frac{1}{2}\left(1+\frac{\lambda_{n-1}}{1+\sqrt{1-\lambda_{n-1}^2}}\right) &, \; \lambda_{n-1}\geq 0\\
\frac{1}{2} &,\; \lambda_{n-1} < 0.
\end{array}
\right.
\end{aligned}
\end{equation*}
\end{thm}
\begin{proof}
The proof is presented in the appendix.
\end{proof}
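Numerically, applying Theorem~\ref{thm:optimal_rho_lambda_consensus_edge} amounts to a single generalized eigenvalue computation. A minimal Python/SciPy sketch (our own illustration, assuming $D\succ 0$) reads:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def optimal_step_size(A, D, kappa):
    # Generalized eigenvalues of the symmetric-definite pair (A, D)
    lam = np.sort(eigh(A, D, eigvals_only=True))
    lam_nm1 = lam[-2]                  # second largest, lambda_{n-1}
    if lam_nm1 >= 0:
        rho = 1.0 / (kappa * np.sqrt(1.0 - lam_nm1 ** 2))
        phi = 0.5 * (1.0 + lam_nm1 / (1.0 + np.sqrt(1.0 - lam_nm1 ** 2)))
    else:
        rho, phi = 1.0 / kappa, 0.5
    return rho, phi
\end{verbatim}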
Note that, for a given $W$, the optimal $\rho^\star$ and convergence factor $\vert \phi_{2n-1} \vert$ are parameterized by $\kappa$ and $\lambda_{n-1}$.
Moreover, it is easy to see that $\vert \phi_{2n-1} \vert \geq \frac{1}{2}$ and minimizing $\lambda_{n-1}$ leads to the minimum convergence factor. Hence, by finding $W^\star$ as the edge-weights minimizing $\lambda_{n-1}$, the optimal scaling is then given by $\rho^\star(\lambda_{n-1}^\star)W^\star$.
The optimal choice of $W^\star$ is described in the following section.
\subsection{Optimal network-constrained scaling}
Here we address the second part of Problem~\ref{prob:optimal_scaling} by computing the optimal scaling matrix $R^\star$ that, together with $\rho^\star$, provides the optimal scaling minimizing the ADMM convergence factor. But first we introduce a transformation to relax the assumption that $D=\kappa Q$.
The constraints in the agreement problem~\eqref{eq:consensus_problem} enforce $x=\mathbf{1}_n y$ for some $y\in\mathbf{R}$, where $\mathbf{1}_n\in\mathbf{R}^n$ is a vector with all entries equal to $1$. Therefore the optimization problem is equivalent to
\begin{equation}\label{eq:consensus_problem_scalar}
\begin{array}[c]{ll}
\underset{y}{\mbox{minimize}} &\frac{1}{2}y\mathbf{1}_n^\top Q \mathbf{1}_n y - q^\top \mathbf{1}_n y.\\
\end{array}
\end{equation}
The next result readily follows.
\begin{lem}\label{lem:D_Qkappa}
Consider the optimization problem~\eqref{eq:consensus_problem}. For given diagonal $D\succ 0$, the optimal solution to~\eqref{eq:consensus_problem} remains unchanged when $Q$ is replaced by $\frac{1}{\kappa} D$ if $\kappa=\frac{\mathbf{1}_n^\top D \mathbf{1}_n}{\mathbf{1}_n^\top Q \mathbf{1}_n}$.
\end{lem}
\begin{proof}
The proof follows directly from converting~\eqref{eq:consensus_problem} to~\eqref{eq:consensus_problem_scalar} and having $\mathbf{1}_n^\top Q \mathbf{1}_n = \frac{1}{\kappa}\mathbf{1}_n^\top D \mathbf{1}_n$.
\end{proof}
Thus the constraint $D=\kappa Q$ can be achieved for any $D\succ 0$ by modifying the original problem~\eqref{eq:consensus_problem}, replacing $Q$ with $\frac{1}{\kappa}D$ and letting $\kappa=\frac{\mathbf{1}_n^\top D \mathbf{1}_n}{\mathbf{1}_n^\top Q \mathbf{1}_n}$. Below we show how the minimization of $\lambda_{n-1}$ with respect to $W$ can be formulated, where the adjacency matrix $A$ is determined by the edge-weights $\mathcal{W}$ and the graph-induced sparsity pattern $\mathcal{A}$.
\begin{thm}\label{thm:consensus_weight_optimization_v2}
Consider the weighted graph $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{W})$ and assume there exist non-negative edge-weights $\mathcal{W}=\{w_{ij}\}$ such that $\mathcal{G}$ is connected. The non-negative edge-weights $\{w_{ij}\}$ minimizing the second largest generalized eigenvalue of $(A,D)$, $\lambda_{n-1}$, while having $\mathcal{G}$ connected are obtained from the optimal solution to the quasi-convex problem
\begin{align}
\label{eq:consensus_weight_optimization}
\begin{array}{ll}
\underset{\{w_{ij}\},\,\lambda}{\mbox{minimize}} & \lambda \\
\mbox{subject to}
& w_{ij} \geq 0,\quad \;\;\;\,\, \forall \,i,j\in\mathcal{V},\\
& A_{ij} = w_{ij},\quad \;\forall \,(i,j)\in\mathcal{E},\\
& A_{ij} = 0,\quad \; \; \;\, \, \forall \,(i,j)\not\in\mathcal{E},\\
& D = \mbox{diag}(A\textbf{1}_n),\\
& D \succ \epsilon I,\\
& A - D - \textbf{1}_n\textbf{1}_n^\top \prec 0,\\
& P^\top\left( A - \lambda D \right) P \prec 0,
\end{array}
\end{align}
where the columns of $P\in\mathbf{R}^{n\times (n-1)}$ form an orthonormal basis of $\mathcal{N}(\textbf{1}_n^\top)$ and $\epsilon>0$.
\end{thm}
\begin{proof}
The proof is in the appendix.
\end{proof}
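Since, for fixed $\lambda$, all constraints in~\eqref{eq:consensus_weight_optimization} are linear matrix inequalities in the edge-weights, the quasi-convex program can be solved by bisection on $\lambda$. The following sketch (our own illustration, assuming the CVXPY package; tolerances are arbitrary) implements this:
\begin{verbatim}
import numpy as np
import cvxpy as cp
from scipy.linalg import null_space

def edge_basis(n, i, j):
    Ek = np.zeros((n, n)); Ek[i, j] = Ek[j, i] = 1.0
    return Ek

def feasible(lam, edges, n, eps=1e-6):
    # For fixed lam, feasibility of the constraints is an SDP in the weights
    w = cp.Variable(len(edges), nonneg=True)
    A = sum(w[k] * edge_basis(n, i, j) for k, (i, j) in enumerate(edges))
    D = cp.diag(cp.sum(A, axis=1))
    P = null_space(np.ones((1, n)))    # orthonormal basis of N(1')
    ones = np.ones((n, 1))
    cons = [D >> eps * np.eye(n),
            A - D - ones @ ones.T << 0,
            P.T @ (A - lam * D) @ P << -eps * np.eye(n - 1)]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return prob.status in ("optimal", "optimal_inaccurate")

def min_lambda(edges, n, tol=1e-3):
    lo, hi = -1.0, 1.0   # generalized eigenvalues of (A, D) lie in [-1, 1]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if feasible(mid, edges, n) else (mid, hi)
    return hi
\end{verbatim}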
Given the results derived in this section, the optimal scaling $\rho^\star W^\star$ solving Problem~\ref{prob:optimal_scaling} can be computed as summarized in Algorithm~\ref{alg:optimal_scaling_modified}.
\begin{algorithm}[]
\caption{Optimal Network-Constrained Scaling}
\begin{enumerate}
\item Compute $W^\star$ and the corresponding $D^\star$ and $\lambda_{n-1}^\star$ according to Theorem~\ref{thm:consensus_weight_optimization_v2};
\item Given $D^\star$ and $Q$, compute $\kappa^\star$ from Lemma~\ref{lem:D_Qkappa};
\item Given $\kappa^\star$ and $\lambda_{n-1}^\star$, compute the optimal step-size $\rho^\star$ as described in Theorem~\ref{thm:optimal_rho_lambda_consensus_edge};
\item The optimal scaling for the ADMM algorithm with $Q$ replaced by $\frac{1}{\kappa^\star}D^\star$ is $\rho^\star W^\star$.
\end{enumerate}
\label{alg:optimal_scaling_modified}
\end{algorithm}
\section{Introduction}
\label{sec:introduction}
Recently, a number of applications have triggered a strong interest in distributed algorithms for large-scale quadratic programming. These applications include multi-agent systems~\cite{nedic10,Italian}, distributed model predictive control~\cite{accelerated_mpc13, farhad12}, and state estimation in networks~\cite{Falcao1995}, to name a few.
As these systems become larger and their complexity increases, more efficient algorithms are required. It has been argued that the alternating direction method of multipliers (ADMM) is a particularly powerful and efficient approach~\cite{Boyd11}.
One attractive feature of ADMM is that it is guaranteed to converge for all (positive) values of its step-size parameter~\cite{Boyd11}. This contrasts many alternative techniques, such as dual decomposition, where mistuning of the step-size for the gradient updates can render the iterations unstable.
The ADMM method has been observed to converge fast in many applications~\cite{Boyd11,tyler12,marriette12, Mota12} and for certain classes of problems it has a guaranteed linear rate of convergence~\cite{luo12,boley12, deng12}. However, the solution times are sensitive to the choice of the step-size parameter, and the ADMM iterations can converge (much) slower than the standard gradient algorithm if the parameter is poorly tuned.
In practice, the ADMM algorithm is tuned empirically for each specific application. In particular, for distributed quadratic programming,~\cite{tyler12,marriette12, Mota12} report various rules of thumb for picking the step-size. However, a thorough analysis and design of optimal step-size and scaling rules for the ADMM algorithm is still missing in the literature.
The aim of this paper is to close this gap for a class of distributed quadratic programming problems.
We first consider a particular class of equality-constrained quadratic programming problems and derive the corresponding iterations for the ADMM method. The iterations are shown to be linear and the corresponding eigenvalues are characterized as roots of quadratic polynomials. These results are then used to develop optimally scaled ADMM iterations for a class of distributed quadratic programming problems that appear in power network state-estimation applications~\cite{Gomez2011_DSE}. In this class of problems, a number of agents collaborate with neighbors in a graph to minimize a convex objective function with a specific sparsity structure over a mix of shared and private variables. We show that quadratic programming problems with this structure can be reduced to an equality constrained convex quadratic programming problem in terms of private variables only. The ADMM iterations for this quadratic problem are then formulated taking into account the communication network constraints. The network-constrained scaling of the ADMM method includes the step-size and edge weights of the communication graph. Methods to minimize the convergence factor by optimal scaling of the ADMM iterations are proposed for generic connected graphs. In particular, analytical expressions for the optimal step-size and convergence factor are derived in terms of the spectral properties of the communication graph. A tight lower-bound for the convergence factor is also obtained. Finally, given that the optimal step-size is chosen, we propose methods to further minimize the convergence factor by optimizing the edge weights.
The outline of this paper is as follows. Section~\ref{sec:background} gives an elementary background to the ADMM method. The ADMM iterations for a class of equality-constrained quadratic programming problems are formulated and analyzed in Section~\ref{sec:qp_equality}. Distributed quadratic programming and optimal networked-constrained scaling of the ADMM algorithm are addressed in Section~\ref{sec:distributed_QP}. Numerical examples illustrating our results and comparing them to state-of-the-art techniques are presented in Section~\ref{sec:numerical}. Finally, a discussion and outlook on future research concludes the paper.
\subsection{Notation}
We denote the set of real and complex numbers with $\mathbf{R}$, and $\mathbf{C}$, respectively. For a given matrix $A\in \mathbf{R}^{n\times m}$, denote $\mathcal{R}(A)\triangleq \{y\in\mathbf{R}^n \vert \; y=Ax,\, x\in\mathbf{R}^{m} \}$ as its range-space and let $\mathcal{N}(A)\triangleq\{x\in\mathbf{R}^m \vert \; Ax=0\}$ be the null-space of $A$. For $A$ with full-column rank, define $A^\dagger \triangleq (A^\top A)^{-1}A^\top$ as the pseudo-inverse of $A$ and $\Pi_{\mathcal{R}(A)} \triangleq A A^\dagger$ as the orthogonal projector onto $\mathcal{R}(A)$. Since $\mathcal{R}(A)$ and $\mathcal{N}(A^\top)$ are orthogonal complements, we have $\Pi_{\mathcal{N}(A^\top)}=I-\Pi_{\mathcal{R}(A)}$ and $\Pi_{\mathcal{R}(A)}\Pi_{\mathcal{N}(A^\top)} = 0$.
Now consider $A,D\in \mathbf{R}^{n\times n}$, with $D$ invertible. The generalized eigenvalues of $(A, D)$ are defined as the values $\lambda \in \mathbf{C}$ such that $(A - \lambda D) v = 0$ holds for some nonzero vector $v\in\mathbf{C}^n$. Additionally, $A\succ 0$ ($A\succeq 0$) denotes that $A$ is positive definite (semi-definite).
Consider the sequence $\{x^k\}$ converging to the fixed-point $x^\star$. The convergence factor of the converging sequence is defined as~\cite{Bertsekas1989}
\begin{equation}\label{eq:convergence_factor}
\begin{aligned}
\phi^\star &\triangleq \sup_{x^k\neq x^\star} \dfrac{\|x^{k+1} - x^\star\|}{\|x^{k} - x^\star\|}.
\end{aligned}
\end{equation}
Definitions from graph theory are now presented~\cite{Godsil2001}.
Let ${\mathcal{G}}(\mathcal{V},\mathcal{E},\mathcal{W})$ be a connected undirected graph with vertex set $\mathcal{V}$ with $n$ vertices, edge set $\mathcal{E}$ with $m$ edges, and edge-weights $\mathcal{W}$. Each vertex $i\in {\mathcal V}$ represents an agent, and an edge $e_k=(i,j)\in {\mathcal E}$ means that agents $i$ and $j$ can exchange information. Letting $w_{e_k}\geq 0$ be the weight of $e_k$, the edge-weight matrix is defined as $W\triangleq\mbox{diag}([w_{e_1}\, \dots \, w_{e_m}])$. Denote ${\mathcal N}_i\triangleq\{j\neq i \vert (i,j)\in \mathcal{E}\}$ as the neighbor set of node $i$.
Define $\mathcal{A}$ as the span of real symmetric matrices, $\mathcal{S}^n$, with sparsity pattern induced by $\mathcal{G}$, $\mathcal{A} \triangleq \{S \in \mathcal{S}^n \vert S_{ij}=0 \,\mbox{if} \, i \neq j \, \mbox{and}\, (i,j)\,\not\in \mathcal{E} \}$. The adjacency matrix $A\in\mathcal{A}$ is defined as $A_{ij} = w_{ij}$ for $(i,j)\in\mathcal{E}$ and $A_{ii}=0$.
The diagonal degree matrix $D$ is given by $D_{ii} = \sum_{j\in\mathcal{N}_i}A_{ij}$.
The incidence matrix $B\in\mathbf{R}^{m\times n}$
is defined as $B_{ij}=1$ if $j \in e_i$ and $B_{ij}=0$ otherwise.
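For concreteness, the following Python/NumPy sketch (our own illustration) assembles these matrices from an edge list under an arbitrary orientation, and verifies the identities used repeatedly in the text:
\begin{verbatim}
import numpy as np

def graph_matrices(n, edges, weights):
    # edges: list of (tail, head) pairs under an arbitrary orientation
    m = len(edges)
    B_O = np.zeros((m, n)); B_I = np.zeros((m, n)); A = np.zeros((n, n))
    for k, ((t, h), wk) in enumerate(zip(edges, weights)):
        B_O[k, t] = 1.0                 # tail of edge e_k
        B_I[k, h] = 1.0                 # head of edge e_k
        A[t, h] = A[h, t] = wk
    B, W = B_I + B_O, np.diag(weights)
    D = np.diag(A.sum(axis=1))          # weighted degree matrix
    assert np.allclose(B_O.T @ W @ B_O + B_I.T @ W @ B_I, D)
    assert np.allclose(B.T @ W @ B, D + A)
    return B, B_I, B_O, A, D, W
\end{verbatim}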
\section{Numerical examples}
\label{sec:numerical}
Next, we illustrate our results through numerical examples.
\subsection{Distributed quadratic programming}
Consider a distributed quadratic programming problem with $n=3$ agents and an objective function defined by
\begin{align*}
\bar{Q} &=
\left[\begin{array}{cc:cc:cc:c}
4 & 1 & & & & & 1 \\
1 & 6 & & & & & 2 \\ \hdashline
& & 5 & 4 & & & 3 \\
& & 4 & 8 & & & 4 \\ \hdashline
& & & & 8 & 7 & 5 \\
& & & & 7 & 9 & 6 \\ \hdashline
1 & 2 & 3 & 4 & 5 & 6 & 8 \\
\end{array}\right]\\
\setlength{\dashlinegap}{1pt}
\bar{q}^\top &=\left[\begin{array}{cc;{2pt/1pt}cc;{2pt/1pt}cc;{2pt/1pt}c}
1 & 1 & 1 & 1 & 1 & 1 & 1
\end{array}\right]
.
\end{align*}
As shown previously, the optimization problem can be reformulated on the form
\begin{align*}
\begin{array}[c]{ll}
\underset{\{x_{(i,s)}\}}{\mbox{minimize}} & \sum_{i=1}^n \frac{1}{2} x_{(i,s)}^\top \hat{Q}_i x_{(i,s)} + \hat{q}_i^\top x_{(i,s)} \\
\mbox{subject to} & x_{(i,s)} = x_{(j,s)},\quad \forall i\neq j
\end{array}
\end{align*}
with $n=3$, $\alpha = \frac{1}{n}[0.5\, 0.9\, 1.6]$, $\hat{Q}_1= 0.5507$, $\hat{Q}_2=0.0667$, $\hat{Q}_3 = 0.2232$, $\hat{q}_1= -0.3116$, $\hat{q}_2= -0.3667$, and $\hat{q}_3=-0.1623$. As for the communication graph, we consider a line graph with node $2$ connected to nodes $1$ and $3$. Algorithm~\ref{alg:optimal_scaling_modified} is applied, resulting in $\lambda^\star_{n-1} = 0$ with the edge weights $w_{e_1} = w_{e_2} = 0.1566$ and degree matrix $D=\mbox{diag}([0.1566\; 0.3132\; 0.1566])$. From Theorem~\ref{thm:optimal_rho_lambda_consensus_edge} we then have $\rho^\star=\frac{1}{\kappa}=\frac{\sum_{i=1}^3\hat{Q}_i}{\mathbf{1}_n^\top D \mathbf{1}_n}$ and $\phi^\star=|\phi_{2n-1}|=0.5$, which is the best achievable convergence factor. The performance of the ADMM algorithm with optimal network-constrained scaling is presented in Fig.~\ref{fig:dqp}. The performance of the unscaled ADMM algorithm with unitary edge weights and manually optimized step-size $\rho$ is also depicted for comparison. The convergence factor of the manually tuned ADMM algorithm is $|\phi_{2n-1}|=0.557$, thus exhibiting worse performance than the optimally scaled algorithm, as depicted in Fig.~\ref{fig:dqp}.
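The reduced data above are easily reproduced; the following Python/NumPy check (our own illustration) recovers the stated values of $\hat{Q}_i$ and $\hat{q}_i$ from $\bar{Q}$ and $\bar{q}$:
\begin{verbatim}
import numpy as np

Qb = np.array([[4., 1, 0, 0, 0, 0, 1], [1, 6, 0, 0, 0, 0, 2],
               [0, 0, 5, 4, 0, 0, 3], [0, 0, 4, 8, 0, 0, 4],
               [0, 0, 0, 0, 8, 7, 5], [0, 0, 0, 0, 7, 9, 6],
               [1, 2, 3, 4, 5, 6, 8]])
qb = np.ones(7)
alpha = np.array([0.5, 0.9, 1.6]) / 3
blocks = [(0, 2), (2, 4), (4, 6)]       # index ranges of the private blocks
for (a, b), al in zip(blocks, alpha):
    Qii, Qsi, qi = Qb[a:b, a:b], Qb[a:b, -1], qb[a:b]
    w = np.linalg.solve(Qii, Qsi)
    print(al * Qb[-1, -1] - Qsi @ w,    # Q_hat_i: 0.5507, 0.0667, 0.2232
          al * qb[-1] - w @ qi)         # q_hat_i: -0.3116, -0.3667, -0.1623
\end{verbatim}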
\begin{figure}[h]
\centering
\includegraphics[width=.80\hsize]{Figures/dqp.eps}
\caption{Normalized error for the scaled ADMM algorithm with $W^\star$, and $\rho^\star$ obtained from Algorithm~\ref{alg:optimal_scaling_modified} and the unscaled ADMM algorithm with unitary edge weights and manually selected best step-size $\rho=0.55$ via exhaustive search.\label{fig:dqp}}
\end{figure}
\subsection{Distributed consensus}
\begin{figure}[h]
\centering
\subfigure[$\epsilon=0.2$]
{\includegraphics[width=0.8\hsize]{Figures/consensus/consensus_eps20_rep50.eps}
\label{fig:cons_1_1}}
\subfigure[$\epsilon=0.8$]
{\includegraphics[width=0.8\hsize]{Figures/consensus/consensus_eps80_rep50.eps}
\label{fig:cons_1_2}}
\caption{Performance comparison of the proposed optimal scaling for the ADMM algorithm with state-of-the-art fast-consensus~\cite{Italian}. The network of size $n=[5,20]$ is randomly generated by Erd\H{o}s-R\'enyi graphs with low and high densities $\epsilon = \{0.2,0.8\}$.}
\label{fig:cons_1}
\end{figure}
In this section we apply our methodology to derive optimally scaled ADMM iterations for the average consensus problem and compare our convergence factors with the state-of-the-art fast consensus algorithm presented in~\cite{Italian}. The average consensus problem is a particular case of~\eqref{eq:consensus_problem} where $x\in \mathbf{R}$, $Q=\alpha I$ for some $\alpha \in \mathbf{R}$, and $q=\mathbf{0}$.
As an indicator of the performance, we compute the convergence factors for the two methods on a large number of randomly generated Erd\H{o}s-R\'enyi graphs. Fig.~\ref{fig:cons_1} presents Monte Carlo simulations of the convergence factors versus the number of nodes $n\in [5, 20]$. Each component $(i,j)$ in the adjacency matrix $A$ is non-zero with probability $p=(1+\epsilon)\frac{\log(n)}{n}$, where $\epsilon \in (0,1)$ and $n$ is the number of vertices. In our simulations, we consider two scenarios: sparse graphs with $\epsilon = 0.2$ and dense topologies $\epsilon=0.8$. For every network size, $50$ network instances are generated, the convergence factors are computed and averaged to generate the depicted results. The figure shows two versions of Algorithm~\ref{alg:optimal_scaling_modified} with and without weight optimization in Theorem~\ref{thm:consensus_weight_optimization_v2}. We observe a significant improvement compared to the state-of-the-art fast consensus~\cite{Italian} in both sparse and dense topologies.
\section{ADMM for a class of equality-constrained quadratic programming problems}
\label{sec:qp_equality}
In this section, we develop scaled ADMM iterations for a particular class of equality-constrained convex quadratic programming problems. In terms of the standard formulation~\eqref{eq:constrained problem}, these problems have $f(x)=\frac{1}{2}x^\top Qx+q^{\top}x$ with $Q\succ 0$ and $q\in \mathbf{R}^n$, $g(z)=0$, and $h=0$.
An important difference compared to the standard ADMM iterations described in the previous section is the introduction of a matrix $R\in \mathbf{R}^{r\times p}$ scaling the equality constraints
\begin{align}
\label{eq:Qp_constraint}
R(Ex+Fz)=0.
\end{align}
The underlying assumption on the choice of $R$ is that no non-zero vector $v=Ex+Fz$, with $x \in {\mathbf R}^{n}$ and $z\in {\mathbf R}^{m}$, belongs to the null-space of $R$. In other words, after the transformation~\eqref{eq:Qp_constraint} the feasible set in~\eqref{eq:constrained problem} remains unchanged. Taking the transformation~\eqref{eq:Qp_constraint} into account, the penalty term in the \emph{augmented Lagrangian} becomes
\begin{align}
\label{eq:augmented_Lagrangian_scaled}
&\frac{1}{2} (Ex+Fz)^\top \rho R^\top R (Ex+Fz).
\end{align}
\begin{definition}
$\rho R^\top R$ is called the \emph{scaling} of the augmented Lagrangian~\eqref{eq:augmented_Lagrangian}.
\end{definition}
Our aim is to find the optimal scaling that minimizes the convergence factor of the corresponding ADMM iterations. Specifically, introducing $\bar{E}=RE$ and $\bar{F}=RF$, the scaled ADMM iterations read
\begin{align}
x^{k+1} &= (Q+\rho \bar{E}^\top \bar{E})^{-1}\left(-q - \rho\bar{E}^\top(\bar{F}z^k + u^k) \right) \nonumber\\
z^{k+1} &= -(\bar{F}^\top \bar{F})^{-1}\bar{F}^\top \left( \bar{E}x^{k+1} + u^k \right) \label{eqn:admm_iterations_scaled}\\
u^{k+1} &= u^{k} + \bar{E}x^{k+1} + \bar{F}z^{k+1} , \nonumber
\end{align}
where $u^k = \mu^k/\rho$. From the $z$- and $u$-iterations we observe
\begin{align*}
u^{k+1} &= (u^{k} + \bar{E}x^{k+1}) - \bar{F}(\bar{F}^\top \bar{F})^{-1}\bar{F}^\top \left( \bar{E}x^{k+1} + u^k\right) \\
&=\Pi_{\mathcal{N}(\bar{F}^\top)}\left( \bar{E}x^{k+1} + u^k\right).
\end{align*}
Since $\mathcal{N}(\bar{F}^\top)$ and $\mathcal{R}(\bar{F})$ are orthogonal complements, we have $\Pi_{\mathcal{R}(\bar{F})} u^k = 0$ for all $k$, which results in
\begin{equation}\label{eq:z_iterations}
\bar{F}z^k = - \Pi_{\mathcal{R}(\bar{F})}\bar{E}x^k.
\end{equation}
By induction the $u$-iterations can be rewritten as
\begin{equation}\label{eq:u_iterations}
u^{k+1}=
\Pi_{\mathcal{N}(\bar{F}^\top)}\left( \sum_{i=1}^{k+1}(\bar{E}x^{i}) + u^{0} \right).
\end{equation}
Supposing $u^{0} = 0$, without loss of generality, and given~\eqref{eq:z_iterations} and~\eqref{eq:u_iterations}, the $x$-iterations can be rewritten
as
\begin{equation*}
\begin{aligned}
x^{k+1} &=
(Q+\rho \bar{E}^\top \bar{E})^{-1}\left(-q + \rho\bar{E}^\top\Pi_{\mathcal{R}(\bar{F})}\bar{E}x^{k}\right) \\
&- (Q+\rho \bar{E}^\top \bar{E})^{-1}\rho\bar{E}^\top\Pi_{\mathcal{N}(\bar{F}^\top)}\sum_{i=1}^{k}(\bar{E}x^{i}),
\end{aligned}
\end{equation*}
or, equivalently, in matrix form as
\begin{align}\label{eq:ADMM_eq_matrix_x}
\begin{bmatrix}
x^{k+1}\\
x^{k}
\end{bmatrix}
&=
\underbrace{\begin{bmatrix}
M_{11} & M_{12} \\
I & 0
\end{bmatrix}
}_M
\begin{bmatrix}
x^{k}\\
x^{k-1}
\end{bmatrix},
\end{align}
with
\begin{equation}
\label{eq:ADMM_M11_M12}
\begin{aligned}
M_{11} &=\rho(Q+\rho \bar{E}^\top \bar{E})^{-1}\bar{E}^\top\left(\Pi_{\mathcal{R}(\bar{F})} - \Pi_{\mathcal{N}(\bar{F}^\top)}\right)\bar{E} + I,\\
M_{12}&= - \rho(Q+\rho \bar{E}^\top \bar{E})^{-1}\bar{E}^\top \Pi_{\mathcal{R}(\bar{F})}\bar{E}.
\end{aligned}
\end{equation}
The convergence properties of the ADMM iterations are characterized by the spectral properties of the matrix $M$. In particular, denote $\{\phi_i\}$ as the ordered eigenvalues of $M$ so that $|\phi_1| \leq \dots \leq |\phi_{2n-s}| < |\phi_{2n-s+1}| = \dots = |\phi_{2n}|$ for $s\geq 1$.
The ADMM iterations converge to the optimal solution if $\phi_{2n} = \dots = \phi_{2n-s+1} = 1$ and the respective convergence factor~\eqref{eq:convergence_factor} corresponds to $\phi^\star = |\phi_{2n-s}|$.
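For small problem instances, $M$ and its convergence factor can be computed directly. The following Python/NumPy sketch (our own illustration) forms $M$ from~\eqref{eq:ADMM_M11_M12} and returns the largest eigenvalue magnitude strictly below one:
\begin{verbatim}
import numpy as np

def convergence_factor(Q, Eb, Fb, rho, tol=1e-9):
    # Eb, Fb denote the scaled matrices E_bar and F_bar
    Pi_R = Fb @ np.linalg.solve(Fb.T @ Fb, Fb.T)   # projector onto R(F_bar)
    Pi_N = np.eye(Fb.shape[0]) - Pi_R              # projector onto N(F_bar')
    G = np.linalg.inv(Q + rho * Eb.T @ Eb)
    n = Q.shape[0]
    M11 = rho * G @ Eb.T @ (Pi_R - Pi_N) @ Eb + np.eye(n)
    M12 = -rho * G @ Eb.T @ Pi_R @ Eb
    M = np.block([[M11, M12], [np.eye(n), np.zeros((n, n))]])
    mags = np.sort(np.abs(np.linalg.eigvals(M)))
    return mags[mags < 1.0 - tol][-1]  # discard the unit fixed-point modes
\end{verbatim}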
Below we state the main problem to be addressed in the remainder of this paper.
\begin{problem}\label{prob:optimal_scaling}
What are the optimal scalar $\rho^\star$ and matrix $R^\star$ in the scaling $\rho R^\top R$ that minimize the convergence factor of the ADMM algorithm?
\end{problem}
As the initial step to tackle Problem~\ref{prob:optimal_scaling}, in what follows we characterize the eigenvalues $\phi_i$ of $M$.
Let $[u^\top \; v^\top]$ be an eigenvector of $M$ associated with the eigenvalue $\phi$, from which we conclude $\phi v = u$. Thus the following holds for the eigenvalues and corresponding eigenvectors of $M$
\begin{equation}\label{eq:eigenvalue_eigenvector}
\phi^2 v =\phi M_{11} v + M_{12} v.
\end{equation}
Our analysis will be simplified by picking $R$ such that $\bar{E}^\top \bar{E} = \kappa Q$ for some $\kappa>0$. The following lemma indicates that such an $R$ can always be found.
\begin{lem}\label{lem:scaling_R}
For $E\in\mathbf{R}^{p\times n}$ with full-column rank and ${\kappa>0}$, there exists an $R$ that does not change the constraint set in~\eqref{eq:constrained problem} and ensures that $\bar{E}^\top\bar{E} = \kappa Q$.
\end{lem}
\begin{proof}
The proof is derived in the appendix.
\end{proof}
Now, replacing $\bar{E}^\top \bar{E} = \kappa Q$ in~\eqref{eq:ADMM_M11_M12} we have
\begin{equation*}
\begin{aligned}
M_{11} &=\frac{\rho\kappa}{1+\rho \kappa}(\bar{E}^\top \bar{E})^{-1}\bar{E}^\top\left(\Pi_{\mathcal{R}(\bar{F})} - \Pi_{\mathcal{N}(\bar{F}^\top)}\right)\bar{E} + I,\\
M_{12}&= - \frac{\rho\kappa}{1+\rho \kappa}(\bar{E}^\top \bar{E})^{-1}\bar{E}^\top \Pi_{\mathcal{R}(\bar{F})}\bar{E}.
\end{aligned}
\end{equation*}
The next result presents the explicit form of the eigenvalues of $M$~in~\eqref{eq:ADMM_eq_matrix_x}.
\begin{thm}\label{thm:M_eigenvalues}
Consider the ADMM iterations~\eqref{eq:ADMM_eq_matrix_x}. If $\bar{E}^\top \bar{E} = \kappa Q$, the eigenvalues of $M$ are described by
\begin{align}\label{eq:eigenvalues_M_phi}
2\phi &= \left(f(\rho)\bar{\lambda} + 1 \right) \pm \sqrt{\left( f(\rho)\bar{\lambda} + 1\right)^2 - 2f(\rho)(\bar{\lambda}+1)},
\end{align}
with
\begin{equation*}
\begin{aligned}
f(\rho) &= \frac{\rho \kappa}{1+\rho \kappa},\\
\bar{\lambda} &= \dfrac{v^\top \left(\bar{E}^\top\left(\Pi_{\mathcal{R}(\bar{F})} - \Pi_{\mathcal{N}(\bar{F}^\top)}\right)\bar{E}\right)v}{v^\top (\bar{E}^\top \bar{E})v},\\
\kappa &= \dfrac{v^\top(\bar{E}^\top \bar{E})v}{v^\top Q v}.
\end{aligned}
\end{equation*}
\end{thm}
\begin{proof}
The result follows from~\eqref{eq:eigenvalue_eigenvector} and $\bar{E}^\top \bar{E} = \kappa Q$.
\end{proof}
From~\eqref{eq:eigenvalues_M_phi} one directly sees how $\rho$ and $R$ affect the eigenvalues of $M$. Specifically, $f(\rho)$ is a function of $\rho$, while $\bar\lambda$ only depends on $R$. In the next section we address and solve Problem~\ref{prob:optimal_scaling} for a particular class of problems. The analysis follows by applying Theorem~\ref{thm:M_eigenvalues} and studying the properties of~\eqref{eq:eigenvalues_M_phi} with respect to $\rho$ and $\bar{\lambda}$.
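The closed form~\eqref{eq:eigenvalues_M_phi} is also convenient for numerical exploration. A minimal Python sketch (our own illustration) evaluating the two roots for given $\rho$, $\kappa$, and $\bar{\lambda}$ is:
\begin{verbatim}
import numpy as np

def phi_roots(rho, kappa, lam_bar):
    # The two roots of the quadratic characterizing the eigenvalues of M
    f = rho * kappa / (1.0 + rho * kappa)
    disc = (f * lam_bar + 1.0) ** 2 - 2.0 * f * (lam_bar + 1.0)
    r = np.sqrt(complex(disc))
    return ((f * lam_bar + 1.0) + r) / 2.0, ((f * lam_bar + 1.0) - r) / 2.0

# e.g., scanning rho and taking max(|phi|) over the two roots traces out how
# the step-size shapes the spectrum for a fixed lam_bar.
\end{verbatim}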
\section{Introduction}\label{sec:intro}
In a distributed storage system, information pertaining to a data file is dispersed across nodes in a network in such a manner that an end-user~(whom we term a data-collector, or DC) can retrieve the data stored by tapping into neighboring nodes. A popular option that reduces network congestion and leads to increased resiliency in the face of node failures is to employ erasure coding, for example by calling upon maximum-distance-separable~(MDS) codes such as Reed-Solomon~(RS) codes.
Let $B$ be the total number of message symbols, over a finite field $\mathbb{F}_q$ of size $q$. With RS codes, data is stored across $n$ nodes in the network in such a way that the entire data can be recovered by a data-collector by connecting to any arbitrary $k$ nodes, a process of data recovery that we will refer to as \textit{reconstruction}. Several distributed storage systems such as RAID-6, OceanStore~\cite{oceanstore} and Total~Recall~\cite{totalRecall} employ such an erasure-coding option.
Upon failure of an individual node, a self-sustaining data storage network must necessarily possess the ability to repair the failed node. An obvious means to accomplish this is to permit the replacement node to connect to any $k$ nodes, download the entire data, and extract the data that was stored in the failed node. For example, RS codes treat the data stored in each node as a single symbol belonging to the finite field $\mathbb{F}_q$. When this is coupled with the restriction that individual nodes perform linear operations over $\mathbb{F}_q$, it follows that the smallest unit of data that can be downloaded from a node to assist in the repair of a failed node (namely, an $\mathbb{F}_q$ symbol), equals the amount of information stored in the node itself. As a consequence of the MDS property of an RS code, when carrying out repair of a failed node, the replacement node must necessarily collect data from at least $k$ other nodes. As a result, it follows that the total amount of data download needed to repair a failed node can be no smaller than $B$, the size of the entire file. But clearly, downloading the entire $B$ units of data in order to recover the data stored in a single node that stores only a fraction of the entire data file is wasteful, and raises the question of whether there is a better option. Such an option is provided by the concept of a \emph{regenerating code} introduced by Dimakis et~al.~\cite{DimKan1}.
Regenerating codes overcome the difficulty encountered when working with an RS code by working with codes whose symbol alphabet is a vector over $\mathbb{F}_q$, i.e., an element of $\mathbb{F}_q^{\alpha}$ for some parameter $\alpha > 1$. Each node stores a vector symbol, or equivalently stores $\alpha$ symbols over $\mathbb{F}_q$. In this setup, it is clear that while maintaining linearity over $\mathbb{F}_q$, it is possible for an individual node to transfer a fraction of the data stored within the node.
Apart from this new parameter $\alpha$, two other parameters $(d, \beta)$ are associated with regenerating codes. Thus we have
\[
\{ q, \ [n, \ k, \ d], \ (\beta, \ \alpha, B)\}
\]
as the parameter set of a regenerating code. Under the definition of regenerating codes introduced in \cite{DimKan1}, a failed node is permitted to connect to an arbitrary subset of $d$ nodes out of the remaining $(n-1)$ nodes while downloading $\beta \leq \alpha$ symbols from each node. The total amount $d\beta$ of data downloaded for repair purposes is termed the \textit{repair bandwidth}. Typically, with a regenerating code, the average repair bandwidth $d\beta$ is small compared to the size of the file $B$. Fig.~\ref{fig:intro_recon} and Fig.~\ref{fig:intro_regen} illustrate reconstruction and node repair respectively, also depicting the relevant parameters.
\begin{figure}[t]
\centering
\subfloat[]{\includegraphics[trim=0in 1.81in 7in 0in, clip, width=0.3\textwidth]{fig_intro_recon.pdf}\label{fig:intro_recon}}
\hspace{.1\textwidth}
\subfloat[]{\includegraphics[trim=0in 1.81in 7in 0in, clip, width=0.3\textwidth]{fig_intro_regen.pdf}\label{fig:intro_regen}}
\caption{\small The regenerating codes setup: (a) data reconstruction, and (b) repair of a failed node.}
\label{fig:completegraph}
\end{figure}
The cut-set bound of network coding can be invoked to show that the parameters of a regenerating code must necessarily satisfy \cite{YunDimKan}:
\begin{eqnarray} B & \leq &
\sum_{i=0}^{k-1} \min\{\alpha,(d-i)\beta\}. \label{eq:cut_set_bound} \end{eqnarray} It is desirable to minimize both $\alpha$ and $\beta$, since minimizing $\alpha$ results in a minimum storage solution while minimizing $\beta$~(for a fixed $d$) results in a solution that minimizes the repair bandwidth. It turns out that there is a tradeoff between $\alpha$ and $\beta$. The two extreme points in this tradeoff are termed the minimum storage regenerating~(MSR) and minimum bandwidth regenerating~(MBR) points, respectively. The parameters $\alpha$ and $\beta$ for the MSR point on the tradeoff can be obtained by first minimizing $\alpha$ and then minimizing $\beta$ to obtain
\begin{eqnarray}
\alpha_{\text{MSR}} & = & \frac{B}{k} , \nonumber \\
\beta_{\text{MSR}} & = & \frac{B}{k(d-k+1)}. \label{eq:MSR_parameters} \end{eqnarray}
Reversing the order, leads to the MBR point which thus corresponds to
\begin{eqnarray}
\beta_{\text{MBR}} & = & \frac{2B}{k(2d-k+1)} , \nonumber \\
\alpha_{\text{MBR}} & = & \frac{2dB}{k(2d-k+1)} .
\label{eq:MBR_parameters} \end{eqnarray}
The focus of the present paper is on the MSR point. Note that regenerating codes with $(\alpha = \alpha_{\text{MSR}})$ and $(\beta = \beta_{\text{MSR}})$ are necessarily MDS codes over the vector alphabet $\mathbb{F}_q^{\alpha}$. This follows since the ability to reconstruct the data from any arbitrary $k$ nodes necessarily implies a minimum distance $d_{\min}=n-k+1$. Since the code size equals $ \left( q^{\alpha} \right) ^k$, this meets the Singleton bound causing the code to be an MDS code.
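The parameter expressions above are easily checked numerically. The following Python sketch (our own illustration) evaluates the right-hand side of~\eqref{eq:cut_set_bound} together with the two extreme operating points:
\begin{verbatim}
def cut_set_bound(k, d, alpha, beta):
    # Right-hand side of the cut-set bound on the file size B
    return sum(min(alpha, (d - i) * beta) for i in range(k))

def msr_mbr_points(B, k, d):
    a_msr, b_msr = B / k, B / (k * (d - k + 1))
    b_mbr = 2 * B / (k * (2 * d - k + 1))
    a_mbr = d * b_mbr
    return (a_msr, b_msr), (a_mbr, b_mbr)

# e.g., B=4, k=2, d=3 gives (alpha, beta) = (2, 1) at the MSR point, and the
# bound is met with equality: cut_set_bound(2, 3, 2, 1) == 4.
\end{verbatim}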
\subsection{Choice of the Parameter $\beta$}\label{subsec:beta_1}
Let us next rewrite \eqref{eq:MSR_parameters} in the form
\begin{eqnarray}
\alpha_{\text{MSR}} & = & \beta_{\text{MSR}} (d-k+1) \nonumber \\
B & = & \beta_{\text{MSR}} (d-k+1)(k). \label{eq:beta_as_quantum}\end{eqnarray}
Thus if one is able to construct an $[n,\;k,\;d]$ MSR code with repair bandwidth achieving the cut-set bound for a given value of $\beta$, then both $\alpha_{\text{MSR}}=(d-k+1) \beta_{\text{MSR}}$ and the size $B=k \, \alpha_{\text{MSR}}$ of the file are necessarily fixed. It thus makes sense to speak of an achievable triple
\[
(\beta, \ \ \alpha=(d-k+1) \beta, \ \ B= k \alpha).
\]
However, if a triple $(\beta, \alpha, B)$ is achievable, then so is the triple $(\ell \beta, \ell \alpha, \ell B)$ simply through a process of divide and conquer, i.e., we divide the message file into $\ell$ sub-files and apply the code for $(\beta, \alpha, B)$ to each of the $\ell$ sub-files. Hence, codes that are applicable for the case $\beta=1$ are of particular importance, as they permit codes to be constructed for every larger integral value of $\beta$. In addition, a code with small $\beta$ will involve manipulating a smaller number of message symbols and hence will, in general, be of lesser complexity. For these reasons, in the present paper, codes are constructed for the case $\beta=1$. Setting $\beta=1$ at the MSR point yields
\begin{equation} \alpha_{\text{MSR}}=d-k+1 . \label{eq:MSR_beta1_parameters}\end{equation}
Note that when $\alpha=1$, we have $B=k$ and meeting the cut-set bound would imply $d = k$. In this case, any $[n,k]$-MDS code will achieve the bound. Hence, we will consider $\alpha > 1$ throughout.
\subsection{Additional Terminology}
\subsubsection{Exact versus Functional Repair}
In general, the cut-set bound~(as derived in~\cite{DimKan1}) applies to functional-repair, that is, it applies to networks which replace a failed node with a replacement node which can carry out all the functions of the earlier failed node, but which does not necessarily store the same data. Thus, under functional-repair, there is need for the network to inform all nodes in the network of the replacement. This requirement is obviated under exact-repair, where a replacement node stores exactly the same data as was stored in the failed node. We will use the term {\em exact-repair MSR code} to denote a regenerating code operating at the minimum storage point, that is capable of exact-repair.
\subsubsection{Systematic Codes}
A systematic regenerating code can be defined as a regenerating code designed in such a way that the $B$ message symbols are explicitly present amongst the $k \alpha$ code symbols stored in a select set of $k$ nodes, termed as the systematic nodes. Clearly, in the case of systematic regenerating codes, exact-repair of the systematic nodes is mandated. A data-collector connecting to the $k$ systematic nodes obtains the $B$ message symbols in an uncoded form, making systematic nodes a preferred choice for data recovery. This makes the fast repair of systematic nodes a priority, motivating the interest in minimizing the repair bandwidth for the exact-repair of systematic nodes.
~
The immediate question this raises is whether the combination of (a) the restriction to repair of systematic nodes and (b) the requirement of exact-repair of the systematic nodes leads to a bound on the parameters $(\alpha, \beta)$ different from the cut-set bound. It turns out that the same bound on the parameters $(\alpha, \beta)$ appearing in \eqref{eq:MSR_parameters} still applies, and this is established in Section~\ref{sec:notation}.
\subsection{Exact-repair MSR Codes as Network Codes}\label{subsec:net_cod}
The existence of regenerating codes for the case of functional-repair was proved~(\cite{DimKan1,YunDimKan}) after casting the reconstruction and repair problems as a multicast network coding problem, and using random network codes to achieve the cut-set bound. As shown in our previous work \cite{ourNCC_NC}, construction of exact-repair MSR codes for the repair of systematic nodes is most naturally mapped to a non-multicast problem in network coding, for which very few results are available.
\begin{figure}[h]
\centering
\includegraphics[trim=8in 3.3in 9in 1.5in, clip=true, width=0.8\textwidth]{fig_storage_multicast.pdf}
\caption{\small The MSR code design problem for the exact-repair of just the systematic nodes, as a non-multicast network coding problem. Here, $[n=4, \ k=2, \ d=3]$ with $\beta=1$, giving $(\alpha=2, \ B=4)$. Unmarked edges have capacity $\alpha$. Nodes labelled \textit{DC} are data-collector sinks, and those labelled \textit{$l'$} are replacement node sinks.} \label{fig:stoMult423}
\end{figure}
The non-multicast network for the parameter set $[n=4,\ k=2,\ d=3]$ with $\beta=1$ is shown in Fig.~\ref{fig:stoMult423}. In general, the network can be viewed as having $k$ source nodes, corresponding to the $k$ systematic nodes, generating $\alpha$ symbols each per channel use. The parity nodes correspond to downlink nodes in the graph. To capture the fact that a parity node can store only $\alpha$ symbols, it is split~(as in~\cite{YunDimKan}) into two parts connected by a link of capacity $\alpha$ : parity node $m$ is split into $m_{\text{in}}$ and $m_{\text{out}}$ with all incoming edges arriving at $m_{\text{in}}$ and all outgoing edges emanating from $m_{\text{out}}$.
The sinks in the network are of two types. The first type correspond to data-collectors which connect to an arbitrary collection of $k$ nodes in the network for the purposes of data reconstruction. Hence there are ${n}\choose{k}$ sinks of this type. The second type of sinks represent a replacement node that is attempting to duplicate a failed systematic node, with the node replacing systematic node $\ell$ denoted by $\ell'$. Sinks of this type connect to an arbitrary set of $d$ out of the remaining $(n-1)$ nodes, and hence they are $k { {n-1}\choose{d}}$ in number. It is the presence of these sinks that gives the problem a non-multicast nature.
Thus, the present paper provides an instance where explicit code constructions achieve the cut-set bound for a non-multicast network, by exploiting the specific structure of the network.
~
\paragraph*{Relation Between $\beta$ and Scalar/Vector Network Coding}
The choice of $\beta$ as unity~(as in Fig.~\ref{fig:stoMult423}) may be viewed as an instance of scalar network coding. Upon increase in the value of $\beta$, the capacity of each data pipe is increased by a factor of $\beta$, thereby transforming the problem into a \textit{vector network coding} problem. Thus, $\beta=1$ implies
the absence of \textit{symbol extension}, which in general, reduces the complexity of system implementation and is thus of greater practical interest.
\subsection{Results of the Present Paper}
The primary results of the present paper are:
\begin{itemize}
\item The construction of a family of MDS codes for $d = n-1 \geq 2k-1$ that enable exact-repair of systematic nodes while achieving the cut-set bound on repair bandwidth. We have termed this code the MISER~\footnote{Short for an MDS, Interference-aligning, Systematic, Exact-Regenerating code, that is miserly in terms of bandwidth expended to repair a systematic node.} code.
\item Proof that interference alignment is \textit{necessary} for every exact-repair MSR code.
\item The proof of non-existence of linear exact-repair MSR codes for $d < 2k-3$ in the absence of symbol extension~(i.e., $\beta=1$). This result is clearly of interest in the light of on-going efforts to construct exact-repair codes with $\beta=1$ meeting the cut-set bound~\cite{WuDimISIT,ourAllerton,ourITW,DimSearch,WuArxiv,ourInterior_pts,Changho,ourProductMatrix,puyol}.
\item The construction, also explicit, of an MSR code for $d=k+1$. For most values of the parameters, $d=k+1$ falls under the $d<2k-3$ regime, and in light of the non-existence result above, exact-repair is not possible. The construction does the next best thing, namely, it carries out repair that is approximately exact~\footnote{The code consists of an exact-repair part along with an auxiliary part whose repair is not guaranteed to be exact. This is explained in greater detail in Section~\ref{sec:MDSplus}.}.
\end{itemize}
~
Note that the only explicit codes of the MDS type to have previously been constructed are for the small parameter sets $[n=4, \ k=2,\ d=3]$ and $[n=5, \ k=3,\ d=4]$. Prior work is described in greater detail in Section~\ref{sec:priorWork}.
~
The remainder of the paper is organized as follows. A brief overview of the prior literature in this field is given in the next section, Section~\ref{sec:priorWork}. The setting and notation are explained in Section~\ref{sec:notation}. The appearance of interference alignment in the context of distributed storage for construction of regenerating codes is detailed in Section~\ref{sec:intf_align} along with an illustrative example. Section~\ref{sec:gen_explicit} describes the MISER code. The non-existence of linear exact-repair MSR codes for $d < 2k-3$ in the absence of symbol extension can be found in Section~\ref{sec:non_exist_alpha_3}, along with the proof establishing the necessity of interference alignment. Section~\ref{sec:MDSplus} describes the explicit construction of an MSR code for $d=k+1$. The final section, Section~\ref{sec:conclusion}, draws conclusions.
\section{Prior Work}\label{sec:priorWork}
The concept of regenerating codes, introduced in~\cite{DimKan1,YunDimKan}, permits storage nodes to store more than the minimal $B/k$ units of data in order to reduce the repair bandwidth. Several distributed systems are analyzed in these works, and estimates of the mean node availability in such systems are obtained. Using these values, the substantial performance gains offered by regenerating codes in terms of bandwidth savings are demonstrated.
The problem of minimizing repair bandwidth for the \textit{functional} repair of nodes is considered in~\cite{DimKan1,YunDimKan} where it is formulated as a multicast network-coding problem in a network having an infinite number of nodes. A cut-set lower bound on the repair bandwidth is derived. Coding schemes achieving this bound are presented in~\cite{YunDimKan, WuAchievable} which however, are non-explicit. These schemes require large field size and the repair and reconstruction algorithms are also of high complexity.
Computational complexity is identified as a principal concern in the practical implementation of distributed storage codes in~\cite{Complexi} and a treatment of the use of random, linear, regenerating codes for achieving functional-repair can be found there.
The authors of~\cite{WuDimISIT} and~\cite{ourAllerton} independently introduce the notion of exact-repair. The idea of using interference alignment in the context of exact-repair codes for distributed storage appears first in~\cite{WuDimISIT}. Code constructions of the MDS type are provided, which meet the cut-set lower bound when $k=2$. Even here, the constructions are not explicit, and have large complexity and field-size requirements.
The first explicit construction of regenerating codes for the MBR point appears in \cite{ourAllerton}, for the case $d=n-1$. These codes carry out uncoded exact-repair and hence have zero repair complexity. The required field size is of the order of $n^2$, and in terms of minimizing bandwidth, the codes achieve the cut-set bound.
A computer search for exact-repair MSR codes for the parameter set $[n=5,~k=3,~d=4], \ ~\beta=1$, is carried out in~\cite{DimSearch}, and for this set of parameters, codes for several values of field size are obtained.
A setting slightly different from the exact-repair situation is considered in~\cite{WuArxiv}, where optimal MDS codes are given for the parameters $d=k+1$ and $n>2k$. Again, the schemes given here are non-explicit, and have high complexity and large field-size requirements.
We next describe the setting and notation to be used in the current paper.
\section{Setting and Notation} \label{sec:notation}
The distributed storage system considered in this paper consists of $n$ storage nodes, each having the capacity to store $\alpha$ symbols. Let $\underline{\mathbf{u}}$ be the message vector of length $B$, comprising the $B$ message symbols. Each message symbol can independently take values from $\mathbb{F}_q$, a finite field of size $q$.
In this paper, we consider only linear storage codes. As in traditional coding theory, by a linear storage code, we mean that every stored symbol is a linear combination of the message symbols, and only linear operations are permitted on the stored symbols. Thus all symbols considered belong to $\mathbb{F}_q$.
For $m=1,\ldots,n$, let the $(B \times \alpha)$ matrix $\mathbf{G}^{(m)}$ denote the generator matrix of node $m$. Node $m$ stores the following $\alpha$ symbols
\begin{equation} \underline{\mathbf{u}}^t\mathbf{G}^{(m)}. \end{equation}
\noindent
In the terminology of network coding, each column of the nodal generator matrix $\mathbf{G}^{(m)}$ corresponds to the \textit{global kernel}~(linear combination vector) associated to a symbol stored in the node. The $(B \times n \alpha)$ generator matrix for the entire distributed-storage code is given by
\begin{equation} \mathbb{G} \ = \ \begin{bmatrix}
\mathbf{G}^{(1)} & \mathbf{G}^{(2)} & \cdots & \mathbf{G}^{(n)}
\end{bmatrix}.
\end{equation}
Note that under exact-repair, the generator matrix of the code remains unchanged.
We will interchangeably speak of a node as either storing $\alpha$ symbols, by which we will mean the symbols $\underline{\mathbf{u}}^t\mathbf{G}^{(m)}$ or else as storing $\alpha$ vectors, by which we will mean the corresponding set of $\alpha$ global kernels that form the columns of nodal generator matrix $\mathbf{G}^{(m)}$.
We partition the $B(=k\alpha)$-length vector $\underline{\mathbf{u}}$ into $k$ components, $\underline{u}_i$ for $i=1,\ldots,k$, each comprising $\alpha$ distinct message symbols:
\begin{equation} \underline{\mathbf{u}}= \begin{bmatrix} \underline{u}_1 \\ \vdots \\ \underline{u}_k\end{bmatrix}. \end{equation}
We also partition the nodal generator matrices analogously into $k$ sub-matrices as
\begin{equation} \mathbf{G}^{(m)} = \begin{bmatrix} G^{(m)}_1 \vspace{5pt} \\ \vdots \vspace{5pt} \\ G^{(m)}_k \vspace{5pt} \end{bmatrix} \label{eq:notation_1}, \end{equation}
\noindent
where each $G^{(m)}_i$ is an $(\alpha \times \alpha)$ matrix. We will refer to $G^{(m)}_i$ as the $i^{\text{th}}$ component of $\mathbf{G}^{(m)}$. Thus, node $m$ stores the $\alpha$ symbols
\begin{equation} \underline{\mathbf{u}}^t \mathbf{G}^{(m)} = \sum_{i=1}^{k} \underline{u}^t_i G^{(m)}_i \label{eq:notation_2}. \end{equation}
~
Out of the $n$ nodes, the first $k$ nodes~(i.e., nodes $1,\ldots,k$) are systematic. Thus, for systematic node $\ell$
\begin{eqnarray}
G^{(\ell)}_i =
\left \lbrace \begin{array}{ll}
I_{\alpha} &\text{if } i=\ell \\
0_\alpha &\text{if } i\neq \ell
\end{array} \right. \quad \forall i \in \{1,\ldots,k \},
\end{eqnarray}
where $0_\alpha$ and $I_\alpha$ denote the $(\alpha \times \alpha)$ zero matrix and identity matrix, respectively; systematic node $\ell$ thus stores the $\alpha$ message symbols comprising $\underline{u}_\ell$.
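As an aside, the storage map just described is simple to prototype. The following minimal sketch~(Python, with integer arithmetic modulo $q$ standing in for $\mathbb{F}_q$; the parameters, the message values, and the helper name \texttt{stored\_symbols} are illustrative assumptions, not taken from this paper) evaluates $\underline{\mathbf{u}}^t\mathbf{G}^{(m)} = \sum_{i} \underline{u}^t_i G^{(m)}_i$ and confirms that a systematic node stores its message component uncoded.
\begin{verbatim}
# Minimal sketch of the storage map u^t G^(m) = sum_i u_i^t G_i^(m) over F_q.
# q, k, alpha and the message values are arbitrary illustrative choices.
q, k, alpha = 7, 3, 3
B = k * alpha
u = list(range(1, B + 1))                        # u = (u_1, ..., u_9)
u_parts = [u[i*alpha:(i+1)*alpha] for i in range(k)]

def stored_symbols(G_components):
    """G_components[i] is the (alpha x alpha) component matrix G_i^(m)."""
    return [sum(u_parts[i][r] * G_components[i][r][j]
                for i in range(k) for r in range(alpha)) % q
            for j in range(alpha)]

I3 = [[int(r == c) for c in range(alpha)] for r in range(alpha)]
Z3 = [[0] * alpha for _ in range(alpha)]

# Systematic node ell = 2 (G_i = I if i = ell, else 0) stores u_2 uncoded:
assert stored_symbols([Z3, I3, Z3]) == [x % q for x in u_parts[1]]
\end{verbatim}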
Upon failure of a node, the replacement node connects to an arbitrary set of $d$ remaining nodes, termed \textit{helper nodes}, downloading $\beta$ symbols from each. Thus, each helper node passes a collection of $\beta$ linear combinations of the symbols stored within the node. As described in Section~\ref{subsec:beta_1}, an MSR code with $\beta=1$ can be used to construct an MSR code for every higher integral value of $\beta$. Thus it suffices to provide constructions for $\beta=1$ and that is what we do here. When $\beta=1$, each helper node passes just a single symbol. Again, we will often describe the symbol passed by a helper node in terms of its associated global kernel, and hence will often speak of a helper node passing a \textit{vector}~\footnote{A simple extension to the case of $\beta > 1$ lets us treat the global kernels of the $\beta$ symbols passed by a helper node as a \textit{subspace} of dimension at most $\beta$. This `subspace' viewpoint has been found useful in proving certain general results at the MBR point in \cite{ourAllerton}, and for the interior points of the tradeoff in~\cite{ourInterior_pts}.}.
~
Throughout the paper, we use superscripts to refer to node indices, and subscripts to index the elements of a matrix. The letters $m$ and $\ell$ are reserved for node indices; in particular, the letter $\ell$ is used to index systematic nodes. All vectors are assumed to be column vectors. The vector $\underline{e}_i$ represents the standard basis vector of length $\alpha$, i.e., $\underline{e}_i$ is an $\alpha$-length unit vector with $1$ in the $i$th position and $0$s elsewhere. For a positive integer $p$, we denote the $(p \times p)$ zero matrix and the $(p \times p)$ identity matrix by $0_p$ and $I_p$ respectively. We say that a set of vectors is \textit{aligned} if the vector-space spanned by them has dimension at most one.
~
We next turn our attention to the question as to whether or not the combination of (a) restriction to systematic-node repair and (b) requirement of exact-repair of the systematic nodes leads to a bound on the parameters $(\alpha, \beta)$ different from the cut-set bound appearing in~\eqref{eq:cut_set_bound}.
The theorem below shows that the cut-set bound comes into play even if functional repair of a single node is required.
\begin{thm}
Any $[n, \ k, \ d]$-MDS regenerating code~(i.e., a regenerating code satisfying $B=k\alpha$) that guarantees the functional-repair of even a single node,
must satisfy the cut-set lower bound of~\eqref{eq:cut_set_bound} on repair bandwidth, i.e., must satisfy
\begin{equation} \beta \geq \frac{B}{k(d-k+1)}. \end{equation}
\end{thm}
\begin{IEEEproof}
First, consider the case when $\beta=1$. Let $\ell$ denote the node that needs to be repaired, and let $\{m_i \mid i=1, \ldots, d\}$ denote the $d$ helper nodes assisting in the repair of node $\ell$. Further, let $\{\underline{\mathbf{\gamma}}^{(m_i, \; \ell)}\mid i=1,\ldots,d\}$ denote the vectors passed by these helper nodes. At the end of the repair process, let the $(B \times \alpha)$ matrix $\mathbf{G}^{(\ell)}$ denote the generator matrix of the replacement node~(since we consider only functional-repair in this theorem, $\mathbf{G}^{(\ell)}$ need not be identical to the generator matrix of the failed node).
Looking back at the repair process, the replacement node obtains $\mathbf{G}^{(\ell)}$ by operating linearly on the collection of $d$ vectors $\{\underline{\mathbf{\gamma}}^{(m_i, \; \ell)}\mid i=1,\ldots,d\}$ of length $B$. This, in turn, implies that the dimension of the nullspace of the matrix
\begin{equation} \begin{bmatrix} \mathbf{G}^{(\ell)} & \underline{\mathbf{\gamma}}^{(m_1,\; \ell)} & \cdots & \underline{\mathbf{\gamma}}^{(m_d,\; \ell)} \end{bmatrix} \label{eq:nullspace_alpha}\end{equation}
should be at least $\alpha$, the number of columns of $\mathbf{G}^{(\ell)}$. However, the MDS property requires that at the end of the repair process, the global kernels associated to any $k$ nodes be linearly independent, and in particular, that the matrix
\begin{equation} \begin{bmatrix}\mathbf{G}^{(\ell)} & \underline{\mathbf{\gamma}}^{(m_1,\; \ell)} & \cdots & \underline{\mathbf{\gamma}}^{(m_{k-1},\; \ell)} \end{bmatrix} \end{equation} be of full rank. The matrix in~\eqref{eq:nullspace_alpha} thus has $(\alpha+d)$ columns and rank at least $(\alpha+k-1)$, so that the dimension of its nullspace is at most $(\alpha+d)-(\alpha+k-1)=d-k+1$. Combining the two bounds on the nullspace dimension gives $\alpha \leq d-k+1$, i.e.,
\[
d \ \geq \ k-1+\alpha.
\]
The proof for the case $\beta>1$, when every helper node passes a set of $\beta$ vectors, is a straightforward extension that leads to:
\begin{equation} d\beta \ \geq\ (k-1)\beta + \alpha. \end{equation}
Rearranging the terms in the equation above and substituting $\alpha = \frac{B}{k}$ leads to the desired result.
\end{IEEEproof}
~
\noindent
Thus, we recover equation~\eqref{eq:MSR_parameters}, and in an optimal code with $\beta=1$, we will continue to have
\[
d \ = \ k-1+\alpha.
\]
In this way, we have shown that the setting addressed here, namely that of exact-repair of the systematic nodes, leads to the same cut-set bound on repair bandwidth as in~\eqref{eq:cut_set_bound}.
\noindent
The next section explains how the concept of interference alignment arises in the distributed-storage context.
\section{Interference Alignment in Regenerating Codes}\label{sec:intf_align}
The idea of interference alignment has recently been proposed in \cite{CadJafar}, \cite{MotKhan} in the context of wireless communication. The idea here is to design the signals of multiple users in such a way that at every receiver, signals from all the unintended users occupy a subspace of the given space, leaving the remainder of the space free for the signal of the intended user.
In the distributed-storage context, the concept of `interference' comes into play during the exact-repair of a failed node in an MSR code. We present the example of a systematic MSR code with $[n=4, \; k=2, \; d=3]$ and $\beta=1$, which gives $(\alpha=d-k+1=2,\; B=k\alpha = 4)$. Let $\{ u_1, \ u_2, \ u_3, \ u_4 \}$ denote the four message symbols. Since $k=2$ here, we may assume that nodes $1$ and $2$ are systematic and that node $1$ stores $\{ u_1, \ u_2\}$ and node $2$ stores $\{ u_3, \ u_4 \}$. Nodes $3$ and $4$ are then the parity nodes, each storing two linear functions of the message symbols.
\begin{figure}[h]
\centering
\includegraphics[trim= 0.1in 6.4in 4in 0in, clip=true,width=\textwidth]{fig_42msr_regensys}
\caption{\small Illustration of interference alignment during exact-repair of systematic node $1$.}
\label{fig:fig_42msr_regensys}
\end{figure}
Consider repair of systematic node $1$ wherein the $d=3$ nodes, nodes $2$, $3$ and $4$, serve as helper nodes. The second systematic node, node $2$, can only pass a linear combination of the message symbols $u_3$ and $u_4$. The two symbols passed by the parity nodes are, in general, functions of all four message symbols: $(a_1 u_1 + a_2 u_2 + a_3 u_3 + a_4 u_4)$ and $(b_1 u_1 + b_2 u_2 + b_3 u_3 + b_4 u_4)$ respectively.
Using the symbols passed by the three helper nodes, the replacement of node $1$ needs to be able to recover the message symbols $\{u_1,u_2\}$. For obvious reasons, we will term $(a_1 u_1 + a_2 u_2 )$ and $(b_1 u_1 + b_2 u_2)$ the \textit{desired} components of the messages passed by parity nodes $3$ and $4$, and the terms $(a_3 u_3 + a_4 u_4)$ and $(b_3 u_3 + b_4 u_4)$ the \textit{interference} components.
Since node $2$ cannot provide any information pertaining to the desired symbols $\{ u_1, \ u_2\}$, the replacement node must be able to recover the desired symbols from the desired components $(a_1 u_1 + a_2 u_2 )$ and $(b_1 u_1 + b_2 u_2)$ of the messages passed to it by the parity nodes $3$ and $4$. To access the desired components, the replacement node must be in a position to subtract out the interference components $(a_3 u_3 + a_4 u_4)$ and $(b_3 u_3 + b_4 u_4)$ from the received linear combinations $(a_1 u_1 + a_2 u_2 + a_3 u_3 + a_4 u_4)$ and $(b_1 u_1 + b_2 u_2 + b_3 u_3 + b_4 u_4)$; the only way to subtract out the interference component is by making use of the linear combination of $\{u_3,u_4\}$ passed by node $2$. It follows that this can only happen if the interference components $(a_3 u_3 + a_4 u_4)$ and $(b_3 u_3 + b_4 u_4)$ are aligned, meaning that they are scalar multiples of each other.
An explicit code over $\mathbb{F}_5$ for the parameters chosen in the example is shown in Fig.~\ref{fig:fig_42msr_regensys}. The exact-repair of systematic node $1$ is shown, for which the remaining nodes pass the first of the two symbols stored in them. Observe that under this code, the interference component in the two symbols passed by the parity nodes are aligned in the direction of $u_3$, i.e., are scalar multiples of $u_3$. Hence node $2$ can simply pass $u_3$ and the replacement node can then make use of $u_3$ to cancel~(i.e., subtract out) the interference.
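The alignment mechanism is easily simulated. The sketch below~(Python, with arithmetic modulo $5$ standing in for $\mathbb{F}_5$) repairs node $1$ in a hypothetical $[n=4, \; k=2, \; d=3]$ code. The coefficients are one valid choice constructed along the lines of the MISER code of Section~\ref{sec:gen_explicit}; they are not claimed to match the code of Fig.~\ref{fig:fig_42msr_regensys}, and the message values are arbitrary.
\begin{verbatim}
# Hedged sketch: interference alignment for [n=4, k=2, d=3], beta=1 over F_5.
# The coefficients are one valid (hypothetical) choice; messages are arbitrary.
q = 5
u1, u2, u3, u4 = 2, 4, 1, 3

# First stored symbol of each parity node (the symbol passed to repair node 1).
# Interference components 4*u3 and 2*u3 are aligned along u3:
p3 = (4*u1 + 3*u2 + 4*u3) % q                   # passed by parity node 3
p4 = (1*u1 + 4*u2 + 2*u3) % q                   # passed by parity node 4
s2 = u3 % q                                     # node 2 passes u3 itself

# One downloaded symbol (s2) cancels the interference in both parity symbols:
d3 = (p3 - 4*s2) % q                            # = 4*u1 + 3*u2
d4 = (p4 - 2*s2) % q                            # =   u1 + 4*u2

# Solve [[4,3],[1,4]] (u1,u2)^t = (d3,d4)^t over F_5; det = 4*4 - 3*1 = 3.
det_inv = pow((4*4 - 3*1) % q, q - 2, q)
r1 = ((4*d3 - 3*d4) * det_inv) % q
r2 = ((-d3 + 4*d4) * det_inv) % q
assert (r1, r2) == (u1, u2)                     # node 1 is repaired exactly
\end{verbatim}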
In the context of regenerating codes, interference alignment was first used by Wu et al.~\cite{WuDimISIT} to provide a scheme~(although not an explicit one) for exact-repair at the MSR point. However, interference alignment is employed only to a limited extent: only a portion of the interference components is aligned and, as a result, the scheme is optimal only for the case $k=2$.
In the next section, we describe the construction of the MISER code which aligns interference and achieves the cut-set bound on the repair bandwidth for repair of systematic nodes. This is the \textit{first} interference-alignment-based explicit code construction that meets the cut-set bound.
\section{Construction of the MISER Code}\label{sec:gen_explicit}
In this section, we provide an explicit construction of a systematic MDS code that achieves the lower bound on repair bandwidth for the exact-repair of systematic nodes, and which we term the MISER code. We begin with an illustrative example that explains the key ideas behind the construction. The general code construction for parameter sets of the form $n=2k,~d=n-1$ closely follows the construction in the example. A simple code-shortening technique is then employed to extend this code construction to the more general parameter set $n \geq 2k,~d=n-1$.
The construction technique can also be extended to the even more general case of arbitrary $n$, $d \geq 2k-1$, under the added requirement however, that the replacement node connect to all of the remaining systematic nodes.
\subsection{An Example} \label{sec:example}
The example deals with the parameter set, $[n=6,\;k=3,\;d=5]$, $\beta=1$, so that $(\alpha=d-k+1=3,\;B=k\alpha=9)$. We select $\mathbb{F}_7$ as the underlying finite field so that all message and code symbols are drawn from $\mathbb{F}_7$. Note that we have $\alpha=k=3$ here. This is true in general: whenever $n=2k$ and $d=n-1$, we have $\alpha=d-k+1=k$ which simplifies the task of code construction.
~
\subsubsection{Design of Nodal Generator Matrices}
As $k=3$, the first three nodes are systematic and store data in uncoded form. Hence
\begin{equation}
\mathbf{G}^{(1)} = \begin{bmatrix} I_3 \vspace{2pt} \\ 0_3 \vspace{2pt} \\ 0_3 \end{bmatrix} , \
\mathbf{G}^{(2)} = \begin{bmatrix} 0_3 \vspace{2pt} \\ I_3 \vspace{2pt} \\ 0_3 \end{bmatrix} , \
\mathbf{G}^{(3)} = \begin{bmatrix} 0_3 \vspace{2pt} \\ 0_3 \vspace{2pt} \\ I_3 \end{bmatrix}~.
\end{equation}
A key ingredient of the code construction presented here is the use of a Cauchy matrix~\cite{cauchy}. Let \begin{equation} {\Psi}_3 = \left[ \resizebox{!}{!}{\begin{tabular}{*{3}{c}}
${\psi}_1^{(4)}$ & ${\psi}_1^{(5)}$ & ${\psi}_1^{(6)}$ \vspace{2pt} \\
${\psi}_2^{(4)}$ & ${\psi}_2^{(5)}$ & ${\psi}_2^{(6)}$ \vspace{2pt} \\
${\psi}_3^{(4)}$ & ${\psi}_3^{(5)}$ & ${\psi}_3^{(6)}$
\end{tabular}} \right] \label{eq:cauchy} \end{equation}
be a $(3 \times 3)$ matrix such that each of its sub-matrices is full rank. Cauchy matrices have this property and in our construction, we will assume ${\Psi}_3$ to be a Cauchy matrix.
~
We choose the generator matrix of parity node $m~(m=4,5,6)$ to be
\begin{equation} \mathbf{G}^{(m)} = \left[ \resizebox{!}{!}{\renewcommand{\arraystretch}{1.2}\begin{tabular}{*{3}{c}}
$2{\psi}_1^{(m)} $&$ 0 $&$ 0 $ \\
$2{\psi}_2^{(m)} $&$ {\psi}_1^{(m)} $&$ 0$ \\
$2{\psi}_3^{(m)} $&$ 0 $&$ {\psi}_1^{(m)} $ \\ \hline \vspace{-11pt} \\
$ {\psi}_2^{(m)} $&$ 2{\psi}_1^{(m)} $&$ 0 $ \\
$ 0 $&$ 2{\psi}_2^{(m)} $&$ 0 $ \\
$0$ &$ 2{\psi}_3^{(m)} $&${\psi}_2^{(m)}$ \\ \hline \vspace{-11pt} \\
$ {\psi}_3^{(m)} $&$ 0 $&$ 2{\psi}_1^{(m)} $ \\
$ 0 $&$ {\psi}_3^{(m)} $&$ 2{\psi}_2^{(m)} $ \\
$ 0 $&$ 0 $&$ 2{\psi}_3^{(m)}$ \\
\end{tabular}} \right], \end{equation}
where the non-zero entries of the $i$th sub-matrix are restricted to lie either along the diagonal or else within the $i$th column.
The generator matrix is designed keeping in mind the need for interference alignment and this will be made clear in the discussion below concerning the exact-repair of systematic nodes. The choice of scalar `$2$' plays an important role in the data reconstruction property; the precise role of this scalar will become clear when this property is discussed. An example of the $[6, \; 3, \; 5]$ MISER code over $\mathbb{F}_7$ is provided in Fig.~\ref{fig:example_635}, where the Cauchy matrix $\Psi$ is chosen as
\begin{equation} \Psi = \left[\begin{tabular}{>{$}c<{$}>{$}c<{$}>{$}c<{$}}
5 & 4 & 1 \\ 2 & 5 & 4 \\ 3 & 2 & 5
\end{tabular} \right].
\end{equation}
Also depicted in the figure is the exact-repair of node $1$, for which each of the remaining nodes pass the first symbol that they store. It can be seen that the first symbols stored in the three parity nodes $4$, $5$ and $6$ have their interference components (components $2$ and $3$) aligned and their desired components (component $1$) linearly independent.
\begin{figure}
\centering
\includegraphics[trim=0in 0.8in 0 0, clip=true, width=\textwidth]{fig_635_example}
\caption{\small An example of the $[6, \; 3, \; 5]$ MISER code over $\mathbb{F}_7$. Here, $\{u_1,\ldots,u_9\}$ denote the message symbols and the code symbols stored in each of the nodes are shown. Exact-repair of node $1$ is also depicted.}
\label{fig:example_635}
\end{figure}
~
The key properties of the MISER code will be established in the next section, namely:
\begin{itemize} \item that the code is an MDS code over alphabet $\mathbb{F}_q^\alpha$ and this property enables data reconstruction and \item that the code has the ability to carry out exact-repair of the systematic nodes while achieving the cut-set bound on repair bandwidth. \end{itemize}
We begin by establishing the exact-repair property.
~
\subsubsection{Exact-repair of Systematic Nodes}
Our algorithm for systematic node repair is simple. As noted above, each node stores $\alpha=k$ symbols. These $k$ symbols are assumed to be ordered, so that we may speak of the first symbol stored by a node, and so on. To repair systematic node $\ell$, $1 \leq \ell \leq k$, each of the remaining nodes passes its respective $\ell$th symbol.
Suppose that in our example construction here, node $1$ fails. Each of the parity nodes then passes its first symbol, or equivalently, in terms of global kernels, the first column of its generator matrix for the repair of node $1$. Thus, from nodes $4,\ 5,$ and $6$, the replacement node obtains
\begin{equation} \hspace{-2pt}
\left[\hspace{-2pt} \resizebox{1.2cm}{!}{\begin{tabular}{c}
$2{\psi}_1^{(4)}$ \\
$2{\psi}_2^{(4)}$ \\
$2{\psi}_3^{(4)}$ \vspace{2pt}\\ \hline
\vspace{-.3cm} \\
${\psi}_2^{(4)}$ \\
$0$ \\
$0$ \vspace{2pt}\\ \hline
\vspace{-.3cm} \\
${\psi}_3^{(4)}$ \\
$0$ \\
$0$
\end{tabular}} \hspace{-2pt}\right]\hspace{-2pt}, \quad
\left[\hspace{-2pt} \resizebox{1.2cm}{!}{\begin{tabular}{c}
$2{\psi}_1^{(5)}$ \\
$2{\psi}_2^{(5)}$ \\
$2{\psi}_3^{(5)}$ \vspace{2pt}\\ \hline
\vspace{-.3cm} \\
${\psi}_2^{(5)}$ \\
$0$ \\
$0$ \vspace{2pt}\\ \hline
\vspace{-.3cm} \\
${\psi}_3^{(5)}$ \\
$0$ \\
$0$
\end{tabular}} \hspace{-2pt}\right]\hspace{-2pt}, \quad
\left[\hspace{-2pt} \resizebox{1.2cm}{!}{\begin{tabular}{c}
$2{\psi}_1^{(6)}$ \\
$2{\psi}_2^{(6)}$ \\
$2{\psi}_3^{(6)}$ \vspace{2pt}\\ \hline
\vspace{-.3cm} \\
${\psi}_2^{(6)}$ \\
$0$ \\
$0$ \vspace{2pt}\\ \hline
\vspace{-.3cm} \\
${\psi}_3^{(6)}$ \\
$0$ \\
$0$
\end{tabular}} \hspace{-2pt}\right]~.
\end{equation}
Note that in each of these vectors, the desired~(first) components are a scaled version of the respective columns of the Cauchy matrix $\Psi_3$. The interference~(second and third) components are aligned along the vector $[1 \ \ 0 \ \ 0]^t$. Thus, each interference component is aligned along a single dimension. Systematic nodes $2$ and $3$ then pass a single vector each that is designed to cancel out this interference. Specifically, nodes $2$ and $3$ respectively pass the vectors
\begin{equation}
\left[\hspace{-2pt} \resizebox{0.6cm}{!}{\begin{tabular}{c}
$0$ \\
$0$ \\
$0$ \\ \hline
$1$ \\
$0$ \\
$0$ \\ \hline
$0$ \\
$0$ \\
$0$
\end{tabular}} \hspace{-2pt}\right], \quad
\left[\hspace{-2pt} \resizebox{0.6cm}{!}{\begin{tabular}{c}
$0$ \\
$0$ \\
$0$ \\ \hline
$0$ \\
$0$ \\
$0$ \\ \hline
$1$ \\
$0$ \\
$0$
\end{tabular}} \hspace{-2pt}\right]~.
\end{equation}
The net result is that after interference cancellation has taken place, replacement node $1$ is left with access to the columns of the matrix
\[
\left[ \resizebox{!}{!}{\begin{tabular}{c}
$2{\Psi}_3$ \\ \hline
$0_3$ \\ \hline
$0_3$
\end{tabular}} \right] .
\]
Thus the desired component is a scaled Cauchy matrix ${\Psi}_3$. By multiplying this matrix on the right by $\frac{1}{2}\Psi_3^{-1}$, one recovers
\[
\left[ \resizebox{!}{!}{\begin{tabular}{c}
$I_3$ \\ \hline
$0_3$ \\ \hline
$0_3$
\end{tabular}} \right]
\]
as desired.
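The repair just carried out can also be checked mechanically. The sketch below~(Python, with arithmetic modulo $7$ standing in for $\mathbb{F}_7$; the message values and the helper name \texttt{first\_symbol} are illustrative assumptions) rebuilds the first columns of the parity generator matrices from the $\Psi$ above with $\epsilon=2$, cancels the aligned interference using $u_4$ and $u_7$~(the first symbols of nodes $2$ and $3$), and inverts $2\Psi_3$ to recover $\underline{u}_1$.
\begin{verbatim}
# Sketch: exact-repair of node 1 in the [6,3,5] example over F_7, using the
# Cauchy matrix Psi above and epsilon = 2. Message values are arbitrary.
q, k, alpha, eps = 7, 3, 3, 2
Psi = [[5, 4, 1], [2, 5, 4], [3, 2, 5]]         # column m holds psi^(m+4)
u = [6, 2, 0, 5, 1, 4, 3, 2, 6]                 # u_1, ..., u_9

def first_symbol(m):                            # 1st stored symbol of parity node
    psi = [Psi[r][m] for r in range(alpha)]
    s = eps * sum(psi[r] * u[r] for r in range(alpha))  # desired: eps*psi.(u1,u2,u3)
    return (s + psi[1] * u[3] + psi[2] * u[6]) % q      # interference: u4, u7 terms

# Nodes 2 and 3 pass u4 and u7; subtracting cancels the aligned interference:
d = [(first_symbol(m) - Psi[1][m]*u[3] - Psi[2][m]*u[6]) % q for m in range(3)]

# Now d = (u1,u2,u3) * (eps*Psi); invert eps*Psi by Gauss-Jordan over F_7:
A = [[(eps * Psi[r][c]) % q for c in range(3)] + [int(i == r) for i in range(3)]
     for r in range(3)]
for c in range(3):
    piv = next(r for r in range(c, 3) if A[r][c])
    A[c], A[piv] = A[piv], A[c]
    inv = pow(A[c][c], q - 2, q)
    A[c] = [(x * inv) % q for x in A[c]]
    for r in range(3):
        if r != c and A[r][c]:
            A[r] = [(A[r][j] - A[r][c] * A[c][j]) % q for j in range(6)]
inv_epsPsi = [row[3:] for row in A]
rec = [sum(d[m] * inv_epsPsi[m][c] for m in range(3)) % q for c in range(3)]
assert rec == u[:3]                             # node 1 recovered exactly
\end{verbatim}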
Along similar lines, when nodes $2$ or $3$ fail, the parity nodes pass the second or third columns of their generator matrices respectively. The design of generator matrices for the parity nodes is such that interference alignment holds during the repair of either systematic node, hence enabling the exact-repair of all the systematic nodes.
~
\subsubsection{Data Reconstruction~(MDS property)}\label{sec:eg_recon}
For the reconstruction property to be satisfied, a data-collector downloading symbols stored in any three nodes should be able to recover all the nine message symbols. That is, the $(9 \times 9)$ matrix formed by columnwise concatenation of any three nodal generator matrices, should be non-singular. We consider the different possible sets of three nodes that the data-collector can connect to, and provide appropriate decoding algorithms to handle each case.
(a) \textit{Three systematic nodes:} When a data-collector connects to all three systematic nodes, it obtains all the message symbols in uncoded form and hence reconstruction is trivially satisfied.
(b) \textit{Two systematic nodes and one parity node:} Suppose the data-collector connects to systematic nodes $2$ and $3$, and parity node $4$. It obtains all the symbols stored in nodes $2$ and $3$ in uncoded form and proceeds to subtract their effect from the symbols in node $4$. It is thus left to decode the message symbols $\underline{u}_1$, that are encoded using matrix $G^{(4)}_1$ given by
\begin{equation} G^{(4)}_1= \left[ \resizebox{!}{!}{\begin{tabular}{ccc}
$2{\psi}_1^{(4)} $&$0 $&$0 $\\
$ 2{\psi}_2^{(4)} $ &$ {\psi}_2^{(4)} $&$ 0$\\
$ 2{\psi}_3^{(4)} $&$ 0$&$ {\psi}_3^{(4)} $
\end{tabular}} \right]~. \end{equation}
This lower-triangular matrix is non-singular since, by definition, all the entries in a Cauchy matrix are non-zero. The message symbols $\underline{u}_1$ can hence be recovered by inverting $G^{(4)}_1$.
(c) \textit{All three parity nodes:} We consider next the case when a data-collector connects to all three parity nodes. Let $C_1$ be the $(9 \times 9)$ matrix formed by the columnwise concatenation of the generator matrices of these three nodes.
~
\textit{Claim 1:} The data-collector can recover all the message symbols encoded using the matrix $C_1$, formed by the columnwise concatenation of the generator matrices of the three parity nodes:
\begin{equation} C_1 = \left[ \mathbf{G}^{(4)} \quad \mathbf{G}^{(5)} \quad \mathbf{G}^{(6)} \right]. \end{equation}
\begin{IEEEproof}
We permute the columns of $C_1$ to obtain a second matrix $C_2$ in which the $i^{th}\; (i=1,2,3)$ columns of all the three nodes are adjacent to each other as shown below:
\begin{equation} C_2 = \label{eq:invertNonsysStart}
\left[ \resizebox{!}{2.5cm}{\begin{tabular}{ccc|ccc|ccc}
$2{\psi}_1^{(4)} $&$ 2{\psi}_1^{(5)} $&$ 2{\psi}_1^{(6)} $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $\\
$2{\psi}_2^{(4)} $&$ 2{\psi}_2^{(5)} $&$ 2{\psi}_2^{(6)} $&$ {\psi}_1^{(4)} $&$ {\psi}_1^{(5)} $&$ {\psi}_1^{(6)} $&$ 0 $&$ 0 $&$ 0 $\\
$2{\psi}_3^{(4)} $&$ 2{\psi}_3^{(5)} $&$ 2{\psi}_3^{(6)} $&$ 0 $&$ 0 $&$ 0 $&$ {\psi}_1^{(4)} $&$ {\psi}_1^{(5)} $&$ {\psi}_1^{(6)} $ \vspace{2pt}\\ \hline
\vspace{-.04cm}&\vspace{-.04cm}&\vspace{-.04cm}&\vspace{-.04cm}&\vspace{-.04cm}&\vspace{-.04cm}&\vspace{-.04cm}&\vspace{-.04cm}&\\
${\psi}_2^{(4)} $&$ {\psi}_2^{(5)} $&$ {\psi}_2^{(6)} $&$ 2{\psi}_1^{(4)} $&$ 2{\psi}_1^{(5)} $&$ 2{\psi}_1^{(6)} $&$ 0 $&$ 0 $&$ 0$ \\
$0 $&$ 0 $&$ 0 $&$ 2{\psi}_2^{(4)} $&$ 2{\psi}_2^{(5)} $&$ 2{\psi}_2^{(6)} $&$ 0 $&$ 0 $&$ 0$ \\
$0 $&$ 0 $&$ 0 $&$ 2{\psi}_3^{(4)} $&$ 2{\psi}_3^{(5)} $&$ 2{\psi}_3^{(6)} $&$ {\psi}_2^{(4)} $&$ {\psi}_2^{(5)} $&$ {\psi}_2^{(6)}$ \vspace{2pt} \\ \hline
\vspace{-.04cm}&\vspace{-.04cm}&\vspace{-.04cm}&\vspace{-.04cm}&\vspace{-.04cm}&\vspace{-.04cm}&\vspace{-.04cm}&\vspace{-.04cm}&\\
${\psi}_3^{(4)} $&$ {\psi}_3^{(5)} $&$ {\psi}_3^{(6)} $&$ 0 $&$ 0 $&$ 0 $&$ 2{\psi}_1^{(4)} $&$ 2{\psi}_1^{(5)} $&$ 2{\psi}_1^{(6)}$ \\
$0 $&$ 0 $&$ 0 $&$ {\psi}_3^{(4)} $&$ {\psi}_3^{(5)} $&$ {\psi}_3^{(6)} $&$ 2{\psi}_2^{(4)} $&$ 2{\psi}_2^{(5)} $&$ 2{\psi}_2^{(6)}$ \\
$0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 2{\psi}_3^{(4)} $&$ 2{\psi}_3^{(5)} $&$ 2{\psi}_3^{(6)}$ \\
\multicolumn{3}{c}{$\underbrace{\qquad\qquad\qquad\qquad}$}&\multicolumn{3}{c}{$\underbrace{\qquad\qquad\qquad\qquad}$}&\multicolumn{3}{c}{$\underbrace{\qquad\qquad\qquad\qquad}$}\\
\multicolumn{3}{c}{\small{group 1}}&\multicolumn{3}{c}{\small{group 2}}&\multicolumn{3}{c}{\small{group 3}}
\vspace{-.8cm} \end{tabular}} \right] \nonumber~.\end{equation} \vspace{.6cm}\\
~
Note that a permutation of the columns does not alter the information available to the data-collector and hence is a permissible operation. This rearrangement of coded symbols, while not essential, simplifies the proof. We then post-multiply $C_2$ by the block-diagonal matrix with ${\Psi}_3^{-1}$ repeated along its diagonal to obtain the matrix $C_3$ given by
\begin{eqnarray} C_3 &=& C_2 \left[ \resizebox{!}{!}{\begin{tabular}{ccc}
${\Psi}_3^{-1} $&$ 0_3 $&$ 0_3 $\\
$0_3 $&$ {\Psi}_3^{-1} $&$ 0_3 $\\
$0_3 $&$ 0_3 $&$ {\Psi}_3^{-1} $
\end{tabular}} \right] \\
&=& \left[ \begin{tabular}{ccc|ccc|ccc}
$2 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $\\
$0 $&$ 2 $&$ 0 $&$ 1 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $\\
$0 $&$ 0 $&$ 2 $&$ 0 $&$ 0 $&$ 0 $&$ 1 $&$ 0 $&$ 0 $\\
\hline
$0 $&$ 1 $&$ 0 $&$ 2 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $\\
$0 $&$ 0 $&$ 0 $&$ 0 $&$ 2 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $\\
$0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 2 $&$ 0 $&$ 1 $&$ 0 $\\
\hline
$0 $&$ 0 $&$ 1 $&$ 0 $&$ 0 $&$ 0 $&$ 2 $&$ 0 $&$ 0 $\\
$0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 1 $&$ 0 $&$ 2 $&$ 0 $\\
$0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 2 $
\end{tabular} \right]. \end{eqnarray}
To put things back in perspective, the data-collector at this point has access to the coded symbols
\[
\underline{u}^t C_3
\]
associated with the three parity nodes. From the nature of the matrix it is evident that message symbols $u_1$, $u_5$ and $u_9$ are now available to the data-collector, and their effect can be subtracted from the remaining symbols to obtain the matrix
\begin{equation} [u_2 \ u_3 \ u_4 \ u_6 \ u_7 \ u_8]
\underbrace{\left[ \begin{tabular}{cccccc}
$ 2 $&$ 0 $&$ 1 $&$ 0 $&$ 0 $&$ 0 $\\
$ 0 $&$ 2 $&$ 0 $&$ 0 $&$ 1 $&$ 0 $\\
$ 1 $&$ 0 $&$ 2 $&$ 0 $&$ 0 $&$ 0 $\\
$ 0 $&$ 0 $&$ 0 $&$ 2 $&$ 0 $&$ 1 $\\
$ 0 $&$ 1 $&$ 0 $&$ 0 $&$ 2 $&$ 0 $\\
$ 0 $&$ 0 $&$ 0 $&$ 1 $&$ 0 $&$ 2 $\\
\end{tabular} \right]}_{C_4} \label{eq:invertNonsysEnd}.\end{equation}
Upon simultaneously permuting its rows and columns so as to pair indices $1$ with $3$, $2$ with $5$, and $4$ with $6$, the matrix $C_4$ becomes block-diagonal with three $(2 \times 2)$ blocks $\left[\begin{smallmatrix} 2 & 1 \\ 1 & 2\end{smallmatrix}\right]$, each of determinant $2^2-1$. As $2^2 \neq 1$ in $\mathbb{F}_7$, the matrix $C_4$ is non-singular and thus the remaining message symbols can also be recovered by inverting $C_4$.
\end{IEEEproof}
(d) \textit{One systematic node and two parity nodes:} Suppose the data-collector connects to systematic node $1$ and parity nodes $4$ and $5$. All symbols of node $1$, i.e., $\underline{u}_1$ are available to the data-collector. Thus, it needs to decode the message-vector components $\underline{u}_2$ and $\underline{u}_3$ which are encoded using a matrix $B_1$ given by
\begin{equation} B_1 = \begin{bmatrix}
G_2^{(4)} & G_2^{(5)} \\
G_3^{(4)}& G_3^{(5)}
\end{bmatrix}
\end{equation}
~
\textit{Claim 2:} The block-matrix $B_1$ above is non-singular and in this way, the message-vector components $\underline{u}_2$ and $\underline{u}_3$ can be recovered.
\begin{IEEEproof}
Once again, we begin by permuting the columns of $B_1$. For $i=2,3,1$ (in this order), we group the $i^{th}$ columns of the two parity nodes together to give the matrix
\begin{equation} B_2 =
\left[ \hspace{-.3cm}\resizebox{7.5cm}{!}{
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{cc|cc|cc}
$ 2{\psi}_1^{(4)} $&$ 2{\psi}_1^{(5)} $&$0 $&$ 0 $&$ {\psi}_2^{(4)} $&$ {\psi}_2^{(5)}$ \\
$ 2{\psi}_2^{(4)} $&$ 2{\psi}_2^{(5)} $&$ 0 $&$ 0 $&$ 0 $&$ 0$ \\
$2{\psi}_3^{(4)} $&$ 2{\psi}_3^{(5)} $&$ {\psi}_2^{(4)} $&$ {\psi}_2^{(5)} $&$ 0 $&$ 0 $ \\ \hline
$ 0 $&$ 0 $&$ 2{\psi}_1^{(4)} $&$ 2{\psi}_1^{(5)} $&$ {\psi}_3^{(4)} $&$ {\psi}_3^{(5)}$\\
$ {\psi}_3^{(4)} $&$ {\psi}_3^{(5)} $&$ 2{\psi}_2^{(4)} $&$ 2{\psi}_2^{(5)} $&$ 0 $&$ 0 $\\
$ 0 $&$ 0 $&$ 2{\psi}_3^{(4)} $&$ 2{\psi}_3^{(5)} $&$ 0 $&$ 0 $
\end{tabular}} \right]. \end{equation}
\noindent Let $\Psi_2$ be the $(2 \times 2)$ sub-matrix of the Cauchy matrix $\Psi_3$, given by \begin{equation}{\Psi}_2 = \left[ \resizebox{!}{!}{\begin{tabular}{cc}
${\psi}_2^{(4)}$ & ${\psi}_2^{(5)}$ \\
${\psi}_3^{(4)}$ & ${\psi}_3^{(5)}$
\end{tabular}} \right]. \end{equation}
Since every sub-matrix of $\Psi_3$ is non-singular, so is $\Psi_2$. Keeping in mind the fact that the data collector can perform any linear operation on the columns of $B_2$, we next multiply the last two columns of $B_2$ by ${\Psi}_2^{-1}$ (while leaving the other $4$ columns unchanged) to obtain the matrix
\begin{equation} B_3 = \left[ \resizebox{!}{!}{\begin{tabular}{cc|cc|cc}
$ 2{\psi}_1^{(4)} $&$ 2{\psi}_1^{(5)} $&$0 $&$ 0 $&$ 1 $&$ 0$\\
$ 2{\psi}_2^{(4)} $&$ 2{\psi}_2^{(5)} $&$ 0 $&$ 0 $&$ 0 $&$ 0$ \\
$2{\psi}_3^{(4)} $&$ 2{\psi}_3^{(5)} $&$ {\psi}_2^{(4)} $&$ {\psi}_2^{(5)} $&$ 0 $&$ 0 $ \\ \hline
$ 0 $&$ 0 $&$ 2{\psi}_1^{(4)} $&$ 2{\psi}_1^{(5)} $&$0$&$1$\\
$ {\psi}_3^{(4)} $&$ {\psi}_3^{(5)} $&$ 2{\psi}_2^{(4)} $&$ 2{\psi}_2^{(5)} $&$ 0 $&$ 0 $\\
$ 0 $&$ 0 $&$ 2{\psi}_3^{(4)} $&$ 2{\psi}_3^{(5)} $&$ 0 $&$ 0 $
\end{tabular}} \right]~.
\end{equation}
The message symbols associated to the last two columns of $B_2$ are now available to the data-collector and their effect on the rest of the encoded symbols can be subtracted out to get
\begin{equation} B_4
=\left[ \resizebox{5.3cm}{!}{\begin{tabular}{cc|cc}
$ 2{\psi}_2^{(4)} $&$ 2{\psi}_2^{(5)} $&$ 0 $&$0$\\
$2{\psi}_3^{(4)} $&$ 2{\psi}_3^{(5)}$&$ {\psi}_2^{(4)}$&$ {\psi}_2^{(5)} $\\ \hline
$ {\psi}_3^{(4)} $&$ {\psi}_3^{(5)}$&$ 2{\psi}_2^{(4)}$&$ 2{\psi}_2^{(5)} $\\
$ 0 $&$ 0 $&$ 2{\psi}_3^{(4)}$&$ 2{\psi}_3^{(5)}$
\end{tabular}} \right]~.\end{equation}
Along the lines of the previous case, the matrix $B_4$ above can be shown to be non-singular. We note that this condition is equivalent to the reconstruction in a MISER code with $k=2$ and a data-collector that attempts to recover the data by connecting to the two parity nodes.
\end{IEEEproof}
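The case analysis above lends itself to an exhaustive machine check. The sketch below~(Python, arithmetic modulo $7$; the function names are ad hoc) rebuilds all six nodal generator matrices of the example from $\Psi$ and $\epsilon=2$, and verifies that the $(9 \times 9)$ concatenation for every one of the $\binom{6}{3}=20$ possible sets of three nodes is non-singular, i.e., the MDS property.
\begin{verbatim}
# Sketch: exhaustive check of the MDS property for the [6,3,5] example over F_7.
from itertools import combinations
q, k, alpha, eps = 7, 3, 3, 2
Psi = [[5, 4, 1], [2, 5, 4], [3, 2, 5]]

def systematic(ell):                            # 9x3 generator of systematic node
    G = [[0] * alpha for _ in range(k * alpha)]
    for j in range(alpha):
        G[ell * alpha + j][j] = 1
    return G

def parity(m):                                  # 9x3 generator from eq. (choose_g)
    psi = [Psi[r][m] for r in range(alpha)]
    G = [[0] * alpha for _ in range(k * alpha)]
    for i in range(k):
        for j in range(alpha):
            if i == j:                          # diagonal block column: eps*psi
                for r in range(alpha):
                    G[i * alpha + r][j] = (eps * psi[r]) % q
            else:                               # off-diagonal: psi_i * e_j
                G[i * alpha + j][j] = psi[i]
    return G

def rank_mod(M):                                # Gaussian elimination over F_q
    M, rk = [row[:] for row in M], 0
    for c in range(len(M[0])):
        piv = next((r for r in range(rk, len(M)) if M[r][c]), None)
        if piv is None:
            continue
        M[rk], M[piv] = M[piv], M[rk]
        inv = pow(M[rk][c], q - 2, q)
        M[rk] = [(x * inv) % q for x in M[rk]]
        for r in range(len(M)):
            if r != rk and M[r][c]:
                M[r] = [(M[r][j] - M[r][c] * M[rk][j]) % q for j in range(len(M[0]))]
        rk += 1
    return rk

nodes = [systematic(l) for l in range(3)] + [parity(m) for m in range(3)]
for trio in combinations(range(6), 3):
    concat = [sum((nodes[t][row] for t in trio), []) for row in range(9)]
    assert rank_mod(concat) == 9                # any 3 nodes suffice to reconstruct
\end{verbatim}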
\subsection{The General MISER Code for $n = 2k,~d=n-1$} \label{sec:MISER_gen}
In this section, the construction of the MISER code for the general parameter set $n = 2k,~d=n-1$ is provided. Since the MISER code is built to satisfy the cut-set bound, we have $d=\alpha+k-1$, which implies that
\begin{equation} k=\alpha~. \end{equation}
This relation will play a key role in the design of the generator matrices of the parity nodes, as it permits each parity node to reserve $\alpha=k$ symbols, associated to linearly independent global kernels, one for the repair of each of the $k$ systematic nodes. In the example just examined, we had $\alpha=k=3$. The construction of the MISER code for the general parameter set $n = 2k,~d=n-1$ is very much along the lines of the construction of the example code.
\subsubsection{Design of Nodal Generator Matrices}
~
The first $k$ nodes are systematic and store the message symbols in uncoded form. Thus the component generator matrices $G^{(\ell)}_i $, $1 \leq i \leq k$ of the $\ell$th systematic node, $1 \leq \ell \leq k$, are given by
\begin{eqnarray}
G^{(\ell)}_i =
\left \lbrace \begin{array}{ll}
I_{\alpha} &\text{if } i=\ell \\
0_\alpha &\text{if } i\neq \ell
\end{array} \right.
\label{eq:explicitSystematicGenMxs}.
\end{eqnarray}
Let $\Psi$ be an $\left(\alpha \times (n-k)\right)$ matrix with entries drawn from $\mathbb{F}_q$ such that every sub-matrix of $\Psi$ is of full rank. Since $n-k=\alpha=k$, we have that $\Psi$ is a square matrix \footnote{In Section~\ref{sec:connect_to_all_systematic}, we extend the construction to the even more general case of arbitrary $n$, $d \geq 2k-1$, under the added requirement however, that the replacement node connect to all of the remaining systematic nodes. In that section, we will be dealing with a rectangular $\left(\alpha \times (n-k)\right)$ matrix $\Psi$.}. Let the columns of $\Psi$ be given by
\begin{eqnarray}
\Psi=\begin{bmatrix}
\underline{\psi}^{(k+1)} & \underline{\psi}^{(k+2)} & \cdots & \underline{\psi}^{(n)}
\end{bmatrix}
\end{eqnarray}
where the $m$th column is given by \begin{equation} \underline{\psi}^{(m)}=\begin{bmatrix}{\psi}^{(m)}_1 \\ \vdots \\ {\psi}^{(m)}_\alpha \end{bmatrix} . \end{equation}
A Cauchy matrix is an example of such a matrix, and in our construction, we will assume ${\Psi}$ to be a Cauchy matrix.
~
\begin{defn}[Cauchy matrix] An $(s \times t)$ Cauchy matrix $\Psi$ over a finite field $\mathbb{F}_q$ is a matrix whose $(i,j)$th element ($1 \leq i \leq s$, $1 \leq j \leq t$) equals $\frac{1}{(x_i-y_j)}$ where $\{x_i\}\cup\{y_j\}$ is an injective sequence, i.e., a sequence with no repeated elements.
\end{defn}
~
Thus the minimum field size required for the construction of an $(s \times t)$ Cauchy matrix is $s+t$. Hence, if we choose $\Psi$ to be a Cauchy matrix,
\begin{equation} q \geq \alpha + n - k. \end{equation}
Any finite field satisfying this condition will suffice for our construction.
Note that since $n-k \geq \alpha \geq 2$, we have $q \geq 4$.
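For concreteness, the following sketch~(Python; the parameter choice $x=(3,4,5)$, $y=(0,1,2)$ over $\mathbb{F}_7$ is one selection that happens to reproduce the $\Psi$ of the example in Section~\ref{sec:example}) generates a Cauchy matrix and verifies by brute force that every square sub-matrix is non-singular.
\begin{verbatim}
# Sketch: an (s x t) Cauchy matrix over F_q, entries 1/(x_i - y_j), with a
# brute-force check that every square sub-matrix is non-singular (q >= s+t).
from itertools import combinations

def cauchy(xs, ys, q):
    return [[pow(x - y, q - 2, q) for y in ys] for x in xs]

def det_mod(M, q):
    M, n, d = [row[:] for row in M], len(M), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c]), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d = (d * M[c][c]) % q
        inv = pow(M[c][c], q - 2, q)
        for r in range(c + 1, n):
            f = (M[r][c] * inv) % q
            M[r] = [(M[r][j] - f * M[c][j]) % q for j in range(n)]
    return d % q

# x = (3,4,5), y = (0,1,2) over F_7 reproduces the Psi of the earlier example:
Psi = cauchy([3, 4, 5], [0, 1, 2], 7)
assert Psi == [[5, 4, 1], [2, 5, 4], [3, 2, 5]]
for sz in (1, 2, 3):                            # every sub-matrix is full rank
    for rows in combinations(range(3), sz):
        for cols in combinations(range(3), sz):
            assert det_mod([[Psi[r][c] for c in cols] for r in rows], 7) != 0
\end{verbatim}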
~
We introduce some additional notation at this point. Denote the $j$th column of the $(\alpha \times \alpha)$ matrix $G^{(m)}_i$ as $\underline{g}^{(m)}_{i,j}$, i.e., \begin{equation} G^{(m)}_i = \left[\underline{g}^{(m)}_{i,1}\quad \cdots\quad \underline{g}^{(m)}_{i,\alpha}\right].\end{equation}
The code is designed assuming a regeneration algorithm under which each of the $\alpha$ parity nodes passes its $\ell^{\text{th}}$ column for repair of the $\ell^{\text{th}}$ systematic node. With this in mind, for $k+1 \leq m \leq n$, $1 \leq i,j \leq \alpha$, we choose
\begin{equation}
\underline{g}^{(m)}_{i,j}=
\left\lbrace \begin{array}{ll}
\epsilon \underline{\psi}^{(m)} &\text{if }i = j \\
{\psi}^{(m)}_i\underline{e}_j\;\; &\text{if }i\neq j
\end{array}
\right.\label{eq:choose_g}
\end{equation}
where $\epsilon$ is an element from $\mathbb{F}_q$ such that $\epsilon \neq 0 $ and $\epsilon^2 \neq 1$ ~(in the example provided in the previous section, $\epsilon \in \mathbb{F}_7$ was set equal to $2$). The latter condition $\epsilon^2 \neq 1$ is needed during the reconstruction process, as was seen in the example. Note that there always exists such a value $\epsilon$ as long as $q \geq 4$.
As in the example, the generator matrix is also designed keeping in mind the need for interference alignment. This property is utilized in the exact-repair of systematic nodes, as described in the next section.
~
\subsubsection{Exact-Repair of Systematic Nodes}
The repair process we associate with the MISER code is simple. The repair of a failed systematic node, say node $\ell$, involves each of the remaining $d=n-1$ nodes passing their $\ell$th symbols (or equivalently, associated global kernels) respectively. In the set of $\alpha$ vectors passed by the parity nodes, the $\ell$th (desired) component is independent, and the remaining (interference) components are aligned. The interference components are cancelled using the vectors passed by the remaining systematic nodes. Independence in the desired component then allows for recovery of the desired message symbols.
The next theorem describes the repair algorithm in greater detail.
\begin{thm}
In the MISER code, a failed systematic node can be exactly repaired by downloading one symbol from each of the remaining $d=n-1$ nodes.
\end{thm}
\begin{IEEEproof}
Consider repair of the systematic node $\ell$. Each of the remaining $(n-1)$ nodes passes its $\ell$th column, so that the replacement node has access to the global kernels represented by the columns shown below:
\[
\left[
\renewcommand{\arraystretch}{1.43}
\begin{tabular}{>{$}c<{$}|>{$}c<{$}|>{$}c<{$}|>{$}c<{$}|>{$}c<{$}|>{$}c<{$}|>{$}c<{$}|>{$}c<{$}|>{$}c<{$}
\underline{e}_{\ell} &\cdots & \underline{0} & \underline{0} &\cdots & \underline{0} &
{\psi}^{(k+1)}_1\underline{e}_{\ell} & \cdots & {\psi}^{(n)}_1\underline{e}_{\ell} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
\underline{0} &\cdots & \underline{e}_{\ell} & \underline{0} &\cdots & \underline{0} &
{\psi}^{(k+1)}_{\ell-1}\underline{e}_{\ell} & \cdots &{\psi}^{(n)}_{\ell-1}\underline{e}_{\ell} \\
\underline{0} &\cdots & \underline{0} & \underline{0} &\cdots & \underline{0} &
\textcolor{blue}{\epsilon \underline{\psi}^{(k+1)} }& \textcolor{blue}{\cdots} & \textcolor{blue}{\epsilon \underline{\psi}^{(n)}} \\
\underline{0} &\cdots & \underline{0} & \underline{e}_{\ell} &\cdots & \underline{0} &
{\psi}^{(k+1)}_{\ell+1}\underline{e}_{\ell} & \cdots & {\psi}^{(n)}_{\ell+1}\underline{e}_{\ell} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
\underline{0} &\cdots & \underline{0} & \underline{0} &\cdots & \underline{e}_{\ell} &
{\psi}^{(k+1)}_{k}\underline{e}_{\ell} & \cdots & {\psi}^{(n)}_{k}\underline{e}_{\ell}\vspace{-.35cm}
\\
\multicolumn{6}{>{$}c<{$}}{\underbrace{\hspace{4.5cm}}}&\multicolumn{3}{>{$}c<{$}}{\underbrace{\hspace{3.45cm}}}\vspace{-.1cm}\\
\multicolumn{6}{c}{From systematic nodes}&\multicolumn{3}{c}{From parity nodes}
\vspace{-1cm}
\end{tabular}
\right],
\vspace{1cm}
\]
where $\underline{e}_{\ell}$ denotes the $\ell$th unit vector of length $\alpha$ and $\underline{0}$ denotes a zero vector of length $\alpha$.
Observe that apart from the desired $\ell$th component, every other component is aligned along the vector $\underline{e}_{\ell}$.
The goal is to show that some $\alpha$ linear combinations of the columns above will give us a matrix whose $\ell$th component equals the $(\alpha \times \alpha)$ identity matrix, and has zeros everywhere else. But this is clear from the interference alignment structure just noted in conjunction with linear independence of the $\alpha$ vectors in the desired component:
\begin{equation} \{ \underline{\psi}^{(k+1)}, ~\cdots~, \underline{\psi}^{(n)} \} . \end{equation}
\end{IEEEproof}
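The repair algorithm in the proof translates directly into a check that can be run for any $n=2k, \ d=n-1$. The sketch below~(Python; the choices $k=4$, $q=11$, $\epsilon=2$ and the message values are illustrative assumptions satisfying $q \geq \alpha+n-k$ and $\epsilon^2 \neq 1$) builds the parity symbols according to eq.~\eqref{eq:choose_g} and verifies exact-repair of every systematic node.
\begin{verbatim}
# Sketch: exact-repair of every systematic node for n = 2k, d = n-1.
# k = 4, q = 11, eps = 2 are illustrative assumptions (q >= 2k, eps^2 != 1).
q, k = 11, 4
alpha, eps = k, 2
Psi = [[pow(x - y, q - 2, q) for y in range(k, 2 * k)] for x in range(k)]
u = [(3 * t + 1) % q for t in range(k * alpha)] # arbitrary B = k*alpha messages

def parity_symbol(m, j):                        # j-th symbol of parity node m
    psi = [Psi[r][m] for r in range(alpha)]
    s = eps * sum(psi[r] * u[j * alpha + r] for r in range(alpha))
    return (s + sum(psi[i] * u[i * alpha + j] for i in range(k) if i != j)) % q

def inverse_mod(M):                             # Gauss-Jordan inverse over F_q
    n = len(M)
    A = [row[:] + [int(i == r) for i in range(n)] for r, row in enumerate(M)]
    for c in range(n):
        piv = next(r for r in range(c, n) if A[r][c])
        A[c], A[piv] = A[piv], A[c]
        inv = pow(A[c][c], q - 2, q)
        A[c] = [(x * inv) % q for x in A[c]]
        for r in range(n):
            if r != c and A[r][c]:
                A[r] = [(A[r][j] - A[r][c] * A[c][j]) % q for j in range(2 * n)]
    return [row[n:] for row in A]

inv_epsPsi = inverse_mod([[(eps * Psi[r][m]) % q for m in range(k)]
                          for r in range(k)])

for ell in range(k):                            # repair systematic node ell
    # parity node m passes its ell-th symbol; systematic node i passes u[i*alpha+ell]
    d = [(parity_symbol(m, ell)
          - sum(Psi[i][m] * u[i * alpha + ell] for i in range(k) if i != ell)) % q
         for m in range(k)]
    rec = [sum(d[m] * inv_epsPsi[m][c] for m in range(k)) % q for c in range(alpha)]
    assert rec == u[ell * alpha: (ell + 1) * alpha]
\end{verbatim}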
Next, we discuss the data reconstruction property.
~
\subsubsection{Data Reconstruction~(MDS Property)} \label{sec:recon}
For reconstruction to be satisfied, a data-collector downloading all symbols stored in any arbitrary $k$ nodes should be able to recover the $B$ message symbols. For this, we need the $(B \times B)$ matrix formed by the columnwise concatenation of any arbitrary collection of $k$ nodal generator matrices to be non-singular. The proof of this property is along the lines of the proof in the example. For completeness, a proof is presented in the appendix.
\begin{thm}\label{thm:MISER_gen_recon}
A data-collector connecting to any $k$ nodes in the MISER code can recover all the $B$ message symbols.
\end{thm}
\begin{IEEEproof}
Please see the Appendix.
\end{IEEEproof}
~
\begin{note}
It is easily verified that both the reconstruction and repair properties continue to hold even when the columns $\underline{g}^{(m)}_{i,j}$, $k+1 \leq m \leq n$, $1 \leq i,j \leq \alpha$, of the parity-node generator matrices are chosen to be:
\begin{equation}
\underline{g}^{(m)}_{i,j}= \left\lbrace \begin{array}{l l} \Sigma_i \underline{\psi}^{(m)} &\text{if }i = j \\
{\psi}^{(m)}_i\underline{e}_j\;\; &\text{if }i\neq j
\end{array} \right.\label{eq:choose_g2}
\end{equation}
where $\Sigma_i = \text{diag}\{\epsilon_{i,1}~,~\ldots~,~\epsilon_{i,\alpha}\}$ is an $(\alpha \times \alpha)$ diagonal matrix satisfying
\begin{enumerate}
\item $\epsilon_{i,j} \neq 0$, $\quad\quad~ \forall ~i,j$
\item $\epsilon_{i,j} \, \epsilon_{j,i} \neq 1$, $\quad \forall ~i \neq j$.
\end{enumerate}
~
\noindent
The first condition suffices to ensure exact-repair of systematic nodes. The two conditions together ensure that the~(MDS) reconstruction property holds as well.
\end{note}
\subsection{The MISER Code for $n \geq 2k,~d=n-1$}
In this section we show how the MISER code construction for $n = 2k,~d=n-1$ can be extended to the more general case $n \geq 2k, \ d=n-1$. From the cut-set bound~\eqref{eq:MSR_beta1_parameters}, for this parameter regime, we get
\begin{equation} k \leq \alpha~. \end{equation}
We begin by first showing how an incremental change in parameters is possible.
~
\begin{thm} \label{thm:smaller_k}
An $[n, \ k, \ d]$ linear, systematic, exact-repair MSR code ${\cal C}$ can be derived from an $[n'=n+1,\ k'=k+1,\ d'=d+1]$ linear, systematic, exact-repair MSR code ${\cal C}'$. Furthermore, if $d'=a k'+b$ in code ${\cal C}'$, then $d=a k+b+(a-1)$ in code ${\cal C}$.
\end{thm}
\begin{IEEEproof}
We begin by noting that
\begin{eqnarray}
n-k & = & n'-k' \\
\alpha'& = & \alpha =d-k+1 \\
B' = k'(d'-k'+1) & = & B + \alpha .
\label{eq:B_difference}
\end{eqnarray}
In essence, we use code shortening \cite{SloaneBook} to derive code ${\cal C}$ from code ${\cal C}'$. Specification of code ${\cal C}$ requires that given a collection of $B=k \alpha$ message symbols, we identify the $\alpha$ code symbols stored in each of the $n$ nodes.
We assume without loss of generality, that in code ${\cal C}$, the nodes are numbered $1$ through $n$, with nodes $1$ through $k$ representing the systematic nodes. We next create an additional node numbered $0$.
The encoding algorithm for code ${\cal C}$ is based on the encoding algorithm for code ${\cal C}'$. Given a collection of $B$ message symbols to be encoded by code ${\cal C}$, we augment this collection by an additional $\alpha$ message symbols all of which are set equal to zero. The first set of $B$ message symbols will be stored in systematic nodes $1$ through $k$ and the string of $\alpha$ zeros will be stored in node $0$. Nodes $0$ through $k$ are then regarded as constituting a set of $k'=(k+1)$ systematic nodes for code ${\cal C}'$. The remaining $(n-k)$ parity nodes are filled using the encoding process associated with code ${\cal C}'$ using the message symbols stored in the $k'$ nodes numbered $0$ through $k$. Note that both codes ${\cal C}$ and ${\cal C}'$ share the same number $(n-k)$ of parity nodes.
To prove the data reconstruction property of ${\cal C}$, it suffices to prove that all the $B$ message symbols can be recovered by connecting to an arbitrary set of $k$ nodes. Given a data-collector connecting to a particular set of $k$ nodes, we examine the corresponding scenario in code ${\cal C}'$ in which the data-collector connects to node $0$ in addition to these $k$ nodes. By the assumed MDS property of code ${\cal C}'$, all the $B$ message symbols along with the $\alpha$ message symbols stored in node $0$ can be decoded using the data stored in these $(k+1)$ nodes. However, since the $\alpha$ symbols stored in node $0$ are all set equal to zero, they clearly play no part in the data-reconstruction process. It follows that the $B$ message symbols can be recovered using the data from the $k$ nodes (leaving aside node $0$), thereby establishing that code ${\cal C}$ possesses the required MDS data-reconstruction property.
A similar argument can be used to establish the repair property of code ${\cal C}$ as well. Finally, we have
\begin{eqnarray*}
d' & = & a k'+b \\
\ \Rightarrow \ d+1 & = & a(k+1) + b \\
\ \Rightarrow \ d & = & a k + b + (a-1). \end{eqnarray*}
\end{IEEEproof}
~
By iterating the procedure in the proof of Theorem~\ref{thm:smaller_k} above $i$ times we obtain:
~
\begin{cor}
\label{cor:MSR_higher_d}
An $[n,\ k, \ d]$ linear, systematic, exact-repair MSR code ${\cal C}$ can be constructed by shortening an $[n'=n+i,\ k'=k+i,\ d'=d+i]$ linear, systematic, exact-repair MSR code ${\cal C}'$. Furthermore, if $d'=a k'+b$ in code ${\cal C}'$, then $d=a k+b+i(a-1)$ in code ${\cal C}$.
\end{cor}
~
\begin{note}
It is shown in the sequel~(Section~\ref{subsec:equivalence}) that every linear, exact-repair MSR code can be made systematic. Thus, Theorem~\ref{thm:smaller_k} and Corollary~\ref{cor:MSR_higher_d} apply to any linear, exact-repair MSR code~(not just systematic). In addition, note that the theorem and the associated corollary hold for general values of $[n, \ k, \ d]$ and are not restricted to the case of $d=n-1$. Furthermore, a little thought will show that they apply to linear codes ${\cal C}'$ that perform functional repair as well.
\end{note}
~
The next corollary follows from Corollary~\ref{cor:MSR_higher_d} and the code-shortening method employed in Theorem~\ref{thm:smaller_k}.
\begin{cor} \label{cor:MISER_code-shortening} The MISER code for $n \geq 2k, \ d=n-1$ can be obtained by shortening the MISER code for $n'=n+(n-2k), \ k'=k + (n-2k), \ d'=d+(n-2k)=n'-1$.
\end{cor}
~
\begin{figure}
\centering
\includegraphics[trim=0in 0.7in 0in 0in, clip=true,width=\textwidth]{fig_shortening_example}
\caption{\small
Construction of a $[n=5, \; k=2, \; d=4]$ MISER code from a $[n'=6, \; k'=3, \; d'=5]$ MISER code. Shortening the code with respect to node zero is equivalent to removing systematic node $0$ as well as the top component of every nodal generator matrix. The resulting $[n=5, \; k=2, \; d=4]$ MISER code has $\{u_4,\ldots,u_9\}$ as its $B=k\alpha=6$ message symbols.}
\label{fig:MISER_shorten}
\end{figure}
\textit{Example:} The code-shortening procedure represented by Theorem~\ref{thm:smaller_k} is illustrated by the example shown in Fig.~\ref{fig:MISER_shorten}. Here it is shown how a MISER code having code parameters $[n'=6, \; k'=3, \; d'=5]$, $\beta'=1$ and $(\alpha'=d'-k'+1=3, B'=\alpha^{'} k'=9)$ yields, upon shortening with respect to the message symbols in node $0$, a MISER code having code parameters $[n=5, \; k=2, \; d=4]$, $\beta=1$ and $(\alpha=d-k+1=3, B=\alpha k=6)$.
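The mechanics of the shortening are easily simulated. The sketch below~(Python, arithmetic modulo $7$; the message values are arbitrary) pins the $\alpha=3$ message symbols of the node playing the role of node $0$ to zero and drops the top component of every parity-node generator matrix, confirming that the stored parity symbols are unchanged and involve only $u_4,\ldots,u_9$, exactly as depicted in Fig.~\ref{fig:MISER_shorten}.
\begin{verbatim}
# Sketch of the shortening of Fig. (MISER_shorten): pin (u1,u2,u3) = 0, drop
# systematic node 0 and the top component of each generator matrix; the
# surviving [5,2,4] code encodes only u4..u9. Message values are arbitrary.
q, k, alpha, eps = 7, 3, 3, 2
Psi = [[5, 4, 1], [2, 5, 4], [3, 2, 5]]

def parity(m):                                  # 9x3 generator from eq. (choose_g)
    psi = [Psi[r][m] for r in range(alpha)]
    G = [[0] * alpha for _ in range(k * alpha)]
    for i in range(k):
        for j in range(alpha):
            if i == j:
                for r in range(alpha):
                    G[i * alpha + r][j] = (eps * psi[r]) % q
            else:
                G[i * alpha + j][j] = psi[i]
    return G

u_short = [5, 1, 4, 3, 2, 6]                    # the B = 6 messages u4, ..., u9
u_full = [0, 0, 0] + u_short                    # node 0 pinned to zero

for m in range(3):
    G = parity(m)
    G_short = G[alpha:]                         # top component removed
    full = [sum(u_full[r] * G[r][j] for r in range(9)) % q for j in range(3)]
    short = [sum(u_short[r] * G_short[r][j] for r in range(6)) % q
             for j in range(3)]
    assert full == short                        # parity symbols are unchanged
\end{verbatim}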
\subsection{Extension to $ 2k-1 \leq d \leq n-1$ When The Set of Helper Nodes Includes All Remaining Systematic Nodes} \label{sec:connect_to_all_systematic}
In this section, we present a simple extension of the MISER code to the case $2k-1 \leq d \leq n-1$, under the additional constraint, however, that the set of $d$ helper nodes assisting a failed systematic node include the remaining $k-1$ systematic nodes. The theorem below shows that the code provided in Section~\ref{sec:MISER_gen} for $n=2k, \ d=n-1$ supports the case $d=2k-1, \ d \leq n-1$ as well, as long as this additional requirement is met. From here on, extension to the general case $2k-1 \leq d \leq n-1$ is straightforward via the code-shortening result in Theorem~\ref{thm:smaller_k}. Note that, unlike in the previous instance, the $(\alpha \times (n-k))$ Cauchy matrix used in the construction for $d < n-1$ is a rectangular matrix.
\begin{thm}
For $d=2k-1, \ d \leq n-1$, the code defined by the nodal generator matrices in equations~(\ref{eq:explicitSystematicGenMxs}) and~(\ref{eq:choose_g}),
achieves reconstruction and optimal, exact-repair of systematic nodes, provided the replacement node connects to
all the remaining systematic nodes.
\end{thm}
\begin{IEEEproof}
\textit{Reconstruction:} The reconstruction property follows directly from the reconstruction property in the case of the original code.
\textit{Exact-repair of systematic nodes:} The replacement node connects to the $(k-1)$ remaining systematic nodes and an arbitrary set of $\alpha$ parity nodes~(since meeting the cut-set bound requires $d=k-1 + \alpha$). Consider a distributed storage system having only these $(k-1+\alpha)$ nodes, along with the failed node, as its $n$ nodes. Such a system has $d=n-1, \ d=2k-1$ and is identical to the system described in Section~\ref{sec:MISER_gen}. Hence exact-repair of systematic nodes meeting the cut-set bound is guaranteed.
\end{IEEEproof}
\subsection{Analysis of the MISER Code}
\paragraph{Field Size Required} The constraint on the field size comes from the construction of the $\left(\alpha \times (n-k)\right)$ matrix $\Psi$, all of whose sub-matrices must be of full rank. For our constructions, since $\Psi$ is chosen to be a Cauchy matrix, any field of size $(n+d-2k+1)$ or higher suffices. For specific parameters, the matrix $\Psi$ can be handcrafted to yield smaller field sizes.
\paragraph{Complexity of Exact-Repair of Systematic Nodes} Each node participating in the exact-repair of systematic node $i$, simply passes its $i$th symbol, without any processing. The replacement node has to multiply the inverse of an~($\alpha \times \alpha$) Cauchy matrix with an $\alpha$ length vector and then perform $(k-1)$ subtractions for interference cancellation.
\paragraph{Complexity of Reconstruction} The complexity analysis is provided for the case $n=2k, \ d=n-1$; the other cases follow along similar lines. A data-collector connecting to the $k$ systematic nodes can recover all the data without any additional processing. A data-collector connecting to $k$ arbitrary nodes has to (in the worst case) multiply the inverse of a $(k \times k)$ Cauchy matrix with $k$ vectors, along with operations having a lower order of complexity.
\subsection{Relation to Subsequent Work~\cite{Changho}}
Two regenerating codes are equivalent if one code can be transformed
into the other via a non-singular symbol remapping~(this definition
is formalized in Section~\ref{subsec:equivalence}). The capabilities
and properties of equivalent codes are thus identical in every way.
The initial presentation of the MISER code in~\cite{ourITW}~(the
name `MISER' was coined only subsequently) provided the construction
of the code along with two~(of three) parts of what may be termed as
a complete decoding algorithm, namely: (a) reconstruction by a data
collector, and (b) exact-repair of failed systematic nodes. It was
not known whether the third part of decoding, i.e., repair of a
failed parity node could be carried out by the MISER code.
Following the initial presentation of the MISER code, the authors of \cite{Changho} show how a \textit{common eigenvector}
approach can be used to establish that exact-repair of the parity nodes is also possible under the MISER code construction\footnote{In~\cite{Changho}, a class of regenerating codes is presented that has the same parameters as does the MISER code. This class of codes can, however, be shown to be equivalent to the MISER code (and hence to each other) under the equivalence notion presented in Section~\ref{subsec:equivalence}.}.
\section{Necessity of Interference Alignment and Non-Existence of Scalar, Linear, Exact-repair MSR Codes for $d<2k-3$}\label{sec:non_exist_alpha_3}
In Section~\ref{sec:gen_explicit}, explicit, exact-repair MSR codes are constructed for the parameter regimes $(d \geq 2k-1, \ d=n-1)$ performing reconstruction and exact-repair of systematic nodes. These constructions are based on the concept of interference alignment. Furthermore, these codes have a desirable property of having the smallest possible value for the parameter $\beta$, i.e., $\beta=1$.
As previously discussed in Section~\ref{subsec:net_cod}, the problem of constructing exact-repair MSR codes is (in part) a non-multicast network coding problem. In particular, for the case of $\beta=1$, it reduces to a \textit{scalar network coding} problem. Upon increase in the value of $\beta$, the capacity of every data pipe is increased by a factor of $\beta$, thereby transforming it into a \textit{vector network coding} problem. Thus, $\beta=1$ corresponds to the absence of symbol extension, which, in general, reduces the complexity of system implementation. Furthermore, as noted in Section~\ref{subsec:beta_1}, an MSR code for every larger integer value of $\beta$ can be obtained by concatenating multiple copies of a $\beta=1$ code. For this reason, the case of $\beta=1$ is of special interest and a large section of the literature in the field of regenerating codes~(\cite{WuDimISIT,ourAllerton,ourITW,DimSearch,WuArxiv,ourInterior_pts,Changho,ourProductMatrix,puyol}) is devoted to this case.
In the present section, we show that for $d<2k-3$, there exist no linear, exact-repair MSR codes achieving the cut-set bound on the repair bandwidth in the absence of symbol extension.
In fact, we show that the cut-set bound cannot be achieved even if exact-repair of only the systematic nodes is desired. We first assume the existence of such a linear, exact-repair MSR code $\mathcal{C}$ satisfying: \begin{equation} (\beta=1,\ B=k\alpha,\ \alpha=d-k+1) \end{equation} and \begin{equation} (d < 2k-3 \Rightarrow \alpha < k-2).\end{equation}
Subsequently, we derive properties that this code must necessarily satisfy. Many of these properties hold for a larger regime of parameters and are therefore of independent interest. In particular, we prove that \textit{interference alignment}, in the form described in Section~\ref{sec:intf_align}, is \textit{necessary}. We will show that when $d <2k-3$ the system becomes over-constrained, leading to a contradiction.
We begin with some additional notation.
~
\begin{note}
In recent work, subsequent to the original submission of this paper, it is shown in \cite{Jafar_arxiv,Changho_arxiv_intfalign} that the MSR point under exact-repair can be achieved asymptotically for all $[n, ~k, ~d]$ via an infinite symbol extension, i.e., in the limit as $\beta \rightarrow \infty$. This is established by presenting a scheme under which $\lim_{\beta \rightarrow \infty} \frac{\gamma}{d \beta} = 1$. Note that in the asymptotic setup, since both $\alpha, B$ are multiples of $\beta$, these two parameters tend to infinity as well.
\end{note}
\subsection{Additional Notation}\label{sec:subspaceview}
We introduce some additional notation for the vectors passed by the helper nodes to the replacement node. For $\ell,m \in \{1,\ldots,n\},\ \ell \neq m$, let $\underline{\gamma}^{(m,\ell)}$ denote the vector passed by node $m$ for the repair of node $\ell$. In keeping with our component notation, we will use $\underline{\gamma}^{(m,\ell)}_i$ to denote the $i$th component, $1 \leq i \leq k$, of this vector.
Recall that a set of vectors is \textit{aligned} when the vector space spanned by them has dimension no more than one. Given a matrix $A$, we denote its column-space by $\text{colspace}[A]$ and its~(right) null space by $\text{nullspace}[A]$. Clearly, $\underline{\gamma}^{(m,\ell)} \in \text{colspace}\left[\mathbf{G}^{(m)}\right]$.
\subsection{Equivalent Codes}\label{subsec:equivalence}
Two codes $\mathcal{C}$ and $\mathcal{C}'$ are equivalent if
$\mathcal{C}'$ can be represented in
terms of $\mathcal{C}$ by \begin{enumerate}[i)]
\item a change of basis of the vector
space generated by the message symbols~(i.e., a remapping of the message symbols), and
\item a change of basis
of the column-spaces of the nodal generator matrices~(i.e., a remapping of the symbols stored within a node). \end{enumerate}
A more rigorous definition is as follows.
~
\begin{defn}[Equivalent Codes] Two codes $\mathcal{C}$ and $\mathcal{C}'$ are equivalent if
\begin{eqnarray} \mathbf{G}'^{(m)} &=& W \;\mathbf{G}^{(m)}\; U^{(m)} \\
\underline{\gamma}'^{(m,\ell)} &=& W \;\underline{\gamma}^{(m,\ell)} \end{eqnarray}
$\forall~\ell,m \in \{1,\ldots,n\},\;\ell\neq m$, for some
$(B\times B)$ non-singular matrix $W$, and some $(\alpha \times \alpha)$
non-singular matrix $U^{(m)}$.
\end{defn}
~
Since the only operation required to transform a code into an equivalent one is a symbol remapping, the capabilities and properties of equivalent codes are identical in every respect. Hence, in the sequel, we will not distinguish between two equivalent codes, and the notion of code equivalence will play an important role in the present section. Here, properties of a code that is equivalent to a given code are first derived, and the equivalence then guarantees that these properties hold for the given code as well. The next theorem uses the notion of equivalent codes to show that every linear exact-repair MSR code can be made systematic.
~
\begin{thm}
Every linear, exact-repair MSR code can be made systematic via a non-singular linear transformation of the rows of the generator matrix, which simply corresponds to a re-mapping of the message symbols. Furthermore, the choice of the $k$ nodes that are to be made systematic can be arbitrary.
\end{thm}
\begin{IEEEproof}
Let the generator matrix of the given linear, exact-repair MSR code $\mathcal{C}$ be $\mathbb{G}$. We will derive an equivalent code $\mathcal{C}'$ that has its first $k$ nodes in systematic form. The reconstruction (MDS property) of code $\mathcal{C}$ implies that the $(B \times B)$ sub-matrix of $\mathbb{G}$, \[ \left[ \mathbf{G}^{(1)}~\mathbf{G}^{(2)}~\cdots~\mathbf{G}^{(k)}\right]\] is non-singular. Define an equivalent code $\mathcal{C}'$ having its generator matrix $\mathbb{G}'$ as:
\begin{equation} \mathbb{G}' = \left[ \mathbf{G}^{(1)}~\mathbf{G}^{(2)}~\cdots~\mathbf{G}^{(k)}\right]^{-1} ~ \mathbb{G}. \label{eq:convert_to_systematic}\end{equation}
Clearly, the $B$ left-most columns of $\mathbb{G}'$ form a $B \times B$ identity matrix, thus making the equivalent code $\mathcal{C}'$ systematic. As the repair is exact, the code will retain the systematic form following any number of failures and repairs.
The transformation in equation~\eqref{eq:convert_to_systematic} can involve any arbitrary set of $k$ nodes in $\mathcal{C}$, thus proving the second part of the theorem.
\end{IEEEproof}
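~

For concreteness, the following Python sketch (a minimal illustration over the reals rather than a finite field; all parameter values and variable names are our own assumptions, not part of the code construction) carries out the transformation in~\eqref{eq:convert_to_systematic} and verifies that the resulting generator matrix is systematic.
\begin{verbatim}
import numpy as np

# Illustrative (hypothetical) parameters: n nodes, k systematic nodes,
# alpha symbols per node, B = k * alpha message symbols.
n, k, alpha = 7, 5, 2
B = k * alpha

rng = np.random.default_rng(0)
# A random (B x n*alpha) generator matrix; over the reals, its first
# B columns are non-singular with probability one.
GG = rng.standard_normal((B, n * alpha))

# Left-multiply by the inverse of the block formed by the generator
# matrices of the first k nodes, as in the equation above.
W = np.linalg.inv(GG[:, :B])
GG_sys = W @ GG

# The first k (block) columns now form a (B x B) identity matrix,
# i.e., the equivalent code is systematic.
assert np.allclose(GG_sys[:, :B], np.eye(B))
\end{verbatim}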
~
The theorem above permits us to restrict our attention to the class of systematic codes, and assume the first $k$ nodes~(i.e., nodes $1,\ldots,k$) to be systematic. Recall that, for systematic node $\ell~(\in \{1,\ldots,k\})$,
\begin{eqnarray} G^{(\ell)}_i = \left \lbrace
\begin{array}{ll}
I_{\alpha} &\text{if } i=\ell \\
0_\alpha &\text{if } i\neq \ell
\end{array} \right. \quad \forall i \in \{1,\ldots,k \}.
\end{eqnarray}
Thus, systematic node $\ell$ stores the $\alpha$ symbols in $\underline{u}_\ell$.
\subsection{Approach}
An exact-repair MSR code should be capable of performing exact-repair of any failed node by connecting to any arbitrary subset of $d$ of the remaining $(n-1)$ nodes, while meeting the cut-set bound on repair bandwidth. This requires a number of repair scenarios to be satisfied. Our proof of non-existence considers a less restrictive setting, in which exact-repair of only the systematic nodes is to be satisfied. Further, we consider only the situation where a failed systematic node is to be repaired by downloading data from a specific set of $d$ nodes, comprised of the $(k-1)$ remaining systematic nodes, and some collection of $\alpha$ parity nodes. Thus, for the remainder of this section, we will restrict our attention to a subset of the $n$ nodes in the distributed storage network, of size $(k+\alpha)$ nodes, namely, the set of $k$ systematic nodes and the first $\alpha$ parity nodes. Without loss of generality, within this subset, we will assume that nodes $1$ through $k$ are the systematic nodes and that nodes $(k+1)$ through $(k+\alpha)$ are the $\alpha$ parity nodes. Then with this notation, upon failure of systematic node $\ell$, $1 \leq \ell \leq k$, the replacement node is assumed to connect to nodes $\{1,\ldots,k+\alpha\}\backslash\{\ell\}$.
The generator matrix $\mathbb{G}$ of the entire code can be written in a block-matrix form as shown in Fig.~\ref{fig:non_ach_1}. In the figure, each~(block) column represents a node and each~(block) row, a component. The first $k$ and the remaining $\alpha$ columns contain respectively, the generator matrices of the $k$ systematic nodes and the $\alpha$ parity nodes.
\begin{figure}[h]
\centering
\includegraphics[trim=0.5in 2.7in 1.5in 0.4in, clip=true, width=0.5\textwidth]{fig_non_ach_1.pdf}
\caption{\small The generator matrix $\mathbb{G}$ of the entire code. First $k$~(block) columns are associated with the systematic nodes $1$ to $k$ and the next $\alpha$~(block) columns to the parity nodes $(k+1)$ to $(k+\alpha)$. Empty blocks denote zero matrices.} \label{fig:non_ach_1}
\end{figure}
We now outline the steps involved in proving the non-existence
result. Along the way, we will uncover some interesting and
insightful properties possessed by linear, exact-repair MSR codes. \begin{enumerate}
\item We begin by establishing that in order
to satisfy the data reconstruction property, each sub-matrix in the parity-node section
of the generator matrix~(see Fig.~\ref{fig:non_ach_1}) must be non-singular. \item Next, we show
that the vectors passed by the $\alpha$ parity nodes for the repair
of any systematic node must necessarily satisfy two properties:
\begin{itemize} \item alignment of the interference components, and \item linear independence of the desired
component. \end{itemize}
\item We then prove that in the collection of $k$ vectors passed by a
parity node for the respective repair of the $k$ systematic nodes, every $\alpha$-sized subset
must be linearly independent. This is
a key step that links the vectors stored in a node to those passed
by it, and enables us to replace the $\alpha$ columns of the
generator matrix of a parity node with the vectors it passes to aid in the repair of some
subset of $\alpha$ systematic nodes. We will assume that these $\alpha$ systematic nodes are in fact, nodes $1$ through $\alpha$.
\item Finally, we will show that
the necessity of satisfying multiple interference-alignment
conditions simultaneously, turns out to be over-constraining, forcing alignment in the desired components as well. This leads to a contradiction, thereby proving the non-existence result.
\end{enumerate}
\subsection{Deduced Properties}
\begin{pty}[Non-singularity of the Component Submatrices]\label{pty:nec_recon}
Each of the component submatrices $\{ G^{(m)}_i \mid k+1 \leq m \leq k+ \alpha, \ \ 1 \leq i \leq k \}$ is non-singular.
\end{pty}
\begin{IEEEproof}
Consider a data-collector connecting to systematic nodes $2$ to $k$
and parity node $(k+1)$. The data-collector thus has access to the
block matrix shown in Fig.~\ref{fig:non_ach_2}.
\begin{figure}[h]
\centering
\includegraphics[trim=0.3in 3.5in 3in 0.3in, clip=true, width=0.45\textwidth]{fig_non_ach_2}
\caption{\small The block matrix accessed by a data-collector connecting
to systematic nodes $2$ through $k$ and parity node $(k+1)$.}
\label{fig:non_ach_2}
\end{figure}
For the data-collector to recover all the data, this block matrix
must be non-singular, forcing $G_1^{(k+1)}$ to be non-singular. A similar argument shows that the same must hold in the case of each of the other component submatrices.
\end{IEEEproof}
~
\begin{cor}\label{cor:component_colspace}
Let $H=[H_1^t \; H_2^t \; \cdots \; H_k^t]^t$ be a $(k \alpha \times \ell)$ matrix, each of whose $\ell \geq 1$ columns is a linear combination of the columns of $\mathbf{G}^{(m)}$ for some $m\in\{k+1,\ldots,k+\alpha\}$, and having $k$ components $\{H_i\}$ of size $(\alpha \times \ell)$. Thus
\[
\text{colspace}[H] \subseteq
\text{colspace}[\mathbf{G}^{(m)}] .
\]
Then for every $i \in
\{1,\ldots,k\}$, we have \begin{equation}
\text{nullspace}[H_i] = \text{nullspace}[H] . \end{equation}
\end{cor}
\begin{IEEEproof}
Clearly, \begin{equation} \text{nullspace}[H] \subseteq \text{nullspace}[H_i]. \end{equation}
Let $H = \mathbf{G}^{(m)} A$, for some $(\alpha \times \ell)$ matrix A. Then
\begin{equation} H_i = G_i^{(m)} A. \end{equation}
For a vector $\underline{v} \in \text{nullspace}[H_i]$,
\begin{equation} H_i \; \underline{v} = G_i^{(m)} A \; \underline{v} =\underline{0}. \end{equation}
However, since $G_i^{(m)}$ is of full rank~(Property~\ref{pty:nec_recon}) it follows that
\begin{eqnarray} A \; \underline{v} &=&\underline{0} \\
\Rightarrow \ \mathbf{G}^{(m)} A \; \underline{v} &=& H \underline{v} = \underline{0} \\
\Rightarrow \ \text{nullspace}[H_i] &\subseteq& \text{nullspace}[H]. \end{eqnarray}
\end{IEEEproof}
The corollary says, in essence, that any linear dependence relation that holds amongst the columns of any of the components $H_i$ also extends to the columns of the entire matrix $H$ itself.
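The following numerical sketch (hypothetical parameters, real-valued arithmetic) illustrates the corollary: since $\text{nullspace}[H]$ is always contained in $\text{nullspace}[H_i]$, equality of the two subspaces of $\mathbb{R}^{\ell}$ is equivalent to equality of ranks, which is what the final assertion checks.
\begin{verbatim}
import numpy as np

k, alpha, ell = 4, 3, 5
rng = np.random.default_rng(1)

# Stack k random (alpha x alpha) components; each is non-singular with
# probability one, mimicking Property 1 for a parity-node generator matrix.
G = np.vstack([rng.standard_normal((alpha, alpha)) for _ in range(k)])

A = rng.standard_normal((alpha, ell))  # columns of H lie in colspace[G]
H = G @ A
components = np.split(H, k, axis=0)    # H_1, ..., H_k

for H_i in components:
    assert np.linalg.matrix_rank(H_i) == np.linalg.matrix_rank(H)
\end{verbatim}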
~
We next establish properties that are mandated by the repair capabilities of exact regenerating codes. Consider the situation where a failed systematic node, say node $\ell$, \ $1 \leq \ell \leq k$, is repaired using one vector~(as $\beta=1$) from each of the remaining $k-1+\alpha$ nodes.
~
\begin{defn} When considering repair of systematic node $\ell$, $1 \leq \ell \leq k$, the $\ell$th component $\{ \underline{\gamma}^{(m,\ell)}_\ell\}$ of each of the $\alpha$ vectors $\{ \underline{\gamma}^{(m,\ell)} \mid k+1 \leq m \leq k+ \alpha \}$ passed by the $\alpha$ parity nodes will be termed as the {\em desired component}. The remaining
components $\{ \underline{\gamma}^{(m,\ell)}_i \mid i \neq \ell \}$ will be termed as {\em interference components}.
\end{defn}
~
The next property highlights the necessity of interference alignment in any exact-repair MSR code. Clearly, the vectors passed by the remaining $(k-1)$ systematic nodes have $\ell^{\text{th}}$ component equal to $\underline{0}$, and thus the onus of recovering the `desired' $\ell^{\text{th}}$ component of replacement node $\ell$ falls on the $\alpha$ parity nodes. However, the vectors passed by the parity nodes have non-zero `interference' components that can be nulled out only by the vectors passed by the systematic nodes. This forces an alignment in these interference components, and this is shown more formally below.
~
\begin{pty}[Necessity of Interference Alignment]\label{pty:IA_necessary} In the vectors $\{ \underline{\gamma}^{(m,\ell)} \mid k+1 \leq m \leq k+\alpha \}$ passed by the $\alpha$ parity nodes for the repair of any systematic node (say, node $\ell$), for each $i \in \{1,\ldots,k\}$, $i \neq \ell$, the set of $\alpha$ interference components $\{ \underline{\gamma}^{(m,\ell)}_i \mid k+1 \leq m \leq k+\alpha \}$ must necessarily be \textit{aligned}, and the desired components $\{ \underline{\gamma}^{(m,\ell)}_\ell \}$ must necessarily be linearly independent.
\end{pty}
\begin{IEEEproof}
We assume without loss of generality that $\ell=1$, i.e., we consider repair of systematic node $1$. The matrix depicted in Fig.~\ref{fig:non_ach_3} consists of the $\alpha$ vectors that need to be recovered at the replacement node, alongside the $d$ vectors passed by the $d$ helper nodes $2,\ldots,k+\alpha$. This matrix may be decomposed into three sub-matrices, namely: a $(B \times \alpha)$ matrix $\Gamma_1$, comprising the $\alpha$ columns to be recovered at the replacement node; a $(B \times (k-1))$ matrix $\Gamma_2$, comprising the $(k-1)$ vectors passed by the remaining systematic nodes; and a $(B \times \alpha)$ matrix $\Gamma_3$, comprising the $\alpha$ vectors passed by the parity nodes.
\begin{figure}[h]
\centering
\includegraphics[trim=0.4in 2.2in 0.4in 0.4in, clip=true,width=0.6\textwidth]{fig_non_ach_3}
\caption{\small Matrix depicting the $\alpha$~(global-kernel) vectors to be recovered by replacement node 1~(represented by the matrix $\Gamma_1$), alongside the $d$ vectors passed by the helper nodes $2, \ldots,k+\alpha$~(represented by $[ \Gamma_2 \mid \Gamma_3]$).
} \label{fig:non_ach_3}
\end{figure}
The vectors $\{\underline{\gamma}_1^{(k+1,1)},~ \ldots~ ,\underline{\gamma}_1^{(k+\alpha,1)}\}$ appearing in the first row of the matrix constitute the desired components; for every $i \in \{2,\ldots,k\}$, the set of vectors $\{\underline{\gamma}_i^{(k+1,1)},~ \ldots~ ,\underline{\gamma}_i^{(k+\alpha,1)}\}$ constitutes interference components. An exact-repair of node $1$ is equivalent to the recovery of $\Gamma_1$ from the columns of $\Gamma_2$ and $\Gamma_3$ through a linear transformation, and hence it must be that \begin{equation} \text{colspace} [\Gamma_1] \ \subseteq \ \text{colspace}\left[ \Gamma_2 | \Gamma_3 \right], \label{eq:IA_rk_arg_1}\end{equation} where the `$|$' operator denotes concatenation. When we restrict attention to the first components of the matrices, we see that we must have \begin{equation} \text{colspace}[I_{\alpha}] \ \subseteq \ \text{colspace} \left[\underline{\gamma}_1^{(k+1,1)}~ \ldots~ \underline{\gamma}_1^{(k+\alpha,1)}\right], \end{equation} thereby forcing the desired components $\{\underline{\gamma}_1^{(k+1,1)},~ \ldots~ ,\underline{\gamma}_1^{(k+\alpha,1)}\}$ to be linearly independent. \vspace{5pt}
Further, from \eqref{eq:IA_rk_arg_1} it follows that \begin{equation} \text{colspace} \left[\Gamma_1 | \Gamma_2\right] \ \subseteq \ \text{colspace}\left[ \Gamma_2 | \Gamma_3 \right]. \label{eq:IA_rk_arg_2}\end{equation} Clearly, $\text{rank}[\Gamma_1] = \alpha$, and from Fig.~\ref{fig:non_ach_3} it can be inferred that \begin{equation} \text{rank}[\Gamma_1 | \Gamma_2] \ = \ \alpha + \text{rank}[\Gamma_2]~. \label{eq:IA_rk_arg_3}\end{equation} Moreover, as the first component in $\Gamma_3$ is of rank $\alpha$, \begin{eqnarray} \text{rank}[\Gamma_2 | \Gamma_3] \ &\leq& \ \text{rank}[\Gamma_2] + \alpha \label{eq:IA_rk_arg_4}\\ &=&\ \text{rank}[\Gamma_1 | \Gamma_2]. \label{eq:IA_rk_arg_5}\end{eqnarray} It follows from equation~\eqref{eq:IA_rk_arg_2} and~\eqref{eq:IA_rk_arg_5}, that \begin{equation} \text{colspace} \left[\Gamma_1 | \Gamma_2\right] \ = \ \text{colspace}\left[ \Gamma_2 | \Gamma_3 \right], \label{eq:IA_rk_arg_6}\end{equation}
and this forces the interference components in $\Gamma_3$ to be aligned. Thus, for $i\in\{2,\dots,k\}$,
\begin{equation} \text{colspace}\left[\underline{\gamma}_i^{(k+1,1)}~\cdots~\underline{\gamma}_i^{(k+\alpha,1)}\right] \subseteq \text{colspace}\left[\underline{\gamma}_i^{(i,1)}\right]. \end{equation}
\end{IEEEproof}
~
\begin{note} Properties~\ref{pty:nec_recon} and~\ref{pty:IA_necessary} also hold for all $\beta \geq 1$, in which case, each of the $\alpha$ helper parity nodes passes a $\beta$-dimensional subspace, and each interference component needs to be confined to a $\beta$-dimensional subspace. Furthermore, the two properties also hold for all $[n,~k,~d]$ exact-repair MSR codes, when $(k-1)$ of the $d$ helper nodes along with the replacement node are viewed as systematic.
\end{note}
\begin{figure}[t]
\centering
\includegraphics[trim=0in 1.5in 0.2in 0in,clip=true,width=0.7\textwidth]{fig_fromTo.pdf}
\caption{\small Table indicating the vectors passed by the $\alpha$ parity
nodes to repair the first $\alpha$ systematic
nodes.}\label{fig:fig_fromTo}
\end{figure}
~
The next property links the vectors stored in a parity node to the vectors it passes to aid in the repair of any set of $\alpha$ systematic nodes.
~
\begin{pty}\label{pty:alpha_ind}
For $d < 2k-1$, the vectors passed by a parity node to repair any arbitrary set of $\alpha$ systematic nodes are linearly independent, i.e., for $m \in \{k+1,\ldots,k+\alpha\}$, it must be that every subset of size $\alpha$ drawn from the set of vectors \[\left\lbrace\underline{\gamma}^{(m,1)},\ldots,\underline{\gamma}^{(m,k)}\right\rbrace \] is linearly independent. (Thus the matrix $[ \underline{\gamma}^{(m,1)}~\ldots~\underline{\gamma}^{(m,k)} ]$ may be viewed as the generator matrix of a $[k,\alpha]$-MDS code.)
\end{pty}
\begin{IEEEproof}
Consider Fig.~\ref{fig:fig_fromTo} which depicts the vectors passed
by parity nodes $\{k+1,\ldots,k+\alpha\}$ to repair systematic nodes $\{1,\ldots,\alpha\}$. From Property~\ref{pty:IA_necessary} one can infer that in column $i \in \{1,\ldots,\alpha\}$, the $i^{\text{th}}$~(desired) components of the $\alpha$ vectors are independent, and the $j^{\text{th}}$~(interference)
components for all $j \in \{1,\ldots,k\}\backslash\{i\}$ are
aligned. In particular, for all $j\in\{\alpha+1,\ldots,k\}$, the
$j^{\text{th}}$ components of each column are aligned. Note that as $d<2k-1$
we have $k>\alpha$, which guarantees that the set $\{\alpha+1,\ldots,k\}$ is non-empty and hence the presence of an $(\alpha+1)^{\text{th}}$ component.
We will prove Property~\ref{pty:alpha_ind} by contradiction. Suppose, for example, we were to have
\begin{equation} \underline{\gamma}^{(k+1,1)} \in \text{colspace}\left[\underline{\gamma}^{(k+1,2)}~\cdots~\underline{\gamma}^{(k+1,\alpha)}\right],
\end{equation}
which is an example situation under which the $\alpha$ vectors passed by parity node $(k+1)$ for the respective repair of the first $\alpha$ systematic nodes would fail to be linearly independent. Restricting our attention to component $(\alpha+1)$, we get
\begin{equation} \underline{\gamma}^{(k+1,1)}_{\alpha+1} \in
\text{colspace}\left[\underline{\gamma}^{(k+1,2)}_{\alpha+1}~\cdots~\underline{\gamma}^{(k+1,\alpha)}_{\alpha+1}\right]. \label{eq:non_ach_pty_3_1}\end{equation}
Now, alignment of component $(\alpha+1)$ along each column forces the same dependence in all other parity nodes, i.e.,
\begin{equation} \underline{\gamma}^{(m,1)}_{\alpha+1} \in
\text{colspace}\left[\underline{\gamma}^{(m,2)}_{\alpha+1}~\cdots~\underline{\gamma}^{(m,\alpha)}_{\alpha+1}\right]
\quad \forall m \in \{k+2,\ldots,k+\alpha\} \label{eq:non_ach_pty_3_2}.\end{equation}
Noting that a vector passed by a helper node lies in the column-space of its generator matrix, we now invoke Corollary~\ref{cor:component_colspace}:
\begin{equation} \text{nullspace}\left[\underline{\gamma}^{(m,1)}_{\alpha+1}~\cdots~\underline{\gamma}^{(m,\alpha)}_{\alpha+1}\right] = \text{nullspace}\left[\underline{\gamma}^{(m,1)}~\cdots~\underline{\gamma}^{(m,\alpha)}\right] \quad \forall m \in \{k+1,\ldots,k+\alpha\} \end{equation}
This, along with equations~\eqref{eq:non_ach_pty_3_1} and \eqref{eq:non_ach_pty_3_2}, implies
\begin{equation} \underline{\gamma}^{(m,1)} \in
\text{colspace}\left[\underline{\gamma}^{(m,2)}~\cdots~\underline{\gamma}^{(m,\alpha)}\right]
\quad \forall m\in \{k+1,\ldots,k+\alpha\}. \end{equation}
Thus the dependence in the vectors passed by one parity node carries over to every other parity node.
In particular, we have
\begin{eqnarray} \underline{\gamma}^{(m,1)}_1 &\in&
\text{colspace}\left[\underline{\gamma}^{(m,2)}_1~\cdots~\underline{\gamma}^{(m,\alpha)}_1\right]
\quad \forall m \in \{k+1,\ldots,k+\alpha\}.
\label{eq:fromto_onecolspothers}\end{eqnarray} However, from
Property~\ref{pty:IA_necessary}, we know that the vectors passed to
systematic nodes $2$ to $\alpha$ have their first components
aligned, i.e., \begin{equation} \text{rank}\left[
\underline{\gamma}^{(k+1,\ell)}_1~\ldots~\underline{\gamma}^{(k+\alpha,\ell)}_1\right]
\leq 1 \qquad \forall \ell \in
\{2,\ldots,\alpha\}.\label{eq:fromto_confined}\end{equation}
Aggregating all instantiations~(w.r.t. $m$) of equation~\eqref{eq:fromto_onecolspothers}, the desired component is confined to:
\begin{eqnarray}
\text{colspace}\left[\left\lbrace \underline{\gamma}^{(m,1)}_1\right\rbrace_{m=k+1}^{k+\alpha}\right] &\subseteq& \text{colspace}\left[\left\lbrace \underline{\gamma}^{(m,\ell)}_1\right\rbrace_{(m,~\ell)=(k+1,~2)}^{(k+\alpha,~\alpha)} \right]\\
\Rightarrow \text{rank}\left[\left\lbrace \underline{\gamma}^{(m,1)}_1\right\rbrace_{m=k+1}^{k+\alpha}\right] &\leq& \text{rank}\left[\left\lbrace \underline{\gamma}^{(m,\ell)}_1\right\rbrace_{(m,~\ell)=(k+1,~2)}^{(k+\alpha,~\alpha)} \right]\\
&\leq& \sum_{\ell=2}^{\alpha}\text{rank}\left[\left\lbrace \underline{\gamma}^{(m,\ell)}_1\right\rbrace_{m=k+1}^{k+\alpha} \right]\\
&\leq& \alpha-1,
\end{eqnarray}
where the last inequality follows from equation~\eqref{eq:fromto_confined}. This contradicts the assertion of Property~\ref{pty:IA_necessary} with respect to the desired component:
\begin{equation} \text{rank}\left[\left\lbrace \underline{\gamma}^{(m,1)}_1\right\rbrace_{m=k+1}^{k+\alpha}\right] = \ \alpha.\end{equation}
\end{IEEEproof}
~
\begin{note}
It turns out that an attempted proof of the analogue of this property for the case $\beta>1$ fails to go through.
\end{note}
~
The connection between the vectors passed by a parity node and those stored by it, resulting out of Property~\ref{pty:alpha_ind}, is presented in the following corollary.
~
\begin{cor}\label{cor:storedISpassed}
If there exists a linear, exact-repair MSR code for $d<2k-1$, then there exists an equivalent linear, exact-repair MSR code, where, for each parity node, the $\alpha$ columns of the generator matrix are respectively the vectors passed for the repair of the first $\alpha$ systematic nodes.
\end{cor}
\begin{IEEEproof}
Since a node can pass only a function of what it stores, the vectors passed by a parity node $m\in\{k+1,\ldots,k+\alpha\}$, for repair of the systematic nodes must belong to the column-space of its generator matrix, i.e.,
\begin{equation} \text{colspace}\left[\underline{\gamma}^{(m,1)}~\cdots~\underline{\gamma}^{(m,\alpha)}\right] \subseteq \text{colspace}\left[\mathbf{G}^{(m)}\right]. \end{equation}
Further, Property~\ref{pty:alpha_ind} asserts that the vectors it passes for repair of the first $\alpha$ systematic nodes are linearly independent, i.e.,
\begin{eqnarray} \text{rank}\left[\underline{\gamma}^{(m,1)}~\cdots~\underline{\gamma}^{(m,\alpha)}\right] &=& \alpha \ = \ \text{rank}\left[\mathbf{G}^{(m)}\right]. \end{eqnarray}
It follows that the generator matrix $\mathbf{G}^{(m)}$ is a non-singular transformation of the vectors $\left[\;\underline{\gamma}^{(m,1)}~\cdots~\underline{\gamma}^{(m,\alpha)}\;\right]$ that are passed for the repair of the first $\alpha$ systematic nodes, and the two codes with generator matrices given by the two representations are hence equivalent.
\end{IEEEproof}
~
In the equivalent code, each row of Fig.~\ref{fig:fig_fromTo} corresponds to the generator matrix $\mathbf{G}^{(m)}$ of the associated parity node, i.e.,
\begin{equation} \mathbf{G}^{(m)} = \left[\underline{\gamma}^{(m,1)} \;
\cdots \; \underline{\gamma}^{(m,\alpha)} \right] \qquad \forall \ m\in\{k+1,\ldots,k+\alpha\}.\label{eq:nonach_storedISpassed} \end{equation}
Since the capabilities of a code are identical to an equivalent code, we will restrict our attention to this generator matrix for the remainder of this section. The two properties that follow highlight some additional structure in this code.
~
\begin{pty}[Code structure - what is stored] \label{pty:nonach_struct_stored}
For $d<2k-1$, for each component index ranging from $(\alpha+1)$ to $k$, the corresponding components of the generator matrices of the parity nodes differ only by the presence of a multiplicative diagonal matrix on the right, i.e.,
\begin{equation} \begin{tabular}{>{$}c<{$}>{$}c<{$}>{$}c<{$}>{$}c<{$}} G^{(k+1)}_{\alpha+1} = H_{\alpha+1} ~\Lambda^{(k+1)}_{\alpha+1}, &G^{(k+2)}_{\alpha+1} = H_{\alpha+1} ~\Lambda^{(k+2)}_{\alpha+1}, & \quad \cdots \quad & G^{(k+\alpha)}_{\alpha+1} = H_{\alpha+1}~ \Lambda^{(k+\alpha)}_{\alpha+1}\\
\vdots & \quad \vdots \quad &\quad \ddots \quad & \vdots \\
G^{(k+1)}_{k} \ = \ H_{k} ~ \Lambda^{(k+1)}_{k},&G^{(k+2)}_{k} \ = \ H_{k} ~ \Lambda^{(k+2)}_{k},& \quad \cdots \quad &G^{(k+\alpha)}_{k} \ = \ H_{k} ~ \Lambda^{(k+\alpha)}_{k}\end{tabular}
\label{eq:nonach_mxBelow}
\end{equation}
where the matrices of the form $\Lambda_*^{(*)}$ are $\alpha \times \alpha$ diagonal matrices
(and where, for instance, we can choose $H_{\alpha+1} = G^{(k+1)}_{\alpha+1}$, in which case $\Lambda^{(k+1)}_{\alpha+1}=I_{\alpha}$).
\end{pty}
\begin{IEEEproof}
Consider the first column in Fig.~\ref{fig:fig_fromTo}, comprising the vectors passed by the $\alpha$ parity nodes to repair node $1$. Property~\ref{pty:IA_necessary} tells us that in these $\alpha$ vectors, the components ranging from $(\alpha+1)$ to $k$ constitute interference, and are hence aligned. Clearly, the same statement holds for every column in Fig.~\ref{fig:fig_fromTo}. Thus, the respective components across these columns are aligned. Since the generator matrices of the parity nodes are as in~\eqref{eq:nonach_storedISpassed}, the result follows.
\end{IEEEproof}
~
For the repair of a systematic node, a parity node passes a vector from the column-space of its generator matrix, i.e., the vector $\underline{\gamma}^{(m,\ell)}$ passed by parity node $m$ for repair of failed systematic node $\ell$ can be written in the form:
\begin{equation} \underline{\gamma}^{(m,\ell)}~ =~ \mathbf{G}^{(m)}~ \underline{\theta}^{(m,\ell)}\end{equation} for some $\alpha$-length vector $\underline{\theta}^{(m,\ell)}$.
In the equivalent code obtained in~\eqref{eq:nonach_storedISpassed}, a parity node simply stores the $\alpha$ vectors it passes to repair the first $\alpha$ systematic nodes. On the other hand, the vector passed to systematic node $\ell$, $ \alpha+1 \leq \ell \leq k$, is a linear combination of these $\alpha$ vectors. The next property employs Property~\ref{pty:alpha_ind} to show that every coefficient in this linear combination is non-zero.
~
\begin{pty}[Code structure - what is passed]\label{pty:nonach_struct_passed}
For $d<2k-1$, and a helper parity node $m$ assisting a failed systematic node $\ell$\\
(a) For $\ell \in \{1,\ldots,\alpha\}$, $\underline{\theta}^{(m,\ell)}= \underline{e}_\ell$, and\\
(b) For $\ell \in \{\alpha+1,\ldots,k\}$, every element of $\underline{\theta}^{(m,\ell)}$ is non-zero.
\end{pty}
\begin{IEEEproof}
Part~(a) is a simple consequence of the structure of the code. We will prove part~(b) by contradiction. Suppose $\theta^{(m,\ell)}_{\alpha}=0$, for some $\ell \in \{\alpha+1,\ldots,k\}$. Then $\underline{\gamma}^{(m,\ell)}$ is a linear combination of only the first $(\alpha-1)$ columns of $\mathbf{G}^{(m)}$. This implies \begin{equation} \underline{\gamma}^{(m,\ell)} \in \text{colspace}\left[\underline{\gamma}^{(m,1)} \cdots \underline{\gamma}^{(m,\alpha-1)} \right]. \end{equation}
This clearly violates Property~\ref{pty:alpha_ind}, thus leading to a contradiction.
\end{IEEEproof}
\subsection{Proof of Non-existence}
We now present the main theorem of this section, namely, the non-achievability proof. The proof, in essence, shows that the conditions of Interference Alignment necessary for exact-repair of systematic nodes, coupled with the MDS property of the code, over-constrain the system, leading to alignment in the desired components as well.
We begin with a toy example that will serve to illustrate the proof technique. Consider the case when $[n=7,~k=5,~d=6]$. Then it follows from \eqref{eq:MSR_beta1_parameters} that $(\alpha=d-k+1=2,~B=k \alpha=10)$. In this case, as depicted in Fig.~\ref{fig:nonAch_finalProof}, in the vectors passed by parity nodes $6$ and $7$, (a) when repairing systematic node $3$, there is alignment in components $4$ and $5$, and (b) when repairing systematic node $4$, there is alignment in component $5$. It is shown that this, in turn, forces alignment in component $4$~(the desired component) during repair of node $4$, which contradicts the assertion of Property~\ref{pty:IA_necessary} that the desired components be linearly independent.
\begin{figure}[h]
\centering
\includegraphics[trim=1in 8.2in 2in 3.1in,clip=true,width=.7\textwidth]{fig_nonAch_finalProof.pdf}
\caption{\small A toy-example, with parameters $[n=7,~k=5,~d=6]$, to illustrate the proof of non-existence.}\label{fig:nonAch_finalProof}
\end{figure}
~
\begin{thm} \label{thm:non_exist}
Linear, exact-repair MSR codes achieving the cut-set bound on the repair-bandwidth do not exist for $d<2k-3$ in the absence of symbol extension~(i.e., when $\beta=1$).
\end{thm}
\begin{IEEEproof}
Recall that achieving the cut-set bound on the repair bandwidth in the absence of symbol extension gives $d=k-1+\alpha$. For the parameter regime $d<2k-3$ under consideration, we get $k \geq \alpha+3$. Furthermore, since $\alpha >1$~\footnote{As discussed previously in Section~\ref{sec:intro}, $\alpha=1$ corresponds to a trivial scalar MDS code; hence, we omit this case from consideration.}, we have $n \geq k+2$~(as $n \geq d+1=k+\alpha$). Hence the system contains at least $(\alpha+3)$ systematic nodes and at least two parity nodes.
We use Property~\ref{pty:nonach_struct_stored} to express the generator matrix of any parity node, say node $m$, in the form:
\[ \mathbf{G}^{(m)} \ = \ \left[\begin{tabular}{>{$}c<{$}} G^{(m)}_1 \\
\vdots \\
G^{(m)}_{\alpha} \\
H_{\alpha+1} \Lambda^{(m)}_{\alpha+1} \\
\vdots \\
H_{k} \ \Lambda^{(m)}_{k}
\end{tabular}
\right].
\label{eq:nonach_nodemx}
\]
In this proof, we will use the notation $A \prec B$ to indicate that the matrices $A$ and $B$ are scalar multiples of each other, i.e., $A = \kappa B$ for some non-zero scalar $\kappa$, and write $A \nprec B$ to indicate that the matrices $A$ and $B$ are \emph{not} scalar multiples of each other.
We will restrict our attention to components $(\alpha+2)$ and $(\alpha+3)$. First, consider repair of systematic node $(\alpha+1)$. By the interference alignment property, Property~\ref{pty:IA_necessary},
\begin{eqnarray}
\underline{\gamma}_{\alpha+2}^{(k+1,\alpha+1)} &\prec& \underline{\gamma}_{\alpha+2}^{(k+2,\alpha+1)} \\
\text{i.e.,}~~~~~~~~~G^{(k+1)}_{\alpha+2} ~\underline{\theta}^{(k+1,\alpha+1)} &\prec& G^{(k+2)}_{\alpha+2}~ \underline{\theta}^{(k+2,\alpha+1)}\label{eq:nonach_final_1}\\
\Rightarrow ~ H_{\alpha+2}~ \Lambda^{(k+1)}_{\alpha+2}~ \underline{\theta}^{(k+1,\alpha+1)} &\prec& H_{\alpha+2}~ \Lambda^{(k+2)}_{\alpha+2}~ \underline{\theta}^{(k+2,\alpha+1)}\label{eq:nonach_final_2}\\
\Rightarrow ~~~~~~~~~\Lambda^{(k+1)}_{\alpha+2}~ \underline{\theta}^{(k+1,\alpha+1)} &\prec& \Lambda^{(k+2)}_{\alpha+2}~ \underline{\theta}^{(k+2,\alpha+1)}\label{eq:nonach_final_3},\end{eqnarray}
where, equation~\eqref{eq:nonach_final_3} uses the non-singularity of $H_{\alpha+2}$ (which is a consequence of Property~\ref{pty:nec_recon}).
We will use the notation $\Theta^{(*,*)}$ to denote an $(\alpha \times \alpha)$ diagonal matrix whose diagonal elements are the respective elements of $\underline{\theta}^{(*,*)}$. Observing that the matrices $\Lambda^{(*)}_{*}$ are diagonal matrices, we rewrite equation~\eqref{eq:nonach_final_3} as
\begin{equation} \Lambda^{(k+1)}_{\alpha+2} \Theta^{(k+1,\alpha+1)} \prec \Lambda^{(k+2)}_{\alpha+2} \Theta^{(k+2,\alpha+1)}\label{eq:nonach_final_4}.\end{equation}
Similarly, alignment conditions on the $(\alpha+3)$th component in the vectors passed for repair of systematic node $(\alpha+1)$ give
\begin{equation}\Lambda^{(k+2)}_{\alpha+3} \Theta^{(k+2,\alpha+1)} \prec \Lambda^{(k+1)}_{\alpha+3} \Theta^{(k+1,\alpha+1)} \label{eq:nonach_final_5},\end{equation}
and those on the $(\alpha+3)$th component in the vectors passed for repair of systematic node $(\alpha+2)$ give
\begin{equation}\Lambda^{(k+1)}_{\alpha+3} \Theta^{(k+1,\alpha+2)} \prec \Lambda^{(k+2)}_{\alpha+3} \Theta^{(k+2,\alpha+2)} \label{eq:nonach_final_6}.\end{equation}
Observe that in equations~\eqref{eq:nonach_final_4},~\eqref{eq:nonach_final_5} and \eqref{eq:nonach_final_6}, matrices $\Lambda^{(*)}_{*}$ and $\Theta^{(*,*)}$ are non-singular, diagonal matrices. As a consequence, taking the product of the respective left-hand and right-hand sides of equations~\eqref{eq:nonach_final_4},~\eqref{eq:nonach_final_5} and~\eqref{eq:nonach_final_6}, followed by a cancellation of common terms, leads to:
\begin{equation} \Lambda^{(k+1)}_{\alpha+2} \Theta^{(k+1,\alpha+2)} \prec \Lambda^{(k+2)}_{\alpha+2} \Theta^{(k+2,\alpha+2)}\label{eq:nonach_final_9}. \end{equation}
This is clearly in contradiction to Property~\ref{pty:IA_necessary}, which mandates linear independence of the desired components in vectors passed for repair of systematic node $(\alpha+2)$:
\begin{eqnarray} H_{\alpha+2} \Lambda^{(k+1)}_{\alpha+2} \underline{\theta}^{(k+1,\alpha+2)} &\nprec& H_{\alpha+2} \Lambda^{(k+2)}_{\alpha+2} \underline{\theta}^{(k+2,\alpha+2)},\label{eq:nonach_final_7}\\
\text{i.e.},\qquad \Lambda^{(k+1)}_{\alpha+2} \Theta^{(k+1,\alpha+2)} &\nprec& \Lambda^{(k+2)}_{\alpha+2} \Theta^{(k+2,\alpha+2)}\label{eq:nonach_final_8}.
\end{eqnarray}
\end{IEEEproof}
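~

The cancellation step above can be mimicked numerically. In the sketch below (purely illustrative; diagonal matrices are represented by their diagonal vectors, and all scalar factors are arbitrary), the free quantities are drawn at random, equations~\eqref{eq:nonach_final_4}--\eqref{eq:nonach_final_6} are enforced by construction (note that \eqref{eq:nonach_final_4} and \eqref{eq:nonach_final_5} jointly constrain the $\Lambda$ matrices, which the sketch respects), and the forced proportionality of~\eqref{eq:nonach_final_9} is then verified.
\begin{verbatim}
import numpy as np

def prop(x, y):
    # True iff x = kappa * y for a single non-zero scalar kappa.
    r = x / y
    return np.allclose(r, r[0])

alpha = 4
rng = np.random.default_rng(2)
rnd = lambda: rng.uniform(0.5, 2.0, alpha)  # non-zero diagonal entries

lam1, lam2, mu1 = rnd(), rnd(), rnd()  # Lambda_{a+2}^{(k+1)}, Lambda_{a+2}^{(k+2)},
                                       # Lambda_{a+3}^{(k+1)}
theta1 = rnd()                         # Theta^{(k+1, a+1)}
theta2 = 1.3 * lam1 * theta1 / lam2    # eq. (4): alignment at component a+2
mu2 = 0.7 * mu1 * lam2 / lam1          # consistency of eq. (5) with eq. (4)

phi1 = rnd()                           # Theta^{(k+1, a+2)}, free
phi2 = 1.9 * mu1 * phi1 / mu2          # eq. (6): alignment at component a+3

# Eq. (9) is now forced: the desired components during repair of node
# (alpha+2) are aligned -- the contradiction in the proof above.
assert prop(lam1 * phi1, lam2 * phi2)
\end{verbatim}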
\section{Explicit Codes for $d=k+1$}\label{sec:MDSplus}
In this section, we give an explicit MSR code construction for the parameter set $\left[n,~k,~d=k+1\right]$, capable of repairing any failed node with a repair bandwidth equal to that given by the cut-set bound. This parameter set is relevant since
\begin{enumerate}[a)]
\item the total number of nodes $n$ in the system can be arbitrary~(and is not constrained to be equal to $d+1$), making the code pertinent for real-world distributed storage systems where it is natural for the system to expand/shrink,
\item $k+1$ is the smallest value of the parameter $d$ that offers a reduction in repair bandwidth, making the code suitable for networks with low connectivity.\end{enumerate}
The code is constructed for $\beta=1$, i.e., the code does not employ any symbol extension. All subsequent discussion in this section will implicitly assume $\beta=1$.
For most values of the parameters $[n, \; k , \; d]$, $d=k+1$ falls under the $d<2k-3$ regime, where we have shown (Section~\ref{sec:non_exist_alpha_3}) that exact-repair is not possible. When repair is not exact, a nodal generator matrix is liable to change after a repair process. Thus, for the code construction presented in this section, we drop the global kernel viewpoint and refer directly to the symbols stored or passed.
~
As a build-up to the code construction, we first inspect the trivial case of $d=k$. In this case, the cut-set lower bound on repair bandwidth is given by
\begin{equation} d \geq k = B. \end{equation}
Thus the parameter regime $d=k$ mandates the repair bandwidth to be no less than the file size $B$, and has the remaining parameters satisfying \begin{equation} \left(\alpha=1, \ B=k\right). \end{equation}
An MSR code for these parameters is necessarily an $[n,~k]$ scalar MDS code. Thus, in this code, node $i$ stores the symbol \begin{equation} \left(\underline{p}_i^t \, \underline{u}\right), \end{equation} where $\underline{u}$ is a $k$-length vector containing all the message symbols, and $\lbrace\underline{p}_i\rbrace_{i=1}^{n}$ is a set of $k$-length vectors such that any arbitrary $k$ of the $n$ vectors are linearly independent. Upon failure of a node, the replacement node can connect to any arbitrary $d=k$ nodes and download one symbol each, thereby recovering the entire message from which the desired symbol can be extracted.
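As a quick illustration (a hypothetical real-valued sketch; an actual deployment would work over a finite field), the repair-bandwidth-equals-file-size behaviour of the $d=k$ regime can be reproduced in a few lines.
\begin{verbatim}
import numpy as np

n, k = 8, 5                          # alpha = 1, B = k
rng = np.random.default_rng(3)

# Rows p_i such that any k of them are linearly independent; a random
# Gaussian matrix has this property with probability one.
P = rng.standard_normal((n, k))
u = rng.standard_normal(k)           # the B = k message symbols
stored = P @ u                       # node i stores the symbol p_i^t u

# A replacement node (or data-collector) contacts any d = k nodes,
# recovers the entire message, and extracts the desired symbol.
helpers = [0, 2, 3, 5, 7]
u_hat = np.linalg.solve(P[helpers], stored[helpers])
assert np.allclose(u_hat, u)         # repair bandwidth = k = B symbols
\end{verbatim}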
\begin{figure}
\centering
\includegraphics[trim=0in 0.2in 0in 0in, clip, width=\textwidth]{fig_dkp1Evolution3.pdf}
\caption{\small Evolution of a node through multiple repairs in the MSR $d=k+1$ code.} \label{fig:dkp1_node_Evolution}
\end{figure}
When $d=k+1$, the cut-set bound~\eqref{eq:MSR_beta1_parameters} gives \begin{equation} \left( \alpha=d-k+1=2, \ B=\alpha k=2k\right) . \end{equation} Let the $2k$ message symbols be the elements of the $2k$-dimensional column vector \[\left[
\begin{tabular}{c}
$\underline{u}_1$\\
$\underline{u}_2$
\end{tabular}
\right], \] where $\underline{u}_1$ and $\underline{u}_2$ are $k$-length column vectors.
In the case of $d=k+1$, a code analogous to the $d=k$ code would have node $i$ storing the two symbols:
\begin{equation} \left(\underline{p}_i^t \, \underline{u}_1, ~~\underline{p}_i^t \,\underline{u}_2\right). \label{eq:dkp1_init} \end{equation}
Maintaining the code as in~\eqref{eq:dkp1_init}, after one or more node repairs, necessitates \textit{exact} repair of any failed node. Since in this regime, exact-repair is not possible for most values of the parameters, we allow an auxiliary component in our code, as described below.
In our construction, the symbols stored in the nodes are initialized as in~\eqref{eq:dkp1_init}. On repair of a failed node, the code allows for an auxiliary component in the second symbol. Thus, under this code, the two symbols stored in node $i,~1 \leq i \leq n$, are
\begin{equation} \text{\huge (}\underbrace{\underline{p}_i^t\,\underline{u}_1, \qquad \underline{p}_i^t \, \underline{u}_2}_{\text{Exact component}}~+\hspace{-.43cm}\underbrace{\underline{r}_i^t\,\underline{u}_1}_{\text{Auxiliary component}}\hspace{-.8cm}\text{\huge )}, \end{equation} where $\underline{r}_i$ is a $k$-length vector corresponding to the auxiliary component. Further, the value of $\underline{r}_i$ may change when node $i$ undergoes repair. Hence we term this repair process \textit{approximately-exact-repair}. For a better understanding, the system can be viewed as analogous to a $Z$-channel; this is depicted in Fig.~\ref{fig:dkp1_node_Evolution}, where the evolution of a node through successive repair operations is shown. In the latter half of this section, we will see that the vectors $\{\underline{r}_i\}_{i=1}^{n}$ do not, at any point in time, influence either the reconstruction or the repair process.
We now proceed to a formal description of the code construction.
\subsection{Code Construction:}
Let $\lbrace\underline{p}_i\rbrace_{i=1}^{n}$ be a set of $k$-length vectors such that any arbitrary $k$ of the $n$ vectors are linearly independent. Further, let $\{\underline{r}_i\}_{i=1}^{n}$ be a set of $k$-length vectors initialized to arbitrary values. Unlike $\lbrace\underline{p}_i\rbrace$, the vectors $\{\underline{r}_i\}$ do not play a role either in reconstruction or in repair. In our code, node $i$ stores the two symbols:
\begin{equation} \left(\underline{p}_i^t~\underline{u}_1, ~~
\underline{p}_i^t\,\underline{u}_2+\underline{r}_i^t\,\underline{u}_1\right). \end{equation}
Upon failure of a node, the exact component, as the name suggests, is exactly repaired. However, the auxiliary component may undergo a change. The net effect is what we term \textit{approximately-exact-repair}.
The code is defined over the finite field $\mathbb{F}_q$ of size $q$. The sole restriction on $q$ comes from the construction of the set of vectors $\lbrace\underline{p}_i\rbrace_{i=1}^{n}$ such that every subset of $k$ vectors is linearly independent.
For instance, these vectors can be chosen from the rows of an $(n \times k)$ Vandermonde matrix or an $(n \times k)$ Cauchy matrix, in which case any finite field of size $q\geq n$ or $q \geq n+k$ respectively will suffice.
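As a sanity check of the field-size requirement (an illustrative script; the parameters match the Vandermonde option with $q=11$, $n=8$, $k=5$ used in the example below), one can verify exhaustively that every $k$-subset of Vandermonde rows over $\mathbb{F}_{11}$ is linearly independent.
\begin{verbatim}
from itertools import combinations
from sympy import Matrix

q, n, k = 11, 8, 5
# Rows of an (n x k) Vandermonde matrix with distinct non-zero nodes.
V = [[pow(x, j, q) for j in range(k)] for x in range(1, n + 1)]

for rows in combinations(range(n), k):
    det = Matrix([V[i] for i in rows]).det() % q
    assert det != 0  # non-zero determinant mod q => invertible over F_q
\end{verbatim}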
\textit{Example: } Fig.~\ref{fig:dkp1_example} depicts a sample code construction over $\mathbb{F}_{11}$ for the parameters $[n=8,~k=5,~d=6]$ with $\beta=1$ giving $(\alpha=2,\ B=10)$. Here,
\[ \left[\begin{tabular}{>{$}c<{$}} \underline{p}_1^t \\
\vdots \\
\underline{p}_8^t
\end{tabular}\right]
= \left[\begin{tabular}{>{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$}}
1&0&0&0&0\\
0&1&0&0&0\\
0&0&1&0&0\\
0&0&0&1&0\\
0&0&0&0&1\\
4&5&3&1&1\\
3&6&1&1&7\\
3&7&8&3&4
\end{tabular}
\right],~
\left[\begin{tabular}{>{$}c<{$}} \underline{r}_1^t \\
\vdots \\
\underline{r}_8^t
\end{tabular}\right] = \left[\begin{tabular}{>{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$}}
0& 0& 1& 2& 2\\
2& 0& 1& 1& 1\\
0& 0& 0& 10&0\\
1& 2& 1& 0& 1\\
1& 0& 0& 1& 0\\
0& 0& 0& 0& 0\\
0& 0& 0& 1& 0\\
1& 0& 4& 0& 0
\end{tabular}
\right].
\]
\begin{figure}[t]
\centering
\includegraphics[trim=1.1in 4.7in 2.8in 0.5in, clip, width=\textwidth]{fig_example_dkp1.pdf}
\caption{\small A sample MSR $d=k+1$ code for the parameters $[n=8,~k=5,~d=6]$, $(\beta=1,\;\alpha=2,\;B=10)$, over $\mathbb{F}_{11}$.
Also depicted is the repair of node $8$, assisted by helper nodes $1$ to $6$.} \label{fig:dkp1_example}
\end{figure}
The two theorems below show that the code described above is an $[n,~k,~d=k+1]$ MSR code by establishing respectively, the reconstruction and the repair properties of the code.
~
\begin{thm}[Reconstruction, i.e., MDS property] In the code presented, all the $B$ message symbols can be recovered by a data-collector connecting to any arbitrary $k$ nodes.\label{thm:dkp1_recon}
\end{thm}
\begin{IEEEproof}
Due to symmetry we assume (without loss of generality) that the data-collector connects to the first $k$ nodes. Then the data-collector obtains access to the $2k$ symbols stored in the first $k$ nodes:
\begin{equation} \left\lbrace\underline{p}_i^t\,\underline{u}_1, \quad
\underline{p}_i^t\,\underline{u}_2\,+\,\underline{r}_i^t\,\underline{u}_1\right\rbrace_{i=1}^{k}. \end{equation}
By construction, the vectors $\lbrace\underline{p}_i\rbrace_{i=1}^{k}$ are linearly independent, allowing the data-collector to recover the first message vector $\underline{u}_1$. Next, the data-collector subtracts the effect of $\underline{u}_1$ from the second term. Finally, in a manner analogous to the decoding of $\underline{u}_1$, the data-collector recovers the second message vector $\underline{u}_2$.
\end{IEEEproof}
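~

The decoding steps of the proof translate directly into the following sketch (a hypothetical real-valued instance; node indices and parameter values are chosen only for illustration).
\begin{verbatim}
import numpy as np

n, k = 8, 5
rng = np.random.default_rng(4)
P = rng.standard_normal((n, k))      # p_i rows; any k rows invertible w.p. 1
R = rng.standard_normal((n, k))      # arbitrary auxiliary vectors r_i
u1, u2 = rng.standard_normal(k), rng.standard_normal(k)

# Node i stores (p_i^t u1, p_i^t u2 + r_i^t u1).
sym1, sym2 = P @ u1, P @ u2 + R @ u1

nodes = [1, 2, 4, 6, 7]              # the k nodes the data-collector reads
Pk, Rk = P[nodes], R[nodes]
u1_hat = np.linalg.solve(Pk, sym1[nodes])                 # recover u1 first
u2_hat = np.linalg.solve(Pk, sym2[nodes] - Rk @ u1_hat)   # strip u1, get u2
assert np.allclose(u1_hat, u1) and np.allclose(u2_hat, u2)
\end{verbatim}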
~
\begin{thm}[Node repair] In the code presented, \textit{approximately} exact-repair of any failed node can be achieved by connecting to an arbitrary subset of $d~(=k+1)$ of the remaining $(n-1)$ nodes.\label{thm:dkp1_regen}
\end{thm}
\begin{IEEEproof}
Due to symmetry, it suffices to consider the case where helper nodes $\{1,\ldots,k+1\}$ assist in the repair of another failed node $f$. The two symbols stored in node $f$ prior to failure are \[\left(\underline{p}_{f}^t\,\underline{u}_1, \quad \underline{p}_{f}^t\,\underline{u}_2+\underline{r}_{f}^t\,\underline{u}_1\right).\] However, since repair is guaranteed to be only approximately exact, it suffices for the replacement node to obtain \[\left(\underline{p}_{f}^t\,\underline{u}_1, \quad
\underline{p}_{f}^t\,\underline{u}_2+\underline{\tilde{r}}_{f}^t\,\underline{u}_1\right),\] where $\underline{\tilde{r}}_{f}$ is an arbitrary vector that need not be identical to $\underline{r}_{f}$.
The helper nodes $\{1,\ldots,k+1\}$ pass one symbol each, formed by a linear combination of the symbols stored in them. More specifically, helper node $i, \, 1 \leq i \leq k+1$, under our repair algorithm, passes the symbol \begin{equation}\lambda_i\left(\underline{p}_{i}^t\,\underline{u}_1\right) \,+\,
\left( \underline{p}_{i}^t\,\underline{u}_2+\underline{r}_{i}^t\,\underline{u}_1\right). \end{equation}
We introduce some notation at this point. For $\ell \in \{k, \, k+1\}$, let $P_{\ell}$ be an $( \ell \times k)$ matrix comprising the vectors $\underline{p}_1, \ldots,\underline{p}_{\ell}$ as its $\ell$ rows respectively. Let $R_{\ell}$ be a second $(\ell \times k)$ matrix comprising the vectors $\underline{r}_1, \ldots,\underline{r}_{\ell}$ as its $\ell$ rows respectively. Further, let $\Lambda_{\ell}=\text{diag}\{\lambda_1,\ldots,\lambda_{\ell}\}$ be an $(\ell \times \ell)$ diagonal matrix. In terms of these matrices, the $k+1$ symbols obtained by the replacement node can be written as the $(k+1)$-length vector
\begin{equation} (\Lambda_{k+1} P_{k+1} + R_{k+1} )~\underline{u}_1 + (P_{k+1})~\underline{u}_2~\label{eq:dkp1_regenvec}.\end{equation}
The precise values of the scalars $\{\lambda_i\}_{i=1}^{k+1}$ are derived below.
~
\paragraph*{Recovery of the First Symbol}
Let $\underline{\rho}$ denote the vector of coefficients of the linear combination that the replacement node applies to the received symbols to recover the first symbol that was stored in the failed node, i.e., we need \begin{equation} \underline{\rho}^t \left((\Lambda_{k+1} P_{k+1} + R_{k+1} )~\underline{u}_1 + (P_{k+1})~\underline{u}_2\right) \ = \ \underline{p}_{f}^t~\underline{u}_1.\end{equation} This requires elimination of $\underline{u}_2$, i.e., we need
\begin{equation} \underline{\rho}^{t} P_{k+1} = \underline{0}^t. \label{eq:dkp1_repair_1}\end{equation}
To accomplish this, we first choose
\begin{equation} \underline{\rho} = \left[\begin{tabular}{>{$}c<{$}} \underline{\rho}_1 \\ -1 \end{tabular}\right], \end{equation}
and in order to satisfy equation~\eqref{eq:dkp1_repair_1}, we set
\begin{equation} \underline{\rho}_1^{t} = \underline{p}_{k+1}^t P_k^{-1}. \label{eq:dkp1_repair_2}\end{equation}
Note that the $(k \times k)$ matrix $P_k$ is non-singular by construction.
Now as $\underline{u}_2$ is eliminated, to obtain $\underline{p}_{f}^t~\underline{u}_1$, we need
\begin{eqnarray} \underline{\rho}^{t} \left( \Lambda_{k+1} P_{k+1} + R_{k+1} \right) & = & \underline{p}_{f}^t \\
\Rightarrow \quad \underline{\rho}_1^{t} \left( \Lambda_{k} P_{k} + R_{k} \right) & = & \underline{p}_f^t + \left( \lambda_{k+1} \; \underline{p}^{t}_{k+1} + \; \underline{r}^{t}_{k+1} \right). \label{eq:dkp1_repair_25}\end{eqnarray}
Choosing $\lambda_{k+1}=0$ and substituting the value of $\underline{\rho}^{t}_1$ from equation~\eqref{eq:dkp1_repair_2}, a few straightforward manipulations yield that choosing
\begin{equation} \Lambda_k = \left(\text{diag}\left[\underline{p}_{k+1}^t ~P_k^{-1}\right]\right)^{-1} \text{diag}\left[\left(\underline{p}_{f}^t ~-~ \underline{p}_{k+1}^t~ P_k^{-1}~R_k ~+~ \underline{r}_{k+1}^t\right) P_k^{-1}\right],\end{equation}
satisfies equation~\eqref{eq:dkp1_repair_25}, thereby enabling the replacement node to exactly recover the first symbol.
The non-singularity of the matrix $\text{diag}\left[\underline{p}_{k+1}^t ~P_k^{-1}\right]$ used here is justified as follows. Consider \begin{equation} \left[\underline{p}_{k+1}^t~ P_k^{-1}\right] P_k= \underline{p}_{k+1}^t ~. \end{equation}
Now, if any element of $\left[\underline{p}_{k+1}^t ~P_k^{-1}\right]$ is zero, it would imply that a linear combination of $(k-1)$ rows of $P_k$ can yield $\underline{p}_{k+1}^t$. However, this contradicts the linear independence of every subset of $k$ vectors in $\{\underline{p}_i\}_{i=1}^{n}$.
~
\paragraph*{Recovery of the Second Symbol}
Since the scalars $\{\lambda_i\}_{i=1}^{k+1}$ have already been utilized in the exact recovery of the first symbol, we are left with fewer degrees of freedom. This, in turn, gives rise to the presence of an auxiliary term in the second symbol.
Let $\underline{\delta}$ denote the vector of coefficients of the linear combination that the replacement node applies to the received symbols to obtain its second symbol $(\underline{p}_{f}^t~\underline{u}_2+\underline{\tilde{r}}_{f}^t~\underline{u}_1)$, i.e., we need
\begin{equation} \underline{\delta}^t \left((\Lambda_{k+1} P_{k+1} + R_{k+1} )~\underline{u}_1 + (P_{k+1})~\underline{u}_2\right) \ = \ \underline{p}_{f}^t~\underline{u}_2+\underline{\tilde{r}}_{f}^t~\underline{u}_1.\label{eq:dkp1_delta1}\end{equation}
Since the vector $\underline{\tilde{r}}_{f}$ is allowed to take any arbitrary value, the condition in~\eqref{eq:dkp1_delta1} is reduced to the requirement
\begin{equation} \underline{\delta}^{t} P_{k+1} = \underline{p}^{t}_f. \label{eq:dkp1_repair_3} \end{equation}
To accomplish this, we first choose
\begin{equation} \underline{\delta} = \left[\begin{tabular}{>{$}c<{$}} \underline{\delta}_1 \\ 0 \end{tabular}\right], \end{equation}
where, in order to satisfy equation~\eqref{eq:dkp1_repair_3}, we choose
\begin{equation} \underline{\delta}_1^{t} = \underline{p}_{f}^t P_k^{-1}~. \label{eq:dkp1_repair_4}\end{equation}
\end{IEEEproof}
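The closed-form choices of $\underline{\rho}$, $\Lambda_k$ and $\underline{\delta}$ derived above can be checked end-to-end. The sketch below (again a hypothetical real-valued instance with illustrative parameters) lets helper nodes $1,\ldots,k+1$ repair node $f$, verifies exact recovery of the first symbol, and exhibits the new auxiliary vector $\underline{\tilde{r}}_f$ attached to the second symbol.
\begin{verbatim}
import numpy as np

n, k = 8, 5
rng = np.random.default_rng(5)
P, R = rng.standard_normal((n, k)), rng.standard_normal((n, k))
u1, u2 = rng.standard_normal(k), rng.standard_normal(k)
f = 7                                 # failed node (not among the helpers)

Pk, Rk = P[:k], R[:k]
p_next, r_next = P[k], R[k]           # helper node k+1

# lambda_{k+1} = 0; Lambda_k from the closed form in the proof.
rho1 = np.linalg.solve(Pk.T, p_next)  # rho1^t = p_{k+1}^t P_k^{-1}
lam_k = np.linalg.solve(Pk.T, P[f] - Rk.T @ rho1 + r_next) / rho1
lam = np.concatenate([lam_k, [0.0]])

# Helper i passes lambda_i (p_i^t u1) + (p_i^t u2 + r_i^t u1).
recv = (lam[:, None] * P[:k+1] + R[:k+1]) @ u1 + P[:k+1] @ u2

rho = np.concatenate([rho1, [-1.0]])  # exact recovery of the first symbol
assert np.allclose(rho @ recv, P[f] @ u1)

delta = np.concatenate([np.linalg.solve(Pk.T, P[f]), [0.0]])
r_tilde = (lam[:k, None] * Pk + Rk).T @ delta[:k]  # new auxiliary vector
assert np.allclose(delta @ recv, P[f] @ u2 + r_tilde @ u1)
\end{verbatim}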
In the example provided in Fig.~\ref{fig:dkp1_example}, node $8$ is repaired by downloading one symbol each from nodes $1$ to $6$. The linear combination coefficients used by the helper nodes are: \[ \left[ \lambda_1 ~\cdots~ \lambda_{6} \right] = \left[ 6~1~3~3~1~0\right]. \] The replacement node retains the exact part, and obtains a different auxiliary part, with $\tilde{\underline{r}}_{8} = \left[6~2~4~7~9\right].$
\section{Conclusions}\label{sec:conclusion}
This paper considers the problem of constructing MDS regenerating codes achieving the cut-set bound on repair bandwidth, and presents four major results. First, the construction of an explicit code, termed the MISER code, that is capable of performing data reconstruction as well as optimal exact-repair of the systematic nodes, is presented. The construction is based on the concept of interference alignment. Second, we show that interference alignment is, in fact, necessary to enable exact-repair in an MSR code. Third, using the necessity of interference alignment as a stepping stone, several properties that every exact-repair MSR code must possess are derived. It is then shown that these properties over-constrain the system in the absence of symbol extension for $d<2k-3$, leading to the non-existence of any linear, exact-repair MSR code in this regime. Finally, an explicit MSR code for $d=k+1$, suited for networks with low connectivity, is presented. This is the first explicit code in the regenerating codes literature that does not impose any restriction on the total number of nodes $n$ in the system.
\section{Introduction\label{sect: intr}}
One of the well-known challenges in unleashing the potential of Internet-of-Things (IoT) is the limitations of energy sources \cite{wong2017key}. In practice, a large number of sensors equipped with limited energy storage create a system performance bottleneck in realizing sustainable communications.
Recently, ambient backscatter communication (AmBC) has been proposed as a promising technique to relieve the energy shortage problem of IoT networks. Specifically, in an AmBC system, a passive tag or sensor could communicate with other nodes by backscattering the ambient RF signals such as television broadcast signals and Wi-Fi signals, instead of directly emitting radio-frequency (RF) signals by itself \cite{van2018ambient}. In fact, by switching to backscattering state or non-backscattering state, a tag or sensor can perform information transmission and utilize the spectral resources of existing systems without requiring additional power.
Thus, AmBC technology has attracted vast attention from academia and industry \cite{hoang2020ambient, liu2020transfer, Liu2020con} in recent years.
The performance of AmBC systems can be dramatically improved by accurate channel estimation.
However, the channel estimation problem for AmBC is different from that of traditional communication systems due to the following factors:
\begin{enumerate}[(a)]
\item Since a passive tag is unable to independently transmit RF signals, the generation of pilot signals for channel estimation requires the cooperation of the RF source;
\item The channel coefficients of a tag under ON state (backscattering) are not consistent with those under OFF state (non-backscattering).
\end{enumerate}
Therefore, practical channel estimation schemes for AmBC systems need to be redesigned. For example, the optimal minimum mean square error (MMSE) estimator \cite{kay1993fundamentals} cannot be implemented in AmBC systems due to the lack of a precise statistical channel correlation matrix. Thus, a blind expectation maximization (EM)-based method was designed to estimate the absolute values of the channel coefficients \cite{ma2018blind}. However, its estimation performance is unsatisfactory due to the lack of RF source knowledge.
As a remedy, pilots-based methods have been developed. For instance, Ma \emph{et al.} \cite{ma2018machine} designed an EM-aided machine learning scheme. Besides, Zhao \emph{et al.} \cite{zhao2019channel} studied the channel estimation for AmBC with a massive-antenna reader and designed a channel estimation algorithm to jointly estimate the channel coefficients and the directions of arrivals. Although these two methods further improve the estimation performance, there is still a considerable performance gap between them and the optimal MMSE estimator.
Note that the pilot-based channel estimation problem can be considered as a denoising problem \cite{he2018deep, liu2020deep}. Meanwhile, the deep residual learning (DReL) has recently been proposed as a promising denoising technique \cite{he2016deep}.
Motivated by this, and in contrast to existing deep learning-based methods that adopt deep neural networks to directly recover channel coefficients, e.g., \cite{huang2020reconfigurable, huang2019deep, chun2019deep, kang2018deep}, we model channel estimation in AmBC as a denoising problem and develop a DReL approach exploiting a convolutional neural network (CNN)-based deep residual learning denoiser (CRLD) for channel estimation. In CRLD, a three-dimensional (3D) denoising block is specifically designed to exploit both the spatial and temporal correlations of the received pilot signals. The proposed method inherits the advantages of CNN and DReL in feature extraction\cite{liu2019JSAC} and denoising to improve the estimation accuracy.
Simulations are conducted and our results show that the proposed method achieves almost the same normalized mean square error (NMSE) performance as the optimal MMSE estimator with the perfect knowledge of the statistical channel correlation matrix.
\emph{Notations}: Superscript $T$ represents the transpose. Term ${\mathcal{N}}( \bm{\mu},\mathbf{\Sigma} )$ denotes the Gaussian distribution with a mean vector $\bm{\mu}$ and a covariance matrix $\mathbf{\Sigma}$. Terms ${\bf{I}}_M$ and ${\mathbf{0}}$ represent the $M$-by-$M$ identity matrix and the zero vector, respectively. $\mathbb{R}$ indicates the set of real numbers. $E(\cdot)$ is the statistical expectation. $\|\cdot\|_2$ and $\|\cdot\|_F$ denote the Euclidean norm of a vector and the Frobenius norm of a matrix, respectively. $\max(a,b)$ represents the maximum value between $a$ and $b$.
\begin{figure}[t]
\centering
\includegraphics[width=0.56\linewidth]{AmBC_scenario.pdf}\vspace{-0.1cm}
\caption{ The considered AmBC system. }\vspace{-0.56cm}\label{Fig:System_model}
\end{figure}
\vspace{-0.1 cm}
\section{System Model}
In this letter, we consider a typical AmBC system, where a single-antenna RF source is surrounded by a single-antenna passive tag and a reader equipped with an $M$-element antenna array, as shown in Fig. \ref{Fig:System_model}.
Although the passive tag is unable to send RF signals by itself, it can transmit its binary tag symbols by deciding whether to reflect the ambient signals (symbol ``1") to the reader or to absorb them (symbol ``0"). Correspondingly, the reader can then recover the binary tag symbols from the received signals. Denote by $\mathbf{y}(n)={{[y_{1}(n),y_{2}(n),\cdots ,y_{M}(n)]}^{T}}, n \in \{ 0,1,\cdots,N_T-1\}$, the $n$-th sampling vector at the reader, where $N_T$ is the number of samples for each frame and $y_m(n)$, $m \in \{1,2, \cdots, M\}$, denotes the received sample from the $m$-th antenna element. The received sampling vector at the reader is expressed as
\begin{equation}
\label{y}
\mathbf{y}(n) = \mathbf{h}s(n) + \alpha f\mathbf{g}s(n)c(n) + \mathbf{u}(n).
\end{equation}
Here, $s(n)$ is the RF signal sample and $c(n)\in \mathcal{C} = \{0,1\}$ denotes the tag binary symbol. $\mathbf{h}=[h_1, h_2, \cdots, h_M ]^T$, whose element $h_m \in \mathbb{R}$ represents the channel coefficient between the RF source and the $m$-th antenna of the reader{\footnotemark}\footnotetext{Note that it is convenient for a neural network to process real-valued data. Thus, a simplified real-valued model is adopted in this letter, which can be easily extended to a complex-valued model via a similar approach as in \cite{kay1993fundamentals}.}. Similarly, $\mathbf{g}=[g_1, g_2, \cdots, g_M]^T$ and $g_m \in \mathbb{R}$ is the channel coefficient between the tag and the $m$-th antenna of the reader. $\alpha \in \mathbb{R}$ is the constant reflection coefficient of the tag. Since the channel between the RF source and the tag is dominated by a strong line-of-sight (LoS) component due to the short communication distance, the corresponding channel coefficient $f \in \mathbb{R}$ can be assumed to be a constant.
In addition, we assume that the noise vector $\mathbf{u}(n) \in \mathbb{R}^{M\times1}$ is an independent and identically distributed (i.i.d.) Gaussian random vector, i.e., $\mathbf{u}(n)\sim \mathcal{N}( \mathbf{0},\sigma _u^2{{\mathbf{I}}_M} )$, where $\sigma_u^2$ is the noise variance.
Based on the system model, the relative coefficient between the reflection link and the direct link can be defined as $ \zeta = E ( ||\alpha f\mathbf{g}||_2^2 ) / E ( ||\mathbf{h}||_2^2 )$, and the instantaneous signal-to-noise ratio (SNR) of the direct link is defined as $\mathrm{SNR} = E(||\mathbf{h}s(n)||_2^2) / E(||\mathbf{u}(n)||_2^2)$.
Note that the received signal can also be written as
\begin{equation}\label{y_v2}
\mathbf{y}(n) = \left\{\begin{matrix}
\mathbf{w}s(n)+\mathbf{u}(n),& c(n)=1,\\
\mathbf{h}s(n)+\mathbf{u}(n),& c(n)=0,
\end{matrix}\right.
\end{equation}
where $\mathbf{w} = \mathbf{h} + \alpha f\mathbf{g}$. In this letter, we consider a general slow fading Rayleigh channel model, i.e., $\mathbf{w} \sim \mathcal{N}( \mathbf{0},\mathbf{R}_\mathbf{w} )$ and $\mathbf{h} \sim \mathcal{N}( \mathbf{0},\mathbf{R}_\mathbf{h} )$, where $\mathbf{R}_\mathbf{w} = E(\mathbf{w}\mathbf{w}^{T})$ and $\mathbf{R}_\mathbf{h} = E(\mathbf{h}\mathbf{h}^{T})$ are the statistical channel correlation matrices of $\mathbf{w}$ and $\mathbf{h}$, respectively. In this case, the objective of channel estimation is to estimate the channel coefficient vectors: $\mathbf{w}$ for $c(n)=1$ and $\mathbf{h}$ for $c(n)=0$.
Based on this, we design a simple communication protocol for channel estimation in AmBC systems. Assume that there are $T$ frames and each frame has the same structure. As shown in Fig. \ref{Figure:AmBC-frame}, frame $t$, $t \in \{1,2,\cdots,T\}$, consists of three phases: A, B, and C. The first two phases are designed for channel estimation and the remaining phase is for data transmission, which are introduced as follows.
\begin{enumerate}[{Phase~}A:]
\item Estimation of $\mathbf{h}$. The tag keeps the state of non-reflection for $N_a$ consecutive sampling periods and the reader estimates $\mathbf{h}$ based on the $N_a$ pilot bits.
\item Estimation of $\mathbf{w}$. The tag keeps the state of reflection for $N_b$ consecutive sampling periods and the reader estimates $\mathbf{w}$ based on the $N_b$ pilot bits.
\item Data transmission. The tag transmits $N_c$ information bits by reflecting or absorbing the RF signal.
\end{enumerate}
Note that in the designed protocol, phase A and phase B are adopted to generate pilot bits as the input of the well-trained CRLD to estimate the channel coefficients.
After that, the reader can then decode the received tag symbols in phase C exploiting the estimated channel coefficients.
In the designed protocol, we set $N_a \ll N_c$ and $N_b \ll N_c$ such that there are still enough information bits for data transmission.
Specifically, we set $s(n)=1$ for all the pilot signals and thus the channel estimation becomes a denoising problem, i.e., recovering $\mathbf{x}$ from a noisy observation
\vspace{-0.2cm}
\begin{equation}
\label{y_denoise}
\mathbf{y}(n) = \mathbf{x} + \mathbf{u}(n), \vspace{-0.2cm}
\end{equation}
where $\mathbf{x} = \mathbf{w}$ or $\mathbf{x} = \mathbf{h}$, depending on whether $c(n)=1$ or $c(n)=0$.
In the following, we will develop a denoising algorithm to recover channel coefficients from the received noisy pilot bits.
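Under (\ref{y_denoise}) with $s(n)=1$, the natural baseline is the least-squares (sample-mean) estimate over the $P$ pilot observations; a minimal sketch (the function name is ours):
\begin{verbatim}
import numpy as np

def ls_estimate(Y_pilot):
    """LS channel estimate from pilots. Y_pilot has shape (M, P);
    each column equals x + u(n), so averaging over the P columns
    reduces the noise variance by a factor of P."""
    return Y_pilot.mean(axis=1)
\end{verbatim}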
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{AmBC_frame.pdf}\vspace{-0.1cm}
\caption{ The designed protocol for the considered AmBC system. }\vspace{-0.68cm}\label{Figure:AmBC-frame}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.99\linewidth]{CRLD.pdf}\vspace{-0.2cm}
\caption{ The proposed CRLD architecture for channel estimation in AmBC. }\vspace{-0.4cm}\label{Figure:CRLD}
\end{figure*}
\vspace{-0.1cm}
\section{CNN-based Residual Learning Denoiser}
In this section, we develop a DReL approach to learn the residual noise and thereby recover the channel coefficients from the noisy pilot signals. Specifically, we adopt a CNN to facilitate DReL by proposing a CNN-based deep residual learning denoiser (CRLD). Instead of directly learning a mapping from a noisy channel matrix to a denoised channel matrix, we design a 3D denoising block to learn the residual noise from the noisy channel matrices temporally and spatially.
In the following, we introduce the proposed CRLD architecture and the related algorithm.
\vspace{-0.2cm}
\subsection{CRLD Architecture}
As shown in Fig. \ref{Figure:CRLD}, the CRLD consists of an input layer, $B$ denoising blocks, one convolutional layer, and one output layer. The hyperparameters are summarized in Table I and each layer is introduced as follows.
\begin{table}[t]
\normalsize
\caption{Hyperparameters of the proposed CRLD}
\vspace{-0.2cm}
\centering
\small
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{c c c}
\hline
\multicolumn{3}{l}{\textbf{Input}: 3D matrix with the size of $M_a \times M_b \times P$} \\
\hline
\multicolumn{3}{l}{\textbf{Denoising Block} (CRLD has $B$ identical denoising blocks):} \\
\hspace{0.4cm} \textbf{Layers} & \textbf{Operations} & \hspace{0.4cm} \textbf{Filter Size} \\
\hspace{0.4cm} 1 & Conv + BN + ReLU & \hspace{0.4cm} $ 64 \times ( 3 \times 3 \times P ) $ \\
\hspace{0.4cm} $ 2 \sim L-1 $ & Conv + BN + ReLU & \hspace{0.4cm} $ 64 \times ( 3 \times 3 \times 64 ) $ \\
\hspace{0.4cm} $L$ & Conv & \hspace{0.4cm} $ P \times ( 3 \times 3 \times 64 ) $ \\
\multicolumn{3}{l}{\textbf{Convolution Layer}: Conv with filter size of $1 \times (M_a \times M_b \times P)$ } \\
\hline
\multicolumn{3}{l}{\textbf{Output}: Denoised channel matrix with the size of $M_a \times M_b$} \\
\hline
\end{tabular}\vspace{-0.6cm}
\end{table}
(a) \textbf{Input Layer}: Assume that there are $P$ observed pilot bits $\{\mathbf{y}(0), \mathbf{y}(1), \cdots, \mathbf{y}(P-1)\}$, where $P=N_a$ or $N_b$ as defined in Fig. \ref{Figure:AmBC-frame}. To exploit the spatial and temporal correlations, we first reshape each $\mathbf{y}(p), p \in \{0,1,\cdots,P-1\}$, into a two-dimensional (2D) spatial matrix, denoted by $\mathbf{Y}(p) \in \mathbb{R}^{M_a \times M_b}$, where $1\leq M_a,M_b \leq M$ and $M_aM_b=M$, and then stack the matrices into a 3D matrix as the network input:
\begin{equation}\label{Y}
\mathbf{Y} = \mathcal{F} ( [\mathbf{Y}(0), \mathbf{Y}(1), \cdots, \mathbf{Y}(P-1)] ),
\end{equation}
where $\mathbf{Y} \in \mathbb{R}^{ M_a \times M_b \times P}$ denotes the input of the network and $\mathcal{F}(\cdot): \mathbb{R}^{M_a \times M_bP}\to\mathbb{R}^{M_a \times M_b \times P}$ is the mapping function for stacking matrices.
Note that the proposed CRLD is a universal network structure.
Considering the practical requirement of the coverage area and the adopted carrier frequency of the reader \cite{hoang2020ambient},
we provide a realization of the proposed CRLD where the network input size is set as $M=64$, $M_a=8$, $M_b=8$, and $P=2$.
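The stacking map $\mathcal{F}$ in (\ref{Y}) then amounts to a reshape of each pilot vector followed by a stack along a third axis; a sketch with the illustrative sizes above:
\begin{verbatim}
import numpy as np

def stack_pilots(ys, M_a=8, M_b=8):
    """Map P received vectors y(p) of length M = M_a*M_b to the 3D
    network input Y of shape (M_a, M_b, P), i.e., the map F above."""
    Y = np.stack([y.reshape(M_a, M_b) for y in ys], axis=-1)
    assert Y.shape == (M_a, M_b, len(ys))
    return Y
\end{verbatim}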
(b) \textbf{Denoising blocks}: The CRLD has $B$ denoising blocks and each of them has an identical structure, which consists of two types of layers denoted by two different colors, as shown in Fig. \ref{Figure:CRLD}:
(i) Conv+BN+ReLU
for the first $L-1$ layers: a convolution\footnotemark\footnotetext{Note that each convolution output is the result of the filter and all the $P$ temporal observations. Thus, the features learned by CRLD contain temporal information, which contributes to accurate channel estimation.} (Conv) is applied first, and then batch normalization \cite{ioffe2015batch} (BN) is applied after the Conv to improve the training speed and stability. Finally, to enhance the representation ability of the network, ReLU is adopted as the activation function, which is defined as $y = \max(0,x)$;
(ii) Conv for the last layer: the Conv is adopted to obtain the residual noise $\mathbf{S}_i$ for the subsequent element-wise subtraction.
Therefore, the $i$-th, $i=1,2,\cdots,B$, $L$-layer subnetwork can be modeled as a non-linear function $\mathcal{R}_{\theta_i}(\cdot)$ with parameters $\theta_i$. For each denoising block, we have
\vspace{-0.1cm}
\begin{equation}\label{Y_i}
\mathbf{Y}_{i} = \mathbf{Y}_{i-1} - \mathbf{S}_i = \mathbf{Y}_{i-1} - \mathcal{R}_{\theta_i}(\mathbf{Y}_{i-1}), \forall i. \vspace{-0.1cm}
\end{equation}
Here, $\mathbf{Y}_{0}=\mathbf{Y}$, and $\mathbf{Y}_{i-1}$ and $\mathbf{Y}_{i}$ denote the input and output of the $i$-th denoising block, respectively. In addition, $\mathbf{S}_i = \mathcal{R}_{\theta_i}(\mathbf{Y}_{i-1})$ is the residual term between $\mathbf{Y}_{i-1}$ and $\mathbf{Y}_{i}$, and thus it is referred to as the residual noise in the literature.
(c) \textbf{Convolutional layer}: A convolutional layer, denoted by the green color box in Fig. \ref{Figure:CRLD}, is added between the last denoising block and the network output to combine the $P$ denoised channel matrices to reconstruct an $M_a$-by-$M_b$ output.
In summary, to further improve the denoising performance for channel recovery, the proposed CRLD architecture adopts $B$ denoising blocks to remove the noise gradually and finally exploits a Conv layer to reconstruct the output.
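A sketch of this architecture follows, using the hyperparameters of Table I; PyTorch is our own choice of framework, and implementing the final combining layer as a $1\times 1$ convolution across the $P$ feature maps is one possible reading of the last row of Table I.
\begin{verbatim}
import torch.nn as nn

class DenoisingBlock(nn.Module):
    """One L-layer subnetwork R_theta_i; forward computes
    Y_i = Y_{i-1} - R_theta_i(Y_{i-1})."""
    def __init__(self, P=2, L=8, ch=64):
        super().__init__()
        layers = [nn.Conv2d(P, ch, 3, padding=1),
                  nn.BatchNorm2d(ch), nn.ReLU()]
        for _ in range(L - 2):
            layers += [nn.Conv2d(ch, ch, 3, padding=1),
                       nn.BatchNorm2d(ch), nn.ReLU()]
        layers += [nn.Conv2d(ch, P, 3, padding=1)]  # last layer: Conv only
        self.net = nn.Sequential(*layers)

    def forward(self, y):
        return y - self.net(y)                      # subtract residual noise S_i

class CRLD(nn.Module):
    def __init__(self, B=3, P=2, L=8):
        super().__init__()
        self.blocks = nn.Sequential(*[DenoisingBlock(P, L) for _ in range(B)])
        self.combine = nn.Conv2d(P, 1, 1)           # merge P denoised maps

    def forward(self, y):                           # y: (batch, P, M_a, M_b)
        return self.combine(self.blocks(y)).squeeze(1)
\end{verbatim}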
\vspace{-0.1cm}
\subsection{CRLD-based Estimation Algorithm}
Based on the proposed CRLD architecture, we then design a CRLD-based channel estimation scheme, which consists of an offline training phase and an online estimation phase.
\subsubsection{Offline Training Phase}
Consider a training set
\begin{equation}\label{}
(\Omega_\mathbf{Y}, \Omega_\mathbf{X}) \hspace{-0.1cm}=\hspace{-0.1cm} \big\{\hspace{-0.05cm}(\mathbf{Y}^{(1)},\mathbf{X}^{(1)}),\cdots,(\mathbf{Y}^{(K)},
\mathbf{X}^{(K)})\hspace{-0.05cm}\big\}.
\end{equation}
Here, $(\mathbf{Y}^{(k)},\mathbf{X}^{(k)})$, $k \in \{ 1,2,\cdots,K\}$, denotes the $k$-th example of the training set. $\mathbf{Y}^{(k)} \in \mathbb{R}^{ M_a \times M_b \times P}$ is the network input, as defined in (\ref{Y}). $\mathbf{X}^{(k)} \in \mathbb{R}^{ M_a \times M_b }$ is the label which is the matrix form of $\mathbf{w}$ or $\mathbf{h}$, as defined in (\ref{y_denoise}).
According to the MMSE criterion \cite{kay1993fundamentals}, the cost function of the offline training phase can be expressed as \vspace{-0.2cm}
\begin{equation}\label{}
J_{\mathrm{MSE}}(\theta) = \sum\limits_{k=1}^{K}||\mathbf{X}^{(k)} - \tilde{\mathbf{X}}^{(k)}_{\theta}||_F^2, \vspace{-0.2cm}
\end{equation}
where $\tilde{\mathbf{X}}^{(k)}_{\theta}$ is the output of CRLD.
We can then use the backpropagation (BP) algorithm \cite{goodfellow2016deep}
to progressively update the network weights and finally obtain a well-trained CRLD:
\begin{equation}\label{CRLD}
h_{\theta^*}(\mathbf{Y}) = f_{\theta^*_{B+1}}\left(\mathbf{Y} - \sum\limits_{i = 1}^{B}\mathcal{R}_{\theta_i^*}\left(\mathbf{Y}_{i-1}\right)\right),
\end{equation}
where $h_{\theta^*}(\cdot)$ denotes the expression of the well-trained CRLD with the well-trained weight $\theta^*=\{\theta_1^*,\theta_2^*,\cdots,\theta_{B+1}^*\}$.
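A minimal training loop matching the cost $J_{\mathrm{MSE}}(\theta)$ above can be sketched as follows; the optimizer and learning rate are our assumptions, since the letter only specifies BP training with early stopping.
\begin{verbatim}
import torch

def train_crld(model, Y_train, X_train, iters=1000, lr=1e-3):
    """Y_train: (K, P, M_a, M_b) noisy inputs;
    X_train: (K, M_a, M_b) channel labels."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = ((model(Y_train) - X_train) ** 2).sum()  # J_MSE(theta)
        loss.backward()                                 # BP algorithm
        opt.step()
    return model
\end{verbatim}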
\subsubsection{Online Estimation Phase}
Given the pilot-based test data $\mathbf{Y}_{\mathrm{test}}$, we can then directly obtain the output of CRLD: $\tilde{\mathbf{X}}_{\mathrm{CRLD}} = h_{\theta^*}(\mathbf{Y}_{\mathrm{test}})$. Reshaping $\tilde{\mathbf{X}}_{\mathrm{CRLD}}$ into an $M$-by-$1$ vector $\tilde{\mathbf{x}}_{\mathrm{CRLD}}$, we finally obtain the estimated channel coefficient vector, denoted by $\tilde{\mathbf{w}} = \tilde{\mathbf{x}}_{\mathrm{CRLD}}$ or $\tilde{\mathbf{h}}=\tilde{\mathbf{x}}_{\mathrm{CRLD}}$.
\subsubsection{Algorithm Steps}
The proposed algorithm is summarized in \textbf{Algorithm 1}, where $i$ is the iteration index and $I$ is the maximum iteration number, which is controlled by an early stopping rule \cite{goodfellow2016deep}.
\begin{table}[t]
\small
\centering
\begin{tabular}{l}
\toprule[1.8pt] \vspace{-0.4cm}\\
\hspace{-0.1cm} \textbf{Algorithm 1} {CRLD-based Estimation Algorithm} \\
\toprule[1.8pt] \vspace{-0.3cm}\\
\textbf{Initialization:} $i = 0$ \\
\textbf{Offline Training Phase:} \\
1:\hspace{0.75cm}\textbf{Input:} Training set $({\Omega }_{\mathbf{Y}},{\Omega }_{\mathbf{X}})$\\
2:\hspace{1.1cm}\textbf{while} $i \leq I $ \textbf{do} weights update: \\
3:\hspace{1.6cm}Update $\theta$ by BP algorithm on $J_{\mathrm{MSE}}(\theta)$ \\
\hspace{1.8cm} $i = i + 1$ \\
4:\hspace{1.1cm}\textbf{end while} \\
5:\hspace{0.75cm}\textbf{Output}: Well-trained CRLD ${h}_{\theta^*}( \cdot )$ as defined in (\ref{CRLD})\\
\textbf{Online Estimation Phase:} \\
6:\hspace{0.6cm}\textbf{Input:} Test data $\mathbf{Y}_{\mathrm{test}}$ \\
7:\hspace{0.95cm}\textbf{do} channel estimation with CRLD ${h}_{\theta^*}( \cdot )$ \\
8:\hspace{0.6cm}\textbf{Output:} $\tilde{\mathbf{X}}_{\mathrm{CRLD}}$, i.e., $\tilde{\mathbf{w}}$ or $\tilde{\mathbf{h}}$. \\
\bottomrule[1.8pt]
\end{tabular}\vspace{-0.5cm}
\end{table}
\vspace{-0.4cm}
\subsection{Theoretical Analysis}
To offer more insight into the proposed CRLD method, we now analyze the output of CRLD and characterize its properties theoretically.
Note that the BN and the ReLU operations are adopted for enhancing the training speed and the network stability. Thus, we mainly investigate the effect of the convolution operations on the CRLD output.
For the convenience of analysis, let $\tilde{M}_b = PM_b$ and consider the 2D matrix \vspace{-0.1cm}
\begin{equation}\label{Y_2D}
\tilde{\mathbf{Y}} = [\mathbf{Y}(0), \mathbf{Y}(1), \cdots, \mathbf{Y}(P-1)] \in \mathbb{R}^{ M_a \times {\tilde{M}_b}} \vspace{-0.1cm}
\end{equation}
as the input of CRLD; the analysis can be easily extended to the 3D input based on (\ref{Y}).
Since the convolution operation can be formulated as a product of two matrices \cite{goodfellow2016deep}, the well-trained subnetwork can be expressed as
\begin{equation}\label{R_analysis_v1}
\mathcal{R}_{\theta_i}(\tilde{\mathbf{Y}}_{i-1}) = \tilde{\mathbf{Y}}_{i-1}\mathbf{W}^*_i,
\end{equation}
where $\mathbf{W}^*_i$ represents the well-trained network weights of the subnetwork.
Thus, based on (\ref{Y_i}), we have $\tilde{\mathbf{Y}}_i = \tilde{\mathbf{Y}}\prod_{j=1}^{i}\tilde{\mathbf{W}}^*_{j},$
where $\tilde{\mathbf{Y}}_0 =\tilde{\mathbf{Y}}$ and $\tilde{\mathbf{W}}^*_j = \mathbf{I}_{\tilde{M}_b} - \mathbf{W}^*_j$. In this case, we can rewrite (\ref{R_analysis_v1}) as \vspace{-0.3cm}
\begin{equation}\label{R_analysis_v2}
\mathcal{R}_{\theta_i}(\tilde{\mathbf{Y}}_{i-1}) = \tilde{\mathbf{Y}}\Big(\prod_{j=1}^{i-1}\tilde{\mathbf{W}}^*_j\Big)\mathbf{W}^*_i, \quad \forall i \in \{2,3,\cdots, B\}. \vspace{-0.3cm}
\end{equation}
Thus, $\mathbf{Y}_B$ can be written as \vspace{-0.1cm}
\begin{equation}\label{Y_B_P1}
\tilde{\mathbf{Y}}_B = \tilde{\mathbf{Y}} - \sum\limits_{i = 1}^{B}\mathcal{R}_{\theta_i}\left(\tilde{\mathbf{Y}}_{i-1}\right) = \tilde{\mathbf{Y}} (\mathbf{I}_{\tilde{M}_b}-\mathbf{W}^*), \vspace{-0.1cm}
\end{equation}
where $\mathbf{W}^* = \mathbf{W}^*_1 + \sum_{i=2}^{B}\big(\prod_{j=1}^{i-1}\tilde{\mathbf{W}}^*_j\big)\mathbf{W}^*_i$, and $\mathbf{I}_{\tilde{M}_b}$ denotes the ${\tilde{M}_b}$-by-${\tilde{M}_b}$ identity matrix.
Finally, the well-trained CRLD, i.e., the CRLD-based estimator can be expressed as
\begin{equation}\label{X_CRLD}
\mathbf{X}_{\mathrm{CRLD}} = \tilde{\mathbf{Y}} (\mathbf{I}_{\tilde{M}_b}-\mathbf{W}^*) \mathbf{W}^*_{B+1},
\end{equation}
where $\mathbf{W}^*_{B+1}$ denotes the well-trained weights of the convolutional layer $f_{\theta_{B+1}}(\cdot)$.
On the other hand, the expression of the optimal MMSE estimator is
\begin{equation}\label{X_MMSE}
\mathbf{X}_{\mathrm{MMSE}} = \tilde{\mathbf{Y}} \left(\mathbf{I}_{{\tilde{M}_b}} - \beta \mathbf{S}^H \left(\beta \mathbf{S} \mathbf{S}^H + \mathbf{R_X}^{-1}\right)^{-1} \mathbf{S}\right) \beta \mathbf{S}^H \mathbf{R}_{\mathbf{X}},
\end{equation}
where $\mathbf{X}\in \mathbb{R}^{M_a \times M_b}$ is the matrix form of the channel vector $\mathbf{x}$, $\mathbf{S} = [\mathbf{I}_{M_b},\mathbf{I}_{M_b},\cdots,\mathbf{I}_{M_b}] \in \mathbb{R}^{M_b \times {\tilde{M}_b}}$, $\mathbf{R_X}= E(\mathbf{X}^H \mathbf{X})$ denotes the statistical correlation matrix, and $\beta = \frac{1}{M_a\sigma_u^2}$ (we write $\beta$ to avoid confusion with the reflection coefficient $\alpha$).
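The closed form (\ref{X_MMSE}) can be evaluated directly once $\mathbf{R_X}$ and $\sigma_u^2$ are available; below is a numpy sketch of ours, where $(\cdot)^H$ reduces to $(\cdot)^T$ in the real-valued model, together with the NMSE metric anticipated from Section IV.
\begin{verbatim}
import numpy as np

def mmse_estimate(Y_tilde, R_X, sigma_u2, M_a, M_b, P):
    """X_MMSE = Y (I - b S^T (b S S^T + R_X^{-1})^{-1} S) b S^T R_X
    with b = 1/(M_a sigma_u^2); Y_tilde has shape (M_a, P*M_b)."""
    b = 1.0 / (M_a * sigma_u2)
    S = np.hstack([np.eye(M_b)] * P)           # M_b x (P*M_b)
    inner = np.linalg.inv(b * S @ S.T + np.linalg.inv(R_X))
    W = b * S.T @ inner @ S                    # (P*M_b) x (P*M_b)
    return Y_tilde @ (np.eye(P * M_b) - W) @ (b * S.T @ R_X)

def nmse(x, x_hat):
    return np.sum((x - x_hat) ** 2) / np.sum(x ** 2)
\end{verbatim}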
It can be seen from (\ref{X_CRLD}) that the weight matrices $\mathbf{W}^*$ and $ \mathbf{W}_{B+1}^*$ can be learned from the training set through the offline training of the proposed CRLD.
Since we adopt the MMSE-based cost function for the offline training, if the training set is sufficiently large, the proposed CRLD-based estimator can learn and mimic the expression of the optimal estimator under the MMSE criterion \cite{kay1998fundamentals}, i.e., the expression of the optimal MMSE estimator in (\ref{X_MMSE}).
In fact, the proposed CRLD achieves the optimal MMSE performance when the weights approach $\mathbf{W}^*=\frac{1}{M_a\sigma_u^2} \mathbf{S}^H \left(\frac{1}{M_a\sigma_u^2} \mathbf{S} \mathbf{S}^H + \mathbf{R_X}^{-1}\right)^{-1} \mathbf{S}$ and $\mathbf{W}^*_{B+1}=\frac{1}{M_a\sigma_u^2} \mathbf{S}^H \mathbf{R}_{\mathbf{X}}$ for a large enough training set. This will be verified through the simulation results in Section IV.
\section{Simulation Results}
In this section, we provide simulation results to verify the efficiency of the proposed algorithm. As shown in Fig. 1, a classical AmBC system with a $64$-element multi-antenna reader is considered for simulation. In the simulation, the ambient source is modeled by a Gaussian random variable and a Rayleigh channel model is adopted, as defined in (\ref{y_v2}). The hyperparameters of the proposed CRLD are summarized in Table I, where we set $M_a = M_b = 8$, $B = 3$, $L = 8$, and $P = N_a = N_b \in [2,16]$. To evaluate the channel estimation performance, we compare the proposed CRLD method with the optimal MMSE method and the least squares (LS) method \cite{kay1993fundamentals}. The normalized MSE (NMSE) is adopted as the performance metric, which is defined as \vspace{-0.1cm}
\begin{equation}\label{}
\mathrm{NMSE} = {E \left( \|\mathbf{x} - \tilde{\mathbf{x}}\|_2^2 \right)} / {E \left( \|\mathbf{x}\|_2^2 \right)}, \vspace{-0.1cm}
\end{equation}
where $\mathbf{x}$ and $\tilde{\mathbf{x}}$ are the ground truth and the estimated value, respectively.
All the presented simulation results are obtained through averaging $100,000$ Monte Carlo realizations.
\begin{figure}[t]
\centering\vspace{-0.1cm}
\includegraphics[width=3.1in,height=2.2in]{NMSE_SNR.pdf}\vspace{-0.1cm}
\begin{tabular}{l l}
\hspace{0.8cm}{\scriptsize{(a) $\mathbf{h}$ for $c(n)=0$. }} \hspace{1.7cm}{\scriptsize{(b) $\mathbf{w}$ for $c(n)=1$. }} \hspace{0.3cm}\\
\end{tabular}\vspace{-0.26cm}
\caption{NMSE versus SNR under $\zeta$ = $-5$ dB and $N_a = N_b = 2$.}
\vspace{-0.2cm}
\label{Figure:NMSE_SNR}
\end{figure}
We first evaluate the NMSE performance for different SNRs in Fig. \ref{Figure:NMSE_SNR}. Note that the optimal MMSE method requires the perfect statistical channel correlation matrices, which are not always available in practice. Thus, we merely present it for benchmarking; its expression is given in (\ref{X_MMSE}).
In particular, the proposed CRLD method approaches the optimal MMSE method, which is based on the perfect statistical channel correlation matrices, in all considered scenarios.
On the other hand, it can be seen from Fig.~\ref{Figure:NMSE_SNR} that in the high SNR regime, the performance of the LS method approaches that of the optimal MMSE method since the impact of noise is limited. However, the LS method still has a large performance gap compared with the MMSE method and the proposed CRLD method in the low SNR regime. For example, the CRLD can achieve an SNR gain of $4$~$\mathrm{dB}$ at $\mathrm{NMSE} \approx 10^{-0.5}$ compared with the LS method. This is because the LS method treats the channel coefficients as deterministic but unknown constants, while the proposed method and the MMSE method model the channel as a random variable. Thus, the latter two schemes can exploit the prior statistical knowledge of the channel matrices to further improve the estimation accuracy.
Fig. \ref{Figure:NMSE_pilot} presents the NMSE results for different numbers of pilots in a noisy communication environment. It is shown that the NMSEs of all the methods decrease with an increasing number of pilot symbols, and the CRLD method always achieves the same performance as the optimal MMSE method. The reason is that our proposed method can efficiently exploit the temporal correlations of the pilot signals to improve the accuracy of channel estimation.
\begin{figure}[t]
\centering\vspace{-0.1cm}
\includegraphics[width=3.1in,height=2.2in]{NMSE_N.pdf}\vspace{-0.1cm}
\begin{tabular}{l l}
\hspace{0.8cm}{\scriptsize{(a) $\mathbf{h}$ for $c(n)=0$. }} \hspace{1.7cm}{\scriptsize{(b) $\mathbf{w}$ for $c(n)=1$. }} \hspace{0.3cm}\\
\end{tabular}\vspace{-0.26cm}
\caption{\hspace{-0.1cm}NMSE versus the number of pilots under SNR = $-6$ dB, $\zeta$ = $-5$ dB.}
\label{Figure:NMSE_pilot}
\end{figure}
\begin{table}[t]
\normalsize
\caption{Computational complexity of different estimation algorithms }
\vspace{-0.1 cm}
\centering
\small
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{c c c}
\hline
{\textbf{\footnotesize{Algorithm}}} & {\textbf{\footnotesize{Online Estimation}}} & {\textbf{\footnotesize{Offline Training}}} \\
\hline
{\textbf{\footnotesize{LS}}}
& \footnotesize{$O( MP)$}
& -
\\
{\textbf{\footnotesize{MMSE}}}
& \footnotesize{$O(P^3+MP^2)$}
& -
\\
{{\textbf{\footnotesize{CRLD}}}} &
\scriptsize{{$O\left( BM\sum\limits_{l = 1}^{L}n_{l-1} s_l^2 n_l\right)$}} &
\scriptsize{$O\left( N_tIBM\sum\limits_{l = 1}^{L}n_{l-1} s_l^2 n_l \right)$}
\\
\hline
\end{tabular}
\vspace{-0.3cm}
\end{table}
Finally, we investigate the computational complexities of different algorithms and summarize them in Table II. Here, $s_l$ denotes the side length of the $l$-th convolutional layer's filter, and $n_{l-1}$ and $n_l$ represent the depths of the input feature map and the output feature map of the $l$-th convolutional layer, respectively.
In addition, $N_t$ denotes the number of training examples.
It is shown that the computational complexity of the LS and MMSE methods comes only from the online estimation, while the CRLD has an additional complexity due to the offline training.
Specifically, the LS and MMSE methods have fixed complexities, while the complexity of the CRLD changes with the network size.
Correspondingly, we then execute these algorithms on a PC with an Intel i7-8700 3.20 GHz CPU and an Nvidia GeForce RTX 2070 GPU
under $B=3, M=64, P=2, L=8$, $s_l=3$ for $l\in\{1, 2,3,\cdots, 8\}$, $n_l=64$ for $l\in \{1, 2,3,\cdots, 7\}$, $n_0=n_8=2$,
and the measured time costs are $2.5 \times 10^{-5}$~s (LS method), $1.6 \times 10^{-4}$~s (MMSE method), and $1.2 \times 10^{-4}$~s (CRLD method).
Therefore, although the proposed CRLD has a higher complexity compared with the MMSE and LS methods, the associated time cost can be greatly reduced by exploiting the parallel computing capability of the GPU.
\section{Conclusion}
This letter modeled channel estimation in AmBC systems as a denoising problem and developed a DReL approach to solve it. We first designed a communication protocol and then proposed a novel CRLD-based estimation scheme, which consists of an offline training phase and an online estimation phase. The proposed CRLD adopts multiple 3D denoising blocks to intelligently exploit the spatial and temporal correlations of the pilot signals, which further improves the estimation accuracy.
Theoretical analysis was also provided to characterize the properties of CRLD.
Simulation results showed that the proposed method is able to achieve performance close to the optimal performance obtained by the MMSE method.
\bibliographystyle{ieeetr}
\setlength{\baselineskip}{10pt}
\section{Introduction}
\newcommand{\zk}{\ensuremath{\mathfrak{k}}}
Let $a$ and $q$ be two positive coprime integers, $0<a<q$.
By the Euclidean algorithm, a rational $a/q$ can be uniquely represented as a regular continued fraction
\begin{equation}\label{exe}
\frac{a}{q}=[0;b_1,\dots,b_s]=
\cfrac{1}{b_1 +\cfrac{1}{b_2 +\cfrac{1}{b_3+\cdots +\cfrac{1}{b_s}}}}
,\qquad b_s \ge 2.
\end{equation}
Assuming $q$ is known, we use $b_j(a)$, $j=1,\ldots,s=s(a)$ to denote the partial quotients of $a/q$; that is,
\[
\frac aq := [ 0; b_1(a),\ldots,b_{s}(a)].
\]
Zaremba's famous conjecture \cite{zaremba1972methode} posits that there is an absolute constant $\zk$ with the following property:
for any positive integer $q$ there exists $a$ coprime to $q$ such that in the continued fraction expansion (\ref{exe}) all partial quotients are bounded:
\[
b_j (a) \le \zk,\,\, 1\le j \le s = s(a).
\]
In fact, Zaremba conjectured that $\zk=5$.
For large prime $q$, even $ \zk=2$ should be enough, as conjectured by Hensley \cite{hensley_SL2}, \cite{hensley1996}.
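The property in question is easy to test numerically: the partial quotients of $a/q$ are produced by the Euclidean algorithm, and a witness $a$ can be found by brute force. A short Python illustration (the function names are ours):
\begin{verbatim}
from math import gcd

def partial_quotients(a, q):
    """a/q = [0; b_1, ..., b_s] for 0 < a < q, gcd(a, q) = 1."""
    bs = []
    while a:
        bs.append(q // a)
        q, a = a, q % a
    return bs

def zaremba_witness(q, k=5):
    """Some a coprime to q with all partial quotients <= k, or None."""
    return next((a for a in range(1, q) if gcd(a, q) == 1
                 and max(partial_quotients(a, q)) <= k), None)

assert partial_quotients(9, 20) == [2, 4, 2]   # 9/20 = [0; 2, 4, 2]
assert zaremba_witness(20) == 9
\end{verbatim}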
This topic has been rather popular recently; see, e.g., the papers
\cite{bourgain2011zarembas,bourgain2014zarembas}, \cite{FK}, \cite{hensley1989distribution}--\cite{hensley1996},
\cite{KanIV}--\cite{Mosh_A+B},
\cite{Nied} and many others.
The history of the question can be found, e.g., in \cite{NG_S}.
Here we obtain the following "modular"\, version of Zaremba's conjecture.
The first theorem in this direction was
proved by Hensley
in \cite{hensley_SL2} and after that in \cite{MOW_MIX}, \cite{MOW_Schottky}.
\begin{theorem}
There is an absolute constant $\zk$ such that for any prime number $p$
there exist positive integers $q = O(p^{30})$ with $q\equiv 0 \pmod p$ and $a$ coprime to $q$ such that the ratio $a/q$ has all partial quotients bounded by $\zk$.
\label{t:main_intr}
\end{theorem}
Also, we can say something nontrivial about finite continued fractions with $\zk=2$.
This distinguishes our paper from \cite{bourgain2011zarembas}, \cite{bourgain2014zarembas}, \cite{KanIV}, \cite{MOW_MIX}, \cite{MOW_Schottky}.
\begin{theorem}
There is an absolute constant $C>0$ such that for any prime number $p$
there exist positive integers $q = O(p^{C})$ with $q\equiv 0 \pmod p$ and $a$ coprime to $q$ such that the ratio $a/q$ has all partial quotients bounded by $2$.
\label{t:main_intr2}
\end{theorem}
Our proof uses growth results in $\SL_2 (\F_p)$ and some well--known facts about the representation theory of $\SL_2 (\F_q)$.
We study a combinatorial question about the intersection of powers of a certain set of matrices $A \subseteq \SL_2 (\F_q)$ with an arbitrary Borel subgroup, and this approach appears to be new.
In principle, results from \cite{hensley_SL2} can be written in a form similar to Theorem \ref{t:main_intr} in an effective way but the dependence of $q$ on $p$ in
\cite{hensley_SL2}
is rather poor.
Thus Theorem \ref{t:main_intr} can be considered as an explicit version (with very concrete constants) of Hensley's results, as well as of the rather effective Theorem 2 from \cite{MOW_Schottky}.
Also, the methods of paper \cite{hensley_SL2} and papers \cite{MOW_MIX}, \cite{MOW_Schottky} are
very
different from ours.
We thank I.D. Kan for useful discussions and remarks.
\section{Definitions}
Let $\Gr$ be a group with the identity $1$.
Given two sets $A,B\subset \Gr$, define the \textit{product set} of $A$ and $B$ as
$$AB:=\{ab ~:~ a\in{A},\,b\in{B}\}\,.$$
In a similar way we define the higher product sets, e.g., $A^3$ is $AAA$.
Let $A^{-1} := \{a^{-1} ~:~ a\in A \}$.
The Ruzsa triangle inequality \cite{Ruz} says that
\[
|C| |AB| \le |AC||C^{-1}B|
\]
for any sets $A,B,C \subseteq \Gr$.
As usual, for two subsets $A,B$ of a group $\Gr$, denote by
\[
\E(A,B) = |\{ (a,a_1,b,b_1) \in A^2 \times B^2 ~:~ a^{-1} b = a^{-1}_1 b_1 \}|
\]
the {\it common energy} of $A$ and $B$.
Clearly, $\E(A,B) = \E(B,A)$ and by the Cauchy--Schwarz inequality
\[
\E(A,B) |A^{-1} B| \ge |A|^2 |B|^2 \,.
\]
We use representation function notations like $r_{AB} (x)$ or $r_{AB^{-1}} (x)$, which counts the number of ways $x \in \Gr$ can be expressed as a product $ab$ or $ab^{-1}$ with $a\in A$, $b\in B$, respectively.
For example, $|A| = r_{AA^{-1}}(1)$ and $\E (A,B) = r_{AA^{-1}BB^{-1}}(1) =\sum_x r^2_{A^{-1}B} (x)$.
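All of these quantities can be computed by brute force for small groups; a Python sketch, where mul and inv stand for the multiplication and inversion maps of the group at hand and the elements are assumed hashable:
\begin{verbatim}
from collections import Counter
from itertools import product

def product_set(A, B, mul):
    return {mul(a, b) for a, b in product(A, B)}

def energy(A, B, mul, inv):
    """E(A, B) = #{(a, a1, b, b1) : a^{-1} b = a1^{-1} b1}
              = sum_x r_{A^{-1}B}(x)^2."""
    r = Counter(mul(inv(a), b) for a, b in product(A, B))
    return sum(v * v for v in r.values())
\end{verbatim}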
In this paper we use the same letter to denote a set $A\subseteq \Gr$ and its characteristic function $A: \Gr \to \{0,1 \}$.
We write $\F^*_q$ for $\F_q \setminus \{0\}$.
The signs $\ll$ and $\gg$ are the usual Vinogradov symbols.
All logarithms are to base $2$.
\section{On the representation theory of $\SL_2 (\F_p)$ and basis properties of its subsets}
First of all, we recall some notions and simple facts from the representation theory, see, e.g., \cite{Naimark} or \cite{Serr_representations}.
For a finite group $\Gr$ let $\FF{\Gr}$ be the set of all irreducible unitary representations of $\Gr$.
It is well--known that the size of $\FF{\Gr}$ coincides with the number of conjugacy classes of $\Gr$.
For $\rho \in \FF{\Gr}$ denote by $d_\rho$ the dimension of this representation.
We write $\langle \cdot, \cdot \rangle$ for the corresponding Hilbert--Schmidt scalar product
$\langle A, B \rangle = \langle A, B \rangle_{HS}:= \tr (AB^*)$, where $A,B$ are any two matrices of the same sizes.
Put $\| A\| = \sqrt{\langle A, A \rangle}$.
Clearly, $\langle \rho(g) A, \rho(g) B \rangle = \langle A, B \rangle$ and $\langle AX, Y\rangle = \langle X, A^* Y\rangle$.
Also, we have $\sum_{\rho \in \FF{\Gr}} d^2_\rho = |\Gr|$.
For any $f:\Gr \to \mathbb{C}$ and $\rho \in \FF{\Gr}$ define the matrix $\FF{f} (\rho)$, which is called the Fourier transform of $f$ at $\rho$ by the formula
\begin{equation}\label{f:Fourier_representations}
\FF{f} (\rho) = \sum_{g\in \Gr} f(g) \rho (g) \,.
\end{equation}
Then the inverse formula takes place
\begin{equation}\label{f:inverse_representations}
f(g) = \frac{1}{|\Gr|} \sum_{\rho \in \FF{\Gr}} d_\rho \langle \FF{f} (\rho), \rho (g^{-1}) \rangle \,,
\end{equation}
and the Parseval identity is
\begin{equation}\label{f:Parseval_representations}
\sum_{g\in \Gr} |f(g)|^2 = \frac{1}{|\Gr|} \sum_{\rho \in \FF{\Gr}} d_\rho \| \FF{f} (\rho) \|^2 \,.
\end{equation}
The main property of the Fourier transform is the convolution formula
\begin{equation}\label{f:convolution_representations}
\FF{f*g} (\rho) = \FF{f} (\rho) \FF{g} (\rho) \,,
\end{equation}
where the convolution of two functions $f,g : \Gr \to \mathbb{C}$ is defined as
\[
(f*g) (x) = \sum_{y\in \Gr} f(y) g(y^{-1}x) \,.
\]
Finally, it is easy to check that for any matrices $A,B$ one has $\| AB\| \le \| A\|_{o} \| B\|$ and $\| A\|_{o} \le \| A \|$, where the operator $l^2$--norm $\| A\|_{o}$ is just the absolute value of the maximal eigenvalue of $A$.
In particular, it shows that $\| \cdot \|$ is indeed a matrix norm.
Now consider the group $\SL_2 (\F_q)$ of matrices
\[
g=
\left( {\begin{array}{cc}
a & b \\
c & d \\
\end{array} } \right) = (ab|cd) \,, \quad \quad a,b,c,d\in \F_q \,, \quad \quad ad-bc=1 \,.
\]
Clearly, $|\SL_2 (\F_q)| = q^3-q$.
Denote by $\B$ the standard Borel subgroup of all upper--triangular matrices from $\SL_2 (\F_q)$, by $\U \subset \B$ denote the standard unipotent subgroup of $\SL_2 (\F_q)$ of matrices $(1u|01)$, $u \in \F_q$ and by $\D \subset \B$ denote the subgroup of diagonal matrices.
$\B$ and all its conjugates form all maximal proper subgroups of $\SL_2 (\F_p)$.
Also, let $I_n$ be the identity matrix and $Z_n$ be the zero matrix of size $n\times n$.
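For small $q$ all of these objects can be enumerated directly; the following Python sketch over a prime field confirms $|\SL_2 (\F_q)| = q^3-q$ and $|\B| = q(q-1)$.
\begin{verbatim}
from itertools import product

def sl2(p):
    return [(a, b, c, d) for a, b, c, d in product(range(p), repeat=4)
            if (a * d - b * c) % p == 1]

def mul(g, h, p):
    a, b, c, d = g
    e, f, x, y = h
    return ((a*e + b*x) % p, (a*f + b*y) % p,
            (c*e + d*x) % p, (c*f + d*y) % p)

p = 5
G = sl2(p)
B = [g for g in G if g[2] == 0]        # upper-triangular: c = 0
assert len(G) == p**3 - p and len(B) == p * (p - 1)
\end{verbatim}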
A detailed description of the representation theory of $\SL_2 (\F_q)$ can be found in \cite[Chapter II, Section 5]{Naimark}.
We formulate the main result from the book \cite{Naimark} concerning this theme.
\begin{theorem}
Let $q$ be an odd prime power.
There are $q+3$ nontrivial representations of $\SL_2 (\F_q)$, namely,\\
$\bullet~$ $\frac{q-3}{2}$ representations $T_\chi$ of dimension $q+1$ indexed via $\frac{q-3}{2}$ nontrivial multiplicative characters $\chi$ on $\F^*_q$, $\chi^2 \neq 1$,\\
$\bullet~$ a representation $\tilde{T}_1$ of dimension $q$,\\
$\bullet~$ two representations $T^{+}_{\chi_1}$, $T^{-}_{\chi_1}$ of dimension $\frac{q+1}{2}$, $\chi^2_1 = 1$, \\
$\bullet~$ two representations $S^{+}_{\pi_1}$, $S^{-}_{\pi_1}$ of dimension $\frac{q-1}{2}$, \\
$\bullet~$ $\frac{q-1}{2}$ representations $S_\pi$ of dimension $q-1$ indexed via $\frac{q-1}{2}$ nontrivial multiplicative characters $\pi$ on an arbitrary quadratic extension of $\F_q$, $\pi^2 \neq 1$.
\label{t:Naimark}
\end{theorem}
By $d_{\min}$, $d_{\max}$ denote the minimum/maximum over dimensions of all nontrivial representations of a group $\Gr$.
Thus the result above tells us that in the case $\Gr = \SL_2 (\F_q)$ these two quantities differ by at most roughly a factor of two.
Below we assume that $q\ge 3$.
Theorem \ref{t:Naimark} has two consequences, although a slightly weaker result than Lemma \ref{l:A^n_large} can be obtained via the classical Theorem of Frobenius \cite{Frobenius}, see, e.g., \cite{sh_as}. Originally, similar arguments were suggested in \cite{SX}.
\begin{lemma}
Let $n\ge 3$ be an integer, $A\subseteq \SL_2 (\F_q)$ be a set and $|A| \ge 2(q+1)^2 q^{2/n}$.
Then $A^n = \SL_2 (\F_q)$.
Generally, if for some sets $X_1, \dots, X_n \subseteq \SL_2 (\F_q)$ one has
\[
\prod_{j=1}^n |X_j| \ge (2q (q+1))^n (q-1)^{2} \,,
\]
then $X_1 \dots X_n = \SL_2 (\F_q)$.
\label{l:A^n_large}
\end{lemma}
\begin{proof}
Using formula \eqref{f:Parseval_representations} with $f=A$, we have for an arbitrary nontrivial representation $\rho$ that
\begin{equation}\label{f:Fourier_est}
\| \FF{A} (\rho) \|_{o} < \left(\frac{|A| |\SL_2 (\F_q)|}{d_{\min}} \right)^{1/2} = \left(\frac{|A| (q^3-q)}{d_{\min}} \right)^{1/2} \,.
\end{equation}
Hence for any $x\in \SL_2 (\F_q)$ we obtain via formulae \eqref{f:inverse_representations}, \eqref{f:Parseval_representations}
and estimate \eqref{f:Fourier_est} that
\[
A^n (x) > \frac{|A|^n}{|\SL_2 (\F_q)|} - \left(\frac{|A| (q^3-q)}{d_{\min}} \right)^{(n-2)/2} |A| \ge 0 \,,
\]
provided $|A|^n \ge 2^{n-2} (q+1)^n q^n (q-1)^2$.
The second part of the lemma can be obtained similarly.
This completes the proof.
$\hfill\Box$
\end{proof}
\begin{remark}
It is easy to see (or consult Lemma \ref{l:B_Wiener} below) that bound \eqref{f:Fourier_est} is sharp, e.g., take $A=\B$.
\end{remark}
\bigskip
For any function $f : \Gr \to \mathbb{C}$ consider the Wiener norm of $f$ defined as
\begin{equation}\label{def:Wiener}
\| f\|_W := \frac{1}{|\Gr|} \sum_{\rho \in \FF{\Gr}} d_\rho \| \FF{f} (\rho) \| \,.
\end{equation}
\begin{lemma}
We have $\| \B\|_W = 1$.
Moreover, $\| \FF{\B} (\tilde{T}_1) \| = \| \FF{\B} (\tilde{T}_1) \|_o = |\B|$ and the Fourier transform of $\B$ vanishes on all other nontrivial representations.
\label{l:B_Wiener}
\end{lemma}
\begin{proof}
We present three proofs of the upper and lower bounds for $\| \B\|_W$, although the first and the third, being shorter, give slightly worse constants.
Also, they do not provide a full description of the non--vanishing representations of $\B$.
Since $\B$ is a subgroup, we see using \eqref{f:Parseval_representations} twice that
\[
|\B|^2 = |\{b_1 b_2 = b_3 ~:~ b_1,b_2,b_3 \in \B\}| = \frac{1}{|\SL_2 (\F_q)|} \sum_{\rho \in \FF{\Gr}} d_\rho \langle \FF{\B}^2 (\rho), \FF{\B} (\rho) \rangle
\le
\]
\[
\le
\frac{1}{|\SL_2 (\F_q)|} \sum_{\rho} d_\rho \langle \FF{\B} (\rho), \FF{\B} (\rho) \rangle \| \FF{\B} (\rho)\|_{o}
\le
\frac{|\B|}{|\SL_2 (\F_q)|} \sum_{\rho} d_\rho \langle \FF{\B} (\rho), \FF{\B} (\rho) \rangle = |\B|^2 \,,
\]
because, clearly, $\| \FF{\B} (\rho)\|_{o} \le |\B|$.
It means that for any representation $\rho$ either $\| \FF{\B} (\rho)\| = 0$ (and hence $\| \FF{\B} (\rho)\|_o = 0$) or $\| \FF{\B} (\rho)\|_{o} = |\B|$.
But another application of \eqref{f:Parseval_representations} gives us
\begin{equation}\label{tmp:01.10_1}
|\B| = \frac{1}{|\SL_2 (\F_q)|} \sum_{\rho} d_\rho \|\FF{\B} (\rho) \|^2
\end{equation}
and hence the number $m$ of nontrivial representations $\rho$ such that $\| \FF{\B} (\rho)\| \ge \| \FF{\B} (\rho)\|_{o} = |\B|$ is bounded in view of Theorem \ref{t:Naimark} as
\[
|\B| \ge \frac{|\B|^2}{|\SL_2 (\F_q)|} \left( 1 + \frac{m(q-1)}{2} \right) \,.
\]
In other words, $m\le 2q/(q-1)$.
Hence
\begin{equation}\label{f:Borel_1-}
\| \B\|_W \le \frac{|\B|}{|\SL_2 (\F_q)|} + \frac{m |\B|}{|\SL_2 (\F_q)|}\cdot d_{\max}
\le
\frac{|\B|}{|\SL_2 (\F_q)|} + \frac{2q (q+1) |\B|}{|\SL_2 (\F_q)|(q-1)} \le 4 \,.
\end{equation}
A similar argument gives us a lower bound for $\| \B\|_W$ of the same sort.
Let us give another proof which replaces $4$ by $1$ and uses the representation theory of $\SL_2 (\F_q)$ in a slightly more extensive way.
For $u_b \in \U$, $u_b=(1b|01)$, we have \cite[pages 121--123]{Naimark} that
in a certain orthogonal basis the following holds
$\tilde{T}_1 (u_b) = \mathrm{diag} (e(bj))$, $j=0,1,\dots,q-1$
and for $g_\la = (\la 0|0 \la^{-1}) \in \D$ the matrix $\tilde{T}_1 (g_\la)$ is the direct sum of $I_1$ and a permutation matrix of size $(q-1) \times (q-1)$.
Clearly, $\B = \D \U = \U \D$ and hence $\FF{\B} (\rho) = \FF{\D} (\rho) \FF{\U} (\rho)$ for any representation $\rho$.
But from above $\FF{\U} (\tilde{T}_1)$ is the direct sum $qI_1 \oplus Z_{q-1}$ and $\FF{\D} (\tilde{T}_1) = (q-1) I_1 \oplus 2\cdot J$, where
$J = (J_{ij})_{i,j=1}^{q-1}$ is a certain $(q-1) \times (q-1)$ matrix with all components equal to one for $i/j$ belonging to the set of quadratic residues
(such precise description of $J$ is not really important for us).
Hence
\[
\FF{\B} (\tilde{T}_1) = \FF{\D} (\tilde{T}_1) \FF{\U} (\tilde{T}_1) = q(q-1) I_1 \oplus Z_{q-1} \,.
\]
Thus $\| \FF{\B} (\tilde{T}_1) \| = \| \FF{\B} (\tilde{T}_1) \|_o = |\B|$.
Applying formula \eqref{tmp:01.10_1}, we obtain
\begin{equation}\label{f:Borel_lower}
|\B| \ge \frac{|\B|^2}{|\SL_2 (\F_q)|} + \frac{q}{|\SL_2 (\F_q)|} \|\FF{\B} (\tilde{T}_1) \|^2
= \frac{|\B|^2}{|\SL_2 (\F_q)|} (1 + q ) = |\B| \,.
\end{equation}
It follows that for any other representations Fourier coefficients of $\B$ vanish.
Finally,
\begin{equation}\label{f:Borel_1}
\| \B\|_W = \frac{|\B|}{|\SL_2 (\F_q)|} + \frac{q |\B|}{|\SL_2 (\F_q)|} = 1
\end{equation}
as required.
For the last proof it is enough to look at inequality \eqref{f:Borel_lower} and apply Theorem \ref{t:Naimark}, which gives that $\FF{\B}(T_\chi)$ must vanish thanks to the dimension of $T_\chi$.
Further, if we had two nontrivial non--vanishing representations among $S_\pi$ or $T^{\pm}_{\chi_1}$, then this would again contradict \eqref{f:Borel_lower} because
the sum of their dimensions is too large.
Hence there is the only one nontrivial non--vanishing representation (and calculations from the second proof show that it is indeed $\tilde{T}_1$)
or one of the following pairs $(T^{\pm}_{\chi_1}, S^{\pm}_{\pi_1})$ or $(S^{+}_{\pi_1}, S^{-}_{\pi_1})$.
Thus a rough form of identity \eqref{f:Borel_1}, say, bound \eqref{f:Borel_1-} follows and, actually, we have not used any concrete basis in our first and third arguments.
This completes the proof of the lemma.
$\hfill\Box$
\end{proof}
\begin{remark}
One can show in the same way that an analogue of Lemma \ref{l:B_Wiener} takes place for any subgroup $\G$ of an arbitrary group $\Gr$, namely,
$\| \G\|_W \ll d_{\max}/d_{\min}$.
\end{remark}
Lemma \ref{l:B_Wiener} gives us an alternative way to show that $A^3 \cap \B \neq \emptyset$.
Indeed, just use estimate \eqref{f:Fourier_est} and write
\[
r_{A^3\B} (1) \ge \frac{|A|^3 |\B|}{|\SL_2 (\F_q)|} - \|\B \|_W \left(\frac{|A|(q^3-q)}{d_{\min}}\right)^{3/2}
=
\frac{|A|^3 |\B|}{|\SL_2 (\F_q)|} - \left(\frac{|A|(q^3-q)}{d_{\min}}\right)^{3/2} > 0 \,,
\]
provided $|A| \gg q^{8/3}$.
We improve this bound in the next section.
\section{On intersections of the product set with the Borel subgroup}
It was shown in the previous section (see Lemma \ref{l:A^n_large}) that for any $A \subseteq \SL_2 (\F_q)$ one has $A^3 = \SL_2 (\F_q)$, provided $|A|^3 \gg q^8$
and in the same way
the last result
holds for
three different sets, namely, given $X,Y,Z \subseteq \SL_2 (\F_q)$
with
$|X||Y||Z| \gg q^8$, we have $XYZ=\SL_2 (\F_q)$.
It is easy to see that in this generality the last result is sharp.
Indeed, let $X=S\B$, $Y=\B T$, where $S,T$ are two sets of size $\sqrt{q}/2$ chosen so that $|X| \sim |S| |\B|$ and $|Y| \sim |T| |\B|$
(e.g., take $S,T$ from left/right cosets of $\B$, which is possible thanks to the Bruhat decomposition).
Then $XY=S\B T$, and hence $|XY| \le |S||T||\B| \le |\SL_2 (\F_q)|/2$.
Thus we take $Z^{-1}$ equal to the complement of $XY$ in $\SL_2 (\F_q)$, and we see that the product set $XYZ$ does not contain $1$ although $|X||Y||Z| \gg q^8$.
\bigskip
Nevertheless, in the "symmetric"\, case of the same set $A$ this $8/3$ bound can be improved, see Theorem \ref{t:8/3-c} below.
We need a simple lemma; the proof of this result, as well as the proof of Theorem \ref{t:8/3-c},
relies extensively on the non--commutative structure of $\SL_2 (\F_q)$.
\begin{lemma}
Let $g\notin \B$ be a fixed element from $\SL_2 (\F_q)$.
Then for any $x$ one has
\[
r_{\B g\B} (x) \le q-1 \,.
\]
\label{l:BgB}
\end{lemma}
\begin{proof}
Let $g=(ab|cd)$ and $x=(\a \beta |\gamma \d)$.
By our assumption $c\neq 0$.
We have
\begin{equation}\label{tmp:16.10_1}
\left( {\begin{array}{cc}
\la & u \\
0 & \la^{-1} \\
\end{array} } \right)
\left( {\begin{array}{cc}
a & b \\
c & d \\
\end{array} } \right)
\left( {\begin{array}{cc}
\mu & v \\
0 & \mu^{-1} \\
\end{array} } \right)
=
\left( {\begin{array}{cc}
(\la a + u c)\mu & * \\
\mu c/\la & vc/\la + d/(\la \mu) \\
\end{array} } \right)
=
\left( {\begin{array}{cc}
\a & \beta \\
\gamma & \d \\
\end{array} } \right) \,.
\end{equation}
In other words, $\mu = \la \gamma c^{-1} \neq 0$ (hence $\gamma \neq 0$ automatically) and from
\[
\a = (\la a + u c)\mu = \la \gamma c^{-1} (\la a + u c)
\]
we see that having $\la$ we determine $u$ uniquely (then, equation \eqref{tmp:16.10_1} gives us $\mu, v$ automatically).
This completes the proof.
$\hfill\Box$
\end{proof}
\bigskip
Lemma \ref{l:BgB} quickly implies a result on the Bruhat decomposition of $\SL_2 (\F_q)$.
\begin{corollary}
Let $g\in \SL_2 (\F_q) \setminus \B$.
Then $\B g\B = \SL_2 (\F_q) \setminus \B$.
\end{corollary}
\begin{proof}
Clearly, $\B \cap \B g\B = \emptyset$ because $g\in \SL_2 (\F_q) \setminus \B$.
On the other hand, by the Cauchy--Schwarz inequality and Lemma \ref{l:BgB}, we have
\[
|\B g\B| \ge \frac{|\B|^4}{\E(\B,g\B)} \ge \frac{|\B|^4}{(q-1) |\B|^2} = q^3 - q^2 = |\SL_2 (\F_q) \setminus \B| \,.
\]
This completes the proof.
$\hfill\Box$
\end{proof}
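Both Lemma \ref{l:BgB} and this corollary are easy to confirm numerically for small $q$; a sketch reusing sl2 and mul from the previous listing:
\begin{verbatim}
from collections import Counter

p = 5
G = sl2(p)
B = [g for g in G if g[2] == 0]
g0 = next(g for g in G if g[2] != 0)       # some g outside B
r = Counter(mul(mul(b1, g0, p), b2, p) for b1 in B for b2 in B)
assert max(r.values()) <= p - 1            # Lemma: r_{B g B}(x) <= q - 1
assert set(r) == set(G) - set(B)           # Corollary: B g B = SL_2 \ B
\end{verbatim}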
\bigskip
Using growth of products of $\B$ as in the last corollary, one can combinatorially improve the constant $8/3$ (to do this combine Lemma \ref{l:A^n_large} and bound \eqref{f:Borel_sum-product} below).
We suggest another method which uses the representation theory of $\SL_2 (\F_q)$ more extensively and which allows us to improve this constant further.
\begin{theorem}
Let $A\subseteq \SL_2 (\F_q)$ be a set,
$|A| \ge 4 q^{18/7}$.
Then
$A^3 \cap \B \neq \emptyset$.
Generally, $A^n \cap \B \neq \emptyset$ provided $|A| \ge 4 q^{2+\frac{4}{3n-2}}$.
\label{t:8/3-c}
\end{theorem}
\begin{proof}
Let $g\notin \B$ and put $A^{\eps}_g = A^{\eps} \cap g\B$, where $\eps \in \{ 1,-1\}$.
Also, let $\D = \max_{\eps,\, g\notin \B} |A^\eps_g|$.
Since we can assume $A \cap \B = \emptyset$, it follows that
\begin{equation}\label{tmp:04.10_1-}
\E(A, \B) = \sum_{x} r^2_{A^{-1} \B} (x) = \sum_{x \notin \B} r^2_{A^{-1} \B} (x) \le \D |\B| |A|
\end{equation}
and similarly for $\E(A^{-1}, \B)$.
On the other hand, from \eqref{tmp:04.10_1-} and by the second part of Lemma \ref{l:B_Wiener}, we see that
\begin{equation}\label{tmp:12.10_1}
\D |\B| |A| \ge \E(A, \B) = \frac{1}{|\SL_2 (\F_q)|} \sum_{\rho} d_\rho \| \FF{A}^* (\rho) \FF{\B} (\rho) \|^2
=
\frac{q}{|\SL_2 (\F_q)|} \| \FF{A}^* (\tilde{T}_1) \FF{\B} (\tilde{T}_1) \|^2
\,,
\end{equation}
and, again, similarly for $\| \FF{A} (\tilde{T}_1) \FF{\B} (\tilde{T}_1) \|^2$.
Now consider the equation $b_1 a' a'' a b_2 = 1$ or, equivalently the equation $a'' a b_2 = (a')^{-1} b_1^{-1}$, where $a, a',a'' \in A$ and $b_1, b_2 \in \B$.
Clearly, if $A^3 \cap \B = \emptyset$, then this equation has no solutions.
Combining Lemma \ref{l:B_Wiener} with bound \eqref{tmp:12.10_1} and calculations as in the proof of Lemma \ref{l:A^n_large}, we see that this equation can be solved provided
\[
\frac{q}{|\SL_2 (\F_q)|} | \langle \FF{A}^2 (\tilde{T}_1) \FF{\B} (\tilde{T}_1), \FF{A}^* (\tilde{T}_1) \FF{\B}^* (\tilde{T}_1) \rangle |
\le \frac{q}{|\SL_2 (\F_q)|} \|\FF{A}^2 (\tilde{T}_1) \FF{\B} (\tilde{T}_1) \|
\cdot \| \FF{A}^* (\tilde{T}_1) \FF{\B} (\tilde{T}_1) \|
\le
\]
\[
\le \frac{q}{|\SL_2 (\F_q)|} \|\FF{A} (\tilde{T}_1) \FF{\B} (\tilde{T}_1) \| \|\FF{A}^* (\tilde{T}_1) \FF{\B} (\tilde{T}_1) \| \| \FF{A} \|_o
\le
\D |\B| |A| \| \FF{A} \|_o
<
\frac{|A|^3 |\B|^2}{|\SL_2 (\F_q)|} \,.
\]
In other words, in view of \eqref{f:Fourier_est}
it is enough
to have
\begin{equation}\label{tmp:04.10_2-}
|A|^4 \ge 2 (q+1)^2 \D^2 \cdot |A| q(q+1)
\end{equation}
or, equivalently,
\begin{equation}\label{tmp:04.10_2}
2 q (q+1)^3 \D^2 \le |A|^3 \,.
\end{equation}
Now let us obtain another bound which works well when $\D$ is large.
Choose $g\notin \B$ and $\eps \in \{1,-1\}$ such that $\D = |A^\eps_g|$.
Using Lemma \ref{l:BgB}, we
derive
\begin{equation}\label{f:B_Ag-}
\E(\B,A^\eps_g) = \sum_{x} r^2_{\B A^\eps_g} (x) \le \sum_{x} r_{\B A^\eps_g} (x) r_{\B g \B} (x) \le (q-1) |\B| |A^\eps_g| \,,
\end{equation}
and hence by the Cauchy--Schwarz inequality, we get
\begin{equation}\label{f:B_Ag}
|\B A^\eps_g| \ge \frac{|\B|^2 |A^\eps_g|^2}{\E(\B,A^\eps_g)} \ge \frac{|\B| |A^\eps_g|}{q-1} = q \D \,.
\end{equation}
Consider the equation $a_g (a' a'')^\eps =b$, where $b\in \B$, $a_g \in A^\eps_g$ and $a',a'' \in A$.
Clearly, if $A^3 \cap \B = \emptyset$, then this equation has no solutions.
To solve
$a_g (a' a'')^\eps =b$
it is enough to solve the equation $z (a' a'')^\eps = 1$, where now $z\in \B A^\eps_g$.
Applying the second part of Lemma \ref{l:A^n_large} combining with \eqref{f:B_Ag}, we obtain that it is enough to have
\[
8 q^3 (q+1)^3 (q-1)^{2} \le q\D |A|^2 \le |\B A^\eps_g| |A|^2
\]
or, in other words,
\begin{equation}\label{tmp:04.10_1}
8 q^2 (q+1)^3 (q-1)^{2} \le \D |A|^2 \,.
\end{equation}
Considering the second power of \eqref{tmp:04.10_1} and multiplying it with \eqref{tmp:04.10_2}, we obtain
\[
|A|^{7} \ge 2^{14} q^{18} \ge 2^7 q^5 (q+1)^{9} (q-1)^{4}
\]
as required.
In the general case inequality \eqref{tmp:04.10_2} can be rewritten as
\[
|A|^n \ge 2^{n-2} \D^2 (q+1)^n q^{n-2}
\]
and using the second part of Lemma \ref{l:A^n_large}, we obtain an analogue of \eqref{tmp:04.10_1}
\[
|A|^{n-1} \D \ge 2^n q^{n-1} (q+1)^n (q-1)^{2} \,.
\]
Combining the last two bounds, we derive the required result.
This completes the proof.
$\hfill\Box$
\end{proof}
\begin{remark}
It is easy to see that Theorem \ref{t:8/3-c}, as well as Lemma \ref{l:BgB} (and also Lemma \ref{l:B_Wiener}) take place for any Borel subgroup not just for the standard one.
\label{r:BgB_B*}
\end{remark}
\begin{remark}
It is easy to see that the arguments of the proof of Theorem \ref{t:8/3-c}
give the following combinatorial statement about left/right multiplication of an arbitrary set $A$ by $\B$
(just combine bounds \eqref{tmp:04.10_1-} and \eqref{f:B_Ag}), namely,
\begin{equation}\label{f:Borel_sum-product}
\max\{ |A\B|, |\B A| \} \gg \min\{q^{3/2} |A|^{1/2}, |A|^2 q^{-2} \} \,.
\end{equation}
\end{remark}
\bigskip
By Theorem \ref{t:8/3-c}
we know that
$A^n \cap \B \neq \emptyset$ for large $n$, but only under the condition $|A| \gg q^{2+\eps}$ for a certain $\eps>0$.
For the purpose of the next section we need to break the described $q^2$--barrier and we do this for prime $q$, using growth in $\SL_2 (\F_p)$.
Let us recall quickly what is known about growth of generating sets in $\SL_2 (\F_p)$.
In paper \cite{H} Helfgott obtained his famous result in this direction
and we proved in \cite{RS_SL2} the following form of Helfgott's result.
\begin{theorem}
Let $A \subseteq \SL_2 (\F_p)$ be a set, $A=A^{-1}$ which generates the whole group.
Then $|AAA| \gg |A|^{1+1/20}$.
\label{t:Misha_Je}
\end{theorem}
Thus, in the case of an arbitrary symmetric generating set and a prime number $p$,
Theorem \ref{t:Misha_Je}, combined with Theorem \ref{t:8/3-c},
allows us to obtain bounds which guarantee that $A^n = \SL_2 (\F_p)$.
For example, if $A$ generates $\SL_2(\F_p)$, $A=A^{-1}$,
and $|A| \gg p^{2-\epsilon}$, $\epsilon < \frac{2}{21}$,
then $A^n \cap \B \neq \emptyset$ for $n\ge \frac{84-42\epsilon}{2-21\epsilon}$.
On the other hand,
the methods from \cite{H}, \cite{RS_SL2} allow us to obtain the following result about generation of $\SL_2 (\F_p)$ via large and not necessarily symmetric sets
(dropping the symmetry condition on $A$ is rather crucial for us, see the next section).
\begin{theorem}
Let $A\subseteq \SL_2 (\F_p)$ be a generating set, $p\ge 5$ and $|A| \gg p^{2-\epsilon}$, $\epsilon < \frac{2}{25}$.
Then $A^n \cap \B \neq \emptyset$ for $n\ge \frac{100-50\epsilon}{2-25 \epsilon}$.
Also, $A^n = \SL_2 (\F_p)$, provided $n\ge \frac{144}{2-25 \epsilon}$.
\label{t:Misha_Je_large}
\end{theorem}
\begin{proof}
Put $K=|AAA|/|A|$.
We can assume that, say, $|A| \le p^{2+2/35}$ because otherwise one can apply Theorem \ref{t:8/3-c}.
We call an element $g\in \SL_2 (\F_p)$ regular if $\tr(g) \neq 0, \pm 2$, and we let $\mathcal{C}_g$ be the corresponding conjugacy class, namely,
\[
\mathcal{C}_g = \{s \in \SL_2 (\F_p) ~:~ \tr(s) = \tr(g) \} \,.
\]
Let $T$ be a maximal torus (in $\SL_2 (\F_p)$ it is just a maximal commutative subgroup) such that there is $g\in T\cap A^{-1}A$ and $g \neq 1$.
By \cite[Lemma 5]{RS_SL2}, such a torus $T_*$ containing a regular element $g$ exists;
otherwise $K\gg |A|^{2/3}$.
Firstly, suppose that for a certain $h\in A$ the torus $T'=hTh^{-1}$ has no such property, i.e., there are no
nontrivial
elements from $A^{-1}A \cap T'$.
Then for the element $g'=hgh^{-1} \in T'$ (in the case $T=T_*$ the element $g'$ is regular) the map $a\mapsto ag'a^{-1}$, $a\in A$, is one--to--one.
Hence $|A^2 A^{-1} A A^{-2} \cap \mathcal{C}_g| \ge |A|$.
By \cite[Lemma 11]{RS_SL2}, we have $|S \cap \mathcal{C}_g| \ll |S^{-1}S|^{2/3} + p$ for any set $S$ and regular $g$.
Using the Ruzsa triangle inequality, we obtain
\begin{equation}\label{tmp:15.10_I}
|(A^2 A^{-1} A A^{-2})^{-1}(A^2 A^{-1} A A^{-2})| \le |A|^{-1} |A^2 A^{-1} A A^{-3}| |A^3 A^{-1} A A^{-2}|
=
\end{equation}
\[
=
|A|^{-1} |A^3 A^{-1} A A^{-2}|^2 \le |A|^{-1} (|A|^{-1} |A^3 A^{-2}| |A^2 A^{-2}| )^2 \le |A|^{-1} (|A|^{-3} |A^4| |A^3|^3 )^2 \le K^{12} |A|
\]
and hence
\[
|A| \ll |(A^2 A^{-1} A A^{-2})^{-1}(A^2 A^{-1} A A^{-2})|^{2/3} + p \ll K^{8} |A|^{2/3} \,.
\]
It gives us $K\gg |A|^{1/24}$.
In the complementary second case (see \cite{RS_SL2}), thanks to the fact that $A$ is a generating set, we may
suppose that for {\it any} $h\in \SL_2(\F_p)$ there is a
nontrivial
element of $A^{-1}A$ belonging to the torus $hTh^{-1}$.
Then $A^{-1}A$ is partitioned between these tori and hence again by \cite[Lemma 11]{RS_SL2}, as well as the Ruzsa triangle inequality, we obtain
\[
|(AA^{-1}AA^{-1})^{-1}(AA^{-1}AA^{-1})| \le |A|^{-1} |A^2 A^{-1} A A^{-1}|^2
\le
\]
\[
\le
|A|^{-1} (|A|^{-1} |A^2 A^{-2}| |A^2 A^{-1}|)^2
\le |A|^{-1} (|A|^{-3} |A^3|^4)^2 \le K^8 |A|
\]
and whence
\[
K^2 |A| \ge |A^{-1}A| \ge \sum_{h \in \SL_2 (\F_p)/N(T_*)} |A^{-1} A \cap h T_* h^{-1}|
\gg
\]
\[
\gg
p^2 \cdot \frac{|A|}{|(AA^{-1}AA^{-1})^{-1}(AA^{-1}AA^{-1})|^{2/3}}
\ge p^2 |A|^{1/3} K^{-16/3} \,,
\]
where $N(T)$ is the normalizer of any torus $T$, $|N(T)| \asymp |T| \asymp p$.
Hence thanks to our assumption $|A| \le p^{2+2/35}$, we have $K \gg p^{3/11} |A|^{-1/11} \gg |A|^{1/24}$.
In other words, we always obtain $|AAA| \gg p^{2+\frac{2-25\epsilon}{24}}$.
After that apply Theorem \ref{t:8/3-c} to find that $A^n \cap \B \neq \emptyset$ for $n\ge \frac{100-50\epsilon}{2-25 \epsilon}$.
If we use Lemma \ref{l:A^n_large} instead of Theorem \ref{t:8/3-c}, then we obtain $A^n = \SL_2 (\F_p)$, provided $n\ge \frac{144}{2-25 \epsilon}$.
This completes the proof.
$\hfill\Box$
\end{proof}
\bigskip
Thus for sufficiently small $\epsilon>0$ one can take $n=51$ to get $A^n \cap \B \neq \emptyset$ (and $n=73$ to obtain $A^n = \SL_2 (\F_p)$).
In the next section we improve this bound for a special set $A$ but nevertheless the arguments of the proof of Theorem \ref{t:Misha_Je_large} will be used in the proof of Theorem \ref{t:main_intr2} from the Introduction.
\bigskip
We finish this section by showing that generating sets $A$ of size close to $p^2$ (actually, the condition $|A| =\Omega(p^{3/2+\eps})$ is enough) with small tripling constant $K=|A^3|/|A|$ have small intersection with every Borel subgroup.
\begin{lemma}
Let $A\subseteq \SL_2 (\F_p)$ be a generating set, $p\ge 5$ and $K=|A^3|/|A|$.
Then for any Borel subgroup $\B_*$ one has $|A\cap \B_*| \le 2p K^{5/3} |A|^{1/3}$.
\label{l:A_cap_B}
\end{lemma}
\begin{proof}
We obtain the result for the standard Borel subgroup $\B$ and after that apply the conjugation to prove our Lemma in full generality.
Let $\gamma \in \F_p^*$ be any number and $l_\gamma$ be the line
$$ l_\gamma = \{ (\gamma u| 0 \gamma^{-1}) ~:~ u\in \F_p \} \subset \SL_2 (\F_p) \,.$$
By \cite[Lemma 7]{RS_SL2}, we have $|A\cap l_\gamma| \le 2 |A^3 A^{-1} A|^{1/3}$.
Using the last bound, as well as the Ruzsa triangle inequality, we obtain
\[
|A\cap \B| \le \sum_{\gamma \in \F^*_p} |A\cap l_\gamma| \le 2 p |A^3 A^{-1} A|^{1/3}
\le
2p (|A^4||A^{-2} A|/|A|)^{1/3}
\le
2p K^{5/3} |A|^{1/3} \,.
\]
This completes the proof.
$\hfill\Box$
\end{proof}
\begin{remark}
Examining the proof of Lemma 7 from \cite{RS_SL2} one can equally write $|A\cap l_\gamma| \le 2 |A^3 A^{-2}|^{1/3}$ and hence by the calculations above
$|A\cap \B_*| \le 2p K^{4/3} |A|^{1/3}$.
Nevertheless, this better estimate has no influence on the final bound in Theorem \ref{t:main_intr}.
\end{remark}
\begin{remark}
Bounds for intersections of $A\subseteq \SL_2 (\F_q)$, $K=|A^3|/|A|$ with $g\B_*$, where $g\notin \B_*$ are much simpler and follow from Lemma \ref{l:BgB} (also, see Remark \ref{r:BgB_B*}).
Indeed, by this result putting $A_* = A\cap g\B_*$, we have
\[
K|A|\ge |AA| \ge |A_* A_*| \ge \frac{|A_*|^4}{\E(A^{-1}_*, A_*)} \ge \frac{|A_*|^4}{\E(A^{-1}_*,g\B_*)} \ge \frac{|A_*|^2}{q-1}
\]
without any assumptions on generating properties of $A$.
\label{r:sB}
\end{remark}
\section{On Zaremba's conjecture}
In this section we
apply the methods of the proofs of Theorems \ref{t:8/3-c}, \ref{t:Misha_Je_large} to Zaremba's conjecture, but we also use the specifics of this problem, i.e., the special form of the corresponding set of matrices from $\SL_2 (\F_p)$.
\bigskip
Denote by $F_M(Q)$ the set of all {\it rational} numbers $\frac{u}{v}, (u,v) = 1$ from $[0,1]$ with all partial quotients in (\ref{exe}) not exceeding $M$ and with $ v\le Q$:
\[
F_M(Q)=\left\{
\frac uv=[0;b_1,\ldots,b_s]\colon (u,v)=1, 0\leq u\leq v\leq Q,\, b_1,\ldots,b_s\leq M
\right\} \,.
\]
By $F_M$ denote the set of all {\it irrational}
numbers from $[0,1]$ with partial quotients less than or equal to $M$.
From \cite{hensley1992continued} we know that the Hausdorff dimension $w_M$ of the set $F_M$ satisfies
\begin{equation}
w_M = 1- \frac{6}{\pi^2}\frac{1}{M} -
\frac{72}{\pi^4}\frac{\log M}{M^2} + O\left(\frac{1}{M^2}\right),
\,\,\, M \to \infty \,,
\label{HHD}
\end{equation}
however here we need a simpler result from \cite{hensley1989distribution}, which states that
\begin{equation}\label{oop}
1-w_{M} \asymp \frac{1}{M}
\end{equation}
with absolute constants in the sign $\asymp$.
Explicit estimates for dimensions of $F_M$ for certain values of $M$ can be found in \cite{jenkinson2004density}, \cite{JP} and in other papers.
For example, see \cite{JP}
\[
w_2 = 0.5312805062772051416244686...
\]
In papers \cite{hensley1989distribution,hensley1990distribution} Hensley gives the bound
\begin{equation}
|F_M(Q)| \asymp_M Q^{2w_M} \,.
\label{QLOW}
\end{equation}
\bigskip
Now we are ready to prove Theorem \ref{t:main_intr} from the Introduction.
One has
\begin{equation}\label{f:continuants_aj}
\left( {\begin{array}{cc}
0 & 1 \\
1 & b_1 \\
\end{array} } \right)
\dots
\left( {\begin{array}{cc}
0 & 1 \\
1 & b_s \\
\end{array} } \right)
=
\left( {\begin{array}{cc}
p_{s-1} & p_s \\
q_{s-1} & q_s \\
\end{array} } \right) \,,
\end{equation}
where $p_s/q_s =[0;b_1,\dots, b_s]$ and $p_{s-1}/q_{s-1} =[0;b_1,\dots, b_{s-1}]$.
Clearly, $p_{s-1} q_s - p_s q_{s-1} = (-1)^{s}$.
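Identity \eqref{f:continuants_aj} and this determinant relation can be checked programmatically; a small Python sketch:
\begin{verbatim}
def continuant_matrix(bs):
    """Product of the matrices (0 1 | 1 b_j); returns
    ((p_{s-1}, p_s), (q_{s-1}, q_s))."""
    m = ((1, 0), (0, 1))
    for b in bs:
        m = ((m[0][1], m[0][0] + b * m[0][1]),
             (m[1][1], m[1][0] + b * m[1][1]))
    return m

bs = [2, 4, 2]                       # partial quotients of 9/20
(p1, p2), (q1, q2) = continuant_matrix(bs)
assert (p2, q2) == (9, 20)           # p_s / q_s = 9/20
assert p1 * q2 - p2 * q1 == (-1) ** len(bs)
\end{verbatim}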
Let $Q=p-1$ and consider the set $F_M(Q)$.
Any $u/v \in F_M(Q)$ corresponds to a matrix from \eqref{f:continuants_aj} such that $b_j \le M$.
The set $F_M(Q)$ splits into ratios with even $s$ and with odd $s$,
in other words
$F_M(Q) = F^{even}_M(Q) \bigsqcup F^{odd}_M(Q)$.
Let $A \subseteq \SL_2 (\F_p)$ be the set of matrices of the form above with even $s$.
It is easy to see from \eqref{QLOW}, multiplying the set $F^{odd}_M (Q)$ by $(01|1b)^{-1}$, $1\le b \le M$, if needed, that
$|F^{even}_M(Q)| \gg_M |F_M (Q)| \gg_M Q^{2w_M}$.
It is easy to check that if for a certain $n$ one has $A^n \cap \B \neq \emptyset$, then $q_{s-1}$
equals zero modulo $p$ and hence there is $u/v \in F_M ((2p)^n)$ such that $v\equiv 0 \pmod p$.
In a similar way,
we can easily assume that for any $g = (ab|cd)\in A$ all entries $a,b,c,d$ are nonzero (and hence by the construction they are nonzero modulo $p$), see, e.g., \cite[page 46]{hensley_SL2} or the proof of Lemma \ref{l:M^3} below (the same paper \cite{hensley_SL2} contains the fact that $A$ is a generating subset of $\SL_2 (\F_p)$).
Analogously, we can suppose that all $g \in A$ are regular, that is, $\tr(g) \neq 0,\pm 2$.
Let $K = |AAA|/|A|$ and $\tilde{K} = |AA|/|A| = K^\a$, $0\le \a \le 1$.
We need to estimate from below the cardinality of the set of all possible traces of elements of $A$, that is, the cardinality of the set of sums $q_{s} + p_{s-1}$
(this expression is called the "cyclical continuant"\,).
Fix $p_{s-1}$ and $q_s$.
Then $p_{s-1} q_s - 1 = p_s q_{s-1}$ and thus $p_s$ is a divisor of $p_{s-1} q_s - 1$.
In particular, the number of such $p_s$ is at most $p^\eps$ for any $\eps>0$.
But now knowing the pair $(p_s,q_s)$, we determine the correspondent matrix \eqref{f:continuants_aj} from $A$ uniquely.
Hence the number of different pairs $(p_{s-1}, q_s)$ is at least $\Omega_M (p^{-\eps}|F_M(Q)|)$
and thus the number of different traces of all matrices from $A$ is $\Omega_M (p^{-1-\eps} |A|)$.
Actually, one can improve the last bound to $\Omega_M (p^{-1} |A|)$.
\begin{lemma}
The number of all possible sums $q_{s} + p_{s-1}$ is $\Omega(|A|/(M^3 p))$.
\label{l:M^3}
\end{lemma}
\begin{proof}
As above fix
$q_s$ and $p_{s-1}$.
It is well--known (see, e.g., \cite{hensley_SL2}) that
$q_s = \langle b_1,\dots, b_s \rangle$,
$p_s = \langle b_2,\dots, b_s \rangle$,
$q_{s-1} = \langle b_1,\dots, b_{s-1} \rangle$,
$p_{s-1} = \langle b_2,\dots, b_{s-1} \rangle$, where by $\langle x_1,\dots, x_n \rangle$ we have denoted the corresponding continuant.
We know that
\begin{equation}\label{tmp:01.11_1}
-p_{s} q_{s-1} = - q_s p_{s-1} + 1 \,.
\end{equation}
Substituting the well--known formula $p_{s} = b_s p_{s-1} + p_{s-2}$ into \eqref{tmp:01.11_1}, we obtain
\begin{equation}\label{tmp:01.11_2}
-b_s p_{s-1} q_{s-1} \equiv - q_s p_{s-1} + 1 \pmod {p_{s-2}} \,,
\end{equation}
and thus for any fixed $b_s \not\equiv 0 \pmod {p_{s-2}}$ the number $q_{s-1}$ is uniquely determined modulo $p_{s-2} = \langle b_2,\dots, b_{s-2} \rangle $.
But applying the recurrence formula for continuants again, we get
\[
q_{s-1} = b_{s-1} \langle b_1,\dots, b_{s-2} \rangle + \langle b_1,\dots, b_{s-3} \rangle \le (b_{s-1} + 1) \langle b_1,\dots, b_{s-2} \rangle =
\]
\[
= (b_{s-1} + 1) ( b_1 p_{s-2} + \langle b_3,\dots, b_{s-2} \rangle) \le (b_{s-1} + 1) (b_1+1) p_{s-2} \,.
\]
It follows that there are at most $(M+1)^2$ possibilities for $q_{s-1}$.
Now if $b_s \equiv 0 \pmod {p_{s-2}}$, then $M\ge b_s \ge p_{s-2} \ge \left(\frac{1+\sqrt{5}}{2}\right)^{s-2}$ and hence $s\ll \log M$.
It gives us, say, at most $M^s \ll M^{O(1)} \le |A|/2$
matrices from $A$.
This completes the proof of the lemma.
$\hfill\Box$
\end{proof}
\bigskip
Now recall \cite[Lemma 12]{RS_SL2}, which is a variant of the Helfgott map \cite{H} from \cite{Brendan_rich} (we have already used similar arguments in the proof of Theorem \ref{t:Misha_Je_large}).
For the sake of completeness, we give the proof of a "statistical"\,
version of this result.
\begin{lemma}
Let $\Gr$ be any group and $A\subseteq \Gr$ be a finite set.
Then for an arbitrary $g\in \Gr$, there is $A_0 \subseteq A$, $|A_0| \ge |A|/2$ such that for any $a_0 \in A_0$ the following holds
\begin{equation}\label{f:CS_ineq}
|A|/2 \le |{\rm Conj} (g) \cap AgA^{-1}| \cdot |{\rm Centr}(g) \cap a_0^{-1} A| \,.
\end{equation}
Here ${\rm Conj} (g)$ is the conjugacy class and ${\rm Centr}(g)$ is the centralizer of $g$ in $\Gr$.
\label{l:CS_ineq}
\end{lemma}
\begin{proof}
Let $\varphi : A \to {\rm Conj} (g) \cap AgA^{-1}$ be the Helfgott map $\varphi(a) := a g a^{-1}$.
One sees that $\varphi(a) = \varphi (b)$ iff
\[
b^{-1} a g = g b^{-1} a \,.
\]
In other words, $b^{-1} a \in {\rm Centr}(g) \cap A^{-1} A$.
Clearly, then
\[
|A| = \sum_{c\in {\rm Conj} (g) \cap AgA^{-1}} |\{ a\in A ~:~ \varphi (a) = c\}|
\le
\]
\begin{equation}\label{tmp:01.11_10}
\le
2 \sum_{c\in {\rm Conj} (g) \cap AgA^{-1} ~:~ |\{ a\in A ~:~ \varphi (a) = c\}| \ge |A|/(2|{\rm Conj} (g) \cap AgA^{-1}|)} |\{ a\in A ~:~ \varphi (a) = c\}| \,.
\end{equation}
For $c\in \varphi (A) \subseteq {\rm Conj} (g) \cap AgA^{-1}$ put $A (c) = \varphi^{-1} (c) \subseteq A$ and let
$$
A_0 = \bigsqcup_{c ~:~ |A (c)| \ge |A|/(2|{\rm Conj} (g) \cap AgA^{-1}|)} A (c) \,.
$$
In other words, estimate \eqref{tmp:01.11_10} gives us
\[
|A_0| = \sum_c |A (c)| \ge |A|/2 \,.
\]
But for any $b\in A_0$ one has $|{\rm Centr}(g) \cap b^{-1} A| \ge |A|/(2|{\rm Conj} (g) \cap AgA^{-1}|)$
as required.
This completes the proof of the lemma.
$\hfill\Box$
\end{proof}
\bigskip
Now summing inequality \eqref{f:CS_ineq} over all $g\in A$ with different traces, we obtain in view of the Ruzsa triangle inequality and Lemma \ref{l:M^3} that
\begin{equation}\label{tmp:15.10_1}
|A|^2 p^{-1} \ll_M |AAA^{-1}| \cdot \max_{g\in A} |{\rm Centr}(g) \cap a_0^{-1}(g) A|
\le
K \tilde{K} |A| \cdot \max_{g\in A} |{\rm Centr}(g) \cap a_0^{-1}(g) A| \,.
\end{equation}
Here for every $g\in A$ we have taken a concrete $a_0 (g) \in A_0 (g)$, but in view of Lemma \ref{l:CS_ineq} there are many such elements, and we will use this fact a little later.
Now by \cite[Lemma 4.7]{H}, we see
that
\[
|(a_0^{-1}(g) A) g_* (a_0^{-1}(g) A) g^{-1}_* (a_0^{-1}(g) A)^{-1}| \gg |{\rm Centr}(g) \cap a_0^{-1}(g) A|^3 \,,
\]
where $g_* = (ab|cd)$ is any element from $A$ such that $abcd \neq 0$ in the basis where $g$ has the diagonal form.
Thanks to Lemma \ref{l:A_cap_B} and Remark \ref{r:sB} we can choose $g_* = a_0 (g)$, otherwise $|A| \ll p^{3/2} K^{5/2}$.
In the last case if, say, $|A| \gg p^{2-1/35}$, then $K\gg p^{33/175}$ and hence $|A^3| \gg p^{2+4/25}$.
Using Theorem \ref{t:8/3-c}, we see that one can take $n=27$ and this is better than we want to prove.
Then with this choice of $g_*$, we have by the Ruzsa triangle inequality
\[
|A^2 g^{-1}_* A^{-1}| \le |A^2 A^{-2}| \le K^2 |A| \,,
\]
and hence $|{\rm Centr}(g) \cap a_0^{-1}(g) A| \ll K^{2/3} |A|^{1/3}$.
Substituting the last bound into \eqref{tmp:15.10_1}, we get
\begin{equation}\label{tmp:15.10_1-}
|A|^2 p^{-1} \ll_M K \tilde{K} |A| \cdot K^{2/3} |A|^{1/3}
\end{equation}
and hence
\begin{equation}\label{tmp:15.10_2}
K \gg_M (|A|^2 p^{-3})^{\frac{1}{5+3\a}} \gg p^{\frac{4w_M}{5+3\a} - \frac{3}{5+3\a}} \,.
\end{equation}
In other words, $|AAA| \gg_M p^{2+\frac{w_M (14+6\a) - 13- 6\a}{5+3\a}}$.
Take $M$ sufficiently large
such that $w_M (14+6\a) - 13- 6\a >0$.
Using Theorem \ref{t:8/3-c}, we see that for any
\begin{equation}\label{f:n_1}
n\ge \frac{w_M (28+12\a)- 6}{w_M (14+6\a) - 13- 6\a}
\end{equation}
one has $A^n \cap \B \neq \emptyset$.
On the other hand, from \eqref{tmp:15.10_2}, we get
\[
|AA| = |A|K^\a \gg p^{2+ \frac{w_M(10+10\a) -10 - 9 \a}{5+3\a}} \,.
\]
Suppose that $w_M(10+10\a) -10 - 9 \a > 0$.
It can be done if $\a>0$ and if we take sufficiently large $M$.
Applying Theorem \ref{t:8/3-c} one more time, we derive that for any
\begin{equation}\label{f:n_2}
n \ge \frac{2}{3} \cdot \frac{w_M(20+20\a) - 6 \a}{w_M(10+10\a) -10 - 9 \a}
\end{equation}
one has $A^n \cap \B \neq \emptyset$.
Comparing \eqref{f:n_1} and \eqref{f:n_2}, we choose $\a$ optimally when
\[
\a^2 (120 w^2_M - 12w_M - 72) + \a(400 w_M^2 -368 w_M + 6)
+
280 w_M^2 + 180 - 500 w_M = 0
\]
and, letting $w_M \to 1$, it gives
\[
18 \a^2 + 19 \a - 20 = 0
\]
and whence
$\a = \frac{-19+\sqrt{1801}}{36} + o_M (1)$ as
$M\to +\infty$.
Hence from \eqref{f:n_1}, say, we obtain $n\ge \frac{47+\sqrt{1801}}{3} + o_M (1) > 29.81 + o_M (1)$.
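For the reader's convenience, we spell out the arithmetic behind the last bound: at $w_M=1$ the right-hand side of \eqref{f:n_1} equals
\[
\frac{28+12\a-6}{14+6\a-13-6\a} = 22+12\a \,,
\]
and for the positive root $\a = \frac{-19+\sqrt{1801}}{36}$ of $18\a^2+19\a-20=0$ one gets $22+12\a = \frac{47+\sqrt{1801}}{3} \approx 29.813$.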
Taking sufficiently large $M$,
we can choose $n=30$.
If $\a=0$, then for sufficiently large $M$ estimate \eqref{f:n_1} allows us to take $n = 23$.
This completes the proof.
$\hfill\Box$
\bigskip
Combining the arguments
above
with Theorems \ref{t:8/3-c}, \ref{t:Misha_Je_large}, we obtain Theorem \ref{t:main_intr2} from the Introduction.
Actually, if we apply the second part of Theorem \ref{t:Misha_Je_large}, then we generate the whole $\SL_2 (\F_p)$ (and this distinguishes our method from \cite{MOW_Schottky}, say).
Because in the case $\zk =2$ we use results about growth in $\SL_2 (\F_p)$ for a relatively small asymmetric set $A$ ($|A| \gg p^{2w_2} \gg p^{1.062}$),
our absolute constant $C$ is large.
It is easy to see that the arguments of this section concerning traces of the set $A$ begin to work for $w_M > 3/4$ (see Lemma \ref{l:A_cap_B},
as well as
estimates \eqref{tmp:15.10_1}, \eqref{tmp:15.10_1-}), and in this case the constant $C$ can be decreased, although it remains rather large.
Circular orbits near rotating black holes are an important issue in
astrophysics and are of interest theoretically on their own since they
possess a number of nontrivial properties. There are three main types of
such orbits: the photon orbit, the marginally bound and the marginally
stable ones. The latter is also called the innermost stable circular orbit
(ISCO). It was shown in a "classic" paper \cite{72} for the Kerr metric that
in the extremal limit, all three orbits share the same value of the
Boyer-Lindquist radial coordinate $r_{0}$ that coincides with the horizon
radius $r_{H}$. Hereafter, subscript "H" refers to the quantity taken on the
horizon. However, it was stressed in \cite{72} that it does not mean that
they coincide with the horizon. The radial coordinate becomes degenerate
there and is unsuitable for analysis. A more thorough inspection showed
that the proper distance between each orbit and the horizon and between
different orbits is not zero in the limit under discussion (moreover, the
distance between the marginally bound orbit and ISCO even diverges).
Meanwhile, some subtleties concerning the properties of these orbits
remained unnoticed until recently. As was pointed out in \cite{ted11}, the
very meaning of location of orbits depends crucially on what slice of
space-time is chosen for inspection. There are such slices that the proper
distance to the horizon tends to zero, so in the extremal limit these orbits
asymptotically approach the horizon.
Formally, $\lim_{\kappa \rightarrow 0}r_{0}=r_{H}$, where $r_{0}$ is the
radius of the orbit, $\kappa $ is the surface gravity ($\kappa =0$ for
extremal black holes). However, the situation when a black hole is extremal
from the very beginning should be considered separately. Time to time
debates on this issue have been continuing \cite{ind2}, \cite{m} in spite of
the fact that, clearly, a trajectory of a massive particle cannot lie on the
light-like surface. Detailed analysis of this circumstance was done in \cit
{circkerr} for the Kerr metric. With the help of a coordinate system
suggested below, we carry out analysis for a generic axially symmetric
rotating black hole and show that the corresponding circular orbit is indeed
fake.
Although such orbits with $r_{0}=r_{H}$ are fake, we show that the values of
black hole parameters for which the radii of these orbits coincide with
$r_{H}$ do have some physical meaning. This meaning is twofold. If circular
orbits with radii slightly above the horizon exist, then in the main
approximation the black hole parameters under discussion are not arbitrary
but take some fixed values. On the other hand, these values also
characterize dynamics, when a particle moves along the trajectory that is
not exactly circular. Then, the rate with which a particle asymptotically
approaches the horizon changes when black hole angular momentum (or another
parameter) crosses these values in the space of parameters.
The aim of the present work is to give a general description of the
equatorial near-horizon orbits in question for a generic stationary
axially-symmetric black hole. We do not specify its metric and do not assume
it to be necessarily the Kerr or Kerr--Newman one. This deviation can be
attributed to matter that inevitably surrounds a black hole in astrophysics.
In this sense, a black hole is "dirty". In the equatorial plane, we
introduce a coordinate system that generalizes the version of the coordinate
system suggested for the Kerr metric in \cite{dor}. Hopefully, this system
can be of interest not only for the issue of near-horizon circular orbits
but can have some general value. We consider separately the extremal limits
of nonextremal black holes and extremal black holes that leads to
qualitatively different results.
The properties of near-horizon circular orbits are intimately related to the
so-called Ba\~{n}ados-Silk-West (BSW) effect \cite{ban}. According to this
effect, if two particles collide near a black hole, the energy in their
centre of mass $E_{c.m.}$ can be, under certain conditions, formally
unbound. The fact that the on-horizon orbit of a massive particle is fake
turns out to be intimately connected with impossibility to gain infinite
E_{c.m.}.$ This quantity remains finite in any act of collision, although it
can become as large as one likes (this can be called "the kinematic
censorship"). The relation between the high energy collisions and properties
of ISCO close to the horizon was considered first in \cite{circkerr} for the
near-extremal Kerr metric and generalized in \cite{circ} for dirty black
holes. In the present paper, we discuss the aforementioned relation both for
near-extremal and extremal black holes. We also give geometric explanation
to the high energy collisions of particles near ISCO. This can be considered
as a counterpart of geometric explanation given for the standard BSW effect
\cite{cqg}.
We would like to stress that studies of motion of particles in the
background of the Kerr-Newman black hole has attracted interest during many
years until recently \cite{nar} - \cite{ruf3}. In general, details of motion
are quite complicated and depend crucially on the concrete properties of the
metric. In our approach, we restrict ourselves by the near-horizon region
only. This enables us to trace some features that rely on the properties of
the horizon in a model-independent way. This can be thought of as one more
manifestation of universality of black hole physics.
Throughout the paper, we put fundamental constants $G=c=1$.
\section{Coordinate system and equations of motion}
We consider the metric of the form
\begin{equation}
ds^{2}=-N^{2}dt^{2}+g_{\phi }(d\phi -\omega dt)^{2}+\frac{dr^{2}}{A}
+g_{\theta }d\theta ^{2}\text{,} \label{met}
\end{equation}
where the metric coefficients do not depend on $t$ and $\phi $.
Correspondingly, the energy $E=-mu_{0}$ and angular momentum $L=mu_{\phi }$
are conserved. Here, $m$ is the particle's mass and $u^{\mu }$ is
the four-velocity. In what follows, we restrict ourselves to motion in the
equatorial plane $\theta =\frac{\pi }{2}$ only. Then, it is convenient to
redefine the radial coordinate $r\rightarrow \rho $ in such a way that
\begin{equation}
ds^{2}=-N^{2}dt^{2}+g_{\phi }(d\phi -\omega dt)^{2}+\frac{d\rho ^{2}}{N^{2}}.
\end{equation}
Equations of motion for a test particle moving along a geodesic read
\begin{equation}
m\frac{dt}{d\tau }=\frac{X}{N^{2}}\text{,} \label{t}
\end{equation}
\begin{equation}
X\equiv E-L\omega \text{,} \label{X}
\end{equation}
\begin{equation}
m\frac{d\phi }{d\tau }=\frac{L}{g_{\phi }}+\frac{\omega X}{N^{2}}\text{,}
\label{phi}
\end{equation}
where $\tau $ is the proper time.
The forward in time condition $\frac{dt}{d\tau }>0$ requires
\begin{equation}
X\geq 0\text{,} \label{ft}
\end{equation}
where $X=0$ is possible on the horizon only, where $N=0$.
Using also the normalization condition $u_{\mu }u^{\mu }=-1$, one can infer
that
\begin{equation}
m^{2}\left( \frac{d\rho }{d\tau }\right) ^{2}=Z^{2}\equiv X^{2}-N^{2}\left( \frac{L^{2}}{g_{\phi }}+m^{2}\right) \text{.} \label{ro}
\end{equation}
We also may introduce new (barred) coordinates according to
\begin{equation}
dt=d\bar{t}-\frac{\chi d\rho }{N^{2}}\text{,} \label{tt}
\end{equation}
\begin{equation}
d\phi =d\bar{\phi}-\frac{wd\rho }{N^{2}}\text{, }\rho =\bar{\rho}\text{,}
\end{equation}
where the functions $\chi $ and $w$ depend on $\rho $ only. Then
\begin{equation}
d\phi -\omega dt=d\bar{\phi}-\omega d\bar{t}+\frac{\omega \chi -w}{N^{2}}d\rho \text{,}
\end{equation}
\begin{equation}
g_{\rho \rho }=\frac{1-\chi ^{2}}{N^{2}}+\frac{g_{\phi }(\omega \chi -w)^{2}}{N^{4}}\text{.}
\end{equation}
We choose the functions $\chi $, $w$ to kill divergences in the metric
coefficient $g_{\rho \rho }$. To this end, we require
\begin{equation}
\chi ^{2}=1-\mu N^{2}\text{,} \label{hi}
\end{equation}
\begin{equation}
\omega \chi -w=N^{2}h\text{,} \label{w}
\end{equation}
where $h(\rho )$ and $\mu (\rho )$ are bounded functions, $h(\rho _{H})\neq 0$, $\mu (\rho _{H})\neq 0$. We obtain
\begin{equation}
d\phi -\omega dt=d\bar{\phi}-\omega d\bar{t}+hd\rho \text{.}
\end{equation}
It is convenient to choose $\mu =1$, $h=-w$. Then
\begin{equation}
\chi =\sqrt{1-N^{2}}\text{, }w\chi =\omega \text{.} \label{ww}
\end{equation}
As a result,
\begin{equation}
ds^{2}=-N^{2}d\bar{t}^{2}+g_{\phi }(d\bar{\phi}-\omega d\bar{t})^{2}+(1+g_{\phi }w^{2})d\rho ^{2}-2g_{\phi }(d\bar{\phi}-\omega d\bar{t})wd\rho +2\chi d\bar{t}d\rho . \label{md}
\end{equation}
It can be also rewritten in the form
\begin{equation}
ds^{2}=-d\bar{t}^{2}+(1+g_{\phi }w^{2})\left( d\rho +\chi d\bar{t}-\frac{g_{\phi }w}{1+g_{\phi }w^{2}}d\bar{\phi}\right) ^{2}+\frac{g_{\phi }}{1+g_{\phi }w^{2}}d\bar{\phi}^{2}\text{.} \label{md2}
\end{equation}
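As a quick consistency check of (\ref{md2}) against (\ref{md}) (using (\ref{hi}) with $\mu =1$ and (\ref{ww})), the coefficient at $d\bar{t}^{2}$ in (\ref{md2}) is
\begin{equation}
-1+(1+g_{\phi }w^{2})\chi ^{2}=-1+\chi ^{2}+g_{\phi }\omega ^{2}=-N^{2}+g_{\phi }\omega ^{2}\text{,}
\end{equation}
and the coefficient at $d\bar{\phi}^{2}$ is $\frac{g_{\phi }^{2}w^{2}}{1+g_{\phi }w^{2}}+\frac{g_{\phi }}{1+g_{\phi }w^{2}}=g_{\phi }$, both in agreement with (\ref{md}).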
For the energy and angular momentum we have
\begin{equation}
\mathcal{E}\equiv \frac{E}{m}=\frac{\bar{E}}{m}=-g_{\bar{t}\bar{t}}\frac{d\bar{t}}{d\tau }+\omega g_{\phi }\frac{d\bar{\phi}}{d\tau }-(\omega g_{\phi }w+\chi )\frac{d\rho }{d\tau }\text{,} \label{E}
\end{equation}
\begin{equation}
\mathcal{L}\equiv \frac{L}{m}=\frac{\bar{L}}{m}=g_{\phi }\frac{d\bar{\phi}}{d\tau }-\omega g_{\phi }\frac{d\bar{t}}{d\tau }-g_{\phi }w\frac{d\rho }{d\tau }. \label{L}
\end{equation}
These quantities can be also written as $\mathcal{E}=-u_{\mu }\xi ^{\mu }$,
$\mathcal{L}=u_{\mu }\eta ^{\mu }$, where $\xi ^{\mu }$ and $\eta ^{\mu }$
are Killing vectors responsible for time translation and azimuthal rotation,
respectively, $u^{\mu }=\frac{dx^{\mu }}{d\lambda }$, $\lambda $ is the
affine parameter along the geodesic. Therefore, the expressions (\ref{E}),
(\ref{L}) for $\mathcal{E}$ and $\mathcal{L}$ are valid both in the massive
and massless cases.
Let us consider motion in the inward direction, so $\frac{d\rho }{d\tau }<0$. It follows from (\ref{ro}) and (\ref{tt}) that
\begin{equation}
m\frac{d\bar{t}}{d\tau }=\frac{X-\chi Z}{N^{2}}\text{,} \label{tz}
\end{equation}
\begin{equation}
\frac{d\bar{\phi}}{d\tau }=\frac{L}{mg_{\phi }}+\frac{\omega X-wZ}{mN^{2}}\text{.} \label{phi1}
\end{equation}
One finds from (\ref{tz})
\begin{equation}
\bar{t}=-\int \frac{d\rho (X-\chi Z)}{ZN^{2}}\text{.} \label{int}
\end{equation}
If $X_{H}\neq 0$, near the horizon $Z\approx X_{H}+O(N^{2})$, so the
integral converges. Thus, in contrast to the time $t$ (analogue of the
Boyer-Lindquist time), the time $\bar{t}$ required to reach the horizon is
finite. Only for a special trajectory with $X_{H}=0$ (so-called critical
particles, see below) is $\bar{t}$ infinite.
In the particular case $L=0$, $E=m$ we have from (\ref{tz}), (\ref{ro}) and
(\ref{ww}) or from (\ref{E}), (\ref{L}) that
\begin{equation}
\frac{d\bar{t}}{d\tau }=1\text{, }\frac{d\bar{\phi}}{d\tau }=0\text{,}
\end{equation}
so $\bar{\phi}=const$, $\bar{t}$ coincides with the proper time, that agrees
with the form of the metric (\ref{md2}). It generalizes the corresponding
property of the Kerr metric in Doran coordinates \cite{dor}, \cite{ted11}.
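For the record, this can be checked directly: here $X=m$, while (\ref{ro}) and (\ref{ww}) give $Z=m\sqrt{1-N^{2}}=m\chi $, so that (\ref{tz}) and (\ref{phi1}) yield
\begin{equation}
\frac{d\bar{t}}{d\tau }=\frac{m-m\chi ^{2}}{mN^{2}}=1\text{, \ }\frac{d\bar{\phi}}{d\tau }=\frac{\omega m-wm\chi }{mN^{2}}=0\text{.}
\end{equation}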
As now $g_{\rho \rho }$ is bounded near the horizon, the proper distance to
the horizon remains finite on the slice $\bar{t}=const$, including the
extremal limit.
\section{Kerr metric}
In the particular case of the Kerr metric, we have in the equatorial plane
$\theta =\frac{\pi }{2}$
\begin{equation}
\frac{d\rho }{dr}=\sqrt{\frac{r^{3}}{r^{3}+ra^{2}+2Ma^{2}}}\text{,}
\end{equation}
\begin{equation}
N^{2}=\frac{\Delta r^{2}}{(r^{2}+a^{2})^{2}-a^{2}\Delta }=\frac{r(r^{2}-2Mr+a^{2})}{r^{3}+ra^{2}+2Ma^{2}}\text{,}
\end{equation}
\begin{equation}
g_{\phi }=r^{2}+a^{2}+\frac{2M}{r}a^{2}\text{,}
\end{equation}
\begin{equation}
\omega =\frac{2Ma}{r^{3}+ra^{2}+2Ma^{2}}\text{.}
\end{equation}
Here, $M$ is a black hole mass, $a=\frac{J}{M}$, $J$ being a black
hole angular momentum. The choice
\begin{equation}
w=\frac{a}{r^{2}+a^{2}}\sqrt{\frac{2M(r^{2}+a^{2})}{r^{3}+ra^{2}+2Ma^{2}}}
\end{equation}
satisfies eqs. (\ref{hi}), (\ref{w}). Substituting it into (\ref{md2}), we
obtain our metric in the form
\begin{equation}
ds^{2}=-d\bar{t}^{2}+(\alpha ^{-1}\beta dr+\alpha (d\bar{t}-ad\bar{\phi}))^{2}+(r^{2}+a^{2})d\bar{\phi}^{2},
\end{equation}
\begin{equation}
\beta ^{2}=\frac{2Mr}{r^{2}+a^{2}}\text{, }\alpha ^{2}=\frac{2M}{r}\text{,}
\end{equation}
that corresponds to eq. (18) of \cite{dor} and eqs. (2), (3) of \cite{ted11}
in which one should put $\theta =\frac{\pi }{2}$.
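As a quick check of this choice of $w$ (all quantities as defined above), one verifies (\ref{hi}) with $\mu =1$ and (\ref{ww}) directly:
\begin{equation}
1-N^{2}=\frac{2M(r^{2}+a^{2})}{r^{3}+ra^{2}+2Ma^{2}}=\chi ^{2}\text{, \ }w\chi =\frac{a}{r^{2}+a^{2}}\cdot \frac{2M(r^{2}+a^{2})}{r^{3}+ra^{2}+2Ma^{2}}=\frac{2Ma}{r^{3}+ra^{2}+2Ma^{2}}=\omega \text{.}
\end{equation}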
\section{Circular orbits for near-extremal black holes}
Circular orbits ($r=r_{0}=const$) are characterized by the
conditions
\begin{equation}
Z^{2}(r_{0})=0, \label{z0}
\end{equation}
\begin{equation}
\left( \frac{dZ^{2}}{dr}\right) _{r=r_{0}}=0\text{,} \label{1}
\end{equation}
where $Z^{2}$ is defined in (\ref{ro}) and we returned from $\rho $ to $r$.
For a black hole with arbitrary parameters, the solution of Eqs. (\ref{z0}), (\ref{1}) is rather complicated even in the simplest case of the Kerr
metric. For a generic dirty black hole (\ref{met}) it is impossible to find
the solution analytically at all. However, it turns out that if a
nonextremal black hole is close to its extremal state, there are some
general features that enabled us to develop a general approach to the
analysis of such orbits \cite{circ}. If the surface gravity $\kappa =\frac{1}{2}(\frac{\partial N^{2}}{\partial \rho })_{\rho =\rho _{H}}$ of a black
hole is small ($\kappa =0$ corresponds to the extremal state), then, it
turns out that for ISCO
\begin{equation}
N\sim r_{0}-r_{H}=O(\kappa ^{2/3}), \label{23}
\end{equation}
and, for the photon and marginally bound near-horizon orbits
\begin{equation}
N\sim r_{0}-r_{H}=O(\kappa )\text{.} \label{k1}
\end{equation}
These relations follow from eqs. (46) and (61) of \cite{circ},
correspondingly.
Thus in the extremal limit $\kappa \rightarrow 0$, the corresponding radius
approaches the horizon, $r_{0}\rightarrow r_{H}$. However, in doing so, the
proper distance within the slice $t=const$ between the horizon and the ISCO
\begin{equation}
l=O(\ln \frac{1}{\kappa }),
\end{equation}
as is shown in \cite{circ}. Between the horizon and the marginally bound or
photon orbit
\begin{equation}
l=O(1)\text{.}
\end{equation}
These results agree with \cite{72}.
If, instead of $t=const$, we choose the slice $\bar{t}=const,$ the metric is
regular, $g_{\rho \rho }$ is finite in the vicinity of the horizon, so the
proper distance
\begin{equation}
\lim_{\kappa \rightarrow 0}l=0
\end{equation}
that generalizes the observation of Ref. \cite{ted11}.
One reservation is in order. The near-horizon ISCO in the background of
near-extremal black holes does not always exist but only under some additional
constraints on the metric parameters. See Ref. \cite{circ} and Secs. VII and
IX below. In particular, for the near-extremal Reissner-Nordstr\"{o}m metric
the ISCO cannot lie in the vicinity of the horizon (if we restrict ourselves
to geodesic trajectories).
\section{Geometric properties}
The properties of near-horizon orbits can be considered from a more general
viewpoint. And this reveals a relation between the issue under discussion and
another issue - namely, the so-called Ba\~{n}ados-Silk-West (BSW) effect (see
below). Let us expand the four-velocity of a particle with respect to the
basis that contains two lightlike vectors $l^{\mu }$ and $N^{\mu }$ (so
$l_{\mu }l^{\mu }=0=N_{\mu }N^{\mu }$) and two spatial vectors. For the case
under discussion, when a particle follows a circular equatorial orbit,
$r=const$ and also $\theta =const$, so as a matter of fact it is sufficient
to use only two basis vectors $l^{\mu }$ and $N^{\mu }$. We normalize them
in such a way that
\begin{equation}
l^{\mu }N_{\mu }=-1\text{.}
\end{equation}
Then
\begin{equation}
u^{\mu }=\beta N^{\mu }+\frac{1}{2\alpha }l^{\mu }\text{.} \label{u}
\end{equation}
It is convenient to choose
\begin{equation}
l^{\mu }=(1,0,0,\omega +\frac{N}{\sqrt{g_{\phi }}}),
\end{equation}
\begin{equation}
N^{\mu }=\frac{1}{2N^{2}}(1,0,0,\omega -\frac{N}{\sqrt{g_{\phi }}})\text{,}
\end{equation}
$x^{\mu }=(t,r,\theta ,\phi )$. In the horizon limit
\begin{equation}
l^{\mu }\rightarrow \xi ^{\mu }+\omega _{H}\eta ^{\mu }\text{,}
\end{equation}
where $\xi ^{\mu }$ and $\eta ^{\mu }$ are the Killing vectors responsible
for time translation and rotation, respectively.
Equations of motion for a geodesic particle (\ref{t}) - (\ref{phi}) tell us
that on the circular orbit
\begin{equation}
X=N\sqrt{\frac{L^{2}}{g_{\phi }}+m^{2}}\text{,} \label{xc}
\end{equation}
\begin{equation}
\beta =\frac{1}{m}(X-\frac{LN}{\sqrt{g_{\phi }}})\text{,}
\end{equation}
\begin{equation}
\frac{1}{\alpha }=\frac{1}{mN^{2}}(X+\frac{NL}{\sqrt{g_{\phi }}})\text{.}
\end{equation}
Thus, for small $N$, we have from (\ref{xc}) that
\begin{equation}
\beta =pN=O(N)\text{,} \label{be}
\end{equation}
\begin{equation}
\alpha =qN=O(N)\,\text{,} \label{al}
\end{equation}
where the coefficients are equal to
\begin{equation}
p=\frac{1}{m}\left( \sqrt{\frac{L^{2}}{g_{\phi }}+m^{2}}-\frac{N}{\sqrt{g_{\phi }}}\right) _{H}=O(1)\text{,}
\end{equation}
\begin{equation}
q^{-1}=\frac{1}{m}\left( \sqrt{\frac{L^{2}}{g_{\phi }}+m^{2}}+\frac{N}{\sqrt{g_{\phi }}}\right) _{H}=O(1).
\end{equation}
Also, it follows from (\ref{t}) - (\ref{phi}) that
\begin{equation}
\frac{d\phi }{dt}=\frac{u^{\phi }}{u^{t}}=\omega _{H}+O(N)\text{,}
\end{equation}
\begin{equation}
\frac{l^{\phi }}{l^{t}}=\omega _{H}+O(N)\text{.}
\end{equation}
In this sense, the orbit does indeed become asymptotically parallel to the
horizon generator, for which $\frac{d\phi }{dt}=\omega _{H}$. However,
although the coefficient at $l^{\mu }$ (which, in turn, approaches the
horizon generator) is much larger than that at $N^{\mu }$, both terms in
(\ref{u}) give comparable contributions to the norm of $u^{\mu }$ \cite{gp}
because the vectors themselves are null, $l_{\mu }l^{\mu }=N_{\mu }N^{\mu }=0$.
\section{Extremal black hole and fake trajectory on horizon}
In the previous section, we discussed the limit $\kappa \rightarrow 0$. What
happens if $\kappa =0$ from the very beginning? Formally, one is tempted to
put $\kappa =0$ in (\ref{23}), (\ref{k1}). However, the process of
derivation in \cite{circ} essentially implied that although $\kappa $ is
small, $\kappa \neq 0.$ For $\kappa =0$, the asymptotic expansion of the
metric coefficient has another form as compared to the nonextremal case, so
it is necessary to proceed anew. According to general properties, for the
extremal black holes the expansion of the metric coefficient $\omega $ reads
\cite{t}
\begin{equation}
\omega =\omega _{H}-B_{1}N+B_{2}N^{2}+O(N^{3})\text{,} \label{omb}
\end{equation}
whence one obtains from (\ref{X}) that
\begin{equation}
X=X_{H}+(B_{1}N-B_{2}N^{2})L+O(N^{3})\text{,} \label{xh}
\end{equation}
where $B_{1}$ and $B_{2}$ are model-dependent coefficients.
If $X_{H}=0$, it seems that there exists an exact solution of eqs. (\ref{z0}),
(\ref{1}) that reads $N=0$, $X=0$. It \textit{would seem} that it describes
the trajectory that lies within the horizon. However, it contradicts the
fact that the time-like trajectory cannot lie on the null surface.
In Introduction in \cite{m}, the attempt was made to assign a meaning to
such orbits on the extremal horizon. It is based on the results of
\cite{ted11}, where it was revealed that the proper distance to the horizon
depends essentially on a slice (see also Sec. II - IV above). Then, an orbit
can in a sense lie on the horizon, if the slice $\bar{t}=const$ of the Doran
coordinate system \cite{dor} is chosen. However, for the situation discussed
in \cite{ted11}, black holes are near-extremal but not exactly extremal,
there is a parameter $\kappa $ that can be as small as one likes but
nonzero. The proper distance between the circular orbit and the horizon
remains finite for any $\kappa $ on both types of slices ($t=const$ or
$\bar{t}=const$). Only asymptotically, in the limit $\kappa \rightarrow 0$, the
proper distance to the horizon on the slice $\bar{t}=const$ approaches zero,
so the claim "the circular orbit lies on the horizon" is to be understood in
the asymptotic sense. But now, for extremal black holes, $\kappa =0$ exactly
from the very beginning, so the reference to the approach of \cite{ted11}
does not save the matter. Here, the situation should be considered anew.
Actually, we have in (\ref{ro}) with $\frac{d\rho }{d\tau }=\frac{dr}{d\tau }=0$ the uncertainty of the type $\frac{0}{0}$. To resolve this
uncertainty, the original coordinates are insufficient since they become
degenerate on the horizon. Instead, we use the barred coordinates in which
the metric takes the form (\ref{md}).
Below, we generalize the approach of Sec. III C of \cite{circkerr}, where this
issue was considered for the Kerr metric. For the circular orbit, $\frac{d\rho }{d\tau }=0$. On the horizon, $N=0$, $g_{\bar{t}\bar{t}}=g_{\phi }\omega _{H}^{2}$. After substitution into (\ref{E}), (\ref{L}), we see that
in this case
\begin{equation}
\mathcal{E}-\omega _{H}\mathcal{L}=0\text{.} \label{crit}
\end{equation}
Then, direct calculation gives us
\begin{equation}
u_{\mu }u^{\mu }=\mathcal{E}^{2}\frac{1}{g_{\phi }\omega _{H}^{2}}\text{.}
\end{equation}
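In more detail (a short verification, using only (\ref{md}) with $N=0$, $\chi =1$ and $d\rho =0$):
\begin{equation}
u_{\mu }u^{\mu }=g_{\phi }\left( \frac{d\bar{\phi}}{d\tau }-\omega _{H}\frac{d\bar{t}}{d\tau }\right) ^{2}=\frac{\mathcal{L}^{2}}{g_{\phi }}=\frac{\mathcal{E}^{2}}{g_{\phi }\omega _{H}^{2}}\text{,}
\end{equation}
where the second equality follows from (\ref{L}) and the third one from (\ref{crit}).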
As $u_{\mu }u^{\mu }=-1$ for time-like curves and $u_{\mu }u^{\mu }=0$ for
lightlike ones, we conclude that for trajectories of physical particles the
only possibility is $\mathcal{E}=0=\mathcal{L}$, and the curve must be
light-like. Thus the time-like on-horizon orbit is fake.
We also infer from (\ref{E}) and (\ref{L}) that
\begin{equation}
\frac{d\bar{\phi}}{d\bar{t}}=\omega _{H}\text{.}
\end{equation}
Actually, it means that for massless particles, the trajectory in question
coincides with the horizon generator.
\section{Circular orbits for near-critical particles}
In what follows, we use terminology borrowed from works on studies of the
BSW effect. We call a particle critical if $X_{H}=0$, and usual if $X_{H}>0$
is generic (not small). If $X_{H}\neq 0$ but is small, we call a particle near-critical.
It follows from (\ref{crit}) that a particle whose trajectory lies on the
horizon is necessarily critical. Such a trajectory can be realized for
a photon and is forbidden for massive particles. Meanwhile, although a
trajectory that lies exactly on the horizon is impossible for massive
particles, near-horizon circular orbits are allowed, if corresponding
particles are near-critical (not exactly critical), as we will see below.
It is convenient to use $N$ as a radial coordinate. Then, the analogue of eq.
(\ref{1}) reads
\begin{equation}
(\frac{dZ^{2}}{dN})_{N=N_{0}}=0\text{,} \label{zn}
\end{equation}
where $N_{0}$ corresponds to the circular orbit. Then, it follows from
(\ref{z0}) and (\ref{zn}) that
\begin{equation}
X_{0}=N_{0}\sqrt{Y_{0}}\text{,} \label{xn}
\end{equation}
\begin{equation}
\frac{1}{2}\left( \frac{dZ^{2}}{dN}\right) _{0}=N_{0}\left( B\sqrt{Y}L-Y-\frac{N}{2}\frac{dY}{dN}\right) _{0}\text{,} \label{1d}
\end{equation}
where
\begin{equation}
Y=\frac{L^{2}}{g}+m^{2}\text{,} \label{Y}
\end{equation}
\begin{equation}
B=-\frac{d\omega }{dN}\text{,}
\end{equation}
subscript "0" means that the corresponding quantities are calculated on the
circular orbit.
To avoid fake orbits with $N_{0}=0$, we are interested in the solution with
$N_{0}\neq 0$, so we have
\begin{equation}
(B\sqrt{Y}-Y-\frac{N}{2}\frac{dY}{dN})_{0}=0\text{.} \label{b}
\end{equation}
Thus there are two equations (\ref{xn}) and (\ref{b}) for three unknowns
$X_{0}$, $N_{0}$, $L$. We can fix, say, $N_{0}$ and, in principle, calculate
$X_{0}$ and $L_{0}$ (or $E$ and $L_{0}$). Eq. (\ref{b}) is exact.
If we are interested in just near-horizon orbits, we must require
$N_{0}\approx 0$ and solve the equations iteratively in the form of the
Taylor expansion
\begin{equation}
L=L_{0}+L_{1}N_{0}+L_{2}N_{0}^{2}+... \label{ln}
\end{equation}
In the main approximation in which corrections due to small $N_{0}$ are
discarded, we obtain
\begin{equation}
B_{1}L_{0}=\sqrt{Y_{H}}=\sqrt{\frac{L_{0}^{2}}{g_{H}}+m^{2}}\text{,}
\label{s}
\end{equation}
where $B_{1}$ is the coefficient entering the expansion (\ref{omb}). Also,
in the same approximation one can take $X_{H}\approx 0$ according to
(\ref{xn}), whence
\begin{equation}
L_{0}=\frac{E}{\omega _{H}}\text{.} \label{le}
\end{equation}
It follows from (\ref{s}), (\ref{le}) that
\begin{equation}
L_{0}=\frac{m}{\sqrt{B_{1}^{2}-\frac{1}{g_{H}}}}\text{, }E=\frac{m\omega _{H}}{\sqrt{B_{1}^{2}-\frac{1}{g_{H}}}}\text{,} \label{lme}
\end{equation}
where we assume that $m\neq 0$ (for the case $m=0$, see below). The formulas
(\ref{lme}) imply that
\begin{equation}
B_{1}\sqrt{g_{H}}>1\text{.} \label{bg}
\end{equation}
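For completeness: (\ref{lme}) follows by squaring (\ref{s}), which gives $B_{1}^{2}L_{0}^{2}=\frac{L_{0}^{2}}{g_{H}}+m^{2}$, i.e. $L_{0}^{2}\left( B_{1}^{2}-\frac{1}{g_{H}}\right) =m^{2}$, combined with (\ref{le}); positivity of the right-hand side then requires exactly (\ref{bg}).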
If (\ref{s}) or (\ref{bg}) is violated, there are no circular near-horizon
orbits for massive particles. Eq. (\ref{s}) constrains black hole
parameters, for example the angular momentum of a black hole. In particular,
for the Kerr-Newman black hole this leads to some distinct values of the
black hole angular momentum \cite{m} (see also Secs. VII and IX below). In
particular, for the Reissner-Nordstr\"{o}m metric $\omega =0=B_{1}$, eq.
(\ref{bg}) is violated and this means that the near-horizon ISCO does not
exist in this case.
For different types of orbits eq. (\ref{s}) or (\ref{lme}) leads to
different relations.
\subsection{Photon orbit}
Putting $m=0$ in (\ref{s}), we obtain
\begin{equation}
B_{1}^{2}=\frac{1}{g_{H}}\text{.} \label{phg}
\end{equation}
\subsection{Marginally bound orbit}
Putting $E=m$, in the zero approximation we obtain from (\ref{z0}) that
\begin{equation}
L_{0}=\frac{m}{\omega _{H}}\text{.} \label{mbg}
\end{equation}
Then, (\ref{s}) gives us
\begin{equation}
B_{1}^{2}=\frac{1}{g_{H}}+\omega _{H}^{2}\text{.} \label{mbge}
\end{equation}
\subsection{ISCO}
By definition, the condition
\begin{equation}
\left( \frac{d^{2}Z^{2}}{dN^{2}}\right) _{0}=0 \label{2d}
\end{equation}
should be satisfied for this orbit in addition to (\ref{z0}) and (\ref{zn})
since eq. (\ref{2d}) gives the boundary between stable and unstable orbits.
In the main approximation, $\frac{1}{2}\left( \frac{d^{2}Z^{2}}{dN^{2}}\right) _{0}\approx N_{0}L^{2}(\frac{1}{g^{2}}\frac{dg}{dN}-2B_{2}B_{1})_{0}$, so we have
\begin{equation}
S\equiv (\frac{1}{g^{2}}\frac{dg}{dN}-2B_{2}B_{1})_{H}=0\text{,} \label{z02}
\end{equation}
where we neglected in (\ref{z02}) the difference between the quantities
calculated on the circular orbit and on the horizon. In addition to
(\ref{z02}), eq. (\ref{lme}) should be satisfied.
\subsection{Estimate of $X_{H}$}
It is instructive to find $X_{H}$ for all types of orbits under discussion.
It follows from (\ref{xh}), (\ref{xn}) and (\ref{Y}) that
\begin{equation}
X_{H}=N_{0}(\sqrt{Y}-B_{1}L)+O(N_{0}^{2})\text{.}
\end{equation}
The terms of the order $O(N_{0})$ mutually cancel, so
\begin{equation}
X_{H}=O(N_{0}^{2})\text{.} \label{xoh}
\end{equation}
Thus the particle that moves on the circular orbit near the extremal black
hole turns out to be not only near-critical but should have anomalously
small $X_{H}$ to keep following such an orbit.
\subsection{Extremal Kerr-Newman black hole}
For the equatorial orbit ($\theta =\frac{\pi }{2}$) in the extremal
Kerr-Newman background, one can calculate the relevant quantities (using,
say, the Boyer-Lindquist coordinates) and find
\begin{equation}
g_{H}=\frac{(M^{2}+a^{2})^{2}}{M^{2}}\text{,} \label{gh}
\end{equation}
\begin{equation}
\left( \frac{dg}{dN}\right) _{H}=\frac{2(M^{2}-a^{2})(M^{2}+a^{2})^{2}}{M^{4}}\text{,}
\end{equation}
\begin{equation}
B_{1}=\frac{2a}{M^{2}+a^{2}}\text{,} \label{b1m}
\end{equation}
\begin{equation}
B_{2}=\frac{a^{3}}{M^{2}(M^{2}+a^{2})}\text{,}
\end{equation}
\begin{equation}
\omega _{H}=\frac{a}{M^{2}+a^{2}}\text{,} \label{okn}
\end{equation}
\begin{equation}
S=\frac{2(M^{2}-2a^{2})}{M^{2}(M^{2}+a^{2})}\text{.} \label{S}
\end{equation}
Then, one can find from eqs. (\ref{s}), (\ref{le}), (\ref{S}) that $a=a_{0}$,
where
\begin{equation}
\frac{a_{0}}{M}=\frac{1}{2},\frac{1}{\sqrt{3}},\frac{1}{\sqrt{2}} \label{a0}
\end{equation}
for the photon orbit, the marginally bound one and the ISCO, respectively.
These values agree with \cite{bal}, \cite{m}.
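For instance (a direct check of the first value), substitution of (\ref{gh}) and (\ref{b1m}) into (\ref{phg}) gives
\begin{equation}
\frac{4a^{2}}{(M^{2}+a^{2})^{2}}=\frac{M^{2}}{(M^{2}+a^{2})^{2}}\text{,}
\end{equation}
i.e. $a_{0}=\frac{M}{2}$. Similarly, (\ref{mbge}) reduces to $4a^{2}=M^{2}+a^{2}$, i.e. $a_{0}=\frac{M}{\sqrt{3}}$, while $S=0$ in (\ref{S}) gives $M^{2}=2a_{0}^{2}$.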
\section{Dynamics of critical particles}
In the previous section we saw that for a circular orbit to exist in the
immediate vicinity of the horizon, the relations (\ref{s}), (\ref{ln})
should be satisfied. It \textit{would seem} that, as these relations imply
N_{0}=0$, the corresponding orbit lies exactly on the horizon. However, we
already know that such an orbit is fake for massive particles. We may look
for the solution for circular orbit in a series with respect to $N_{0}$.
Then, apart from (\ref{ln}), a similar expansion should hold, say, for the
parameter $a$ (if, for definiteness, we consider the Kerr-Newman metric):
\begin{equation}
a_{c}=a_{0}+a_{1}N_{0}+O(N_{0}^{2})\text{,} \label{ac}
\end{equation}
where $a_{0}$ is the aforementioned value, different for different kinds of
orbits, $a_{c}$ corresponds to the circular orbit.
Let, say, $a_{1}<0$ but a black hole has $a>a_{0}$. Then, the contradiction
with (\ref{ac}) tells us that for $a>a_{0}$ the circular orbits do not exist
near the horizon at all. For example, for the Kerr-Newman metric there is an
exact solution describing the circular photon orbit \cite{bal} $r_{0}=2M-2a$
that can be rewritten as
\begin{equation}
a=\frac{M}{2}-\frac{1}{2}(r_{0}-M)<\frac{M}{2}
\end{equation}
for any orbit above the horizon. If $a>\frac{M}{2}$, such a solution ceases
to exist.
Formally, then it follows from (\ref{1d}) that the only solution is $N_{0}=0$
(the orbit exactly on the horizon). However, we reject this case since for
massive particles it is impossible and for massless ones it is already
described above. Thus we are led to the conclusion that, in the absence of
circular orbits, a particle should move. We assume that it moves towards a
black hole.
To probe dynamics in this situation, it is instructive to select particles
which are exactly critical ($X_{H}=0$) since it is this value that was a
"candidate" for a circular orbit on the horizon.
It is convenient to expand $Z^{2}$ in powers of $N$
\begin{equation}
Z^{2}=z_{2}N^{2}+z_{3}N^{3}+... \label{zx}
\end{equation}
Here, the coefficients
\begin{equation}
z_{2}=(B_{1}^{2}-\frac{1}{g_{H}})L^{2}-m^{2}\text{,}
\end{equation}
\begin{equation}
z_{3}=[\frac{1}{g_{H}^{2}}(\frac{dg}{dN})_{H}-2B_{1}B_{2}]L^{2}\text{,}
\end{equation}
correspond just to (\ref{s}), (\ref{z02}), but now they, in general, do
not vanish.
It is also convenient to write
\begin{equation}
N=F(\rho )(\rho -\rho _{H})=H(r)(r-r_{H})\text{, }F(\rho _{H})\equiv
F_{1}\neq 0\text{, }H(r_{H})=H_{1}\neq 0\text{.}
\end{equation}
Then, for $z_{2}>0$ we obtain from (\ref{ro})
\begin{equation}
\frac{dN}{d\tau }\approx -F_{1}N\sqrt{z_{2}}\text{,}
\end{equation}
\begin{equation}
r-r_{H}\approx r_{1}\exp (-F_{1}\sqrt{z_{2}}\tau )\text{,}
\end{equation}
where $r_{1}$ is another constant.
Let now $z_{2}=0$, $z_{3}>0$. In a similar way, we find from (\ref{ro}) that
\begin{equation}
\frac{1}{F}\frac{dN}{d\tau }\approx -\sqrt{z_{3}}N^{3/2}\text{,}
\end{equation}
\begin{equation}
r-r_{H}\approx \frac{4}{F_{1}^{2}H_{1}z_{3}\tau ^{2}}.
\end{equation}
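In more detail, integration of the previous equation gives $\frac{d}{d\tau }N^{-1/2}\approx \frac{F_{1}\sqrt{z_{3}}}{2}$, so that for large $\tau $
\begin{equation}
N\approx \frac{4}{F_{1}^{2}z_{3}\tau ^{2}}\text{, \ }r-r_{H}=\frac{N}{H_{1}}\text{,}
\end{equation}
in agreement with the general power-like law below.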
In general, if $X_{H}=0$ and $z_{2}(r_{H})=z_{3}(r_{H})=\dots =z_{k}(r_{H})=0$,
$z_{k+1}>0$,
\begin{equation}
r-r_{H}\sim \lambda ^{-\frac{2}{k-1}}\text{,}
\end{equation}
where $\lambda $ is the affine parameter (proper time for a massive
particle).
\section{Circular orbits in vicinity of near-extremal black holes}
We saw that the circular orbits exist for selected values of black hole
parameters only (say, the discrete values of the angular momentum $a=a_{0}$). In the main approximation, these values do not depend on the position of
the near-horizon orbit, in the next approximation $a_{0}$ acquires small
corrections of the order $N_{0}$. This is in sharp contrast with the
situation for near-extremal black holes, when the surface gravity $\kappa $
is small but nonzero. It is shown in \cite{circ} that for such black holes
circular orbits almost always exist, under rather weak restriction that
black hole parameters obey some inequalities (see below). For example, for
the case of the Kerr-Newman metric it means that $M^{2}$ is to be close to
$a^{2}+Q^{2}$ (where $Q$ is the black hole electric charge) but the ratio
$\frac{a}{M}$ is arbitrary in some finite interval. Thus, the situation for
extremal black holes in the aspects under discussion cannot be considered as
a limit $\kappa \rightarrow 0$ of that for near-extremal black holes.
Actually, we have two distinct situations.
1) Near-extremal black holes. Here, black hole parameters are free ones that
change continuously in some interval. The radius of the circular orbit $r_{0}$
near the horizon is not arbitrary but is defined by black hole parameters
from equilibrium conditions \cite{circ}.
2) Extremal black holes. Here, black hole parameters are fixed but the
quantity $N_{0}$ that characterizes the location of the orbit is a small
free parameter.
In a sense, both cases are complementary to each other.
To gain further insight into how this happens, let us consider briefly the case of
near-extremal black holes (for more details one can consult Ref. \cite{circ}).
Near the horizon
\begin{equation}
N^{2}\approx 2\kappa x+Dx^{2}\text{,} \label{nx}
\end{equation}
\begin{equation}
\omega \approx \omega _{H}-b_{1}x\text{,} \label{ob}
\end{equation}
where $x=\rho -\rho _{H}$, $\kappa $ is a small but nonzero parameter, $D$
and $b_{1}$ are constants. Then, eqs. (\ref{z0}), (\ref{1}) give us, in the
main approximation (say, for the marginally bound orbit),
\begin{equation}
\sqrt{2\kappa x+Dx^{2}}b_{1}L=(\kappa +Dx)\sqrt{Y_{H}}\text{.} \label{yd}
\end{equation}
Taking into account (\ref{omb}), (\ref{ob}), (\ref{nx}), we see that
$b_{1}=B_{1}\sqrt{D}$, whence
\begin{equation}
B_{1}L=\sqrt{Y_{H}}\frac{\kappa +Dx}{\sqrt{2\kappa Dx+D^{2}x^{2}}}.
\label{dk}
\end{equation}
If $\kappa \neq 0$, it follows from (\ref{dk}) that
\begin{equation}
B_{1}L>\sqrt{Y_{H}}\text{.} \label{by}
\end{equation}
In view of (\ref{Y}), it is seen that for the existence of the circular orbit,
(\ref{bg}) should be satisfied. It is worth noting that eq. (\ref{by})
corresponds to eq. (44) of \cite{circ} in which one should require
positivity of the quantity $L_{0}^{2}$.
We can make the substitution $x=\kappa \alpha /D$ and obtain the equation
with respect to $\alpha $
\begin{equation}
\sqrt{\alpha ^{2}+2\alpha }B_{1}L_{0}=(1+\alpha )\sqrt{Y_{H}}. \label{ay}
\end{equation}
For the marginally bound orbit, we should insert $L_{0}=\frac{m}{\omega _{H}}
$ in (\ref{ay}). Its solution gives us $\alpha $ as a quite definite
function of black hole parameters $B_{1}$, $\omega _{H}$, $g_{H},$ so the
location of the orbit near the horizon is fixed. This is done for any small
$\kappa $.
In a similar way, for the photon orbit we substitute $m=0$ into (\ref{ay}).
Then, we have another equation for $\alpha $
\begin{equation}
\sqrt{\alpha ^{2}+2\alpha }B_{1}=\frac{1+\alpha }{\sqrt{g_{H}}}\text{.}
\label{phne}
\end{equation}
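For orientation, for the Kerr-Newman metric one can use the extremal values (\ref{gh}), (\ref{b1m}) as the leading approximation, so that $B_{1}\sqrt{g_{H}}=\frac{2a}{M}$, and (\ref{phne}) is then solved explicitly:
\begin{equation}
\alpha =-1+\frac{B_{1}\sqrt{g_{H}}}{\sqrt{B_{1}^{2}g_{H}-1}}=-1+\frac{2a}{\sqrt{4a^{2}-M^{2}}}\text{,}
\end{equation}
which is positive and finite precisely when (\ref{bg}) holds, i.e. for $a>\frac{M}{2}$.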
By contrast, if $\kappa =0$ exactly, we obtain from (\ref{yd}) the equation
$x(B_{1}L-\sqrt{Y_{H}})=0$. As the orbit does not lie on the horizon, $x\neq 0$, and we infer from it that the condition (\ref{s}) is to be satisfied.
Thus we see that a nontrivial play of small quantities $x$ and $\kappa $
takes place in such a way that the same eqs. (\ref{z0}), (\ref{1}) constrain
the different entities for near-extremal and extremal black holes. In the
first case, it is the location of the circular orbit, in the second one it
is the restriction on black hole parameters.
Let us consider now the situation for the ISCO. Then, direct calculation of
the second derivative in eq. (\ref{2d}) shows that eqs. (\ref{s}), (\ref{lme})
are indeed satisfied. (This corresponds to eq. 37 of \cite{circ}.) Now $E$
and $L_{0}$ themselves can be found from these equations and give rise to
eq. (\ref{lme}). Eq. (\ref{1}) for the ISCO requires more terms in the
expansion than in (\ref{yd}) and leads to (\ref{23}), as is shown in
\cite{circ} in agreement with the previous results on the Kerr metric
\cite{circkerr}.
For all three types of orbits, inequality (\ref{bg}) on black hole
parameters should be satisfied but, unlike the extremal case, it does not
select any discrete values of them. Say, for the Kerr-Newman metric one can
only infer from (\ref{bg}) that $\frac{a}{M}>\frac{1}{2}$.
Let us summarize which equations govern the behavior of which orbits. For
near-extremal black holes, these are eq. (\ref{ay}) with $E=m$ for the
marginally bound orbit or eq. (\ref{phne}) for the photon orbit. For the
ISCO, this is eq. (\ref{s}) and one more equation that is the consequence of
(\ref{z0}) and (\ref{1}). It looks more complicated and can be found in eqs.
(36), (39) - (41) of \cite{circ}. In all three cases one finds $x_{0}$ as
a function of the black hole parameters.
For pure extremal black holes the equations under discussion are (\ref{s})
with $E=m$ (the marginally bound orbit) or (\ref{s}) with $m=0$ (the photon
orbit). For the ISCO these are (\ref{s}) and eq. (\ref{z02}) (where for the
Kerr-Newman metric $S$ should be taken from (\ref{S})). The
aforementioned equations give constraints on black hole parameters. The
radius of the orbit is arbitrary but in the near-horizon region is
restricted by the condition $N_{0}\ll 1$.
\section{Velocity on circular near-horizon orbits}
It was observed in \cite{72} that in the extremal limit of the Kerr metric
the velocity $V$ measured by a locally nonrotating observer is not equal to
1, as one could na\"{\i}vely expect. For the ISCO, it was found that $V=\frac{1}{2}$. For the marginally bound orbit, it turned out that $V=\frac{1}{\sqrt{2}}$. Now, we generalize these results, obtain similar ones for
pure extremal black holes and compare both.
If a particle moves in the background of the stationary metric (\ref{met}),
the following relation holds:
\begin{equation}
X=\frac{mN}{\sqrt{1-V^{2}}}, \label{mn}
\end{equation}
see eq. (15) of \cite{k}. Taking into account also eq. (\ref{xn}) for the
circular orbit, one obtains
\begin{equation}
V=\frac{L_{0}}{\sqrt{L_{0}^{2}+m^{2}g_{H}}}\text{.}
\end{equation}
\subsection{Near-extremal black holes}
Different orbits should be considered separately.
\subsubsection{Marginally bound orbit}
Now, taking into account (\ref{mbg}), we obtain
\begin{equation}
V=\frac{1}{\sqrt{1+\omega _{H}^{2}g_{H}}}\text{.} \label{vmb}
\end{equation}
For the extremal Kerr-Newman black hole, it is seen from (\ref{gh}) and
(\ref{okn}) that $\omega _{H}^{2}g_{H}=\frac{a^{2}}{M^{2}}$, so we obtain
\begin{equation}
V=\frac{M}{\sqrt{M^{2}+a^{2}}}\text{.} \label{vma}
\end{equation}
In the extremal Kerr case, $a=M$, so $V=\frac{1}{\sqrt{2}}$ in agreement
with \cite{72}.
\subsubsection{ISCO}
In this case, taking into account (\ref{lme}), one obtains
\begin{equation}
V=\frac{1}{B_{1}\sqrt{g_{H}}}\text{.} \label{visco}
\end{equation}
It is implied that (\ref{bg}) is satisfied, so $V<1$.
For the extremal Kerr-Newman metric, it follows from (\ref{gh}) and (\ref{b1m})
that $B_{1}\sqrt{g_{H}}=\frac{2a}{M}$, so
\begin{equation}
V=\frac{M}{2a}\text{,} \label{visca}
\end{equation}
where now $a>\frac{M}{2}$ for the existence of the orbit under discussion.
In the extremal Kerr case, $a=M$ and we obtain $V=\frac{1}{2}$ in agreement
with \cite{72}.
\subsubsection{Photon circular orbit}
Now, by analogy with the massive case, we can introduce $X=\nu _{0}-\omega L$, where $\nu _{0}$ is the conserved quantity having the meaning of the
frequency measured at infinity and $\nu $ is the locally measured frequency
\cite{k}. Instead of (\ref{mn}), now
\begin{equation}
\nu _{0}-\omega L=\nu N\text{,}
\end{equation}
(see eq. 40 of \cite{k}). In this case, instead of velocity, it makes sense
to speak about the effective gamma-factor $\gamma =\frac{\nu }{\nu _{0}}$.
Using (\ref{xn}) with $m=0$, one obtains for the near-critical particle with
$\nu _{0}\approx \omega _{H}L$ that
\begin{equation}
\gamma =\frac{1}{\omega _{H}\sqrt{g_{H}}}\text{.} \label{gap}
\end{equation}
For the Kerr-Newman case, it is seen from (\ref{gh}), (\ref{okn}) that
$\omega _{H}\sqrt{g_{H}}=\frac{a}{M}$. After substitution into (\ref{gap}),
one finds
\begin{equation}
\gamma =\frac{M}{a}. \label{gam}
\end{equation}
In the extremal Kerr case $\gamma =1$.
\subsection{Extremal black holes}
The previous formulas for the velocity are valid but with the additional
constraint that now (\ref{s}) should be satisfied. Correspondingly, for the
Kerr-Newman case $a$ is no longer a free parameter but equal to $a_{0}$,
where $a_{0}$ should be taken from eq. (\ref{a0}) and substituted into
(\ref{vma}) or (\ref{gam}). Below, we list the values of the velocity on
near-horizon circular orbits for the Kerr-Newman metric.
\subsubsection{Marginally bound orbit}
By substitution $a=\frac{M}{\sqrt{3}}$ into (\ref{vma}), we get
\begin{equation}
V=\frac{\sqrt{3}}{2}
\end{equation}
\subsubsection{ISCO}
Now, $a=\frac{M}{\sqrt{2}}$, and we find from (\ref{visca}) that
\begin{equation}
V=\frac{1}{\sqrt{2}}\text{.}
\end{equation}
Thus, the same value $V=\frac{1}{\sqrt{2}}$ is obtained for the marginally
bound orbit in the near-extremal case and for the ISCO in the pure extremal
one.
\subsubsection{Photon orbit}
For this orbit, $a=\frac{M}{2}$, so (\ref{gam}) gives us
\begin{equation}
\gamma =2\text{.}
\end{equation}
Thus we see that the values of the velocity or Lorentz factor on the
near-horizon orbit cannot be obtained as a limit $\kappa \rightarrow 0$ of
these values on orbits in the vicinity of near-extremal black holes.
\subsection{Radially moving critical particle}
For comparison, we also consider the case when a particle is exactly
critical, $E=\omega _{H}L$. The particle's orbit is not circular, the
particle approaches the horizon. Then, using (\ref{xh}) instead of (\ref{xn}),
one finds from (\ref{mn}) that
\begin{equation}
V=\sqrt{1-\frac{\omega _{H}^{2}m^{2}}{B_{1}^{2}E^{2}}}\text{.}
\end{equation}
For the Kerr-Newman case, $\frac{B_{1}}{\omega _{H}}=2$, so
\begin{equation}
V=\sqrt{1-\frac{m^{2}}{4E^{2}}}\text{.}
\end{equation}
If a particle falls from infinity with the zero initial velocity, $m=E$.
Then, $V=\frac{\sqrt{3}}{2}$.
\section{General features of circular and would-be circular orbits in the
near-horizon region}
Thus there are two typical kinds of circular or almost circular orbits in
the immediate vicinity of the horizon. (i) There exist true circular orbits
that require the particle to be very close to the critical state (\ref{xoh})
but nonetheless not coincide with the critical one. For a fixed black hole
metric, this is possible for some special values of parameters only. Say,
for the Kerr-Newman black hole the parameter $\frac{a}{M}=\frac{1}{\sqrt{2}}$, $\frac{1}{\sqrt{3}}$, $\frac{1}{2}$ (plus corrections $O(N_{0})$) for the
ISCO, marginally bound and photon orbits, respectively. (ii) One can specify
$X_{H}=0$ (or take $X_{H}=o(N_{0}^{2})$). Then, circular orbits do not
exist at all in the near-horizon region. Instead, a particle inspirals,
approaching the horizon asymptotically, which takes an infinitely long proper
time \cite{ted}, \cite{gp}, \cite{prd}. The rate is either exponential or
power-like depending on the black hole parameters.
In the case of the extremal Kerr-Newman metric the concrete characteristic
values of $\frac{a}{M}$ found above coincide with those in \cite{m} in the
first approximation. However, the interpretation is qualitatively different. In
case (i) the true circular orbits of massive particles lie close to, but not on, the
horizon $N=0$. In case (ii) these values determine the rate with which the
particle approaches the horizon.
\section{Relation to the Ba\~{n}ados-Silk-West effect and the kinematic
censorship}
In the near-horizon region, $N$ is small, so on the circular orbit $X$ is
also small according to (\ref{xc}). This is realized for small surface
gravity $\kappa $ (\ref{23}), (\ref{k1}), i.e. for near-extremal black holes.
Meanwhile, such orbits play an essential role in the so-called Ba\~{n}ados-Silk-West (BSW) effect \cite{ban}. Namely, if two particles collide
near a black hole, the energy in their centre of mass $E_{c.m.}$ grows unbound,
provided one of the particles is critical or near-critical. For the orbits under
discussion, according to (\ref{xn}), $X$ has the order $N$, so a
corresponding particle is near-critical. Let us consider in more detail how
the properties of the orbit are related to the BSW effect.
Let two particles with masses $m_{1}$ and $m_{2}$ collide. By definition,
the energy in the centre of mass is
\begin{equation}
E_{c.m.}^{2}=-P_{\mu }P^{\mu }=m_{1}^{2}+m_{2}^{2}+2m_{1}m_{2}\gamma \text{,}
\label{cm}
\end{equation}
where $P^{\mu }=p_{1}^{\mu }+p_{2}^{\mu }$ is their total momentum,
$p_{1,2}^{\mu }=m_{1,2}u_{1,2}^{\mu }$, and the Lorentz factor of relative motion is
\begin{equation}
\gamma =-u_{1\mu }u_{2}^{\mu }. \label{ga}
\end{equation}
Using the expansion (\ref{u}), one obtains
\begin{equation}
\gamma =\frac{1}{2}(\frac{\beta _{1}}{\alpha _{2}}+\frac{\beta _{2}}{\alpha
_{1}})\text{.} \label{gab}
\end{equation}
If a particle 1 orbits a black hole near the horizon while particle 2 is
usual, it follows from (\ref{be}), (\ref{al}) that
\begin{equation}
\gamma \approx \frac{1}{2N}\frac{\beta _{2}}{q_{1}}
\end{equation}
becomes unbound. The above formulas can be viewed as a modification of the
approach of \cite{cqg}, where it was applied to radial motion, to the case
of orbital motion.
On one hand, the proximity of the four-velocity to the horizon generator is
due to the fact that the coefficient at $l^{\mu }$ in (\ref{u}), $\alpha_{1}^{-1}=O(N^{-1})$, is much larger than the coefficient at $N^{\mu }$ equal
to $\beta _{1}=O(N)$. On the other hand, the ratio $\frac{\beta _{2}}{\alpha _{1}}$ enters the expression for the Lorentz factor, so the same
parameter $N^{-1}$ controls both phenomena.
In this context, it is worth recalling that a collision between one critical
and one usual particle leads to unbound $E_{c.m.}=O(N^{-1/2})$ \cite{prd}.
Had the collision been possible on the horizon exactly ($N=0$) we would have
obtained infinite $E_{c.m.}$. And, as the proper time required to reach the
horizon is finite for a usual particle, these divergences would have been,
at least in principle, observable. This would be unphysical: in any event
the energy that can be obtained in any frame cannot be infinite (it can be
called "principle of kinematic censorship"). Fortunately, the trajectory of
a massive particle on the horizon is impossible, so the experiment under
discussion is impossible as well.
However, there is one more potentially dangerous scenario. It is realized
when particle 1 is massless, which needs a separate treatment since in this
case the orbit of particle 1 is not fake. In this case, eqs. (\ref{u}),
(\ref{gab}) are not applicable directly since the particle is characterized by a
light-like wave vector $k^{\mu }$ (or $p^{\mu }=\hbar k^{\mu }$, where
$\hbar $ is the Planck constant) instead of the time-like vector $u^{\mu }$. Formulas
(\ref{cm}) and (\ref{ga}) need some modification. Then,
\begin{equation}
E_{c.m.}^{2}=m^{2}+2m\gamma \text{,}
\end{equation}
\begin{equation}
\gamma =-u_{\mu }p^{\mu }\text{.} \label{up}
\end{equation}
According to general rules, if a photon 1 is critical and the massive
particle 2 is usual, $\gamma $ grows unbound for collisions near the horizon
\cite{k}.
Let a usual massive particle 2 cross the extremal horizon where it collides
with the photon (particle 1) that follows the horizon generator exactly. Is
the quantity $E_{c.m.}$ finite or infinite?
To answer this question, we need to evaluate the scalar product (\ref{up}).
It is convenient to use the coordinate system in which the metric has the
form (\ref{md}). The terms with $\mu =\bar{t}$ and $\mu =\bar{\phi}$ do not
contribute since for the photon under consideration $\bar{E}=0=\bar{L}$ as
is explained in Sec. VI. Taking also into account that for a photon moving
along the horizon $\rho =const$, we obtain that
\begin{equation}
-p_{\mu }u^{\mu }=(g_{\phi }\omega ^{2}-1)_{H}\left( \frac{d\rho }{d\tau }\right) _{2}\left( \frac{d\bar{t}}{d\lambda }\right) _{1}\text{,}
\end{equation}
where $\lambda $ is an affine parameter for the photon trajectory; we took into
account (\ref{md}) and (\ref{ww}).
But $\left( \frac{d\rho }{d\tau }\right) _{2}$ is finite as it follows from
(\ref{ro}). It is easy to understand that $\left( \frac{d\bar{t}}{d\lambda }\right) _{1}$ should be finite as well since this quantity is defined at any
point of the horizon for any value of the affine parameter $\lambda $ and the
coordinates (\ref{md}) are regular on the horizon. Therefore, a particle
cannot have infinite $\left( \frac{d\bar{t}}{d\lambda }\right) _{1}$ at all
points of its trajectory. As a result, the Lorentz factor of relative motion
$\gamma $ and $E_{c.m.}$ are also finite. Therefore, the kinematic
censorship is preserved.
According to the general picture described in detail in \cite{prd}, \cite{k},
collision between one usual and one critical particle leads to the BSW
effect.\ However, in all cases discussed in the aforementioned references,
there exists a small but nonzero parameter that controls the process. For a
particle slowly moving in a radial direction and approaching the extremal
horizon this is $\tau ^{-1}$ since the proper time $\tau $ required for the
critical particle to reach the horizon is infinite \cite{ted}, \cite{gp},
\cite{prd}, for collision inside the nonextremal horizon it is proximity of
the point of collision to the bifurcation surface \cite{inner}, for the
circular orbit in the background of the near-extremal black hole it is small
$\kappa $, etc.
Now, we can add one more case to this list. In the situation
discussed in the present work, there is no small parameter, but another
factor prevents infinite $E_{c.m.}$. Either there is no critical trajectory
(the massive case) or, as an exception, the general rule does not work and
the collision does not give rise to infinite $E_{c.m.}$ (a photon
moving along the horizon generator).
\section{Conclusion}
We have constructed a coordinate system regular on the horizon that
generalizes a similar system for the Kerr metric \cite{dor}. With its help,
it is shown that the circular near-horizon orbits of near-extremal black
holes asymptotically approach the generators of the horizon on the slices
where the new time $\bar{t}=const$, thus generalizing the observation made in
\cite{ted11}.
In the extremal case, a general approach to the description of circular
equatorial orbits near dirty black holes is suggested. It would seem that
there exist circular orbits with $r=r_{H}$ exactly on the horizon but such
orbits are fake for massive particles. Circular orbits are possible for
near-critical particles. Their radial distance to the horizon can be made as
small as one likes but, nonetheless, their radius does not coincide with
that of the horizon. If the parameters of a black hole satisfy the exact
relation corresponding to a fake orbit on the horizon, this has physical
meaning despite the fact that the orbit is fake. First, they correspond to the
value of the parameters (say, the angular momentum of a black hole
$a=a_{0}$) such that around these values (when $a=a_{0}+a_{1}N_{0}+$...)
there exist circular orbits with small $N_{0}$. Second, these values
manifest themselves in dynamics, when a fine-tuned (critical) particle
moves on orbits that are not exactly circular, slowly approaching the
horizon. When the parameters of a black hole (say, its angular momentum) cross
$a_{0}$, this results in a change of the rate at which a particle
approaches the horizon asymptotically.
The velocities measured by a local nonrotating observer are found on the
near-horizon orbits. In general, they do not coincide with the values
obtained in the extremal limits of near-extremal metrics. It is demonstrated
that properties of near-horizon circular orbits near extremal black holes
cannot be understood as the extremal limit of corresponding properties of
near-extremal black holes.
Connection with the BSW effect is revealed. The fact that circular
orbits on the horizon are fake protects the kinematic censorship
(the finiteness of energy in collisions) from violation.
It would be of interest to extend the approach and results of the present
work to the case of nonequatorial orbits.
\begin{acknowledgments}
This work was funded by the subsidy allocated to Kazan Federal University
for the state assignment in the sphere of scientific activities.
\end{acknowledgments}
\section{Introduction}
Networks have provided a step change in modelling complex systems ranging from
man-made to natural ones \cite{newman2003structure,boccaletti2006complex,pastor2014epidemic}.
The study of disease transmission on networks has particularly benefitted
from this modelling paradigm by uncovering the role and impact of contact
heterogeneity and clustering \cite{pastor2014epidemic}, to name just a few.
While networks provide a clear departure from classic compartmental models,
the role of mean-field models remains crucial.
These offer a reliable way of obtaining analytical results
and thus uncovering the interplay between network properties and the dynamic
processes on networks. For example, the epidemic threshold \cite{SatorrasVespignani,liurostvas} and
final epidemic size \cite{keeling1999effects} can be given in terms of explicit or implicit mathematical expressions
which clearly illustrate how network and disease parameters combine.
Probably the most widely spread and well-known mean-field model for network epidemics is the degree-based mean-field (DBMF) model, also known as heterogeneous mean-field \cite{SatorrasVespignani,pastor2014epidemic}.
Similarly, pairwise models \cite{rand2009,keeling1999effects,gross2006epidemic,hebert2013pathogen} continue to provide a fruitful framework
for modelling dynamic or adaptive networks involving epidemics \cite{gross2006epidemic,szabo2014oscillating}, social interactions \cite{demirel2014moment} and ecological
systems \cite{hebert2013pathogen}. Such models come with the added benefit of some degree of analytical tractability and the means toward
explicit analytical quantities such as the basic reproduction number and final epidemic size \cite{keeling1999effects}.
Recently, however, there has been renewed interest in modelling non-Markovian processes, such as epidemics on
networks \cite{min2011spreading,cooper2013non,van2013non,jo2014analytically},
random walks \cite{hoffmann2012generalized} and temporal networks \cite{moinet2014burstiness}.
This recent burst of research focusing on non-Markovian dynamics is strongly motivated
by empirical observations. These show
that for many real world settings, the Markovian framework is not satisfactory in
describing temporal statistics, such as time intervals between discrete,
consecutive events. Examples include inter-order and inter-trade durations in financial markets \cite{scalas2006durations}, socio-networks \cite{malmgren2008poissonian},
or individual-to-individual contacts being dynamic \cite{moinet2014burstiness}. In the context of epidemiology,
the period of infectiousness has paramount importance \cite{lloyd2001realistic,distrib}, and researchers departed from the simplifying assumption of exponential distributions by
approximating the empirical distribution of observed infectious periods of various diseases by log-normal and gamma (smallpox \cite{lognormal,gamma}), fixed-length (measles \cite{fixed}) or Weibull distributions (ebola \cite{ebola}). The reliable tools and mathematical machinery of Markovian theory do not translate directly
to modelling and analysis of non-Markovian
systems, and this is the main source of many challenges.
In this letter, we present the first analog
of pairwise models for non-Markovian epidemics, and show that this is equivalent to
a set of delay differential equations which (a) show excellent agreement with simulation and (b) allow us to give an implicit
analytic expression for the final epidemic size, as well as to define
a new $\mathcal{R}_0$-like quantity which emerges naturally from our calculations.
We consider an undirected and unweighted network with $N$ nodes and an average degree $n$.
Each node can be susceptible ($S$), infected ($I$), or recovered ($R$).
For Markovian epidemics, with transmission rate $\tau$ and recovery rate $\gamma$,
the epidemic is well approximated by the pairwise model \cite{keeling1999effects} given below
\begin{eqnarray*}
\dot{[S]}&=&-\tau [SI], \dot{[I]}=\tau [SI]-\gamma [I], \dot{[SS]}=-2\tau [SSI],\\
\dot{[SI]}&=&\tau [SSI]-\tau [ISI]-\tau [SI]-\gamma [SI],
\end{eqnarray*}
where $[X]$, $[XY]$ and $[XYZ]$ are the expected number of nodes in state $X$, links
in state $X-Y$ and triples in state $X-Y-Z$, respectively. Considering the
network at a given time, then counting amounts to $[X]=\sum_{i=1}^{N}X_i$, $[XY]=\sum_{i,j=1}^{N}X_iY_jg_{ij}$ and $[XYZ]=\sum_{i,j,k=1}^{N}X_iY_jZ_kg_{ij}g_{jk}$, where $X, Y, Z \in \{S, I, R\}$, and $G=(g_{ij})_{i,j=1,2,\dots,N}$ is the adjacency matrix of the network such that $g_{ii}=0$, $g_{ij}=g_{ji}$ and $g_{ij}=g_{ji}=1$ if nodes $i$ and $j$ are connected and zero otherwise. Moreover, $X_i$ returns one if node $i$ is in state $X$ and zero otherwise. The dependence on higher order moments can be broken
by using that $[XSY]=\frac{n-1}{n} \frac{[XS] [SY]}{[S]}$ \cite{keeling1999effects}. Applying this leads to the following self-consistent system
\begin{eqnarray}
\label{eq:pairmarkov}
\dot{[S]}&=&-\tau [SI], \dot{[I]}=\tau [SI]-\gamma [I],\nonumber \\
\dot{[SS]}&=&-2\tau \frac{n-1}{n}\frac{[SS][SI]}{[S]},\nonumber \\
\dot{[SI]}&=&\tau \frac{n-1}{n} \frac{[SS][SI]}{[S]}-\tau \frac{n-1}{n} \frac{[SI][SI]}{[S]} \nonumber \\
&-&\tau [SI]-\gamma [SI].
\end{eqnarray}
By applying the closure at the level of pairs, $[XY]=n[X]\frac{[Y]}{N}$, system (\ref{eq:pairmarkov}) reduces to the classic compartmental $SIR$ model,
\begin{equation}
\label{eq:markmeanfield}
\dot{S}=-\tau \frac{n}{N} S I, \dot{I}=\tau \frac{n}{N} S I-\gamma I.
\end{equation}
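For concreteness, the closed pairwise system (\ref{eq:pairmarkov}) is straightforward to integrate numerically; the Python sketch below is ours rather than part of the model, and the parameter values ($N$, $n$, $\tau$, $\gamma$, $I_0$) are illustrative.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

N, n = 1000.0, 15.0              # network size and mean degree (assumed)
tau, gamma, I0 = 0.55, 1.0, 5.0  # transmission, recovery, initial infected

def rhs(t, y):
    S, I, SS, SI = y
    triple = tau*(n - 1)/n*SS*SI/S   # closed [SSI] term
    clash  = tau*(n - 1)/n*SI*SI/S   # closed [ISI] term
    return [-tau*SI,
            tau*SI - gamma*I,
            -2.0*triple,
            triple - clash - tau*SI - gamma*SI]

y0 = [N - I0, I0, n*(N - I0)**2/N, n*(N - I0)*I0/N]
sol = solve_ivp(rhs, (0.0, 30.0), y0, rtol=1e-8, atol=1e-8)
print("final size R_inf ~", N - sol.y[0, -1])
\end{verbatim}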
We wish to apply the previous approach to the case when the recovery time is not exponentially distributed.
First, a fixed infectious period, denoted by $\sigma$, is considered, and the derivation of the pairwise model from first principles is illustrated.
We show that the non-Markovian dynamics can be described by a delay differential equation with constant delay.
The infection process is assumed to be Markovian, thus the equation for $[S]$ is the same as before, namely $\dot{[S]}(t)=-\tau [SI](t)$.
The number of infected nodes at time $t$ is replenished by $\tau [SI](t)$ and is depleted by $\tau [SI](t-\sigma)$, and this yields
$\dot{[I]}(t)=\tau [SI](t)-\tau [SI](t-\sigma)$. The equation for the number of $S-S$ links is the same because the infection process is Markovian, see (\ref{eq:pairmarkov}).
In a similar manner, the number of $S-I$ links is replenished by $\tau \frac{n-1}{n} \frac{[SS](t)[SI](t)}{[S](t)}$, which is the rate of depletion of $S-S$ links. Furthermore, depletion
occurs due to the infection within $S-I$ pairs, $\tau [SI](t)$, and due to the infection of a central $S$ node in an $I-S-I$ triple, $\frac{\tau(n-1)}{n [S](t)} [SI](t)[SI](t)$. On the other
hand, there are $S-I$ links, which survive the time interval $\sigma$, that will be removed due to the recovery of the $I$ node. However, one needs to account for the removal of $S-I$ links
which were created precisely a time $\sigma$ ago. Naively, one would believe that this term is simply proportional to $\tau \frac{n-1}{n}\frac{[SS](t-\sigma)[SI](t-\sigma)}{[S](t-\sigma)}$.
However, one must account for the fact that in the time interval $(t-\sigma,t)$ an $S-I$ link could have been destroyed either due to within pair infection or by infection of the $S$ node from outside. Hence, it is obvious that a discount factor needs to be determined to account for this effect.
To calculate this factor, $S-I$ links that are created at the same time are considered as a cohort, denoted by $x$, and we model infection within and from outside by writing down the following evolution equation,
\begin{align}
x'(t)=-\frac{\tau(n-1)}{n [S](t)} [SI](t)x(t) -\tau x(t), \label{cohort}
\end{align}
where the first term denotes the `outer' infection of the $S$ node, while the second term stands for the `inner' infection of the $S$ node.
We note that the outside infection is simply proportional to the probability that an $S$ node with an already engaged link has a further susceptible neighbour, $\frac{(n-1)[SI]}{n [S]}$.
The solution of equation \eqref{cohort} in the time interval $[t-\sigma,t]$ is
\begin{align*}
& x(t)=x(t-\sigma) e^{-\int_{t-\sigma}^{t} \left( \frac{\tau(n-1)}{n [S](u)} [SI](u) + \tau\right) du},
\end{align*}
and this provides the depletion or discount rate of $S-I$ links.
In this case, $x(t-\sigma)=\tau \frac{n-1}{n} \frac{[SS](t-\sigma)[SI](t-\sigma)}{[S](t-\sigma)}$, which is the replenishment of $S-I$ links. Therefore, summarising all the above, the pairwise DDE for the non-Markovian case is
\begin{align}
\label{eq:pairnonmark}
&\dot{[S]}(t)=-\tau [SI](t), \dot{[I]}(t)= \tau [SI](t) - \tau [SI](t-\sigma) \nonumber \\
&\dot{[SS]}(t)=-2\tau \frac{n-1}{n} \frac{[SS](t) [SI](t)}{[S](t)}, \dot{[SI]}(t)= -\tau [SI](t) \nonumber \\
&-\frac{\tau(n-1)}{n [S](t)} [SI](t)[SI](t)+\tau \frac{n-1}{n}\frac{[SS](t)[SI](t)}{[S](t)} \\
&-\tau \frac{n-1}{n}\frac{[SS](t-\sigma)[SI](t-\sigma)}{[S](t-\sigma)} e^{-\int_{t-\sigma}^{t} \left([SI](u) \frac{\tau(n-1)}{n [S](u)}+\tau \right)du}. \nonumber
\end{align}
This system is now the main subject of our investigation from analytical and numerical point of view.
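A minimal numerical scheme for (\ref{eq:pairnonmark}) is the method of steps on a fixed grid, storing the history of the link-creation rate and of the integrand of the discount exponent. The Python sketch below assumes zero pre-history for the delayed terms (the epidemic is seeded at $t=0$) and illustrative parameter values.
\begin{verbatim}
import numpy as np

N, n = 1000.0, 15.0        # network size and mean degree (assumed)
tau, sigma = 0.55, 1.0     # transmission rate, fixed infectious period
dt = 1e-3
steps = int(30.0/dt)       # integrate up to t = 30
d = int(sigma/dt)          # delay measured in grid steps

I0 = 5.0
S  = np.empty(steps + 1); S[0]  = N - I0
I  = np.empty(steps + 1); I[0]  = I0
SS = np.empty(steps + 1); SS[0] = n*S[0]**2/N
SI = np.empty(steps + 1); SI[0] = n*S[0]*I0/N
rate = np.zeros(steps + 1)  # integrand of the discount exponent
crea = np.zeros(steps + 1)  # replenishment rate of S-I links

for k in range(steps):
    force = tau*(n - 1)/n*SI[k]/S[k]  # per-link 'outer' infection rate
    rate[k] = force + tau
    crea[k] = force*SS[k]
    if k >= d:
        disc = np.exp(-rate[k - d:k].sum()*dt)  # survival over [t-sigma, t]
        removal = crea[k - d]*disc   # links created sigma ago that survived
        recov = tau*SI[k - d]        # delayed recovery term in the [I] eq.
    else:
        removal = recov = 0.0        # zero pre-history assumed
    S[k+1]  = S[k]  - dt*tau*SI[k]
    I[k+1]  = I[k]  + dt*(tau*SI[k] - recov)
    SS[k+1] = SS[k] - dt*2.0*crea[k]
    SI[k+1] = SI[k] + dt*(crea[k] - force*SI[k] - tau*SI[k] - removal)

print("attack rate 1 - S_inf/S_0 ~", 1.0 - S[-1]/S[0])
\end{verbatim}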
Similarly to the Markovian case, the non-Markovian mean-field model for fixed infectious period is
\begin{eqnarray}
\label{eq:nonmarkmeanfield}
\dot{S}(t)&=&-\tau \frac{n}{N}S(t)I(t),\nonumber \\
\dot{I}(t)&=&\tau \frac{n}{N} S(t)I(t)-\tau \frac{n}{N} S(t-\sigma)I(t-\sigma).
\end{eqnarray}
The most important qualitative results for $SIR$ models are the explicit formula of basic reproduction number and an implicit equation for the final epidemic size. In what follows, we introduce a general concept for the reproduction number associated to pairwise models, and we refer to this as the \textit{pairwise reproduction number}. Using this concept, the final size relations for the above mean-field, classic pairwise and DDE-based pairwise models are derived. Reproduction numbers play a crucial role in mathematical epidemiology, so we begin by investigating these. The basic reproduction number $\mathcal{R}_{0}$ denotes the expected number of secondary infections caused by a `typical' infected individual during its infectious period when placed in a fully susceptible population, which is a definition understood at the level of nodes (individuals). On the other hand, the pairwise model is written at the level of links and describes the dynamics of susceptible ($S-S$) and infected ($S-I$) links. This fact gives us an opportunity to define a new type of reproduction numbers, which we call \textit{pairwise reproduction number} and denote it by $\mathcal{R}^{p}_{0}$. More precisely, we distinguish the following two useful quantities: (a) the \textit{basic} reproduction number is the expected lifetime of an $I$ \textbf{node} multiplied by the number of newly infected \textbf{nodes} per unit time, and (b) the \textit{pairwise} reproduction number is the expected lifetime of an $S-I$ \textbf{link} multiplied by the number of newly generated $S-I$ \textbf{links} per unit time.
An infected node is removed due to its recovery, thus in general the expected lifetime is the expected value of a random variable $X$ corresponding to the distribution of the length of infectious periods. In contrast, an $S-I$ link can be removed due to the recovery of the $I$ node but also due to the infection of the $S$ node. Therefore, the expected lifetime of the $S-I$ link is the expected value of the minimum of two random variables. If we assume that the process of infection along such a link has density function $f_i$ with survival function $\xi_i$, and the process of recovery has density function $f_r$ with survival function $\xi_r$, then, denoting by $Z$ the random variable defined by the lifetime of an $S-I$ link, we have
\begin{equation}
\label{eq:lifetime}
\mathbb{E}(Z)=\int_{-\infty}^{\infty} x \left(f_i(x) \xi_r(x)+f_r(x) \xi_i(x)\right) dx.
\end{equation}
From the assumption that the infection time along $S-I$ links is exponentially distributed, the number of newly infected nodes per unit time is $\frac{n}{N}\tau [S]_0$ in the mean-field model, and the expected number of newly infected links is $\tau \frac{n-1}{n}\frac{[SS]_0}{[S]_0}=\tau \frac{n-1}{N}{[S]_0}$ in the pairwise model, where we used the approximation $[SS]_0=\frac{n}{N}[S]^2_{0}$.
We illustrate how to use the formula \eqref{eq:lifetime} in the case of fixed length infectious period ($\sigma$). In this case, the survival function is
\[
\xi_r(t)= \begin{cases}
1 & \textrm{if $0\leq t<\sigma$,} \\
0 & \textrm{if $t\geq\sigma$}, \\
\end{cases}
\]
and the density function $f_r(t)$ is the Dirac-delta $\delta(t-\sigma)$. Using fundamental properties of the delta function, we have
\begin{eqnarray*}
\mathbb{E}(Z)&=&\int_{-\infty}^{\infty} x f_i(x) \xi_r(x) dx + \int_{-\infty}^{\infty} x f_r(x) \xi_i(x) dx \nonumber\\
&=&\left(-\sigma e^{-\tau \sigma}+\frac{1-e^{-\tau \sigma}}{\tau}\right)+\sigma e^{-\tau \sigma},
\end{eqnarray*}
and multiplying this result by the number of newly generated $S-I$ links, the formula in Table \ref{R0_table} for $\mathcal{R}_0^p$ follows.
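The closed form $\mathbb{E}(Z)=(1-e^{-\tau \sigma})/\tau$ obtained above is easily checked by Monte Carlo, since $Z=\min(T_{\rm inf},\sigma)$ with $T_{\rm inf}$ exponentially distributed with rate $\tau$ (a Python sketch with illustrative parameters):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
tau, sigma = 0.55, 1.0
T_inf = rng.exponential(1.0/tau, size=10**6)  # infection time along the link
Z = np.minimum(T_inf, sigma)                  # link dies at the earlier event
print("Monte Carlo :", Z.mean())
print("closed form :", (1.0 - np.exp(-tau*sigma))/tau)
\end{verbatim}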
More importantly, it is worth highlighting the general result that $\mathbb{E}(Z)$, upon using that the infection process is Markovian, reduces to
evaluating the Laplace transform of the density of the recovery time. This is a very general result, which in many cases leads to an explicit analytical expression for $\mathcal{R}^p_0$, see Table \ref{R0_table}.
\begin{table}[htbp]
\begin{ruledtabular}
\begin{tabular}{c c c }
& $\mathcal{R}_0$ & $\mathcal{R}^p_0$ \\
Markovian & $\frac{n}{N}\frac{\tau}{\gamma}S_{0} $ & $\frac{n-1}{N}\frac{\tau}{\tau+\gamma}[S]_{0} $ \\
Fixed & $\frac{n}{N}\tau \sigma S_{0} $ & $\frac{n-1}{N}(1-e^{-\tau \sigma})[S]_{0} $ \\
General & $\frac{n}{N}\tau \mathbb{E}(X) S_{0}$ & $\frac{n-1}{N}\left(1-\mathcal L[f_r](\tau)\right)[S]_0$ \\
\end{tabular}
\end{ruledtabular}
\caption{Basic and pairwise reproduction numbers for different recovery distributions. $\mathcal L[f_r](\tau)$ denotes the Laplace transform of $f_r$, the density of the recovery process, at $\tau$.}
\label{R0_table}
\end{table}
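The general entry of Table \ref{R0_table} can be evaluated numerically for any recovery density; the Python sketch below (parameter values are illustrative) recovers the Markovian and fixed-period rows as special cases.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

n, N, S0 = 15.0, 1000.0, 995.0   # assumed values
tau, g, sigma = 0.55, 1.0, 1.0
c = (n - 1)/N*S0

def R0p(f_r):
    # last row of Table I: R0p = c*(1 - Laplace[f_r](tau))
    L, _ = quad(lambda x: np.exp(-tau*x)*f_r(x), 0.0, np.inf)
    return c*(1.0 - L)

print("Markovian:", R0p(lambda x: g*np.exp(-g*x)), "vs", c*tau/(tau + g))
# fixed sigma: the Dirac density has Laplace transform exp(-tau*sigma)
print("Fixed:    ", c*(1.0 - np.exp(-tau*sigma)))
\end{verbatim}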
For the standard Markovian mean-field model, the process of calculating the final epidemic size is well-known.
From Eq. (\ref{eq:markmeanfield}), we evaluate $d I/d S$ and integrate it to obtain
\begin{equation*}
\ln\left(\frac{S_{\infty}}{S_0}\right)=\frac{\tau}{\gamma}\frac{n}{N}S_0 \left(\frac{S_{\infty}}{S_0}-1\right).
\end{equation*}
Using that $\mathcal{R}_0=\frac{\tau}{\gamma}\frac{n}{N}S_0$, we have
\begin{equation}
\label{eq:standardfs}
\ln\left(\frac{S_{\infty}}{S_0}\right)=\mathcal{R}_0 \left(\frac{S_{\infty}}{S_0}-1\right).
\end{equation}
The final epidemic size (i.e. the total number of infections) can be easily computed by using $R_{\infty}=N-S_{\infty}$. In the non-Markovian case, the calculations (which are included in the supplemental material) are rather different and the resulting final size relation is
\vspace*{-0.6ex}
\begin{equation}
\label{eq:meannonmarkov}
\ln\left(\frac{S_{\infty}}{S_0}\right)=\tau \frac{n}{N} \sigma S_0 \left(\frac{S_{\infty}}{S_0}-1\right).
\end{equation}
As in this case $\mathcal{R}_0=\tau\frac{n}{N} \sigma S_0$, the final size relation (\ref{eq:meannonmarkov}) takes the `standard' form of (\ref{eq:standardfs}).
\label{pairwise}
The dynamical systems \eqref{eq:pairmarkov} and \eqref{eq:pairnonmark} can be manipulated conveniently to derive an analytic relation between the final epidemic size and the basic reproduction number. This is known for the Markovian case but it is a new result for the non-Markovian one.
While the full derivation for the non-Markovian case is given in the supplemental material, the main steps of the calculations are: (a) find an invariant to reduce the dimensionality of the system, (b) integrate the equation for $[SI](t)$, (c) integrate the equation for $[S](t)$ on $[0,\infty)$ and (d) employ algebraic manipulations to obtain the final size relation. Following this procedure yields
\begin{eqnarray}
\label{eq:pairnonmarkov}
\frac{s_\infty^{\frac{1}{n}}-1}{\frac{1}{n-1}}=\frac{n-1}{N}\left(1-e^{-\tau \sigma}\right)[S]_0\left(s_\infty^{\frac{n-1}{n}}-1\right),
\end{eqnarray}
where $s_{\infty}=\frac{[S]_\infty}{[S]_0}$ and the attack rate is simply $1-s_\infty$. Using the same technique for the Markovian case leads to
\begin{eqnarray}
\label{eq:markovpair}
\frac{s_\infty^{\frac{1}{n}}-1}{\frac{1}{n-1}}=\frac{n-1}{N}\frac{\tau}{\tau+\gamma}[S]_0
\left(s_\infty^{\frac{n-1}{n}}-1\right).
\end{eqnarray}
Upon inspecting the two relations above, the following important observations can be made.
First, the implicit relation between final size and $\mathcal{R}^p_0$ is conserved between the Markovian and non-Markovian model.
Moreover, upon using the values of $\mathcal{R}^p_0$ as given in Table \ref{R0_table}, equations \eqref{eq:pairnonmarkov} and \eqref{eq:markovpair} can be cast in the following general form
\begin{eqnarray}
\label{eq:standardpfs}
\frac{s_\infty^{\frac{1}{n}}-1}{\frac{1}{n-1}}=\mathcal{R}^p_0\left(s_\infty^{\frac{n-1}{n}}-1\right).
\end{eqnarray}
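Equation (\ref{eq:standardpfs}) is implicit in $s_\infty$; since $s_\infty=1$ is always a root, the epidemiologically relevant root in $(0,1)$, which exists for $\mathcal{R}^p_0>1$, can be bracketed and found numerically, as in the Python sketch below.
\begin{verbatim}
from scipy.optimize import brentq

def attack_rate(R0p, n):
    # s = 1 is always a root; for R0p > 1 a second root lies in (0, 1)
    f = lambda s: (n - 1)*(s**(1.0/n) - 1.0) - R0p*(s**((n - 1.0)/n) - 1.0)
    return 1.0 - brentq(f, 1e-12, 1.0 - 1e-9)

for R0p in (1.2, 2.0, 3.5):
    print(R0p, attack_rate(R0p, n=15.0))
\end{verbatim}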
In fact we conjecture that this relation will hold true for pairwise models with different infectivity period profiles.
The second observation is that taking the limit of $n \rightarrow \infty$ in \eqref{eq:standardpfs} gives rise to
\begin{equation}
\label{eq:sinffs}
\ln (s_\infty) = \mathcal{R}^p_0 (s_\infty -1),
\end{equation}
which is equivalent to the `standard' form of (\ref{eq:standardfs}).
\begin{figure*}
\centering
\begin{minipage}[b]{.45\textwidth}
\includegraphics[width=\linewidth]{SIR_I_homogeneous}
\includegraphics[width=\linewidth]{SIR_I_homogeneous_1c}
\end{minipage}
\begin{minipage}[b]{.45\textwidth}
\includegraphics[width=\linewidth]{SIR_I_erdosrenyi_1b}
\topinset{\includegraphics[height=2.8cm]{Final_size_2}}{\includegraphics[height=5cm]{Final_size_tau_ar}}{45pt}{51pt}
\end{minipage}
\caption{Simulations of non-Markovian epidemics on networks with $N=1000$ nodes: (a) solid lines show the numerical solution of (\ref{eq:pairnonmark}) and the circles/squares/diamonds correspond to simulations for homogeneous networks with $\langle k\rangle=5/10/15$, respectively; (b) the same as before but for Erd\H os-R\'enyi random networks with $\langle k\rangle=5/10/15$; (c) the solid and dashed lines show the numerical solution of pairwise (\ref{eq:pairnonmark}) and mean-field (\ref{eq:nonmarkmeanfield}) models, respectively and, for homogeneous networks with $\langle k\rangle=5$ and $\langle k\rangle=15$. For (a), (b) and (c) the transmission rate is $\tau=0.55$ and the infectious period is fixed, $\sigma=1$. Finally, (d) the diamonds/circles/squares correspond to numerical simulations using homogeneous network with $\langle k\rangle=15$ and using fixed and two different but gamma distributed infectious periods ($\circ$ - shape $\alpha=2$, scale $\beta=\frac{1}{2}$, $\square$ - shape $\alpha=\frac{1}{2}$, scale $\beta=2$), respectively. The solid lines correspond to the analytical final epidemic size for fixed (\ref{eq:pairnonmarkov}) and general (\ref{eq:geninffinalsize}) infectious periods.
The inset shows the analytical and the simulated final epidemic sizes plotted against the pairwise reproduction number.}
\label{fig:1}
\end{figure*}
To test the validity of our model we implemented an algorithm to simulate the non-Markovian SIR process with arbitrary recovery times, and considered random networks with $N=1000$ nodes. In Fig.~\ref{fig:1}(a,b) homogeneous and Erd\H{o}s-R\'enyi random networks are considered, respectively. Here, the mean of 100 simulations is compared to the solution of system \eqref{eq:pairnonmark}. The agreement is excellent for homogeneous networks even for low degrees. Despite the pairwise model not accounting explicitly for the network's degree distribution, the agreement is surprisingly good for relatively dense Erd\H{o}s-R\'enyi networks.
In Fig.~\ref{fig:1}(c) we compare and contrast the differences between simulations, mean-field and pairwise models for the non-Markovian case.
For denser networks ($\langle k\rangle=15$), both models perform well with the pairwise yielding a better agreement with output from simulation.
However, the difference is striking for sparser networks ($\langle k\rangle=5$), where the mean-field approximation performs very poorly, while the pairwise DDE model leads to good agreement even in this case.
In Fig.~\ref{fig:1} (d), analytic final size relations are tested against simulation results for a range of different infectious period distributions, all sharing the same mean. The horizontal axis corresponds to the transmission rate $\tau$, and the plot highlights the threshold dynamics, as well as the necessity to correctly model the recovery time distribution in order to avoid under or over estimation of the final epidemic size. Based on Table \ref{R0_table}, the analytical expressions for $\mathcal{R}^p_0$ can be computed for the Markovian (or exponential), fixed and gamma-distributed recovery times. These values are
\begin{eqnarray*}
&\mathcal{R}&^p_{0,\mathrm{\Gamma}(\frac{1}{2},2)}=c\left(1-\frac{1}{\sqrt{1+2 \tau}}\right), \mathcal{R}^p_{0,\mathrm{Exp}(1)}=c\left(\frac{\tau}{\tau+1}\right),\\
&\mathcal{R}&^p_{0,\mathrm{\Gamma}(2,\frac{1}{2})}=c\left(1-\frac{4}{(2+\tau)^2}\right), \mathcal{R}^p_{0,\mathrm{Fixed}(1)}=c \left(1-e^{-\tau}\right),
\end{eqnarray*}
where $c=\frac{(n-1)[S]_0}{N}$, and satisfy the following inequality
\begin{equation}
\mathcal{R}^p_{0,\mathrm{\Gamma}(\frac{1}{2},2)}\leq \mathcal{R}^p_{0,\mathrm{Exp}(1)}\leq\mathcal{R}^p_{0,\mathrm{\Gamma}(2,\frac{1}{2})}\leq \mathcal{R}^p_{0,\mathrm{Fixed}(1)}.
\end{equation}
We note that (a) all recovery time distributions have the same mean $1$ and (b) the variances satisfy the converse inequality, with higher variance in recovery time (i.e. 2, 1, 1/2 and 0) giving a smaller $\mathcal{R}^p_0$ value, despite $\tau$ being fixed. The overall agreement between the analytic results of the pairwise model and the stochastic simulations is excellent and confirms the validity of our
final size relations. The inset in Fig.~1 (d) illustrates how the final epidemic size depends on the pairwise reproduction number, and shows that the same value of $\mathcal{R}^p_0$ produces the same attack rate, regardless of the distribution from where it is originated from, in accordance with our formula \eqref{eq:standardpfs}.
We have introduced a generalization of pairwise models to non-Markovian epidemics with fixed infectious period. The resulting model is a system of delay differential equations with constant delay and we have provided as full an analytical and numerical analysis of this model as possible, and benchmarked its performance against explicit stochastic network simulations. We have presented a new concept of reproduction numbers introducing the \textit{pairwise} reproduction number $\mathcal{R}^p_0$ and have derived the final epidemic size relation for non-Markovian mean-field and pairwise DDE models.
The numerical solution of the non-Markovian pairwise DDE shows excellent agreement with results based on explicit stochastic network simulations and sheds some light on the impact of non-Markovianity. More importantly, via the analytic results we can gain insight how and where non-Markovianity enters and impacts upon important epidemic descriptors.
The model and results in this paper should provide a framework for deeper and more comprehensive analysis of non-Markovian processes on networks and these should not be restricted to epidemics with fixed delays. Preliminary investigations indicate that our model can be extended to consider arbitrary recovery time distributions. In this case, the resulting model is a more complex integro-differential equation requiring a more challenging and elaborate analysis. Nevertheless, it turns out that the final epidemic size relation, upon assuming a general probability distribution with density function ($f_r$), yields
\begin{equation}
\label{eq:geninffinalsize}
\frac{s_\infty^{\frac{1}{n}}-1}{\frac{1}{n-1}}=\frac{n-1}{N}\left(1-\mathcal L[f_r](\tau)\right)[S]_0
\left(s_\infty^{\frac{n-1}{n}}-1\right),
\end{equation}
which agrees with the general equation suggested in \eqref{eq:standardpfs}. For recovery of fixed length, relation \eqref{eq:geninffinalsize} reduces to \eqref{eq:pairnonmarkov}. The validity of our general final size relation \eqref{eq:geninffinalsize} was tested also for different gamma distributions, see Fig.~1(d), showing a strong predictive power for general non-Markovian epidemics on networks. The difficulty of modelling non-Markovian processes is well known, but our current framework can pave the way for identifying fruitful links between different areas of delay differential equations, stochastic processes, dynamical systems and epidemiological models.
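For a concrete use of (\ref{eq:geninffinalsize}), the Python sketch below computes the Laplace transform of a gamma recovery density numerically (the closed form $(1+\beta\tau)^{-\alpha}$ is also available) and solves for the attack rate; the parameter values are illustrative.
\begin{verbatim}
import numpy as np
from math import gamma as Gamma
from scipy.integrate import quad
from scipy.optimize import brentq

n, N, S0, tau = 15.0, 1000.0, 995.0, 0.55  # assumed values
alpha, beta = 2.0, 0.5                     # gamma recovery time, mean 1

f_r = lambda x: x**(alpha - 1)*np.exp(-x/beta)/(beta**alpha*Gamma(alpha))
L_fr, _ = quad(lambda x: np.exp(-tau*x)*f_r(x), 0.0, np.inf)
R0p = (n - 1)/N*(1.0 - L_fr)*S0

g = lambda s: (n - 1)*(s**(1.0/n) - 1.0) - R0p*(s**((n - 1.0)/n) - 1.0)
print("R0p =", R0p, " attack rate =", 1.0 - brentq(g, 1e-12, 1.0 - 1e-9))
\end{verbatim}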
\section{Introduction}
The spins of single dopants in solids are key elements in the development of solid-state quantum-information technologies~\cite{qc_defects,qc_review}. In particular, nitrogen-vacancy (NV) colour centers in diamond are promising quantum processors: single defects can be detected using confocal microscopy~\cite{nv_confocal,single_nv}, their spin state can be initialized, manipulated, and readout optically~\cite{nv_optical_pumping,nv_electron_rabi,single_spin_spectrosocpy,c13_environment}, and their quantum coherence survives at room temperatures~\cite{long_coherence_time}. One of the remaining challenges is to control the spin-spin interactions to perform quantum-logic operations, and major steps along this direction have already been accomplished. The hyperfine coupling between the NV electron spin and the nuclear spins of neighboring impurities ($^{13}$C,$^{15}$N) offers a unique opportunity to build small quantum registers~\cite{nv_n_coupling,nv_nuclear_single_shot,c13_entanglement,c13_environment,c13_register}. These devices can be scaled up by means of ion implantation techniques, yielding periodic arrays of NV centers~\cite{cold_ion_implantation}. However, the controlled couplings now require longer-range interactions, as provided by optical channels~\cite{nv_photon}, or magnetic dipole-dipole couplings between the electron spins~\cite{nv_nv_gate}.
Although the feasibility of the magnetic-coupling approach has been demonstrated recently~\cite{nv_nv_gate}, fabricated NV arrays often suffer from shorter electron coherence times that affect the fidelity of the quantum gates. From this perspective, $^{14}$N or $^{15}$N nuclear spins would be better-suited qubits due to their longer coherence times, together with the availability of single-shot readout~\cite{nv_nuclear_single_shot}. Unfortunately, the direct nuclear dipole-dipole interaction is negligible, which necessitates the search for alternative schemes to couple the nuclear spins. This letter presents a theoretical proposal for implementing robust quantum gates between two distant nuclear-spin qubits mediated by the long-range dipolar interaction between electron spins. The main idea is to exploit the long nuclear coherence times for storage, and to use the electronic degrees of freedom as a quantum bus that mediates the nuclear spin interaction. Such a general scheme can be applied to different setups, and has also been proposed for quantum-Hall systems~\cite{Hall}. Active control of the spins via microwave fields allows reaching high fidelities, even in the presence of the magnetic noise associated to the complex mesoscopic environment of solid-state systems. In fact, the nuclear driving acts as a continuous decoupling mechanism~\cite{cont_decoupling} that minimizes the effects of the noise, and provides a new tool in addition to pulsed techniques~\cite{dynamical_decoupling_nv}.
{\it The model.-} We consider two NV defects $j=1,2$, whose unpaired electrons form a spin-triplet ground state $S_j=1$, and focus on $^{14}$N with a nuclear spin $I_j=1$. The Hamiltonian that describes each NV center is $H_j=H_j^{\text{(e)}}+H_j^{\text{(n)}}+H_{j}^{\text{(e-n)}},$
\begin{equation}
\label{local_hamiltonian}
\begin{split}
H_j^{\text{(e)\phantom{()}}}&=D_j\left((S_j^z)^2-\textstyle{\frac{1}{3}}\boldsymbol{S}_j^2\right)+ g_{\text{e}}\mu_{\text{B}}\boldsymbol{B}\cdot\boldsymbol{S}_j,\\
H_j^{\text{(n)\phantom{()}}}&=-P_j\left((I_j^z)^2-\textstyle{\frac{1}{3}}\boldsymbol{I}_j^2\right)- g_{\text{n}}\mu_{\text{N}}\boldsymbol{B}\cdot\boldsymbol{I}_j,\\
H_j^{\text{(e-n)}}&= A^{\shortparallel}_jS_j^zI_j^z+\textstyle\frac{1}{2} A^{\bot}_j(S_j^{+}I_j^-+S_j^{-}I_j^+),\\
\end{split}
\end{equation}
where $\boldsymbol{S}_j,\boldsymbol{I}_j$ are the electronic and nuclear spin-1 operators, and ${S}_j^{\pm}=S^x_j\pm \ii S^y_j$, ${I}_j^{\pm}=I^x_j\pm \ii I^y_j$ the usual ladder operators. Here, $D_j$($P_j$) stands for the zero-field splitting of the electronic (nuclear) ground state, $\boldsymbol{B}$ is an external magnetic field, $\mu_{\rm B} (\mu_{\rm N})$ is the Bohr (nuclear) magneton, and $g_{\rm e}(g_{\rm n})$ is the electron (nuclear) g-factor. The electron-nuclei interaction is quantified by the hyperfine longitudinal (transverse) coupling $A_j^{\shortparallel}$ ($A_j^{\bot}$). The present discussion is focused on a single pair of closely-spaced NV centers, and we use the realistic parameters of the experiment in~\cite{nv_nv_gate}. We emphasize, however, that this scheme can be extended to arrays of implanted NV centers, provided that their distance is small enough. Let us also remark the hierarchy of couplings, $D_j\gg P_j\gtrsim A_j^{\shortparallel},A_j^{\bot}$, and $g_{\text{e}}\mu_{\text{B}}\gg g_{\text{n}}\mu_{\text{N}}$
(see Table~\ref{o_magnitude}, where $\hbar=1$). Finally, we introduce the secular dipole-dipole interaction between the electron spins
\begin{equation}
\label{dipole_hamiltonian}
H_{12}^{\text{(e-e)}}=J_{12}\left(3S_1^zS_2^z-\boldsymbol{S}_1\cdot \boldsymbol{S}_2\right),
\end{equation}
where $J_{12}=g_{\text{e}}^2\mu_{\text{B}}^2(1-3\cos^2\theta_{12})/2 c r_{12}^3$ in gaussian units, $\boldsymbol{r}_{12}$ is the distance between the NV centers, $\cos \theta_{12}=\boldsymbol{e}_{z}\cdot \boldsymbol{r}_{12}/r_{12}$, and $c$ is the speed of light. For the distances reached in the experiment, $r_{12}\approx10$nm, the dipolar coupling $J_{12}\approx 70\text{kHz}$ is the smaller energy scale in the problem. As mentioned above, the magnetic dipole-dipole interaction between the nuclear spins is completely negligible since $(g_{\text{n}}\mu_{\rm N}/g_{\text{e}}\mu_{\rm B})^2\approx 10^{-8}$, and an indirect mechanism for the nuclear coupling is thus required.
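An order-of-magnitude check of this dipolar scale is straightforward; the Python sketch below works in SI units (the expression above is written in Gaussian units) and drops the angular factor, reproducing the tens of kHz quoted at $r_{12}\approx 10$ nm.
\begin{verbatim}
import numpy as np

mu0, h = 4e-7*np.pi, 6.626e-34   # SI constants
muB, ge = 9.274e-24, 2.0
r12 = 10e-9                      # NV-NV separation in metres

# dipolar energy scale divided by h; angular factor omitted
J12 = mu0/(4.0*np.pi)*(ge*muB)**2/(h*r12**3)
print(f"J12 ~ {J12/1e3:.0f} kHz")  # ~50 kHz, same order as the quoted 70 kHz
\end{verbatim}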
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{effective_coupling_bis.pdf}
\caption{\label{cones} {\bf Effective nuclear spin-spin interaction.} (a) Schematic diagram of the electron-mediated interaction between the nuclear spins, which exploits the magnetic dipolar interaction and the local hyperfine coupling. (b) Diagram of the energy levels of the two NV centers. Since $D_j$ is the largest energy scale, the energies are clustered in manifolds determined by the electronic spin. The transverse part of the hyperfine coupling $A_{j}^{\bot}$ induces transitions between different manifolds, and mediates an effective XX interaction between the nuclear spins. (c) Schematic diagram of the Zeeman splitting for the electronic and nuclear energy levels. By carefully selecting the microwave frequencies, we drive a particular electronic and nuclear transition. (d) Energy levels of the driven Hamiltonian. For very strong driving $\Omega_{\text{e}}$, the electronic spin in the lowest manifold is $\ket{-}\propto\ket{0}-\ket{{\rm -1}}$. The hyperfine coupling $A_{j}^{\shortparallel}$ induces virtual transitions to the excited manifold, split by the dipolar interaction and the inhomogeneous broadening, and leads to an effective ZZ interaction. }
\label{effective_coupling}
\end{figure}
{\it Effective static interactions.-} In Fig.~\ref{effective_coupling}{\bf(a)}, we represent schematically the process leading to nuclear spin-spin interactions. The hyperfine interaction couples the nuclear to the electronic spins of each NV center, which are in turn coupled through the magnetic dipole-dipole interaction. Therefore, one may use the electrons as a bus to mediate the nuclear coupling. A naive estimate of this coupling follows from Fig.~\ref{effective_coupling}{\bf(b)}, where we represent the energy spectrum of $H_0=\sum_{j}(H_j^{\text{(e)}}+H_j^{\text{(n)}})+H_{12}^{\text{(e-e)}}$. Due to the energy-scale hierarchy in Table~\ref{o_magnitude}, the levels are clustered in manifolds determined by the electronic spins $\ket{m_1,m_2}_{\rm e}$. The dynamics within the ground-state manifold, $\ket{0,0}_{\rm e}$, corresponds to nuclear spin flips $\ket{M_1,M_2}_{\rm n}\to\ket{M'_1,M'_2}_{\rm n},$ with $M_j,M'_j=0,\pm 1$, and follows from second-order processes where the hyperfine coupling virtually populates states from the excited manifold. Therefore, a crude estimate of the dynamics is $H_{\text{eff}}\approx J_{\text{eff}}I_1^+I_2^-+\text{H.c.}$, where $J_{\text{eff}}\propto(A_1^{\bot}A_2^{\bot})/D$. A more careful Schrieffer-Wolff-type calculation takes into account the two possible channels, symmetric or anti-symmetric, which lead to the destructive interference of this coupling $J_{\text{eff}}\propto (A_1^{\bot}A_2^{\bot})/D-(A_1^{\bot}A_2^{\bot})/D$. It is precisely the role of the magnetic dipole-dipole interaction to split these channels, suppressing the perfect destructive interference, and leading to
\begin{equation}
\label{eff_static}
H_{\rm eff}^{\rm xx}=J_{\rm eff}^{\rm xx}(I_1^+I_2^-+I_1^-I_2^+)-\hspace{-0.5ex}\sum_{j}P_j(I_j^z)^2,\hspace{0.5ex}J_{\text{eff}}^{\rm xx}=\frac{2A_1^{\bot}A_2^{\bot}}{D^2}J_{12}.
\end{equation}
This Hamiltonian describes the {\it flip-flop interaction} between the $^{14}$N nuclei leading to an exchange of the spin excitations.
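As a quick numerical check (a Python sketch using the values of Table \ref{o_magnitude}), the flip-flop coupling indeed evaluates to a fraction of a Hz:
\begin{verbatim}
A_perp = 2.3e6   # Hz, transverse hyperfine coupling (Table I)
D      = 2.87e9  # Hz, zero-field splitting
J12    = 70e3    # Hz, dipolar coupling

J_xx = 2.0*A_perp**2*J12/D**2
print(f"J_eff^xx ~ {J_xx:.2f} Hz")  # ~0.1 Hz
\end{verbatim}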
\begin{table*}
\centering
\caption{{\bf Specific values of the coupling strengths} }
\begin{tabular}{ c c c c c c c c c c c c c}
\hline
\hline
$D_j$ & $P_j$ & $A_j^{\shortparallel},A_j^{\bot}$ & $J_{12}$ & $g_{\text{e}}\mu_{\text{B}}$ & $g_{\text{n}}\mu_{\text{N}} $& $B $ & $\Omega_{\text{e}}$ & $\Omega_{\text{n}}$ & $J^{\rm xx}_{\text{eff}}$ & $J^{\rm zz}_{\text{eff}}$ \\
\hline
\hspace{0.5 ex}2.87 GHz\hspace{0.5ex} &\hspace{0.5ex} 5.04 MHz \hspace{0.5ex}& \hspace{0.5ex}2.1,2.3 MHz\hspace{0.5ex} &\hspace{0.5ex} 70 kHz\hspace{0.5ex} & \hspace{0.5ex}2.8 MHz$\cdot$ G$^{-1}$\hspace{0.5ex} & \hspace{0.5ex} 0.31 kHz$\cdot$ G$^{-1}$ \hspace{0.5ex} & \hspace{0.5ex} 30 G \hspace{0.5ex} & \hspace{0.5ex} 15 MHz \hspace{0.5ex} & \hspace{0.5ex} 1 kHz \hspace{0.5ex} & \hspace{0.5ex} 0.1 Hz \hspace{0.5ex} & \hspace{0.5ex} 0.1 kHz \hspace{0.5ex} \\
\hline
\hline
\end{tabular}
\label{o_magnitude}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=2\columnwidth]{gate_sequence.pdf}
\caption{{\bf Nuclear quantum gate between two NV centers:} Scheme for the initialization, evolution, and read-out of the effective $J^{\rm xx}_{\rm eff}$ interaction in (a), and $J^{\rm zz}_{\rm eff}$ interaction in (b). Here, $X_{j,\phi}={\rm exp}({\ii\phi\tau^x_j}/2),Y_{j,\theta}={\rm exp}(\ii\theta\tau_j^y/2)$ represent finite pulses, and $\mathcal{P}$($\mathcal{M}$) stand for the electron (nuclear) spin polarization. (c) Comparison between the effective dynamics under the Hamiltonian~\eqref{eff_static} and the exact time evolution under Eqs.~\eqref{local_hamiltonian}-\eqref{dipole_hamiltonian} for the $J^{\rm xx}_{\rm eff}$ nuclear interaction. The expectation values represented correspond to the nuclear spin $\langle I_j^z\rangle$, and electronic spin $\langle S_{\rm tot}^z\rangle=\langle S_{1}^z+S_{2}^z\rangle$. (d) Comparison between the exact~\eqref{driven_hamiltonian} and effective~\eqref{ising} dynamics for the $J^{\rm zz}_{\rm eff}$ nuclear interaction, together with an echo scheme that allows us to get rid of the fast single-nuclei dynamics. We represent the nuclear expectation values $\delta\langle \tau_j^x\rangle=\langle \tau_j^x\rangle_{t_{\rm f}}-\langle \tau_j^x\rangle_{t_{\rm 0}}$. The dotted lines correspond to $J_{12}=0$, where there is no interaction induced on the nuclei. (e) Performance of the ZZ-gate in the presence of different strengths of the electron dephasing noise $b_j=\{5,15,25,35,50,55\}$kHz, where the nuclear noise is $B_j=0.1b_j$. The corresponding Ramsey decoherence times are roughly $T_{2{\rm e}}\approx\{0.2,0.07,0.04, 0.03, 0.02,0.018\}$ms, $T_{2{\rm n}}=0.1T_{2{\rm e}}$. }
\label{nuclear_gate}
\end{figure*}
In Fig.~\ref{nuclear_gate}{\bf(a)}, we present a scheme for the electron-mediated gate between two NV nuclei based on Eq.~\eqref{eff_static}, referred to as the nuclear XX gate. The initialization yields the state $\ket{\psi_0}=\ket{\phi_{\rm e}}\otimes\ket{\varphi_{\rm n}}=\ket{0,0}_{\rm e}\otimes\ket{0,1}_{\rm n}$, where the electrons belong to the ground-state manifold of Fig.~\ref{effective_coupling}{\bf(b)}, and the dynamics of the spin excitation is determined by virtual electron spin-flip processes. In Fig.~\ref{nuclear_gate}{\bf(c)}, we study numerically the accuracy of the effective Hamiltonian~\eqref{eff_static}, which is compared to the exact evolution under the total Hamiltonian~\eqref{local_hamiltonian}-\eqref{dipole_hamiltonian}. One observes that the electron state remains in the ground state, whereas there is a periodic exchange of the spin excitation between the nuclei. The remarkable agreement of both predictions justifies the validity of the effective nuclear spin-spin Hamiltonian in Eq.~\eqref{eff_static}. Unfortunately, the parameters in Table~\ref{o_magnitude} yield a vanishingly-small coupling $J^{\rm xx}_{\text{eff}}\approx 0.1$Hz, which is far too slow to produce any observable coherent coupling between the nuclei. Even if not of practical use, the above derivation gives a neat account of the mechanism of electron-mediated interactions, and will help us in understanding how to raise the interaction strength.
A possibility to overcome this problem is to apply a magnetic field, such that the Zeeman shift reduces $D\to D-g_{\text{e}}\mu_{\text{B}}B$, thus enhancing $J^{\rm xx}_{\text{eff}}$. Yet, one faces two important problems: {\it i)} In general, the axes of the NV centers are not aligned, and each electronic spin experiences a different Zeeman shift. For the large fields required, this inhomogeneity might exceed the dipolar coupling, and thus spoil the scheme. {\it ii)} The dephasing exerted by the environment would have a contribution that ruins the coherence of the interaction. We demonstrate below that there is a different approach that overcomes both problems simultaneously, and yet enhances the nuclear spin interaction: {\it continuous microwave driving}~\cite{cont_decoupling}.
{\it Effective driven interactions.-} We discuss now the effects of a continuous microwave field that drives both the electronic and nuclear spins. The effect of the driving is two-fold: {\it i)} By addressing each NV center with different microwave fields, one can independently tune their frequencies so that they become resonant with a particular transition. This allows us to overcome the problems associated with both the inhomogeneous broadening, and the different Zeeman shifts. Moreover, this can be used for single addressing of NV's, especially when combined with magnetic gradients. {\it ii)} By tuning the microwave frequency on resonance with the transition, one introduces a new energy scale that governs the system, namely the Rabi frequency. This parameter can be tuned by controlling the microwave power, allowing us to enhance $J_{\text{eff}}$.
Let us consider the Zeeman effect associated to $B=30$ G in Fig.~\ref{effective_coupling}{\bf(c)}. By setting the microwave frequencies to $\omega_{\text{e}j}=D_j-g_{\text{e}}\mu_{\text{B}}B_j, \omega_{\text{n}j}=P_j-g_{\text{n}}\mu_{\text{N}}B_j$, one resonantly drives the transitions between the electronic and nuclear levels $m_j=0\leftrightarrow-1$, $M_j=0\leftrightarrow -1$. These driving terms can be written as
\begin{equation}
\label{driving}
H_{{\rm d}}(t)=\sum_{j}\Omega_{\text{e}}\sigma^x_j\cos{ \omega_{\text{e}_j} t}+\Omega_{\text{n}}\tau^x_j\cos{ \omega_{\text{n}_j} t},
\end{equation}
where the Rabi frequencies of the electronic and nuclear transitions are $\Omega_{\text{e}},\Omega_{\text{n}}$, and the electronic and nuclear Pauli matrices $\sigma_j^x$, $\tau_j^x$.
In the interaction picture with respect to $H_{0,1}=\sum_{j}D_j(S_j^z)^2-P_j(I_j^z)^2+g_{\text{e}}\mu_{\text{B}}B_jS_j^z-g_{\text{n}}\mu_{\text{N}}B_jI_j^z$, one can neglect the rapidly oscillating terms associated to the transverse part of the Zeeman shifts, and the hyperfine coupling. This rotating wave approximation is justified for the parameters shown in Table~\ref{o_magnitude}. Additionally, we consider two NV centers with different axes, which allows us to neglect the transverse part of the magnetic dipole coupling. For weak-enough driving, we arrive at the total {\it driven Hamiltonian}
\begin{equation}
\label{driven_hamiltonian}
H_0=\hspace{-0.5ex}\sum_{j}\hspace{-0.5ex}\left(\textstyle\frac{1}{2} \Omega_{\text{e}}\sigma^x_j+\textstyle\frac{1}{2}\Omega_{\text{n}}\tau^x_j\right)+2J_{12}S_1^zS_2^z,\hspace{1ex}H_1\hspace{-0.5ex}=\hspace{-0.5ex}\sum_{j}A_j^{\shortparallel}S_j^zI_j^z.\\
\end{equation}
We stress that these approximations are justified by the parameters in Table~\ref{o_magnitude}, and supported by numerical simulations.
We derive now the electron-mediated nuclear spin interactions starting from Eq.~\eqref{driven_hamiltonian}. We note that there is again a hierarchy in the couplings $\Omega_{\text{e}}\gg A_j^{\shortparallel}\gg J_{12}\gg \Omega_{\text{n}}$, which leads to the clustering of energy levels shown in Fig.~\ref{effective_coupling}{\bf(d)}. By considering the electron ground-state, the nuclear spins can interact through virtual electron spin-flips to the excited manifolds. In this driven regime, it is the longitudinal hyperfine coupling $A_j^{\shortparallel}$ which induces such virtual transitions. A Schrieffer-Wolff-type calculation yields the nuclear Hamiltonian
\begin{equation}
\label{ising}
H_{\text{eff}}^{\rm zz}\hspace{-0.5ex}=J^{\rm zz}_{\text{eff}}\tau_1^z\tau_2^z+\hspace{-0.5ex}\sum_j\hspace{-0.5ex}\Omega_{\text{n}}\tau_j^x-\textstyle\frac{1}{4} A_j^{\shortparallel}\tau_j^z,\hspace{0.5ex} J^{\rm zz}_{\text{eff}}=\hspace{-0.5ex}\frac{-A_1^{\shortparallel}A_2^{\shortparallel}}{8\Omega_e}\hspace{-0.5ex}\left(\hspace{-0.5ex}\frac{J_{12}}{\Omega_{\text{e}}}+2\xi\hspace{-0.5ex}\right)\hspace{-0.5ex},
\end{equation}
where we considered the inhomogeneous broadening of the hyperfine couplings $ \xi=2[(A_2^{\shortparallel})^2-(A_1^{\shortparallel})^2]/\Omega_eJ_{12}$. This Hamiltonian is an {\it Ising magnetic interaction} between the nuclear spins, which are additionally subjected to a { transverse field} due to the driving, and a { longitudinal field} due to the hyperfine coupling. As advanced previously, we have been able to enhance the electron-mediated nuclear interaction, which becomes $J^{\rm zz}_{\text{eff}}\approx 0.1$kHz for the parameters in Table~\ref{o_magnitude}. Remarkably, the strength of the nuclear spin interaction has increased by three orders of magnitude $J^{\rm zz}_{\text{eff}}\approx 10^3J^{\rm xx}_{\text{eff}}$.
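With the values of Table \ref{o_magnitude}, and assuming equal hyperfine couplings on the two centers so that the broadening term $\xi$ vanishes, the driven coupling evaluates to the quoted order of magnitude (Python sketch):
\begin{verbatim}
A_par   = 2.1e6  # Hz, longitudinal hyperfine coupling (Table I)
Omega_e = 15e6   # Hz, electron Rabi frequency
J12     = 70e3   # Hz, dipolar coupling

# xi = 0 assumed (equal hyperfine couplings on the two centers)
J_zz = -A_par**2/(8.0*Omega_e)*(J12/Omega_e)
print(f"|J_eff^zz| ~ {abs(J_zz):.0f} Hz")  # ~0.2 kHz, vs ~0.1 Hz for J^xx
\end{verbatim}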
In Fig.~\ref{nuclear_gate}{\bf(b)}, we schematically describe the necessary ingredients for the nuclear ZZ gate. The initialization consists of the electron (nuclear) spin polarization $\mathcal{P}$($\mathcal{M}$), together with single-spin gates. $\mathcal{P}$ is obtained by the optical pumping cycle available for NV centers~\cite{nv_optical_pumping,nv_electron_rabi}, whereas $\mathcal{M}$ is based on the techniques developed for the nuclear single-shot measurement~\cite{nv_nuclear_single_shot}, followed by the electron state-dependent fluorescence~\cite{nv_optical_pumping,nv_electron_rabi}. Once polarized, $\ket{0,0}_{\rm e}\otimes\ket{0,0}_{\rm n}$, one applies unitary gates based on microwave pulses of different duration, $
Y_{j,\frac{\pi}{2}}=(\mathbb{I}+\ii\tau_j^y),\hspace{1ex} Y_{j,-\frac{\pi}{2}}=(\mathbb{I}-\ii\tau_j^y)$ (also for the electron spin), which lead to $\ket{\psi_0}=\ket{{\rm --}}_{\rm e}\otimes\ket{{\rm -+}}_{\rm n}$. The evolution of this state is dictated by the interaction-picture Hamiltonian~\eqref{ising}, which leads to $U^{\rm zz}_{t_2,t_1}=e^{-\ii H_{0,1}t_2}e^{-\ii H_{\rm eff}^{\rm zz}(t_2-t_1)}e^{+\ii H_{0,1}t_1}$. Due to the longitudinal field, and the additional contributions of $H_{0,1}$, the simple periodic exchange of the nuclear spin excitation shall be accompanied by fast oscillations. In order to observe neatly the effect of the interaction, one may perform a spin-echo sequence, such that the nuclear spins are inverted at half the gate time by a microwave pulse $X_{j,\pi}=\ii\tau_{j}^x$. In this case, the fast single-nuclei oscillations refocus after the spin-echo period $t_{\rm f}$, and one observes solely the effect of the interaction. In Fig.~\ref{nuclear_gate}{\bf(d)},
we compare the effective description~\eqref{ising} to the Hamiltonian~\eqref{driven_hamiltonian}, which displays a clear agreement. In particular, when the echo period matches twice the ZZ-gate time $t_{\rm f}=2t_{\rm zz}=\pi/2J_{{\rm eff}}^{{\rm zz}}\approx 9$ms, one finds a perfect excitation exchange $\langle \tau^x_1\rangle:-1\to+1,\langle\tau^x_2\rangle:+1\to-1$. Note that for $J_{12}=0$, this effect is completely absent. Finally, considering $t_{\rm f}=t_{\rm zz}$, and setting the echo pulse along the y-axis, the dynamics generates an entangled nuclear state $\ket{\psi_0}=\ket{{\rm --}}_{\rm e}\otimes\ket{{\rm -_y+_y}}_{\rm n}\to\ket{\psi_{\rm f}}=\ket{{\rm --}}_{\rm e}\otimes(\ket{{\rm -_y+_y}}_{\rm n}+\ket{{\rm +_y-_y}}_{\rm n})/\sqrt{2}$. Once the gate has been performed, the nuclear operators $\langle I_j^z\rangle$, $\langle \tau_j^x\rangle$ must be measured. Since the state-dependent fluorescence is particular to the electron spins, one should map the nuclear information onto the electrons, and then measure. This can be achieved in a quantum non-demolition fashion by using a microwave on an electron-spin transition conditioned to the nuclei~\cite{nv_nuclear_single_shot}.
{\it Decoupling from decoherence.-} So far, our discussion has focused on the idealized situation of isolated NV centers. However, every quantum system is inevitably coupled to an environment that degrades its coherence. This phenomenon, known as {\it decoherence}, must be seriously accounted for in solid-state materials,
where the system-environment coupling is usually strong. In the particular case of NV centers, the major source of decoherence is the coupling to
other impurity spins, such as single substitutional nitrogen electron spin (P1 center) in type Ib diamond~\cite{adjustable_spin_bath_nv}, or $^{13}$C isotopes in type IIa~\cite{c13_environment}. The microscopic description of the spin bath is an intricate many-body problem, and is a current subject of intense research. Here, we use a phenomenological model of the bath that yields a fluctuating magnetic field shifting the resonance frequencies. Due to the spin interactions, this effective field is modeled as a stochastic Ornstein-Uhlenbeck process~\cite{dynamical_decoupling_nv,rabi_oscillations_decay}
\begin{equation}
\label{noise}
H_{\rm noise}=\sum_j \left( b_{j}(t)S_j^z+ B_{j}(t)I_j^z\right),
\end{equation}
where $b_{j}(t), B_{j}(t)$ are random processes with autocorrelation
$\langle b_j(t)b_j(0)\rangle = b_j^2 e^{-r_j t}, \hspace{1ex} \langle B_j(t)B_j(0)\rangle =B_j^2 e^{-R_j t},$
where $b_j^2,B_j^2$ represent the variances of the zero-mean gaussian distributions, and $r_j,R_j$ the inverses of their correlation times. In particular, the decoherence time of an electronic (nuclear) Ramsey experiment is given by $T_{2\rm e}=1/b_j$ ($T_{2\rm n}=1/B_j$). By considering the particular time-dependence of these stochastic processes, we numerically integrate the noisy dynamics, and average over $N=10^3$ realizations of the random process. This allows us to study the effects of decoherence on the gate.
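The Ornstein-Uhlenbeck field $b_j(t)$ can be sampled with an exact one-step update; the Python sketch below uses illustrative values of the noise strength and correlation rate.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
b, r = 25e3, 1e3          # noise strength (Hz), inverse correlation time (1/s)
dt, n_steps = 1e-6, 200000

decay = np.exp(-r*dt)
kick = b*np.sqrt(1.0 - decay**2)
x = np.empty(n_steps)
x[0] = rng.normal(0.0, b)  # start in the stationary distribution
for k in range(n_steps - 1):
    # exact OU update: mean reverts, stationary variance stays b^2
    x[k + 1] = decay*x[k] + kick*rng.normal()

print("sample std ~", x.std(), "(target b =", b, ")")
\end{verbatim}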
For the slow XX gate (Fig.~\ref{nuclear_gate}{\bf(a)}), the limiting factor is the nuclear dephasing time, which can attain values of $T_{2{\rm n}}\approx 10{\rm ms}$. Even for the purest samples, the coherence of the gate is completely lost much before the target time $t_{\rm xx}\approx 4.5$s is reached. Therefore, the performance of this gate is extremely poor. For the fast ZZ gate (Fig.~\ref{nuclear_gate}{\bf(b)}), not only the nuclear-spin dephasing, but also the electron-spin dephasing limit the gate accuracy. In the dressed-state basis (see Fig.~\ref{effective_coupling}{\bf(d)}), the electron dephasing tries to induce a transition between the different manifolds, introducing additional noise in the nuclei. However, due to the strong driving $\Omega_{\rm e}$, these processes are partially suppressed. Additionally, a sufficiently strong nuclear driving, $\Omega_{\rm n}\gg B_j,(b_jA_j^{\shortparallel}/\Omega_{\rm e})$, provides an additional decoupling mechanism that enhances further the gate performance. In Fig.~\ref{nuclear_gate}{\bf(e)}, one observes the announced decoupling, since the gate performance at the target time $t_{\rm zz}\approx 4.5$ms is extremely good even for shorter electronic coherence times ranging from $T_{2{\rm e}}\approx 0.1$ms to $T_{2{\rm e}}\approx 50\mu$s. Due to the decoupling mechanism, the gate accuracy will actually be limited by the decay times $T_{1\rm{e}}$. Moreover, at this time scale, energy will be pumped into the system by the continuous driving. However, note that this limitation can be overcome since $T_{1{\rm e}}$ can be increased by orders of magnitude by cooling. Accordingly, one can achieve high fidelities.
Let us finally note that the effective decoupling mechanism presented here can also be used to improve the electron-spin gates based on the direct dipole interaction~\cite{nv_nv_gate}. In that case, the role of the microwave driving is to prolong dephasing times and to bring the two dressed electronic transitions to resonance to overcome the inhomogeneous broadening.
{\it Conclusions and outlook.-} We have demonstrated the feasibility of engineering electron-mediated spin-spin interactions between
the nuclei of two NV-centers. By continuous microwave driving, this scheme allows us to decouple from
the electronic and nuclear dephasing sources, and to increase the effective interactions by three orders of
magnitude, thus achieving $J_{\rm eff}^{\rm zz}\approx 0.1$kHz for distances of existing pairs of NV-centers~\cite{nv_nv_gate}. This scheme opens the possibility
for the realization of quantum information processors, quantum simulators and quantum sensors~\cite{nv_magnetrometry} on the
basis of NV-centers in diamond. Finally, we would like to stress the generality of this scheme, which can be applied to other solid-state technologies that are candidates for quantum-information processing.
{\it Acknowledgements.--} This work was supported by the EU STREP projects
HIP, PICC, and by the Alexander von Humboldt Foundation. We thank J. Cai for useful discussions.
\vspace{-4ex}
\section{Introduction}
In recent years, deep convolutional neural networks (CNNs) have proven
to be highly effective general models for a multitude of computer
vision problems \citep{krizhevsky2012imagenet,long2015fully,radford2015unsupervised,newell2016stacked}.
One such problem is \emph{coordinate regression}, where the goal is
to predict a fixed number of location coordinates corresponding to
points of interest in an input image. A well-known instance of this
problem is human pose estimation, for which CNNs are state-of-the-art.
In this paper we study CNN-based solutions to coordinate regression,
using the single-person pose estimation task as an exemplar. Such
solutions may exhibit the desirable properties of spatial generalization
and/or end-to-end differentiability.
Spatial generalization is the ability of a model to generalize knowledge
obtained at one location during training to another at inference time.
If a spatially generalizable model observes a tennis ball in the top-left
of an image during training, it should be able to successfully locate
a similar tennis ball at a previously unseen location in a new image
(\eg the bottom right). It follows that this property will make a
positive contribution to the overall generalization of a coordinate
regression model, since the goal is to find items anywhere in the
image. In general, the success of CNNs is understood to be a result
of the high generalization ability afforded by spatially shared parameters
\citep{lecun1990handwritten}. To maximize this advantage, care must
be taken to avoid trainable layers which can overfit on global structure.
\citet{lin2013nin} note that ``fully connected layers are prone
to overfitting, thus hampering the generalization ability of the overall
network''.
An end-to-end differentiable model can be composed with other differentiable
layers to form a larger model without losing the ability to train
using backpropagation \citep{rumelhart1985backprop}. In the case
of coordinate regression, being end-to-end differentiable means being
able to propagate gradients all the way from the output numerical
coordinates to the input image. It is possible to train a coordinate
regression model without this property, such as by matching predicted
heatmaps to target heatmaps generated from the ground truth locations.
However, this approach cannot be used in architectures where the numerical
coordinates are learned implicitly as intermediate values, including
the prominent example of Spatial Transformer Networks \citep{jaderberg2015spatial}.
There are many CNN-based solutions to other computer vision tasks,
such as classification and semantic segmentation, which exhibit both
spatial generalization and end-to-end differentiability. However,
existing solutions for coordinate regression sacrifice one property
or the other.
The most successful existing coordinate regression approach is to
apply a loss directly to output heatmaps rather than numerical coordinates
\citep{tompson2014joint,newell2016stacked,yang2017learning}. Synthetic
heatmaps are generated for each training example by rendering a spherical
2D Gaussian centered on the ground truth coordinates. The model is
trained to produce output images which resemble the synthetic heatmaps
using mean-square-error loss. During inference, numerical coordinates
are obtained from the model's output by computing the argmax of pixel
values, which is a non-differentiable operation. Although this approach
has good spatial generalization, it does have a few disadvantages.
Most notably, gradient flow begins at the heatmap rather than the
numerical coordinates (\figref{hm_arch}). This leads to a disconnect
between the loss function being optimized (similarity between heatmaps)
and the metric we are actually interested in (the distance between
predicted coordinates and ground truth). Only the brightest pixel
is used to calculate numerical coordinates at inference time, but
all of the pixels contribute to the loss during training. Making predictions
based on the argmax also introduces quantization issues, since the
coordinates have their precision tied to the heatmap's resolution.
Another coordinate regression approach is to add a fully connected
layer which produces numerical coordinates \citep{toshev2014deeppose,jaderberg2015spatial}.
An attractive (and sometimes \emph{required}) property of this approach
is that it is possible to backpropagate all the way from the predicted
numerical coordinates to the input image. However, the weights of
the fully-connected layer are highly dependent on the spatial distribution
of the inputs during training. To illustrate this point, consider
an extreme situation where the training set consists entirely of coordinates
located within the left-hand half of the image. Many of the fully
connected layer's input activations will be useless, and as a result
weights corresponding to the right-hand side of the image will not
be trained properly. So although the convolutional part of the model
is spatially invariant, the model as a whole will not generalize well
to objects on the right-hand side of the image. This is an inefficient
usage of the training data, and causes particularly bad performance
on small datasets.
\begin{figure}
\begin{centering}
\subfloat[\label{fig:hm_arch}Heatmap matching]{\begin{centering}
\includegraphics{build/hm_arch.tikz}
\par\end{centering}
}
\par\end{centering}
\begin{centering}
\subfloat[Fully connected]{\begin{centering}
\includegraphics{build/fc_arch.tikz}
\par\end{centering}
}
\par\end{centering}
\centering{}\subfloat[DSNT (ours)]{\begin{centering}
\includegraphics{build/dsnt_arch.tikz}
\par\end{centering}
}\caption{\label{fig:arch_comparison}Comparison of coordinate regression model
architectures. The arrows indicate inference (black) and gradient
flow (dashed red).}
\end{figure}
We propose our \emph{differentiable spatial to numerical transform}
(DSNT) layer as an alternative to existing approaches. The DSNT layer
may be used to adapt existing CNN architectures, such as a pretrained
ResNet \citep{he2016resnet}, to coordinate regression problems. Our
technique fully preserves the spatial generalization and end-to-end
differentiability of the model, without introducing additional parameters.
\figref{arch_comparison} illustrates how the DSNT layer fits into
the model as a whole in comparison to fully connected and heatmap
matching approaches. \tblref{desirable_props} summarizes the features
that DSNT possesses, only a subset of which appear in fully connected (FC) and
heatmap matching (HM) based approaches.
We find that DSNT is able to consistently outperform the accuracy
of heatmap matching and fully connected approaches across a variety
of architectures on the MPII human pose dataset \citep{andriluka20142d},
and is therefore a suitable replacement in most situations. Our experiments
show that state-of-the-art stacked hourglass models \citep{newell2016stacked}
achieve higher accuracy when heatmap matching is replaced with DSNT.
For ResNet-34 models, DSNT outperforms heatmap matching by 90.5\%
with $7\times7$ pixel heatmaps, and by 2.0\% with $56\times56$ pixel
heatmaps. Since accuracy at low heatmap resolution is much better
with DSNT, a wider variety of efficient architectures may be considered
for coordinate regression. For instance, a simple ResNet-50 network
with DSNT is comparable in accuracy to an 8-stack hourglass network,
but exhibits triple the speed and half of the memory usage during
inference.
\begin{table}
\begin{centering}
\begin{tabular}{|c|>{\centering}p{0.9cm}|>{\centering}p{0.9cm}|>{\centering}p{0.9cm}|}
\hline
& HM & FC & DSNT\tabularnewline
\hline
\hline
Fully differentiable & \textcolor{darkred}{\ding{55}} & \textcolor{darkgreen}{\ding{51}} & \textcolor{darkgreen}{\ding{51}}\tabularnewline
\hline
Spatially generalizable & \textcolor{darkgreen}{\ding{51}} & \textcolor{darkred}{\ding{55}} & \textcolor{darkgreen}{\ding{51}}\tabularnewline
\hline
No parameters & \textcolor{darkgreen}{\ding{51}} & \textcolor{darkred}{\ding{55}} & \textcolor{darkgreen}{\ding{51}}\tabularnewline
\hline
Good for high-res output & \textcolor{darkgreen}{\ding{51}} & \textcolor{darkred}{\ding{55}} & \textcolor{darkgreen}{\ding{51}}\tabularnewline
\hline
Good for low-res output & \textcolor{darkred}{\ding{55}} & \textcolor{darkgreen}{\ding{51}} & \textcolor{darkgreen}{\ding{51}}\tabularnewline
\hline
Direct coordinate loss & \textcolor{darkred}{\ding{55}} & \textcolor{darkgreen}{\ding{51}} & \textcolor{darkgreen}{\ding{51}}\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{Presence of desirable properties in heatmap matching (HM), fully connected
output (FC), and differentiable spatial to numerical transform (DSNT).}
\label{tbl:desirable_props}
\end{table}
The DSNT layer presented in this paper is very similar to the soft-argmax
operation of \citet{luvizon2017human}, which was developed in parallel
with our own work. The soft-argmax has also been applied to different
problem domains prior to this \citep{yi2016lift,levine2016end}. However,
we extend the idea further by proposing a regularization strategy
which increases prediction accuracy. Additionally, we conduct a comprehensive
set of experiments exploring configurations and properties of the
operation, and the trade-off between accuracy and inference speed
in the context of complete pose estimation models.
\section{Related Work}
Heatmap matching and fully connected layers are prevalent in existing
solutions to problems including human pose estimation and Spatial
Transformer Networks. As such, the following section describes how
existing coordinate regression approaches are applied in those contexts.
Although this paper focuses on pose estimation as an exemplar of the
DSNT layer's capability, our approach is broadly applicable to any
coordinate regression problem.
\subsection{Human pose estimation}
DeepPose \citep{toshev2014deeppose} is one of the earliest CNN-based
models to perform well on the human pose estimation task, and helped
pioneer the current dominance of deep learning in this area. In order
to predict pose joint locations, DeepPose uses a multi-stage cascade
of CNNs with fully connected outputs. The first stage of the cascade
predicts the absolute coordinates of the joint locations, and subsequent
stages refine the predictions by producing relative position deltas.
The authors argue that the cascade arrangement enables reasoning about
human pose at a higher level, since later stages are able to analyze
global structure.
Shortly after DeepPose was published, \citet{tompson2014joint} proposed
a higher accuracy model which uses heatmap matching to calculate loss.
Heatmap matching has since become overwhelmingly dominant amongst
human pose estimation models, including the state-of-the-art stacked
hourglass architecture \citep{newell2016stacked} which is fundamental
to current leaders of the MPII single person pose estimation challenge
\citep{yang2017learning,chen2017advposenet,chou2017self,chu2017multi}.
Each ``hourglass'' in a stacked hourglass network uses the first
half of its layers to downsample activations, and the second half
to upsample back to the original size. By stacking multiple hourglasses
together, the network is able to process data in a repeated bottom-up,
top-down fashion, achieving an effect similar to DeepPose's cascade.
Skip layers are used extensively throughout the architecture, both
within and across individual hourglasses, which makes the model easier
to train with backpropagation.
Very recent research suggests that adversarial training \citep{goodfellow2014generative}
aids in the prediction of likely joint positions by having a discriminator
learn the difference between coherent and nonsensical poses \citep{chen2017advposenet,chou2017self}.
Although we do not conduct such experiments in this paper, we observe
that adversarial training is orthogonal to our findings and could
be combined with our DSNT layer as future work.
\subsection{Spatial Transformer Networks}
The internal Localisation Network component of Spatial Transformer
Networks \citep{jaderberg2015spatial} uses a fully connected layer
to predict translation transformation parameters, which are effectively
just 2D location coordinates. It is not possible to use heatmap matching
in such a model, as gradients must be passed backwards through the
coordinate calculations. In contrast, our DSNT layer could be used
as a drop-in replacement for calculating the translation parameters.
\section{Main idea}
We introduce a new differentiable layer for adapting fully convolutional
networks (FCNs) to coordinate regression. FCNs are a broad class of
CNNs which rely solely on spatially invariant operations to produce
their outputs \citep{lin2013nin}, and are hence naturally spatially
generalizable. Most CNNs with fully connected output layers can be
converted into FCNs by simply removing the fully connected layers.
FCNs are already spatially generalizable and end-to-end differentiable,
so we design our new layer in such a way that these two desirable
properties are preserved. This new layer—which we call the DSNT layer—is
placed at the output of the FCN and transforms spatial heatmaps into
numerical coordinates.
Activations are represented spatially throughout an FCN, which is
very useful for tasks like semantic segmentation \citep{long2015fully}
where the output is intended to be spatial. However, for coordinate
regression tasks like human pose estimation the output needs to be
coordinate pairs. This begs the question: how do we transform spatial
activations into numerical coordinates such that we can still effectively
train the model?
\begin{figure}
\centering{}\noindent\begin{minipage}[t]{1\linewidth}\begin{center}
\begin{minipage}[t]{0.3\linewidth}\begin{center}
\captionsetup[subfigure]{width=1.0\textwidth}\subfloat[Example image with pose overlay]{\begin{centering}
\hspace{0.09cm}\includegraphics[width=1\linewidth]{build/example_heatmap_skel.tikz}
\par\end{centering}
}
\par\end{center}\end{minipage}\hfill{}\begin{minipage}[t]{0.3\linewidth}\begin{center}
\captionsetup[subfigure]{width=1.0\textwidth}\subfloat[\label{fig:gauss_heatmap}Training target for heatmap matching]{\begin{centering}
\hspace{0.09cm}\includegraphics[width=1\linewidth]{build/example_heatmap_synth.tikz}
\par\end{centering}
}
\par\end{center}\end{minipage}\hfill{}\begin{minipage}[t]{0.3\linewidth}\begin{center}
\captionsetup[subfigure]{width=1.0\textwidth}\subfloat[\label{fig:learned_heatmap}Heatmap learned implicitly with DSNT]{\begin{centering}
\hspace{0.09cm}\includegraphics[width=1\linewidth]{build/example_heatmap_dsnt.tikz}
\par\end{centering}
}
\par\end{center}\end{minipage}\hfill{}
\par\end{center}\end{minipage}\caption{\label{fig:heatmap}Spatial representations of an example neck location.
Image (b) is a 2D Gaussian rendered at the ground truth location,
whereas (c) is learned freely by a model.}
\end{figure}
Consider the case of locating a person's neck in the input image.
This location may be represented spatially as a heatmap (\figref{gauss_heatmap}),
and can be learned by an FCN since it is simply a single-channel image.
The purpose of the DSNT layer is to transform such a heatmap into
numerical coordinates, which is the form of output we require for
coordinate regression. However, we have to be careful about how we
approach designing the DSNT, since we want the layer to be part of
an end-to-end trainable model. For example, if we simply take the
location of the brightest pixel then we cannot calculate meaningful
gradients during training. Therefore, we design the DSNT layer such
that it is able to propagate smooth gradients back through all heatmap
pixels from the numerical coordinates.
In contrast to heatmap matching techniques, we do not require applying
a loss directly to the heatmap output by the FCN to make it resemble
\figref{gauss_heatmap}. Instead, the heatmap is learned indirectly
by optimizing a loss applied to the predicted coordinates output by
the model as a whole. This means that during training the heatmap
will evolve to produce accurate coordinates via the DSNT layer. An
example of an implicitly learned heatmap is shown in \figref{learned_heatmap}.
\section{The Differentiable Spatial to Numerical Transform\label{sec:Calculating-the-DSNT}}
In this section we describe the technical details of our differentiable
spatial to numerical transform (DSNT) layer. The DSNT layer has no
trainable parameters, is fully differentiable, and generalizes spatially.
Accordingly, it is possible to use our layer as part of a CNN model
to enable numerical coordinate outputs without sacrificing end-to-end
learning with backpropagation.
The input to the DSNT is a single-channel normalized heatmap, $\hat{\bm{Z}}$,
represented as an $m\times n$ matrix where $m$ and $n$ correspond
to the heatmap resolution. By ``normalized'' we mean that all elements
of $\hat{\bm{Z}}$ are non-negative and sum to one—the same conditions
which must be fulfilled by a probability distribution. Using such
a normalized heatmap guarantees that predicted coordinates will always
lie within the spatial extent of the heatmap itself. The unnormalized
heatmap output of an FCN, $\bm{Z}$, can be normalized by applying
a heatmap activation function $\hat{\bm{Z}}=\phi(\bm{Z})$. Suitable
choices for $\phi(\bm{Z})$ are discussed in \secref{Heatmap-activation}.
\begin{figure}
\begin{centering}
\includegraphics[width=1\linewidth]{build/heatmap_inner_product.tikz}\vspace{-0.5cm}
\par\end{centering}
\begin{centering}
\begin{center}
\resizebox{\linewidth}{!}{$x=\left\langle \hat{\bm{Z}},\bm{X}\right\rangle _{F}=\left(\begin{array}{cccccc}
& & 0.1\times0.4 & +\\
0.1\times0.0 & + & 0.6\times0.4 & + & 0.1\times0.8 & +\\
& & 0.1\times0.4
\end{array}\right)=0.4$}
\par\end{center}
\begin{center}
\resizebox{\linewidth}{!}{$y=\left\langle \hat{\bm{Z}},\bm{Y}\right\rangle _{F}=\left(\begin{array}{cccccc}
& & 0.1\times-0.4 & +\\
0.1\times0.0 & + & 0.6\times0.0 & + & 0.1\times0.0 & +\\
& & 0.1\times0.4
\end{array}\right)=0.0$}
\par\end{center}
\par\end{centering}
\centering{}\caption{\label{fig:heatmap_inner_product}Coordinate calculation using the
differentiable spatial to numerical transform (DSNT).}
\end{figure}
Let $\bm{X}$ and $\bm{Y}$ be $m\times n$ matrices, where $X_{i,j}=\frac{2j-(n+1)}{n}$
and $Y_{i,j}=\frac{2i-(m+1)}{m}$. That is, each entry of $\bm{X}$
and $\bm{Y}$ contains its own $x$- or $y$-coordinate respectively,
scaled such that the top-left corner of the image is at $(-1,-1)$
and bottom-right is at $(1,1)$.
By taking a probabilistic interpretation of $\hat{\bm{Z}}$ we can
represent the coordinates, $\bm{\mathrm{c}}$, as a discrete bivariate
random vector with mass function $p(\bm{\mathrm{c}})$ defined as
\[
\operatorname{Pr}(\bm{\mathrm{c}}=\left[\begin{array}{cc}
X_{i,j} & Y_{i,j}\end{array}\right])=\hat{Z}_{i,j}
\]
for all $i=1\ldots m,j=1\ldots n$.
In the heatmap matching approach to coordinate regression, the predicted
numerical coordinates are analogous to the mode of $\bm{\mathrm{c}}$.
For the DSNT layer we instead take our prediction to be the mean of
$\bm{\mathrm{c}}$, denoted $\bm{\mu}=\mathbb{E}[\bm{\mathrm{c}}]$.
Unlike the mode, the mean can a) have its derivative calculated, allowing
us to backpropagate through the DSNT layer; and b) predict coordinates
with sub-pixel precision. \eqref{dsnt} details how the expectation
is calculated, and hence defines the DSNT operation. We use $\left\langle \cdot,\cdot\right\rangle _{F}$
to denote the Frobenius inner product, which is equivalent to taking
the scalar dot product of vectorized matrices.
\begin{equation}
\operatorname{DSNT}(\hat{\bm{Z}})=\bm{\mu}=\left[\begin{array}{cc}
\left\langle \hat{\bm{Z}},\bm{X}\right\rangle {}_{F} & \left\langle \hat{\bm{Z}},\bm{Y}\right\rangle {}_{F}\end{array}\right]\label{eq:dsnt}
\end{equation}
\figref{heatmap_inner_product} illustrates the DSNT operation with
an example. Notice how the symmetrical off-center values of the heatmap
cancel each other out in the calculations. In practice, this property
tends to cause the network to learn heatmaps which are roughly symmetrical
about the predicted location.
One seemingly apparent flaw with using the mean instead of the mode
is that the predicted coordinates will be affected adversely by outliers
in the heatmap. However, it is important to keep in mind that the
heatmap itself is learned with the objective of optimizing coordinate
accuracy. Therefore, during training the model is encouraged to threshold
its activations such that outliers are simply not placed in the heatmap
at all. That is, the network is specifically punished for polluting
the heatmap with low confidence outliers \emph{because} they would
adversely affect results, and hence the model can simply learn to
avoid such situations.
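To make the transform concrete, the following is a minimal PyTorch
sketch of \eqref{dsnt}. The function name, tensor layout, and batching
here are our own illustrative assumptions; the published
\texttt{dsntnn} implementation may differ in detail.
\begin{verbatim}
import torch

def dsnt(heatmaps):
    # heatmaps: (batch, joints, m, n); non-negative, each map sums to one.
    m, n = heatmaps.shape[-2:]
    dt = heatmaps.dtype
    # X[i,j] = (2j-(n+1))/n and Y[i,j] = (2i-(m+1))/m for 1-based i, j:
    # pixel centers scaled so the image spans (-1,-1) to (1,1).
    xs = (2 * torch.arange(1, n + 1, dtype=dt) - (n + 1)) / n
    ys = (2 * torch.arange(1, m + 1, dtype=dt) - (m + 1)) / m
    Y, X = torch.meshgrid(ys, xs, indexing='ij')
    # Frobenius inner products <Z,X>_F and <Z,Y>_F give the expected
    # coordinates (the mean of the bivariate random vector c).
    x = (heatmaps * X).sum(dim=(-2, -1))
    y = (heatmaps * Y).sum(dim=(-2, -1))
    return torch.stack([x, y], dim=-1)  # (batch, joints, 2)
\end{verbatim}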
\subsection{Heatmap activation\label{sec:Heatmap-activation}}
As mentioned earlier, a heatmap activation function $\phi(\bm{Z})$
is required to normalize the heatmap before applying the DSNT. Here
we will describe several choices for this function by decomposing
the activation into two parts. Firstly, each element of the input
image $\bm{Z}$ undergoes rectification to produce a non-negative
output. The rectified image $\bm{Z}'$ is then normalized using the
$L^{1}$ norm so that the elements sum to one (\ie $\hat{\bm{Z}}=(\sum Z'_{i,j})^{-1}\bm{Z}'$).
\begin{table}
\centering{}\caption{\label{tbl:heatmap_rectification}Heatmap activation functions and
their corresponding human pose estimation results.}
\begin{tabular}{lll}
\toprule
Name & Rectification & PCKh\tabularnewline
\midrule
\textbf{Softmax} & $Z'_{i,j}=\exp(Z_{i,j})$ & \textbf{86.81\%}\tabularnewline
Abs & $Z'_{i,j}=\left|Z_{i,j}\right|$ & 86.48\%\tabularnewline
ReLU & $Z'_{i,j}=\max(0,Z_{i,j})$ & 86.69\%\tabularnewline
Sigmoid & $Z'_{i,j}=(1+\exp(-Z_{i,j}))^{-1}$ & 86.71\%\tabularnewline
\bottomrule
\end{tabular}
\end{table}
\tblref{heatmap_rectification} shows some possible options for the
rectification function, along with validation set PCKh accuracy measurements
on the MPII human pose dataset. These results were gathered using
ResNet-34 models pretrained on ImageNet, dilated to produce a heatmap
resolution of $28\times28$ pixels. No regularization was used. Although
the choice of rectification function does not appear to have a large
impact on results, our experiments indicate that softmax works best.
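Note that exponential rectification followed by $L^{1}$ normalization
is exactly a spatial softmax over each heatmap, so the best-performing
activation can be written compactly. The following is a sketch; the
four-dimensional tensor layout is our assumption.
\begin{verbatim}
import torch

def softmax_activation(Z):
    # Z: (batch, joints, m, n) unnormalized heatmaps from the FCN.
    batch, joints, m, n = Z.shape
    flat = Z.reshape(batch, joints, m * n)
    # Softmax performs exp() rectification and division by the sum in
    # one numerically stable step, so each output map sums to one.
    return torch.softmax(flat, dim=-1).reshape(batch, joints, m, n)
\end{verbatim}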
\section{Loss function}
Since the DSNT layer outputs numerical coordinates, it is possible
to directly calculate the two-dimensional Euclidean distance between
the prediction $\bm{\mu}$ and ground truth $\bm{p}$. We take advantage
of this fact to formulate the core term of our loss function (\eqref{euclidean_loss}).
\begin{equation}
\mathcal{L}_{euc}(\bm{\mu},\bm{p})=\left\Vert \bm{p}-\bm{\mu}\right\Vert _{2}\label{eq:euclidean_loss}
\end{equation}
The Euclidean loss function has the advantage of directly optimizing
the metric we are interested in: the distance between the predicted
and actual locations.
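In code, the loss is a one-liner. This is a sketch; averaging over
the batch and joints is our own choice of reduction.
\begin{verbatim}
import torch

def euclidean_loss(mu, p):
    # mu, p: (batch, joints, 2) predicted and ground truth coordinates.
    return torch.norm(p - mu, dim=-1).mean()
\end{verbatim}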
\begin{figure}
\centering{}\includegraphics[width=1\linewidth]{build/mse_loss_problem.tikz}\caption{\label{fig:mse-problem}When heatmap matching, it is possible for
predictions to worsen despite the pixel-wise MSE improving.}
\end{figure}
Contrast this with the mean-square-error (MSE) loss used in heatmap
matching, which optimizes the pixel-wise similarity between the output
and a synthetic heatmap generated from ground truth locations. The
pixel-wise MSE loss is a much less direct way of optimizing the metric
that we actually care about. During training, the model is completely
ignorant of the fact that coordinate predictions are based solely
on the brightest heatmap pixel. Another way to put this is that although
the Euclidean loss attains its global minimum when the MSE loss is zero,
an optimization step which improves the MSE loss is not guaranteed to
improve our results. \figref{mse-problem} illustrates an
example situation where improving the MSE loss degrades the predictive
accuracy of the model. In this case we see that the output with a
single pixel at the correct location has worse MSE but better location
prediction than an almost perfectly matching heatmap with the brightest
pixel placed incorrectly.
\subsection{Regularization}
There are many different possible heatmaps that will lead to the same
coordinates being output from the DSNT layer. For example, the spread
of the heatmap has no effect on the output—blobs resembling 2D Gaussians
with large variance and small variance can produce identical coordinates.
Although such freedom may be viewed as beneficial, a potential drawback
is that the model does not have strongly supervised pixel-wise gradients
through the heatmap during training. Experimentally, we find that
providing such supervision via regularization can yield marked performance
improvements over vanilla DSNT.
\eqref{combined_loss} shows how regularization is incorporated into
the DSNT loss function. A regularization coefficient, $\lambda$,
is used to set the strength of the regularizer, $\mathcal{L}_{reg}$.
\begin{equation}
\mathcal{L}(\hat{\bm{Z}},\bm{p})=\mathcal{L}_{euc}(\operatorname{DSNT}(\hat{\bm{Z}}),\bm{p})+\lambda\mathcal{L}_{reg}(\hat{\bm{Z}})\label{eq:combined_loss}
\end{equation}
\subsubsection{Variance regularization}
By expanding upon the probabilistic interpretation of the DSNT layer
(\secref{Calculating-the-DSNT}), we can calculate the variance of
coordinates. This is described for $x$-coordinates in \eqref{variance}
($y$-coordinates are handled similarly). The calculated variance
represents the ``spread'' of the blob in the heatmap, which is analogous
to the size of the synthetic 2D Gaussian drawn in the heatmap matching
approach.
\begin{eqnarray}
\operatorname{Var}[\mathrm{c}_{x}] & = & \mathbb{E}[(\mathrm{c}_{x}-\mathbb{E}[\mathrm{c}_{x}])^{2}]\label{eq:variance}\\
& = & \left\langle \hat{\bm{Z}},(\bm{X}-\mu_{x})\odot(\bm{X}-\mu_{x})\right\rangle _{F}\nonumber
\end{eqnarray}
We are now able to introduce a variance regularization term, \eqref{reg-var}.
The ``spread'' of the learned heatmaps is controlled by a hyperparameter,
the target variance, $\sigma_{t}^{2}$. Note that this regularization
term does not directly constrain the specific shape of learned heatmaps.
\begin{equation}
\mathcal{L}_{var}(\hat{\bm{Z}})=(\operatorname{Var}[\mathrm{c}_{x}]-\sigma_{t}^{2})^{2}+(\operatorname{Var}[\mathrm{c}_{y}]-\sigma_{t}^{2})^{2}\label{eq:reg-var}
\end{equation}
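A sketch of this term, reusing the coordinate grids $\bm{X}$ and
$\bm{Y}$ and the mean $\bm{\mu}$ from the DSNT sketch above. Note that
$\sigma_{t}$ must here be expressed in the same normalized units as
the coordinate grids; the reduction over batch and joints is our own
choice.
\begin{verbatim}
def variance_reg(heatmaps, X, Y, mu, sigma_t):
    # heatmaps: (batch, joints, m, n); mu: (batch, joints, 2).
    mu_x = mu[..., 0, None, None]
    mu_y = mu[..., 1, None, None]
    var_x = (heatmaps * (X - mu_x) ** 2).sum(dim=(-2, -1))
    var_y = (heatmaps * (Y - mu_y) ** 2).sum(dim=(-2, -1))
    # Penalize deviation of each axis variance from the target
    # variance sigma_t ** 2.
    return ((var_x - sigma_t ** 2) ** 2
            + (var_y - sigma_t ** 2) ** 2).mean()
\end{verbatim}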
\subsubsection{Distribution regularization}
Alternatively, we can impose even stricter regularization on the appearance
of the heatmap to directly encourage a certain shape. More specifically,
to force the heatmap to resemble a spherical Gaussian, we can minimize
the divergence between the generated heatmap and an appropriate target
normal distribution. \eqref{reg-divergence} defines the distribution
regularization term, where $D(\cdot||\cdot)$ is a divergence measure
(\eg Jensen-Shannon divergence).
\begin{equation}
\mathcal{L}_{D}(\hat{\bm{Z}},\bm{p})=D(p(\bm{\mathrm{c}})||\mathcal{N}(\bm{p},\sigma_{t}^{2}\bm{I}_{2}))\label{eq:reg-divergence}
\end{equation}
Adding a regularization term of this form is similar to incorporating
the usual heatmap matching objective into the DSNT loss function.
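A sketch of the Jensen-Shannon variant for a single heatmap follows.
The target rendering, normalization, and numerical-stability constant
are our own illustrative choices.
\begin{verbatim}
import torch

def gaussian_target(X, Y, p, sigma_t):
    # Spherical Gaussian centered on the ground truth p = (p_x, p_y),
    # L1-normalized so it is a valid probability mass function.
    g = torch.exp(-((X - p[0]) ** 2 + (Y - p[1]) ** 2)
                  / (2 * sigma_t ** 2))
    return g / g.sum()

def js_reg(hm, target, eps=1e-24):
    # Jensen-Shannon divergence between the heatmap and the target,
    # built from two KL terms against the mixture distribution.
    mid = 0.5 * (hm + target)
    kl = lambda a, b: (a * torch.log((a + eps) / (b + eps))).sum()
    return 0.5 * kl(hm, mid) + 0.5 * kl(target, mid)
\end{verbatim}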
\subsubsection*{Selecting the best regularization}
\begin{table}
\centering{}\caption{\label{tbl:reg_results}Pose estimation results for different regularization
terms, using a ResNet-34@28px model.}
\begin{tabular}{llcc}
\toprule
\multirow{2}{*}{Regularization} & \multirow{2}{*}{$\lambda$} & \multicolumn{2}{c}{Validation PCKh}\tabularnewline
& & $\sigma_{t}=1$ & $\sigma_{t}=2$\tabularnewline
\midrule
None & N/A & \multicolumn{2}{c}{86.86\%}\tabularnewline
Variance & 100 & 84.58\% & 85.88\%\tabularnewline
Kullback-Leibler & 1 & 84.67\% & 84.15\%\tabularnewline
\textbf{Jensen-Shannon} & \textbf{1} & \textbf{87.59\%} & \textbf{86.71\%}\tabularnewline
\bottomrule
\end{tabular}
\end{table}
\begin{figure}
\begin{centering}
\includegraphics[width=1\linewidth]{build/reg_heatmaps.tikz}
\par\end{centering}
\caption{\label{fig:reg_hm_appearance}Heatmap appearance for models trained
with different regularization terms (red = right wrist, blue = left
wrist).}
\end{figure}
\begin{figure}
\centering{}\includegraphics{build/reg_js_params}
% Plot: PCKh total vs.\ $\sigma_t$ (px) at $\lambda=1.0$ (left), and vs.\ $\lambda$ at $\sigma_t=1.0$ (right).
\vspace{-0.5cm}
\caption{\label{fig:reg_params}Varying the Gaussian size and regularization
strength for JS regularization.}
\end{figure}
In order to determine the best performing regularization term, we
conducted a series of experiments on the MPII human pose dataset with
a ResNet-34@28px model.
Firstly, we compared different options for the regularization function,
$\mathcal{L}_{reg}$: variance regularization, and distribution regularization
with Kullback-Leibler (KL) and Jensen-Shannon (JS) divergences. The
pose estimation results in \tblref{reg_results} indicate that JS
distribution regularization achieves the highest accuracy. The sample
heatmap images shown in \figref{reg_hm_appearance} illustrate how
dramatically the choice of regularization term can change the appearance
of heatmaps. For example, distribution regularization (using either
KL or JS divergence) very effectively encourages the production of
distinctly Gaussian-shaped blobs. In contrast, variance regularization
with $\sigma_{t}=2$ results in an interesting strategy of splitting
the heatmap into four blobs around the joint.
We conducted further experiments to determine the optimal regularization
hyperparameters (\figref{reg_params}). The accuracy of the model
was found to be quite robust with respect to the regularization strength,
$\lambda$ (\eqref{combined_loss}). In terms of the target Gaussian
standard deviation, $\sigma_{t}$, values in the range of half a pixel
to one pixel were found to work well.
\section{Experiments\label{sec:Experimental-Results}}
\begin{figure*}
\begin{minipage}[t]{0.32\linewidth}\begin{center}
\includegraphics{build/outstrat}
% Plot: PCKh total vs.\ heatmap resolution (px) for HM, FC, DSNT, and DSNTr.
\vspace{-0.8cm}
\par\end{center}
\caption{\label{fig:outstrat}Varying output resolution and strategy for ResNet-34
models.}
\end{minipage}\hfill{}\begin{minipage}[t]{0.32\linewidth}\begin{center}
\includegraphics{build/depth}
% Plot: PCKh total vs.\ depth (ResNet-$x$); series: 14, 28, and 56 px heatmap resolution.
\vspace{-0.8cm}
\par\end{center}
\caption{\label{fig:depth}Varying ResNet \citep{he2016resnet} depth and heatmap
resolution for DSNTr.}
\end{minipage}\hfill{}\begin{minipage}[t]{0.32\linewidth}\begin{center}
\includegraphics{build/hourglass}
% Plot: PCKh total vs.\ stack count for HM, DSNT, and DSNTr.
\vspace{-0.8cm}
\par\end{center}
\caption{\label{fig:hourglass}Varying output strategy and stack count for
hourglass \citep{newell2016stacked} models.}
\end{minipage}
\end{figure*}
\subsection{Model base}
We conducted experiments using two different fully convolutional model
architectures for the CNN portion of the coordinate regression network
(see \figref{arch_comparison}).
\begin{description}
\item [{ResNet}] The ResNet architecture \citep{he2016resnet} is well-known
for performing extremely well in classification tasks. We converted
ImageNet-pretrained ResNet models into fully convolutional networks
(FCNs) by removing the final fully connected classification layer.
Such models produce $7\times7$ px spatial heatmap outputs. However,
we were able to adjust the heatmap resolution of the FCN using dilated
convolutions, as proposed by \citet{yu2016dilated}. More specifically,
we change the convolution stride from 2 to 1 in one or more downsampling
stages, then use dilated convolutions in subsequent layers to preserve
the receptive field size. For each downsampling stage modified in
this way, the heatmap resolution increases by a factor of two (a
minimal conversion sketch follows this list).
\item [{Stacked hourglass}] The stacked hourglass architecture \citep{newell2016stacked}
is currently state-of-the-art for human pose estimation \citep{yang2017learning,chen2017advposenet,chou2017self,chu2017multi}.
The heatmap resolution of this architecture is $64\times64$ px.
\end{description}
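As a concrete illustration, the sketch below converts a pretrained
torchvision ResNet-50 into an FCN producing $28\times28$ px heatmaps.
The \texttt{replace\_stride\_with\_dilation} flag is one way to perform
the stride-to-dilation substitution described above (torchvision's
basic blocks used by ResNet-18/34 do not support dilation, so those
variants require a custom substitution); the final $1\times1$
convolution emitting one heatmap per joint is our own addition, and
API details vary across torchvision versions.
\begin{verbatim}
import torch.nn as nn
import torchvision.models as models

# Dilate the last two downsampling stages: output stride 32 -> 8, so
# a 224x224 input yields 28x28 heatmaps instead of 7x7.
backbone = models.resnet50(
    pretrained=True,
    replace_stride_with_dilation=[False, True, True])
fcn = nn.Sequential(
    *list(backbone.children())[:-2],     # drop global avgpool and fc
    nn.Conv2d(2048, 16, kernel_size=1),  # one heatmap per MPII joint
)
\end{verbatim}
A convenient property of this conversion is that the pretrained
convolution kernels are reused unchanged, since only strides and
dilations are altered.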
\subsection{Output strategy}
\begin{description}
\item [{Heatmap matching (HM)}] We follow the specific technique used
by \citet{newell2016stacked}. MSE pixel-wise loss is applied directly
to the output of the FCN. During inference, numeric coordinates are
calculated based on the brightest pixel of the heatmap, with small
adjustments to the location made based on the brightness of adjacent
pixels.
\item [{Fully connected (FC)}] A softmax heatmap activation is applied
to the output of the FCN, followed by a fully connected layer which
produces numerical coordinates. The model is trained with Euclidean
loss.
\item [{DSNT}] Same as fully connected, but with our DSNT layer instead
of the fully connected layer.
\item [{DSNT with regularization (DSNTr)}] Same as DSNT, but with the
inclusion of a regularization term in the loss function. The method
of regularization we selected was Jensen-Shannon divergence with $\sigma_{t}=1$
and $\lambda=1$, which empirically performed best.
\end{description}
\subsection{Dataset and training}
We use the MPII human pose dataset \citep{andriluka20142d} to evaluate
the effectiveness of our DSNT layer on an important real-world task.
The dataset contains images of 28,883 people with up to 16 joint annotations
each, along with approximate person location and scale labels to facilitate
the cropping of single-person poses.
Samples from the dataset were augmented during training time using
the same scheme as \citet{newell2016stacked}, which consists of horizontal
flips, 75\%-125\% scaling, $\pm30$ degree rotation, and 60\%-140\%
channel-wise pixel value scaling. Since the test set labels are not
public, we evaluate on the fixed validation set used in \citep{tompson2015efficient}
and \citep{newell2016stacked}.
The models were optimized with RMSProp \citep{tieleman2012rmsprop}
using an initial learning rate of $2.5\times10^{-4}$. Each model
was trained for 120 epochs, with the learning rate reduced by a factor
of 10 at epochs 60 and 90 (an epoch is one complete pass over the
training set). Training was completed on single Maxwell-architecture
NVIDIA Titan X GPUs.
Our ResNet-based networks were trained using mini-batches of 32 samples
each, with the exception of highly memory-intensive configurations
(\eg ResNet-101@28px). The stacked hourglass models were trained
using mini-batches of 6 samples each. Our implementation code for
DSNT, written in PyTorch, is available online\footnote{https://github.com/anibali/dsntnn}.
\subsection{Results}
\begin{table*}
\begin{centering}
\caption{\label{tbl:pose_results}MPII human pose test set PCKh accuracies
and inference-time efficiency results.}
\bgroup\tabcolsep=0.1cm\centerline{ \begin{tabular}{l>{\raggedright}p{0.8cm}>{\raggedright}p{0.8cm}>{\raggedright}p{0.8cm}>{\raggedright}p{0.8cm}>{\raggedright}p{0.8cm}>{\raggedright}p{0.8cm}>{\raggedright}p{0.8cm}lll}
\toprule
Method & Head & Shoul. & Elbow & Wrist & Hip & Knee & Ankle & Total & Time (ms){*} & Memory{*}\tabularnewline
\midrule
{\small{}\citet{tompson2015efficient}} & {\small{}96.1} & {\small{}91.9} & {\small{}83.9} & {\small{}77.8} & {\small{}80.9} & {\small{}72.3} & {\small{}64.8} & {\small{}82.0} & {\small{}-} & {\small{}-}\tabularnewline
{\small{}\citet{rafi2016efficient}} & {\small{}97.2} & {\small{}93.9} & {\small{}86.4} & {\small{}81.3} & {\small{}86.8} & {\small{}80.6} & {\small{}73.4} & {\small{}86.3} & {\small{}27.6$\pm$0.1} & {\small{}2768 MiB}\tabularnewline
{\small{}\citet{wei2016convolutional}} & {\small{}97.8} & {\small{}95.0} & {\small{}88.7} & {\small{}84.0} & {\small{}88.4} & {\small{}82.8} & {\small{}79.4} & {\small{}88.5} & {\small{}106.8$\pm$0.2} & {\small{}5832 MiB}\tabularnewline
{\small{}Bulat et al. \citep{bulat2016human}} & {\small{}97.9} & {\small{}95.1} & {\small{}89.9} & {\small{}85.3} & {\small{}89.4} & {\small{}85.7} & {\small{}81.7} & {\small{}89.7} & {\small{}41.3$\pm$0.2} & {\small{}1432 MiB}\tabularnewline
{\small{}\citet{newell2016stacked}} & {\small{}98.2} & {\small{}96.3} & {\small{}91.2} & {\small{}87.1} & {\small{}90.1} & {\small{}87.4} & {\small{}83.6} & {\small{}90.9} & {\small{}60.5$\pm$0.1} & {\small{}1229 MiB}\tabularnewline
{\small{}\citet{yang2017learning}} & \textbf{\small{}98.5} & \textbf{\small{}96.7} & \textbf{\small{}92.5} & \textbf{\small{}88.7} & \textbf{\small{}91.1} & \textbf{\small{}88.6} & \textbf{\small{}86.0} & \textbf{\small{}92.0} & {\small{}194.6$\pm$76.8} & {\small{}1476 MiB}\tabularnewline
\midrule
{\small{}DSNTr ResNet-50@28px} & {\small{}97.8} & {\small{}96.0} & {\small{}90.0} & {\small{}84.3} & {\small{}89.8} & {\small{}85.2} & {\small{}79.7} & {\small{}89.5} & \textbf{\small{}18.6$\pm$0.5} & \textbf{\small{}636 MiB}\tabularnewline
\bottomrule
\end{tabular}}\egroup
\par\end{centering}
\centering{}{\small{}\smallskip{}
{*} Any test time data augmentations (horizontal flips, multi-scale)
were disabled for time and memory measurements.}{\small \par}
\end{table*}
The PCKh performance metric is the percentage of joints with predicted
locations that are no further than half of the head segment length
from the ground truth. As per the evaluation code provided by MPII,
we exclude the pelvis and thorax joints from the average total PCKh.
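The core of this metric can be sketched as follows. Tensor shapes and
the visibility mask handling are our assumptions; the official
evaluation code additionally applies the joint exclusions noted above.
\begin{verbatim}
import torch

def pckh(pred, gt, head_len, visible):
    # pred, gt: (N, joints, 2); head_len: (N, 1);
    # visible: (N, joints) boolean mask of annotated joints.
    dists = torch.norm(pred - gt, dim=-1)           # (N, joints)
    correct = (dists <= 0.5 * head_len) & visible   # within half head
    return correct.sum().item() / visible.sum().item()
\end{verbatim}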
In order to compare the different approaches to coordinate regression,
we conducted a series of experiments with a ResNet-34-based network
(\figref{outstrat}). Heatmap matching achieved a very low PCKh
of 44\% at $7\times7$ px heatmap resolution, which falls outside
the bounds of the figure. As the resolution increases, the performance
of heatmap matching improves relative to the other approaches, which
is evidence of the quantization effects inherent to calculating coordinates
via a pixel-wise argmax. This demonstrates that heatmap matching is
not suitable for models which generate low-resolution heatmaps, whereas
DSNT is largely robust to heatmap size. At higher resolutions, the
fully connected approach performs worst. Our DSNT approach exhibits
good performance across all resolutions, even $7\times7$ px, because
the precision of DSNT's predictions is not tied to the pixel size of
the heatmap.
Regularization improves DSNT accuracy in all cases except the lowest
resolution, where boundary effects come into play (\ie a 1 pixel
standard deviation Gaussian drawn in a $7\times7$ px image is likely
to clip heavily, which adversely affects the DSNT calculation). Fully
connected output was found to be worse than heatmap matching at higher
resolutions, and worse than DSNT in general.
We conducted further experiments with ResNet-based \citep{he2016resnet}
models to evaluate the impact that depth has on performance. The results
in \figref{depth} suggest that higher heatmap resolution is beneficial
at any depth. However, the trade-off is that increasing resolution
with dilations has a large impact on memory consumption and computational
cost. For this reason, we could not train ResNet-101@56px. PCKh was
found to increase significantly with depth up until ResNet-50, with
only a slight gain observed when increasing the depth even further
to ResNet-101.
\begin{figure}
\begin{centering}
\clearpage{}\begingroup
\makeatletter
\providecommand\color[2][]{ \GenericError{(gnuplot) \space\space\space\@spaces}{ Package color not loaded in conjunction with
terminal option `colourtext' }{See the gnuplot documentation for explanation. }{Either use 'blacktext' in gnuplot or load the package
color.sty in LaTeX.} \renewcommand\color[2][]{} } \providecommand\includegraphics[2][]{ \GenericError{(gnuplot) \space\space\space\@spaces}{ Package graphicx or graphics not loaded }{See the gnuplot documentation for explanation. }{The gnuplot epslatex terminal needs graphicx.sty or graphics.sty.} \renewcommand\includegraphics[2][]{} } \providecommand\rotatebox[2]{#2} \@ifundefined{ifGPcolor}{ \newif\ifGPcolor
\GPcolortrue
}{} \@ifundefined{ifGPblacktext}{ \newif\ifGPblacktext
\GPblacktexttrue
}{} \let\gplgaddtomacro\g@addto@macro
\gdef\gplbacktext{} \gdef\gplfronttext{} \makeatother
\ifGPblacktext
\def\colorrgb#1{} \def\colorgray#1{} \else
\ifGPcolor
\fi
\fi
% gnuplot plot body removed; axes: PCKh total (84\%--89\%) vs. inference time (0--80 ms).
% Series: stacked hourglass HM (HG1, HG2, HG4, HG8) and ResNet-50 DSNTr (14px, 28px, 56px heatmaps).
\includegraphics{build/time}
\endgroup
\vspace{-0.5cm}
\par\end{centering}
\caption{\label{fig:time}Validation accuracy vs inference time, closer to
the top-left is better. Labels show heatmap resolution (ResNet models)
or stack count (hourglass models).}
\end{figure}
In addition to ResNet, we also trained stacked hourglass networks
\citep{newell2016stacked}. Even though the stacked hourglass architecture
was developed using heatmap matching, we found that models trained
using DSNT with regularization achieved consistently better results
(\figref{hourglass}). Analysis of misclassified examples revealed
that DSNT was less accurate for predicting edge-case joints that lie
very close to the image boundary. This is expected: DSNT computes
coordinates as an expectation over the normalized heatmap, so
locations at the very edge of the image are difficult to represent.
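For concreteness, the following is a minimal NumPy sketch of the
coordinate computation performed by a DSNT layer for a single joint
(an illustration of the principle, not our exact implementation):
\begin{verbatim}
import numpy as np

def dsnt(heatmap):
    """Differentiable spatial-to-numerical transform (sketch).

    Softmax-normalizes an unnormalized single-joint heatmap, then
    returns the expected (x, y) coordinate under that distribution,
    with each axis mapped to the range [-1, 1]."""
    h, w = heatmap.shape
    z = np.exp(heatmap - heatmap.max())
    p = z / z.sum()
    xs = (2.0 * np.arange(w) + 1.0) / w - 1.0   # pixel-centre x coords
    ys = (2.0 * np.arange(h) + 1.0) / h - 1.0   # pixel-centre y coords
    x = (p.sum(axis=0) * xs).sum()              # E[x]
    y = (p.sum(axis=1) * ys).sum()              # E[y]
    return x, y
\end{verbatim}
In a network this operation is applied per joint to the final
convolutional feature maps, keeping the model fully convolutional.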
\figref{time} directly compares stacked hourglass networks trained
with heatmap matching and our ResNet-based networks trained with DSNT
and regularization. Although the 8-stack hourglass network achieved
the highest overall accuracy, the ResNet-based models were much
faster, at only a modest cost in accuracy.
For instance, ResNet-50@28px has 8\% fewer parameters, requires less
than half of the memory during training, and is over $3\times$ faster
at inference than HG8, whilst still achieving \textasciitilde{}99\%
of the PCKh score.
\begin{figure}
\begin{centering}
% gnuplot plot body removed; axes: PCKh total (20\%--80\%) vs. number of training samples (1024--8192).
% Series: HM, FC, DSNTr.
\includegraphics{build/nsamples}
\vspace{-0.5cm}
\par\end{centering}
\caption{\label{fig:nsamples}Varying number of training samples (no augmentation)
for ResNet-34@28px models.}
\end{figure}
Spatial generalization was tested by training models with a restricted
training set size and no data augmentation. \figref{nsamples} shows
that the fully connected output exhibits very poor spatial generalization,
achieving a PCKh score of only 22\% when trained on 1024
samples. On the other hand, both DSNT and heatmap matching perform
much better with fewer samples, indicating better generalization.
Finally, we evaluated our ResNet-50@28px DSNTr model on the test set.
The results in \tblref{pose_results} show that our solution, using
a much smaller and simpler model (ResNet-50), was able to achieve
accuracy competitive with more complex models. A consequence of using
a smaller model is that ResNet-50@28px infers significantly faster
and uses less memory than all other methods shown in the table. Note
that we determined the running time and memory usage of the other
methods by downloading pretrained models.
\section{Conclusion}
There are multiple possible approaches to using CNNs for numerical
coordinate regression tasks, each of which affects the behavior of
the model in different ways—a fully connected output layer reduces
spatial generalization, and heatmap matching introduces issues with
differentiability and quantization. In contrast, our proposed DSNT
layer can be used to adapt fully convolutional networks for coordinate
regression without introducing these problems. We have shown that
models built with DSNT can achieve competitive results on real human
pose data without complex task-specific architectures, forming a strong
baseline. Such models also offer a better accuracy to inference speed
trade-off when compared with stacked hourglass models.
Interesting directions for future work are to integrate DSNT with
complex pose estimation approaches (\eg adversarial training \citep{chou2017self,chen2017advposenet}),
or to use DSNT as an internal layer for models where intermediate
coordinate prediction is required (\eg Spatial Transformer Networks
\citep{jaderberg2015spatial}).
\bibliographystyle{IEEEtranN}
\phantomsection\addcontentsline{toc}{section}{\refname}
\section{Introduction}
\label{sec:intro}
Photons are able to probe any or all stages of
a relativistic heavy-ion collision\cite{es80} since their
mean free paths are much larger than the transverse size of
the hot and dense region; photons therefore escape after production
without rescattering. Massive photons, or dileptons, share
this property. Therefore, photons and dileptons
are thought to provide exciting means of probing even the
central region cleanly\cite{elf76,es78,kk81,fh82,bs83,rh85,gs86}.
Observation of quark-gluon plasma through
the use of photons is contingent, among other things, on the
production rate, or ultimately the yield, being distinguishable
from that of the hadron gas. Yet, current understanding is that
hadron gas and quark-gluon plasma produce energetic
photons nearly equally frequently at fixed temperature
$\approx 200$ MeV\cite{jkplds}. A study of low-mass
dilepton spectra, in which the rates were integrated over the
space-time history, reached the same conclusion: the two
phases produce massive photons more or less equally\cite{khcgve}.
If any important contribution, e.g. from strange
particles or heavy mesons, has so far been ignored, it would be very
useful to identify it.
Within the hadron gas, photon production originates in many
ways, since several hadronic species may be produced directly
or pair-produced thermally as the temperature
rises toward the critical value $T_{c}$, somewhere
in the range 150--200 MeV, where the crossover to the deconfined and
chirally symmetric phase is presumed to occur. Produced photons may
be categorized according to their energy as follows.
For $E_{\gamma}$ below 0.5 GeV bremsstrahlung from
charged particle scattering and decay will be important.
Also, reactions in which a large mass hadron (compared to the
combined mass of the initial hadrons) is in the final state
such as $\pi\pi\rightarrow\rho\gamma$ are endothermic and tend to
give large contributions at low photon energies.
For higher energies direct decays give a substantial contribution
as does the process $\pi\rho\rightarrow \pi\gamma$.
It has recently been shown that inclusion of the $a_{1}$ resonance
in $\pi\rho\rightarrow \pi\gamma$ scattering results in a
large rate of photon production, larger than all others combined\cite{shuryak}.
When interference effects are treated properly the effect of
the $a_{1}$ is changed somewhat but remains important\cite{song93}.
Vector meson decays are also important: the $\omega$ has a
relatively large decay rate into a neutral pion and photon. Folding
in a Bose-Einstein distribution for the $\omega$ results in
a significant thermal production rate~\cite{jkplds}.
Kaon abundances are 2--3 times smaller than those of pions
at temperatures near $T_{c}$, so kaons will frequently scatter with pions and
with rhos. With the pions, $K^*$(892) resonances can be formed and
with the rhos, scattering through $K_{1}(1270)$ is probable.
The $K\rho$ decay channel of $K_{1}(1270)$ is exceptional in that
it is only 71 MeV/c
center-of-mass momentum above threshold and therefore has
a cross section (proportional to $1/{\bbox{p}}_{\rm c.m.}^{2}$) which is
relatively large. This resonance has recently been shown to affect
kaon mean free paths in hot matter by tens of percents\cite{khsp}. It
is therefore interesting to study reactions involving the $K_{1}$ and
their contributions to photon production.
The large family of non-strange and strange mesons will
comprise the hot thermalized system. At temperatures near 100 MeV or so,
the system will be populated mostly by pions since
Boltzmann weightings strongly suppress the heavier ones. As the temperature
rises to something near $T_{c}$, the situation can become different
since the suppression is effectively weaker.
Using zero chemical potentials in Boltzmann distributions at a temperature
of 200 MeV, there are $2.9\times 10^{-1}$ $\pi$\, s,
$1.5\times 10^{-1}$ $\rho$\,s and $1.3\times 10^{-1}$ $K^*$\,s
per fm$^{3}$, while there are $2.9\times 10^{-2}$ $K_{1}(1270)$\,s and
$2.6\times 10^{-2}$ $b_{1}(1235)$\,s per fm$^{3}$\cite{khsp}.
The latter is quite important because it decays predominantly
into a $\pi\omega$ combination and has a full width of 155 $\pm 8$ MeV.
The $\omega$ might be on shell in which case it just contributes to the
equilibrium number of omegas. It might also be off shell and
will subsequently decay (off shell) into $\pi^{0}\gamma$. The product
of the density times overall decay seems sizeable, so folding in the
dynamics, kinematics and thermally distributed phase space could
result in a significant thermal production rate.
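For reference, these abundances follow from the standard Boltzmann
integral (degeneracy $g$, e.g. $g=3$ for pions and $g=9$ for rhos):
\begin{equation}
\bar{n} = g\int {d^{3}p\over (2\pi)^{3}}\, e^{-E/T}
= {g\over 2\pi^{2}}\, m^{2}\, T\, K_{2}(m/T),
\end{equation}
which for pions at $T$ = 200 MeV indeed gives
$\approx 2.9\times 10^{-1}$ per fm$^{3}$.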
The theoretical framework for photon production rates through
radiative decay from a thermalized system is presented
in the next section, together with a discussion of how the
hadronic and electromagnetic transitions depend upon the
specific model interactions.
They are modelled by effective Lagrangians describing
both $AV\phi$ and $VV^{\prime}\phi$
three-point vertices, where $A$ is an axial-vector field, $V$ and
$V^{\prime}$ are vector fields and $\phi$ is a pseudoscalar field.
Then in section \ref{sec:results} results are presented for
one- and two-hadron radiative decays for a temperature 200 MeV.
The channel $b_{1}\rightarrow \pi\pi^{0}\gamma$ is found to be important,
and so its temperature dependence is studied further. Section
\ref{sec:broadening} contains an estimate for the collisional broadening
of $\omega$ in the above mentioned $b_{1}$ radiative
decay. Modified photon production rates follow.
Finally, section \ref{sec:conclusion} contains
concluding remarks and possible ramifications of these results.
\section{Thermal radiative decay}
\label{sec:general_theory}
The rate for photon production from a thermalized system at temperature
$T$ whose size is small relative to the photon mean free path is
proportional to the imaginary part of the retarded photon
self-energy $\Pi^{\mu\nu}_{R}$ and a thermal weighting
as\cite{weldon83,lmtt85,cgjk91}
\begin{equation}
E_{\gamma}{dR\over d^{3}p_{\gamma}} = {-2g^{\mu\nu}\over (2\pi)^{3}}
{\rm Im}\Pi^{R}_{\mu\nu} {1\over e^{E_{\gamma}/T}-1}.
\end{equation}
For temperatures $100< T < T_{c}$, the largest contributions to
the self-energy will
be one- and two-loop diagrams consisting of $\pi$\,s and $\rho$\,s.
Near $T_{c}$, contributions
from diagrams in which heavier non-strange and strange particles
occupy the loops also become important.
Figure~\ref{fig:self-energy}a shows one-loop $\pi a_{1}$ and $K K_{1}$
contributions, and \ref{fig:self-energy}b shows a two-loop $\pi \omega$
contribution [with $\omega$ further splitting to $b_{1}\pi$]. If
the imaginary part of any of these diagrams (obtained by cutting them)
gives a calculated width in vacuum that is relatively sizeable, then
even being less abundant than pions or rhos in the hot system, it is
interesting to ask about their contributions
to photon production. Cutting the diagram in Fig.~\ref{fig:self-energy}a
results in single-hadron radiative decays of either $a_{1}$ or
$K_{1}$ axial-vectors. They are shown in Fig.~\ref{fig:decays}a.
The diagram in Fig.~\ref{fig:self-energy}b is of two-loop order and therefore
its cut diagram has four external lines. Kinematics allows two
possibilities for the photon being in the final state. First, there could
be $\pi b_{1}\rightarrow \omega \rightarrow \pi^{0}\gamma$ scattering.
Secondly, a two-pion radiative decay
$b_{1}\rightarrow \omega\pi\rightarrow \pi\pi^{0}\gamma$ can proceed.
The production rate from the first process turns out
to be rather unimportant as compared with the second.
Its diagram is in Fig.~\ref{fig:decays}b.
Elementary radiative hadron decays,
$h_{a} \rightarrow \sum_{b} h_{b} + \gamma$, can proceed
in two ways. One of the incoming or outgoing particles
can emit a photon (bremsstrahlung) or secondly, the photon can
be emitted directly. Direct emission reflects the internal structure
and has vanishing phase space for vanishing photon energy.
In a thermal
calculation the decay rate is important but so too
is the number density of the decaying hadron. Besides
$\rho\rightarrow \pi\pi\gamma$, many-hadron final states have previously
been neglected, since they were assumed to be
small. But as will soon be seen, at least one is not.
The thermal rate for radiative hadronic decay
is
\begin{eqnarray}
E_{\gamma}{dR\over d^{3}p_{\gamma}} &=& {\cal N}\int
{d^{3}p_{a}\over (2\pi)^{3}2E_{a}}
f(E_{a})
|{\cal M}|^{2} (2\pi)^{4} \delta^{4}(p_{a}-p_{1}-\ldots -p_{n}-p_{\gamma})
\nonumber\\
& & \times\left\lbrace\prod\limits_{i=1}^{n}{d^{3}p_{i}\over (2\pi)^{3}2E_{i}}
[1\pm f(E_{i})]\right\rbrace {1\over (2\pi)^{3}2},
\label{eq:decayrate}
\end{eqnarray}
where ${\cal N}$ is the degeneracy and $f(E)$ is either a Bose-Einstein
or Fermi-Dirac distribution depending on the species. Bose
enhancement(Pauli suppression) is enforced by choosing the $+(-)$ sign
in the square-bracketed term.
Identifying the species of hadrons and their interaction with
other hadrons into which they might decay, and specifying their
interaction with the electromagnetic field completely defines the
problem. What remains is to carry out the necessary four-vector
algebra and phase space integration.
The model Lagrangians used are the
following. First for the $AV\phi$ interaction\cite{shuryak}
\begin{equation}
L_{AV\phi} = g\, A^{\mu}\left[(p_{V}\cdot p_{\phi})g_{\mu\nu}
-p_{V\, \mu}p_{\phi \, \nu}\right] V^{\nu} \phi ,
\label{eq:lavphi}
\end{equation}
and then for the $VV^{\prime}\phi$ vertex\cite{um88}
\begin{eqnarray}
L_{VV^{\prime}\phi} &=& g^{\prime}\,
\epsilon_{\mu\nu\alpha\beta}\, p_{V}^{\mu}\, V^{\nu}
\, p_{V^{\prime}}^{\alpha}\, V^{\prime\, \beta}\, \phi
\label{eq:lvvphi}
\end{eqnarray}
where $\epsilon_{\mu\nu\alpha\beta}$ is the totally antisymmetric
unit tensor. Modelling the interactions depicted by vertices in
the diagrams of Figs.~\ref{fig:decays}a and b this way, their
individual decay widths are
calculated to be
\begin{equation}
\Gamma_{\omega\rightarrow \pi^{0}\gamma} =
{g_{\omega\pi^{0}\gamma}^{2} \over 12\pi m_{\omega}^{2}} |{\bbox{p}}\/|
\left(p_{\pi^{0}}\cdot p_{\gamma}\right)^{2}
\end{equation}
\begin{eqnarray}
\Gamma_{b_{1}\rightarrow \pi\omega} &=&
{g_{b_{1}\pi\omega}^{2} \over 24\pi m_{b_{1}}^{2}} |{\bbox{p}}\/|
\left(2(p_{\pi}\cdot p_{\omega})^{2}+m_{\omega}^{2}(m_{\pi}^{2}
+{\bbox{p}}^{2})\right) \\
\Gamma_{a_{1}\rightarrow \pi\rho} &=&
{g_{a_{1}\pi\rho}^{2} \over 24\pi m_{a_{1}}^{2}} |{\bbox{p}}\/|
\left(2(p_{\pi}\cdot p_{\rho})^{2}+m_{\rho}^{2}(m_{\pi}^{2}
+{\bbox{p}}^{2})\right)
\end{eqnarray}
and
\begin{equation}
\Gamma_{K_{1}\rightarrow \rho K} = {g_{K_{1}\rho K}^{2} \over 24\pi
m_{K_{1}}^{2}} |{\bbox{p}}\/| \left( 2(p_{K}\cdot p_{\rho})^{2}+
m_{\rho}^{2}(m_{K}^{2}+{\bbox{p}}^{2})\right),
\end{equation}
where ${\bbox{p}}$ is the center-of-mass momentum of the decay products.
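For two-body decays this momentum is fixed by kinematics alone; with
parent mass $M$ and product masses $m_{1}$ and $m_{2}$,
\begin{equation}
|{\bbox{p}}\/| = {1\over 2M}\sqrt{\left[M^{2}-(m_{1}+m_{2})^{2}\right]
\left[M^{2}-(m_{1}-m_{2})^{2}\right]}.
\end{equation}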
These determine the coupling constants
to be $g_{\omega \pi^{0}\gamma}=0.7$,
$g_{b_{1} \pi\omega}=10.3$,
$g_{a_{1} \pi\rho}=14.8$ and
$g_{K_{1}\rho K} = 12.0$ GeV$^{-1}$ in order that the
partial decay widths are 0.7, 155, 400 and 37.8 MeV respectively,
so they match results from the Review of Particle
Properties\cite{pdg}.
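As a consistency check, the following short script (a sketch in natural
units; the masses are nominal values, not fit parameters) reproduces the
quoted $\omega\rightarrow\pi^{0}\gamma$ partial width from the coupling
above:
\begin{verbatim}
import numpy as np

MPI0, MOMEGA = 134.98, 781.9   # masses in MeV (nominal values)

def width_omega_pi0_gamma(g):
    """Two-body radiative width from the VV'phi vertex, using
    |p| = (m_w^2 - m_pi^2)/(2 m_w) and
    p_pi . p_gamma = (m_w^2 - m_pi^2)/2."""
    p = (MOMEGA**2 - MPI0**2) / (2.0 * MOMEGA)
    pdot = (MOMEGA**2 - MPI0**2) / 2.0
    return g**2 / (12.0 * np.pi * MOMEGA**2) * p * pdot**2

g = 0.7e-3                        # 0.7 GeV^-1 expressed in MeV^-1
print(width_omega_pi0_gamma(g))   # ~0.7 MeV, as quoted above
\end{verbatim}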
An estimate for coupling the axial-vectors $a_{1}$ and $K_{1}$
to the photon field (and a pion or kaon respectively) is now needed.
Decay of the $b_{1}$ into a pion
and photon is not considered since it results in a small rate.
Here is where a model is employed since no
experimental information on these decays
exist. By vector-meson dominance, the coupling for
these are
\begin{eqnarray}
g_{a_{1}\pi\gamma} &=& (e/f_{\rho})g_{a_{1}\rho\pi}\nonumber\\
g_{K_{1}K\gamma} &=& (e/f_{\rho})g_{K_{1}\rho K}
\end{eqnarray}
where $f_{\rho}^{2}/4\pi$ = 2.9 is the
$\rho \pi\pi$ coupling. Numerically
these coupling constants become $g_{a_{1}\gamma\pi}$ = 0.74 GeV$^{-1}$ and
$g_{K_{1}\gamma K}$ = 0.60 GeV$^{-1}$.
Using the calculated expressions
\begin{eqnarray}
\Gamma_{a_{1}\rightarrow \gamma \pi} &=& {g_{a_{1}\gamma\pi}^{2} \over 12\pi
m_{a_{1}}^{2}} |{\bbox{p}}\/| \left(p_{\gamma}\cdot p_{\pi}\right)^{2}\\
\Gamma_{K_{1}\rightarrow \gamma K} &=& {g_{K_{1}\gamma K}^{2} \over 12\pi
m_{K_{1}}^{2}} |{\bbox{p}}\/| \left(p_{\gamma}\cdot p_{K}\right)^{2},
\end{eqnarray}
electromagnetic decay widths of $\Gamma_{a_{1}\rightarrow \gamma \pi}$ =1.4 MeV
and $\Gamma_{K_{1}\rightarrow \gamma K}$ = 1.5 MeV are obtained.
Matrix elements for photon production corresponding to the processes
in Fig.~\ref{fig:decays}a for $a_{1}$ and $K_{1}$ decay are
\begin{eqnarray}
{\cal M} &=& g_{a_{1}\gamma \pi}\, \epsilon_{a_{1}}^{\mu}\,
\left[(p_{\pi}\cdot p_{\gamma})g_{\mu\nu} - p_{\gamma \, \mu}p_{\pi \, \nu}
\right] \, \epsilon_{\gamma}^{\nu}
\end{eqnarray}
and
\begin{eqnarray}
{\cal M} &=& g_{K_{1}\gamma K}\, \epsilon_{K_{1}}^{\mu}\,
\left[(p_{K}\cdot p_{\gamma})g_{\mu\nu} - p_{\gamma \, \mu}p_{K \, \nu}
\right] \, \epsilon_{\gamma}^{\nu},
\end{eqnarray}
where the $\epsilon\,$s are polarization vectors for the respective
(vector) fields. Each one depends therefore on a spin index which
is not explicitly written. The two-pion radiative decay depicted
in Fig.~\ref{fig:decays}b has a matrix element of
\begin{eqnarray}
{\cal M} &=& g_{b_{1}\pi\omega}\, g_{\omega\pi^{0}\gamma}\,
\epsilon_{b_{1}}^{\mu}\,\left[(p_{\pi}\cdot p_{\omega})g_{\mu\nu}
-p_{\omega \, \mu}p_{\pi \, \nu}\right] D^{\nu\alpha}\,
\epsilon_{\beta\alpha\sigma\lambda}\,
p_{\omega}^{\beta}\, p_{\gamma}^{\sigma}\, \epsilon_{\gamma}^{\lambda}
\label{eq:mb1decay}
\end{eqnarray}
where the pion momenta refer to the pion emitted from the
$b_{1}\omega\pi$ vertex in the diagram of Fig.~\ref{fig:decays}b and
the propagator for the $\omega$ in the same diagram
is
\begin{equation}
D^{\nu\alpha}(l) = (g^{\nu\alpha} - l^{\nu}l^{\alpha}/m_{\omega}^{2})
{1\over l^{2}-m_{\omega}^{2}-im_{\omega}\Gamma_{\omega}}.
\label{eq:omegaprop}
\end{equation}
The width is taken as the vacuum value of $\Gamma_{\omega}$= 8.43 MeV,
although modification due to the presence of matter will be discussed
later. Squaring the matrix elements, contracting all the indices and summing
over spin states is the first step.
Using Eq.~(\ref{eq:decayrate}) the production rate reduces to an
integral over one- and two-particle phase space respectively, for
processes from Figs.~\ref{fig:decays}a and b.
\section{Results}
\label{sec:results}
Thermal photon production rates at 200 MeV temperature
for the processes $a_{1}\rightarrow \pi\gamma$,
$K_{1}\rightarrow K\gamma$ and $b_{1}\rightarrow \pi\pi^{0}\gamma$ are
shown in Fig.~\ref{fig:three}. Results for $a_{1}\rightarrow \pi\pi\gamma$
and $K_{1}\rightarrow K\pi\gamma$ will not be discussed here since they
result in smaller rates. Features common to these processes are
that they turn over and approach zero for $E_{\gamma}\rightarrow 0$
simply due to vanishing phase space in this limit. Unlike processes
such as $\rho\rightarrow \pi\pi\gamma$, these axial-vector {\em direct}
decays cannot proceed without the
photon present. They peak at slightly different photon
energies due to the differences in the axial-vector and
decay particle masses. Their slopes also reflect these differences:
the $K\gamma$ final state shows the largest slope, followed by the
$\pi\pi^{0}\gamma$ and then the $\pi\gamma$ final states. The
most startling feature is the overall magnitude of the
$b_{1}$ decay. For this temperature its peak is seven times larger
than the others. The reason for this is twofold. First
is the relative abundance of $b_{1}$:
there are roughly half as many $b_{1}$\,s as there are
omegas per unit volume. This is more than one might expect,
but isospin degeneracy and the high temperature are responsible.
Secondly, $b_{1}$ decays predominantly into a $\pi\omega$ combination,
which subsequently decays into $\pi\pi^{0}\gamma$. Roughly speaking,
the overall rate for this is
\begin{equation}
{\Gamma_{b_{1}\rightarrow\pi\omega}
\Gamma_{\omega\rightarrow\pi^{0}\gamma} \over
\Gamma_{\omega}^{\rm full}} = 13.2 \ {\rm MeV},
\label{eq:roughrate}
\end{equation}
which is a factor 18 larger than simple $\omega\rightarrow \pi^{0}\gamma$
decay. Multiplying the density of $b_{1}$ mesons times the rate from
Eq.~(\ref{eq:roughrate}) results in a factor 9 more than radiative
$\omega$ decay. When comparing thermal photon production via
$b_{1}\rightarrow \pi\pi^0\gamma$ with the corresponding thermal production
rate for $\omega \rightarrow \pi^{0}\gamma$ presented at $T$ = 200 MeV
in Ref.\cite{jkplds}, this factor of 9 is consistent.
Kinematics and Bose-Einstein distributions complicate matters
as does the phase space integration, but the result can be understood
with the above simple argument.
One-pion radiative decay of real omegas is not included within
the $b_{1}$ decay---they really are separate contributions.
Imagine $b_{1}$ being instead very massive. Thermal
photon production via $b_{1}$ decay would approach zero in the
infinite-mass limit simply due to vanishing equilibrium number density
$\bar{n}_{b_{1}}\rightarrow 0$. Omega radiative
decay, on the other hand, is
not affected by such a change and $\omega$
decay must contribute. Summing both processes does not amount to
double counting.
Radiative decay of $b_{1}$ turns out to be rather large
(for a limited range in photon energy) due in part
to the abundance. The natural question to ask therefore, is about
its rise with increasing temperature; or alternatively, its fall with
decreasing temperature. For this reason
Fig.~\ref{fig:four} is shown at three values of temperature 100, 150
and 200 MeV. Each result is superimposed on the rate of photon production
from $\pi\rho\rightarrow \pi\gamma$ as calculated using an effective
chiral Lagrangian including effects of the $a_{1}$ resonance
coherently\cite{song93}. As the temperature drops, the photon
energy for which the two processes are equal shifts downward.
But the noteworthy feature of dominance of the
two-pion radiative decay of $b_{1}$ for a narrow range of
photons energies basically remains even at temperature 100 MeV.
\section{Collisional broadening of omega}
\label{sec:broadening}
The hadron-gas environment is quite different from free space, and
one can therefore justifiably question the applicability of the
matrix element of Eq.~(\ref{eq:mb1decay}). In particular, the $\omega$
propagator is just taken to be the free space vector propagator which
contains the vacuum width of 8.43 MeV and no shift in pole position.
The vacuum width corresponds to a lifetime of 23 fm/c. Yet, the mean free
path of an $\omega$ in this hot matter is at most a few fermis since
it can scatter with pions to form a $b_{1}$ resonance\cite{khsp3}, so it
will likely rescatter before decaying. To account for this, a
collisional broadened width is computed. Roughly it is $n\sigma v$, where
$n$ is the pion density, $\sigma$ is the $\pi\omega$ cross section,
and $v$ is their relative velocity. If there are 0.3 pions per fm$^{3}$,
if the cross section is 1 fm$^{2}$ and the relative velocity $v/c$ = 0.5, an
extra {\em collisional} width of 30 MeV should be added to the
vacuum width. Rather than use this crude estimate, the
expression
\begin{equation}
\Gamma^{\rm coll}_{\omega}(E_{\omega}) =
\int\, ds \, {d^{3}p_{\pi}\over (2\pi)^{3}}
f(E_{\pi}) \sigma_{\pi\omega}(s) v_{rel} \delta\left(s-(p_{\pi}
+p_{\omega})^{2}\right)
\end{equation}
is used, where
\begin{equation}
v_{rel} = {\sqrt{(p_{\pi}\cdot p_{\omega})^{2}-m_{\pi}^{2}m_{\omega}^{2}}
\over E_{\pi}E_{\omega}}.
\end{equation}
A Breit-Wigner form for the cross section
\begin{equation}
\sigma_{\pi\omega}(\sqrt{s}) = {\pi \over {\bbox{k}}^{2}}
{\Gamma_{b_{1}\rightarrow \pi\omega}^{2} \over (\sqrt{s}-m_{b_{1}})^{2}
+\Gamma_{b_{1}}^{2}/4}
\end{equation}
is used with ${\bbox{k}}$ being the center-of-mass momentum and the
full and partial widths taken to be 155 MeV.
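The integral is straightforward to evaluate numerically. The following
sketch (natural units, the standard M{\o}ller flux factor, and nominal
mass values) reproduces the order of magnitude quoted below:
\begin{verbatim}
import numpy as np

# Nominal vacuum masses and widths in MeV (assumed values).
MPI, MOMEGA, MB1, GB1 = 139.6, 782.0, 1235.0, 155.0

def gamma_coll(e_omega, T, n=400):
    """Collisional width (MeV) of an omega of total energy e_omega
    (> MOMEGA) in a thermal pion gas at temperature T (MeV), by direct
    numerical integration of the expression above (hbar = c = 1)."""
    p_omega = np.sqrt(e_omega**2 - MOMEGA**2)
    p = np.linspace(1.0, 10.0 * T, n)          # pion momentum grid
    c = np.linspace(-1.0, 1.0, n)              # cos(theta) grid
    P, C = np.meshgrid(p, c, indexing="ij")
    E = np.sqrt(P**2 + MPI**2)
    f = 1.0 / (np.exp(E / T) - 1.0)            # Bose-Einstein occupation
    pdotq = E * e_omega - P * p_omega * C      # four-product p_pi.p_omega
    s = MPI**2 + MOMEGA**2 + 2.0 * pdotq
    # Squared c.m. momentum, floored to step over the integrable 1/k edge.
    k2 = np.maximum((s - (MPI + MOMEGA)**2) * (s - (MOMEGA - MPI)**2)
                    / (4.0 * s), 1.0)
    sigma = (np.pi / k2) * GB1**2 / ((np.sqrt(s) - MB1)**2 + GB1**2 / 4.0)
    vrel = np.sqrt(np.maximum(pdotq**2 - (MPI * MOMEGA)**2, 0.0)) \
        / (E * e_omega)
    integrand = P**2 * f * sigma * vrel
    # Azimuthal symmetry: d^3p/(2 pi)^3 -> p^2 dp dcos(theta)/(4 pi^2).
    return np.trapz(np.trapz(integrand, c, axis=1), p) / (4.0 * np.pi**2)

print(gamma_coll(1000.0, 200.0))   # of order tens of MeV, cf. the figure
\end{verbatim}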
The collision rate (or width) is presented in Fig.~\ref{fig:collwidth}
for 100, 150 and 200 MeV temperature. Energy dependence aside, it is
indeed of order 30 MeV. It is
now added to the vacuum width resulting
in an energy-dependent full width of
\begin{equation}
\Gamma_{\omega}^{\rm full}(E_{\omega}) = \Gamma_{\omega}^{\rm vac} +
\Gamma_{\omega}^{\rm coll}(E_{\omega}).
\end{equation}
The propagator of Eq.~(\ref{eq:omegaprop}) is now modified by
replacing the omega width by this energy-dependent full width $\Gamma_{\omega}
\rightarrow \Gamma_{\omega}^{\rm full}(E_{\omega})$.
Integrating over the phase space for the initial and final
hadrons as in Eq.~(\ref{eq:decayrate}) sums over contributions from all
kinematically allowed squared four momentum for the omega. A broader
total width in the (denominator of the) propagator will naturally
reduce the rate for producing photons.
In physical terms, the propagating omega from which radiation
originates can scatter with the strongly interacting
matter and is therefore no longer as free to decay radiatively.
Modified rates are compared in Fig.~\ref{fig:broadrate} with those
using the vacuum width. The reduction in photon production is more
pronounced at larger temperatures as expected but for
temperature 100--200 MeV, the modified rate is comparable to
$\pi\rho\rightarrow \pi\gamma$ at its peak. However,
it is no longer significantly larger than the other axial-vector
decays considered here.
To more fully appreciate the relative importance of radiative decay
Fig.~\ref{fig:last} is shown. In it, the sum of $a_{1}$, $b_{1}$ and
$K_{1}$ decays is compared at $T=200$ MeV with the dominant hadronic
scattering contributions $\pi\rho\rightarrow \pi\gamma$ and
$\pi\pi\rightarrow \rho\gamma$ as well as the decay
$\rho\rightarrow \pi\pi\gamma$ taken from Ref.~\cite{song93}. The $b_{1}$
results are those from Fig.~\ref{fig:broadrate} which include a
collisional broadened width for the omega.
Radiative decays contribute more than scattering for photon energies
0.4--0.75 GeV. Then for more energetic photons they are less important.
If one is merely concerned with
the overall order of magnitude of the photon energy
spectrum, these contributions are not crucial. In more
detailed analyses of the energy spectrum, however, these axial-vector
radiative decays do become important.
\section{Concluding Remarks}
\label{sec:conclusion}
Mechanisms for photon production in hot hadronic matter are numerous.
Most of them involve species whose abundances are too low or whose scattering
rates or decay rates are too small to compete with pion and rho
meson processes. There are some with abundances and relevant rates
that do in fact compete for limited range in photon energies. Namely,
radiative decay of the heavier axial-vector mesons $a_{1}$, $b_{1}$
and $K_{1}$ are relatively important. Contributions
from $a_{1}\rightarrow \pi\gamma$ and $K_{1}\rightarrow
K\gamma$ are comparable to $\omega\rightarrow
\pi^{0}\gamma$ with similar photon energy dependence. Contribution
of $b_{1}\rightarrow \pi\pi^{0}\gamma$ is as large as
$\pi\rho\rightarrow \pi\gamma$ near its peak which occurs at photon
energy 0.5 GeV.
However, for photon energies 0.4 GeV and less,
other processes like $\pi\pi\rightarrow \rho\gamma$
and $\rho\rightarrow \pi\pi\gamma$ become dominant\cite{jkplds,song93}.
Other heavy mesons do not contribute as strongly as the three
axial-vector radiative decays mentioned above. The $\pi\rho$ and $K\rho$
decay channels of $a_{1}$ and $K_{1}$ are very strong which result in
rather large coupling constants.
By vector-meson dominance, the electromagnetic-decay coupling constants
are also relatively large. Similar remarks can be made about the
$b_{1}$, but in addition it is also exceptional since one of its most likely
decay products is $\omega$. The $\omega$'s electromagnetic decay
rate is rather large---a partial width of 0.7 MeV. Other non-strange
and strange heavy mesons which decay to omega do so with much smaller
rates and furthermore, will not be nearly so abundant at temperatures
100--200 MeV.
Thermalization is assumed for all hadronic species in this study.
For pions and rho mesons this is quite reasonable but for heavier
hadrons it is not so clear. It may well be that the axial-vectors
considered here do not thermalize. Their average dynamics could
be different, resulting in different photon production.
Studying modifications due to such effects would be an
interesting pursuit.
The method of collisional broadening included in Sec.~\ref{sec:broadening}
is somewhat simplistic. One should really compute a finite temperature
propagator and self-energy for the omega to two-loop order. The presence
of matter modifies the width by modifying the imaginary part
of the self-energy. It also modifies the pole position of the
propagator, i.e. gives the omega an effective mass, by introducing a
real part to the self-energy.
Such a calculation would have appeal from the point of view of theory
that the real and imaginary parts would be computed consistently.
What has been done in the present work
represents only a simple, first approximation to the effect.
Dilepton production via hadronic decay is limited, of course, to
pair invariant masses at most equal to the mass of the parent
hadron less the masses of any final-state hadrons. Therefore, higher-mass pairs can
only come from very massive hadrons. One might even consider charmed
non-strange and strange
mesons, but even at these temperatures they would be rarely produced
indeed. On the other hand, for intermediate dilepton invariant
masses the results presented
here lead naturally to questions about the role of axial vectors,
the $b_{1}$ in particular, on the invariant mass spectrum of lepton
pairs.
\section*{Acknowledgement}
This work was supported by the National
Science Foundation under grant number PHY-9403666.
\section*{Introduction}
Nuclear magnetic resonance (NMR) experiments performed and directly detected in fields $<1 \,\mu$G \cite{Bernarding2006,Savukov2005,Ledbetter2008,Tayler2017} promise certain advantages over more conventional forms of NMR: the ability to perform ultra-high resolution spectroscopy without expensive superconducting magnets, and without the complications for spectral assignment often arising in intermediate fields due to strong-coupling effects \cite{Appelt2010}. Further, the absence of magnetic fields equalizes the Larmor frequencies (by setting them to zero) of distinct spin species, thereby allowing the study of physics not accessible at high fields. Examples include nuclear spin-singlet states formed by different nuclides \cite{Emondts2014a} and observation of terms in the nuclear spin-coupling Hamiltonian that are truncated at high field by large chemical-shift differences between nuclei. Such `non-secular' terms may be interesting for characterization of ordered materials \cite{Blanchard2015a}, and have been proposed as a means to detect both chirality \cite{King2016b} and molecular parity non-conservation using NMR \cite{JPNC}. Zero-Field NMR has also been explored as a test-bed for quantum-simulation \cite{Jiang2017}, offering an attractive combination of long ($>$10 s \cite{Emondts2014a}) coherence times and short ($<$ 1 $\mu$s \cite{Llor1995b}) control times.
Zero-field NMR experiments monitor the evolution of groups of coupled spins in the absence of an externally applied static magnetic field \cite{Thayer1987a,Blanchard2016}. The characteristic frequencies of the coherent evolution are determined by the spin-spin coupling Hamiltonian, which under rapid motional averaging is given by
\begin{equation}
H_J = 2\pi\sum_{i>j}J_{ij}\bm{I}_i\cdot\bm{I}_j,
\label{eq:Hj}
\end{equation}
where the $J_{ij}$ are coupling constants (we write the $2\pi$ factor because the $J_{ij}$ are customarily given in Hz), the $\bm{I}_i$ are angular-momentum operators for each spin, and we have set $\hbar=1$.
The result of such an experiment is known as a zero-field $J$-spectrum, which provides accurate fingerprints of different chemical species, despite the absence of any chemical-shift information \cite{Blanchard2013,Theis2013}.
It is a feature of $J$-spectroscopy that only heteronuclear spin systems yield directly observable spectra \cite{Sjolander2017a}.
This means that several common $^{1}$H\:-containing solvents give no signal background, obviating the need for deuterated solvents.
However, it also means that, since $^{13}$C\: is only 1\% naturally abundant, the observed spectra of $^{1}$H\:-$^{13}$C\: systems featuring more than one carbon atom at natural abundance are superpositions of contributions from the different possible $^{13}$C\: isotopomers. Further complicating matters is the fact that the one-bond $J$-couplings of many organic molecules are of order $\sim$100\,Hz, which means that the peaks are spread out over only a few hundred Hz in frequency space.
While some molecules give $J$-spectra with peaks as narrow as 20 mHz \cite{Blanchard2013}, many other molecules do not.
Additionally, spectral complexity increases rapidly with spin-system size. Taken together, these factors often lead to partially resolved or overlapping peaks, which complicates assignment.
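As a minimal illustration of how the Hamiltonian in \cref{eq:Hj} fixes
a $J$-spectrum, the following NumPy sketch (with an assumed one-bond
coupling of 140\,Hz) diagonalizes the zero-field Hamiltonian of an
isolated $^{1}$H\:-$^{13}$C\: pair; the only allowed transition, between
the singlet and the triplet, gives a single line at $J$:
\begin{verbatim}
import numpy as np

# Spin-1/2 operators (hbar = 1).
SX = np.array([[0, 1], [1, 0]]) / 2
SY = np.array([[0, -1j], [1j, 0]]) / 2
SZ = np.array([[1, 0], [0, -1]]) / 2

def xa_pair_levels(J=140.0):
    """Energy levels (Hz) of one 1H coupled to one 13C at zero field,
    H/(2 pi) = J * I1 . I2."""
    H = J * sum(np.kron(S, S) for S in (SX, SY, SZ))
    return np.linalg.eigvalsh(H)

levels = xa_pair_levels()
print(levels)                    # [-3J/4, J/4, J/4, J/4]
print(levels[-1] - levels[0])    # transition frequency = J = 140 Hz
\end{verbatim}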
Meanwhile, the development of two-dimensional spectroscopy \cite{Jeener1979,Morris1986} is a major reason behind the analytical power of NMR. At a minimum, 2D experiments increase signal dispersion, thereby allowing the resolution of more crowded spectra. Additionally, many pulse sequences exist that enable the mapping of coupling networks, the simplification of spectral assignment, and structure elucidation \cite{Ernst1987}.
Multiple-quantum (MQ) NMR spectroscopy \cite{Hatanaka1975,Vega1976b,Norwood1992}, which in high field concerns transitions for which $|\Delta m| > 1$, where $m$ is the quantum number for the projection of the spin angular momentum on the field axis, has also found extensive use in NMR spectroscopy. In liquid-state analytical chemistry, multiple-quantum coherence filters combined with two-dimensional detection techniques provide one of the standard ways to map coupling networks \cite{Ernst1987}. MQ-spectroscopy also provides a means of simplifying the spectra of partially ordered systems: the smaller number of MQ peaks enables otherwise intractable spectra to be readily interpreted \cite{Warren1979,Warren1980}. In the solid state, MQ coherences may be used to monitor the growth of correlated spin clusters with applications to the investigation of the structure of amorphous solids \cite{Baum1985,Gleason1987}, and more recently in studies of many-body physics \cite{Alvarez2015,Wei2016}.
In this work we introduce two-dimensional correlation and MQ experiments in the context of liquid-state zero-field $J$-spectroscopy.
Correlation spectroscopy is an attractive way to approach the problem of natural-abundance $J$-spectra containing contributions from different isotopomers, since coherence transfer between distinct molecules in liquids is not possible under normal circumstances.
We show that the spectra from different $^{13}$C\: isotopomers in ethanol may be separated from each other by observing the cross-peak pattern.
The cross-peak pattern also simplifies spectral assignment and enables the distinction between otherwise overlapping resonances.
We also show that [$^{13}$C\:\!$_2$]-acetic acid supports the zero-field equivalent of a multiple-quantum transition, demonstrating for the first time the concept of Multiple-Quantum Zero- to Ultralow-Field (MQ-ZULF) NMR.
Just as in high-field NMR, the number of MQ-ZULF transitions is significantly reduced compared to the number of single-quantum transitions, potentially leading to simpler, easier to interpret spectra.
The selection rules governing the correlation pattern between single- and multiple-quantum coherences can be used to further simplify assignment.
Finally, we note that the ability to perform 2D experiments with only one indirect dimension is a significant advantage offered by directly detected zero-field experiments, as opposed to using indirect detection in high field \cite{Suter1987a}, which requires two indirect dimensions.
\section*{Zero-Field Spin-State Manifolds}
Commonly, the most interesting features of two-dimensional spectroscopy are cross peaks due to coherence transfer from one transition to another. Therefore, selection rules that constrain the allowed pathways are important for the interpretation of 2D spectra. Here we show the origin of one such important constraint in zero-field $J$-spectroscopy.
In an isotropic liquid-state system at zero magnetic field, the nuclear spin eigenstates are also eigenstates of the total spin angular momentum operator $\bm{F}^2$ and its projection $F_\alpha$ on an arbitrary axis, and may conveniently be labeled with the quantum numbers $F$ and $m_F$ \cite{Butler2013}. This is most easily justified by noting that the nuclear spin Hamiltonian given in \cref{eq:Hj} is invariant with respect to rotations of the spin system and therefore must commute with $\bm{F}^2$ and $F_\alpha$ \cite{Butler2013}. However, for systems comprising more than two spin-1/2 nuclei, additional quantum numbers are necessary to fully define the zero-field eigenstates. It is particularly useful to consider the angular momentum of sets of magnetically equivalent spins \cite{Levitt2001}. `Equivalent' here denotes a set of indistinguishable spins that also share the same couplings to all other spins in the spin system. For a set of magnetically equivalent spins with total angular momentum $K$, there is no combination of pulses or evolution intervals that can break the equivalence and $\bm{K}^2$ commutes with all realizable effective Hamiltonians. Therefore, the presence of equivalent spins leads to selection rules in the zero-field spectra - the quantum numbers, $K$, associated with the total angular momentum of sets of equivalent spins must be conserved throughout any pulse-sequence.
\begin{figure}[t!]
\includegraphics[width=\columnwidth]{fullAcOHenergyDiagramInkedSpace.pdf}
\caption{Energy levels of acetic acid may be grouped in two manifolds of different proton angular momentum ($K=1/2$, left, and $K=3/2$, right). Transitions and coherence transfer may only occur within each manifold. Magnetic dipole-allowed transitions have $\Delta F=\pm 1,0$ and are shown by solid lines. The dash/dot line marks a $\Delta F=\pm 2$ transition and the color coding and lines styles match \cref{fig:1DJspectra}}
\label{fig:fullAcOH_Ediagram}
\end{figure}
This selection rule reflects the existence of separate spin manifolds in zero field.
We define such manifolds to be sets of energy levels which share the same combination of equivalent-spin quantum numbers.
It follows from the above discussion that there can be no transitions of any kind between states belonging to different manifolds.
As an example consider the zero-field energy level structure of $^{13}$C\:\!$_2$-acetic acid (see Fig.\,\ref{fig:fullAcOH_Ediagram}), which, using the notation in \cite{Blanchard2016}, can be considered an (XA$_3$)Y spin system with the methyl (CH$_3$) protons being equivalent.
The Hamiltonian for this spin system is $H = 2\pi(\:^{1}\!J_{\mathrm{CH}} \bm{S}_\mathrm{X}\cdot\bm{K} +\ ^{2}\!J_{\mathrm{CH}} \bm{S}_\mathrm{Y}\cdot\bm{K} +\ ^{1}\!J_{\mathrm{CC}} \bm{S}_\mathrm{X}\cdot\bm{S}_\mathrm{Y})$, where $\bm{K}$ and $\bm{S}_{\mathrm{X}/\mathrm{Y}}$ are angular momentum operators for the proton group and carbons respectively, and $\bm{K} = \sum_{i=1}^{3}\bm{I}_i$, where $\bm{I}_i$ refer to the proton spins. The three protons make up the only set of equivalent spins in the molecule and $K$ may take two values, 1/2 and 3/2.
The allowed transitions may thus be assigned to two separate manifolds, as shown in the energy-level diagram in \cref{fig:fullAcOH_Ediagram}.
There are actually two manifolds for which $K=1/2$, but they are degenerate, and we ignore this point for simplicity.
There are no transitions between states of different $K$ and the two $K$ manifolds form entirely isolated spin systems.
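This block structure is easy to verify numerically. The sketch below
builds the full $32\times32$ Hamiltonian of the (XA$_3$)Y system and
checks that the total proton angular momentum is conserved
($^{1}\!J_{\mathrm{CH}}$ uses the best-fit value reported below; the
other two couplings are placeholder values for illustration only):
\begin{verbatim}
import numpy as np
from functools import reduce

SX = np.array([[0, .5], [.5, 0]]); SY = np.array([[0, -.5j], [.5j, 0]])
SZ = np.array([[.5, 0], [0, -.5]]); I2 = np.eye(2)

def embed(op, site, n=5):
    """Single-spin operator `op` on spin `site` of an n-spin system."""
    mats = [I2] * n
    mats[site] = op
    return reduce(np.kron, mats)

def dot(i, j):
    """Scalar coupling I_i . I_j."""
    return sum(embed(S, i) @ embed(S, j) for S in (SX, SY, SZ))

# (XA3)Y: spins 0-2 = methyl protons, 3 = methyl 13C, 4 = carboxyl 13C.
# Hamiltonian in Hz (i.e. H/(2 pi)); J2 and JCC are placeholders.
J1, J2, JCC = 129.5, 7.0, 57.0
H = (J1 * sum(dot(i, 3) for i in range(3))
     + J2 * sum(dot(i, 4) for i in range(3))
     + JCC * dot(3, 4))

# K^2 for the three protons commutes with H, so K is conserved.
K = [sum(embed(S, i) for i in range(3)) for S in (SX, SY, SZ)]
K2 = sum(k @ k for k in K)
print(np.allclose(H @ K2, K2 @ H))   # True
\end{verbatim}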
The principle extends readily to systems with more than one set of equivalent spins.
For example, the Hamiltonian for the spin system of 1-$^{13}$C\:-ethanol in zero-field is $H_{J} =2\pi(\ ^{1}\!J_{\mathrm{CH}}\bm{L}\cdot\bm{S} +\ ^{2}\!J_{\mathrm{CH}}\bm{K}\cdot\bm{S} +\ ^{3}\!J_{\mathrm{HH}}\bm{K}\cdot\bm{L})$, where $\bm{S}$ and $\bm{K}$ are defined as above and $\bm{L}$ is the operator for the angular momentum of the protons in the methylene (CH$_2$) group.
1-$^{13}$C\:-ethanol is an (XA$_2$)B$_3$ spin system, so $\bm{L}=\sum_{i} \bm{I}_{\bm{A},i}$ and $\bm{K}=\sum_{i} \bm{I}_{\bm{B},i}$.
The energy level diagram for this molecule, as well as the 2-$^{13}$C\:-isotopomer is shown in \cref{fig:ethanol_Ediagram}.
In both cases there are four spin-manifolds, corresponding to the four distinct combinations of $K$ and $L$.
Since a spin-state labeled with a certain combination of conserved quantum numbers may not evolve or transform into a spin-state labeled with a different combination, it follows that in 2D zero-field spectroscopy we will not see cross-peaks between transitions belonging to different spin-manifolds.
This phenomenon is not unique to zero-field NMR; spin states belonging to different irreducible representations of a given permutation group do not mix or evolve into each other no matter the magnitude of the external field. For example, in high-field NMR a ${}^{13}$CH$_2$ group (where the protons are magnetically equivalent) gives only two peaks at the proton frequency, since transitions between the proton singlet and triplet states are forbidden.
\begin{figure}[t!]
\includegraphics[width=\columnwidth]{ethanolEnergyDiagramSpace.pdf}
\caption{Energy levels of the two (singly labeled) $^{13}$C\: isotopomers of ethanol support four distinct manifolds corresponding to different combinations of total proton angular momentum. For clarity only one transition per manifold is plotted and the quantum number $S=\frac{1}{2}$ for the $^{13}$C\: spin has been omitted. Solid lines correspond to the 1-$^{13}$C\: isotopomer and the dotted lines to the 2-$^{13}$C\: isotopomer. The color coding and line styles match \cref{fig:1DJspectra}}
\label{fig:ethanol_Ediagram}
\end{figure}
\section*{Zero-Field Total Correlation Spectroscopy}
The building blocks of a 2D NMR experiment are the same for zero- and high-field experiments. A general schematic is shown in \cref{fig:pulseSequences}a.
First, the desired coherences are prepared in an excitation step, described by the overall propagator $U_{exc}$; this is followed by free evolution during which these coherences acquire phase.
Finally, a reconversion step, described by the propagator $U_{re}$, precedes the readout.
In this work, the excitation and reconversion sequences are generated by one or several DC magnetic-field pulses around different axes.
The pulses are much stronger than any $J$-couplings for these systems, $|\gamma B| \gg |J|$, so to a good approximation each spin is independently rotated by an angle $\theta_i = \gamma_i Bt_p$ around the pulse axis, where $\gamma_i$ is the gyromagnetic ratio of the $i^{\mathrm{th}}$ spin, $B$ is the amplitude of the pulse, and $t_p$ is its duration.
In this work, we use $^{1}$H\: + $^{13}$C\: spin systems and all pulses are calibrated to effect a $\pi$ rotation of the $^{13}$C\: spins, which means that the $^{1}$H\: spins are rotated by $\pi\times(\gamma_{^1\mathrm{H}}/\gamma_{^{13}\mathrm{C}})\approx 4\pi$.
We take the detection axis to be the z-axis and assume that the initial state in all experiments is given by magnetization along the z-axis generated by adiabatic transport from a pre-polarizing magnetic field, giving the initial deviation density matrix $\rho(0) \propto \sum_i I_{z,i}$.
This state commutes with $H$ and so does not evolve.
Evolution may be initiated by changing the relative orientation of the proton and carbon spins, for example with a $\pi$ pulse on carbon in the x/y plane \cite{Blanchard2016}.
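For concreteness, a short sketch of this calibration (the pulse
amplitude here is an assumed value, not our experimental setting):
\begin{verbatim}
import numpy as np

GAMMA_H = 2 * np.pi * 42.577e6   # 1H gyromagnetic ratio, rad s^-1 T^-1
GAMMA_C = 2 * np.pi * 10.708e6   # 13C gyromagnetic ratio, rad s^-1 T^-1
B = 0.5e-3                       # pulse amplitude in tesla (assumed)

t_p = np.pi / (GAMMA_C * B)          # duration of a pi pulse on 13C
theta_H = GAMMA_H * B * t_p / np.pi  # 1H rotation in units of pi
print(t_p, theta_H)                  # ~93 us, ~3.98 pi
\end{verbatim}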
Reference \cite{Sjolander2016} introduced a zero-field `spin-tickling' method whereby assignment of resonances was simplified by monitoring the response of the spectrum to low-amplitude irradiation of selected transitions.
We present a two-dimensional variant, where the complete spectral connectivity is established in one experiment.
Here, $t_1$ evolution is initiated with a $\pi$ pulse on carbon along the x-axis followed by a second $\pi$ pulse and $t_2$ evolution (detection).
The Fourier transform with respect to $t_1$ and $t_2$ gives a correlation spectrum between two spectral dimensions, F1 and F2, each of which corresponds to a zero-field $J$ spectrum.
The sequence is summarized schematically in Fig.\ \ref{fig:pulseSequences}b.
We refer to this type of experiment as ZF-TOCSY, so named because of the similarity to high-field Total Correlation Spectroscopy (TOCSY) \cite{Braunschweiler1983}, in which a spin-lock provides an effective strong-coupling condition during the isotropic mixing period.
The zero-field variant takes advantage of the fact that all nuclear spins are already strongly coupled at zero field, so no Hamiltonian engineering is required.
\section*{Zero-Field Multiple-Quantum Correlation Spectroscopy}
\begin{figure}[]
\includegraphics[width=\columnwidth]{sequencesSchematicNEW.png}
\caption{General scheme for 2D NMR spectroscopy and the pulse sequences used in this work. (a) An excitation sequence prepares the desired coherences, this is followed by a period of free evolution and finally a reconversion sequence and detection. (b) The ZF-TOCSY sequence. (c) An example MQ-ZULF sequence.}
\label{fig:pulseSequences}
\end{figure}
In high-field NMR spins are naturally quantized along the external magnetic field and states may be labeled with their projection, $m$, along that field. Directly observable single-quantum coherences have $\Delta m = \pm 1$, and indirectly observable multiple-quantum coherences are those for which $|\Delta m| > 1$.
Conversely, in zero field the eigenstates are eigenstates of total angular momentum, labeled $F$.
The observable in both high- and zero-field NMR experiments is total magnetization along some direction, conventionally taken to be the x-axis in high field and the z-axis in zero field.
Since the sample magnetization is represented by a vector operator, it follows from the Wigner-Eckart theorem \cite{VMK} that the allowed transitions are those with $\Delta F = 0, \pm 1$.
We suggest that the indirect observation of transitions for which $|\Delta F| > 1$ may serve as a zero-field analog to high-field MQ experiments.
An example pulse sequence that can be used to observe such transitions is shown in \cref{fig:pulseSequences}c.
In this sequence, which is modeled on the original MQ-excitation sequence, the unitary propagator for the excitation is given by $U_{exc} = P_x-U_{t_m}-P_x$, and the reconversion propagator is $U_{re} = P_x$.
$P_x$ is the propagator for a $\pi$ pulse along $x$ on $^{13}$C\: and $U_{t_m}$ the propagator for free evolution for a time $t_m$.
Starting from single-spin order, the first pulse initiates evolution under the $J$-coupling Hamiltonian, which subsequently generates multiple-spin terms.
The second pulse converts some of those terms into zero-field MQ-coherences. After $t_1$ evolution these terms are converted into observable magnetization with a readout pulse.
With the exception of the first pulse and the mixing period, this pulse sequence is identical to the ZF-TOCSY sequence and can therefore be expected to produce a similar spectrum.
The difference is that the $t_1$ period contains both observable and $n>1$ coherences, so we expect the resulting spectrum to look like a ZF-TOCSY spectrum but with additional cross-peaks in F1, arising from coherences evolving at frequencies corresponding to $n>1$ transitions during $t_1$.
\begin{figure*}[t!]
\centering
\includegraphics[]{1DJSpectra.png}
\caption{One dimensional zero-field $J$-spectra of ethanol and $^{13}$C\:\!$_2$-acetic acid. The stick spectra correspond to the transition energies predicted by numerical diagonalization of the spin Hamiltonian. The transitions have been labeled and color-coded according to their energy-level manifold. For the ethanol spectrum, solid lines correspond to the 1-$^{13}$C\: isotopomer and dotted lines to the 2-$^{13}$C\: isotopomer. In the acetic acid spectrum the expected frequency of a $K$-conserving zero-field double-quantum transition is shown for reference.
Note that narrow lines at multiples of 60\,Hz arise from power line noise.}
\label{fig:1DJspectra}
\end{figure*}
The possible values of $n$ that can occur during $t_1$ depend on the spin topology.
Given a spin system containing $k$ sets of equivalent spins we can write an expression for the highest order of zero-field quantum coherence supported by each energy-level manifold.
Each manifold is labeled by $k$ quantum numbers and $n_{\mathrm{max}}$ for a particular manifold is given as twice the largest quantum number in the manifold, or twice the sum of the remaining ones, whichever number is smaller:
\begin{equation}
n_{\mathrm{max}}=2\times\mathrm{Min}\lbrace \sum_{i=1}^{k-1}f_i\ ,\ f_{k} \rbrace,
\label{eq:nmax}
\end{equation}
where $f_i$ is the total angular momentum for each set of equivalent spins and the sum runs over the $k-1$ smallest quantum numbers in the manifold.
$f_{k}$ is the largest quantum number in the manifold.
As an example consider the energy level diagram of $^{13}$C$_2$-acetic acid given in Fig.\ \ref{fig:fullAcOH_Ediagram}.
There are three groups of inequivalent spins, meaning $k=3$. The ($f_1 = 1/2, f_2 = 1/2, f_3=1/2$) manifold supports only $n=0,1$ transitions consistent with the fact that the largest quantum number, $f_3$, in the manifold is $1/2$, whereas the ($f_1 = 1/2, f_2 = 1/2, f_3 = 3/2$) manifold supports a $n=2$ transition, since $2\times (f_1+f_2) = 1$.
The MQ-ZULF experiment thus provides direct information regarding the possible quantum numbers associated with a given resonance.
\Cref{eq:nmax} can be justified as follows: $n_{max}$ denotes the largest possible change in total angular momentum that may occur in a given spin-manifold, meaning we have $n_{max}=F_{max}-F_{min}$.
Each $F$ value is the result of successively coupling all angular momenta $f_i$ in that manifold.
$F_{max} = \sum_{i=1}^{k}f_i$, and $F_{min} = |f_k - \sum_{i=1}^{k-1}f_i|$.
Depending on which of $f_k$ or $\sum_{i=1}^{k-1}f_i$ is bigger we obtain the two cases for \cref{eq:nmax}.
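The rule in \cref{eq:nmax} is simple enough to tabulate directly; a
minimal sketch:
\begin{verbatim}
def n_max(fs):
    """Maximum zero-field coherence order for a manifold labeled by
    the equivalent-group angular momenta fs: n_max = F_max - F_min."""
    fk = max(fs)
    return int(2 * min(sum(fs) - fk, fk))

print(n_max([0.5, 0.5, 0.5]))   # 1: acetic acid K = 1/2 manifold
print(n_max([0.5, 0.5, 1.5]))   # 2: acetic acid K = 3/2 manifold
\end{verbatim}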
\begin{figure*}[t!]
\begin{subfigure}[b]{0.9\columnwidth}
\includegraphics[]{EthanolTOCSYV3.png}
\caption{Ethanol Mixture}
\label{fig:EthanolTOCSY}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.9\columnwidth}
\includegraphics[]{fullAcOH_TOCSYV3.png}
\caption{$^{13}$C\:\!$_2$ Acetic Acid}
\label{fig:fullAcOH_TOCSY}
\end{subfigure}
\caption{Zero-Field Total Correlation Spectra acquired using the protocol in Fig.\ \ref{fig:pulseSequences}b. The numerical peak assignments made in Fig.\ \ref{fig:1DJspectra} are plotted on both axes and the diagonal is shown as a dotted black line for reference.}
\end{figure*}
\section*{Methods}
We performed experimental demonstrations of the ZF-TOCSY and MQ-ZULF experiments using $^{13}$C\: labeled isotopomers of acetic acid and ethanol.
In the case of ethanol, we prepared a sample consisting of a mixture of the two singly labeled $^{13}$C\: isotopomers.
The two isotopomers were present in equal proportion, as they would be at natural abundance.
The acetic acid sample was doubly $^{13}$C\: labeled in order to ensure the presence of a $K$-conserving double-quantum transition.
The experiments were performed using a $^{87}$Rb\: vapor-cell magnetometer operating in the Spin-Exchange
Relaxation-Free (SERF) regime \cite{Kominis2003}, configured for use as an NMR spectrometer \cite{Tayler2017}.
SERF magnetometers are DC magnetic field sensors, which allows us to directly monitor the low-frequency spin evolution in zero-field.
All experiments were done using $\sim$80 $\mu$L samples in 5 mm outer diameter standard NMR tubes.
The acetic acid sample contained only $^{13}$C\:\!$_2$-acetic acid (residual singly labeled material is detectable, but this has negligible effect on the experiment) while the ethanol sample was made from $\sim$40 $\mu$L 1-$^{13}$C\:-ethanol and $\sim$40 $\mu$L 2-$^{13}$C\:-ethanol.
The samples were prepolarized in a 2 T permanent magnet and shuttled pneumatically into a magnetically shielded region for zero-field evolution and detection.
For the 2D experiments, the samples were re-polarized between every point in the indirect dimension.
\section*{Results and Discussion}
For reference, we first obtained 1D $J$ spectra of these molecules; the results are presented in \cref{fig:1DJspectra}.
The spectra are the result of summing 2000 and 3000 transients, respectively, for the acetic acid and ethanol data.
In both cases, each transient corresponds to 20 s of data acquisition.
Numerical analysis is used to assign the eigenvalues of $\bm{K}^2$ and $\bm{L}^2$ for each transition.
Extracted $J$-coupling values are provided in the appendix, along with a discussion of further details of the spectra.
Simulated spectra based on the best-fit $J$ values, together with the $K$ and $L$ assignments are also shown in \cref{fig:1DJspectra}.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{ZoomedTOCSYV2.png}
\caption{Detailed structure of the high-frequency multiplet in the ZF-TOCSY spectrum of 1-$^{13}$C\: ethanol. All peaks in this multiplet have $L$=1. The dashed and dotted lines are drawn to guide the eye to particular cross-peaks (which are marked by boxes) which confirm that the high-intensity peak at 211\,Hz actually consists of two overlapping resonances which belong to different $K$-manifolds.}
\label{fig:Zoomed}
\end{figure}
The 1D $J$-spectra may be interpreted as follows:
In $^{13}$C$_2$-acetic acid, the dominant coupling is between a single spin-1/2 carbon coupled to a group of three equivalent protons with angular momentum $K$.
An XA$_3$ spin system has two transition frequencies depending on the value of $K$: at $J$ for $K = 1/2$ and at $2J$ for $K = 3/2$.
Couplings to the second carbon result in the spectrum being made up of groups of transitions centered at those two frequencies plus additional peaks close to zero \cite{Theis2013}.
In the experimental data we see two peaks in the 120-150\,Hz range, three peaks between 225\,Hz and 280\,Hz, and three peaks between 20 and 40\,Hz, while the best-fit value for $^{1}\!J_{\mathrm{CH}}$ was 129.504\,Hz.
Careful inspection of the spectrum also reveals weak signals at 6.75, 13.5, 129.5 and 259\,Hz.
These peaks can be assigned to residual 1-$^{13}$C\: and 2-$^{13}$C\: acetic acid in the sample.
Both isotopomers would yield peaks at 1$J$ and 2$J$ on account of being XA$_3$ spin systems, and the corresponding coupling constants listed in the appendix in \cref{tab:exptJCouplingsEthanol} are consistent with the positions of the weak signals.
Finally, the stick spectrum also shows the expected position of the $K$-conserving $\Delta F = \pm2$ transition, however since this transition does not correspond to oscillating magnetization it is not observed in the directly detected 1D data.
For both isotopomers of ethanol, the spectrum is to first order determined by a strong $^{1}\!J_{\mathrm{CH}}$ coupling \cite{Theis2013}, which sets up an initial splitting pattern that is further split by the weaker $^{2}\!J_{\mathrm{CH}}$ and $^{3}\!J_{\mathrm{HH}}$ couplings.
An XA$_2$ spin system in zero-field gives a single peak at 3/2$J$ while an XA$_3$ system gives one peak at the coupling frequency and another at twice the coupling frequency.
We can therefore identify the cluster of peaks at 210\,Hz with the 1-$^{13}$C\: isotopomer and the peaks around 125\,Hz and 250\,Hz with the 2-$^{13}$C\: isotopomer.
This corresponds to a one-bond $^{13}$C\:/$^{1}$H\: $J$-coupling constant of $\sim$140\,Hz when the $^{13}$C\: label is on the methylene group, and $\sim$125\,Hz when the $^{13}$C\: is on the methyl group. This is consistent with the results obtained by numerical fitting of the spectra.
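The arithmetic behind these identifications fits in a few lines (an illustrative sketch using the approximate peak positions quoted above):
\begin{verbatim}
# Zero-field rules of thumb: an XA2 system gives one line at (3/2)J;
# an XA3 system gives lines at J and 2J.
peak_XA2 = 210.0   # Hz, multiplet of 1-13C ethanol (13C on CH2)
peak_XA3 = 125.0   # Hz, 1J multiplet of 2-13C ethanol (13C on CH3)
print(peak_XA2 / 1.5)   # ~140 Hz: one-bond CH coupling, methylene
print(peak_XA3)         # ~125 Hz: one-bond CH coupling, methyl
\end{verbatim}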
While this back-of-the-envelope interpretation gives approximate values for the 1-bond coupling constants, we note that without the aid of numerical simulations it would for example be difficult to say with any certainty where the spectrum of the 1-$^{13}$C\: isotopomer ends and that of the 2-$^{13}$C\: isotopomer begins.
Without computer assistance it would also be challenging to distinguish the $K=1/2$ peaks from the $K=3/2$ peaks in the 3$J$/2 multiplet associated with the 1-$^{13}$C\: isotopomer.
However, not all spin systems are small enough that their Hamiltonians are readily diagonalizable, and in chemical analysis the exact coupling topology is not always known in advance.
Here we demonstrate how to overcome these problems using 2D spectroscopy.
\begin{figure*}[t!]
\begin{subfigure}[b]{0.9\columnwidth}
\includegraphics[]{multiPolarFullAcOH_zTAUzV3.png}
\caption{}
\label{fig:fullAcOH_MultiPolar2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.9\columnwidth}
\includegraphics[]{indirectAcOHspectra.png}
\caption{}
\label{fig:fullAcOH_MultiPolar}
\end{subfigure}
\caption{MQ-ZF spectra of $^{13}$C\:\!$_2$ acetic acid. (a) 2D Single/multiple-quantum correlation spectrum. F1 corresponds to the indirectly detected dimension, and the numerical peak assignments of \cref{fig:1DJspectra}b are plotted on both axes for reference. (b) Slices through the 2D spectrum in \cref{fig:fullAcOH_MultiPolar2}. The x-axis corresponds to F1, and each spectrum is the projection of the data around a $\pm$ 0.5\,Hz slice at the designated frequencies in the directly detected dimension (F2). The pink bands indicate $\pm$ 245\,Hz, the expected frequency of the double-quantum transition. Note that for convenience the spectra are evenly spaced in the vertical dimension. The top panel shows the directly detected 1D $J$-spectrum.}
\end{figure*}
Figures \ref{fig:EthanolTOCSY} and \ref{fig:fullAcOH_TOCSY} show the ZF-TOCSY spectra of the ethanol mixture and the acetic acid sample acquired using the protocol in \cref{fig:pulseSequences}b.
Note that the entire spectral window (both positive and negative frequencies) is displayed in F1, whereas only the positive part of the F2 axis is shown.
This is because the recorded signal is purely real, and its Fourier Transform is therefore conjugate symmetric.
The spectra are displayed in magnitude mode, so to display all the spectral information it is therefore only necessary to plot two of the four quadrants of the Fourier transformed data.
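The quadrant argument is a standard property of the discrete Fourier transform of real data, which the following sketch (illustrative random data only) verifies numerically:
\begin{verbatim}
# The 2D FFT of purely real data obeys S(-f1,-f2) = conj(S(f1,f2)),
# so magnitude spectra in diagonally opposite quadrants coincide.
import numpy as np
s = np.random.default_rng(0).normal(size=(64, 64))   # real "signal"
S = np.fft.fft2(s)
assert np.allclose(S[1:, 1:][::-1, ::-1], np.conj(S[1:, 1:]))
# Two of the four quadrants thus carry all the spectral information.
\end{verbatim}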
Overtones of the power-line noise lead to vertical streaks in the data at multiples of 60\,Hz and a climate-control fan causes building vibrations at $\sim$ 10.7\,Hz.
We do not know the origin of the feature near 8.7\,Hz.
The positions of the transitions based on the fitted $J$-coupling constants are plotted on the axes for reference.
The spectra contain cross-peaks between transitions belonging to the same spin-state manifolds.
Meanwhile, there is no coherence transfer, and therefore no cross-peaks, between either the two isotopomers or peaks corresponding to the same isotopomer but different combinations of $K$ and $L$.
This allows us to confirm the numerical assignment of the peaks made in \cref{fig:1DJspectra}.
For example the peak at $\sim$238\,Hz in the ethanol spectrum in \cref{fig:EthanolTOCSY} clearly correlates to the 2$J$ multiplet at around 250\,Hz and therefore belongs to the 2-$^{13}$C\: isotopomer.
Similarly the peak at $\sim$125\,Hz does not give cross peaks with any of the three other peaks in the 1$J$ multiplet, and it therefore must belong to a separate spin-manifold, consistent with the numerical assignment of the 125\,Hz peak to $L=0$ and the three surrounding peaks to $L=1$.
With the same reasoning the ZF-TOCSY spectrum of acetic acid in \cref{fig:fullAcOH_TOCSY} can be used to distinguish between the three lower-frequency peaks of the acetic acid $J$-spectrum, and confirm the numerical assignment of the peak at 31\,Hz to the same spin-manifold as the peaks at 119 and 149\,Hz.
These assignments can all be made without numerical diagonalization.
As a further example of the use of 2D spectroscopy, consider the high-frequency multiplet of the 1-$^{13}$C\: isotopomer (at around 3/2$J$ in both F1 and F2) shown in more detail in \cref{fig:Zoomed}.
Fitting of the 1D data reveals that the high-intensity peak at $\sim$211\,Hz actually consists of two overlapping resonances with different $K$ quantum numbers.
This is confirmed by the cross-peak pattern, as \cref{fig:Zoomed} shows that the 211\,Hz peak correlates to both $K=1/2$ and $K=3/2$ resonances in F1.
In this case 2D spectroscopy allows us to distinguish overlapping resonances by correlating them to distinct, well separated, peaks.
The ZF-TOCSY experiment relies on similar physics to high-field Total Correlation Spectroscopy (TOCSY) \cite{Braunschweiler1983}, but does not require a mixing time.
This follows from the fact that the zero-field free-evolution Hamiltonian already provides strong coupling for all spin pairs (both homo- and heteronuclear) and thus allows complete coherence transfer throughout the molecule.
The same fact also means that F$_1$ and F$_2$ do not correspond to individual-spin transitions but rather to zero-field $J$-spectra.
Previous 2D experiments with direct detection were either performed with different effective Hamiltonians during the two evolution intervals \cite{Sjolander2017a}, or in the presence of a magnetic field such that the Larmor frequencies and $J$-coupling frequencies are approximately equal \cite{Shim2014a}.
These cases lead to significantly different cross-peak patterns.
As mentioned above, inspection of the energy-level diagram of ${}^{13}$C$_2$-acetic acid shown in \cref{fig:fullAcOH_Ediagram} reveals a $K$ conserving, and therefore potentially observable, $\Delta F = \pm 2$ transition at $\sim$245\,Hz.
The results of an MQ-ZULF experiment designed to observe this transition, in spite of it not corresponding to oscillating magnetization, are shown in \cref{fig:fullAcOH_MultiPolar2}.
The data were acquired using the pulse sequence in \cref{fig:pulseSequences}c.
The cross-peak pattern is mostly the same as in the ZF-TOCSY spectrum in \cref{fig:fullAcOH_TOCSY}, however there is one additional peak in F1 at 245\,Hz (highlighted pink band) corresponding to oscillations during $t_1$ at the frequency of the $\Delta F = \pm2$ transition.
According to \cref{eq:nmax} only the $K=3/2$ manifold supports such a transition, and it shows up only as cross-peaks to the transitions at 19\,Hz, 38\,Hz, 225\,Hz, 265\,Hz, and 283\,Hz, thus confirming the numerical assignment made in \cref{fig:1DJspectra}b of those peaks to the $K=3/2$ manifold.
This is perhaps seen more clearly in \cref{fig:fullAcOH_MultiPolar}, which shows slices through the indirect (F1) dimension taken at the positions of the peaks in 1D spectrum.
The MQ-resonance (the expected position is indicated with a pale band) clearly shows up in the indirectly detected data, but only in those spectra that are read out at the frequencies of the $K=3/2$ transitions.
We note that the initial density matrix also contains terms proportional to scalar order $\propto\sum_{ij}\bm{I}_i\cdot\bm{I}_j$ \cite{Emondts2014a,Theis2012}, which already contains two-spin terms at time zero.
This could therefore be turned into a two-pulse experiment if pulses along $z$ are used instead, since such pulses access scalar spin-order instead of vector spin-order \cite{Emondts2014a}.
\section*{Conclusions}
We have shown that direct detection using magnetometers facilitates 2D zero-field NMR correlation spectroscopy, and how such techniques simplify assignment of crowded $J$-spectra.
The complete coherence transfer enabled by the isotropic zero-field Hamiltonian ensures that cross peaks in the ZF-TOCSY spectrum are seen between all peaks belonging to the same spin-manifold.
Consequently, ZF-TOCSY may be used not only to distinguish between different molecules, or different $^{13}$C\: isotopomers of the same molecule, but also to facilitate zero-field spectral assignment of a given isotopomer by providing a way to determine if two transitions occur within the same angular-momentum manifold.
Additionally, 2D-spectroscopy increases the maximum attainable spectral resolution by introducing a second spreading parameter in the spectrum. In particular, the ability to resolve otherwise overlapping peaks significantly increases the power of zero-field NMR for chemical fingerprinting, beyond what can already be obtained with the narrow linewidths associated with high-homogeneity zero-field environments.
We have also introduced the concept of multiple-quantum transitions in zero- to ultralow-field NMR (MQ-ZULF) and shown that such transitions may only belong to particular spin-manifolds.
Therefore, by observing which peaks correlate to a multiple-quantum transition, one can assign quantum numbers to those peaks.
Filtering zero-field coherences \cite{Ernst1987,Shaka1983} based on $\Delta F$ would allow for further simplification of spectra -- this remains an outstanding experimental challenge to be addressed in future work (preliminary efforts are discussed in the appendix).
\section*{Acknowledgements}
This work was supported by the National Science Foundation under award CHE-1709944.
The authors thank Rom{\'a}n Picazo Frutos for helpful comments on the manuscript.
\section{Introduction}
Theoretical investigation of hadron production in heavy-ion
collisions at high energies is usually separated into different
camps, characterized by the regions of transverse momenta $p_T$ of
the produced hadrons. At low $p_T$ statistical hadronization and
hydrodynamical models are generally used \cite{pbm, ph, kh, tat, gjs}, whereas at high
$p_T$ jet production and parton fragmentation with suitable
consideration of medium effects in perturbative QCD are the
central themes \cite{pq1, pq2, ia, mt, jet}. The two approaches have been studied
essentially independent of each other with credible success in
interpreting the data, since their dynamics are decoupled at the
energies investigated. The situation may have changed at the CERN
Large Hadron Collider (LHC), where Pb-Pb collisions have been
carried out at $\sqrt{s_{NN}}=2.76$ TeV, resulting in thousands of
soft hadrons on the one hand, and multiple hard jets on the other.
Minijets that are copiously produced at intermediate $p_T$ can
fragment into soft partons with multiplicities so high that their
effects on the hadronization of all
partons created in the soft sector
cannot be ignored. It is the aim of this paper to investigate what
those effects are and to offer an explanation of the observed
hadronic spectra of all species and for all $p_T$ measured up to
20 GeV/c.
Hard parton scattering and hydrodynamical flow are processes that
involve very different time scales. It would be hard to
incorporate them into a unified formalism that describes all
aspects of the system, including thermalization time, initial configuration,
fluid nature of the medium, its quenching effect on the hard
partons, the creation of shower partons, and the hadronization of
all partons at the end of the whole process. Our attempt here is
far from being so ambitious. We focus only on the $p_T$
dependencies of the hadrons produced from 0.5 to 20 GeV in a
formalism that can be valid throughout that range, provided that
we use some model inputs for the thermal component of the low-$p_T$ behavior to supplement
the hard component that can be calculated at high $p_T$. We use quark recombination
to treat hadronization, applied uniformly at all $p_T$. In
treating the degradation of momenta of hard and semihard partons
we shall adjust some parameters to fit the high-$p_T$ data. Since
we aim to confront the $p_T$ spectra of all observed hadrons,
$\pi, K, p, \Lambda$, $\Xi$, $\phi$ and $\Omega$, the system is
highly constrained. The primary feature of this study is to
quantify the effect of hard and semihard jets on the soft sector.
What we find is that the soft partons generated by the hard
partons are so much more abundant at LHC, compared to
the situation at
RHIC, that any treatment without including that aspect of the
problem would be incomplete.
Our investigation of produced hadrons with various contents of
strangeness also reveals contrasting features of heavy-ion physics
not commonly addressed. Whereas hard scattering of gluons and
light quarks can readily occur at high energies, jet fragmentation
into multi-strange hadrons like $\Omega$ and $\phi$ is rare even
at LHC. But the production of $\Omega$ relative to $p$ grows
exponentially with $p_T$ even to the highest $p_T$ measured, the
data for which will be exhibited explicitly in the next section.
Surely, one cannot expect $\Omega$ to be easily produced at
$p_T=7$ GeV/c by jet fragmentation. An explanation of the observed
phenomenon must be an integral part of a description of the
production mechanism of all hadrons.
To give a description of the experimental motivation for our
study, we show in Sec.\ II several pieces of data presented in
novel ways so as to emphasize the problems that have not been
commonly discussed. It will become clear that the hadronization
problem at LHC is drastically different from that at RHIC. In the
framework of the recombination models \cite{hy1,hy2,hz1,hz2} in which
the partons just before hadronization are categorized into thermal
(T) and shower (S) partons, that difference at LHC can be
succinctly stated in the form that S is much greater than T at low
$p_T$ for light quarks, but not strange quarks. Such a statement
has no phenomenological consequence unless the hadronization of
those quarks is treated by recombination.
We do not consider here other features of heavy-ion collisions
besides $p_T$ distributions, most notably the azimuthal dependence
in non-central collisions. The conventional description of elliptic
flow does not consider the effects of jets. We shall treat that
subject separately, after our concern about the shower partons
establishes a footing in the general terrain of heavy-ion physics.
To clarify the nature of our approach it is necessary to contrast it with the standard model based on hydrodynamics. If hard and semihard partons produced in high-energy nuclear collisions are important in their effects on soft particles, then one should recognize that their in-medium radiated daughter partons take some time to thermalize, much longer than the rapid equilibration time ($\tau_0\sim 0.6$ fm/c) usually assumed in hydro calculations. A hard parton produced near the center of the medium in central collisions would take about 6 fm/c to reach the surface. Thus rapid thermalization is not realistic if minijets are important, as we shall show that they are at LHC. As a consequence, we cannot make use of hydro results in our approach, nor can hydro results be used to censure our calculations. For example, the thermal parton distribution that we consider is not to be identified with any distribution of the fluid constituents in the hydro medium. Also, in the hydro treatment $v_2$ is identified with elliptic flow, but it is only a possible, not a necessary, explanation. Other explanations are also possible; see, for example, Refs.\ \cite{h08,chy,hz3}. In this paper we consider only central collisions and establish the importance of shower partons at low momenta. It is suggested that a reader withhold comparison with hydro treatment until the main points advanced here can be made.
This paper is organized as follows: In Sec.\ II we show experimental features that motivate this investigation. Section III describes the general formulation of our approach to the problem. Shower parton distributions are discussed in detail in Sec.\ IV with emphasis on how the degradation of parton momenta is treated. With those partons shown to be dominant, the recombination of shower partons from nearby jets becomes a possibility that is considered in Sec.\ V. With all the basic inputs on partons at hand we then proceed to the determination of the transverse-momentum distributions of $\pi, p, K$ and $\Lambda$ in Sec.\ VI. Multi-strange hyperons and the $\phi$ meson are treated in Sec.\ VII with detailed equations given in the Appendices. Section VIII contains our conclusion.
\section{Motivating Experimental Features}
We show first some data from LHC that can be taken to suggest
something unusual about the usual observables. Compared to the
data at RHIC energies and below, it seems that simple
extrapolation to Pb-Pb collisions at 2.76 TeV is likely to miss
some new physics.
The charged-particle multiplicity density averaged over
$|\eta|<0.5$ for 0-5\% central collisions is shown in Fig.\ 1 as a
function of collision energy $\sqrt{s_{NN}}$ \cite{ps}. What is
notable is that a straight line can be drawn through all the points
in the semilog plot from $\sqrt{s_{NN}}=2.5$ GeV to 200 GeV, but
at 2.76 TeV the LHC data point deviates significantly from the
straight-line extrapolation. A power-law fit can be made to
connect the RHIC and LHC points for $\sqrt{s_{NN}}>10$ GeV, as shown in \cite{ka}, resulting in
the behavior $\propto s^{0.15}$, but that would overlook the
distinctive feature of the LHC point. The dramatic increase above
the logarithmic dependence suggests the onset of new physics.
\begin{figure}[tbph]
\vspace*{0.5cm}
\includegraphics[width=.7\textwidth]{fig1.eps}
\caption{(Color online) Inclusive charged particle multiplicity per participant pair as a function of collision energy, measured for the 6\% most central collisions, compared with the lower energy data. This figure is from Ref.\ \cite{ps}. }
\end{figure}
Another difference between LHC and RHIC is the dependence on $p_T$.
From the $p_T$ distributions measured at the two energies, 2.76
and 0.2 TeV, we can calculate their ratio. When the data points
are not in the same $p_T$ bin, we make Lagrangian interpolation between
adjacent bins in the RHIC data \cite{aa, xlz} to match the LHC bin \cite{mi, ba2}. The
result for pion is shown by the solid (black) line in Fig.\ 2. Note the
exponential increase by two orders of magnitude as $p_T$ is increased
up to 10 GeV/c. Similar increases are noted for
$p$ and $\Omega$ up to $p_T\sim 6$ GeV/c. The ratios are all around 2 for $p_T<1$ GeV/c, consistent with what we see in Fig.\ 1 where the LHC/RHIC ratio of the multiplicity densities at
mid-rapidity per participant-pair is $\approx 8.5/4\approx 2$. Of
course, most of the particles contributing to that ratio are pions with
$p_T \lesssim 1$ GeV/c. But for $p_T>2$ GeV/c, there are abundantly more
particles produced at LHC than at RHIC. It is not unexpected that
more high-$p_T$ particles are produced at higher collision energy.
The question is what effects the hard scatterings of partons
have on the production of intermediate-$p_T$ hadrons at $2<p_T<6$
GeV/c. Furthermore, it is reasonable to ask whether the physics at
low $p_T$ can be treated by hydrodynamics as at RHIC, totally
decoupled from the physics at high $p_T$. If jets are copiously
produced in order to account for a factor of $10^2$ at
$p_T\sim10$ GeV/c in Fig.\ 2, why would their fragmentation
products not populate the low-$p_T$ region below 2 GeV/c? Our
knowledge of fragmentation functions derived from leptonic
collisions tells us that the distribution of hadronic products
increases monotonically with decreasing momentum fraction \cite{kkp}.
\begin{figure}[tbph]
\vspace*{-.5cm}
\includegraphics[width=.8\textwidth]{fig2.eps}
\caption{(Color online) With $\rho_h(p_T)$ denoting $dN_h/dp_Td\eta|_{\eta\approx 0}$, the ratios $\rho_h^{\rm LHC}(p_T)/\rho_h^{\rm RHIC}(p_T)$ vs $p_T$ are shown for $h=\pi, p, \Omega$.
The data are from \cite{aa, xlz, mi, ba2}.}
\end{figure}
Finally, we show another plot of data from LHC that is thought-provoking. From the $p_T$ distributions of $p$ and $\Omega$
measured by ALICE \cite{mi, ba2}, we plot their ratio vs $p_T$ as shown by the solid (black) line in Fig.\ 3.
The general trend is an exponential rise up to the highest available $p_T$
with an increase of a factor of 10. The conventional understanding of hadrons produced at
$p_T\sim7$ GeV/c is via the fragmentation of hard-scattered gluons
or light quarks. However, $s$-quark jets are highly suppressed;
moreover, even if an $s$ quark is produced at high $p_T$, its
fragmentation into $\Omega$ is even more suppressed. To our
knowledge it has never been measured, let alone at $p_T=7$ GeV/c.
Figure 3 shows that the $\Omega/p$ ratio at RHIC, shown by the dashed (red) line, also grows exponentially until $p_T\approx 3$ GeV/c and then decreases slowly.
The phenomena at both energies are clearly calling for an explanation.
\begin{figure}[tbph]
\vspace*{-0.5cm}
\includegraphics[width=.8\textwidth]{fig3.eps}
\caption{(Color online) The ratio $\Omega/p$ vs $p_T$ for central collision at RHIC and LHC. The data are from \cite{aa, xlz, mi, ba2}.}
\end{figure}
\section{Formulation of the Problem}
To calculate the $p_T$ distribution at mid-rapidity for all
hadrons, we use the same formalism as described earlier for Au-Au
collisions at 200 GeV \cite{hy0, hy1, hz2} and for Pb-Pb collisions at
2.76 TeV \cite{hz1}, i.e., the recombination of thermal and shower
partons. We shall use an improved version of the treatment of
momentum degradation \cite{hy3} and adjust the degradation parameters
to fit the LHC data over a wider range of $p_T$. As a consequence,
the study in Ref.\ \cite{hz1} for $p_T<5$ GeV/c is superseded because
the inclusion of harder jets up to 30 GeV/c with less momentum
degradation results in a profusion of soft shower partons.
Furthermore, we shall include also the production of multi-strange
hadrons. The high density of shower partons introduces another
complication, which is the recombination of partons from different, but adjacent, jets. Although that component turns out not to be dominant, its
effect is not negligible and must be calculated so as to ascertain its magnitude.
The basic framework that describes the recombination of thermal and shower partons at midrapidity
is straightforward \cite{hy0, hy1, hz1, hz2}
\begin{eqnarray}
p^0{dN^M\over dp_T}&=&\int {dp_1\over p_1}{dp_2\over p_2} F_{q_1\bar q_2}(p_1,p_2) R_{q_1\bar q_2}^M(p_1,p_2,p_T) \label{31} \\
p^0{dN^B\over dp_T}&=&\int \left[\prod_{i=1}^3 {dp_i\over p_i} \right] F_{q_1q_2q_3}(p_1,p_2,p_3) {R}_{q_1q_2q_3}^B(p_1,p_2,p_3,p_T) \label{32}
\end{eqnarray}
The essence is in the details of what the symbols mean. The left-hand sides of Eqs.\ (\ref{31}) and (\ref{32}) are the invariant $p_T$ distributions of meson and baryon, respectively, averaged over $\eta$ at midrapidity and over all $\phi$.
They appear as invariant distributions in the 1D momentum space, but are derived from the invariant distribution in 3D as follows:
\begin{eqnarray}
p^0{dN\over dp_T} = p^0p_T{1\over \Delta y}\int_{\Delta y}dy{1\over 2\pi}\int_0^{2\pi}d\phi p^0{d^3N\over d^3p}
\end{eqnarray}
with $\Delta y$ being a narrow interval at $y\approx 0$, say from $-0.5$ to $+0.5$. Thus our formalism here is not framed to address the global properties of the nuclear collisions, such as total charge multiplicity or long-range correlation.
The parton momenta $p_i$ are the transverse momenta (with the subscript $T$ omitted) of the coalescing quarks. $R^{M}$ and $R^{B}$ are the recombination functions (RFs) for meson and baryon, respectively. The central issue in the formalism is the determination of the parton distributions $F_{q_1\bar{q}_2}$ and $F_{q_1q_2q_3}$ just before hadronization. Because we intend to treat hadrons produced in as wide a range of $p_T$ as experimental data on identified particles are available (for pion up to 20 GeV/c), we must consider partons that are produced in soft, semihard and hard scatterings. We group them into two classes of partons, thermal ($\rm T$) and shower ($\rm S$), and use $\cal T$ and $\cal S$ to denote their invariant distributions in $p_i$. Taking into account the recombination of different types of partons, we thus have
\begin{eqnarray}
F_{q_1\bar q_2}&=&{\cal TT+TS+SS} \label{33} \\
F_{q_1q_2q_3}&=&{\cal TTT+TTS+TSS+SSS} \label{34}
\end{eqnarray}
We do not commit ourselves to any specific hydrodynamical description of the soft partons, a position that is made more reasonable when, as will be seen below, low-$p_T$ hadrons can be strongly influenced by shower partons at LHC, thus rendering the hydro approach inadequate even at low $p_T$. This does not mean that we do not recognize the picture that the hot and dense medium created in heavy-ion collisions expands. We leave open the issues concerning equilibration time, viscosity, freeze-out dynamics, etc., since undetermined parameters cannot be adjusted to fit the data at low $p_T$ when the effects of shower partons cannot be ignored.
More specifically, we are concerned with the copious production of hard and semihard jets, whose initiating partons can take up to ${\cal O}(10)$ fm/c to reach the surface, depending on where they are created in the overlapping nuclei and in which directions they move
in the transverse plane. They can radiate gluons along their in-medium trajectories, and those gluons would take a long time to thermalize with the soft partons in the medium, longer than the short thermalization time $\tau_0\sim 0.6$ fm/c assumed in hydrodynamical treatment. The effects of such hard and semihard partons may be ignored in the soft region if they are rarely produced, as at lower collision energies. But at LHC they are important and cannot be neglected. If the basic tenets of hydro are not reliable, the notion of what is thermal must be liberated from the constraints of hydrodynamics.
The shower partons generated in the medium interact with the bulk partons, and cannot be distinguished from the latter by the time the density of all soft partons is low enough for hadronization. They are all referred to here as thermal partons in the final stage of the quark matter as they move out of the deconfinement phase. The shower partons that we consider are the fragmentation products of the hard and semihard partons that emerge from the surface after momentum degradation. They are distinguished from the
thermal partons that are in their environment. Those are the $\cal T$ and $\cal S$ in Eqs.\ (\ref{33}) and (\ref{34}).
We use a simple exponential form to represent the thermal parton distribution
\begin{eqnarray}
{\cal T}(p_1) = p_1{ dN^T_q\over dp_1} = Cp_1e^{-p_1/T} \label{35}
\end{eqnarray}
with the dimensionless prefactor $Cp_1$ necessary to yield pure exponential behavior for the pion distribution $dN^{\pi}/p_Tdp_T\propto C^2\exp(-p_T/T)$ arising from $\rm TT$ recombination only, as observed at RHIC \cite{hy1, hz3}. Thus $C$ has the dimension of inverse momentum.
The values of the parameters $C$ and $T$ will be discussed below.
When shower partons are important at low $p_T$, then $\rm TS$ and $\rm SS$ components need to be included. Nevertheless, we retain the form of $\mathcal T(p_1)$ in Eq.\ (\ref{35}) for the thermal component.
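To make the origin of the pure exponential explicit, note that the $\delta$ function in $R^{\pi}$ of Eq.\ (\ref{37}) reduces the $\rm TT$ integral in Eq.\ (\ref{31}) to $p^0dN^{\pi}/dp_T=(C^2/p_T)e^{-p_T/T}\int_0^{p_T}dp_1\,p_1(p_T-p_1)=(C^2/6)\,p_T^2\,e^{-p_T/T}$, so that for $p^0\simeq p_T$ the distribution $dN^{\pi}/p_Tdp_T$ is a pure exponential. A minimal numerical sketch of this step (ours, with an arbitrary value of $C$):
\begin{verbatim}
# TT recombination for pions: the delta function does the p2
# integral, leaving a one-dimensional integral over p1.
import numpy as np
C, T, pT = 1.0, 0.31, 2.0                       # GeV/c units
p1 = np.linspace(0.0, pT, 2001)
num = np.trapz(C**2 * np.exp(-pT/T) * p1 * (pT - p1) / pT, p1)
print(num, C**2 / 6 * pT**2 * np.exp(-pT/T))    # the two agree
\end{verbatim}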
The shower parton distribution after integration over jet momentum $ q$ and summed over all jets is
\begin{eqnarray}
{\cal S}^j(p_2)=\int {dq\over q}\sum_i \hat F_i(q) S_i^j(p_2, q), \label{36}
\end{eqnarray}
where $\hat F_i(q)$ is the distribution of a hard or semihard parton of type $i$ at the medium surface, after momentum degradation while traversing the medium but before fragmentation. $\hat F_i(q)$ was introduced previously for collisions at RHIC for any centrality \cite{hy3, hz3}, but will be modified below to suit our description of the physics at LHC. $S_i^j(z)$ is the unintegrated shower-parton distribution (SPD) in a jet of type $i$ fragmenting into a parton of type $j$ with momentum fraction $z$. It is determined from the fragmentation function (FF) on the basis that hadrons in a jet are formed by recombination of the shower partons in the jet \cite{hy4, hy5}. In particular, the recombination of a quark $j$ with an antiquark $\bar j$ in a jet of type $i$ forms a pion, for which the FF is $D_i^{\pi}(z_j+z_{\bar j})$. The numerical form for $S_i^j(z_j)$ can therefore be calculated from the data on $D_i^{\pi}$ and the RF for pion.
The RFs were introduced a long time ago \cite{dh, rh1} and have been applied successfully to many collision processes \cite{hy6, hy7, hy1, hy2, hz1, hz3}. Here for brevity we give only the RFs for pion and proton, leaving other hadrons to be specified later as the cases arise,
\begin{eqnarray}
R^{\pi}_{q\bar q}(p_1, p_2, p_T)&=&\frac{p_1p_2}{p_T}\delta(p_1+p_2-p_T), \label{37} \\
R^{p}_{uud}(p_1, p_2, p_3, p_T)&=&g_{st}^pg_p(y_1y_2)^{\alpha}y_3^{\beta}\delta(\sum\limits_iy_i-1),\qquad y_i=\frac{p_i}{p_T} , \label{38}
\end{eqnarray}
where $g_{st}=1/6$, $\alpha=1.75$, $\beta=1.05$, and
\begin{eqnarray}
g_p=[B(\alpha+1, \alpha+\beta+2)B(\alpha+1, \beta+1)]^{-1}, \label{39}
\end{eqnarray}
$B(a, b)$ being the Beta function.
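As a consistency check (an illustrative sketch, not part of the original derivation), $g_p$ in Eq.\ (\ref{39}) is simply the normalization of the momentum-fraction part of Eq.\ (\ref{38}) on the simplex $\sum_i y_i=1$:
\begin{verbatim}
# The Dirichlet-type integral of (y1 y2)^a (1-y1-y2)^b over the
# simplex y1 + y2 <= 1 equals B(a+1, a+b+2) B(a+1, b+1),
# so g_p times that integral is 1.
from scipy.special import beta
from scipy.integrate import dblquad
a, b = 1.75, 1.05
g_p = 1.0 / (beta(a + 1, a + b + 2) * beta(a + 1, b + 1))
I, _ = dblquad(lambda y2, y1: (y1*y2)**a * (1 - y1 - y2)**b,
               0, 1, 0, lambda y1: 1 - y1)
print(g_p * I)   # -> 1.0 up to quadrature accuracy
\end{verbatim}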
As a note of affirmation, we recall that with these RFs used in Eqs.\ (\ref{31}) and (\ref{32}), and considering only the $\cal{TT}$ ($\cal{TTT}$) component for pion (proton), we have been able to fit the pion and proton spectra for $1<p_T<2$ GeV/c in Au-Au collisions at 200 GeV \cite{ssa} with a common value of the inverse slope in Eq.\ (\ref{35}) \cite{hz3}.
For $p_T<1$ GeV/c there are resonance contributions that Eq.\ (\ref{31}) does not account for, while for $p_T>2$ GeV/c shower-parton contributions invalidate the approximation of $F_{q\bar q}$ and $F_{uud}$ by $\cal {TT}$ and $\cal{TTT}$, respectively. In the $1<p_T<2$ GeV/c interval one may find the excellent agreement with data surprising, when only the exponential form of Eq.\ (\ref{35}) is used for both pion and proton, since the proton data for $dN^p/p_Tdp_T$ are not exponential. However, it is precisely the momentum dependence in $R^p$ in Eq.\ (\ref{38}), and the fact that $p^0$ in Eq.\ (\ref{32}) is the transverse mass $m_T(p_T)$ at $y=0$, that cause $dN^p/p_Tdp_T$ to deviate from a pure exponential. The phenomenological success there gives strong support to the recombination model. As we shall see below, the dominance of $\rm TT$ and $\rm TTT$ recombination changes when the collision energy is increased tenfold, whereby $\rm TS$ and $\rm TTS$ can no longer be neglected. Thus the essence of this work is to calculate the effects of the shower partons in the low- and intermediate-$p_T$ region in collisions at LHC.
\section{Shower Parton Distributions}
Focusing on the shower partons, we see in Eq.\ (\ref{36}) that $\hat F_i(q)$ is the distribution to be determined for collisions at LHC, since $S_i^j(p_2, q)$ is the SPD outside the nucleon medium and is independent of the collision system; it has been determined previously
from FFs in vacuum \cite{hy4}. At any particular
impact parameter $b$, $\hat F _i(q, b)$ is the average over azimuthal angle $\phi$ of $\bar F_i(q, \phi, b)$, which has three essential parts \cite{hy3}
\begin{eqnarray}
\bar F_i(q, \phi, b)=\int d\xi P_i(\xi, \phi, b)\int dkkf_i(k)G(k, q, \xi), \label{41}
\end{eqnarray}
where $f_i(k)$ is the parton density in the phase space $kdk$ at the point of creation, $k$ being the initial momentum of the hard or semihard parton $i$, and $P_i(\xi, \phi, b)$ is the probability for the parton $i$ to have a dynamical path length $\xi$ at $\phi$ and $b$. The two parts are connected by $G(k, q, \xi)$
\begin{eqnarray}
G(k, q, \xi)=q\delta(q-ke^{-\xi}), \label{42}
\end{eqnarray}
which is the momentum degradation function, relating the initial parton momentum $k$ to the final momentum $q$ at the medium surface by an exponential decay in $\xi$, the length that carries all the geometrical and dynamical information of the process through $P_i(\xi, \phi, b)$. The details of calculating $P_i(\xi, \phi, b)$ are given in Ref.\ \cite{hy3} and summarized in the Appendices in Ref.\ \cite{hz2}. We shall recall the essence below in order to re-parametrize it for suitable use at LHC.
First, we need to state why we describe momentum degradation in the way outlined above without adopting the results obtained by pQCD in the literature. Because we intend to calculate the $p_T$ distributions of all hadrons from 1 to 20 GeV/c, we need to let $q$ in Eq.\ (\ref{36}) be integrated from low values in order for the shower partons to have momenta as low as 0.5 GeV/c. In practice, $q$ is integrated from 2 to 30 GeV/c. Low-order perturbative QCD is not reliable for virtuality less than 8 GeV/c, so the major portion of the contribution to the shower partons in the soft region cannot make use of the established theory. Furthermore, the usual calculation based on the DGLAP evolution equation concerns medium modification of the fragmentation function, while we need the shower-parton distributions for the purpose of recombination. The dependence on the medium is usually described in terms of entropy density and local flow velocity, which are hydrodynamical quantities tuned to fit low-$p_T$ data, which are exactly what we attempt to reproduce in addition to intermediate-$p_T$ data independent of fluid dynamics. For these reasons we use a phenomenological procedure that has been shown to generate the azimuthal and $p_T$ dependencies of $R_{AA}(\phi,p_T)$ at RHIC \cite{hy3} and can readily be extended to higher energy, as we now proceed to do.
The initial momentum distributions have been determined in Ref.\ \cite{sgf} for Au-Au collisions at 200 GeV and Pb-Pb collisions at 5.5 TeV. They are parametrized in the form
\begin{eqnarray}
f_i(k)=K\frac{A}{(1+k/B)^{\beta}}. \label{43}
\end{eqnarray}
We make logarithmic interpolations of the parameters between the two energies for $\ln A$, $B$ and $\beta$ and obtain for $\sqrt{s_{NN}}=2.76$ TeV the parameters shown in Table I with $K=2.5$.
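One plausible reading of this interpolation is sketched below; the 200 GeV and 5.5 TeV endpoint values shown are placeholders for illustration, not the actual parameters of Ref.\ \cite{sgf}:
\begin{verbatim}
# Interpolate ln A, B and beta linearly in ln(sqrt(s_NN))
# between 0.2 and 5.5 TeV to obtain 2.76 TeV values.
import numpy as np
lns = np.log([200.0, 5500.0])                        # GeV
def interp(y_rhic, y_lhc, s_target=2760.0):
    return np.interp(np.log(s_target), lns, [y_rhic, y_lhc])
A    = np.exp(interp(np.log(4.0e4), np.log(7.0e4)))  # placeholders
B    = interp(0.90, 1.00)
beta = interp(6.4, 6.1)
print(A, B, beta)
\end{verbatim}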
\begin{table}
\tabcolsep0.2in
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& $g$ & $u$ & $d$ & $\bar u$ & $\bar d$ & $s$, $\bar s$\\
\hline
$A$ [$10^4$/GeV$^2$] & 6.2 &1.138 &1.266 &0.24 &0.23& 0.093\\
$B$ [GeV]& 0.98 &0.687 &0.677 &0.87 &0.88 &1.05\\
$\beta$ & 6.22 &5.67 &5.66 &5.97 &5.99 &6.12\\
\hline
\end{tabular}
\caption{Parameters for $f_i(k)$ in Eq.\ (\ref{43}).} \label{table1}
\end{table}
The connection between geometry and dynamics is imbedded in the probability function $P_i(\xi,\phi,b)$.
The geometrical path length $\ell$, when written more fully, is
\begin{eqnarray}
\ell(x_0, y_0, \phi, b)=\int_0^{t_1(x_1, y_1)}dt D(x(t), y(t)) \label{44}
\end{eqnarray}
that is calculable from nucleon geometry. The transverse coordinate $(x_0, y_0)$ is the initial point of creation of a hard parton, and $(x_1, y_1)$ is the exit point. The integration is weighted by the local density, $D(x, y)$, along the trajectory, which is marked by the variable $t$ that does not denote time. As the medium expands, the end point $t_1(x_1, y_1)$ increases, but $D(x(t), y(t))$ decreases, so $\ell$ is insensitive to the details of expansion dynamics. The dynamical path length $\xi$ is proportional to $\ell$, but is to be averaged over all initial points $(x_0, y_0)$, i.e.,
\begin{eqnarray}
P_i(\xi, \phi, b)=\int dx_0dy_0Q(x_0, y_0, b)\delta(\xi-\gamma_i\ell(x_0, y_0, \phi, b)) \label{45}
\end{eqnarray}
where $Q(x_0, y_0, b)$ is the probability that a hard (or semihard) parton is produced at $(x_0, y_0)$, calculable from nucleon thickness functions \cite{hy3, hz2}. The only parameter that we cannot calculate is $\gamma_i$, which incorporates the effects of energy loss during the passage of the parton through the non-uniform and expanding medium. The average dynamical path length $\bar\xi_i$, defined by
\begin{eqnarray}
\bar\xi_i(\phi, b)=\int d\xi\xi P(\xi, \phi, b), \label{46}
\end{eqnarray}
depends on geometry, and is proportional to $\gamma_i$, as can readily be seen upon substituting Eq.\ (\ref{45}) into (\ref{46}). Thus, using Eqs.\ (\ref{41})-(\ref{45}), $\hat F_i(q, b)$ can be calculated once $\gamma_i$ are specified.
In treating hadron production at RHIC we have determined $\gamma_i$ in Ref.\ \cite{hz2} and obtained excellent fits of the $p_T$ distributions of $\pi, K, p$ for $p_T<10$ GeV/c at all centralities \cite{ph1, ph2, ph3, st1, ph4, st2}. We used $\gamma_g=0.14$ for gluon and $\gamma_q=0.07$ for all light quarks, their ratio being 2 as an approximation of the color factor $C_A/C_F=9/4$. Because $\bar\xi_i(\phi, b)\propto\gamma_i$, we have $\bar\xi_g(\phi, b)/\bar\xi_q(\phi, b)=2$, which directly implies that gluons on average lose the same fraction of momentum as quarks do in half the distance of traversal through the nucleon medium. That turned out to be an important factor in enabling us to reproduce both the pion and proton spectra because at intermediate $p_T$ pions are more affected by semihard gluon minijets, while protons are more so by quark minijets, due to their recombination characteristics \cite{hz2}.
To extend the treatment of momentum degradation to collisions at LHC, we cannot expect $\gamma_i$ to remain the same as at RHIC. It has been found that the nuclear modification factor $R_{AA}$ for Pb-Pb collisions at 2.76 TeV at 0-5\% centrality decreases rapidly from $p_T=2$ GeV/c to a minimum value of 0.13 at $p_T=$ 6-7 GeV/c, after which there is a significant rise, reaching $R_{AA}\approx 0.4$ for $p_T>30$ GeV/c \cite{raa}. Such data suggest that jet quenching becomes less severe at higher momentum, so $\gamma_i$ should decrease as the hard parton momentum increases. Hence, we parametrize $\gamma_g$ as
\begin{eqnarray}
\gamma_g(q)=\frac{\gamma_0}{1+q/q_0}, \label{47}
\end{eqnarray}
with $\gamma_0$ and $q_0$ to be determined by fitting the hadronic spectra in the intermediate $p_T$ region, and we continue to set $\gamma_q=\gamma_g/2$ as before. Although the $p_T$ distributions will not be computed until Sec.\ VI after several other issues are discussed, we give here the values $\gamma_0=0.8$ and $q_0=10$ GeV/c that will be determined there, so that our present discussion can proceed with concrete numerical specificity to show the nature of the physics involved. Furthermore, we shall hereafter be concerned with only the most central collisions (0-5\%). We shall therefore omit the symbol $b$ and perform all calculations with the appropriate range of impact parameter. Defining $\hat F_i(q)$ as the average of $\bar F_i(q, \phi)$ over $\phi$
\begin{eqnarray}
\hat F_i(q)=\frac{1}{2\pi}\int_0^{2\pi}d\phi\bar F_i(q, \phi), \label{48}
\end{eqnarray}
we can, using Eqs.\ (\ref{42})-(\ref{45}) and following the details discussed in Refs.\ \cite{hy2, hz2}, compute $\hat F_i(q)$ for all parton types $i$ listed in Table I, and for all $q<30$ GeV/c. Although the hadron transverse momentum $p_T$ will not exceed 20 GeV/c in our calculation, so that $p_2$ in Eqs.\ (\ref{31}) and (\ref{32}) is also less than that upper limit, it is necessary to consider higher values of $q$ because of the integration in Eq.\ (\ref{36}).
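Numerically, the momentum dependence of Eq.\ (\ref{47}) is simple to exhibit (an illustrative evaluation of the parametrization, ours):
\begin{verbatim}
# gamma_g(q) = gamma0/(1 + q/q0), with gamma0 = 0.8, q0 = 10 GeV/c,
# and gamma_q = gamma_g/2: quenching weakens as q grows.
gamma0, q0 = 0.8, 10.0
gamma_g = lambda q: gamma0 / (1.0 + q / q0)
for q in (2, 7, 20, 30):
    print(q, round(gamma_g(q), 3), round(gamma_g(q) / 2, 3))
\end{verbatim}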
In Fig.\ 4(a) we show $\hat F_g$ for gluon by the solid line, and in (b) $\hat F_i$ for $i=q$, $\bar q$ and $s$ by other line types, assuming that $\gamma_s=\gamma_q$, where the subscript $q$ denotes any of the light quarks. They are compared to $q^2f_{g, q}(q)$ for no momentum degradation (i.e., $\xi=0$) shown by the lines of open symbols. We recall that $f_i(k)$ is the initial parton distribution defined in the phase space $kdk$, while $\hat F_i(q)$ is the invariant distribution in $dq/q$. It is possible to see from Fig.\ 4 that the ratio $\hat F_i(q)/q^2f_i(q)$ increases with increasing $q$. That is a consequence of $\gamma_g(q)$ decreasing with $q$, as indicated in Eq.\ (\ref{47}).
\begin{figure}[tbph]
\vspace*{-0.5cm}
\includegraphics[width=.8\textwidth]{fig4.eps}
\vspace*{-1.4cm}
\caption{(Color online) Distribution of minijets at medium surface for 0-5\% centrality. Index $i$ denotes the parton type: (a) $i=g$ for gluon, (b) $i=q$, $\bar q$, $s$ (with $\bar s$ being treated the same as $s$). The line with open squares in (a) represents the distribution of gluons without momentum degradation; the line with open circles in (b) represents the same for light quarks.}
\end{figure}
For the application of $\hat F_i(q)$ in subsequent calculations, notably in Eq.\ (\ref{36}), it is convenient to have an explicit formula. We have been able to fit $\hat F_i(q)$ very well for all $i$ by using the Tsallis distribution \cite{ct}
\begin{eqnarray}
\hat F_i(q)=\hat A_i(1+\frac{q}{n_iT_i})^{-n_i}, \label{49}
\end{eqnarray}
where the parameters $\hat A_i$, $n_i$ and $T_i$ are given in Table II.
\begin{table}
\tabcolsep0.3in
\begin{tabular}{|c|c|c|c|}
\hline
$i$ & $\hat A_i$ & $n_i$ & $T_i$ [GeV/c]\\
\hline
$g$ & 8232 &3.07 &0.092 \\
$u$& 6352 &2.77 &0.051 \\
$d$ & 8090 &2.75 &0.048 \\
$\bar u$ & 437 &3.04 &0.116 \\
$\bar d$ & 407 &3.05 &0.118 \\
$s$, $\bar s$ & 133 &3.16 &0.172 \\
\hline
\end{tabular}
\caption{Parameters for $\hat F_i(q)$ in Eq.\ (\ref{49}).} \label{table2}
\end{table}
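To see how Eqs.\ (\ref{41}) and (\ref{42}) lead to distributions of the form of Eq.\ (\ref{49}), note that the $\delta$ function collapses the $k$ integral, leaving $\hat F_i(q)=\int d\xi\,P_i(\xi)\,q^2e^{2\xi}f_i(qe^{\xi})$. The sketch below evaluates this with the gluon row of Table I and a placeholder $P(\xi)$ (the actual $P_i(\xi)$ follows from Eq.\ (\ref{45}) and the nuclear geometry):
\begin{verbatim}
# Fhat(q) = Int dxi P(xi) q^2 exp(2 xi) f(q exp(xi)),
# after the delta function in G(k,q,xi) fixes k = q exp(xi).
import numpy as np
A, B, beta = 6.2e4, 0.98, 6.22        # gluon row of Table I (K omitted)
f = lambda k: A / (1.0 + k / B)**beta
xi = np.linspace(0.0, 2.5, 500)
P = np.exp(-(xi - 0.6)**2 / 0.1)      # placeholder P(xi), illustrative
P /= np.trapz(P, xi)
Fhat = lambda q: np.trapz(P * q**2 * np.exp(2*xi) * f(q*np.exp(xi)), xi)
for q in (2.0, 5.0, 10.0, 20.0):
    print(q, Fhat(q))   # falls like an inverse power, Tsallis-type
\end{verbatim}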
With $\hat F_i(q)$ now known explicitly, we can proceed to the calculation of $\mathcal{S}^j(p_2)$ in Eq.\ (\ref{36}). The SPDs $S_i^j(p_2, q)$ are derived in Ref.\ \cite{hy4} and summarized in Ref.\ \cite{hz2}. Since the fragmentation of hard and semihard partons into shower partons takes place outside the medium in our treatment, the structure of the SPDs is independent of the collision energy. Thus $\mathcal{S}^j(p_2)$ at LHC differs from that at RHIC only because $\hat F_i(q)$ is now enhanced, not because of any changes in $S_i^j(p_2, q)$. While $i$ in Eq.\ (\ref{36}) is summed over all parton types listed in Table II, $j$ will only be $u$, $d$, $s$ and their antiquarks because in our formalism of recombination gluons do not directly participate in hadronization. They are always converted to $q\bar q$ pairs first, which dress themselves before becoming the constituent quarks of the produced hadrons \cite{rh1}. The conversion of gluons to $q\bar q$ pairs is referred to as enhancing the sea for hadronization at large rapidity \cite{rh1, hy7}. Here at large $p_T$ the same concept of gluon conversion applies, except that instead of enhancing the sea each $q$ and $\bar q$ can participate in forming a hadron, but in single-particle inclusive distributions only the leading partons with large momentum fractions are considered in the calculation.
Before showing the result from calculating $\mathcal{S}^j(p_2)$, we note that in using Eq.\ (\ref{36}) in practice, apart from $q$ being integrated from $q=2$ to 30 GeV/c, as mentioned earlier, the SPD $S_i^j(p_2, q)$ is made to deviate from the scaling form $S_i^j(z)$ by our insertion of a cutoff factor $c_2(p_2)$
\begin{eqnarray}
S_i^j(p_2, q)=S_i^j(p_2/q)c_2(p_2), \label{410}
\end{eqnarray}
where
\begin{eqnarray}
c_2(p_2)=1-e^{-(p_2/p_c)^2}, \hspace{0.5cm} p_c=0.5~\mbox{GeV/c}. \label{411}
\end{eqnarray}
Such a factor is necessary to render the shower partons meaningful in the soft region, for otherwise the IR divergent FF, $D_i(p_T/q)$, as $p_T\to 0$, would lead to unrealistically large $S_i^j(p_2/q)$. This point is discussed in Appendix C of Ref.\ \cite{hz2}, where $c_2(p_2)$ is denoted by $\gamma_2(p_2)$. The value of $p_c$ in Eq.\ (\ref{411}) is chosen so that we can obtain a good fit of the proton spectrum at low $p_T$, as will be shown in Sec.\ VI. The situation here for LHC is different from that at RHIC, where the shower partons are less important than the thermal partons at low $p_2$, so the precise value of $p_c$ is not significant. At LHC $\mathcal S^j(p_2)$ is dominant throughout all $p_2$ so
without the cutoff $p_c$ the divergence of $S_i^j(p_2/q)$ as $p_2\to 0$ would lead to an unrealistically large hadronic distribution for $p_T<1$ GeV/c. By relinquishing any claim of reliability of our model predictions in the region $p_T<1$ GeV/c, we find that what we can calculate at $p_T>1$ GeV/c is insensitive to the precise value of $p_c$. We use $p_c=0.5$ GeV/c just to fit the proton spectrum at $p_T<1$ GeV/c.
Note that we use the proton distribution as the guide, not the pion, because there are resonance and other contributions to the pion distribution at very low $p_T$. The details will become clearer when the mathematical expressions for recombination are shown explicitly below.
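The structure of this calculation can be summarized in a short sketch (the scaling SPD shape below is a placeholder, not the actual SPD of Refs.\ \cite{hy4, hz2}, and only the gluon term of the sum over $i$ is kept):
\begin{verbatim}
# S(p2) = c2(p2) * Int (dq/q) Fhat(q) S(p2/q), q from 2 to 30 GeV/c.
import numpy as np
Fhat = lambda q: 8232.0*(1 + q/(3.07*0.092))**(-3.07)  # Table II, gluon
S_z  = lambda z: (1 - z)**2 / z                        # placeholder SPD
c2   = lambda p2, pc=0.5: 1 - np.exp(-(p2/pc)**2)
def S_shower(p2, qmin=2.0, qmax=30.0):
    q = np.linspace(max(qmin, 1.001*p2), qmax, 800)    # need p2/q < 1
    return c2(p2) * np.trapz(Fhat(q) * S_z(p2/q) / q, q)
for p2 in (0.5, 1.0, 2.0, 5.0):
    print(p2, S_shower(p2))
\end{verbatim}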
\begin{figure}[tbph]
\vspace*{-0.5cm}
\includegraphics[width=.8\textwidth]{fig5.eps}
\caption{(Color online) Thermal distribution $\mathcal{T}(p_1)$ is depicted by the dashed (blue) line for $T=0.31$ GeV. Shower parton distribution $\mathcal{S}^u$ is shown in solid (red) line with low-$p_1$ cutoff.}
\end{figure}
Substituting Eqs.\ (\ref{410}) and (\ref{411}) into (\ref{36}), we obtain the invariant shower-parton distribution $\mathcal S^j(p_2)$ after integrating over $q$ and summing over all initiating partons $i$. For $j=u$, it is shown in Fig.\ 5 by the solid (red) line, plotted against $p_2$ but labeled as $p_1$, since it is to be compared to the thermal parton distribution $\mathcal T(p_1)$ in the same figure. For $\mathcal T(p_1)$ we use Eq.\ (\ref{35}) with parameters $C$ and $T$ essentially the same as at RHIC, the details of which will be discussed in Sec.\ VI.
The $\mathcal T(p_1)$ distribution is shown by the dashed (blue) line in Fig.\ 5. Evidently, $\mathcal S(p_1)$ dominates over $\mathcal T(p_1)$ for all $p_1>0.5$ GeV/c. Hereafter, for the sake of brevity we omit the superscript of quark type $j$ in $\mathcal S^j(p_1)$, as we routinely do for $\mathcal T(p_1)$, when no confusion is likely to ensue. This is the most remarkable feature about the parton distribution at LHC. Although we cannot show the phenomenology based on these distributions until later, the dominance of $\mathcal S(p_1)$ is so important that it reorients our thinking about hadron production at low and intermediate $p_T$ from this point of our discussion onward. In essence, minijets are so copiously produced at LHC that their effects at low $p_T$ cannot be ignored, thus posing a substantive question on the meaningfulness of any hydrodynamical study without taking minijets into account.
To place Fig.\ 5 in the proper context, we show the ratio $\mathcal S/\mathcal T$ by the solid line in Fig.\ 6(a). It is substantially above 1 for $p_1>0.5$ GeV/c. For comparison the ratio for the partons at RHIC is shown by the dashed line in the same figure. Some aspects of the shower partons at RHIC are discussed in Appendix A. We see in Fig.\ 6(a) that $\mathcal S/\mathcal T$ at LHC is significantly larger than that at RHIC. Whereas the latter does not exceed 1 until $p_1$ is above 2 GeV/c, the former is almost always greater than 1. Since $\mathcal T$ is the same in both, the ratio of $\mathcal S/\mathcal T$ at LHC to that at RHIC is just $\mathcal{S}^{\rm LHC}/\mathcal{S}^{\rm RHIC}$, which is shown in Fig.\ 6(b), exhibiting a factor of 7 even at $p_1\approx1$ GeV/c. It is therefore reasonable to draw a connection between the enhancement of shower partons and the increase of average multiplicity in Fig.\ 1 in going from RHIC to LHC energies.
\begin{figure}[tbph]
\begin{minipage}[h]{0.49\textwidth}
\centering
\includegraphics[width=1\textwidth]{fig6a.eps}\label{fig6a}
\end{minipage}
\begin{minipage}[h]{0.49\textwidth}
\centering
\includegraphics[width=1\textwidth]{fig6b.eps}\label{fig6b}
\end{minipage}
\caption{(Color online) (a) The ratios of $\mathcal{S}/\mathcal{T}$ for LHC and RHIC at 0-5\% centrality. (b) The ratio of shower-parton distribution at LHC to that at RHIC.}
\end{figure}
\section{Two-jet Recombination}
Before we embark on the actual task of computing the inclusive distributions, we discuss an issue that should arise upon examining Fig.\ 5. We see in that figure that $\mathcal S$ is larger than $\mathcal T$ for all $p_1>0.5$ GeV/c, so one would expect the last terms $\mathcal{SS}$ and $\mathcal{SSS}$ in Eqs.\ (\ref{33}) and (\ref{34}) to be more important. However, those equations display only the schematic structure of the various components, and are adequate only as a general layout for use in Eqs.\ (\ref{31}) and (\ref{32}). Kinematic constraints on the shower-parton momenta, which will be shown in detail in the next section, result in the contributions from the $\mathcal{SS}$ and $\mathcal{SSS}$ terms being dominant only in the large-$p_T$ region. There is another type of shower-parton recombination that has not been discussed above; that is the subject of our consideration in this section.
In Refs.\ \cite{hy1, hz1, hz2} where $\rm SS$ recombination is considered, the shower partons arise from the same jet. (The same applies to $\rm SSS$ for baryons as well, but will not be reiterated.) Such a term is equivalent to fragmentation, since it is from the FF, $D_i^{\pi}(z)$, that the SPDs are derived in the first place \cite{hy4}. In view of the dominance of $\mathcal{S}(p_1)$ over $\mathcal{T}(p_1)$, it is reasonable to expect the integral of $\mathcal{S}(p_1)\mathcal{S}(p_2)$ to be larger than $\mathcal{T}(p_1)\mathcal{S}(p_2)$ when convoluted with the same RF, $R^{\pi}(p_1, p_2, p_T)$. At this point it is important for us to be more explicit with indices and distinguish one-jet and two-jet recombinations, which we shall denote by $\rm (SS)^{1j}$ and $\rm (SS)^{2j}$, respectively.
In Fig.\ 7 we show the diagrams in the transverse plane for three types of recombination: (a)
$\rm TS$, (b) $\rm (SS)^{1j}$ and (c) $\rm (SS)^{2j}$. In the notation of Eq.\ (\ref{42}), $k$ is the momentum of the hard or semihard parton at creation, and $q$ is the momentum at the medium surface. The thick red vectors have the dual role of representing the jet momentum in the medium and the degradation effect described by $G(k, q, \xi)$. The thinner red lines outside the medium are the semihard partons $q_j$, which can emit shower partons represented by the thinnest red lines denoted by $p_j$. The blue dashed arrows are thermal partons. Recombination is represented by a large black blob with the outgoing open arrow depicting the produced pion.
We emphasize that the shower parton lines are inclusive in the sense that only the ones contributing to the formation of the observed hadron are shown. In particular, a gluon generates a cluster of partons which cannot all be depicted. Thus quark types and baryon numbers cannot be recognized from the schematic diagrams. Furthermore, the
lengths and angles of the vectors are not drawn to scale due to the limitation in presenting the figures clearly, and should not be taken literally.
\begin{figure}[tbph]
\centering
\vspace*{-9cm}
\includegraphics[width=0.9\textwidth]{fig7.eps}
\vspace*{-.5cm}
\caption{(Color online) Schematic diagrams for parton recombination of (a) TS, (b) SS in one jet, and (c) SS in two jets. Thick (red) lines represent partons in medium, thin (red) lines partons out of medium, thinnest (red) lines shower partons, and dashed (blue) lines thermal partons.
All lines are inclusive in the sense described in the text.}
\end{figure}
Note that in Fig.\ 7(a) and (b) the hard or semihard partons are labeled by $i$, while in (c) the two partons are labeled by $i$ and $i'$. Therein lies the essential point that $\rm TS$ and $\rm{(SS)^{1j}}$ each involves only one jet of type $i$, while $\rm (SS)^{2j}$ involves two jets of types $i$ and $i'$. Thus for $\rm{TS}$ and $\rm{(SS)^{1j}}$ there is only one hard scattering contained in $\hat F_i(q)$, while for $\rm{(SS)^{2j}}$ there are two hard scatterings contained separately in $\hat F_i(q_1)\hat F_{i'}(q_2)$. More explicitly, but leaving out integration over $q$ and summation over $i$ for now (with full expression to be shown in the next section), we have
\begin{eqnarray}
\hat F_i(q)\widehat{TS}(q, p_T)&=&\int\frac{dp_1}{p_1}\frac{dp_2}{p_2}\hat F_i(q)\mathcal {T}^{\bar q}(p_1) {S}_i^q(p_2, q)R_{q\bar q}^{\pi}(p_1, p_2, p_T), \label{51} \\
\hat F_i(q)\widehat{SS}(q, p_T)&=&\int\frac{dp_1}{p_1}\frac{dp_2}{p_2}\hat F_i(q)\left\{{S}_i^{q}(p_1, q),{S}_i^{\bar q}(p_2, q)\right\} R_{q\bar q}^{\pi}(p_1, p_2, p_T) \nonumber \\
&=&\hat F_i(q)\frac{p_T}{q}D_i^{\pi}(p_T, q), \label{52}
\end{eqnarray}
while for $(\rm{SS})^{2j}$ we need to retain the $\phi$ variable in $\bar F_i(q, \phi)$ before it is averaged over $\phi$ in Eq.\ (\ref{48}):
\begin{eqnarray}
\widehat{\cal SS}^{2j}=\int\left[\prod\limits_{a=1}^2\frac{dp_a}{p_a}d\phi_a\right] \bar F_i(q_1, \phi_1)\bar F_{i'}(q_2, \phi_2){S}_i^{q}(p_1, q_1){S}_{i'}^{\bar q}(p_2, q_2){\bf R}_{\Gamma}^{\pi}(p_1, \phi_1, p_2, \phi_2, p_T, \phi). \label{53}
\end{eqnarray}
Because there are two initiating hard partons $i$ and $i'$, we need to integrate over their respective azimuthal angles $\phi_1$ and $\phi_2$, allowing the RF ${\bf R}_{\Gamma}^{\pi}$
to play the role of restricting $\phi_1$ and $\phi_2$ to be nearly equal for the coalescence process to take place. Non-parallel partons have large relative momentum transverse to $\vec{p}_1+\vec{p}_2$, which should not exceed the binding energy of the constituents of the hadron that is to be formed. That is different from large relative longitudinal momentum parallel to $\vec p_1+\vec p_2$, because in the parton model the momentum fractions of partons in a hadron can vary from 0 to 1.
The azimuthal angles $\phi_1$ and $\phi_2$ may be given by a Gaussian distribution in $|\phi_1-\phi_2|$ with an appropriate width. However, since $\phi_1$ and $\phi_2$ are integrated over in Eq.\ (\ref{53}), it is simpler to adopt a factorizable form that requires the partons to be parallel but with a suitable
normalization factor $\Gamma$ that we can estimate, i.e.,
\begin{eqnarray}
{\bf R}_\Gamma^\pi(p_1,\phi_1,p_2,\phi_2,p_T,\phi) = \Gamma\delta(\phi_1-\phi_2)\delta\left({\phi_1+\phi_2\over 2} - \phi\right) R^\pi(p_1,p_2,p_T),
\label{54}
\end{eqnarray}
where $\Gamma$ is the probability that two parallel partons can recombine. Since the partons are emitted from the medium at early times, we may
consider the emitting system as being a thin almond-shaped overlap region viewed from its side in the same transverse plane at midrapidity as where
the pion is detected. For 0-5\% centrality the almond is almost circular. The partons at $\phi_i$ are parallel, but can be emitted at any
distance from the center of the circle. Looking at the emitting source edgewise, it is essentially a one-dimensional system of width approximately 10
fm, which is slightly less than $2R_A$ since high-density partons are not likely to be emitted tangentially from the edges. The two parallel partons
should be separated by a distance not greater than the diameter of a pion ($\sim 1$ fm), given that the jets have some width. Thus our estimate for
$\Gamma$ is the ratio $\sim 1/10$. We do not see that any more elaborate analysis of the coalescence process can provide a more transparent
description of ${\bf R}_\Gamma^\pi$. Applying Eq.\ (\ref{54}) to Eq.\ (\ref{53}) we obtain, upon averaging over $\phi$,
\begin{eqnarray}
\widehat{\cal SS}^{2j}=\Gamma\int\frac{dp_1}{p_1}\frac{dp_2}{p_2}\hat F_i(q_1) {S}_i^{q}(p_1, q_1)\hat F_{i'}(q_2){{S}_{i'}^{\bar q}(p_2, q_2)}R^{\pi}(p_1, p_2, p_T). \label{55}
\end{eqnarray}
By comparing this equation with Eq.\ (\ref{52}) we see that the 2j contribution has an extra factor of $\Gamma\hat F_{i'}(q_2)$, with $p_2$ ranging from 0 to $q_2$. On the other hand, the symmetrization of the product of the two shower partons in the 1j contribution, when expressed in terms of the momentum fractions $x_i=p_i/q$, reveals the ranges $0<x_2<1-x_1$ and $0<x_1<1-x_2$ in the two terms
\begin{eqnarray}
\{ {S}_i(x_1), {S}_i(x_2)\}=\frac{1}{2}\left[{S}_i(x_1) S_i(\frac{x_2}{1-x_1})+S_i(x_2) S_i(\frac{x_1}{1-x_2})\right]. \label{56}
\end{eqnarray}
Thus, when two shower partons are in the same jet, the sum of their momenta, $p_1+p_2$, cannot exceed the jet momentum $q$. That is the kinematical restriction mentioned at the beginning of this section; it corresponds to the familiar condition $p_T<q$ in the FF $D_i^{\pi}(p_T, q)$ in Eq.\ (\ref{52}).
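To make this restriction concrete, the following short numerical sketch (our illustration; the toy form of $S(z)$ below is an assumption, not the fitted SPD) shows that the symmetrized same-jet product of Eq.\ (\ref{56}) vanishes automatically when $x_1+x_2>1$, whereas the two-jet product of Eq.\ (\ref{55}) does not:
\begin{verbatim}
# Toy check of the same-jet symmetrized shower product, Eq. (56):
# {S(x1), S(x2)} = (1/2) [S(x1) S(x2/(1-x1)) + S(x2) S(x1/(1-x2))].
# The toy SPD below is assumed for illustration only.

def S(z):
    """Toy shower-parton distribution on 0 < z < 1."""
    return 6.0 * z * (1.0 - z) ** 2 if 0.0 < z < 1.0 else 0.0

def sym_pair_1j(x1, x2):
    """Same-jet product; vanishes automatically for x1 + x2 >= 1."""
    t1 = S(x1) * S(x2 / (1.0 - x1)) if x1 < 1.0 else 0.0
    t2 = S(x2) * S(x1 / (1.0 - x2)) if x2 < 1.0 else 0.0
    return 0.5 * (t1 + t2)

def pair_2j(x1, x2):
    """Two-jet product: the two momentum fractions are independent."""
    return S(x1) * S(x2)

print(sym_pair_1j(0.6, 0.5))  # 0.0: one jet cannot supply x1 + x2 > 1
print(pair_2j(0.6, 0.5))      # 0.432: two jets can
\end{verbatim}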
Since the large-$q$ dependence of $\hat F_i(q)$ is a power law, as given explicitly in Eq.\ (\ref{49}), the $\rm{(SS)}^{1j}$ component dominates at high $p_T$, where the components involving the thermal partons (i.e., $\rm TT$ and $\rm TS$) are damped by the exponential behavior of $\mathcal{T}(p_1)$. The $\rm (SS)^{2j}$ component involves the product $\hat F_i(q_1)\hat F_{i'}(q_2)$ in Eq.\ (\ref{55}), so it is suppressed compared to $\rm (SS)^{1j}$, but by how much requires explicit calculation.
To take multi-jet recombination into account for the production of the proton, we show the terms in Eq.\ (\ref{34}) more explicitly, but still symbolically,
\begin{eqnarray}
F_{qqq} = {\cal TTT + TTS + T(SS)}^{1j} + {\cal (SSS)}^{1j} + {\cal T(SS)}^{2j} +[{\cal (SS)}^{1j}{\cal S}]^{2j} + {\cal (SSS)}^{3j}. \label{57}
\end{eqnarray}
Except for the first term, which does not involve any $\rm S$, the six remaining terms are depicted by the six diagrams in Fig.\ 8, respectively. The first three involve only one jet and are conventional. Figure 8(d) corresponds to Eq.\ (\ref{55}) plus one thermal parton, so the equation for it is
\begin{eqnarray}
\mathcal T \widehat{({\mathcal S\cal S})}^{2j}=\Gamma\int\frac{dp_1}{p_1}\frac{dp_2}{p_2}\frac{dp_3}{p_3}\mathcal T(p_1)\hat F_i(q_2) S_i^q(p_2, q_2)\hat F_{i'}(q_3) S_{i'}^{q'}(p_3, q_3)R^p(p_1, p_2, p_3, p_T). \label{58}
\end{eqnarray}
The last two figures can easily be obtained by straightforward generalization
\begin{eqnarray}
({\widehat{\mathcal {SSS}}})^{2j}&=&\Gamma\int\left[\prod\limits_{a=1}^3\frac{dp_a}{p_a}\right]\hat F_i(q_1) \left\{S_i^q(p_1, q_1), S_{i}^{q'}(p_2, q_1)\right\}\nonumber \\
&&\times \hat F_{i'}(q_2) S_{i'}^{q''}(p_3, q_2)R^p(p_1, p_2, p_3, p_T), \label{59} \\
({\widehat{\mathcal {SSS}}})^{3j}&=&\Gamma^2\int\left[\prod\limits_{a=1}^3\frac{dp_a}{p_a}\hat F_{i_a}(q_a) S_{i_a}^{q_a}(p_a, q_a)\right]R^p(p_1, p_2, p_3, p_T). \label{510}
\end{eqnarray}
Three-jet recombination is highly suppressed and will be neglected in the following.
\begin{figure}[tbph]
\centering
\vspace*{-3cm}
\includegraphics[width=0.8\textwidth]{fig8.eps}
\vspace*{-5cm}
\caption{(Color online) Diagrams
showing the inclusive processes
for proton production by recombination of partons with same line-types as in Fig.\ 7.}
\end{figure}
\section{Transverse Momentum Distributions of Hadrons}
We now calculate the $p_T$ distributions of $\pi, p, K$ and $\Lambda$ produced at $\eta\sim0$ and for 0-5\% centrality in Pb-Pb collisions at 2.76 TeV. They are based on the essential points discussed in the preceding sections, some of which have previously been applied to collisions at RHIC \cite{hy1, hz2}. We now consider LHC without changing the basic formalism. Although we have studied the $p_T$ spectra at LHC before \cite{hz1}, that study covered only a limited range of $p_T$ ($<5$ GeV/c) and was based on a simple assumption about momentum degradation, which we have since found to be unrealistic when the $p_T$ range is extended above 10 GeV/c. Our present treatment of momentum degradation, discussed in Sec.\ IV, enables us below to reproduce the data up to $p_T\sim20$ GeV/c, thus superseding the earlier parametrizations in \cite{hz1}. Nevertheless, we stress again that the basic equations are the same, as summarized in \cite{hz2}, except that a new $\gamma_g$ is to be adjusted to fit the data.
\subsection{Pion and proton production}
To be specific we consider the production of $\pi^+$
\begin{eqnarray}
{dN^{TT}_{\pi}\over p_Tdp_T} &=&\frac{C^2}{6}e^{-p_T/T} , \label{61}\\
{dN_{\pi}^{TS}\over p_Tdp_T} &=& {C\over p_T^3} \int_0^{p_T} dp_1 p_1e^{-p_1/T}
\left[{\cal S}^{u}(p_T-p_1) +{\cal S}^{\bar d}(p_T-p_1)\right] , \label{62} \\
{dN^{{SS}^{1j}}_{\pi}\over p_Tdp_T} &=& {1\over p_T} \int {dq\over q^2} \sum_i \hat{F}_i(q)D^{\pi}_i(p_T,q) , \label{63}\\
{dN_{\pi}^{{SS}^{2j}}\over p_Tdp_T} &=& {\Gamma\over p_T^3} \int_0^{p_T} dp_1 {\cal S}^{u}(p_1) {\cal S}^{\bar d}(p_T-p_1) . \label{64}
\end{eqnarray}
While the pion mass is neglected above, the proton mass is certainly not negligible, so $p^0$ in Eq.\ (\ref{32}) becomes the transverse mass $m_T^p=(m_p^2+p_T^2)^{1/2}$ for $\eta=0$. With the RF given in Eq.\ (\ref{38}), we have
\begin{eqnarray}
\frac{dN_p^{TTT}}{p_Tdp_T}=g_{st}^pg_pg_p'\frac{C^3p_T^2}{m_T^p}e^{-p_T/T}, \label{65}
\end{eqnarray}
where $g_p'=B(\alpha+2, \beta+2)B(\alpha+2, \alpha+\beta+4)$, $\alpha$ and $\beta$ being given after Eq.\ (\ref{38}), and
\begin{eqnarray}
{dN_p^{TTS}\over p_T dp_T}&=&{g_{st}^pg_p C^2\over m_T^p p_T^{2\alpha+\beta+3}} \int_0^{p_T} dp_1 \int_0^{p_T-p_1} dp_2\ e^{-(p_1+p_2)/T} \nonumber \\
&& \hspace{1cm} \times\left\{ (p_1p_2)^{\alpha+1}(p_T-p_1-p_2)^{\beta} {\cal S}^d(p_T-p_1-p_2)\right.\nonumber\\
&&\hspace{1cm}\left.+p_1^{\alpha+1}p_2^{\beta+1}(p_T-p_1-p_2)^{\alpha} {\cal S}^u(p_T-p_1-p_2)\right\}, \label{66}
\end{eqnarray}
\begin{eqnarray}
{dN_p^{{TSS}^{1j}}\over p_T dp_T}&=&{g_{st}^pg_p C\over m_T^p p_T^{2\alpha+\beta+3}} \int_0^{p_T} dp_1 \int_0^{p_T-p_1} dp_2\ e^{-p_1/T} \nonumber \\
&& \hspace{1cm} \times\left\{ p_1^{\beta+1}p_2^{\alpha}(p_T-p_1-p_2)^{\alpha} {\cal S}^{uu}(p_2,p_T-p_1-p_2)\right.\nonumber\\
&&\hspace{1cm}\left.+p_1(p_1p_2)^{\alpha}(p_T-p_1-p_2)^{\beta} {\cal S}^{ud}(p_2,p_T-p_1-p_2)\right\},\label{67}
\end{eqnarray}
\begin{eqnarray}
{dN_p^{{SSS}^{1j}}\over p_T dp_T}=\frac{1}{m_T^p}\int\frac{dq}{q^2}\sum\limits_i\hat F_i(q)D_i^p(p_T, q),\label{68}
\end{eqnarray}
where $\mathcal{S}^{qq}$ in Eq.\ (\ref{67}) is
\begin{eqnarray}
\mathcal{S}^{qq}(p_2, p_3)=\int\frac{dq}{q}\sum\limits_i\hat F_i(q){\rm S}_i^q(p_2, q){\rm S}_i^q(p_3, q-p_2).\label{69}
\end{eqnarray}
Equations (\ref{66})-(\ref{68}) correspond to Fig.\ 8(a)-(c). For 2-jet contributions in Fig.\ 8(d) and (e) we have
\begin{eqnarray}
{dN_p^{{TSS}^{2j}}\over p_T dp_T}&=&{g_{st}^pg_p C\Gamma\over m_T^p p_T^{2\alpha+\beta+3}} \int_0^{p_T} dp_1 \int_0^{p_T-p_1} dp_2\ e^{-p_1/T} \nonumber \\
&& \hspace{1cm} \times\left\{ p_1^{\beta+1}p_2^{\alpha}(p_T-p_1-p_2)^{\alpha} {\cal S}^u(p_2) {\cal S}^{u}(p_T-p_1-p_2)\right.\nonumber\\
&&\hspace{1cm}\left.+p_1(p_1p_2)^{\alpha}(p_T-p_1-p_2)^{\beta} {\cal S}^u(p_2) {\cal S}^{d}(p_T-p_1-p_2)\right\}, \label{610}
\end{eqnarray}
\begin{eqnarray}
{dN_p^{{SSS}^{2j}}\over p_T dp_T}&=&{g_{st}^pg_p\Gamma \over m_T^p p_T^{2\alpha+\beta+3}} \int_0^{p_T} dp_1 \int_0^{p_T-p_1} dp_2 \nonumber \\
&& \hspace{1cm} \times\left\{ p_1^{\beta}p_2^{\alpha}(p_T-p_1-p_2)^{\alpha} {\cal S}^d(p_1) {\cal S}^{uu}(p_2,p_T-p_1-p_2)\right.\nonumber\\
&&\hspace{1cm}\left.+(p_1p_2)^{\alpha}(p_T-p_1-p_2)^{\beta} {\cal S}^u(p_1) {\cal S}^{ud}(p_2,p_T-p_1-p_2)\right\}. \label{611}
\end{eqnarray}
The above equations describe the production of pion and proton in the recombination model for hadronization at the final stage of the nuclear collision process where the medium density is low. Since thermal partons represent the properties of the bulk medium at hadronization irrespective of the initiating system, we use for the normalization factor $C$ and inverse slope $T$ in Eq.\ (\ref{35}) the same values as at RHIC \cite{hy1}
\begin{eqnarray}
C=23.2\hspace{0.05cm} \mbox{GeV$^{-1}$}, \hspace{0.5cm} T=0.31\hspace{0.05cm}\mbox{GeV}. \label{612}
\end{eqnarray}
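As a quick numerical illustration of how steeply the thermal component falls (a sketch we add here; it follows directly from Eq.\ (\ref{61}) with the above values):
\begin{verbatim}
import math

# Thermal-thermal pion term, Eq. (61): dN/(pT dpT) = (C^2/6) exp(-pT/T),
# with C = 23.2 GeV^{-1} and T = 0.31 GeV from Eq. (612).
C, T = 23.2, 0.31

def dN_TT_pion(pT):
    return (C ** 2 / 6.0) * math.exp(-pT / T)

for pT in (1.0, 2.0, 3.0):
    print(pT, dN_TT_pion(pT))   # ~3.6, ~0.14, ~0.0057
# The factor exp(-1/0.31) ~ 0.04 per GeV/c is why TT becomes
# negligible above pT ~ 3 GeV/c.
\end{verbatim}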
To justify the use of these values for collisions at LHC, we recall first that in our treatment of hadronization the thermal distribution ${\mathcal T}(p_1)$ is not a quantity derived from hydrodynamical studies. At RHIC it is determined by fitting the pion distribution at $p_T<2$ GeV/c. Using Eqs.\ (\ref{35}) and (\ref{37}) in (\ref{31}), one obtains (\ref{61}) for TT recombination only, which yields the values of $C$ and $T$ in Eq.\ (\ref{612}) needed to reproduce the pion data at low $p_T$ \cite{hy1}, as can be seen in Fig.\ 17 in Appendix A below.
As mentioned earlier in Sec.\ III, the thermal partons include the soft partons generated by hard and semihard partons as they traverse the medium, which have thermalized with the bulk partons by the end of the deconfined phase. When those thermal partons are dilute enough and ready for confinement through recombination, their local properties are no longer sensitive to the collision system in which the medium was initially created.
The concept is consistent with the notion of universal hadrosynthesis, where statistical studies of hadron ratios have found a universality independent of the collision energy, analogous to water vapor condensing at 100$^{\circ}$C independent of how hot it has previously been.
$C$ and $T$ are local measures that carry no information about global properties, such as the rapidity range and overall multiplicities, which depend on the collision energy.
The distributions we study are at mid-rapidity, so the increase of the total multiplicity, due largely to the broadening of the rapidity plateau, is not of concern here. Our interest is in the increase of $\left.dN/d\eta\right|_{\eta\sim 0}$, which we claim is related to the increase of ${\cal S}^q(p_2)$ by demonstrating that the observed spectra can be reproduced in the RM. The thermal distribution\ ${\cal T}(p_1)$ was determined
at RHIC for low $p_1$, where ${\mathcal S}(p_1)$ is negligible; that same ${\cal T}(p_1)$ is now used at LHC. In Appendix B it is shown that the use of any values of $C$ and $T$ different from Eq.\ (\ref{612}) fails to reproduce the data at all $p_T$. We remark, parenthetically, that the value of $C$ above corresponds very well to the formula in Ref.\ \cite{hz2} that gives the centrality dependence
\begin{eqnarray}
C(N_{\rm part})=3.43N_{\rm part}^{0.32}, \label{613}
\end{eqnarray}
wherein we use $N_{\rm part}=383$ for 0-5\% in Pb-Pb collisions \cite{aam}.
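This correspondence can be checked in one line (our addition):
\begin{verbatim}
# C(N_part) = 3.43 * N_part^0.32 at N_part = 383 (0-5% Pb-Pb):
print(3.43 * 383 ** 0.32)   # ~23.0, consistent with C = 23.2 GeV^{-1}
\end{verbatim}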
It is reasonable to question why $C$ should remain the same as at RHIC, when more partons are produced at LHC, even though $T$ is the same at hadronization. Our answer is that our formalism is inadequate for an accurate treatment of hadron formation at very low $p_T$, i.e., $p_T<1$ GeV/c. The values of $C$ and $T$ in Eq.\ (\ref{612}) are used for calculating the spectra for $p_T>1$ GeV/c. At lower $p_T$ our pion distribution is below the data, which is undoubtedly related to the extra low-$p_1$ partons created at LHC that we cannot easily include in our parametrization. Besides, there are resonance contributions to the pion spectrum that we have not accounted for.
We recall that in order to tame the soft shower-parton distributions from minijets we need to introduce a cut-off parameter $p_c$ in the SPD $S_i^j(p_2, q)$ in Eq.\ (\ref{49}).
The value of $p_c$ is determined mainly by keeping the proton distribution bounded for $p_T<1$ GeV/c, since pions have the resonance and other contributions mentioned above that are not included in Eqs.\ (\ref{61})-(\ref{63}). Nevertheless, the results are not sensitive to $p_c$; its value of 0.5 GeV/c is essentially chosen as a reasonable one. Such a cutoff in the shower parton ${\cal S}^q(p_3)$ for $p_3<0.5$ GeV/c cannot affect the outcome of the dominant TTS contribution in the range $1<p_T<5$ GeV/c (to be seen in Fig.\ 10 below), because at small $p_3$ we see in Eq.\ (\ref{66}) that $p_1+p_2=p_T-p_3$ must be greater than 0.5 GeV/c, so the integral is suppressed by the exponential factor $e^{-(p_1+p_2)/T}$ in the integrand.
The other parameters, $\gamma_0$ and $q_0$, in Eq.\ (\ref{47}) for the $q$-dependent gluon degradation factor $\gamma_g(q)$
are crucial in our attempt to find a good fit of both the $\pi$ and $p$ distributions at all $p_T$ up to 20 GeV/c. That makes good physical sense, since the degradation of hard- and semihard-parton momenta is a central theme of heavy-ion physics at LHC. Our study here reveals how important minijets are in explaining the observed hadron spectra at all $p_T$.
With the choice
\begin{eqnarray}
\gamma_0=0.8 \hspace{0.5cm}\mbox{and}\hspace{0.5cm} q_0=10\ \mbox{GeV/c} \label{614}
\end{eqnarray}
we calculate the pion distribution for $0<p_T<20$ GeV/c and obtain the different components shown in Fig.\ 9 by different line types, although only the region $p_T>1$ GeV/c is reliable. Their sum, shown by the black crossed line, agrees very well with the data from ALICE \cite{kslambda} for $p_T>1$ GeV/c. The solid black line includes what we cannot calculate and is put in by hand to raise the distribution to fit the data at $p_T<1$ GeV/c.
We note that $\rm TS$ is larger than $\rm TT$ for $p_T>1$ GeV/c. The total falls below the data points at $p_T>15$ GeV/c. Some further adjustment of $\gamma_g(q)$ at very high $q$ could repair that deficiency by raising $\rm SS^{1j}$ there, but that degree of fine tuning is not our interest here, since our focus is on the interplay among the different components at low and intermediate $p_T$. The 2-jet component $\rm SS^{2j}$ is too small to be significant; nevertheless, it is interesting to observe that $\rm SS^{2j}$ has very nearly the same magnitude as $\rm SS^{1j}$ at $p_T\approx2.5$ GeV/c. That is not the situation at RHIC, as can be seen in Fig.\ 17 in Appendix A, where $\rm SS^{2j}$ is much smaller than $\rm SS^{1j}$ at all $p_T$. The difference owes its origin to the relative sizes of $\mathcal{S}(p_1)$ shown in Figs.\ 6(a) and (b). Since recombination is dominated by shower partons in the dense region, i.e., at low $p_1$, two such partons from nearby jets can contribute as much as two from a single jet.
\begin{figure}[tbph]
\vspace*{-0.5cm}
\includegraphics[width=.8\textwidth]{fig9.eps}
\caption{(Color online) Transverse momentum distribution of pion produced in Pb-Pb collision at $\sqrt{s_{NN}}=2.76$ TeV. Data are from \cite{mi} for centrality 0-5\%. }
\end{figure}
\begin{figure}[tbph]
\vspace*{-0.5cm}
\includegraphics[width=.8\textwidth]{fig10.eps}
\caption{(Color online) Transverse momentum distribution of proton produced in Pb-Pb collision at $\sqrt{s_{NN}}=2.76$ TeV. Data are from \cite{mi} for centrality 0-5\%. }
\end{figure}
Without changing any parameters we calculate the proton distribution shown in Fig.\ 10. It also agrees with the data \cite{kslambda} extremely well. Note that the $\rm TTS$, $\rm TSS^{1j}$, $\rm TSS^{2j}$ and $\rm SSS^{1j}$ components are all of similar magnitude at $p_T\approx6$ GeV/c; together they lift the total to meet the data points. That feature is unique among hadronization models. As with the pion distribution, $\rm TTS$ is larger than $\rm TTT$ for $p_T>1$ GeV/c, demonstrating again that the soft shower partons play an important role at low $p_T$. Furthermore, one sees that $\rm SSS^{2j}\approx SSS^{1j}$ around $p_T\approx 3$ GeV/c, just as $\rm SS^{2j}\approx SS^{1j}$ for pions, although they are all much smaller than $\rm TTS$ and $\rm TS$, respectively.
With the results shown in Figs.\ 9 and 10 we regard our main objective as accomplished. It is non-trivial to reproduce the data over such a wide range of $p_T$, and it is remarkable that the main adjustable input is just the momentum degradation factor $\gamma_g(q)$ in Eq.\ (\ref{47}). The values obtained for $\gamma_0$ and $q_0$ in Eq.\ (\ref{614}) are good not only for the $\pi$ and $p$ distributions, but also for all other particles, as we shall show below. Thus the result strongly supports the assertion that minijet production plays the dominant role in the structure of the hadronic spectra. The corresponding shower partons have already been exhibited in Fig.\ 5, together with a discussion of their dominance over thermal partons for nearly all $p_1$.
\subsection{$K$ and $\Lambda$ production}
Proceeding to the production of strange particles, we use the same formalism as for pion and proton, except that the $s$ quark, being more massive than the light quarks, requires separate attention. For the thermal $s$ quarks we use the same distribution as in Eq.\ (\ref{35})
\begin{eqnarray}
\mathcal{T}^s(p_1)=Cp_1e^{-p_1/T_s} \label{615}
\end{eqnarray}
but with a different inverse slope $T_s$, which is the only parameter we adjust to fit the data. Since the $s$-quark mass, $m_s$, does not appear explicitly in Eq.\ (\ref{615}), and since $T_s$ may be regarded as an effective temperature at the time of hadronization, the fluid velocity may raise $T_s$ above $T$ (for light quarks). The $s$ shower-parton distribution $\mathcal{S}^s(p_2)$ is as given in Eq.\ (\ref{36}), with the unintegrated SPD $S_i^j(z)$ determined from the FFs into $K$ and $\Lambda$ \cite{hy1,AKK08}. The degradation of the $s$-quark momentum is taken to be the same as for the other quarks, i.e., $\gamma_s=\gamma_q=\gamma_g/2$.
With the RF for the kaon given in Refs.\ \cite{hy8,hy9}, we have for the $K^+$ distributions
\begin{eqnarray}
{dN^{TT}_{K}\over p_Tdp_T} &=&
{12C^2\over m_T^Kp_T^5} \int_0^{p_T} dp_1 p_1(p_T-p_1)^2p_1e^{-p_1/T}(p_T-p_1)e^{-(p_T-p_1)/T_s} , \label{616}\\
{dN_{K}^{TS}\over p_Tdp_T} &=& {12C\over m_T^Kp_T^5} \int_0^{p_T} dp_1 p_1^2(p_T-p_1)^2 \nonumber \\
&&\times \left[e^{-p_1/T}{\cal S}^{\bar s}(p_T-p_1) +\left({p_T\over p_1}-1\right)e^{-(p_T-p_1)/T_s}{\cal S}^u(p_1)\right] , \label{617} \\
{dN^{{SS}^{1j}}_{K}\over p_Tdp_T} &=& {1\over m^K_T} \int {dq\over q^2} \sum_i \hat{F}_i(q)D^{K}_i(p_T,q)
, \label{618}\\
{dN_{K}^{{SS}^{2j}}\over p_Tdp_T} &=& {12\Gamma\over m_T^Kp_T^5} \int_0^{p_T} dp_1 p_1(p_T-p_1)^2 {\cal S}^{u}(p_1) {\cal S}^{\bar s}(p_T-p_1) . \label{619}
\end{eqnarray}
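To indicate how such components are evaluated in practice, here is a minimal numerical sketch (our addition) of the purely thermal term, Eq.\ (\ref{616}); the kaon mass value used is an assumed input:
\begin{verbatim}
import math

# Kaon TT component, Eq. (616): dN/(pT dpT) = (12 C^2 / (mT_K pT^5)) *
#   Int_0^pT dp1 p1^2 (pT - p1)^3 exp(-p1/T) exp(-(pT - p1)/Ts)
C, T, Ts = 23.2, 0.31, 0.34
mK = 0.494                      # assumed kaon mass in GeV

def dN_TT_kaon(pT, n=2000):     # simple Riemann sum over p1
    mT = math.hypot(mK, pT)
    dp = pT / n
    s = sum((i*dp)**2 * (pT - i*dp)**3
            * math.exp(-(i*dp)/T - (pT - i*dp)/Ts) * dp
            for i in range(1, n))
    return 12.0 * C**2 * s / (mT * pT**5)

for pT in (1.0, 2.0, 3.0):
    print(pT, dN_TT_kaon(pT))
\end{verbatim}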
With $T_s$ as the only adjustable parameter, we obtain for
\begin{eqnarray}
T_s=0.34\hspace{0.1cm}\mbox{GeV} \label{620}
\end{eqnarray}
the distribution shown in Fig.\ 11. Evidently, the data from ALICE \cite{kslambda} are well reproduced. The value of $T_s$ is slightly higher than $T$ in Eq.\ (\ref{612}). As with pions, the $\rm TS$ component is greater than $\rm TT$ for $p_T>0.5$ GeV/c. Although $\mathcal{S}^s(p_1)$ is suppressed relative to $\mathcal{S}^u(p_1)$, the $\bar su$ recombination sustains the $\rm TS$ component. However, $\rm{SS}^{1j}$ is clearly much lower at low $p_T$ than it is for the pion in Fig.\ 9. Note that $\rm{SS}^{2j}$ is again very close to $\rm{SS}^{1j}$ at $p_T\approx2$ GeV/c.
For $\Lambda$ production we use Eq.\ (\ref{615}) again for the thermal $s$ quarks, but allow $T_s$ to be different from the value in Eq.\ (\ref{620}). Appendix C contains the explicit distributions of the various components. With the choice
\begin{eqnarray}
T_s^{\Lambda}=0.42\hspace{0.05cm}\mbox{GeV} \label{621}
\end{eqnarray}
we obtain the result shown in Fig.\ 12. The data \cite{kslambda} are reproduced very well. The physics is clearly very much the same as for $\pi, p$ and $K$. The value of $T_s^{\Lambda}$ is higher because $m_{\Lambda}$ is higher, although how the thermal partons depend on the quark mass is not specified explicitly. We stress that the momentum degradation parameters have not been readjusted, so the hard-parton and minijet distributions $\hat F_i(q)$ are the same as described in Sec.\ IV, independent of the hadrons produced. Thus the recombination model has enabled us to calculate the spectra of all strange and non-strange hadrons at all $p_T$ in a universal formalism.
\begin{figure}[tbph]
\vspace*{-0.5cm}
\includegraphics[width=.8\textwidth]{fig11.eps}
\caption{(Color online) Transverse momentum distribution of kaon produced in Pb-Pb collision at $\sqrt{s_{NN}}=2.76$ TeV. Data are from \cite{kslambda} for centrality 0-5\%. }
\end{figure}
\begin{figure}[tbph]
\vspace*{-0.5cm}
\includegraphics[width=.8\textwidth]{fig12.eps}
\caption{(Color online) Transverse momentum distribution of $\Lambda$ produced in Pb-Pb collision at $\sqrt{s_{NN}}=2.76$ TeV. Data are from \cite{kslambda} for centrality 0-5\%. }
\end{figure}
\section{Multi-strange hyperons and meson}
We complete our investigation of hadron production by considering $\Xi$, $\Omega$ and $\phi$. Apart from the different quark contents of those particles, the physics of hadronization through recombination is the same as before. Since they cannot be used either as target or beam particles, their wave functions in terms of the momentum fractions of the constituent quarks are not known as firmly as those of $\pi, K$ and $p$. Furthermore, there is the question of the probability for more than one strange quark to find one another to recombine. As the system expands, the plasma gets out of chemical equilibrium first as the temperature is lowered, because the $gg\to s\bar s$ and $q\bar q\to s\bar s$ processes become less frequent than their reverses on account of $m_s>m_q>m_g$. Thus the density of $s$ quarks becomes lower.
The language used above is that of the conventional interpretation of the expanding medium getting out of chemical equilibrium. We need not subscribe to the details of that description, while still adhering to the qualitative physical picture of the system that has general validity. Thus we proceed in the same manner as we have for $\pi$ and $p$.
For a single $s$ quark hadronizing at late times there are abundant light quarks in the neighborhood with which to form a $K$ or $\Lambda$. For a multi-strange hadron to form, however, the probability for $ss$, $sss$ or $s\bar s$ to be in close proximity to one another at late times is reduced, since the density of $s$ quarks is lower than that of the light quarks.
If, at earlier times, $\Xi$, $\Omega$ and $\phi$ are formed at higher density, their survival in the medium is suppressed owing to their dissociation through interaction with the still-active plasma. Thus in either case the rate of multi-strange hadron production is lower. We cannot predict that rate in the recombination model, so an adjustable parameter will be used to fit the overall normalization; that is in addition to the inverse slope $T_s$, since each particle has its own hadronization time and mass effect on the effective temperature. On the other hand, the density of shower partons arising from hard and semihard partons is independent of the final hadrons formed, so we can still use our formalism to calculate the various components of the $p_T$ distributions.
The detailed equations for $\Xi$ and $\Omega$ formation are given in Appendices D and E, respectively. The only free parameters we use in each case are $g_h$ and $T_s$. For the best fits we obtain
\begin{eqnarray}
\Xi: \hspace{0.5cm}g_{\Xi}=6\times10^{-3},\hspace{0.3cm}T_s=0.46\hspace{0.1cm}\mbox{GeV}, \label{71}
\end{eqnarray}
\begin{eqnarray}
\Omega: \hspace{0.5cm}g_{\Omega}=9\times10^{-4},\hspace{0.3cm}T_s=0.51\hspace{0.1cm}\mbox{GeV}. \label{72}
\end{eqnarray}
The results are shown in Figs.\ 13 and 14, reproducing the data very well. There are, however, some differences in the strengths of different components, even though the shower partons are the same in all cases.
\begin{figure}[tbph]
\includegraphics[width=.8\textwidth]{fig13.eps}
\caption{(Color online) Transverse momentum distribution of $\Xi$ produced in Pb-Pb collision at $\sqrt{s_{NN}}=2.76$ TeV. Data are from \cite{ba2} for centrality 0-10\%. }
\end{figure}
\begin{figure}[tbph]
\vspace*{-0.5cm}
\includegraphics[width=.8\textwidth]{fig14.eps}
\caption{(Color online) Transverse momentum distribution of $\Omega$ produced in Pb-Pb collision at $\sqrt{s_{NN}}=2.76$ TeV. Data are from \cite{ba2} for centrality 0-10\%. }
\end{figure}
What is most noticeable about the $\Xi$ distribution is that the $\rm{TTS}$ component dominates the whole spectrum for $p_T>1$ GeV/c, while the $\rm{TSS}$ and $\rm{SSS}$ components are much lower. The relative strengths of those components are unlike the situation for the proton and $\Lambda$. Whereas the $\rm S$ in $\rm{TTS}$ can be non-strange, $\rm{TSS}$ must have at least one $s$ in the $\rm SS$, and $\rm{SSS}$ must have two $s$ quarks. Since $S^s$ is suppressed compared to $S^q$, the ordering of $\rm{TTS}$, $\rm{TSS}$ and $\rm{SSS}$ is evident in Fig.\ 13. Moreover, $\rm{TSS}^{1j}$ and $\rm{TSS}^{2j}$ have roughly the same magnitude; so do $\rm{SSS}^{1j}$ and $\rm{SSS}^{2j}$.
For $\Omega$ production, shown in Fig.\ 14, similar remarks about the ordering of the various components can be made as for $\Xi$. One notable difference is that this time even $\rm{TTS}$ is suppressed relative to $\rm{TTT}$. That is because every coalescing quark for $\Omega$ must be strange, so $S^s$ in $\rm{TTS}$ lowers its magnitude relative to $\rm{TTT}$. Herein lies a very interesting point that was noticed several years ago already in the RHIC data \cite{ja,hw1}. The $p_T$ distribution of $\Omega$ is exponential
(apart from the prefactor $p_T^2/m_T^\Omega$ in Eq.\ (\ref{D1}))
without any power-law up-bending at high $p_T$. It means that $\Omega$ is produced thermally even at $p_T\sim 6$ GeV/c, without any contribution from parton fragmentation, which is the usual mechanism considered in pQCD. Neither can hydrodynamics be applied to particle production at such high $p_T$. In recombination each $s$ quark need only be at $p_T<2$ GeV/c on average. Our thermal partons at $T_s=0.51$ GeV imply that $\Omega$ is formed earlier than the other hyperons. In fact, it is of interest to exhibit the dependence of $T_s$ on the number $n_s$ of strange quarks in the hyperons. Figure 15 shows a linear increase from $\Lambda$ to $\Omega$, which is therefore non-linear if plotted against the hyperon masses, since
$m_{\Xi}-m_{\Lambda}=200\hspace{0.05cm}\mbox{MeV}\ \mbox{and}\ m_{\Omega}-m_{\Xi}=367\hspace{0.05cm}\mbox{MeV}.$
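A short numerical sketch (our addition; the hyperon masses below are assumed values, quoted to three digits) makes the linearity in $n_s$, and the non-linearity in mass, explicit:
\begin{verbatim}
import numpy as np

# Ts (GeV) against the number of s quarks: Lambda, Xi, Omega.
ns = np.array([1, 2, 3])
Ts = np.array([0.42, 0.46, 0.51])
slope, intercept = np.polyfit(ns, Ts, 1)
print(slope, intercept)           # ~0.045 GeV per s quark: nearly linear

# Against the hyperon masses (GeV, assumed values) the local slopes
# differ, so the same trend is non-linear in mass.
m = np.array([1.116, 1.322, 1.673])
print(np.diff(Ts) / np.diff(m))   # ~[0.19, 0.14]: unequal
\end{verbatim}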
\begin{figure}[tbph]
\vspace*{-0.5cm}
\includegraphics[width=.8\textwidth]{fig15.eps}
\caption{Linear dependence of $T_s$ on the number $n_s$ of strange quarks in hyperons.}
\end{figure}
A comparison between Figs.\ 10 and 14 reveals the drastic difference in the composition of the components contributing to $p$ and $\Omega$. For $p$ the all-thermal TTT component is unimportant compared to TTS, TSS and SSS, while for $\Omega$ TTT is the only dominant component. If we were to compare only the TTT components of $p$ and $\Omega$, their ratio $(\Omega/p)^{\rm TTT}$ would rise exponentially in $p_T$. Using $T=0.31$ GeV and $T_s=0.51$ GeV, that ratio rises by 3 orders of magnitude if only the exponential factors are considered, neglecting the multiplicative factors. In reality, as we have seen in Fig.\ 3, the ratio at LHC rises by only a factor of 10. The reason is, of course, the dominance of the $q$ shower partons in the production of the proton, as is evident from Fig.\ 10, where fragmentation is not important until $p_T>7$ GeV/c. On the other hand, the $s$ shower partons are unimportant for the production of $\Omega$, which can adequately be described by the exponential behavior of TTT alone. In Fig.\ 3 we have noted the difference between LHC and RHIC in the $p_T$ dependence of $\Omega/p$. While $\Omega$ production at RHIC is also mainly TTT and thus exponential \cite{hy2, hw1}, Fig.\ 18 in Appendix A shows that the $p_T$ distribution of the proton at RHIC has a transition from TTT to TTS in the region $3< p_T< 4$ GeV/c. That accounts for the saturation of $\Omega/p$ in that region in Fig.\ 3. That transition is absent in Fig.\ 10 for LHC, hence no saturation is seen for LHC in Fig.\ 3. All these inter-related phenomena can be traced to a simple source, namely: $q$ shower partons are abundant at LHC, but $s$ shower partons are not.
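The estimate quoted above can be checked in a few lines (our addition):
\begin{verbatim}
import math

# Ratio of the exponential factors exp(-pT/Ts)/exp(-pT/T) entering
# the TTT terms of Omega and p, with T = 0.31 and Ts = 0.51 GeV.
T, Ts = 0.31, 0.51
for pT in (0.0, 3.0, 6.0):
    print(pT, math.exp(pT * (1.0/T - 1.0/Ts)))
# ~1, ~45, ~2e3: about 3 orders of magnitude by pT = 6 GeV/c,
# versus the factor ~10 actually seen; the difference is supplied
# by the q shower partons feeding the proton yield.
\end{verbatim}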
Lastly, we consider the production of $\phi$, for which the equations are given in Appendix F. Since no light quarks are involved in the formation of either $\Omega$ or $\phi$, we use the same value of $T_s$ for both, i.e., $T_s=0.51$ GeV. Varying only $g_{\phi}$ for the overall normalization, we obtain the result shown in Fig.\ 16 for $g_\phi=0.07$. The underlying components are very similar to those for $\Omega$, namely: TT dominates over TS, while SS (whether 1j or 2j) is nearly 2 orders of magnitude lower.
\begin{figure}[tbph]
\vspace*{-0.5cm}
\includegraphics[width=.8\textwidth]{fig16.eps}
\caption{(Color online) Transverse momentum distribution of $\phi$ produced in Pb-Pb collision at $\sqrt{s_{NN}}=2.76$ TeV. Data are from \cite{ba2} for centrality 0-10\%. }
\end{figure}
The small value of $g_\phi$ is an indication of quarkonium suppression: $\phi$ is formed at a time much earlier than $\pi$, when the density of $s$ (and $\bar s$) quarks is higher. As is the case with $J/\psi$ suppression, $\phi$ experiences dissociation by the plasma as it traverses the remaining portion of the medium before the system completely hadronizes. The value of $g_\phi$ depends on aspects of the process that are not included in the formalism discussed in this paper, and therefore cannot be predicted. The same remarks apply to the formation of $\Xi$ and $\Omega$, for which $g_\Xi$ and $g_\Omega$ are also quite small in Eqs.\ (\ref{71}) and (\ref{72}).
\section{Conclusion}
We have made a thorough study of the production of all identified hadrons in Pb-Pb collisions at LHC in a formalism that displays all the components of thermal- and shower-parton recombination. The degradation of the momenta of hard and semihard partons is treated in a way that uses two free parameters, which are determined by fitting the high-$p_T$ distribution of the pion. The resultant shower-parton distributions of $q$ and $s$ quarks are then used to calculate the spectra of all hadrons ($\pi, K, p, \Lambda$, $\Xi$, $\Omega$ and $\phi$). They agree well with the data at all $p_T$ up to 20 GeV/c. The description not only establishes a consistent scheme for treating the hadronization process of a quark-gluon plasma at LHC, but also points out the importance of the effects of minijets on the pion and proton distributions at low and intermediate $p_T$ --- yet not at all on the $\phi$ and $\Omega$ distributions, at the other end of the spectrum in strangeness content.
The dominance of light shower partons over the thermal partons in nearly the whole range of parton momenta is an observation we make on the basis of an adopted form of the thermal parton distribution $\mathcal{T}(p_1)$. While the shower parton distribution $\mathcal{S}(p_1)$ can be calculated, we have no dynamical scheme to calculate $\mathcal{T}(p_1)$, which describes the final stage of the evolution of the dense medium, dilute enough to enter the confinement process. Since hadronization is insensitive to the initial process in which the dense medium is created, we have used the $\mathcal{T}(p_1)$ determined at RHIC, where thermal partons dominate the low-$p_T$ region of all particles produced. The use of that $\mathcal{T}(p_1)$ in our treatment at LHC is justified by the fact that the ALICE data on the $\pi$ and $p$ distributions at low $p_T$ are well reproduced by our results, in which the $\rm{TS}$ and $\rm{TTS}$ components dominate. Any more (or fewer) thermal partons would not have resulted in satisfactory fits of the low-$p_T$ data, since the density of soft shower partons is constrained by the fragmentation of hard and semihard jets. It is therefore meaningful to compare $\mathcal{S}(p_1)$ with $\mathcal{T}(p_1)$ and arrive at the conclusion that there are far more soft shower partons than thermal partons at LHC. It then follows that any theoretical treatment of hadrons produced at low $p_T$ would be incomplete without taking the effects of minijets into account. In particular, the parameters in the hydrodynamical formalism cannot be determined by phenomenology in the soft sector without also including the soft partons from minijets.
It may be of interest to mention here that there is a phenomenological two-component model in which the hard component exerts such a strong influence on the production of pions in the low-$p_T$ region that the validity of the hydrodynamical treatment of soft hadrons is questioned \cite{trainor}. Although the physical basis for that observation may share some common ground with what we have found here (despite the very different languages and concepts used), it should be emphasized that our shower partons are dominant only at LHC, whereas Ref.\ \cite{trainor} contends that the hard component is important in the soft region even at RHIC.
The dominance of $\mathcal S(p_1)$ over $\mathcal T(p_1)$ for the production of $\pi$ and $p$ does not apply to $\phi$ and $\Omega$. The $s$ quarks in the shower are suppressed, so $\rm{TS}$ and $\rm{TTS}$ are lower than $\rm{TT}$ and $\rm{TTT}$, respectively. The other particles ($K,\Lambda$ and $\Xi$), with lower strangeness content, are in the intermediate situation. The recombination of thermal partons as the mechanism for the production of $\phi$ and $\Omega$ is therefore a satisfactory explanation of their $p_T$ distributions up to 6.5 GeV/c, a range that is too high for hydrodynamics and in which the yield is too abundant for fragmentation.
A serious consequence of our conclusion that shower partons dominate over thermal partons is its implication for azimuthal anisotropy in non-central collisions. The usual explanation is that the azimuthal harmonics are due to the flow effects of the fluctuations of the initial configuration of the collision system. If, however, non-flow effects such as minijets are important, the fluid treatment would be inadequate on the one hand, and our approach would need suitable extension to be convincing on the other. For Au-Au collisions at 200 GeV, we have shown that the azimuthal harmonics can be obtained by taking into account the azimuthal dependence of minijets and the related ridge effect \cite{hz3}. For Pb-Pb collisions at 2.76 TeV we have so far investigated only central collisions. To extend the study to non-central collisions is, of course, the natural problem to pursue next. How minijets influence the azimuthal asymmetry will undoubtedly be a major area of investigation. The considerations described here represent only the first, but a significant, step toward understanding the physics of hadronization at LHC.
\section*{Acknowledgment}
This work was supported, in part, by the NSFC of China under Grant No.\ 11205106 and by the U.\ S.\ Department of Energy under Grant No. DE-FG02-96ER40972.
\begin{appendix}
\section{Hadron Distribution at RHIC Revisited}
Although the problem of hadron production at RHIC has been extensively studied previously \cite{hy1, hz2}, we have made progressive improvements in the treatment of momentum degradation. In order to make a sensible comparison between the LHC and RHIC results, we recalculate here the pion and proton distributions at RHIC, using the same description of the effects of energy loss on the shower partons as in Sec.\ IV.
\begin{figure}[tbph]
\vspace*{-0.5cm}
\includegraphics[width=.8\textwidth]{fig17.eps}
\caption{(Color online) Transverse momentum distribution of pion produced in Au-Au collision at $\sqrt{s_{NN}}=200$ GeV. Data are from \cite{aa} for centrality 0-10\%. }
\end{figure}
\begin{figure}[tbph]
\vspace*{-0.5cm}
\includegraphics[width=.8\textwidth]{fig18.eps}
\caption{(Color online) Transverse momentum distribution of proton produced in Au-Au collision at $\sqrt{s_{NN}}=200$ GeV. Data are from \cite{aa} for centrality 0-10\%. }
\end{figure}
The basic difference between what we do now and what was done in Ref.\ \cite{hz2} is that $\gamma_g(q)$ is $q$ dependent, as given in Eq.\ (\ref{47}). Keeping $T=0.31$ GeV as in Eq.\ (\ref{612}), as well as in Ref.\ \cite{hy1}, we vary $\gamma_0$ to find the best fit of the $\pi$ distribution in Au-Au collisions at 200 GeV for 0-10\% centrality, with $q_0=10$ GeV/c fixed, as in Eq.\ (\ref{614}). The initial parton distributions $f_i(k)$ are as given in Ref.\ \cite{sgf}, and the recombination equations are the same as those in Sec.\ VI. With $\gamma_0=0.6$, we obtain the results shown in Fig.\ 17 for the pion and Fig.\ 18 for the proton, which are evidently very good. Comparing Fig.\ 17 to the pion distribution at LHC in Fig.\ 9, one can see the drastic difference in $\rm TS$ relative to $\rm TT$ between the two cases. At RHIC $\rm TS$ crosses $\rm TT$ at $p_T\approx3$ GeV/c, whereas at LHC the crossing occurs at $p_T\approx0.5$ GeV/c. The latter is a consequence of $\mathcal S(p_1)>\mathcal T(p_1)$ for $p_1>0.5$ GeV/c, shown in Fig.\ 5. In contrast, at RHIC that cross-over does not occur until $p_1>2$ GeV/c, as shown in Fig.\ 19. The ratio $\mathcal S/\mathcal T$ has already been previewed in Fig.\ 6(a). Thus at RHIC $\mathcal S(p_1)$ is a factor of 7 lower than at LHC for $p_1<2$ GeV/c. The low density of shower partons makes the hydrodynamical treatment of thermal partons sensible without concern for minijets, which is not the case at LHC.
\begin{figure}[tbph]
\includegraphics[width=.8\textwidth]{fig19.eps}
\caption{(Color online) Thermal distribution $\mathcal{T}(p_1)$ for Au-Au collisions at $\sqrt{s_{NN}}=200$ GeV is depicted by the dashed (blue) line for $T=0.31$ GeV, while the shower parton distribution $\mathcal{S}^u$ is shown by the solid (red) line with low-$p_1$ cutoff.}
\end{figure}
\section{Thermal Parton Distribution}
The thermal parton distribution\ is given in Eq.\ (\ref{35}) and the parameters $C$ and $T$ are given in Eq.\ (\ref{612}). Section VI-A contains an extensive discussion of why the thermal distribution\ ${\cal T}(p_1)$ remains the same at LHC as at RHIC. In short, at late times, when the bulk system is ready for hadronization, its local properties at midrapidity are insensitive to its early history, except in the very low-$p_1$ region ($<0.5$ GeV/c), where the thermal partons enhanced by the energy lost by the semihard partons to the medium become even more enhanced at LHC. In this Appendix we show that different sets of higher values of $C$ and $T$ lead to $p_T$\ distributions of $\pi$ and $p$ that are unacceptable for $p_T>1$ GeV/c.
All the equations we use to calculate the pion and proton spectra are as before, namely: Eq.\ (\ref{35}) for ${\cal T}(p_1)$, (\ref{36}) for ${\cal S}(p_3)$, (\ref{61})-(\ref{64}) for $dN_\pi/p_T dp_T$, and (\ref{65})-(\ref{611}) for $dN_p/p_T dp_T$. The only changes are in the parameters $C$ and $T$. For our demonstration here, we use the four combinations of $C=23.2$ and 30 GeV$^{-1}$, and $T=0.31$ and 0.4 GeV. The results are shown in Figs.\ 20 and 21. The solid black lines are the ones corresponding to the universal values of $C$ and $T$ given in Eq.\ (\ref{612}). The other three lines are for larger values of either $C$, or $T$, or both. Evidently, all of them far exceed the data in the $p_T$\ range shown and must be rejected. For the sake of clarity we have not exhibited the components $TT, TS, \cdots, TTT, TTS, \cdots$, etc., for each case; however, it is obvious that thermal-shower recombination raises the contribution at intermediate $p_T$\ significantly above the data when ${\cal T}(p_1)$ is increased. We have not changed ${\cal S}(p_2)$, so the SS and SSS terms are not affected and remain the only dominant terms when $p_T$\ is high enough.
Our conclusion is therefore that, with the shower parton distribution\ ${\cal S}(p_2)$ fixed by the phenomenology at high $p_T$, only the thermal parton distribution\ described by the values of $C$ and $T$ given in Eqs.\ (\ref{35}) and (\ref{612}) can reproduce the $p_T$\ spectra of $\pi$ and $p$ for $p_T>1$ GeV/c.
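The trend can be previewed with the TT term alone (our addition; only the thermal factor of Eq.\ (\ref{61}) is used, normalized to the universal set):
\begin{verbatim}
import math

# Relative size of the pion TT term (C^2/6) exp(-pT/T) at pT = 2 GeV/c
# for the four parameter sets of Figs. 20 and 21.
sets = [(23.2, 0.31), (30.0, 0.31), (23.2, 0.40), (30.0, 0.40)]
pT = 2.0
ref = (23.2**2 / 6.0) * math.exp(-pT / 0.31)
for C, T in sets:
    print(C, T, round((C**2 / 6.0) * math.exp(-pT / T) / ref, 2))
# 1.0, 1.67, 4.27, 7.14: raising C or T inflates the thermal (and
# thermal-shower) terms at intermediate pT, hence the overshoot.
\end{verbatim}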
\begin{figure}[tbph]
\includegraphics[width=.8\textwidth]{fig20.eps}
\caption{(Color online) Pion distributions for four sets of values of $C$ and $T$.}
\end{figure}
\begin{figure}[tbph]
\includegraphics[width=.8\textwidth]{fig21.eps}
\caption{(Color online) Proton distributions for four sets of values of $C$ and $T$.}
\end{figure}
\section{$p_T$ Distribution of $\Lambda$ at LHC}
The $p_T$ distribution of $\Lambda$ is very similar to that of the proton, except for the replacement of a $u$ quark by an $s$ quark. The thermal and shower parton distributions for $s$ are different from those for $u$, and the RF for $\Lambda$ is different from that for $p$. For $\mathcal{T}^s(p_1)$ we use the same form as Eq.\ (\ref{615}), but allow $T_s$ to be adjustable. $\mathcal{S}^s(p_2)$ is the same as used for $K$ production in Sec.\ VI-B. The RF for $\Lambda$ has the same form as Eq.\ (\ref{38}) for the proton, but with $\alpha=1$ and $\beta=2$, as used for strange-particle production at RHIC in Ref.\ \cite{hy9}. We simply list below the equations for the various components.
\begin{eqnarray}
{dN_{\Lambda}^{TTT}\over p_T dp_T}&=&{g_{st}^{\Lambda}g_{\Lambda}C^3\over m_T^p p_T^{2\alpha+\beta+3}} \int_0^{p_T} dp_1 \int_0^{p_T-p_1} dp_2 \nonumber \\
&& \hspace{1cm} \times (p_1p_2)^{\alpha+1}\ e^{-(p_1+p_2)/T}(p_T-p_1-p_2)^{\beta+1}\ e^{-(p_T-p_1-p_2)/T_s}, \label{B1}
\end{eqnarray}
\begin{eqnarray}
{dN_{\Lambda}^{TTS}\over p_T dp_T}&=&{g_{st}^{\Lambda}g_{\Lambda}C^2 \over m_T^p p_T^{2\alpha+\beta+3}} \int_0^{p_T} dp_1 \int_0^{p_T-p_1} dp_2 \nonumber \\
&& \hspace{1cm} \times\left\{p_1p_2\ e^{-(p_1+p_2)/T} (p_1p_2)^{\alpha}(p_T-p_1-p_2)^{\beta}{\cal S}^s(p_T-p_1-p_2)\right.\nonumber\\
&&\hspace{1cm}\left.+p_1\ e^{-p_1/T}p_2\ e^{-p_2/T_s} p_1^{\alpha}p_2^{\beta}(p_T-p_1-p_2)^{\alpha} {\cal S}^u(p_T-p_1-p_2)\right\},
\label{B2}
\end{eqnarray}
\begin{eqnarray}
{dN_{\Lambda}^{TSS^{1j}}\over p_T dp_T}&=&{g_{st}^{\Lambda}g_{\Lambda}C \over m_T^{\Lambda} p_T^{2\alpha+\beta+3}} \int_0^{p_T} dp_1 \int_0^{p_T-p_1} dp_2 \nonumber \\
&& \hspace{1cm} \times\left\{ p_1\ e^{-p_1/T_s} p_1^{\beta}p_2^{\alpha}(p_T-p_1-p_2)^{\alpha} {\cal S}^{ud}(p_2,p_T-p_1-p_2)\right.\nonumber\\
&&\hspace{1cm}\left.+p_1\ e^{-p_1/T} (p_1p_2)^{\alpha}(p_T-p_1-p_2)^{\beta} {\cal S}^{ds}(p_2,p_T-p_1-p_2)\right\},
\label{B3}
\end{eqnarray}
\begin{eqnarray}
{dN_{\Lambda}^{{TSS^{2j}}}\over p_T dp_T}&=&{g_{st}^{\Lambda}g_{\Lambda}C \Gamma\over m_T^{\Lambda} p_T^{2\alpha+\beta+3}} \int_0^{p_T} dp_1 \int_0^{p_T-p_1} dp_2\ \nonumber \\
&& \hspace{1cm} \times\left\{ p_1e^{-p_1/T_s}p_1^{\beta}p_2^{\alpha}(p_T-p_1-p_2)^{\alpha}{\cal S}^{u}(p_2){\cal S}^{d}(p_T-p_1-p_2) \right.\nonumber\\
&&\hspace{1cm}\left.+p_1e^{-p_1/T}(p_1p_2)^{\alpha}(p_T-p_1-p_2)^{\beta}{\cal S}^{d}(p_2){\cal S}^{s}(p_T-p_1-p_2) \right\},
\label{B4}
\end{eqnarray}
\begin{eqnarray}
{dN_{\Lambda}^{{SSS}^{1j}}\over p_T dp_T}=\frac{1}{m_T^{\Lambda}}\int\frac{dq}{q^2}\sum\limits_i\hat F_i(q)D_i^{\Lambda}(p_T, q),
\label{B5}
\end{eqnarray}
\begin{eqnarray}
{dN_{\Lambda}^{{SSS^{2j}}}\over p_T dp_T}&=&{g_{st}^{\Lambda}g_{\Lambda}\Gamma\over m_T^{\Lambda} p_T^{2\alpha+\beta+3}} \int_0^{p_T} dp_1 \int_0^{p_T-p_1} dp_2 \nonumber \\
&& \hspace{1cm} \times\left\{ p_1^{\beta}p_2^{\alpha}(p_T-p_1-p_2)^{\alpha}{\cal S}^{s}(p_1) {\cal S}^{ud}(p_2,p_T-p_1-p_2)\right.\nonumber\\
&&\hspace{1cm}\left.+(p_1p_2)^{\alpha}(p_T-p_1-p_2)^{\beta} {\cal S}^{u}(p_1){\cal S}^{ds}(p_2,p_T-p_1-p_2)\right\}.
\label{B6}
\end{eqnarray}
The statistical factor is $g_{st}^{\Lambda}=1/8$, and the prefactor from the RF is $g_{\Lambda}=[B(\alpha+1, \alpha+\beta+2)B(\alpha+1, \beta+1)]^{-1}$. The corresponding FFs, $D_i^{\Lambda}(z)$, are given by AKK \cite{AKK08} from fits to the data at next-to-leading order (NLO).
\section{$p_T$ Distribution of $\Xi$ at LHC}
For the recombination of $dss$ to form $\Xi$ we make the simplifying assumption that the RF is proportional to $\delta$-functions, i.e., $\prod\limits_{i=1}^3\delta(p_i-p_T/3)$. It is then straightforward to write down the distributions
\begin{eqnarray}
{dN^{TTT}_{\Xi}\over p_Tdp_T} = {g_{\Xi}C^3p_T^2\over 27m^{\Xi}_T}e^{-p_T/3T}e^{-2p_T/3T_s} , \label{C1}
\end{eqnarray}
\begin{eqnarray}
{dN^{TTS}_{\Xi}\over p_Tdp_T} = {g_{\Xi}C^2p_T\over 9m^{\Xi}_T}\left\{{e^{-\frac{p_T} {3} (\frac{1}{T}+\frac{1}{T_s})}} \mathcal S^s(p_T/3)+e^{-2p_T/3T_s}\mathcal S^u(p_T/3)\right\} , \label{C2}
\end{eqnarray}
\begin{eqnarray}
{dN^{{TSS}^{1j}}_{\Xi}\over p_Tdp_T} = {g_{\Xi}C\over 3m^{\Xi}_T}\left\{e^{-\frac{p_T} {3T} } \mathcal S^{ss}(p_T/3, p_T/3)+e^{-\frac{p_T} {3T_s} }\mathcal S^{us}(p_T/3, p_T/3)\right\} , \label{C3}
\end{eqnarray}
\begin{eqnarray}
{dN^{{TSS}^{2j}}_{\Xi}\over p_Tdp_T} = {g_{\Xi}C\Gamma\over 3m^{\Xi}_T}\left\{e^{-\frac{p_T} {3T} } \mathcal S^{s}(p_T/3)\mathcal S^{s}(p_T/3)+e^{-\frac{p_T} {3T_s} }\mathcal S^{u}(p_T/3)\mathcal S^{s}(p_T/3)\right\} , \label{C4}
\end{eqnarray}
\begin{eqnarray}
{dN^{{SSS}^{1j}}_{\Xi}\over p_Tdp_T} = {g_{\Xi}\over {p_Tm^{\Xi}_T}} \mathcal S^{uss}(p_T/3, p_T/3, p_T/3) , \label{C5}
\end{eqnarray}
\begin{eqnarray}
{dN^{{SSS}^{2j}}_{\Xi}\over p_Tdp_T} ={ g_{\Xi}\Gamma\over p_Tm^{\Xi}_T}\Big\{\mathcal S^{u}(p_T/3)\mathcal S^{ss}(p_T/3, p_T/3) +\mathcal S^{s}(p_T/3)\mathcal S^{us}(p_T/3, p_T/3)\Big\} , \label{C6}
\end{eqnarray}
where
\begin{eqnarray}
\mathcal{S}^{uss}(p_1,p_2,p_3)=\int\frac{dq}{q}\sum\limits_i\hat F_i(q)S_i^u(p_1,q)S_i^s(p_2,q-p_1)S_i^s(p_3,q-p_1-p_2). \label{C7}
\end{eqnarray}
\section{$p_T$ Distribution of $\Omega$ at LHC}
With the RF for $\Omega$ assumed to be $\prod\limits_{i=1}^3\delta(p_i-p_T/3)$, as for $\Xi$, the distributions for the different components are the simplest of all the baryons, since all the constituent quarks are the same. We have
\begin{eqnarray}
{dN^{TTT}_{\Omega}\over p_Tdp_T} = {g_{\Omega}C^3p_T^2\over 27m^{\Omega}_T}e^{-p_T/T_s} , \label{D1}
\end{eqnarray}
\begin{eqnarray}
{dN^{TTS}_{\Omega}\over p_Tdp_T} = {g_{\Omega}C^2p_T\over 9m^{\Omega}_T}e^{-2p_T/3T_s} \mathcal S^s(p_T/3) , \label{D2}
\end{eqnarray}
\begin{eqnarray}
{dN^{{TSS}^{1j}}_{\Omega}\over p_Tdp_T} = {g_{\Omega}C\over 3m^{\Omega}_T}e^{-p_T/3T_s}\mathcal S^{ss}(p_T/3,p_T/3) , \label{D3}
\end{eqnarray}
\begin{eqnarray}
{dN^{{TSS}^{2j}}_{\Omega}\over p_Tdp_T} ={ g_{\Omega}C\Gamma\over 3m^{\Omega}_T}e^{-p_T/3T_s} \mathcal S^s(p_T/3) \mathcal S^s(p_T/3) , \label{D4}
\end{eqnarray}
\begin{eqnarray}
{dN^{{SSS}^{1j}}_{\Omega}\over p_Tdp_T} = {g_{\Omega}\over {p_Tm^{\Omega}_T}} \mathcal S^{sss}(p_T/3, p_T/3, p_T/3) , \label{D5}
\end{eqnarray}
\begin{eqnarray}
{dN^{{SSS}^{2j}}_{\Omega}\over p_Tdp_T} = {g_{\Omega}\Gamma\over p_Tm^{\Omega}_T} \mathcal S^s(p_T/3)\mathcal S^{ss}(p_T/3,p_T/3). \label{D6}
\end{eqnarray}
Apart from the prefactor that involves $p_T^2/m_T^{\Omega}$, the $\rm{TTT}$ term is a pure exponential. If it is dominant, then the $p_T$ dependence of Eq.\ (\ref{D1}) is a direct test of the validity of our description of $\Omega$ production.
\section{$p_T$ Distribution of $\phi$ at LHC}
As for $\Omega$, the distributions for $\phi$ are simple when the RF is taken to be $\prod\limits_{i=1}^2\delta(p_i-p_T/2)$ for $s\bar s$ recombination. One gets
\begin{eqnarray}
{dN^{TT}_{\phi}\over p_Tdp_T} = {g_{\phi}C^2p_T\over 4m^{\phi}_T}e^{-p_T/T_s} , \label{E1}
\end{eqnarray}
\begin{eqnarray}
{dN^{TS}_{\phi}\over p_Tdp_T} = {g_{\phi}C\over 2m^{\phi}_T}e^{-p_T/2T_s} \mathcal S^s(p_T/2) , \label{E2}
\end{eqnarray}
\begin{eqnarray}
{dN^{{SS}^{1j}}_{\phi}\over p_Tdp_T} = {g_{\phi} \over p_Tm^{\phi}_T}\mathcal S^{s\bar s}(p_T/2, p_T/2) , \label{E3}
\end{eqnarray}
\begin{eqnarray}
{dN^{{SS}^{2j}}_{\phi}\over p_Tdp_T} = {g_{\phi} \Gamma\over p_Tm^{\phi}_T}\mathcal S^s(p_T/2)\mathcal S^{\bar s}(p_T/2) . \label{E4}
\end{eqnarray}
\end{appendix}
\section{Application to Galois orbits of Heegner points on modular curves}\label{sec:apps}
\subsection{The modular curve $Y_0(N)$}
Let $N\geq1$ be an integer and let $S$ be a scheme over $\Spec \mathbb{Z}\left [\frac{1}{N} \right ]$. An
enhanced elliptic curve $(E,C)$ over $S$ is an elliptic curve $E$ over $S$ together with a closed subgroup $C$ that is locally isomorphic to the constant group scheme
$(\mathbb{Z}/N\mathbb{Z})_{S}$ for the \'etale topology on $S$. The modular curve
$Y=Y_{0}(N)$ is a smooth affine curve over $\Spec \mathbb{Z}\left [\frac{1}{N} \right]$ that coarsely
represents the contravariant functor mapping $S$ to the set of isomorphism
classes of enhanced elliptic curves over $S$. Here, we say that two enhanced elliptic curves
$(E_{1},C_{1})$ and $(E_{2},C_{2})$ over $S$ are isomorphic if there is an isomorphism
$E_{1}\stackrel{\simeq}{\rightarrow} E_{2}$ of elliptic curves over $S$ that maps $C_{1}$ to $C_{2}$.
\iffalse
If $N\geq3$, there is at most one such isomorphism and $Y$ is a fine moduli space
for the above functor.
\fi
We denote by $[E,C]$ the $S$-valued point of $Y$ defined by an enhanced elliptic curve $(E,C)$ over $S$. For $S=\Spec \mathbb{C}$, we have the usual isomorphism
\begin{equation}
\Gamma_{0}(N)\backslash\mathfrak{h} \stackrel{\simeq}{\longrightarrow}Y(\mathbb{C})\qquad\tau\mapsto[E_{\tau},C_{\tau}]\label{eq:HisoY}
\end{equation}
where $\mathfrak{h}$ is the upper half-plane, $\displaystyle \Gamma_{0}(N)=\left\{ {\mtwo a b c d } \in \SL_{2}(\mathbb{Z}) \colon c\equiv0\bmod N\right\} $ acts on $\mathfrak{h}$ by $\displaystyle {\mtwo a b c d} \cdot \tau=\frac{a\tau+b}{c\tau+d}$, and $E_{\tau}=\mathbb{C}/\left\langle 1,\tau\right\rangle $ with
$C_{\tau}=\left\langle N^{-1},\tau\right\rangle /\left\langle 1,\tau\right\rangle $.
\subsection{Isogeny classes}
Fix an elliptic curve $\mathcal{E}$ over a base $S$. Let $\mathcal{I}(\mathcal{E}_{\star})$ be the contravariant functor that assigns to an $S$-scheme $T$ the set \emph{$\mathcal{I}(\mathcal{E}_{T})$} of isomorphism classes of triples $(E,C,\phi)$ where $(E,C)$ is
an enhanced elliptic curve over $T$ and where $\phi$ is an invertible
element of $\Hom_{T}(E/C,\mathcal{E}_{T})\otimes\mathbb{Q}$.
An isomorphism between two such triples $(E_{1},C_{1},\phi_{1})$
and $(E_{2},C_{2},\phi_{2})$ is an isomorphism of enhanced elliptic
curves $\theta \colon (E_{1},C_{1})\rightarrow(E_{2},C_{2})$ such that $\phi_{2}\circ\overline{\theta}=\phi_{1}$, where $\overline{\theta} \colon E_{1}/C_{1}\rightarrow E_{2}/C_{2}$ is the
induced isomorphism. The group $\Aut_{T}^{0}(\mathcal{E}_{T})$ of
invertible elements in $\End_{T}^{0}(\mathcal{E}_{T}) = \End_T(\mathcal{E}_T) \otimes \mathbb{Q}$
acts on $\mathcal{I}(\mathcal{E}_{T})$ by
$$
\sigma\cdot(E,C,\phi)=(E,C,\sigma\circ\phi).
$$
If $s \in S$ is a geometric point, the map $(E,C,\phi)\mapsto[E,C]$ yields a bijection
\[
\Aut_{s}^{0}(\mathcal{E}_{s})\backslash\mathcal{I}(\mathcal{E}_{s})\simeq Y(\mathcal{E}_{s})=\left\{ x\in Y(s)\vert x=[E,C]\mbox{ s.t. }\Hom_{s}^{0}(E,\mathcal{E}_{s})\neq0\right\} \subset Y(s).
\]
\subsection{Lattices}
For a geometric point $s$ of $S$ and a prime number $p$ let $T_{p}(\mathcal{E}_{s})$ be the $p$-adic Tate module of $\mathcal{E}_{s}$ if $\chr(s)\neq p$ and the covariant Dieudonn\'e crystal
of the $\ell$-divisible group $\mathcal{E}_{s}[\ell^{\infty}]$ if $p=\ell=\chr(s)$.
Let also $\widehat{T}(\mathcal{E}_{s})=\prod_{p}T_{p}(\mathcal{E}_{s})$
and $\widehat{V}(\mathcal{E}_{s})=\widehat{T}(\mathcal{E}_{s})\otimes\mathbb{Q}$.
If $\chr(s)=0$, a lattice in $\widehat{V}(\mathcal{E}_{s})$
is any $\widehat{\mathbb{Z}}$-submodule that is commensurable with
$\widehat{T}(\mathcal{E}_{s})$. If $\chr(s)=\ell$, a lattice
in $\widehat{V}(\mathcal{E}_{s})=\widehat{V}^{(\ell)}(\mathcal{E}_{s})\times V_{\ell}(\mathcal{E}_{s})$ is a submodule of the form $\widehat{T}^{(\ell)}\times T_{\ell}$ where
$\widehat{T}^{(\ell)}$ is a $\widehat{\mathbb{Z}}^{(\ell)}$-submodule of $\widehat{V}^{(\ell)}(\mathcal{E}_{s})$ that is commensurable with $\widehat{T}^{(\ell)}(\mathcal{E}_{s})$, and
where $T_{\ell}$ is a subcrystal of $V_{\ell}(\mathcal{E}_{s})$. Here,
$$
\widehat{\mathbb{Z}}^{(\ell)} = \prod_{p \ne \ell} \mathbb{Z}_{p} \qquad \widehat{V}^{(\ell)}(\mathcal{E}_{s}) = \prod_{p \ne \ell} V_p(\mathcal{E}_s) \qquad \displaystyle \widehat{T}^{(\ell)}(\mathcal{E}_{s}) = \prod_{p \ne \ell} T_p(\mathcal{E}_s).
$$
Let $\mathcal{L}(\mathcal{E},s)$
be the set of pairs of lattices $(\widehat{T}_{1},\widehat{T}_{2})$ in $\widehat{V}(\mathcal{E}_{s})$ such that $\widehat{T}_{1}\subset\widehat{T}_{2}$ and $\widehat{T}_{2}/\widehat{T}_{1}\simeq\mathbb{Z}/N\mathbb{Z}$ (thus if $\chr(s)=\ell$, $\widehat{T}_{1}$ and $\widehat{T}_{2}$ share the same $\ell$-component, since $\ell \nmid N$). For $(E,C,\phi)\in\mathcal{I}(\mathcal{E}_{s})$,
we have morphisms
\[
\widehat{T}(E)\stackrel{\mathrm{can}}{\longrightarrow}\widehat{T}(E/C)\subset\widehat{V}(E/C)\stackrel{\phi}{\longrightarrow}\widehat{V}(\mathcal{E}_{s})
\]
and the resulting map
\[
(E,C,\phi)\mapsto\left(\phi\circ\mathrm{can}\left(\widehat{T}(E)\right),\phi\left(\widehat{T}(E/C)\right)\right)
\]
yields a bijection $\mathcal{I}(\mathcal{E}_{s})\simeq\mathcal{L}(\mathcal{E},s)$.
Thus also
\[
Y(\mathcal{E}_{s})\simeq\Aut_{s}^{0}(\mathcal{E}_{s})\backslash\mathcal{L}(\mathcal{E},s).
\]
\subsection{CM points as special points}
Let $K$ be a quadratic imaginary field and let
$K\hookrightarrow\mathbb{C}$ be a fixed embedding. An elliptic curve $E$ over a
field $F$ is said to have complex multiplication by $K$ if $\End_{F}^{0}(E)$
is isomorphic to $K$. If $F$ is a subfield of $\mathbb{C}$, we
normalize the isomorphism $K\simeq\End_{F}^{0}(E)$ by requiring that
$K$ acts on the tangent space $\Lie E(\mathbb{C})$ through our fixed
embedding $K\hookrightarrow\mathbb{C}$. The conductor of $E$ is
the unique positive integer $c(E)$ such that $\End_{F}(E)\simeq\mathbb{Z}+c(E)\mathcal{O}_{K}$
inside $\End_{F}^{0}(E)\simeq K$. A complex point $x\in Y(\mathbb{C})$
is said to have complex multiplication by $K$ if $x=[E,C]$ for some
elliptic curve $E$ over $\mathbb{C}$ with complex multiplication
by $K$. The elliptic curve $E/C$ then also has complex multiplication by $K$. The
fine and coarse conductors of $x$ are respectively equal to
\[
\mathfrak{c}_{f}(x)=(c(E),c(E/C))\in\mathbb{N} \times \mathbb{N}\quad\mbox{and}\quad \mathfrak{c}_{g}(x)=\mathrm{lcm}(c(E),c(E/C))\in\mathbb{N}.
\]
We denote by $\mathcal{CM}_{K}$ the subset of $Y(\mathbb{C})$ thus
defined and refer to its elements as CM points. Note that the bijection~(\ref{eq:HisoY})
restricts to
\begin{equation}
\Gamma_{0}(N)\backslash \left ( \mathfrak{h} \cap K \right ) \stackrel{\simeq}{\longrightarrow}\mathcal{CM}_{K}\qquad\tau\mapsto[E_{\tau},C_{\tau}].
\label{eq:HKisoCM}
\end{equation}
Let $\tau\in\mathfrak{h}\cap K$ satisfy the quadratic equation $A\tau^{2}+B \tau+C = 0$ where $A > 0$, $A, B, C \in \mathbb{Z}$ and $(A, B, C) = 1$. One can calculate the fine conductor of $[E_\tau, C_\tau]$ as follows: if $\displaystyle \Delta(\tau) = B^2 - 4AC$ denotes the discriminant of this quadratic equation and if $D < 0$ is the fundamental discriminant of $K$, then (see, e.g.,
\cite[\S 7]{cox:book})
\[
\mathfrak{c}_{f}[E_{\tau},C_{\tau}]= \left ( \sqrt{\left | \frac{\Delta(\tau)}{D} \right | },
\sqrt{ \left | \frac{\Delta(N\tau)}{D} \right | }\right ) .
\]
The point $[E_\tau, C_\tau]$ is a Heegner point if and only if $\Delta(\tau) = \Delta(N\tau)$. It is not hard to show that the latter is equivalent to $N \mid A$ and $(A/N, B, NC) = 1$.
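For the reader's convenience, the following computational sketch (our addition) implements the fine-conductor formula and this Heegner criterion; the example values of $(A,B,C,N,D)$ are our own:
\begin{verbatim}
from math import gcd, isqrt

# tau is a root of A x^2 + B x + C with A > 0 and gcd(A, B, C) = 1;
# N*tau is a root of the primitive form (A, B*N, C*N^2)/g with
# g = gcd(A, B*N, C*N^2), so Delta(N tau) = N^2 Delta(tau) / g^2.
def heegner_data(A, B, C, N, D):
    disc = B*B - 4*A*C
    g = gcd(A, gcd(B*N, C*N*N))
    disc_N = (N*N*disc) // (g*g)
    cf = (isqrt(abs(disc // D)), isqrt(abs(disc_N // D)))
    return cf, disc_N == disc   # fine conductor, Heegner or not

# D = -7, N = 11: the root of 11 x^2 + 9 x + 2 has discriminant -7.
print(heegner_data(11, 9, 2, 11, -7))   # ((1, 1), True):   Heegner
print(heegner_data(1, 1, 2, 11, -7))    # ((1, 11), False): not Heegner
\end{verbatim}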
\subsection{CM points as isogeny class}
All elliptic curves over $\mathbb{C}$ with complex multiplication
by $K$ are isogenous. Fix one such curve $\mathbb{E}$.
Then (notation as above)
\begin{equation}
\mathcal{CM}_{K}=Y(\mathbb{E})\stackrel{\simeq}{\longleftarrow}K^{\times}\backslash\mathcal{I}(\mathbb{E})\stackrel{\simeq}{\longrightarrow}K^{\times}\backslash\mathcal{L}(\mathbb{E},\mathbb{C}).
\label{eq:CMandLat}
\end{equation}
The fine conductor corresponds to
\[
\mathfrak{c}_{f}(\widehat{T}_{1},\widehat{T}_{2})=\left(c(\widehat{T}_{1}),c(\widehat{T}_{2})\right)
\]
where for a lattice $\widehat{T}$ in $\widehat{V}(\mathbb{E})$, $c(\widehat{T})$ is the conductor of the quadratic order
$$
\mathcal{O}_{c(\widehat{T})} = \{s\in K \colon s\widehat{T}\subseteq \widehat{T}\}.
$$
The $\mathcal{O}_{c(\widehat{T})}$-module $\widehat{T}$ is thus free of rank one.
\subsection{The Galois action on CM points}
Let $K^{ab}$ be the maximal abelian extension of $K$ inside $\mathbb{C}$,
and let $\mathrm{Art}_{K} \colon \widehat{K}^{\times}\rightarrow\Gal(K^{ab}/K)$
be the reciprocal of the usual Artin reciprocity. In other words, $\mathrm{Art}_K$ sends uniformizers to geometric Frobenii \cite[p.90]{milne:cm}. The main theorem of complex multiplication then
says (see \cite[Thm.3.10]{milne:cm2} or \cite[Thm.9.10]{milne:cm}):
\begin{quote}
For any $\sigma\in\Gal(\mathbb{C}/K)$ and any $s\in\widehat{K}^{\times}$
such that $\sigma\vert K^{ab}=\mathrm{Art}_{K}(s)$ in $\Gal(K^{ab}/K)$,
there exists a unique isogeny $\lambda \colon \mathbb{E} \rightarrow \sigma\mathbb{E}$
such that for all $y \in\widehat{V}(\mathbb{E})$, $\lambda(sy) = \sigma y$
in $\widehat{V}(\sigma\mathbb{E})$.
\end{quote}
Suppose now that $x=[E,C]$ belongs to $\mathcal{CM}_{K}$, corresponding
to a pair of lattices $(\widehat{T}_{1},\widehat{T}_{2})$ in $\mathcal{L}(\mathbb{E},\mathbb{C})$
obtained from $(E,C)$ by choosing a non-zero $\phi \colon E/C \rightarrow \mathbb{E}$.
Fix $\sigma$, $s$ and $\lambda$ as above. Then $\sigma x=[\sigma E,\sigma C]$
belongs to $\mathcal{CM}_{K}$, and it corresponds to the pair of
lattices $(s\widehat{T}_{1},s\widehat{T}_{2})$ for the choice
$$
\phi'=\lambda^{-1}\circ\sigma\phi \colon \sigma E/\sigma C \rightarrow \mathbb{E}.
$$
The bijection~(\ref{eq:CMandLat}) therefore maps the action of
$\sigma\in\Gal(\mathbb{C}/K)$ on $\mathcal{CM}_{K}$ to left multiplication by
$s\in\widehat{K}^{\times}$ on $\mathcal{L}(\mathbb{E},\mathbb{C})$. It follows that the field of definition of $x \in \mathcal{CM}_{K}$
is equal to the ring class field $K[\mathfrak{c}_{g}(x)]$ specified by the coarse
conductor $\mathfrak{c}_{g}(x)$ of $x$. In fact, it is well-known that the corresponding point
$x\in Y(K[\mathfrak{c}_{g}(x)])$ can be represented by an enhanced elliptic curve $(E,C)$ over
$K[\mathfrak{c}_{g}(x)]$.
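
For instance, a CM point $x$ with $\mathfrak{c}_{g}(x) = 1$ is defined over the Hilbert class field $K[1]$ of $K$.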
\subsection{The good reduction of CM points}
Let $\overline{\mathbb{Q}}$ be the algebraic closure of $\mathbb{Q}$
inside $\mathbb{C}$. Let $\ell \nmid N$ be a prime. Fix an embedding
$\iota_\ell \colon \overline{\mathbb{Q}} \hookrightarrow\overline{\mathbb{Q}}_{\ell}$. Note that $\iota_\ell$ determines a valuation ring $\overline{\mathcal{O}}\subset\overline{\mathbb{Q}}$ whose residue field $\overline{\mathbb{F}}_\ell$ is an algebraic closure of $\mathbb{F}_{\ell}$ and a place $\overline{\lambda}$ of $\overline{\Q}$ over $\ell$. Let $K[\infty]\subset K^{\ab}$ be the union of all ring class fields $K[c]$. For each $c$ (including $c = \infty$), let $\lambda_c$ be the place of $K[c]$ below $\overline{\lambda}$. Let $\mathcal{O}[c]$ be the valuation ring of $\lambda_c$ inside $K[c]$ and let $\mathbb{F}[c]$ be its residue field.
There are various equivalent approaches to the reduction theory of CM points at $\lambda$:
\paragraph{1. Reduction via properness.} Let $X_{/\Spec\mathbb{Z}[1/N]}$ be the smooth and proper compactification of $Y$ constructed by Deligne--Rapoport \cite{deligne-rapoport} when $N$ is prime and by Katz--Mazur \cite{katzmazur} in general.
One gets a reduction map
$$
\red \colon Y(K[c]) \hookrightarrow X(K[c]) \simeq X(\mathcal{O}[c]) \xra{\red_{\lambda_c}} X(\mathbb{F}[c]),
$$
where the bijection $X(K[c]) \simeq X(\mathcal{O}[c])$ follows from the valuative criterion of
properness.
\paragraph{2. Reduction via N\'eron models.} For a point $x\in\mathcal{CM}_{K}$
with coarse conductor $c=\mathfrak{c}_{g}(x)$ write $x=[E,C]$ for some enhanced elliptic curve
$(E,C)$ over $K[c]$ with complex multiplication by $K$. Suppose first that $E$ has
good reduction at $\lambda_c$. Then $(E,C)$ extends to an enhanced elliptic
curve $(\mathcal{E},\mathcal{C})$ over $\mathcal{O}[c]$ where
$\mathcal{E}_{/\mathcal{O}[c]}$ is the N\'eron model of $E$. The special fiber of the
latter gives a point $\mathrm{red}(x)=[\mathcal{E}_{\mathbb{F}[c]},\mathcal{C}_{\mathbb{F}[c]}]$ in $Y(\mathbb{F}[c])$. If $E$ does not have good reduction at
$\lambda_c$, we know by \cite[Thm.7]{serre-tate} that $E$
acquires good reduction at $\overline{\lambda}$ (and indeed everywhere) after a suitable
cyclic extension of $K[c]$. We may thus define $\mathrm{red}(x)$ as the point corresponding
to the special fiber of the N\'eron model of the base change of $(E,C)$ to such an
extension. This yields a well-defined reduction map
\[
\mathrm{red} \colon \mathcal{CM}_{K}\rightarrow Y(\overline{\mathbb{F}}_\ell).
\]
The above two constructions give the same map on $\mathcal{CM}_{K}$ with
values in $X(\overline{\F}_\ell)$. Since $Y(\mathbb{F}[\infty])=X(\mathbb{F}[\infty]) \cap Y(\overline{\F}_\ell)$
in $X(\overline{\mathbb{F}}_\ell)$, we have a well-defined map
\begin{equation}
\mathrm{red} \colon \mathcal{CM}_{K}\rightarrow Y(\mathbb{F}[\infty]).\label{eq:reducDef1}
\end{equation}
For a more explicit construction of this map, we can also choose an
elliptic curve $\mathbb{E}$ over $K[\infty]$ with complex multiplication
by $K$ and good reduction at $\lambda_\infty$. Such a curve exists by the elementary
theory of complex multiplication and by \cite[Cor.1]{serre-tate}. We then
reduce $\mathcal{CM}_{K}$ as the isogeny class $Y(\mathbb{E})$,
as we shall now explain.
\paragraph {3. Reduction maps for isogeny classes.}
Let $S=\Spec\overline{\mathcal{O}}$ with geometric points $g=\Spec \overline{\mathbb{Q}}$
and $s=\Spec\overline{\mathbb{F}}_\ell$. Let $\mathcal{E}_{/S}$ be an elliptic curve. We will eventually take $\mathcal{E}$
to be the N\'eron model of an elliptic curve $\mathbb{E}$ with complex
multiplication by $K$, but in what follows we will not make this assumption. The theory of N\'eron models implies that
restriction from $S$ to $g$ yields bijections
\[
\mathcal{I}(\mathcal{E})\stackrel{\simeq}{\longrightarrow}\mathcal{I}(\mathcal{E}_{g})\qquad\mbox{and}\qquad\End_{S}(\mathcal{E})\stackrel{\simeq}{\longrightarrow}\End_{g}(\mathcal{E}_{g}).
\]
Restriction from $S$ to $s$ on the other hand gives
\[
\mathcal{I}(\mathcal{E})\rightarrow\mathcal{I}(\mathcal{E}_{s})\qquad\mbox{and}\qquad\End_{S}(\mathcal{E})\hookrightarrow\End_{s}(\mathcal{E}_{s}).
\]
We get reduction maps $\red \colon \mathcal{L}(\mathcal{E}, g) \rightarrow \mathcal{L}(\mathcal{E}, s)$ (from left to right):
\[
\begin{array}{ccccc}
\mathcal{L}(\mathcal{E},g) & \stackrel{\simeq}{\longleftarrow} & \mathcal{I}(\mathcal{E}) & \rightarrow & \mathcal{L}(\mathcal{E},s)\\
\uparrow\simeq & & \parallel & & \uparrow\simeq\\
\mathcal{I}(\mathcal{E}_{g}) & \stackrel{\simeq}{\longleftarrow} & \mathcal{I}(\mathcal{E}) & \rightarrow & \mathcal{I}(\mathcal{E}_{s})\\
\downarrow & & \downarrow & & \downarrow\\
Y(\mathcal{E}_{g})=\Aut_{g}^{0}(\mathcal{E}_{g})\backslash\mathcal{I}(\mathcal{E}_{g}) & \stackrel{\simeq}{\longleftarrow} & \Aut_{S}^{0}(\mathcal{E})\backslash\mathcal{I}(\mathcal{E}) & \rightarrow & \Aut_{s}^{0}(\mathcal{E}_{s})\backslash\mathcal{I}(\mathcal{E}_{s})=Y(\mathcal{E}_{s})\end{array}
\]
The map $\mathrm{red} \colon \mathcal{L}(\mathcal{E},g)\rightarrow\mathcal{L}(\mathcal{E},s)$
is induced by the natural isomorphism $\widehat{T}^{(\ell)}(\mathcal{E}_g)\stackrel{\simeq}{\longrightarrow}\widehat{T}^{(\ell)}(\mathcal{E}_s)$
between the Tate modules away from $\ell$, together with a map
\begin{equation}
\mathrm{red}_{\ell} \colon \mathcal{L}_{\ell}(\mathcal{E}, {g})\rightarrow\mathcal{L}_{\ell}(\mathcal{E},s)
\label{eq:redp}
\end{equation}
from the set $\mathcal{L}_{\ell}(\mathcal{E}, g)$ of $\mathbb{Z}_{\ell}$-lattices
in $V_{\ell}(\mathcal{E}_g)$ to the set $\mathcal{L}_{\ell}(\mathcal{E}, s)$
of crystals in $V_{\ell}(\mathcal{E}_s)$. For $X,Y\in\mathcal{L}_{\ell}(\mathcal{E},u)$ with $u \in \{g, s\}$,
we set
\[
[X:Y]=\mathrm{length}(X/X\cap Y)-\mathrm{length}(Y/X\cap Y)
\]
where the length function is relative to the $\mathbb{Z}_{\ell}$-module
structure if $u=g$ and to the $W$-module structure
if $u = s$ (here, $W := W(\overline{\F}_\ell)$ is the ring of Witt vectors of $\overline{\F}_\ell$). Then for all $X\in\mathcal{L}_{\ell}(\mathcal{E},g)$,
\begin{equation}
\left[T_{\ell}(\mathcal{E}_g):X\right]=\left[T_{\ell}(\mathcal{E}_s):\mathrm{red}_{\ell}(X)\right].\label{eq:DegComp}
\end{equation}
Indeed, we may choose a triple $(E,C,\phi)\in\mathcal{I}(\mathcal{E})$
such that \[
X=\phi\left(T_{\ell}((E/C)_g)\right)\quad\mbox{and}\quad\mathrm{red}_{\ell}(X)=\phi\left(T_{\ell}((E/C)_s)\right)\]
and then both sides of (\ref{eq:DegComp}) are equal to the exponent
of $\ell$ in the degree of $\phi$.
\subsection{The supersingular case}
If $\mathcal{E}_{s}$ is a supersingular elliptic curve, then
\[
\left(T_{\ell}(\mathcal{E}_s),F,V\right)\simeq\left(W^2, \pi_{\ell}\sigma,\pi_{\ell}\sigma^{-1}\right)\qquad\mbox{with}\qquad\pi_{\ell}={\mtwo 0 1 \ell 0} \in M_{2}(W)
\]
where $\sigma$ is the Frobenius automorphism of $W$. We thus obtain
\[
\mathcal{L}_{\ell}(\mathcal{E},s)\simeq\left\{ \pi_{\ell}^{i}W^2 \subset\mathcal{K}^{2} \colon i\in\mathbb{Z}\right\} \]
where $\mathcal{K}$ is the fraction field of $W$. In particular,
the map (\ref{eq:redp}) is uniquely determined by (\ref{eq:DegComp}).
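
Concretely, since $\pi_{\ell}^{2} = \ell\cdot\mathrm{id}$, the crystals form a single chain
\[
\cdots \supset \pi_{\ell}^{-1}W^{2} \supset W^{2} \supset \pi_{\ell}W^{2} \supset \cdots \qquad\mbox{with}\qquad \left[W^{2} : \pi_{\ell}^{i}W^{2}\right] = i,
\]
and $\mathrm{red}_{\ell}$ maps a lattice $X$ with $[T_{\ell}(\mathcal{E}_g):X] = i$ to the crystal corresponding to $\pi_{\ell}^{i}W^{2}$.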
Moreover, the endomorphism ring
\[
\End\left(T_{\ell}(\mathcal{E}_s),F,V\right)\simeq\left\{ x\in M_{2}(W) \colon \pi_{\ell}\sigma(x)\pi_{\ell}^{-1}=x\right\}
\]
is the maximal $\mathbb{Z}_{\ell}$-order $\mathcal{R}$ of a non-split
quaternion algebra $\mathcal{B}=\mathcal{R}\otimes_{\mathbb{Z}_{\ell}}\mathbb{Q}_{\ell}$
over $\mathbb{Q}_{\ell}$ and $\mathcal{B}^{\times}\simeq\mathcal{R}^{\times}\times\pi_{\ell}^{\mathbb{Z}}$
acts transitively on $\mathcal{L}_{\ell}(\mathcal{E},s)$.
\begin{rem}
The reduction map (\ref{eq:redp}) is more difficult to analyze
when $\mathcal{E}_{s}$ is an ordinary elliptic curve, especially
when $\mathcal{E}_{g}$ does \emph{not} have complex multiplication.
\end{rem}
\subsection{Matrices}
We choose a subgroup $T'$ of $T=\H_{1}(\mathcal{E}(\mathbb{C}),\mathbb{Z})$
such that $T/T'\simeq\mathbb{Z}/N\mathbb{Z}$. We put $V=T\otimes\mathbb{Q}$,
$B=\End_{\mathbb{Q}}(V)$ and
\[
R=\left\{ x\in B \colon xT'\subset T'\mbox{ and }xT\subset T\right\} ,
\]
an Eichler order of level $N$ in $B\simeq M_{2}(\mathbb{Q})$. This
gives rise to identifications
\[
\widehat{T}(\mathcal{E}_g)\simeq\widehat{T}(\mathcal{E},\mathbb{C})\simeq\widehat{T},\quad\widehat{V}(\mathcal{E}_g)\simeq\widehat{V}(\mathcal{E},\mathbb{C})\simeq\widehat{V}\quad\mbox{and}\quad\widehat{B}^{\times}/\widehat{R}^{\times}\simeq\mathcal{L}(\mathcal{E}, g),
\]
where the last map sends $b$ to $(b\widehat{T}',b\widehat{T})$.
Suppose that $\mathcal{E}_{s}$ is supersingular and let $B_{\{\ell\}}=\End_{s}^{0}(\mathcal{E}_{s})$,
a definite quaternion algebra over $\mathbb{Q}$ with $\mathrm{Ram}_{f}(B_{\{\ell\}})=\{\ell\}$.
For $p\neq \ell$, the left action of $B_{\{\ell\},p}$ on $V_{p}(\mathcal{E}_s)\simeq V_{p}(\mathcal{E}_g)\simeq V_{p}$
yields an isomorphism $\theta_{p}:B_{p}\simeq B_{\{\ell\},p}$,
and the left action of $B_{\{\ell\},\ell}$ on $V_{\ell}(\mathcal{E}_s)$ yields
an isomorphism $\theta_{\ell} \colon \mathcal{B}\simeq B_{\{\ell\},\ell}$ with $\mathcal{B}$
as above. Put $\mathrm{red}(\widehat{T}',\widehat{T})=(\widehat{T}'_{s},\widehat{T}_{s})$
and
\[
R_{\{\ell\}}=\left\{ x\in B_{\{\ell\}} \colon x\widehat{T}'_{s}\subset\widehat{T}'_{s}\mbox{ and }x\widehat{T}_{s}\subset\widehat{T}_{s}\right\}.
\]
Thus $\theta_{p}(R_{p})=R_{\{\ell\},p}$ for all $p\neq \ell$,
and $\theta_{\ell}(\mathcal{R})=R_{\{\ell\},\ell}$ with $\mathcal{R}$ as
above. The map $b\mapsto(b\widehat{T}'_{s},b\widehat{T}_{s})$ yields
an identification $\widehat{B}_{\{\ell\}}^{\times}/\widehat{R}_{\{\ell\}}^{\times}\simeq\mathcal{L}(\mathcal{E},s)$,
and the reduction map
\[
\widehat{B}^{\times}/\widehat{R}^{\times}\simeq\mathcal{L}(\mathcal{E},g)\stackrel{\mathrm{red}}{\longrightarrow}\mathcal{L}(\mathcal{E},s)\simeq\widehat{B}_{\{\ell\}}^{\times}/\widehat{R}_{\{\ell\}}^{\times}
\]
sends $\widehat{b}\widehat{R}^{\times}=\prod b_{p}R_{p}^{\times}$
to $\widehat{b}'\widehat{R}_{\{\ell\}}^{\times}=\prod b'_{p}R_{\{\ell\},p}^{\times}$
where $b'_{p}=\theta_{p}(b_{p})$ for $p\neq \ell$ and $b'_{\ell}=\theta_{\ell}(\pi_{\ell})^{v_{\ell}}$
for $v_{\ell}=\mathrm{ord}_{\ell}(\mathrm{nr}(b_{\ell}))$. We finally obtain a reduction map
\[
\Aut_{g}^{0}(\mathcal{E}_{g})\backslash\widehat{B}^{\times}/\widehat{R}^{\times}\simeq Y(\mathcal{E}_{g})\stackrel{\mathrm{red}}{\longrightarrow}Y(\mathcal{E}_{s})\simeq B_{\{\ell\}}^{\times}\backslash\widehat{B}_{\{\ell\}}^{\times}/\widehat{R}_{\{\ell\}}^{\times}\]
where $\End_{g}^{0}(\mathcal{E}_{g})$ embeds in $B$ through its
action on $V=\H_{1}(\mathcal{E}_{g}(\mathbb{C}),\mathbb{Q})$.
\subsection{The supersingular reduction of CM points}
We now assume that $\ell$ is inert in $K$, and let $\mathcal{E}$
be the N\'eron model over $\overline{\mathcal{O}}$ of an elliptic curve
with complex multiplication by $K$, i.e. $\End_{g}^{0}(\mathcal{E}_{g})=K$.
Then $\mathcal{E}_{s}$ is indeed supersingular, $Y(\mathcal{E}_{g})=\mathcal{CM}_{K}$
and $Y(\mathcal{E}_{s})=Y^{ss}(\overline{\mathbb{F}}_\ell)$, the
set of supersingular points in $Y(\overline{\mathbb{F}}_\ell)$. We have
now identified the geometric reduction map
\begin{equation}
\mathrm{red} \colon \mathcal{CM}_{K}\rightarrow Y^{ss}(\overline{\mathbb{F}}_\ell)
\label{eq:FinalRed}
\end{equation}
with the adelic reduction map which we had previously considered.
\begin{rem}
The surjectivity of (\ref{eq:FinalRed}) implies that $Y^{ss}(\overline{\mathbb{F}}_\ell)\subset Y(\mathbb{F}[\infty])$.
On the other hand, class field theory shows that $\mathbb{F}[\infty]\simeq\mathbb{F}_{\ell^{2}}$.
We thus retrieve the well-known fact that $Y^{ss}(\overline{\mathbb{F}}_\ell)\subset Y(\mathbb{F}_{\ell^{2}})$.
\end{rem}
\paragraph{The main correspondence between Heegner points and optimal embeddings.}
Let now $c$ be a positive integer satisfying $(c, \ell N) = 1$ and let $\mathcal{O}_c$ be the corresponding order in $K$. As a consequence of the above identifications, our Corollary~\ref{cor:main} implies the following:
\begin{cor}\label{cor:corresp}
Let $s \in Y^{ss}(\overline{\F}_\ell)$ be a supersingular point. Choose a representative $(\widetilde{E}, \widetilde{C})$ with $[\widetilde{E}, \widetilde{C}] = s$ and define
$$
R_s = \End(\widetilde{E}, \widetilde{C}) = \left \{ \alpha \in \End(\widetilde{E}) \colon \alpha(\widetilde{C}) \subset \widetilde{C}\right \}.
$$
There is a one-to-one correspondence between the following two sets:
\[
\begin{array}{rcl}
\left \{
\begin{array}{c}
\textrm{Points } x \in \mathcal{CM}_K\textrm{ on } Y \\
\textrm{ of conductor }c
\textrm{ reducing to } s
\end{array}
\right \}
&
\Longleftrightarrow
&
\left \{
\begin{array}{c}
R_s^\times-\textrm{conjugacy classes of} \\
\textrm{conjugate pairs of}\\
\textrm{optimal embeddings } \mathcal{O}_{c} \hookrightarrow R_s
\end{array}
\right \}.
\end{array}
\]
\end{cor}
\begin{rem}
This was precisely the correspondence needed in \cite{jetchev-kane} to translate the equidistribution question for Heegner points to a question about optimal embeddings. It is shown in \cite[\S4.1]{jetchev-kane} that the latter relates to counting primitive representations of integers by ternary quadratic forms.
For $c = 1$, the above correspondence is known as Deuring's lifting theorem (see \cite{deuring}); it has subsequently been refined (as a correspondence) by Gross and Zagier \cite[Prop.2.7]{gross-zagier:singular}.
\end{rem}
\begin{rem}
The left-to-right map in Corollary~\ref{cor:corresp} is rather natural.
Let $(E,C)$ be an enhanced elliptic curve over $\overline{\mathbb{Q}}$
with complex multiplication by $K$ and coarse conductor $c$. Extend $(E, C)$
to an enhanced elliptic curve $(\mathcal{E},\mathcal{C})$ over
$\overline{\mathcal{O}}$ and suppose that $(\mathcal{E}_{s},\mathcal{C}_{s})\simeq(\widetilde{E},\widetilde{C})$.
A choice of isomorphisms $(\mathcal{E}_{s},\mathcal{C}_{s})\stackrel{\simeq}{\longrightarrow}(\widetilde{E},\widetilde{C})$ and $K\stackrel{\simeq}{\longrightarrow}\mathrm{End}^{0}(\mathcal{E},\mathcal{C})$ yields an embedding $\mathcal{O}_{c}\hookrightarrow R_{s}$,
and the resulting $R_{s}^{\times}$-conjugacy class of pairs of conjugate
embeddings does not depend upon these two choices. Since
$\ell\nmid c$, it is still fairly straightforward to verify that the embeddings
thus obtained are optimal. What is not obvious is that this construction gives a one-to-one correspondence. This is precisely what we establish in our Theorem~\ref{thm:main} in a greater generality for quaternion algebras.
\end{rem}
\section{Adelic constructions of the lifting and the reduction maps}\label{sec:lifting}
\subsection{Notation and preliminaries}\label{subsec:not}
We now switch back to the notation in the introduction, where $F$ is a totally real number field, $B$ is a quaternion algebra over $F$, $K$ is a totally imaginary quadratic extension of $F$ and $\iota \colon K \hookrightarrow B$ is an embedding. We denote by $\mathrm{Ram}_{f}B$ the set of all finite places of $F$ where $B$ does not split. Let $S$ be a finite set of finite places of $F$ that satisfy the following hypotheses:
{\bf H.1.} $B$ is unramified at every $v \in S$, i.e., $\mathrm{Ram}_{f}B\cap S=\emptyset$,
{\bf H.2.} $|S| + |\Ram_f(B)| + [F:\mathbb{Q}]$ is even,
{\bf H.3.} Every $v \in S$ is either inert or ramified in $K$.
Hypotheses {\bf H.1} and {\bf H.2} imply that there exists a unique (up to isomorphism) totally
definite quaternion algebra $B_S$ over $F$ ramified exactly at the finite places $\Ram_f(B) \cup S$. Hypothesis {\bf H.3} implies that $K$ embeds into $B_S$.
We fix once and for all such an embedding $\iota_{S}:K\hookrightarrow B_{S}$.
We let
\[
G=\mathrm{Res}_{F/\mathbb{Q}}B^{\times},\qquad G_{S}=\mathrm{Res}_{F/\mathbb{Q}}B_{S}^{\times}\qquad\mbox{and}\qquad T=\mathrm{Res}_{F/\mathbb{Q}}K^{\times}.
\]
These are linear algebraic groups over $\mathbb{Q}$, equipped with the embeddings
\[
\iota \colon T\hookrightarrow G\qquad\mbox{and}\qquad \iota_{S} \colon T\hookrightarrow G_{S}
\]
induced by the fixed embeddings of $K$ into $B$ and $B_{S}$.
\subsection{Adelic construction of the map $\theta_S$}\label{subsec:adelic}
Following Cornut and Vatsal (see~\cite[\S 2.1]{cornut-vatsal}), we consider the following locally compact and totally disconnected groups:
\[
\left\{ \begin{array}{rcccl}
T(\mathbb{A}_{f}) & = & (K\otimes\mathbb{A}_{f})^{\times} & = & {\textstyle \prod'}K_{v}^{\times}\\
G(\mathbb{A}_{f}) & = & (B\otimes\mathbb{A}_{f})^{\times} & = & {\textstyle \prod'}B_{v}^{\times}\\
G_{S}(\mathbb{A}_{f}) & = & (B_{S}\otimes\mathbb{A}_{f})^{\times} & = & {\textstyle \prod'}B_{S,v}^{\times}\end{array}\right.\enskip\mbox{and}\quad G(S)=\left({\textstyle \prod'_{v\notin S}}B_{S,v}^{\times}\right)\times{\textstyle \prod}_{v\in S}F_{v}^{\times}.
\]
\noindent The restricted products on the left are the usual ones defined by the arbitrary choice of an integral structure on the
algebraic groups, whereas the restricted product on the right is induced by that of $G_{S}(\mathbb{A}_{f})$.
These topological groups together fit in a commutative diagram
$$
\xymatrix{
& G(\mathbb{A}_f) \ar@{->}[rd]^{\phi_S}& \\
T(\mathbb{A}_f) \ar@{->}[ur]^{\iota} \ar@{->}[rd]^{\iota_S} & & G(S) \\
& G_S(\mathbb{A}_f) \ar@{->}[ru]^{\pi_S} &
}
$$
where $\iota$ and $\iota_{S}$ are the continuous embeddings induced
by their algebraic counterparts, where $\pi_{S}$ is the continuous,
open and surjective morphism
\[
G_{S}(\mathbb{A}_{f})={\textstyle \prod}'_{v\notin S}B_{S,v}^{\times}\times {\textstyle \prod}_{v\in S}B_{S,v}^{\times}\twoheadrightarrow {\textstyle \prod}'_{v\notin S}B_{S,v}^{\times}\times{\textstyle \prod}_{v\in S}F_{v}^{\times}=G(S)
\]
which is induced by the identity on $B_{S,v}$ for $v\not\in S$ and by the reduced norm
$\mathrm{nr}_{S, v} \colon B_{S,v}^{\times}\rightarrow F_{v}^{\times}$ for $v\in S$. Finally, the continuous, open and
surjective morphism
\[
\phi_S \colon G(\mathbb{A}_{f})={\textstyle \prod}'_{v\notin S}B_{v}^{\times}\times {\textstyle \prod}_{v\in S}B_{v}^{\times}\twoheadrightarrow{\textstyle \prod}'_{v\notin S}B_{S,v}^{\times} \times {\textstyle \prod}_{v\in S}F_{v}^{\times}=G(S)
\]
is again induced by the reduced norm $\mathrm{nr}_{v} \colon B_{v}^{\times}\rightarrow F_{v}^{\times}$
for $v\in S$, and by well-chosen isomorphisms $\theta_{S,v} \colon B_{v}\stackrel{\simeq}{\longrightarrow}B_{S, v}$
for $v\notin S$ whose choice is carefully explained in \cite[\S 2.1.3]{cornut-vatsal}.
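
For instance, when $F=\mathbb{Q}$, $B=M_{2}(\mathbb{Q})$ and $S=\{\ell\}$, the reduced norm on $B_{\ell}^{\times}=\mathrm{GL}_{2}(\mathbb{Q}_{\ell})$ is the determinant, so that
\[
\phi_{S}\left((g_{v})_{v}\right)=\left((\theta_{S,v}(g_{v}))_{v\neq\ell},\,\det(g_{\ell})\right),
\]
which is the map underlying the supersingular reduction of CM points on modular curves described in the previous section.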
\paragraph{Topological properties and Galois action.} We use the above commutative diagram to let
$T(\mathbb{A}_{f})$ act on $G(\mathbb{A}_{f})$, $G_{S}(\mathbb{A}_{f})$ and $G(S)$ by multiplication on the left.
Let $\overline{T(\mathbb{Q})}$ be the closure of $T(\mathbb{Q})$
in $T(\mathbb{A}_{f})$, and recall that class field theory yields
an isomorphism of topological groups
$$
\mathrm{Art}_K \colon T(\mathbb{A}_{f})/\overline{T(\mathbb{Q})} \stackrel{\simeq}{\longrightarrow} \Gal_{K}^{ab}.
$$
We thus obtain continuous $\Gal_{K}^{ab}$-equivariant maps of topological
$\Gal_{K}^{ab}$-sets \[
\overline{T(\mathbb{Q})}\backslash G(\mathbb{A}_{f})\stackrel{\phi_{S}}{\longrightarrow}\overline{T(\mathbb{Q})}\backslash G(S)\stackrel{\pi_{S}}{\longleftarrow}\overline{T(\mathbb{Q})}\backslash G_{S}(\mathbb{A}_{f}).\]
These maps are also respectively equivariant for the right actions
of $G(\mathbb{A}_{f})$ and $G_{S}(\mathbb{A}_{f})$. For a compact
open subgroup $H$ of $G(\mathbb{A}_{f})$, define $H(S)=\phi_{S}(H)$
and $H_{S}=\pi_{S}^{-1}(H(S))$. We now have $\Gal_{K}^{ab}$-equivariant
maps of discrete $\Gal_{K}^{ab}$-sets \[
\overline{T(\mathbb{Q})}\backslash G(\mathbb{A}_{f})/H\stackrel{\phi_{S}}{\longrightarrow}\overline{T(\mathbb{Q})}\backslash G(S)/H(S)\stackrel{\pi_{S}}{\longleftarrow}\overline{T(\mathbb{Q})}\backslash G_{S}(\mathbb{A}_{f})/H_{S}\]
and the above $\pi_{S}$ is a bijection by construction of $H_{S}$.
Since \[
\mathcal{CM}(G,H)=T(\mathbb{Q})\backslash G(\mathbb{A}_{f})/H=\overline{T(\mathbb{Q})}\backslash G(\mathbb{A}_{f})/H\]
and similarly for $G_{S}$, we obtain a $\Gal_{K}^{ab}$-equivariant
map of discrete $\Gal_{K}^{ab}$-sets \[
\theta_{S}=\pi_{S}^{-1}\circ\phi_{S} \colon \mathcal{CM}(G,H)\rightarrow\mathcal{CM}(G_{S},H_{S}).\]
If $H=\prod_{v}H_{v}$ in $G(\mathbb{A}_{f})=\prod'_{v}B_{v}^{\times}$,
then $H_{S}=\prod_{v}H_{S,v}$ in $G_{S}(\mathbb{A}_{f})=\prod'_{v}B_{S,v}^{\times}$
with\[
H_{S,v}=\theta_{S,v}(H_{v})\mbox{ for }v\notin S\quad\mbox{and}\quad H_{S,v}=\nr_{S,v}^{-1}\left(\nr_{v}(H_{v})\right)\mbox{ for }v\in S.\]
In this case, the map induced by $\theta_{S}$ on the $\Gal_{K}^{ab}$-orbit
spaces,
\begin{equation}\label{eq:thetabar}
\overline{\theta}_{S} \colon \Gal_{K}^{ab}\backslash\mathcal{CM}(G,H)\rightarrow\Gal_{K}^{ab}\backslash\mathcal{CM}(G_{S},H_{S})
\end{equation}
has a purely local description, namely
\begin{equation}\label{eq:thetabar-local}
\overline{\theta}_{S}=(\overline{\theta}_{S,v}) \colon {\textstyle \prod'_{v}}K_{v}^{\times}\backslash B_{v}^{\times}/H_{v}\rightarrow{\textstyle \prod'_{v}}K_{v}^{\times}\backslash B_{S,v}^{\times}/H_{S,v}
\end{equation}
where $\overline{\theta}_{S,v}$ is the bijection induced by $\theta_{S, v} \colon B_{v} \stackrel{\simeq}{\longrightarrow} B_{S, v}$ for $v\not\in S$, and equals \[
K_{v}^{\times}\backslash B_{v}^{\times}/H_{v}\stackrel{\nr_{v}}{\longrightarrow}\nr_{v}(K_{v}^{\times})\backslash F_{v}^{\times}/\nr_{v}(H_{v})\stackrel{\nr_{S,v}^{-1}}{\longrightarrow}K_{v}^{\times}\backslash B_{S,v}^{\times}/H_{S,v}\]
for $v\in S$.
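
For example, if $v \in S$ is inert in $K$ and $H_{v} = R_{v}^{\times}$ for an Eichler order $R_{v}$ of $B_{v}$, then $\nr_{v}(H_{v}) = \mathcal{O}_{F,v}^{\times}$ and $\nr_{v}(K_{v}^{\times}) = \varpi_{v}^{2\mathbb{Z}}\mathcal{O}_{F,v}^{\times}$, so the middle and right-hand double quotients each have two elements and $\overline{\theta}_{S,v}$ is determined by $g_{v} \mapsto v(\nr_{v}(g_{v})) \bmod 2$.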
\noindent The construction of the map $\theta_S$ also gives us a reduction map
\begin{equation}\label{eq:redmap}
\red_S \colon \mathcal{CM}(G, H) \rightarrow \mathcal{X}(G_S, H_S)
\end{equation}
defined by $\red_S := \pi \circ \theta_S$, where $\pi \colon \mathcal{CM}(G_S, H_S) \rightarrow \mathcal{X}(G_S, H_S)$ is the natural projection map.
\subsection{Fine and coarse conductors}\label{subsec:cond}
We now specialize the above constructions to the case $H=\widehat{R}^{\times}$ for some Eichler
$\mathcal{O}_{F}$-order $R$ in $B$ that will be fixed throughout. The level of $R$ is a nonzero integral ideal of
$\mathcal{O}_{F}$ that we denote by $\mathfrak{n}$. We also choose two maximal $\mathcal{O}_{F}$-orders
$R'$ and $R''$ in $B$ such that $R=R'\cap R''$. Then $H_{S}=\widehat{R}_{S}^{\times}$
where $R_{S}$ is an Eichler $\mathcal{O}_{F}$-order in $B_{S}$
whose level $\mathfrak{n}_{S}$ is the prime-to-$S$ part of $\mathfrak{n}$.
We also obtain two maximal $\mathcal{O}_{F}$-orders $R'_{S}$
and $R''_{S}$ in $B_{S}$ such that $R_{S}=R'_{S}\cap R''_{S}$. For a finite place $v$ of $F$, $R_{S, v} = \theta_{S, v}(R_v)$ if $v \notin S$ while $R_{S, v}$ is the unique maximal order of $B_{S, v}$ if $v \in S$.
Explicitly,
\[
R_{S}=\left\{ b\in B_{S} \colon \forall v\notin S,\,\theta_{S,v}^{-1}(b)\in R_{v} \text{ and }\forall v \in S, \ \nr_{S, v}(b) \in \mathcal{O}_{F,v} \right\}.
\]
We have a similar explicit description for $R'_{S}$ and $R''_{S}$.
\paragraph{Definitions.} For any finite set of places $T$ of $F$, let $\mathcal{I}(T)$ be the monoid of integral ideals of $\mathcal{O}_F$ that are coprime to the places of $T$. The \emph{fine conductor} is a $\Gal_K^{\ab}$-invariant map
\begin{equation}
\mathfrak{c}_{f} \colon \mathcal{CM}(G, H) \rightarrow \mathcal{I}(\Ram_f B) \times \mathcal{I}(\Ram_f B)
\end{equation}
that is defined as follows: given a CM point $x = [g]$ in $\mathcal{CM}(G, H) = T(\mathbb{Q}) \backslash G(\mathbb{A}_f) / H$, consider the images $x'$ and $x''$ of $x$ in $T(\mathbb{Q}) \backslash G(\mathbb{A}_f) / \widehat{R'}^\times$ and $T(\mathbb{Q}) \backslash G(\mathbb{A}_f) / \widehat{R''}^\times$, respectively.
The intersection $K \cap g\widehat{R'} g^{-1}$ is an $\mathcal{O}_F$-order in $K$ that depends only on the $\Gal_K^{\ab}$-orbit
of $x$ and whose conductor $\mathfrak{c}(x')$ is an $\mathcal{O}_F$-ideal that is prime to $\Ram_f B$.
Similarly, we obtain an integral ideal $\mathfrak{c}(x'') \subset \mathcal{O}_F$ for $R''$. One then defines $\mathfrak{c}_{f}(x) := (\mathfrak{c}(x'), \mathfrak{c}(x''))$.
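
For instance, in the modular curve setting of the previous sections ($F = \mathbb{Q}$, $B = M_{2}(\mathbb{Q})$ and $H = \widehat{R}^{\times}$ for an Eichler order $R$ of level $N$), the fine conductor of a CM point $x = [E,C]$ is the pair $(c(E), c(E/C))$ of conductors of the multiplier orders of the corresponding lattices.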
The \emph{coarse conductor map}
\begin{equation}
\mathfrak{c}_{g} \colon \mathcal{CM}(G, H) \rightarrow \mathcal{I}(\Ram_f B)
\end{equation}
is defined as $\mathfrak{c}_{g}(x) = \mathfrak{c}(x') \cap \mathfrak{c}(x'') \subset \mathcal{O}_F$. The stabilizer of $x$ in $T(\mathbb{A}_f) = \widehat{K}^\times$ is then
$$
\Stab_{T(\mathbb{A}_f)}(x) = K^\times \cdot \left ( \widehat{K}^\times \cap g \widehat{R}^\times g^{-1} \right ) = K^\times \widehat{\mathcal{O}_{\mathfrak{c}_{g}(x)}}^\times,
$$
and the subfield $K(x)$ of $K^{\ab}$ fixed by the stabilizer of $x$ in $\Gal_K^{\ab}$ is the ring class field $K[\mathfrak{c}_{g}(x)]$ of conductor $\mathfrak{c}_{g}(x)$, i.e., the abelian extension of $K$ fixed by $\mathrm{Art}_K(K^\times \widehat{\mathcal{O}_{\mathfrak{c}_{g}(x)}}^\times)$.
Similarly, we define the fine and coarse conductors
$$
\mathfrak{c}_{S, \textrm{f}} \colon \mathcal{CM}(G_S, H_S) \rightarrow \mathcal I(\Ram_{f}B_S)^2 \qquad \text{and} \qquad \mathfrak{c}_{S, \textrm{g}} \colon
\mathcal{CM}(G_S, H_S) \rightarrow \mathcal I(\Ram_{f} B_S).
$$
Note that if $y = \theta_S(x)$ then $\mathfrak{c}_{S, \textrm{f}}(y)$ is the prime-to-$S$ part of $\mathfrak{c}_{f}(x)$. We thus have a commutative diagram
\begin{equation}\label{eq:DiagThetaEichler}
\xymatrix{
\mathcal{CM}(G,H) \ar@{->>}[r] \ar@{->}[d]^{\theta_S} & \Gal_{K}^{ab}\backslash\mathcal{CM}(G,H) \ar@{->}[r]^{\mathfrak c_\textrm{f}} \ar@{->}[d]^{\overline{\theta}_S} & \mathcal{I}(\mathrm{Ram}_{f}B)^{2} \ar@{->}[d]^{()_S} \\
\mathcal{CM}(G_{S},H_{S}) \ar@{->>}[r] & \Gal_{K}^{ab}\backslash\mathcal{CM}(G_{S},H_{S}) \ar@{->}[r]^{\mathfrak c_{S, \textrm{f}}} & \mathcal{I}(\mathrm{Ram}_{f}B_{S})^{2},
}
\end{equation}
where the map $()_{S}$ sends $I \in\mathcal{I}(\Ram_f B)$ to its prime-to-$S$ part $I_{S}\in\mathcal{I}(\Ram_f B_S)$.
\paragraph{Local analysis.} The right-hand square of this diagram can be analyzed by purely local means: it is the restricted product over all finite primes $v$ of $F$ of one of the following diagrams:
\vspace{0.1in}
{\bf Case 1:} $v \notin \Ram_f B \cup S$.
\[
\xymatrix{
K_{v}^{\times}\backslash B_{v}^{\times}/R_{v}^{\times} \ar@{->}[r]^-{\underline{n}_{v}} \ar@{->}[d]^{\theta_{S, v}} & \mathbb{N} \times \mathbb{N} \ar@{->}[d]^{\mathrm{id}} \\
K_{v}^{\times}\backslash B_{S,v}^{\times}/R_{S,v}^{\times} \ar@{->}[r]^-{\underline{n}_{S,v}} & \mathbb{N} \times \mathbb{N}
}
\]
\vspace{0.1in}
{\bf Case 2:} $v \in S$.
\[
\xymatrix{
K_{v}^{\times}\backslash B_{v}^{\times}/R_{v}^{\times} \ar@{->}[r]^-{\underline{n}_{v}} \ar@{->}[d]^{\theta_{S, v}} & \mathbb{N} \times \mathbb{N} \ar@{->}[d]^{0} \\
K_{v}^{\times}\backslash B_{S,v}^{\times}/R_{S,v}^{\times} \ar@{->}[r] & 0
}
\]
\vspace{0.1in}
{\bf Case 3:} $v \in \Ram_f B$.
\[
\xymatrix{
K_{v}^{\times}\backslash B_{v}^{\times}/R_{v}^{\times} \ar@{->}[r]^-{\underline{n}_{v}} \ar@{->}[d]^{\theta_{S, v}} & 0 \ar@{->}[d]^{0} \\
K_{v}^{\times}\backslash B_{S,v}^{\times}/R_{S,v}^{\times} \ar@{->}[r] & 0
}
\]
Here, for $\star \in \{\emptyset,S\}$, the map $\underline{n}_{\star,v}$ sends $K_{v}^{\times}g_{v}R_{\star,v}^{\times}$ to the pair of integers $(n',n'')$ such that $K_{v}\cap g_{v}R'_{\star, v}g_{v}^{-1}$ (resp. $K_{v}\cap g_{v}R''_{\star, v}g_{v}^{-1}$)
is the order of conductor $\mathfrak p_{v}^{n'}$ (resp. $\mathfrak p_v^{n''}$) in $\mathcal{O}_{K_{v}}$, where $\mathfrak p_{v}$ is the maximal ideal in $\mathcal{O}_{F_{v}}$.
These maps are related to the one defined in \eqref{eq:DefFineCond} as follows: fix a simple left $B_{\star, v}$-module $V_{\star, v}$, let $\mathcal{L}_{v}$ be the set of all $\mathcal{O}_{F_{v}}$-lattices in $V_{\star, v}$ and choose two lattices $\Lambda'_v$ and $\Lambda''_v$ in $\mathcal{L}_{v}$ that are stabilized by $(R'_{\star, v})^\times$ and $(R''_{\star, v})^{\times}$, respectively.
Thus, if
$(i_1, i_2) = \mathrm{inv}(\Lambda'_v, \Lambda''_v)$ as defined in (\ref{eq:DefInvRel}) then we have
$\left| i_1 - i_2 \right|=v(\mathfrak{n})$, $R_{\star, v}^{\times}$ is the stabilizer of $(\Lambda'_v, \Lambda''_v) \in \mathcal{L}_{v} \times \mathcal{L}_v$ and $B_{\star, v}^{\times}$ acts transitively on the set $\mathcal{L}_{v}(i_1, i_2)$ of pairs of lattices $(\Lambda', \Lambda'')$ with $\mathrm{inv}(\Lambda', \Lambda'')=(i_1, i_2)$.
Therefore,
\begin{equation}\label{eq:ident}
K_{v}^{\times}\backslash B_{\star, v}^{\times}/R_{\star, v}^{\times}\simeq K_{v}^{\times}\backslash\mathcal{L}_{v}(i_1, i_2)
\end{equation}
is contained in $K_{v}^{\times}\backslash\left(\mathcal{L}_{v} \times \mathcal{L}_v\right)$. The
map $\underline{n}_{v}$ that we have defined above on the former set is equal to the restriction of the map
defined by \eqref{eq:DefFineCond} on the latter. In particular,
the fiber of $\underline{n}_{v}$ over any $(n', n'') \in \mathbb{N} \times \mathbb{N}$
is finite of order $N_{v}(n', n'', v(\mathfrak{n}))$ as defined in Lemma~\ref{lem:LocComp} with $q = N(v)$ the order of the residue field of $F$ at $v$.
\subsection{A sign invariant}
In the three diagrams above, the vertical maps are surjective. In
Cases $1$ and $3$, the restriction of the first vertical map to compatible
fibers of the two horizontal maps is still obviously surjective. But
in Case $2$, it may be that such a restriction fails to be surjective.
Since for $v \in S$,
\[
K_{v}^{\times}\backslash B_{S,v}^{\times}/R_{S,v}^{\times}\simeq
\begin{cases}
\mathbb{Z}/2\mathbb{Z} & \mbox{if }K_{v}/F_{v}\mbox{ is unramified}\\
\{0\} & \mbox{otherwise,}
\end{cases}
\]
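(Indeed, for $v \in S$ the algebra $B_{S,v}$ is division and $R_{S,v}$ is its unique maximal order, so $v \circ \nr_{S,v}$ identifies $B_{S,v}^{\times}/R_{S,v}^{\times}$ with $\mathbb{Z}$; the image of $K_{v}^{\times}$ is $2\mathbb{Z}$ when $K_{v}/F_{v}$ is unramified and all of $\mathbb{Z}$ when it is ramified.)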
this occurs precisely when $v\in S$ is inert in $K$. We thus let $S'$
be the set of all places $v \in S$ that are inert in $K$. For
any such $v \in S'$, the valuation of the reduced norm induces a projection
$$
K_v^\times \backslash B_v^\times / R_v^{\times} \twoheadrightarrow NK_v^\times \backslash F_v^\times / \nr(R_v^\times) \cong\mathbb{Z}/2\mathbb{Z},
$$
as well as a bijection $K_v^\times \backslash B_{S, v}^\times / R_{S, v}^\times \cong \mathbb{Z} / 2\mathbb{Z}$. We thus obtain maps
$$
\varphi_{S'} \colon \mathcal{CM}(G, H) \rightarrow \Gal_K^{\ab} \backslash \mathcal{CM}(G, H) \rightarrow
\prod_{v \in S'} K_v^\times \backslash B_{v}^\times / R_v^\times \rightarrow (\mathbb{Z} / 2\mathbb{Z})^{|S'|}
$$
and
$$
\psi_{S'} \colon \mathcal{CM}(G_S, H_S) \rightarrow \Gal_K^{\ab} \backslash \mathcal{CM}(G_S, H_S) \twoheadrightarrow
\prod_{v \in S'} K_v^\times \backslash B_{S, v}^\times / R_{S, v}^\times \cong (\mathbb{Z} / 2\mathbb{Z})^{|S'|}.
$$
By construction, $\varphi_{S'} = \psi_{S'} \circ \theta_S$.
The map $\varphi_{S'}$ admits a description in terms of lattices.
Under the identification \eqref{eq:ident}, the element $K_v^\times g_v R_v^\times$ corresponds to the pair of lattices
$(g_v \Lambda_v', g_v \Lambda_v'')$. Let $y'$ and $y''$ be the images of $\Lambda_v'$ and $\Lambda_v''$ in the Bruhat--Tits tree $\mathcal{V}_v = F_v^\times \backslash \mathcal{L}_v$ and recall from Section~\ref{sec:ql} that for all $k \in \mathbb{N}$,
$$
\mathcal{V}_v(k) = \{y \in \mathcal{V}_v \colon n(y) = k\} = \{y \in \mathcal{V}_v \colon \dist(y, y_0) = k\},
$$
where $\mathcal{V}_v(0) = \{y_0\}$ since $v$ is inert in $K$. Therefore,
$$
n(g_v y') - n(y') = \dist(g_v y', y_0) - \dist(y', y_0) \equiv \dist(g_v y', y') \equiv v(\det(g_v)) \textrm{ mod } 2
$$
and similarly for $y''$. Note that the same argument also shows that
$$
n(y') - n(y'') \equiv v(\mathfrak n) \textrm { mod } 2.
$$
The map $\varphi_{S'}$ can thus be computed as follows: for $x \in \mathcal{CM}(G, H)$ with $\mathfrak{c}_{f}(x) = (\mathfrak c' , \mathfrak c'')$,
\begin{equation}\label{eq:defphiS}
\varphi_{S'}(x) = (v(\mathfrak c') - n(\Lambda_v') \textrm{ mod }2)_{v \in S'} = (v(\mathfrak c'') - n(\Lambda_v'') \textrm{ mod }2)_{v \in S'} \in (\mathbb{Z} / 2\mathbb{Z})^{|S'|}.
\end{equation}
In particular, $\varphi_{S'}$ factors through the fine conductor map $\mathfrak{c}_{f}$.
\subsection{Statement of the main theorem}\label{subsec:corresp}
Fix $(\mathfrak{c}', \mathfrak{c}'') \in \mathcal I(\Ram_{f} B)^2$ such that $v(\mathfrak c') - v(\mathfrak c'') \equiv v(\mathfrak n)$ mod 2 for every $v \in S'$. Let
$e_{S'}(\mathfrak c', \mathfrak c'')$ be the element of $(\mathbb{Z}/2\mathbb{Z})^{|S'|}$ defined by the right-hand side of \eqref{eq:defphiS}.
Let $\mathfrak c = \mathfrak c' \cap \mathfrak c''$, let $\mathfrak{c}_S$, $\mathfrak{c}'_S$ and $\mathfrak{c}''_S$ denote the prime-to-$S$ parts of $\mathfrak{c}$, $\mathfrak{c}'$ and $\mathfrak{c}''$, and let
$$
\kappa = |\Gal(K[\mathfrak c] / K[\mathfrak c_S])| \prod_{v \in S}N_v(v(\mathfrak{c}'), v(\mathfrak{c}''), v(\mathfrak n)),
$$
where $N_v(n', n'', \delta)$ is the function defined in Lemma~\ref{lem:LocComp} with $q = N(v)$ being the order of the residue field of $F$ at $v$.
\begin{thm}\label{thm:main}
By restriction, the map $\theta_S$ induces a surjective $\kappa$-to-1 correspondence
$$
\theta_S \colon \mathfrak{c}_{f}^{-1}(\mathfrak c', \mathfrak c'') \rightarrow \mathfrak{c}_{S, \textrm{f}}^{-1}(\mathfrak c'_S, \mathfrak c''_S) \cap \psi_{S'}^{-1}(e_{S'}(\mathfrak c', \mathfrak c'')).
$$
\end{thm}
\noindent By restricting to a fiber of the projection $\pi$ in the target, we immediately obtain the following:
\begin{cor}\label{cor:main}
For any point $s \in \mathcal{X}(G_S, H_S)$, the map $\theta_S$ induces a $\kappa$-to-1 correspondence
\begin{equation}\label{eq:corr}
\theta_S \colon \red_S^{-1}(s) \cap \mathfrak{c}_{f}^{-1}(\mathfrak c', \mathfrak c'') \rightarrow \pi^{-1}(s) \cap \mathfrak{c}_{S, \textrm{f}}^{-1}(\mathfrak c'_S, \mathfrak c''_S) \cap \psi_{S'}^{-1}(e_{S'}(\mathfrak c', \mathfrak c'')).
\end{equation}
\end{cor}
\subsection{Computation of the fiber for the definite algebra $B_S$}
Before proving the theorem, we shall explain how to compute the right-hand side of the correspondence \eqref{eq:corr}. Fix $s \in \mathcal{X}(G_S, H_S)$ and some $g \in G_S(\mathbb{A}_f)$ above $s$. Then $b \mapsto b^{-1}g$ induces a bijection
$$
R_{S, g}^\times \backslash G_S(\mathbb{Q}) / T(\mathbb{Q}) \stackrel{\sim}{\longrightarrow} \pi^{-1}(s),
$$
where $R_{S, g} = g \widehat{R_S}g^{-1} \cap B_S$ is an Eichler order of level $\mathfrak n_S$ in $B_S$. On the other hand, the map $b \mapsto \text{ad}(b) \circ \iota_S$ induces a bijection
$$
G_S(\mathbb{Q} ) / T(\mathbb{Q}) \stackrel{\sim}{\longrightarrow} \Hom_{F-\alg}(K, B_S)
$$
that is equivariant for the natural left actions of $G_S(\mathbb{Q}) = B_S^\times$ on both sides. Combining these two identifications, we obtain
$$
\pi^{-1}(s) \cong R_{S, g}^\times \backslash \Hom_{F-\alg}(K, B_S).
$$
Let also $R_{S, g}' = g \widehat{R_S'}g^{-1} \cap B_S$ and $R_{S, g}'' = g \widehat{R_S''}g^{-1} \cap B_S$. These are maximal orders in $B_S$ and $R_{S, g} = R_{S,g}' \cap R_{S, g}''$. Under these identifications,
\begin{itemize}
\item The restriction to $\pi^{-1}(s)$ of the fine conductor map $\mathfrak{c}_{S, f} \colon \mathcal{CM}(G_S, H_S) \rightarrow \mathcal I(\Ram_{f}B_S)^2$ is induced by the $R_{S, g}^\times$-invariant map
$$
\mathfrak{c}_{S, f}^g \colon \Hom_{F-\alg}(K, B_S) \rightarrow \mathcal I(\Ram_{f} B_S)^2
$$
whose fiber over $(\mathfrak{c}_S', \mathfrak{c}_S'')$ consists of the embeddings $j \colon K \rightarrow B_S$ such that
$j^{-1}(R_{S, g}') = \mathcal{O}_{\mathfrak{c}'_S}$ and $j^{-1}(R_{S, g}'') = \mathcal{O}_{\mathfrak{c}''_S}$, so that $j^{-1}(R_{S, g}) = \mathcal{O}_{\mathfrak{c}_S}$ with $\mathfrak{c}_S = \mathfrak{c}'_S \cap \mathfrak{c}''_S$.
\item The restriction to $\pi^{-1}(s)$ of the sign invariant $\psi_{S'} \colon \mathcal{CM}(G_S, H_S) \rightarrow (\mathbb{Z} / 2\mathbb{Z})^{|S'|}$ is induced by the $R_{S, g}^\times$-invariant map
$$
\psi_{S'}^g \colon \Hom_{F-\alg}(K, B_S) \stackrel{\text{sp}}{\longrightarrow} \prod_{v \in S'}\Hom_{\mathbb{F}(v)}(\mathbb{K}(v), \mathbb{B}_S(v))
\stackrel{\sim}{\rightarrow} (\mathbb{Z} / 2\mathbb{Z})^{|S'|},
$$
where $\mathbb{F}(v)$, $\mathbb{K}(v)$ and $\mathbb{B}_S(v)$ are the residue fields of the maximal
$\mathcal{O}_{F_v}$-orders in $F_v$, $K_v$ and $B_{S, v}$, respectively, and $\text{sp}$ is the natural specialization map. Moreover, the last bijection is the unique isomorphism of $(\mathbb{Z} / 2\mathbb{Z})^{|S'|}$-torsors that maps $\text{sp}(\iota_S)$ to $\psi_{S'}(s) = (v(\det g_v) \text{ mod }2)_{v \in S'}$.
\end{itemize}
\begin{rem}
For a coarse conductor $\mathfrak{c} \in \mathcal I(\Ram_{f}B)$ that is prime to the level $\mathfrak n$, we may replace the fine conductor maps by the coarse ones in the above statements since
$$
\mathfrak{c}_{f}^{-1}(\mathfrak{c}, \mathfrak{c}) = \mathfrak{c}_g^{-1}(\mathfrak{c}) \qquad \text{and} \qquad \mathfrak{c}_{S, f}^{-1}(\mathfrak{c}_S, \mathfrak{c}_S) = \mathfrak{c}_{S, g}^{-1}(\mathfrak{c}_S).
$$
In this situation, the right-hand side of Corollary~\ref{cor:main} counts the number of $R_{S, g}^\times$-conjugacy classes of optimal embeddings $\mathcal{O}_{\mathfrak{c}_S} \hookrightarrow R_S$ that induce a given collection of isomorphisms $\mathbb{K}(v) \rightarrow \mathbb{B}_S(v)$ between the corresponding residue fields at $v \in S'$. If, in addition, $S'$ consists of a single prime $S' = \{\ell\}$, one can remove this last condition by identifying the embeddings that are conjugate under the non-trivial element of $\Gal(K/F) \cong \Gal(\mathbb{K}(v) / \mathbb{F}(v))$.
\end{rem}
\begin{rem}
For $F = \mathbb{Q}$, one can interpret the right-hand side of \eqref{eq:corr} in terms of primitive representations of integers by ternary quadratic forms (see \cite[pp.172--173]{gross:heights} and
\cite[Prop.4.2]{jetchev-kane} for details).
\end{rem}
\section{Introduction}
Let $F$ be a totally real number field and let $B$ be a quaternion algebra over $F$. Let $G = \Res_{F/\mathbb{Q}}B^\times$ be the algebraic group over $\mathbb{Q}$ associated to $B^\times$. Let $\mathbb{A}_f$ be the finite adeles of $\mathbb{Q}$.
For an open subgroup $H \subset G(\mathbb{A}_f)$ that is compact modulo the center let
\[
\mathcal{X}(G,H) =G(\mathbb{Q})\backslash G(\mathbb{A}_f)/H.
\]
This is a finite set. In the case when $H=\widehat{R}^{\times}$ for
some $\mathcal{O}_{F}$-order $R$ in $B$, the map $\widehat{b}\mapsto(\widehat{b}\cdot\widehat{R})\cap B$
identifies $\mathcal{X}(G,H)$ with the set of $B^{\times}$-homothety
classes of locally principal fractional right $R$-ideals in $B$.
When $B$ is not totally definite, the reduced norm $\nr \colon B\rightarrow F$
induces a bijection
\[
\mathcal{X}(G,H)\stackrel{\simeq}{\longrightarrow}\mathrm{nr}(B^{\times})\backslash\widehat{F}^{\times}/\mathrm{nr}(H)
\]
by the strong approximation theorem \cite[p.81]{vigneras:quaternion}. Moreover, the norm theorem \cite[Thm.III.4.1]{vigneras:quaternion} implies that $\mathrm{nr}(B^{\times})$
is precisely the subgroup of elements $\lambda\in F^{\times}$ such that $\lambda_{v}>0$
for all places $v\mid\infty$ of $F$ where $B$ is not split, i.e.,
where $B_{v}\not\simeq M_{2}(F_{v})$. On the other hand, if $B$ is
totally definite, then $\mathcal{X}(G,H)$ is a genuinely non-commutative
object, which shows up quite often in low dimensional arithmetic geometry (see, e.g., \cite{gross:heights} and \cite{ribet:modreps}).
Next, let $K$ be a totally imaginary quadratic extension of $F$ that embeds into $B$. Fix an embedding $K\hookrightarrow B$. If $T = \Res_{K/\mathbb{Q}} K^\times$ is the associated rational torus, we get a corresponding embedding $T \hookrightarrow G$. Define
\[
\mathcal{CM}(G,H)=T(\mathbb{Q})\backslash G(\mathbb{A}_f)/H.
\]
This is now an infinite set with an obvious projection map
\begin{equation}\label{eq:projred}
\pi \colon \mathcal{CM}(G,H)\rightarrow\mathcal{X}(G,H).
\end{equation}
In addition, it is equipped with a left action
of $T(\mathbb{A}_f)$ with finite orbits. Using Artin's reciprocity
map $\rec_{K} \colon T(\mathbb{A}_f) = \widehat{K}^{\times}\twoheadrightarrow\Gal_{K}^{ab}$
whose kernel equals the closure of $K^{\times}$ in $\widehat{K}^{\times}$,
we can also view this action as a continuous action of $\Gal_{K}^{ab}$. We shall thus refer to it as the
\emph{Galois action} on $\mathcal{CM}(G, H)$.
\vspace{0.1in}
\noindent As suggested by the notation, such sets most frequently occur
as sets of complex multiplication points, or special points, on
certain Shimura varieties. Assume for instance that $B$ is split
at a single place $v\mid\infty$ of $F$, say $v=v_{\iota}$ corresponding
to an embedding $\iota:F\rightarrow\mathbb{R}$. Let $X$ be the Shimura
curve of level $H$ attached to $B$. It is an algebraic curve over
the reflex field $\iota F$, and
\[
\mathcal{CM}(G,H) \simeq G(\mathbb{Q})\backslash\left(G(\mathbb{A}_f)/H\times G(\mathbb{Q})\cdot\tau\right) \subset X(\mathbb{C})=G(\mathbb{Q})\backslash\left(G(\mathbb{A}_f)/H\times(\mathbb{C}-\mathbb{R})\right)
\]
is the set of special points with complex multiplication by $K$ in
$X$. In this formula, we have fixed an isomorphism $B\otimes_{F,\iota}\mathbb{R}\simeq M_{2}(\mathbb{R})$
to define the action of $G(\mathbb{Q})$ on $\mathbb{C}-\mathbb{R}$,
and $\tau$ is the unique point on the upper half-plane whose stabilizer is $K^\times = T(\mathbb{Q})$ under this isomorphism. These special points are defined over the maximal abelian extension of $\iota K$ in $\mathbb{C}$, and the above Galois action is the natural one (this follows from the definition of Shimura varieties). In this setting, the projection map is almost
the projection to the set of connected components of $X$, namely
\[
\mathcal{CM}(G,H)\hookrightarrow X(\mathbb{C})\twoheadrightarrow\pi_{0}(X(\mathbb{C}))=G(\mathbb{Q})\backslash\left(G(\mathbb{A}_f)/H\times\{\pm1\}\right)=G(\mathbb{Q})^+ \backslash G(\mathbb{A}_f) / H.\]
Here $G(\mathbb{Q})$ acts on $\{\pm1\}$ by the sign of $\iota\circ\det \colon G(\mathbb{Q}) \rightarrow\mathbb{R}^{\times}$,
and $G(\mathbb{Q})^+$ is the kernel of this action.
\paragraph{Reduction maps.}
There are similar but more interesting maps to consider. For instance,
let $v$ be a finite place of $F$ such that $B_{v}\simeq M_{2}(F_v)$.
Then our Shimura curve has a natural model over the corresponding
local ring, the special fiber of which contains a distinguished
finite set of supersingular points, and this finite set of
points may be identified with a set $\mathcal{X}(G',H')$ as above,
where $G'$ is the algebraic group over $\mathbb{Q}$ associated to
the totally definite quaternion algebra $B'$ over $F$ which is obtained
from $B$ by changing the invariants at $v_{\iota}$ and $v$ (see, e.g., \cite{deligne-rapoport} and \cite{katzmazur} for
$B=M_{2}(\mathbb{Q})$, and \cite{carayol} for
the remaining cases). If we choose an extension of $v$ to the maximal
abelian extension of $\iota K$ in $\mathbb{C}$, then each
CM point extends to the corresponding local ring. Moreover, if $v$
does not split in $K$, these extended points reduce to supersingular
points on the special fiber and one obtains a reduction map
\begin{equation}\label{eq:red}
\red \colon \mathcal{CM}(G,H)\rightarrow\mathcal{X}(G',H')
\end{equation}
that is described in \cite{cornut:inventiones} and Section~\ref{sec:apps} if $B=M_{2}(\mathbb{Q})$ and in \cite{cornut-vatsal} for the remaining cases. On the other hand, if $B$ is ramified at
$v$, there is a similar description for the reduction map with
values in the set of irreducible components of the special fiber of
a suitable model of $X$ via Ribet's theory of bimodules. We refer
to \cite{molina} for a survey of such maps, and to \cite{molina:hyperelliptic} for
applications to the determination of explicit equations for certain
hyperelliptic Shimura curves.
Given such a reduction map, it is expected that the Galois orbits
in $\mathcal{CM}(G,H)$ tend to be equidistributed among the finitely many fibers of the reduction map
$\mathrm{red}$. Such equidistribution results are already known in various cases
\cite{michel:subconvexity}, \cite{cornut-vatsal}, \cite{michel-harcos}, \cite{jetchev-kane}, \cite{molina} and
were crucial in the proof of Mazur's non-vanishing conjecture by the first author and Vatsal
\cite{cornut:inventiones}, \cite{vatsal:uniform}, \cite{cornut-vatsal:durham}.
\paragraph{Our contributions.}
We propose a simple strategy to reduce the study of the arithmetically
interesting reduction maps \eqref{eq:red} to that of the more straightforward
projections \eqref{eq:projred}. In all cases, there is indeed a natural
$\mathrm{Gal}_{K}$-equivariant map
\[
\theta \colon \mathcal{CM}(G,H)\rightarrow\mathcal{CM}(G',H')
\]
such that $\mathrm{red}=\pi\circ\theta$. Thus, for any $\mathrm{Gal}_{K}$-orbit
$\Gamma$ in $\mathcal{CM}(G,H)$ and any point $s\in\mathcal{X}(G',H')$,
we obtain a $\kappa$-to-$1$ surjective map
\[
\theta \colon \Gamma\cap\mathrm{red}^{-1}(s)\rightarrow\Gamma'\cap\pi^{-1}(s)
\]
where $\Gamma'=\theta(\Gamma)$ is a $\mathrm{Gal}_{K}$-orbit in
$\mathcal{CM}(G',H')$ and $\kappa=\kappa(\Gamma)=\left|\Gamma\right|/\left|\Gamma'\right|$.
This paper essentially implements this strategy when $H=\widehat{R}^{\times}$
for some Eichler $\mathcal{O}_{F}$-order $R$ in $B$. The algebraic
description of $\theta$ (and also of $\mathrm{red}=\pi\circ\theta$)
is given in Section~\ref{subsec:adelic} in a more general setting following the
conventions of \cite{cornut-vatsal}. The size of the Galois orbits (and thus also
the constant $\kappa$ above) is controlled by a simple invariant that is defined in Section~\ref{subsec:cond} together with a refinement: these are the coarse and fine conductors $\mathfrak{c}_{g}$ and $\mathfrak{c}_{f}$, respectively. The number of Galois orbits with a prescribed
fine conductor is also given there using elementary local computations
that are carried out in Section~\ref{sec:ql}. Our main result, Theorem~\ref{thm:main},
then describes the restriction of $\theta$ to the fibers of the fine
conductor map. If we furthermore restrict $\theta$ to a given fiber
of $\mathrm{red}$ and $\pi$, we obtain an explicit correspondence
between certain sets of CM points on two distinct quaternion algebras,
a special case of which is used in \cite{jetchev-kane} and thoroughly explained
in the final section. Another case of our main theorem is used in \cite{molina:hyperelliptic} and \cite{molina} to compute explicit equations for Shimura curves with no cusps.
\subsection{Proof of Theorem~\ref{thm:main}}\label{sec:proof}
To prove the theorem, let $(\mathfrak{c}_{S}',\mathfrak{c}_{S}'')\in\mathcal{I}(\mathrm{Ram}_{f}B_{S})^{2}$ be the prime-to-$S$ parts of
$(\mathfrak{c}', \mathfrak{c}'') \in \mathcal{I}(\mathrm{Ram}_{f} B)^2$.
Consider the diagram
$$
\xymatrix{
\mathfrak{c}_{\textrm{f}}^{-1}(\mathfrak{c}',\mathfrak{c}'') \ar@{->>}[r] \ar@{->}[d]^{\theta_S^{(1)}} & \Gal_{K}^{ab}\backslash \mathfrak{c}_{\textrm{f}}^{-1}(\mathfrak{c}',\mathfrak{c}'') \ar@{->}[d]^{\theta_S^{(2)}} \\
\mathfrak{c}_{S, \textrm{f}}^{-1}(\mathfrak{c}_{S}',\mathfrak{c}_{S}'') \ar@{->>}[r] & \Gal_{K}^{ab}\backslash \mathfrak{c}_{S, \textrm{f}}^{-1}(\mathfrak{c}_{S}',\mathfrak{c}_{S}'')
}
$$
obtained from the first square of \eqref{eq:DiagThetaEichler} by restriction to the relevant fibers of $\mathfrak{c}_{\textrm{f}}$ and
$\mathfrak{c}_{S, \textrm{f}}$. Note that the vertical maps may fail to be surjective, but the local analysis of Section~\ref{subsec:action} shows the following:
\begin{lem}\label{lem:locgalorb}
The map $\theta_S^{(2)}$ maps $\Gal_{K}^{ab}\backslash \mathfrak{c}_{\textrm{f}}^{-1}(\mathfrak{c}',\mathfrak{c}'')$ onto
$ \Gal_K^{\ab} \backslash \left (\mathfrak{c}_{S, \textrm{f}}^{-1}(\mathfrak{c}_S', \mathfrak{c}_S'') \cap \psi_{S'}^{-1}(e_{S'}(\mathfrak{c}', \mathfrak{c}''))\right )$ and moreover,
it is $k^{(2)}$-to-1, where
$$
k^{(2)}=\prod_{v\in S}N_{v}\left(v(\mathfrak{c}'),v(\mathfrak{c}''),v(\mathfrak{n})\right).
$$
\end{lem}
\noindent The same local analysis allows us to compute the number of Galois orbits of CM points with a prescribed fine conductor for each of the algebras
$B$ and $B_S$:
\begin{lem}\label{lem:galorb}
(i) The number of $\Gal_{K}^{ab}$-orbits in $\mathcal{CM}(G,H)$ with
fine conductor $(\mathfrak{c}',\mathfrak{c}'')\in\mathcal{I}(\mathrm{Ram}_{f}B)^{2}$
is finite and equal to
$$
N_{B}(\mathfrak{c}',\mathfrak{c}'',\mathfrak{n})= 2^{\#\{v \in \mathrm{Ram}_{f} B,\ v\ \mathrm{inert\ in\ }K\}} \prod_{v\notin\mathrm{Ram}_{f}B}N_{v}\left(v(\mathfrak{c}'),v(\mathfrak{c}''),v(\mathfrak{n})\right),
$$
where the product ranges over the finite places of $F$ not in $\mathrm{Ram}_{f}B$. The number of CM points in each of these orbits is equal to
\[
h(\mathfrak{c})=\left|\Gal(K[\mathfrak{c}]/K)\right|\quad\mbox{where }\mathfrak{c}=\mathfrak{c}'\cap\mathfrak{c}''.
\]
\noindent (ii) The number of $\Gal_{K}^{ab}$-orbits in $\mathcal{CM}(G_S,H_S)$ with fine conductor
$(\mathfrak{c}'_S,\mathfrak{c}''_S)\in\mathcal{I}(\mathrm{Ram}_{f}B_S)^{2}$ is finite and equal to
$$
N_{B_S}(\mathfrak{c}'_S,\mathfrak{c}''_S,\mathfrak{n})= 2^{\#\{v \in \mathrm{Ram}_{f} B_S,\ v\ \mathrm{inert\ in\ }K\}} \prod_{v\notin\mathrm{Ram}_{f}B_S}N_{v}\left(v(\mathfrak{c}'_S),v(\mathfrak{c}''_S),v(\mathfrak{n})\right),
$$
where the product ranges over the finite places of $F$ not in $\mathrm{Ram}_{f}B_S$. The number of CM points in each of these orbits is equal to
\[
h(\mathfrak{c}_S)=\left|\Gal(K[\mathfrak{c}_S]/K)\right|\quad\mbox{where }\mathfrak{c}_S=\mathfrak{c}'_S\cap\mathfrak{c}''_S.
\]
\end{lem}
\noindent Theorem~\ref{thm:main} now follows easily from Lemmas~\ref{lem:locgalorb} and~\ref{lem:galorb}.
\section{Review of quadratic orders}\label{sec:ql}
In this section only, $\mathcal{O}_{F}$ will be a Dedekind domain with fraction field $F$. Let $K$ be a semi-simple commutative $F$-algebra of dimension $2$, i.e., $K\simeq F\times F$ or $K$ is a quadratic field extension of $F$.
Let $\mathcal{O}_{K}$ be the integral closure of $\mathcal{O}_{F}$ in $K$. The map which sends $\mathfrak c$ to $\mathcal{O}_{\mathfrak c}=\mathcal{O}_{F}+\mathfrak c\mathcal{O}_{K}$ is a bijection from the set of all non-zero ideals $\mathfrak c \subset\mathcal{O}_{F}$ onto the set of all $\mathcal{O}_{F}$-orders in $K$. It is well-known that all such orders are \emph{Gorenstein} rings. We refer to $\mathfrak c$ as the conductor of $\mathcal{O}=\mathcal{O}_{\mathfrak c}$.
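For instance (a standard example added for illustration), when $F=\mathbb{Q}$ and $K$ is an imaginary quadratic field, this recovers the classical orders $\mathbb{Z}+c\,\mathcal{O}_{K}$ of conductor $c\in\mathbb{Z}_{>0}$.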
\subsection{Quadratic lattices}\label{subsec:quadlat}
Fix a free, rank one $K$-module $V$. Let $\mathcal{L}$ be the set of all full $\mathcal{O}_{F}$-lattices in $V$. The \emph{conductor} of a lattice $\Lambda \in\mathcal{L}$ is the conductor $c(\Lambda)$ of the $\mathcal{O}_{F}$-order
$\mathcal{O}(\Lambda)=\{\lambda\in K:\lambda \Lambda \subset \Lambda\}=\mathcal{O}_{c(\Lambda)}$.
It follows from~\cite[Prop.7.2]{bass:gorenstein} that $\Lambda$ is a projective rank
one $\mathcal{O}_{c(\Lambda)}$-module. Let $[\Lambda]$ be its isomorphism class in the Picard group $\Pic(\mathcal{O}_{c(\Lambda)})$. Since any two $\mathcal{O}_F$-lattices in $V$ are $K^\times$-homothetic precisely when they have the same conductor and define the same class in the relevant Picard group, the map $\Lambda \mapsto [\Lambda]$ induces a bijection
\begin{equation}
K^{\times}\backslash\mathcal{L}\simeq \bigsqcup_{\mathfrak c}\mathrm{Pic}(\mathcal{O}_{\mathfrak c}).
\end{equation}
When $\mathcal{O}_{F}$ is a local ring with maximal ideal $\mathfrak{p}_{F}$, $\mathrm{Pic}(\mathcal{O}_{\mathfrak{p}_{F}^{n}})=\{1\}$ for all $n$, so the above bijection becomes
\begin{equation}
K^\times \backslash \mathcal{L} \simeq \mathbb{N}, \qquad K^\times \Lambda \mapsto n(\Lambda),
\end{equation}
where $n(\Lambda)$ is the unique non-negative integer for which $c(\Lambda)=\mathfrak{p}_{F}^{n(\Lambda)}$.
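Concretely, if $e$ is a $K$-basis of $V$, then $\Lambda_{n}=\mathcal{O}_{\mathfrak{p}_{F}^{n}}e$ has conductor $\mathfrak{p}_{F}^{n}$, so the lattices $\Lambda_{n}$, $n\in\mathbb{N}$, form a complete set of representatives for $K^{\times}\backslash\mathcal{L}$; these representatives reappear in the argument of the next subsection.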
\subsection{The action of $K^\times$ on $\mathcal{L} \times \mathcal{L}$}\label{subsec:action}
Assume that $\mathcal{O}_F$ is a discrete valuation ring. We will describe the orbits of $K^{\times}$ acting diagonally on $\mathcal{L} \times \mathcal{L}$.
\paragraph{A $K^\times$-invariant of $\mathcal{L}^2$.} There is an invariant $K^{\times}\backslash\mathcal{L}^2 \twoheadrightarrow (K^{\times}\backslash\mathcal{L})^2 \simeq\mathbb{N}^{2}$,
given by
\begin{equation}
x=(\Lambda',\Lambda'')\mapsto\underline{n}(x)=\left(n(\Lambda'),n(\Lambda'')\right).\label{eq:DefFineCond}\end{equation}
There is another invariant $K^{\times}\backslash\mathcal{L}^2 \twoheadrightarrow \GL_{F}(V)\backslash\mathcal{L}^{2}\simeq\mathfrak{S}_{2}\backslash\mathbb{Z}^{2}$
that describes the relative position of two lattices. It maps $x=(\Lambda', \Lambda'')$
to the unique unordered pair of integers $\mathrm{inv}(x)=\{i_1, i_2\}\in\mathfrak{S}_{2}\backslash\mathbb{Z}^{2}$
for which there exists an $F$-basis $(e_{1},e_{2})$ of $V$ such that
\begin{equation}
\Lambda'=\mathcal{O}_{F}e_{1}\oplus\mathcal{O}_{F}e_{2}\quad\mbox{and}\quad \Lambda''=\mathfrak{p}_{F}^{i_{1}}e_{1}\oplus \mathfrak{p}_{F}^{i_{2}}e_{2}.\label{eq:DefInvRel}
\end{equation}
\paragraph{The distance function.} This latter invariant is related to the distance function on the set of vertices $\mathcal{V} = F^\times \backslash \mathcal{L}$ of
the Bruhat--Tits tree of $\PGL_F(V)$: if $v'$ and $v''$ are the images of $\Lambda'$ and $\Lambda''$ in $\mathcal{V}$, then
$\dist(v', v'') = \left | i_1 - i_2\right |$. Since $K^\times \backslash \mathcal{L} \cong \mathbb{N}$, we also have $K^\times \backslash \mathcal{V} \cong \mathbb{N}$.
In other words, the function $n$ on $\mathcal{L}$ descends to a function $n \colon \mathcal{V} \rightarrow \mathbb{N}$ whose fibers
$\mathcal{V}(k) = \{v \in \mathcal{V} \colon n(v) = k\}$ are precisely the
$K^\times$-orbits in $\mathcal{V}$. We claim that also $\mathcal{V}(k) = \{v \in \mathcal{V} \colon \dist(v, \mathcal{V}(0)) = k\}$ for all $k \in \mathbb{N}$.
To prove this, first note that $\mathcal{V}(0)$ is a convex subset of $\mathcal{V}$, namely a single vertex
if $K$ is an unramified extension of $F$, a pair of adjacent vertices if $K$ is a ramified
extension of $F$, and a line in the building (i.e., an apartment) if $K \cong F \times F$.
Now, let $v$ be any vertex in $\mathcal{V}(k)$. Then $v$ is represented by $\Lambda = \mathcal{O}_{\mathfrak{p}_F^k} e$ for some $K$-basis
$e$ of $V$.
For $0 \leq j \leq k$, let $v_j \in \mathcal{V}$ be the $F^\times$-homothety class of $\Lambda_j = \mathcal{O}_{\mathfrak{p}_F^j} e \in \mathcal{L}$. Then $v_j \in \mathcal{V}(j)$ and $(v_k, v_{k-1}, \dots , v_0)$ is a path of length $k$ from $v = v_k$ to the vertex $v_0$
of $\mathcal{V}(0)$. Therefore $\dist(v, \mathcal{V}(0)) = j$ for some $0 \leq j \leq k$, and the convexity of $\mathcal{V}(0)$
implies that the $j$th term in our path, namely $v_{k-j}$, must already lie in $\mathcal{V}(0)$.
Therefore $k-j = 0$ and $\dist(v, \mathcal{V}(0)) = k$.
\paragraph{Counting the $K^\times$-orbits.}
Finally, fix $(n',n'')\in\mathbb{N} \times \mathbb{N}$, $(i_{1},i_{2})\in\mathbb{Z} \times \mathbb{Z}$
and let $\delta=\left|i_{1}-i_{2}\right|$. It follows from the above
considerations that the projection $\mathcal{L}\twoheadrightarrow\mathcal{V}$
induces a bijection between
$$
\mathcal{L}(n',n'';i_{1},i_{2})=\left\{ x\in K^{\times}\backslash\mathcal{L}^{2}\mbox{ s.t. }\underline{n}(x)=(n',n'')\mbox{ and }\mathrm{inv}(x)=\{i_{1},i_{2}\}\right\}
$$
and the set of $K^{\times}$-orbits of pairs $(v',v'')\in\mathcal{V} \times \mathcal{V}$
such that \[
\mathrm{dist}(v',\mathcal{V}(0))=n',\quad\mathrm{dist}(v'',\mathcal{V}(0))=n''\quad\mbox{and}\quad\mathrm{dist}(v',v'')=\delta.\]
If for instance $n'\geq n''$, the choice of a vertex $v'\in\mathcal{V}(n')$
identifies the latter set of $K^\times$-orbits with the set of all vertices $v''\in\mathcal{V}(n'')$
at distance $\delta$ from $v'$. Using then the above description of $\mathcal{V}(0)$, it is a simple combinatorial exercise to
prove the following:
\begin{lem}\label{lem:LocComp}
If $\mathcal{O}_{F}/\mathfrak{p}_{F}$ is finite of order
$q$, then $\mathcal{L}(n',n'';i_{1},i_{2})$ is finite of order
\[
N(n',n'',\delta)=\left|\mathcal{L}(n',n'';i_{1},i_{2})\right|\quad\mbox{with }\delta=\left|i_{1}-i_{2}\right|.\]
Moreover $N(n',n'',\delta)=0$ unless one of the following conditions
holds:
\begin{enumerate}
\item $\delta=\left|n'-n''\right|+2r$ for some $0\leq r<\min(n',n'')$.
Then\[
N(n',n'',\delta)=\begin{cases}
1 & \mbox{if }r=0,\\
(q-1)q^{r-1} & \mbox{if }r>0\end{cases}\]
\item $K$ is an unramified extension of $F$ and $\delta=n'+n''$.
Then\[
N(n',n'',\delta)=q^{\min(n',n'')}.\]
\item $K$ is a ramified extension of $F$ and $\delta=n'+n''+s$ with
$s\in\{0,1\}$. Then\[
N(n',n'',\delta)=\begin{cases}
q^{\min(n',n'')} & \mbox{if }s=1\mbox{ or }\min(n',n'')=0,\\
(q-1)q^{\min(n',n'')-1} & \mbox{if }s=0<\min(n',n'').\end{cases}\]
\item $K\simeq F\times F$ and $\delta=n'+n''+s$ with $s\in\mathbb{N}$.
Then \[
N(n',n'',\delta)=\begin{cases}
1 & \mbox{if }\min(n',n'')=0=s,\\
2 & \mbox{if }\min(n',n'')=0<s,\\
(q-2)q^{\min(n',n'')-1} & \mbox{if }\min(n',n'')>0=s,\\
2(q-1)q^{\min(n',n'')-1} & \mbox{if }\min(n',n'',s)>0.\end{cases}\]
\end{enumerate}
\end{lem}
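For illustration (a worked instance added here, not part of the lemma's statement): suppose $K/F$ is unramified, so that $\mathcal{V}(0)=\{v_{0}\}$ is a single vertex, and take $(n',n'')=(2,1)$. Since $n(\cdot)=\dist(\cdot,v_{0})$ changes by exactly one along each edge of the tree, $\delta$ must be odd, and the only admissible values are $\delta=1$ (case~1 with $r=0$) and $\delta=n'+n''=3$ (case~2). For $\delta=1$, the vertex $v''$ must be the neighbor of $v'$ on the geodesic from $v'$ to $v_{0}$, so $N(2,1,1)=1$; for $\delta=3$, the geodesic from $v'$ to $v''$ passes through $v_{0}$ and may leave it along any of the $q$ edges not leading back towards $v'$, so $N(2,1,3)=q$, as the lemma predicts.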
\section{Applications}
\label{sec:applications}
We have three motivations for offloading OS kernel tasks to the GPU:
\begin{compactitem}
\item To reduce the \emph{latency} for tasks that run more
quickly on the GPU than on the CPU
\item To exploit the GPU's parallelism to increase the \emph{throughput}
for some types of operations, such as increasing the number of
clients a server can handle
\item To make it feasible to incorporate \emph{new functionality} into
the OS kernel that would run too slowly on the CPU
\end{compactitem}
These open the door for new avenues of research, with the potential for
gains in security, efficiency, functionality,
and performance of the OS.
In this section, we describe a set of tasks that have been shown to
perform well on the GPU, and discuss how they show promise for
augmenting the operating system.
\textbf{Network Packet Processing:}
Recently, the GPU has been demonstrated to show impressive performance
enhancements for software routing and packet processing.
PacketShader~\cite{packetshader} is capable of fast routing table lookups,
achieving a rate of close to 40Gbps for both IPv4 and IPv6 forwarding
and up to a 4x speedup over the CPU-only mode using two
NVIDIA GTX 480 GPUs.
For IPSec, PacketShader gets a 3.5x speedup over the CPU.
Additionally, a GPU-accelerated SSL implementation,
SSLShader~\cite{sslshader}, runs four times faster than an equivalent
CPU version.
While PacketShader shows the feasibility of moving part of the
network stack onto GPUs and delivers excellent throughput, it suffers
from a higher round trip latency for each packet when compared
to the CPU-only approach. This exposes the
weakness of the GPU in a latency-oriented computing model: the
overhead caused by copying data and code into GPU memory and then copying
results back severely affects the overall response time of a GPU
computing task. To implement GPU offloading support, OS kernel
designers must deal with this latency problem. Our \glinux{} prototype
decreases the latency of GPU computing tasks with the
techniques discussed in Section~\ref{sec:design}.
Though there are specialized programmable network interfaces which can
be used for packet processing, the CPU+GPU combination offers a
compelling alternative: the high level of interest in GPUs and
the fact that they are sold as consumer devices drive
wide deployment, low cost, and substantial investment in improving
them.
\textbf{In-Kernel Cryptography:}
Cryptography operations accelerated by GPUs have been shown to be
feasible and to achieve significant speedups over CPU
versions~\cite{Harrison_practicalsymmetric, sslshader}. OS functionality
making heavy use of cryptography includes IPSec~\cite{packetshader},
encrypted filesystems, and content-based data redundancy
reduction of filesystem blocks~\cite{tangwongsan:infocom2010} and
memory pages~\cite{difference-engine}.
Another potential application of GPU-accelerated cryptography is
trusted computing based on the Trusted Platform Module (TPM). A TPM
is traditionally hardware, but recent software implementations of the
TPM specification, such as vTPM~\cite{vtpm}, have been developed for hypervisors
to provide trusted computing in virtualized environments where
virtual machines cannot access the host TPM directly.
Because TPM operations are cryptography-heavy (such as secure hashing
of executables and memory regions), they can also potentially
be accelerated with GPUs.
The Linux kernel contains a general-purpose cryptography library
used by many of its subsystems. This library can easily be extended
to offload to the GPU.
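To make the integration point concrete, here is a minimal sketch of how a GPU-backed cipher might be registered with the 2.6-era Linux crypto API; it is illustrative rather than the actual \glinux{} source, and \texttt{glinux\_aes\_submit()} with its \texttt{GLINUX\_*} constants is a hypothetical stand-in for the request-queue mechanism of Section~\ref{sec:glinux-arch}.
\begin{verbatim}
/* Hedged sketch: a GPU-backed AES-ECB cipher plugged into the
 * 2.6-era Linux crypto API (blkcipher interface). glinux_aes_submit()
 * and GLINUX_* are hypothetical names for the request queue described
 * in the text, not part of the kernel API. */
#include <linux/crypto.h>
#include <linux/module.h>
#include <linux/string.h>
#include <crypto/aes.h>

struct gpu_aes_ctx {
        u8 key[AES_MAX_KEY_SIZE];
        unsigned int keylen;
};

/* Hypothetical: copy src into a pinned buffer, enqueue a service
 * request, and block until the NSK reports completion. */
enum { GLINUX_ENCRYPT, GLINUX_DECRYPT };
extern int glinux_aes_submit(struct gpu_aes_ctx *ctx,
                             struct scatterlist *dst,
                             struct scatterlist *src,
                             unsigned int nbytes, int dir);

static int gpu_aes_setkey(struct crypto_tfm *tfm, const u8 *key,
                          unsigned int keylen)
{
        struct gpu_aes_ctx *ctx = crypto_tfm_ctx(tfm);

        memcpy(ctx->key, key, keylen);
        ctx->keylen = keylen;
        return 0;
}

static int gpu_aes_encrypt(struct blkcipher_desc *desc,
                           struct scatterlist *dst,
                           struct scatterlist *src, unsigned int nbytes)
{
        return glinux_aes_submit(crypto_blkcipher_ctx(desc->tfm),
                                 dst, src, nbytes, GLINUX_ENCRYPT);
}

static struct crypto_alg gpu_aes_alg = {
        .cra_name        = "ecb(aes)",
        .cra_driver_name = "ecb-aes-glinux",
        .cra_priority    = 400,   /* outrank the CPU implementations */
        .cra_flags       = CRYPTO_ALG_TYPE_BLKCIPHER,
        .cra_blocksize   = AES_BLOCK_SIZE,
        .cra_ctxsize     = sizeof(struct gpu_aes_ctx),
        .cra_type        = &crypto_blkcipher_type,
        .cra_module      = THIS_MODULE,
        .cra_u = { .blkcipher = {
                .min_keysize = AES_MIN_KEY_SIZE,
                .max_keysize = AES_MAX_KEY_SIZE,
                .setkey      = gpu_aes_setkey,
                .encrypt     = gpu_aes_encrypt,
                /* .decrypt analogous, via GLINUX_DECRYPT */
        } },
};

static int __init gpu_aes_init(void)
{
        return crypto_register_alg(&gpu_aes_alg);
}
module_init(gpu_aes_init);
\end{verbatim}
Because the algorithm is registered under the generic name \texttt{ecb(aes)} with a high priority, existing in-kernel users of the crypto API would pick up such a GPU implementation transparently.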
Our \glinux{} prototype
implements AES on the GPU for the Linux kernel, and we present a
microbenchmark in Section~\ref{sec:aes-cuda} showing that it can outperform
the CPU by as much as 6x for sufficiently large block sizes.
Because the GPU processes many data blocks in parallel, a single
launch can operate either on large blocks belonging to one task or
on many smaller blocks belonging to different tasks.
Thus, the GPU can not only speed up bulk data encryption but also
scale up the number of simultaneous users of the cryptography subsystem,
such as SSL or IPSec sessions with
different clients.
\textbf{Pattern Matching Based Tasks:}
The GPU can accelerate regular
expression matching, with speedups of up to 48x reported over CPU
implementations~\cite{reg-gpu}. A network intrusion detection system
(NIDS) with GPU-accelerated regular expression matching~\cite{reg-gpu}
demonstrated a 60\% increase in overall packet
processing throughput on fairly old GPU hardware.
Other tasks such as information flow
control inside the OS~\cite{information-flow-control}, virus
detection~\cite{gpu-antivirus} (with two orders of magnitude speedup),
rule-based firewalls, and content-based search in
filesystems can potentially benefit
from GPU-accelerated pattern matching.
\textbf{In-Kernel Program Analysis:}
Program analysis is gaining traction as a way to enhance the security and
robustness of programs and operating systems.
For example, the Singularity OS~\cite{Singularity} relies on
safe code for process isolation rather than traditional memory protection.
Recent work on EigenCFA has shown that some types of program analysis can be dramatically
sped up using a GPU~\cite{tarun}.
By re-casting the Control Flow Analysis problem (specifically, 0CFA) in terms
of matrix operations, which GPUs excel at, EigenCFA achieves a
speedup of 72x, nearly two orders of magnitude.
The authors of EigenCFA are working to extend it to pointer analysis as well.
With speedups like this, analysis that was previously too expensive to do
at load time or execution time becomes more feasible;
it is conceivable that some program analysis could be done as code is
loaded into the kernel, or executed in some other trusted context.
\textbf{Basic Algorithms:}
A number of basic algorithms, which are used in many system-level tasks, have
been shown to achieve varying levels of speedup on GPUs.
These include sort, search~\cite{gpgpu-survey} and graph
analysis~\cite{graph-gpu}. GPU-accelerated sort and
search fit the functionality of filesystems very well. An interesting
potential use of
GPU-accelerated graph analysis is for in-kernel garbage collection (GC).
GC is usually considered to be time-consuming
because of its graph traversal operation, but a recent patent
application~\cite{GPU-GC} shows it is
possible to do GC on GPUs, and that it may perform better than on
CPUs.
Besides GC for memory objects, filesystems also use GC-like operations to
reorganize blocks, find dead links, and check unreferenced blocks for
consistency. Another example of graph analysis in the kernel is
the Featherstitch~\cite{Featherstitch} system, which exposes
the dependencies among writes in a reliable filesystem. One of the most
expensive parts of Featherstitch is the analysis of dependencies in its
\emph{patch graph}, a task we believe could be done efficiently
on the GPU.
GPGPU computing is a relatively new field, with the earliest frameworks
appearing in 2006.
Many of the applications described in this section are, therefore, early
results, and may see further improvements and broader applicability.
With more and more attention being paid to
this realm, we expect more valuable and interesting
GPU-accelerated in-kernel applications to present themselves in the
future.
\section{GPU Computing For The Linux Kernel}
\label{sec:design}
Because of the functional limitations discussed in
Section~\ref{sec:introduction},
it is impractical to run a fully functional OS kernel on a GPU.
Instead, our \glinux{} framework runs a traditional OS kernel on the
CPU, and treats the GPU as a co-processor.
We have implemented a prototype of \glinux{} in the Linux kernel, using
NVIDIA's CUDA framework to run code on the GPU.
\subsection{Challenges}
\label{sec:challenges}
\glinux{} must deal with two key challenges to efficiently use the
GPU from the OS kernel: the overhead of copying data back and forth, and
latency-sensitive launching of tasks on the GPU.
\textbf{Data Copy Overhead:}
A major overhead in GPGPU computing is caused by the fact that the GPU has
its own memory, separate from the main memory used by the CPU.
Transfer between the two is done via DMA over the PCIe bus.
Applications using the GPU
must introduce two copies: one to move the input to GPU memory,
and another to return the result.
The overhead of these copies is proportional to the size of the data.
There are two kinds of main memory the CUDA driver can use: one is general
memory (called pageable memory in CUDA), allocated by \texttt{malloc()}.
The other is \emph{pinned} memory, which is allocated by the CUDA driver,
\texttt{mmap}-ed into the application's address space, and page-locked for DMA.
Pinned memory is much faster than pageable memory for DMA transfers.
In \glinux{}, we use pinned memory for all buffers because of its
superior performance.
The downside of pinned memory is that it is locked to specific physical
pages, and cannot be paged out to disk; hence, we must be careful about
managing our pinned buffers.
This management is described in Subsection~\ref{sec:glinux-arch}.
\textbf{GPU Kernel Launch Overhead:}
Another overhead is caused by
the GPU kernel launch, which introduces DMA transfers of the GPU kernel
code, driver set-up for kernel execution and other device-related
operations.
This sets a lower bound on the time the OS kernel must wait for the GPU code
to complete, so the lower we can make this overhead, the more code can
potentially benefit from GPU acceleration.
This overhead is not high when the GPU kernel execution time or the data
copy overhead dominates the total execution time, as is the case for
most GPGPU computing, which is throughput-oriented~\cite{Garland:2010:UTA}.
OS kernel workloads, on the other hand, are likely to be dominated by a
large number of smaller tasks, and latency of each operation is of
greater importance.
Though larger tasks can be created by batching many small requests, doing
so increases the latency for each request.
CUDA provides ``streams''~\cite{CUDA_GUIDE}, which allow data copies between
main memory and GPU memory to proceed concurrently with GPU kernel execution.
By itself, this helps to improve throughput, not latency, but we
make use of it to communicate between code running on the GPU and CPU.
Instead of launching a new GPU kernel every time the OS wants to
invoke GPU code, we have designed a new GPU kernel execution model,
which we call the Non-Stop Kernel (NSK).
The NSK is small, is launched only once, and does not terminate.
To communicate with the NSK, we have implemented a new CPU-GPU message-based
communication method.
It allows messages to be passed between the GPU and main memory
while a GPU kernel is still running.
This is impossible in traditional CUDA programming, in which
the CPU has to explicitly wait for synchronization with
the GPU.%
We use pinned memory to pass these messages, and NVIDIA's streaming
features to asynchronously trigger transfers of the message buffer
back and forth between CPU and GPU memory.
Requests are sent from the CPU to the NSK as messages.
The NSK executes the requested service, which it has pre-loaded into
the GPU memory.
Similarly, the CPU receives completion notifications from the
NSK using these messages.
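The following CUDA sketch illustrates the idea; the names and the mailbox layout are illustrative rather than the actual NSK source, error handling is omitted, and a device supporting mapped (zero-copy) pinned memory is assumed.
\begin{verbatim}
/* Hedged sketch of the NSK: a persistent kernel polls a mailbox that
 * lives in mapped pinned host memory, so the CPU can post requests
 * without relaunching the kernel. All names are illustrative. */
#include <cuda_runtime.h>
#include <stdio.h>

struct mailbox { int req; int done; int svc_id; };

__global__ void nsk_main(volatile struct mailbox *mb)
{
    for (;;) {
        if (threadIdx.x == 0)
            while (mb->req == 0)
                ;                     /* spin until the host posts   */
        __syncthreads();
        if (mb->req < 0)
            return;                   /* shutdown message            */
        /* ... dispatch mb->svc_id to a pre-loaded service here ...  */
        __syncthreads();
        if (threadIdx.x == 0) {
            mb->req  = 0;
            mb->done = 1;             /* completion message          */
        }
        __syncthreads();
    }
}

int main(void)
{
    volatile struct mailbox *mb, *dmb;

    cudaSetDeviceFlags(cudaDeviceMapHost);
    cudaHostAlloc((void **)&mb, sizeof(*mb), cudaHostAllocMapped);
    mb->req = 0; mb->done = 0;
    cudaHostGetDevicePointer((void **)&dmb, (void *)mb, 0);

    nsk_main<<<1, 512>>>(dmb);        /* launched exactly once       */

    mb->svc_id = 1;                   /* post a request for service 1 */
    __sync_synchronize();             /* order the two stores         */
    mb->req = 1;
    while (mb->done == 0)
        ;                             /* wait for the completion msg  */

    mb->req = -1;                     /* ask the NSK to exit          */
    cudaDeviceSynchronize();
    printf("service 1 completed\n");
    return 0;
}
\end{verbatim}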
We measured the time to launch an empty GPU kernel, transfer a small amount
of input data to it (4KB), and wait for it to
return.
Though most CUDA benchmarks measure only the execution time on the GPU,
we measured time on the CPU to capture the entire delay the OS kernel
will observe.
NSK outperforms the traditional launch method by a factor of 1.3x,
reducing the base GPU kernel launch time to $16.7\mu{}s$ for
a kernel with 512 threads, $17.3\mu{}s$ for 1024 threads,
and $18.3\mu{}s$ for 2048 threads.
While this is much larger than the overhead of calling a function on
the CPU, as we will show in Section~\ref{sec:aes-cuda}, the speedup
in execution time can be well worth the cost.
Because of a limitation in CUDA that does not allow a running GPU kernel
to change its number of threads dynamically, NSK switches to a
traditional CUDA kernel launch model when a service requires more
threads on the GPU. This switch will not be necessary in the future,
once vendors provide the ability to create new GPU threads dynamically.
\subsection{\glinux{} Architecture}
\label{sec:glinux-arch}
\begin{figure}[]
\centering \includegraphics[width=1.0\linewidth]{arch.pdf}
\caption{\glinux{} framework architecture}
\label{fig:arch}
\end{figure}
Our framework for calling the GPU is shown in Figure~\ref{fig:arch}.
It is divided into three parts: a module in the OS kernel, a user-space
helper process, and NSK running on the GPU.
The user-space helper is necessitated by the closed-source nature of NVIDIA's
drivers and CUDA runtime, which prevent the use of CUDA directly from
inside the kernel.
To call a function on the GPU, the OS kernel performs the following steps (a sketch in C follows the list):
\begin{compactitem}
\item It requests one of the pinned-memory buffers, and fills it with the
input. If necessary, it also requests a buffer for the result.
\item It builds a service request. Services are CUDA programs that have been
pre-loaded into NSK to minimize launch time. The service request
can optionally include a completion callback.
\item It places the service request into the request queue.
\item It waits for the request to complete, either by blocking until the
completion callback is called or busy-waiting on the response queue.
\end{compactitem}
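A caller inside the OS kernel might therefore look roughly as follows; every \texttt{glinux\_*} identifier is a hypothetical name for a step in the list above, not the literal interface of our prototype.
\begin{verbatim}
/* Hedged sketch of a kernel-side caller; all glinux_* names below
 * are hypothetical stand-ins for the four steps listed above. */
#include <linux/string.h>

struct glinux_req { void (*callback)(struct glinux_req *); };
void *glinux_get_buffer(size_t len);
struct glinux_req *glinux_build_request(int svc, void *in, void *out,
                                        size_t len);
void glinux_submit(struct glinux_req *req);
int  glinux_wait(struct glinux_req *req);
#define GLINUX_SVC_AES 1

static int example_call(const void *data, size_t len,
                        void (*my_completion_fn)(struct glinux_req *))
{
        struct glinux_req *req;
        void *in, *out;

        in  = glinux_get_buffer(len);      /* 1. pinned input buffer  */
        out = glinux_get_buffer(len);      /*    and result buffer    */
        memcpy(in, data, len);

        req = glinux_build_request(GLINUX_SVC_AES, in, out, len);
        req->callback = my_completion_fn;  /* 2. optional callback    */

        glinux_submit(req);                /* 3. enqueue the request  */
        return glinux_wait(req);           /* 4. block until complete */
}
\end{verbatim}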
The user-space helper for \glinux{} watches the request queue, which is in
memory shared with the OS kernel.
Upon receipt of a new service request, the helper DMAs the input data buffer
to the GPU using the CUDA APIs. This can proceed concurrently with another
service running on the GPU. When the DMA is complete, the helper sends
a service request message to the NSK using the message-passing mechanism
described in Section~\ref{sec:challenges}.
When the NSK receives the message, it calls the service function, passing it
pointers to the input buffer and output buffer.
When the function completes, the NSK sends a completion message to
the CPU side, and resumes polling for new request messages.
The user-level helper relays the result back to the OS kernel through their
shared response queue.
To avoid a copy between the kernel module and the user-space helper,
the pinned data buffers allocated
by the CUDA driver are shared between the two.
Also, because NSK allows the user-space helper to work asynchronously
via messages, service execution on the GPU and data buffer copies
between main memory and GPU memory can run concurrently.
As a result, the data buffers locked in physical memory must be managed
carefully to cope with these concurrent uses.
On the CPU side, buffers can be used for four different purposes:
\begin{compactenum}
\item Preparing for a future service call by accepting data from a caller
in the OS kernel
\item To DMA input data from main memory to the GPU for the next service call
\item To DMA results from the last service call from GPU memory to main memory
\item Finishing a previous service call by returning data to the caller in the
OS kernel
\end{compactenum}
Each of these tasks can be performed concurrently, so, along with the service
currently running on the GPU, the total depth of the service call pipeline
is five stages.
In the current \glinux{} prototype, we statically allocate four buffers, and
each changes its purpose over time.
For example, after a buffer is prepared with data from the caller, it becomes
the host-to-GPU DMA buffer.
On the GPU, we use three buffers: at the same time that one is used by
the active service, a second may receive input for the next service from
main memory via DMA, and a third may be copying the output of the
previous service to main memory.
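A natural way to realize this rotation is a small static ring of buffer descriptors; the snippet below is an illustrative sketch of such a structure, not the actual \glinux{} code.
\begin{verbatim}
/* Hedged sketch: possible CPU-side buffer descriptor. The state
 * names mirror the four purposes listed above; how states rotate
 * is a scheduling decision of the framework. */
enum buf_state {
        BUF_PREPARE,    /* 1. accepting input from a caller        */
        BUF_H2G_DMA,    /* 2. input DMA: main memory -> GPU        */
        BUF_G2H_DMA,    /* 3. result DMA: GPU -> main memory       */
        BUF_FINISH,     /* 4. returning a result to a caller       */
};

struct glinux_buf {
        void          *pinned;    /* CUDA pinned allocation        */
        enum buf_state state;     /* current purpose of the buffer */
};

static struct glinux_buf bufs[4]; /* statically allocated ring     */
\end{verbatim}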
\subsection{Example: A GPU AES Implementation}
\label{sec:aes-cuda}
To demonstrate the feasibility of \glinux{}, we implemented the AES
encryption algorithm as a service on the GPU for the Linux crypto
subsystem.
Our implementation is based on an existing CUDA AES
implementation~\cite{engine-cuda}, and uses the ECB cipher mode for
maximum parallelism.
We ran a microbenchmark to compare its performance with the original CPU
version in the Linux kernel, which is itself optimized by using special
SSE instructions in the CPU.
We used a 480-core NVIDIA GTX 480 GPU, a quad-core Intel Core i7-930
2.8 GHz CPU and 6GB of DDR3 PC1600 memory. The OS
is Ubuntu 10.04 with Linux kernel 2.6.35.3.
We get a performance increase of up to
6x, as shown in Figure~\ref{fig:aes}.
The results show that the
GPU AES-ECB outperforms the CPU implementation when the size of the
data is 8KB or larger, which is two memory pages when using typical
page sizes.
So, kernel tasks that depend on
per-page encryption/decryption, such as encrypted
filesystems, can be accelerated on the GPU.
\begin{figure}[]
\centering
\includegraphics[width=1.0\linewidth]{aes.pdf}
\caption{Encryption performance of \glinux{} AES. Decryption, not shown,
has similar performance.}
\label{fig:aes}
\end{figure}
\section{Discussion}
\label{sec:discussion}
The GPU-augmented OS kernel opens new opportunities for systems software,
with the potential to bring performance improvements, new functionality,
and security enhancements into the OS.
We will continue to develop and improve \glinux{} and to implement more
GPU functions in our framework.
One such improvement will be dynamically dispatching tasks to the CPU or GPU
depending on their size.
As seen in Figure~\ref{fig:aes}, the overheads associated with calling the
GPU mean that small tasks may run faster on the CPU.
Since the crossover point will depend on the task and the machine's specific
hardware, a good approach may be to calibrate it using microbenchmarks
at boot time.
Another improvement will be to allow other kernel subsystems to specifically
request allocation of memory in the GPU pinned region.
In our current implementation, GPU inputs must be copied into these regions
and the results copied out, because the pinned memory is used only
for communication with the GPU.
By dynamically allocating pinned buffers and allowing users of the framework to
request memory in this region, those users
can manage structures such as filesystem blocks directly in pinned memory,
and save an extra copy.
This would also allow multiple calls to be in the preparing and post-service
callback stages at once.
We expect that future developments in GPUs will alleviate some of the current
limitations of \glinux{}.
While the closed nature of current GPUs necessitates interacting with
them from user-space, the trend seems to be towards openness;
AMD has recently opened their high-end 3D GPU drivers and
indicated that drivers for their upcoming APU platform
will also be open-source.
Furthermore, by combining a GPU and a CPU on the same die, APUs (e.g., Intel
Sandy Bridge and AMD Fusion) are likely
to reduce the memory copy overhead via caches shared between CPU cores and
GPU cores;
lower copy overhead will mean that the minimum-sized task that can benefit
from GPU offloading will drop significantly.
\section{Introduction}
\label{sec:introduction}
Modern GPUs can be used for more than just graphics processing;
through frameworks like CUDA~\cite{CUDAZONE}, they can run general-purpose
programs.
While not well-suited to \emph{all} types of programs, they excel on code that
can make use of their high degree of parallelism.
Most uses of so-called ``General Purpose GPU'' (GPGPU) computation have been
outside the realm of systems software.
However, recent work on software routers~\cite{packetshader} and encrypted
network connections~\cite{sslshader} has given examples of how GPGPUs can be
applied to tasks more traditionally within the realm of operating
systems.
We claim that these uses are only scratching the surface.
In Section~\ref{sec:applications}, we give more examples of how GPU
computing resources can be used to improve performance and bring new
functionality into OS kernels.%
\footnote{In GPU terminology, a program running on the GPU is called
a ``kernel.'' To avoid confusion, we use the term ``OS kernel'' or ``GPU
kernel'' when the meaning could be ambiguous.}
These include tasks that have applications on the desktop, on the server,
and in the datacenter.
Consumer GPUs currently contain up to 512 cores~\cite{GTX580}, and are
fairly inexpensive: at the time of writing, a current-generation GPU with
336 cores can be purchased for as little as \$160, or about 50 cents per
core.
GPUs are improving at a rapid pace: the theoretical performance of NVIDIA's
consumer GPUs improved from 500 gigaFLOPS in 2007 (GeForce 8800)
to over 1.3 teraFLOPS in 2010 (GTX 480)~\cite{CUDA_GUIDE}.
Furthermore, the development of APUs, which contain a CPU and a GPU on the
same chip, is likely to drive even wider adoption.
This represents a large amount of computing power, and we argue that
systems software should not overlook it.
Some recent OS designs have tried to embrace processor heterogeneity.
Helios~\cite{Helios} provides a single OS image across multiple
heterogeneous cores so as to simplify program development.
Barrelfish~\cite{Barrelfish} treats a multicore system as a distributed
system, with independent OS kernels on each core and
communication via message-passing.
Both, however, are targeted at CPUs that have support for traditional OS
requirements, such as virtual memory, interrupts, preemption, controllable
context switching, and the ability to interact directly
with I/O devices.
GPUs lack these features, and are thus simply not suited to designs that
treat them as peers to traditional CPUs.
Instead, they are better suited for use as co-processors.
Because of this, we argue that GPUs can be and
should be used to augment OS kernels, but that a heterogeneous OS
cannot simply treat the GPU as a fully functional
CPU with a different ISA. The OS kernel needs a new framework if it is
to take advantage of the opportunities presented by GPUs.
To demonstrate the feasibility of this idea, we designed
and prototyped \glinux{}, a framework for calling GPU code from
the Linux kernel.
We describe this framework and the challenges
we faced in designing it in Section~\ref{sec:design}.

\section{Introduction}
Maintenance of physical equipment, machinery, systems and even complete infrastructure represents an essential process for ensuring successful operation. It helps to minimize downtime of technical equipment \citep{Heidergott.2010}, to eliminate the risk thereof \citep{Groenevelt.1992}, or to prolong the life of systems \citep{Dogramaci.2004}. Maintenance is often enforced by external factors, such as regulations or quality management \citep{Lee.1987}. Yet maintenance burdens individuals, businesses and organizations with immense costs. For instance, the International Air Transport Association~(IATA) reported that the maintenance costs of 49 major airlines increased by over 3~percent from 2012 to 2016, finally totaling \$15.57 billion annually.\footnote{International Air Transport Association~(IATA). \emph{Airline maintenance cost executive commentary}. URL: \url{https://www.iata.org/whatwedo/workgroups/Documents/MCTF/MCTF-FY2016-Report-Public.pdf}, accessed April~18, 2019.}
Decision support in maintenance can be loosely categorized according to two different objectives, depending on whether it serves a corrective or a preemptive purpose.\footnote{Despite the wealth of earlier works on maintenance operations, there is no universal terminology. Instead, the interested reader is referred to \citet{Jardine.2006}, \citet{Heng.2009}, and \citet{Si.2011} for detailed overviews. We adhere to their terminology.} The former takes place after the failure of machinery with the goal of restoring its operations back to normal. Conversely, preemptive maintenance aims at monitoring these operations, so that the time-to-failure can be predicted and acted upon in order to mitigate potential causes and risk factors by, for instance, replacing deteriorated components in advance. Preemptive actions help in reducing downtime and, in practice, promise substantial financial savings, thus constituting the focus of this paper.
Preemptive maintenance is based on estimations of the remaining useful life~(RUL) of the machinery. While preventive maintenance makes these forecasts based on human knowledge, predictive maintenance utilizes data-driven models. Different models have been proposed, which can be categorized by the input data they utilize (see \Cref{sec:background} for an overview). In the case of raw event data, the conventional approach involves the estimation of probability density functions. If sensor data is available, the prominent approach draws upon machine learning models \citep{Baptista.2018,Seera.2014}. The latter captures non-linear relationships between sensor observations and RUL estimates, which aid in obtaining more accurate forecasts.
Machine learning models are subject to an inherent drawback: they frequently operate in a black-box fashion \citep{Breiman.2001,Jang.2019, Subramania.2011}, which, when providing decision support, directly impedes potential insights into the underlying rules behind their decision-making. However, \emph{interpretability} is demanded for a variety of practical reasons. For instance, practitioners desire to benchmark predictive models with their own expertise, as well as to validate the decision-making rules from machine learning models against common knowledge \citep{Delen.2013}. Further, managers can identify potential causes of a short machine lifetime and, thus, outline means by which to reduce errors \citep{Reifsnider.2002}. Moreover, accountability in RUL forecasts is sometimes even required by regulatory agencies, such as, \eg, in aircraft or railroad maintenance \citep[\eg][]{LIU.2006, Papakostas.2010}.
Interpretability refers to machine learning models where the decision logic of the model itself is transparent. Notably, the concept of interpretability differs from post-hoc \emph{explainability} that aims for a different objective. Here, a single (or multiple random) forecast is decomposed, thus highlighting potential relationships but without any structural guarantees \citep{Lipton.2018}. That is, explainability takes an arbitrary model as input and, based on it, attempts to unravel the decision logic behind it, but does so only for a local neighborhood of the input rather than deriving its actual structure. Hence, post-hoc explanations are often not reliable, result in misleading outputs and, because of that, the need for interpretable machine learning has been named an important objective for safety-aware applications \citep{Rudin.2018}. By constructing models that are inherently interpretable, practitioners obtain insights into the underlying mechanisms of the model \citep{Lou.2013}. In keeping with this, we formulate our research objective as follows.
\vspace{0.2cm}
\textsc{Objective:} Forecasting remaining useful life via machine learning with the additional requirement that the model fulfills the definition of \textquote{interpretability}.
\vspace{0.2cm}
We develop interpretable deep learning models for forecasting RUL as follows: we propose a novel structured-effect neural network that represents a viable trade-off between attaining accurate forecasts and the interpretability of simple distributional estimations. In order to estimate its parameters, we develop an innovative estimation technique based on variational Bayesian inference that minimizes the Kullback-Leibler divergence.\footnote{Some researchers have raised concerns about the applicability of variational Bayesian inference to neural networks, specifically as alternatives might potentially be more straightforward to optimize. Yet variational Bayesian inference entails obvious strengths in our setting: in contrast to other approaches, it allows us to include prior domain knowledge (as is done in our work when choosing regularization priors).}
We demonstrate the effectiveness of our approach in terms of interpretability and prediction performance as follows. We utilize the public \textquote{Turbofan Engine Degradation Simulation} dataset \citep{Saxena.2008} with sensor measurements from aircraft engines. This dataset is widely referred to as a baseline for comparing predictive models in maintenance and RUL predictions; see \eg, \citet{Butcher.2013} and \citet{Dong.2017}. Here the goal is to forecast the remaining useful life until irregular operations, such as breakdowns or failures, take place. The proposed structured-effect neural network outperforms the distribution-based approaches, reducing the forecast error by \SI{51.60}{percent}. While our approach is surpassed slightly by deep learning, it fulfills the definition of being interpretable, \ie, it maintains the same accountability as the much simpler probabilistic approaches.
The remainder of this paper is structured as follows. \Cref{sec:background} provides an overview on predicting remaining useful life for preemptive maintenance. \Cref{sec:methods} then introduces our methodological framework consisting of probabilistic approaches, machine learning and the novel structured-effect neural network that combines the desirable properties of both. The resulting performance is reported in \Cref{sec:experiments}, where we specifically study the interpretability of the different approaches. Finally, \Cref{sec:discussion} concludes with a discussion of our findings and implications of our work with respect to decision support.
\section{Background}
\label{sec:background}
Previous research has developed an extensive range of mathematical approaches in order to improve maintenance and, due to space constraints, we can only summarize core areas related to our work in the following. For detailed overviews, we refer to \citet{Heng.2009,Liao.2014,Navarro.2010} and \citet{Si.2011}, which provide a schematic categorization of run-to-failure, condition monitoring and predictive methods that estimate the remaining useful life of the machinery. Depending on the underlying approach, the resulting strategy can vary between corrective, responsive, or preemptive maintenance operations. Predictive maintenance, in particular, gives rise to a multitude of variants, \eg, probabilistic approaches and fully data-driven methods that rely upon machine learning together with granular sensor data. The intuition behind inserting sensor measurements into predictive models is that the latter can numerically quantify the environment, the operations, and the potential deterioration \citep{Dong.2007}. The observed quantities can be highly versatile and include vibration, oil analysis, temperature, pressure, moisture, humidity, loading, speed, and environmental effects \citep{Si.2011}. As such, sensor measurements are likely to supersede pure condition-based signals in their contribution to overall prognostic capability.
\begin{table}[H]
\scriptsize
\makebox[\textwidth]{
\begin{tabular}{p{3.5cm}p{2.6cm}p{2.5cm}p{2cm}p{4cm}}
\toprule
\textbf{Approach} & \textbf{\mcellt{Decision\\ variable}} & \textbf{Input variables} & \textbf{\mcellt{Maintenance\\ strategy}} & \textbf{Operationalization} \\
\midrule
Run-to-failure & --- & --- & Corrective & Service on failure \\[0.5cm]
\midrule
Condition monitoring & Latent state & Event/sensor data & Responsive & Service when latent state indicates (upcoming) failure \\[0.5cm]
Physics-based RUL models & Physics-based RUL & Simulation models & Preemptive & Service when RUL reaches predefined threshold \\[0.5cm]
Probabilistic RUL models & Population-wide (or conditional) RUL & Event data & Preemptive & Service when RUL reaches predefined threshold \\[0.5cm]
\textbf{Sensor-based RUL predictions (\eg, proportional hazards model, machine learning)} & \textbf{System-specific RUL} & \textbf{Sensor data} & \textbf{Preemptive} & \textbf{Service when RUL reaches predefined threshold} \\[0.5cm]
\bottomrule
\end{tabular}
}
\caption{Schematic overview of key research streams for using RUL models in maintenance operations. Further hybridizations of these approaches exist that are not covered by the categorization.}
\label{tbl:maintenance}
\end{table}
In order to carry out preemptive measures, one estimates the \emph{remaining useful life}~(RUL) and then applies a suitable strategy for scheduling maintenance operations (such as a simple threshold rule that triggers a maintenance once RUL undercuts a safety margin) in a cost-efficient manner \citep{Kim.2013, Papakostas.2010, Plitsos.2017}. Mathematically, the RUL at time $t$ can be formalized as a random variable $Y_t$ that depends on the operative environment and its past use $X_t, \ldots, X_2, X_1$, \ie,
\begin{equation}
\expectation \left[Y_t \mid X_t, \ldots, X_2, X_1 \right].
\label{equ:exp_RUL}
\end{equation}
Here the variables $X_1, \ldots, X_t$ can refer to event data tracking past failures \citep{Ghosh.2007}, numerical quantities tracing the machine's condition over time as an early warning of malfunctioning \citep{Si.2011}, or measurements of its use as a proxy for deterioration \citep{Navarro.2010}.
\subsection{Probabilistic lifetime models}
Probabilistic models utilize knowledge about the population of machinery by learning from the observed lifetimes of multiple machines. This knowledge is obtained utilizing predefined probability density functions that specify the probability distributions over machinery lifetimes. Mathematically, when $X_t$ is not available, the RUL estimation turns into $\expectation \left[Y_t \mid X_t, \ldots, X_2, X_1 \right] = \expectation \left[Y_t \right] = \expectation \left[ Z \mid Z > t \right] - t$, where $Z$ denotes the total lifetime of the machinery and the conditioning involves the survival function $R(t) = P(Z > t)$. Common choices include exponential, log-logistic, log-normal, gamma, and Weibull distributions \cite[e.\,g.][]{Heng.2009}. We refer to \citet{Navarro.2010} for a detailed survey. For instance, the Weibull distribution has been found to be effective even given few observations of lifetimes, which facilitates its practical use \citep{MAZHAR.2007}. Both log-normal and Weibull distributions can be extended by covariates for sensor data, which we describe below in \Cref{sec:structured_effect_neural_network}, but they are then constrained to the assumed mathematical structure rather than offering the flexibility of a data-driven approach calibrated through machine learning.
Probabilistic approaches are common choices as they benefit from straightforward use, direct interpretability and reliable estimates that are often required in practical applications and especially by regulatory bodies. However, the focus is almost exclusively placed on raw event data, thereby ignoring the prognostic capacity of sensor data.
Probabilistic approaches can theoretically be extended to accommodate sensor data, resulting in survival models. Since its initial proposal by \Citet{Cox.1972}, the proportional hazards model has been popular for lifetime analysis in general \citep{Wang.2013} and the estimation of RUL in particular.
A key advantage of the proportional hazards model over many other approaches is that the interaction between a number of influencing factors can be easily combined with a baseline function that describes the general lifetime of the machinery. More precisely, the proportional hazards model assumes that the probability estimates consist of two components, namely, a structural effect and random effects described by covariates \citep{Si.2011}. As will be discussed later, our structured-effect neural network is built on a similar idea; however, it exploits deep learning to increase the predictive power of the RUL estimates, in contrast to the proportional hazards model, which utilizes an exponential model to describe the random effect.
\subsection{Machine learning in lifetime predictions}
Machine learning has recently gained great traction for RUL prediction, as the flexibility of these models facilitates a superior prognostic capacity. For instance, linear regression models offer the advantage of high interpretability when predicting RUL. Extensions by regularization yield the lasso and ridge regression, which have been found to be effective for high-dimensional sensor data \citep{Zihajehzadeh.2016}. To overcome the limitations of linear relationships, a variety of non-linear models have been utilized, including support vector regression \citep{Baptista.2018}, random forests \citep{Seera.2014}, and neural networks \citep{Riad.2010}. We refer to \citet{Heng.2009} and \citet{Si.2011} for a detailed overview of the proposed models. However, non-linear models generally fall short in terms of their explanatory power \citep{Breiman.2001}.
Even though machine learning demonstrates high predictive power, these models struggle with the nature of sensor data as time series. It is common practice to make RUL estimates based purely on the sensor data at one specific point in time \citep{Si.2011}. This simplifies $\expectation \left[ Y_t \mid X_t, \ldots X_1 \right]$ to $\expectation \left[ Y_t \mid X_t \right]$, thereby ignoring the past trajectory of sensor measurements. Yet the history of sensor measurements is likely to encode valuable information regarding the past deterioration and usage of machinery. As an intuitive example, a jet engine that experiences considerable vibration might require more frequent check-ups. As a remedy, feature engineering has been proposed in order to aggregate past usage profiles onto feature vectors that are then fed into the machine learning model \citep{Mosallam.2013}. Formally, this yields $\expectation \left[ Y_t \mid \phi(X_t, \ldots, X_1) \right]$ or $\expectation \left[ Y_t \mid X_t, \phi(X_{t-1}, \ldots, X_1) \right]$, where the aggregation function $\phi$ could, for instance, extract the maximum, minimum, or variability from a sensor time series. As a result, the features could theoretically be linked to interpretations but this is largely prohibited by the nature of the machine learning model.
Advances from deep neural networks have only recently been utilized for the prediction of RUL. In \citet{SateeshBabu.2016}, the authors apply convolutional neural networks along the temporal dimension in order to incorporate automated feature learning from raw sensor signals and predict RUL. In other works, long short-term memory networks~(LSTMs), as a prevalent form of recurrent neural networks, have been shown to be superior to traditional statistical regression methods in predicting RUL \citep{Dong.2017, Wu.2018, Zheng.2017}. Thereby, the LSTM can make use of the complete sequence of sensor measurements, with the objective of directly estimating $\expectation \left[ Y_t \mid X_t, \ldots, X_1 \right]$ with varying, machine-dependent $t$. In addition, LSTMs entail a high degree of flexibility, which helps to accurately model highly non-linear relationships. This commonly lowers the forecast error, which further translates into improved maintenance operations.
Deep neural networks are rarely utilized in practical applications for a variety of reasons. Arguably, this is not only because deep neural networks have only recently begun to be used for estimating RUL, but also because they are widely known to be black-box functions with limited to no interpretability. Hence, it is the contribution of this paper to develop a combination of structural predictions and deep learning in order to reach a favorable trade-off between interpretability and prognostic capacity. As points of comparison, we draw upon previous works for RUL predictions, including those concerned with machine learning, feature engineering, and deep learning.
\subsection{Verification in machine learning}
Interpretability of machine learning is particularly important in mission-critical systems, which requires the development of assessment techniques that reliably identify unlikely types of error \citep{Xiang.2018}. One approach to uncovering cases where the model may be incompatible with the desired behavior is to systematically search for worst-case results during evaluation \citep[\eg][]{Athalye.2018}. Formal verification proves that machine learning models are consistent with a specification \citep[\eg][]{Singh.2018}. While the field of formal verification has been subject to research, these approaches are impeded by limited scalability, especially for modern deep learning systems.
\subsection{Explainable vs. interpretable machine learning}
Explainable machine learning refers to explaining predictions \emph{post hoc}, without elucidating the mechanisms by which models work. Examples of such post-hoc interpretations are local linear approximations of the model's behavior \citep[\eg partial dependence plots; see][]{Ribeiro.2016} or decompositions of the final prediction into the contributions of each input feature \citep[\eg SHAP values; see][]{Lundberg.2017}. Another widely applied approach to obtaining explanations is to render visualizations in order to determine qualitatively what a model has learned \citep{vanderMaaten.2008}. However, explainable machine learning is limited in understanding the underlying process of estimation. Notably, it is also limited to a local neighborhood of the input space or the prediction.
In contrast, interpretable machine learning encodes an interpretable structure \emph{a priori}, which allows one to look into the model's mechanisms in order to understand the complete functioning of predictions for all possible input features \citep{Murdoch.2019}. Here global relationships are directly encoded in the structure of the model. As such, the relationships in individual features or outcomes for average cases are explicitly modeled. Naturally, linear models have become a prevalent choice for applications in (safety-)critical use cases, where complete traceability of the model's estimation is inevitable. Hence, the estimation (rather than the predictions) can be compared against prior knowledge or used for obtaining insights.
\section{Methods}
\label{sec:methods}
This research aims at developing forecasting models for the remaining useful life that, on the one hand, obtain a favorable out-of-sample performance while achieving a high degree of interpretability at the same time. Hence, this work contributes to the previous literature by specifically interpreting the relevance of different sensor types and usage profiles in relation to the overall forecast. To date, probabilistic models of failure rates have been widely utilized for predicting the remaining useful life due to their exceptional explanatory power. We thus take the interpretable feature of this approach and develop a method that combines it with the predictive accuracy of deep learning.
We compare the forecasting performance of our structured-effect neural network with the following approaches: (i)~na{\"i}ve empirical estimations, (ii)~probabilistic approaches, (iii)~traditional machine learning, (iv)~traditional machine learning with feature engineering for time series applications, and (v)~deep neural networks. All of the aforementioned methods are outlined in the following.
\subsection{Na{\"i}ve empirical estimation of remaining useful life}
\label{sec:empirical_estimation_of_failure_rates}
Na{\"i}ve empirical estimation of remaining useful life describes the approximation of RUL utilizing past lifetimes of the machinery. Let $Z$ denote the random variable referring to the total lifetime of a machinery, and let $Z_1,\dots,Z_n$ denote $n$ realizations of this random variable. Then we utilize the mean of these realizations to estimate the total lifetime of a machinery, \ie,
\begin{align}
\expectation [ Z ] = \frac{1}{n} \sum_{i = 1}^{n} Z_i.
\end{align}
We can now translate this estimation of the total lifetime into an estimation of the RUL by subtracting the time the machinery has been in use since last being maintained. Let $Y_t$ denote the random variable that describes the RUL of a machinery at time $t$. Then we estimate $Y_t$ by
\begin{align}
\expectation [ Y_t ] = \expectation [ Z ] - t.
\end{align}
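To illustrate, the following minimal sketch implements this estimator (in Python with NumPy; the lifetimes shown are merely illustrative):
\begin{verbatim}
import numpy as np

def naive_rul(past_lifetimes, t):
    """Estimate the RUL at time t as the mean observed lifetime minus t."""
    return np.mean(past_lifetimes) - t

# Example: three observed lifetimes (in cycles); machinery ran 60 cycles
print(naive_rul(np.array([150.0, 170.0, 190.0]), t=60))  # prints 110.0
\end{verbatim}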
\subsection{Probabilistic lifetime models}
\label{sec:pdf}
In accordance with our literature review, we draw upon two prominent probability density functions $P$ that model the lifetime expectancy of machinery, namely, the Weibull distribution and the log-normal distribution \cite[e.\,g.][]{Vlok.2002}. Let, again, $Z$ denote the random variable referring to the total lifetime of the population. Then the probability density functions of the Weibull and the log-normal distribution for a lifetime $Z$ are given by
\begin{align}
P_\text{Weibull}(Z; a, b) &= \frac{b}{a} \left( \frac{Z}{a} \right)^{b - 1} \e{-(\frac{Z}{a})^b} \quad\text{and} \\
P_\text{log-normal}(Z; a, b) &=
\begin{cases}
\frac{1}{\sqrt{2 \pi} b Z} \e{-\frac{(\ln(Z) - a)^2}{2 b^2}}, & Z > 0 , \\
0, & Z \leq 0 ,
\end{cases}
\end{align}
respectively, with distribution parameters $a$ and $b$. All distribution parameters are estimated based on past event data; more precisely, the historical time-spans between failures of the machinery are inserted as the lifetime $Z$. This allows us to estimate the expected lifetime of the machinery after a maintenance event, as well as the corresponding variance.
The mean value of the different probability density functions \emph{could} provide estimates of the remaining useful life for unseen data observations. However, this would ignore the knowledge that the machine has already functioned over $t$ time steps. Hence, we are interested in the conditional expectation, given that the machine had the last maintenance event $t$ time steps ago. This results in an estimated RUL at time $t$ of
\begin{equation}
\label{eqn:RUL_conditional_expectation}
\expectation [ Y_t ] = \expectation_{Z \sim P} \left[ Z \mid Z > t \right] - t .
\end{equation}
To compute the previous expression, we draw upon the cumulative distribution function $F(t; \cdot)$ and the definition of the conditional probability. We then rewrite \Cref{eqn:RUL_conditional_expectation} into
\begin{equation}
\label{eqn:RUL_conditional_expectation2}
\expectation [ Y_t ] = \expectation_{Z \sim P} \left[ Z \mid Z > t \right] - t = \frac{1}{1 - F(t; \cdot)} \int_{t}^{\infty} z \, P(z; \cdot) \, \text{d}z - t.
\end{equation}
Unfortunately, there is no closed-form solution in terms of elementary functions for the conditional expectation of a Weibull distribution. Hence, we utilize Markov chain Monte Carlo to approximate \Cref{eqn:RUL_conditional_expectation2} for both the Weibull and the log-normal distribution in order to obtain the expected remaining lifespan (conditional on the time of the last maintenance event).
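As an illustration, the conditional expectation can be approximated by simple Monte Carlo sampling, as in the following sketch (Python with NumPy; the distribution parameters are placeholders):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def conditional_rul_weibull(a, b, t, n_samples=100_000):
    """Approximate E[Z | Z > t] - t for Z ~ Weibull(scale=a, shape=b)."""
    z = a * rng.weibull(b, size=n_samples)  # draw candidate lifetimes
    survivors = z[z > t]                    # condition on Z > t
    return survivors.mean() - t

# Example with illustrative parameters: scale a=100, shape b=1.5
print(conditional_rul_weibull(a=100.0, b=1.5, t=80.0))
\end{verbatim}
Note that this simple rejection scheme becomes inefficient for large $t$, where more elaborate sampling methods are preferable.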
\subsection{Traditional machine learning}
\label{sec:traditional_machine_learning}
In the following, let $f$ refer to the different machine learning models with additional parameters $w$. Then, in each time step $t$, the machine learning model $f$ is fed with the current sensor data $X_t$ and computes the predicted RUL, given by $\tilde{Y}_t = f(X_t; w)$, such that $Y_t \approx \tilde{Y}_t$. The deviation between the true RUL, $Y_t$, and the forecast $\tilde{Y}_t$ defines the prediction error that we try to minimize. Hence, the optimal parameters can be determined by an optimization problem
\begin{equation}
w^\ast = \argmin_{w} \sum_{t} \norm{ Y_t - f(X_t; w) } .
\end{equation}
A variety of models $f$ are common in predicting remaining useful life; see the surveys in \citet{Heng.2009} and \citet{Si.2011}. We adhere to previous choices and thus incorporate a variety of baseline models that consist of both linear and non-linear models. Linear models include ridge regression, lasso, and elastic net, all of which are easily interpretable and have been shown to perform well on many machine learning tasks with high-dimensional and even collinear features~\citep{Hastie.2009}. The set of non-linear baseline models include random forest and support vector regression~(SVR).
All models are then fed with two different sets of features: (1)~we take the current sensor measurements $X_t$ when predicting the RUL estimate $Y_t$. However, this approach ignores the trajectory of historic sensor data. (2)~As a remedy, we rely upon feature engineering as a means of condensing the past time series into a feature vector, as described in the following.
Feature engineering provides a means by which to encode the past usage of machinery into an input vector for the predictive model. Yet previous research offers little guidance regarding which types of features are most useful. Hence, we adapt the choice of aggregation functions from \citet{Mosallam.2013}, as detailed in \Cref{tbl:aggregation_functions}; a brief implementation sketch follows the table. For instance, vibration is known to accelerate deterioration, but it is unclear whether this is caused by sudden peaks (\ie, minima or maxima), frequent changes (\ie, standard deviation), or a constantly high tremor (\ie, average). Mathematically, each aggregation function $\phi$ takes a sequence of past sensor measurements $X_t, \ldots, X_1$ as input and then computes a new input feature $\phi(X_t, \ldots, X_1)$. These aggregation functions are necessary to map the complete trajectory onto a fixed, predefined number of features that can be readily processed by the machine learning models.
\begin{table}[H]
\centering
\notsotiny
\makebox[\textwidth]{
\renewcommand{\arraystretch}{1.35}
\begin{tabular}{lll}
\toprule
\textbf{Aggregation function} & \textbf{Formula} & \textbf{Interpretation} \\
\midrule
Max & max($X_1,\dots,X_t$) & Extrema \\
Min & min($X_1,\dots,X_t$) & Extrema \\
Mean & $\mu = \frac{1}{t} \sum_{i = 1}^{t} X_i$ & Average sensor measurement \\
Range & max $-$ min & Variability \\
Sum & $\sum_{i = 1}^t X_i$ & Total signal \\
Energy & $\sum_{i = 1}^{t} X_i^2$ & Total signal with focus on peaks \\
Standard deviation & $\sigma = \sqrt{\frac{1}{t} \sum_{i = 1}^{t} (X_i - \mu)^2}$ & Variability \\
Skewness & $\frac{1}{t} \sum_{i = 1}^{t} \left( \frac{X_i - \mu}{\sigma} \right) ^3$ & Symmetry of deviation\\
Kurtosis & $\frac{1}{t} \sum_{i = 1}^{t} \left( \frac{X_i - \mu}{\sigma} \right) ^4$ & Infrequent extreme deviations \\
Peak-to-peak & $\frac{1}{n_1} \sum_{i = 1}^{n_1} \text{loc max} + \frac{1}{n_2} \sum_{i = 1}^{n_2} \text{loc min}$ & Bandwidth \\
Root mean square & $\sqrt{\frac{1}{t} \sum_{i = 1}^{t} X_i^2} $ & Total load with focus on peaks \\
Entropy & $-\sum_{i = 1}^{t} P(X_i) \, \text{log} \, P(X_i) $ & Information signal \\
\lcellt{Arithmetic mean of\\ power spectral density} & $20 \, \text{log}_{10} \cfrac{\frac{1}{t} \sum_{i = 1}^{t} \abs{\text{fft}(X_i)}}{10^{-5}}$ & Frequency of oscillations \\
Line integral & $\sum_{i = 1}^{t - 1} \abs{X_{i + 1} - X_{i}}$ & Path length \\
Kalman filter & $Y_t - b - \sum_{i = 1}^{p} a_i X_{t-i} $ & Unexpected deviation \\
\bottomrule
\end{tabular}}
\caption{Our feature engineering draws upon the above aggregation functions. This choice is common in predicting remaining useful life \cite[e.\,g.][]{Mosallam.2013}. Here $p$ refers to an optional parameter specifying the number of lags, which is later set to 50 in accordance with previous research. The expressions loc max and loc min refer to the local maximum and minimum of the inputs.}
\label{tbl:aggregation_functions}
\end{table}
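For concreteness, a minimal sketch of how such aggregation functions can be applied to a sensor trajectory is given below (Python with NumPy and SciPy; the selection is a subset of \Cref{tbl:aggregation_functions}):
\begin{verbatim}
import numpy as np
from scipy.stats import skew, kurtosis

def aggregate_features(x):
    """Map a 1-D sensor trajectory x_1, ..., x_t to a fixed-size vector."""
    return np.array([
        x.max(), x.min(),             # extrema
        x.mean(),                     # average sensor measurement
        x.max() - x.min(),            # range (variability)
        x.sum(),                      # total signal
        np.sum(x ** 2),               # energy (focus on peaks)
        x.std(),                      # standard deviation
        skew(x),                      # symmetry of deviation
        kurtosis(x),                  # infrequent extreme deviations
        np.sqrt(np.mean(x ** 2)),     # root mean square
        np.sum(np.abs(np.diff(x))),   # line integral (path length)
    ])

features = aggregate_features(np.sin(np.linspace(0, 10, 200)))
\end{verbatim}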
To determine the best hyperparameter combination in traditional machine learning, we implemented group $10$-fold cross-validation in order to minimize the bias associated with random sampling of training and validation data. The group approach also ensures that we do not split maintenance cycles during cross-validation and that the same maintenance cycle is not present in both the training and validation set.
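The following sketch illustrates this validation scheme with scikit-learn (the data and the engine identifiers used as groups are synthetic placeholders):
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

# X: feature matrix, y: RUL labels, groups: engine id per observation
X = np.random.rand(500, 21)
y = np.random.rand(500) * 150
groups = np.repeat(np.arange(50), 10)  # 50 engines, 10 cycles each

model = RandomForestRegressor(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, groups=groups,
                         cv=GroupKFold(n_splits=10),
                         scoring="neg_mean_absolute_error")
print(-scores.mean())
\end{verbatim}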
\subsection{Recurrent neural network}
\label{sec:recurrent_neural_network}
Recurrent neural networks refer to a special class of deep neural networks that can learn from sequences of varying lengths, rather than from fixed-size feature vectors \citep{Goodfellow.2017}. This is beneficial to our setting, as it allows us to directly inject time series with sensor data into the RNN and predict the remaining useful life from it. The mathematical formalization is as follows: let $f_\text{NN}$ denote a traditional (or deep) neural network that defines a mapping $[X_t , h_{t-1}] \mapsto h_t$ with hidden states $h_{t-1}, h_t \in \mathbb{R}^{n}$ and a suitably chosen dimension $n$. Then a prediction from a complete sequence can be made via
\begin{equation}
\mathit{RNN}_\Theta = f_\text{NN} ( [X_t, f_\text{NN} ( [X_{t-1}, \ldots f_\text{NN}([X_1, \bm{0}_n]) ] ) ] ) .
\end{equation}
In other words, the RNN iterates over the sequence, while updating its hidden state $h_t$, which summarizes the already-seen sequence, similar to an internal state. This recurrent relationship between the states introduces the possibility of passing information onwards from the current state $h_t$ to the next $h_{t+1}$. Therefore, RNNs can process sequences of arbitrary length, making them capable of utilizing the complete trajectory of sensor data. To illustrate this, \Cref{fig:RNN_unrolled} presents the processing of sequential data by means of unrolling the recurrent structure.
Different variants of recurrent neural networks have been proposed in earlier research; see \citet{Goodfellow.2017}. In this work, we choose the long short-term memory from \citet{Hochreiter.1997} because it is capable of retaining information over long sequences, and it enjoys widespread use in research and practical applications \citep{Fischer.2018, Kraus.2018, Srivastava.2018}. For deep neural networks, we reduce the computational runtime for hyperparameter tuning and instead follow conventional guidelines, whereby a random sample of the training data (\SI{10}{\percent}) serves for validation.
\begin{figure}[H]
\centering
\includegraphics[width=.5\textwidth]{RNN}
\caption{Recurrent neural network that recursively applies the same simple neural network $f_\text{NN}$ to the input sequence $X_1, \ldots, X_t$ with outputs $o_1, \ldots, o_t$. The states $h_1, \ldots, h_{t-1}$ encode the previous sequence into a fixed-size feature vector.}
\label{fig:RNN_unrolled}
\end{figure}
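To make the setup concrete, a minimal sketch of such a sequence model in PyTorch is shown below (the layer sizes are illustrative and not the exact configuration of our experiments):
\begin{verbatim}
import torch
import torch.nn as nn

class LSTMRegressor(nn.Module):
    """Predicts RUL from a sequence of sensor measurements."""
    def __init__(self, n_sensors=21, hidden_size=100):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):             # x: (batch, time, n_sensors)
        out, _ = self.lstm(x)         # hidden states h_1, ..., h_t
        return self.head(out[:, -1])  # RUL prediction from last state

model = LSTMRegressor()
x = torch.randn(8, 50, 21)            # 8 trajectories of 50 time steps
rul_hat = model(x)                    # shape: (8, 1)
\end{verbatim}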
\subsection{Proposed structured-effect neural network}
\label{sec:structured_effect_neural_network}
\subsubsection{Model specification}
We now propose our structured-effect model. This approach enforces a specific structure that lends itself to intuitive interpretation. More precisely, it combines non-parametric approaches for modeling the expected RUL through a probabilistic density function with the flexibility of machine learning in order to incorporate sensor measurements and thus capture the heterogeneity of machine-specific deterioration processes.
The idea of building structured models loosely resembles earlier research efforts related to proportional hazards models \citep{Cox.1972}, which present a popular choice in lifetime analysis. This class of survival models decomposes its estimations into components referring to the baseline function and a function of covariates: the baseline specifies the general lifetime across all machines via the same function $\lambda(t)$. The covariate function further captures machine-specific effects through additional covariates that describe the random effects. Our structured-effect neural network follows a similar intuition, as it assumes a population-wide general lifetime common across all machines and further sensor-based deviations in order to model the within-machine heterogeneity due to the different usage profiles.
Our structured-effect model splits the estimated remaining useful life into three components, namely, a non-parametric baseline, a covariate-based prediction, and a recurrent component which specifically incorporates the historic trajectory of sensor measurements. These components help in explaining the variance among the different machine lifetimes and, for this purpose, we again draw upon the history of sensor measurements $X_t, \ldots, X_1$. Let $\lambda(t)$ denote the non-parametric part with the explicit probabilistic lifetime model and let further $\mathit{RNN}_\Theta$ refer to a recurrent neural network (such as a long short-term memory) with weights $\Theta$. Then the prediction of the structured-effect neural network $\mathit{SENN}_\Theta(t; X_t, \ldots, X_1)$ follows the form
\begin{equation}
\label{eq:structured_effect_neural_network}
\footnotesize
\mathit{SENN}_\Theta(t; X_t, \ldots, X_1) = \underbrace{\lambda(t)}_{\text{\mcellt{Non-parametric component\\[-0.6em] with explicit lifetime model}}} + \underbrace{\beta^T X_t}_{\text{\mcellt{Linear component\\[-0.6em] with current condition}}} + \underbrace{\mathit{RNN}_\Theta(X_t, \ldots, X_1, t)}_{\text{\mcellt{Recurrent component\\[-0.6em] with deep neural network}}}
\end{equation}
with coefficients $\beta$. Model variations are discussed later in \Cref{sec:model_variations}.
While our model follows an intuition similar to that of the proportional hazards model in decomposing the prediction, it also reveals clear differences, as it introduces a recurrent neural network that allows for considerably higher flexibility in modeling the variance and even incorporates the \emph{complete} sequence of sensor measurements and not just a simple vector of covariates. Moreover, our specific model formulation entails a set of further advantages. On the one hand, it again circumvents the explicit need for feature engineering. On the other hand, it achieves a beneficial trade-off between the interpretability of non-parametric approaches and the flexibility of non-linear predictions from sensor data. Here the deep neural network needs to explain a considerably smaller variance compared to an approach based solely on a neural network, thereby facilitating the estimation of the network weights. As a result, practitioners can decompose the prediction into a population-wide baseline and machine-specific heterogeneity, based on which they can explicitly quantify the relative contribution of each component through the corresponding coefficients. As such, one can identify reasons why the remaining useful life attains a certain value (\eg, a negative value from the recurrent component indicates a strong deterioration over time) or one can attribute deterioration to unexpected behavior. Moreover, the proposed approach is highly extensible and can easily be generalized to other parameterizations or domains.
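A minimal sketch of this decomposition as a PyTorch module is given below (using point estimates instead of the variational treatment described later; the baseline is a placeholder and all names are our own):
\begin{verbatim}
import torch
import torch.nn as nn

class StructuredEffectNN(nn.Module):
    """RUL(t) = lambda(t) + beta^T x_t + RNN(x_1, ..., x_t)."""
    def __init__(self, n_sensors=21, hidden_size=50):
        super().__init__()
        self.log_a = nn.Parameter(torch.zeros(1))  # distribution params
        self.log_b = nn.Parameter(torch.zeros(1))
        self.linear = nn.Linear(n_sensors, 1)      # beta^T x_t
        self.rnn = nn.LSTM(n_sensors, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def baseline(self, t):
        # Placeholder for lambda(t); in practice, the conditional
        # lifetime expectation of the probabilistic model is used here.
        a, b = self.log_a.exp(), self.log_b.exp()
        return a * torch.exp(-t / (a * b))

    def forward(self, t, x_seq):      # t: (batch, 1) elapsed cycles,
        out, _ = self.rnn(x_seq)      # x_seq: (batch, time, n_sensors)
        return (self.baseline(t)
                + self.linear(x_seq[:, -1])
                + self.head(out[:, -1]))
\end{verbatim}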
We later experiment with different variations of the structured-effect model. These differ in the choices with which we specify the different components. First, we adhere to conventional approaches in predictive maintenance \citep{Navarro.2010} by assuming that the lifetimes follow either a conditional Weibull or a conditional log-normal distribution. That is, we obtain
\begin{equation}
\lambda(t) = \expectation_{Z \sim \text{Weibull}(a, b)} [ Z \mid Z > t ] - t
\quad\text{and}\quad
\lambda(t) = \expectation_{Z \sim \text{log-normal}(a, b)} [ Z \mid Z > t ] - t
\end{equation}
with distribution parameters $a$ and $b$. Thus, the first component is identical to the probabilistic lifetime models that we utilize as part of our benchmarks. Second, the linear component can either be fed directly with $X_t$ or, alternatively, one could also apply feature engineering to it, \ie, giving $\phi(X_t, \ldots, X_1)$. The benefit of the latter is that we again obtain a linear structure where one can assess the relevance of individual predictors by looking at the coefficients. Here we further assume a linear combination as used in ordinary least squares and, as an extension, introduce priors, so that we obtain a regularization in which the coefficients of the linear component are estimated via the least absolute shrinkage and selection operator~(lasso). This implicitly performs variable selection in the linear component, as some coefficients are directly set to zero \citep{Tibshirani.1996}. Third, the recurrent neural network is implemented via a long short-term memory as this represents the state-of-the-art in sequence learning \citep{Goodfellow.2017}.
\subsubsection{Model estimation through variational Bayesian inferences}
We now detail how we estimate the parameters inside the structured-effect neural network. We refer to $\theta$ as the combined set of unknown parameters and $X$ as the overall dataset including all sensor measurements. Then the objective is to determine the optimal parameters
\begin{equation}
\label{eqn:theta_star}
\theta^\ast = \argmax_{\theta} \; P(\theta \,|\, X) .
\end{equation}
We solve the previous optimization problem through a variational Bayesian method. The predominant reason for this choice over traditional optimization is that the latter would merely give point estimates of the different parameters, whereas variational Bayesian inferences yield quantifications of uncertainty. For instance, this allows us to obtain confidence assessments concerning the relative importance of the different components and thus facilitates the interpretability of our approach.
In our model estimation, we treat all parameters as latent variables with a pre-defined prior distribution and, subsequently, maximize the overall likelihood of the parameters according to the following procedure. That is, utilizing Bayes' theorem, \Cref{eqn:theta_star} is rewritten to
\begin{equation}
P(\theta \,|\, X) = \frac{P(X \,|\, \theta)\,P(\theta)}{P(X)}
= \frac{P(X \,|\, \theta)\,P(\theta)}{\int \! P(X \,|\, \theta)\, P(\theta) \, \text{d}\theta} .
\end{equation}
In general, the denominator lacks an analytical solution and must be computed through sampling methods, the most prominent being Markov chain Monte Carlo~(MCMC). However, MCMC methods are computationally expensive, as the runtime scales exponentially with the dimension of $\theta$. Thus, this algorithm becomes intractable for large-scale or high-dimensional datasets. As a remedy, we propose the use of variational Bayes for approximating the posterior distributions. We derive a variational lower bound, the so-called evidence lower bound~(ELBO), for our structured-effect neural network in \Cref{sec:derivation}.
\subsubsection{Estimation parameters}
In our experiments, we optimize the $\mathit{SENN}$ model by utilizing the Adam optimizer with learning rate \num{0.005} and all other parameters set to their default values. All implementations are performed in Python utilizing the probabilistic programming library \textquote{pyro} (\url{http://pyro.ai/}). Code for reproducibility is available online.\footnote{See \url{https://github.com/MathiasKraus/PredictiveMaintenance}}
As part of our computational experiments, we later draw upon the following architectures of the structured-effect neural network: (1)~we assume the non-parametric component to follow a Weibull or log-normal prior distribution, where the underlying distribution parameters are modeled as informative normal prior distributions. Mathematically, this is given by $a \sim \mathcal{N}(a_{\text{empirical}},\,1)$ and $b \sim \mathcal{N}(b_{\text{empirical}},\,1)$. (2)~The linear component is modeled such that the coefficients stem from normal prior distributions (\ie, as used in ordinary least squares). This is formalized by $\beta_i \sim \mathcal{N}(0,\,10)$, where we allow for a wider standard deviation to better handle variations in the relative influence of the predictors. As an alternative, we also implement weakly informative prior distributions (\ie, Laplace priors). The latter enforce a regularization similar to the least absolute shrinkage operator in the sense that certain coefficients are set exactly to zero in order to perform implicit variable selection and come up with a parsimonious model structure. (3)~The recurrent component is implemented as a long short-term memory network with two layers containing \num{100} and \num{50} neurons, respectively. To reduce computational costs, we follow common approaches and utilize the trajectory of the previous \num{50} sensor values at all time steps. All weights in the network are implemented as variational parameters that follow a Gaussian prior with standard deviation of 1. Utilizing \Cref{equ:grad}, we optimize the three components simultaneously.
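For illustration, the following sketch shows how such priors and the variational approximation can be specified in pyro (heavily simplified: only a linear component with normal priors is shown, and all variable names are our own):
\begin{verbatim}
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
from pyro.infer.autoguide import AutoNormal
from pyro.optim import Adam

def model(x, y=None):
    # Normal priors on the linear coefficients, cf. beta_i ~ N(0, 10)
    beta = pyro.sample("beta", dist.Normal(torch.zeros(x.shape[1]),
                                           10.0).to_event(1))
    sigma = pyro.sample("sigma", dist.HalfNormal(10.0))
    with pyro.plate("data", x.shape[0]):
        pyro.sample("obs", dist.Normal(x @ beta, sigma), obs=y)

guide = AutoNormal(model)            # Gaussian variational posterior
svi = SVI(model, guide, Adam({"lr": 0.005}), loss=Trace_ELBO())

x = torch.randn(100, 21)             # synthetic sensor measurements
y = 80.0 + 10.0 * torch.randn(100)   # synthetic RUL labels
for step in range(1000):
    svi.step(x, y)                   # one gradient step on the ELBO
\end{verbatim}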
\section{Computational experiments}
\label{sec:experiments}
\subsection{Dataset}
For reasons of comparability, all computational experiments are based on the \textquote{Turbofan Engine Degradation Simulation} dataset, which is widely utilized as a baseline for comparing predictive models in maintenance and RUL predictions; see \eg, \citet{Butcher.2013} and \citet{Dong.2017}. The objective is to predict the RUL (measured in cycles) based on sensor data from 200 aircraft engines.\footnote{The specific dataset of this study can be downloaded from \url{https://ti.arc.nasa.gov/tech/dash/groups/pcoe/prognostic-data-repository/}, accessed April~18, 2019.} More specifically, it includes measurements from 21 sensors. Unfortunately, however, the exact names of the sensors are sanitized. In addition, the dataset comes with a pre-determined split into a training set (100 engines) and a test set (also 100 engines). The average RUL is \num{82.30} cycles with a standard deviation of \num{54.59}. Moreover, half of the engines experience a failure within \num{77} cycles, while only \SI{25}{\percent} exceed \num{118} cycles.
\subsection{Prediction performance for remaining useful life}
The prediction results for all models are listed in \Cref{tbl:results}. Here we report the mean absolute error, as it represents a widely utilized metric for this dataset \citep{Ramasso.2014}. The benefit of this metric is that practitioners can easily translate the forecast error into a number of cycles that would serve as a security margin. The table also compares two different feature sets for traditional machine learning, \ie, on which we use only the sensor measurements from the current time step or on which we additionally apply aggregation functions to the sensors as part of feature engineering.
The empirical RUL in the first row reflects the performance of our na{\"i}ve benchmark when using no predictor (\ie, predicting the average RUL of the machines). The following conditional expectations are based on the Weibull and log-normal distributions, which result in improvements of \SI{38.32}{\percent} and \SI{39.17}{\percent}, respectively. Among the traditional machine learning models, we find the lowest mean absolute error when using the random forest, which yields an improvement of \SI{35.08}{\percent} compared to the log-normal-based conditional expectation. Notably, our results indicate a superior performance through the use of feature engineering for the majority of traditional machine learning models. Recurrent neural networks outperform traditional machine learning. In particular, the LSTM yields the overall lowest mean absolute error, outperforming the random forest with feature engineering by \SI{37.12}{\percent} and the empirical RUL by \SI{75.17}{\percent}.
The structured-effect neural networks outperform traditional machine learning. Utilizing a log-normal distribution along with feature engineering yields an improvement of \SI{25.44}{\percent} compared to the best traditional machine learning model. Thereby, feature engineering accounts for \SI{11.91}{\percent} of the improvement, strengthening the assumption that feature engineering of sensor data facilitates the prediction of RUL.
\begin{table}[H]
\tiny
\makebox[\textwidth]{
\begin{tabular}{ll S *{4}{S[table-align-text-post=false]} *{4}{S[table-align-text-post=false]}}
\toprule
\multicolumn{2}{l}{\textbf{Method}} &
\multicolumn{1}{c}{\textbf{MAE}} & \multicolumn{4}{c}{\textbf{Forecast comparison ($\bm{t}$-statistic)}} & \multicolumn{4}{c}{\textbf{Forecast comparison ($\bm{P}$-value)}} \\
\cmidrule(l){3-3} \cmidrule(l){4-7} \cmidrule(l){8-11}
\multicolumn{2}{l}{} & & \multicolumn{1}{c}{Best} & \multicolumn{1}{c}{Best} & \multicolumn{1}{c}{Best} & \multicolumn{1}{c}{Best} & \multicolumn{1}{c}{Best} & \multicolumn{1}{c}{Best} & \multicolumn{1}{c}{Best} & \multicolumn{1}{c}{Best} \\
\multicolumn{2}{l}{} & & \multicolumn{1}{c}{baseline} & \multicolumn{1}{c}{machine} & \multicolumn{1}{c}{LSTM} & \multicolumn{1}{c}{structured-effect} & \multicolumn{1}{c}{baseline} & \multicolumn{1}{c}{machine} & \multicolumn{1}{c}{LSTM} & \multicolumn{1}{c}{structured-effect} \\
\multicolumn{2}{l}{} & & & \multicolumn{1}{c}{learning} & & \multicolumn{1}{c}{LSTM} & & \multicolumn{1}{c}{learning} & & \multicolumn{1}{c}{LSTM} \\
\midrule
\multicolumn{6}{l}{\textsc{Baselines without sensor data}} \\
\multicolumn{2}{l}{\quad Empirical RUL} & 45.060 & 8.286 & 24.933 & 16.974 & 18.865 & 0.468 & 0.486 & 0.778 & 0.642 \\
\multicolumn{2}{l}{\quad Conditional expectation (Weibull)} & 27.794 & 0.288 & 8.309 & 9.615 & 8.018 & 0.293 & 0.462 & 0.666 & 0.457 \\
\multicolumn{2}{l}{\quad \textbf{Conditional expectation (log-normal)}} & \bfseries 27.409 & {---} & 4.420 & 15.269 & 10.285 & {---} & \bfseries 0.464 & \bfseries 0.669 & \bfseries 0.451 \\
\midrule
\multicolumn{6}{l}{\textsc{Traditional machine learning}} \\
\multicolumn{2}{l}{\quad Ridge regression} & 19.193 & -6.789 & 1.438 & 5.270 & 2.768 & 0.012* & 0.139 & 0.450 & 0.322 \\
\multicolumn{2}{l}{\quad Ridge regression (with feature engineering)} & 18.382 & -8.029 & 0.427 & 3.297 & 2.778 & 0.010* & 0.132 & 0.343 & 0.399 \\[0.25em]
\multicolumn{2}{l}{\quad Lasso} & 19.229 & -7.015 & 0.766 & 6.577 & 4.145 & 0.015* & 0.222 & 0.500 & 0.401 \\
\multicolumn{2}{l}{\quad Lasso (with feature engineering)} & 18.853 & -7.842 & 0.550 & 5.949 & 2.324 & 0.014* & 0.293 & 0.432 & 0.390 \\[0.25em]
\multicolumn{2}{l}{\quad Elastic net} & 19.229 & -7.276 & 0.990 & 5.190 & 2.829 & 0.015* & 0.222 & 0.500 & 0.401 \\
\multicolumn{2}{l}{\quad Elastic net (with feature engineering)} & 18.245 & -9.055 & 0.458 & 4.244 & 2.572 & 0.009** & 0.132 & 0.297 & 0.245 \\[0.25em]
\multicolumn{2}{l}{\quad Random forest} & 17.884 & -4.927 & 0.058 & 5.909 & 4.487 & 0.006** & 0.102 & 0.240 & 0.198 \\
\multicolumn{2}{l}{\quad \textbf{Random forest (with feature engineering)}} & \bfseries 17.793 & -9.495 & {---} & 2.924 & 3.263 & \bfseries 0.006** & {---} & \bfseries 0.236 & \bfseries 0.180 \\[0.25em]
\multicolumn{2}{l}{\quad SVR} & 18.109 & -7.321 & 0.240 & 5.756 & 3.976 & 0.011* & 0.129 & 0.288 & 0.230 \\
\multicolumn{2}{l}{\quad SVR (with feature engineering)} & 21.932 & -4.706 & 3.081 & 9.440 & 3.740 & 0.092* & 0.310 & 0.583 & 0.531 \\
\midrule
\multicolumn{6}{l}{\textsc{Recurrent neural networks}} \\
\quad \textbf{LSTM} & & \bfseries 11.188 & -11.596 & -4.981 & {---} & -1.441 & \bfseries 0.000*** & \bfseries 0.000*** & {---} & \bfseries 0.003** \\
\midrule
\multicolumn{6}{l}{\textsc{Structured-effect neural networks}} \\
\quad \emph{Distribution} & \emph{Linear component} \\
\cmidrule(l){1-1}\cmidrule(l){2-2}
\quad Weibull & None & 15.862 & -11.266 & -1.015 & 4.424 & 1.825 & 0.000*** & 0.066* & 0.255 & 0.200 \\
\quad Weibull & Regularized & 17.433 & -8.617 & -0.213 & 3.536 & 2.746 & 0.000*** & 0.094* & 0.261 & 0.220 \\
\quad Weibull & Feature engineering & 13.392 & -8.526 & -2.595 & 1.352 & 0.068 & 0.000*** & 0.004** & 0.144 & 0.134 \\
\quad Weibull & Regularized feature engineering & 14.989 & -10.918 & -1.579 & 1.710 & 1.381 & 0.000*** & 0.049* & 0.284 & 0.183 \\[0.25em]
\quad log-normal & None & 15.061 & -6.294 & -1.420 & 1.627 & 1.702 & 0.000*** & 0.057* & 0.261 & 0.198 \\
\quad log-normal & Regularized & 16.319 & -8.200 & -0.779 & 4.582 & 1.334 & 0.000*** & 0.094* & 0.310 & 0.211 \\
\quad \textbf{log-normal} & \textbf{Feature engineering} & \bfseries 13.267 & -13.620 & -2.912 & 1.764 & {---} & \bfseries 0.000*** & \bfseries 0.000*** & \bfseries 0.142 & {---} \\
\quad log-normal & Regularized feature engineering & 14.545 & -11.331 & -1.736 & 3.293 & 0.651 & 0.000*** & 0.038* & 0.252 & 0.162 \\[0.25em]
\bottomrule
\multicolumn{4}{l}{\textbf{Significance level: * 0.1, ** 0.01, *** 0.001}}
\end{tabular}}
\caption{Comparison of prediction performance over remaining useful life across different model specifications. Here we specifically report whether the models only utilize sensor measurements from the current time step or whether aggregation functions have been applied to it as part of feature engineering. Consistent with earlier works \citep{Ramasso.2014}, the mean absolute error (MAE) is given. The best-performing model in each panel is highlighted in bold. Additionally, we perform $t$-tests between each model and the best performing model from each of the four categories. The $t$-tests are based on the MAE of the forecasted RUL to show that improvements are at a statistically significant level.}
\label{tbl:results}
\end{table}
\subsection{Forecast decomposition for RUL predictions}
We now demonstrate how the proposed structured-effect model achieves accountability over its RUL forecasts. That is, we leverage the linear model specification and compute the estimated values for each summand in \Cref{eq:structured_effect_neural_network} when making a RUL prediction. Yet the model can still adapt to non-linearity since the neural network can absorb the variance that cannot be explained by the other components.
\Cref{fig:decomposition} illustrates the interpretability of the RUL forecasts for an example engine. More specifically, we can understand how predictions are formed by decomposing the forecasts from the structured-effect model into three components -- namely, the probabilistic RUL model, the linear combination of sensor measurements, and an additional neural network -- as follows:
\begin{enumerate}
\item As we can see, the distribution-based lifetime component accounts for a considerable portion of the forecast. The maximum values in the example exceed \num{100}, which is considerably higher than the maximum value computed by the recurrent component. The component reaches this value when making a prediction after around \num{50} cycles and, with each subsequent usage cycle, lowers the estimated remaining useful life.
Notably, it is identical across all engines as it encodes the prior knowledge before considering the engine-specific deterioration process.
\item The second component specifies a linear combination of sensor measurements, which allows the predictions to adapt to the specific usage profiles of individual engines and explains the within-engine and within-time variability. It thus no longer yields a smooth curve but rather an engine-specific pattern. Formally, this component refers to $\beta^T X_t$ and, in order to determine the relevance of sensor $i$, we simply interpret the coefficients in the vector $\beta$.
\item While the previous linear component still achieves full accountability over its forecasts, we now introduce the final component for modeling the remaining noise. Here we draw upon (deep) neural networks, as they are known to effectively model non-linearities. However, we thus lose the explanatory power for this component, as neural networks largely operate in a black-box fashion. In our example, we see that the recurrent part entails a non-linear curve but takes higher values in later cycles. This indicates that a linear combination is not always sufficient for making predictions and, as a remedy, the structured-effect model can benefit from additional non-linear relationships and from accumulating the usage profile over time.
Notably, the magnitude of the recurrent component is much smaller than the magnitude of the other components. This is beneficial, as the SENN attributes most of the explained variance to the other, interpretable model components. Methodologically, this is likely due to the following: at time step $t$, the SENN predicts the RUL from the current sensor data $X_t$ and the trajectory of sensor data $X_t, X_{t-1}, X_{t-2},\dots, X_1$. As shown in \Cref{tbl:results}, $X_t$ is highly informative for estimating RUL and, by following stochastic gradient descent, the optimization proceeds in the direction in which the loss function decreases the most (\ie, in the direction of both the distribution-based and the linear component). Only after optimizing the interpretable components does the model update the recurrent component to further improve predictive performance via non-linear mappings.
\end{enumerate}
\begin{figure}
\vspace{-1cm}
\centering
\makebox[0.7\textwidth]{\includegraphics[width=0.7\linewidth]{pred_components}}
\caption{This plot visualizes the RUL predictions made by the structured-effect LSTM based on a log-normal distribution and normal priors in the linear component for an example engine. It decomposes the forecasts from the structured-effect model into three components that facilitate interpretations of how predictions are formed. (1)~The distribution-based lifetime component contributes a considerable portion of the overall forecast, as it is well suited to model the overall nature of the remaining useful life. This part is identical across all engines. (2)~The sensor measurements introduce a variability that adapts to the specific usage profile of an engine. This component originates from a Bayesian linear model, and we can trace the forecast back to individual sensors. (3)~The recurrent neural network introduces a non-linear component that operates in a black-box fashion.}
\label{fig:decomposition}
\end{figure}
We can further compute the fraction of variance explained by the different components relative to the overall variance of the actual RUL values, thereby quantifying the contribution of each component to the overall forecast. Formally, it is defined as
\begin{equation}
1 - \frac{ \sum_t \left( \tilde{Y}_t - \tilde{\Psi}_t \right)^2 }{\sum_t \left( \tilde{Y}_t - \frac{1}{T} \sum_t \tilde{Y}_t \right)^2} ,
\end{equation}
where $T$ denotes the total number of observations and $\tilde{\Psi}_t$ is the prediction of the component under study. Accordingly, we obtain a score of \num{0.175} for the distribution-based part, \num{0.408} for the linear component, and \num{0.064} for the recurrent component. This matches our expectations and, once more, highlights the importance of the distribution-based and linear parts of the overall forecast, as well as the role of the neural network in modeling secondary variations.
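For illustration, this score can be computed with a few lines of NumPy; the variable names below are ours, and \texttt{y\_component} stands for the per-observation predictions $\tilde{\Psi}_t$ of the component under study:
\begin{verbatim}
import numpy as np

def explained_fraction(y_true, y_component):
    """Fraction of the RUL variance captured by one component."""
    y_true = np.asarray(y_true, dtype=float)
    y_component = np.asarray(y_component, dtype=float)
    sse = np.sum((y_true - y_component) ** 2)
    sst = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - sse / sst
\end{verbatim}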
\subsection{Estimated parameters}
It is common practice to compute parameters in predictive models as point estimates \citep{Hastie.2009}, while our optimization technique based on variational Bayesian inference allows for uncertainty estimates. This yields a key benefit, since we can validate our confidence in the structural components of the model by studying the posterior distribution of the parameter estimates. \Cref{fig:posterior_dist} depicts these parameters specifying the Weibull and log-normal distribution inside the structured-effect models.
\begin{figure}
\centering
\makebox[\textwidth]{\includegraphics[width=0.7\linewidth]{shape_scale_weib_gamma}}
\caption{These histograms illustrate the posterior distribution of the estimated parameters (\ie shape, mean and scale) inside the structured-effect neural network (with log-normal structure and normal priors in the linear component). The estimates are compared for both distributions, namely the Weibull and the log-normal distribution. Altogether, the posteriors quantify the uncertainty of the estimated parameters.}
\label{fig:posterior_dist}
\end{figure}
\Cref{tbl:posterior_linear_component} further reports the posterior distribution of the coefficients $\beta_i$ from the linear component of the structured-effect neural network (shown is the SENN model with log-normal structure and normal priors for $\beta_i$). These measure the effect size, \ie, how a change in a sensor measurement affects the forecast. We see that the confidence regions for the different coefficients vary considerably. For reasons of comparability, we further report the standardized coefficients $\beta_i \, \text{sd}(X_i) / \text{sd}(Y_1, \ldots, Y_T)$, which correct for the variance of the predictor, as well as of the outcome \citep{Bring.1994}. As a result, this value allows us to rank variables by their importance (a sketch of this computation follows the table).
\begin{table}[H]
\notsotiny
\centering
\begin{tabular}{l SSS}
\toprule
\textbf{Sensor} & \textbf{Mean} & \textbf{Standard} & \textbf{Standardized} \\
& \textbf{estimate} & \textbf{deviation} & \textbf{coefficient} \\
\midrule
$X_9$ & -33.169 & 0.498 & -16.506 \\
$X_{12}$ & 49.721 & 0.250 & 12.440 \\
$X_{21}$ & 44.932 & 0.258 & 11.601 \\
$X_7$ & 48.154 & 0.230 & 11.073 \\
$X_{11}$ & -24.622 & 0.357 & -8.796 \\
$X_{20}$ & 42.850 & 0.184 & 7.880 \\
$X_{14}$ & -22.540 & 0.317 & -7.155 \\
$X_4$ & -16.183 & 0.275 & -4.447 \\
$X_{15}$ & -13.934 & 0.297 & -4.145 \\
$X_2$ & -12.446 & 0.316 & -3.931 \\
$X_6$ & 19.678 & 0.159 & 3.126 \\
$X_3$ & -7.851 & 0.318 & -2.494 \\
$X_{17}$ & -9.951 & 0.208 & -2.066 \\
$X_8$ & -4.121 & 0.367 & -1.511 \\
$X_{16}$ & -0.273 & 2.105 & -0.574 \\
$X_{13}$ & -1.424 & 0.348 & -0.495 \\
$X_{19}$ & 0.234 & 2.028 & 0.474 \\
$X_{10}$ & 0.126 & 1.900 & 0.239 \\
$X_{18}$ & -0.095 & 1.936 & -0.184 \\
$X_1$ & -0.070 & 1.937 & -0.135 \\
$X_5$ & 0.001 & 1.993 & 0.002 \\
\bottomrule
\end{tabular}
\caption{Reported here are the posterior estimates of the effect size as measured by the coefficients $\beta_i$ inside the linear component of the structured-effect neural network. The coefficients entail direct interpretations (similar to ordinary least squares) as to how a change in a sensor measurement affects the RUL prediction. In addition, standardized coefficients are reported, as they allow for the ranking of variables by importance.}
\label{tbl:posterior_linear_component}
\end{table}
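As a minimal sketch, the standardization above can be computed as follows, assuming the posterior means and the raw sensor matrix are available as arrays (all names are illustrative):
\begin{verbatim}
import numpy as np

def standardized_coefficients(beta_mean, X, y):
    """Scale raw coefficients so that sensors become comparable.

    beta_mean: posterior means of the coefficients, length d
    X:         sensor measurements of shape (T, d)
    y:         observed RUL values of shape (T,)
    """
    beta_mean = np.asarray(beta_mean, dtype=float)
    return beta_mean * X.std(axis=0) / np.std(y)
\end{verbatim}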
\subsection{Model variations}
\label{sec:model_variations}
We experimented with alternative specifications of our structured-effect neural network as follows.
First, we extended the neural network by an additional weighting factor $\gamma$. This yields a component $\gamma \, \mathit{RNN}_\Theta$. However, it resulted in an inferior performance in all of our experiments due to severe overfitting.
Second, we experimented with a two-stage estimation approach. Here we first optimized the non-parametric component and the linear component by traditional gradient descent. Afterwards, we optimized the recurrent component against the residuals from the first stage. This approach is generally easier to train, as there are fewer parameters in each stage. Yet we found an inferior performance as compared to the proposed $\mathit{SENN}$: the mean absolute error increased to \num{16.209}. This is possibly because the two-stage approach prevents information sharing between the different components. Details are reported in \Cref{sec:two_stage_optimization}.
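The following toy sketch illustrates the two-stage idea on synthetic data; it replaces the actual structured and recurrent components with a linear regression and a small multi-layer perceptron, so it illustrates the estimation scheme rather than our implementation:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))              # stand-in for sensor features
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + np.sin(X[:, 0])

stage1 = LinearRegression().fit(X, y)      # interpretable structured part
residuals = y - stage1.predict(X)

stage2 = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                      random_state=0).fit(X, residuals)

y_hat = stage1.predict(X) + stage2.predict(X)  # combined forecast
\end{verbatim}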
\section{Discussion}
\label{sec:discussion}
\subsection{Implication for decision support}
Estimates of the remaining useful life can facilitate decision support with the objective of replacing deteriorated components and thus mitigating potential risks and failures that generally result in increased costs.
Our approach to predicting remaining useful life is thus of direct relevance to practitioners. According to a McKinsey report, the use of accurate prediction models for RUL as a cornerstone for predictive maintenance can typically reduce the downtime of machinery by \SIrange{30}{50}{\percent} and, at the same time, increase the overall life of machines by \SIrange{20}{40}{\percent}.\footnote{McKinsey (2017). Manufacturing: Analytics unleashes productivity and profitability. URL: \url{https://www.mckinsey.com/business-functions/operations/our-insights/manufacturing-analytics-unleashes-productivity-and-profitability}, last accessed on April~18, 2019.} By knowing the exact time-to-failure, companies can plan maintenance ahead of time and, therefore, make preparations for efficient decision support. As a result, even small improvements in predictive power translate into substantial operational cost savings.
As a direct implication for management, this research shows that forward-looking predictive analytics is capable of heavily influencing the way decision support in maintenance operations is conducted. However, predictive models are most powerful when fed by a large number of predictors (\ie sensors) that describe the condition of the machinery and the environmental effects that influence the system. Therefore, managers should encourage the implementation of additional sensors to further improve accuracy when forecasting the time-to-failure. Moreover, for the majority of firms, which have not yet taken their first steps into deep learning, investments in artificial intelligence are often a necessary precondition.
\subsection{Implications for the use of analytics}
Trajectories of sensor data accumulate relevant information regarding the past usage profile of the machinery and, thereby, facilitate a prognosis regarding the risk of failure. Mathematically, this results in the objective of finding a mapping $f : \left[ X_1, \ldots, X_t \right] \mapsto Y_t$ that is not dependent on the current time step $t$ and thus utilizes a time series with historic measurements of arbitrary length in order to infer a prediction from it. The task can be accomplished by a special type of deep learning -- namely, recurrent neural networks -- as these networks can sequentially process past measurements and store the processed knowledge in their hidden layers. Even though the benefits are obvious, the use of such networks in decision support systems research remains scarce with few exceptions \citep[\eg][]{Evermann.2017, Kraus.2017, Mahmoudi.2018}.
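A minimal sketch of such a mapping, assuming 21 sensor channels as in our data and using Keras (the layer sizes are illustrative, not those of our model):
\begin{verbatim}
import tensorflow as tf

model = tf.keras.Sequential([
    # accepts sequences of arbitrary length with 21 sensors each
    tf.keras.layers.Masking(mask_value=0.0, input_shape=(None, 21)),
    tf.keras.layers.LSTM(50),   # accumulates the usage profile over time
    tf.keras.layers.Dense(1),   # predicted remaining useful life
])
model.compile(optimizer="adam", loss="mse")
\end{verbatim}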
Deep learning is often believed to require extensive amounts of data in order to be successful. However, our approach, based on variational Bayesian estimation, represents a viable alternative that can overcome this limitation. Instead of vast quantities of input data, it advocates domain knowledge that is explicitly encoded in a structural model. In our case, we already know the approximate shape of the predicted variable and can incorporate this via a probability density function into our structural part of the model. As a result, the predetermined structure can be fitted fairly easily with variational inference and thus presents a path towards encoding domain knowledge into deep neural networks. The structured effect reduces the variance and thus makes it easier to describe the remaining variance with a neural network.
\subsection{Implications from interpretable forecasts}
Our approach contributes to the interpretability of deep learning. Here we remind the reader of the difference between explainability and interpretability in machine learning \citep{Lipton.2018,Lou.2013,Rudin.2018}. Explainability merely allows a post-hoc analysis of how predictions were computed in a local neighborhood. In contrast, interpretability presents a stronger notion: it requires machine learning models to attain complete transparency of their decision logic. Thus, we contribute a novel approach for interpretable machine learning in decision support that can eventually benefit (safety-)critical application fields where accountable models are required.
The high degree of interpretability of our approach reveals further implications. In practice, gaining insights into the estimated RUL aids engineers in identifying potential risks and weak spots when designing machinery. For instance, a high coefficient for a sensor measuring moisture could encourage designers to improve the sealing of a given piece of machinery. By shedding light on the prediction process, structured-effect neural networks enable novel conclusions regarding the relevance of each sensor.
Decision support as a discipline takes the demands of all stakeholders into account. Managers, for instance, need to understand the decision-making of automated systems. However, this requirement is not fulfilled by recent trends in advanced analytics and especially deep learning, as these mostly operate in a black-box fashion \citep[\eg][]{Goodfellow.2017}. As a remedy, our structured-effect neural network shows improvements in predictive performance as compared to traditional machine learning, while also allowing for a high degree of interpretability. \Cref{tbl:PredVsInterp} compares the stylized characteristics of our structured-effect neural network to other approaches.
\begin{table}[H]
\scriptsize
\centering
\begin{tabular}{p{2cm} p{4cm}p{4.5cm}p{4cm}p{3.5cm}}
\toprule
\textbf{Method} & \textbf{Predictive performance} & \textbf{Interpretability} & \textbf{Non-linearities} & \textbf{Estimation}\\
\midrule
Probabilistic models & Poor (but reliable as no variance is associated with it) & Good (often used along a simple threshold) & Poor (or rather constrained by how well outcomes follow a distribution) & Good (when using sampling over analytic forms) \\[0.25em]
Linear machine learning & Fair (regularization, especially, can yield parsimonious models and reduces the risk of overfitting) & Good (as coefficients directly quantify the effect size) & Poor (often not regarded or only interaction terms or predefined transformations such as logit) & Good (closed-form solution for ordinary least squares; optimization problems for regularization) \\[0.25em]
Non-linear machine learning & Good (still regarded as the benchmark against which other models have to compete) & Poor (with the exception of certain approaches, \eg, random forests, that rank variable importance but still do not yield accountability of forecasts) & Good (can adapt well to subgroups, non-linear response curves and interactions) & Fair (often efficient estimations, but without uncertainty quantification) \\[0.25em]
Deep learning & Good (given sufficient training data) & Poor & Good (even when taking sequences as input) & Poor (challenging hyperparameter tuning) \\
\midrule
\textbf{Structured-effect neural network} & Good (theoretically identical to deep learning) & Good (full accountability of the structured effect) & Good (included but only confined to the variance that cannot be explained by the structured effect) & Fair (time-consuming sampling but less prone to unfavorable hyperparameters)\\
\bottomrule
\end{tabular}
\caption{Stylized characteristics of different models in machine learning. Here we extend the categorization from \citet[p.\,351]{Hastie.2009} to include deep learning and our structured-effect neural network.}
\label{tbl:PredVsInterp}
\end{table}
Sensors have always been an important part of predictive maintenance, as they allow firms to monitor small changes and to intervene early so that small problems do not turn into big problems \citep[\eg][]{Rabatel.2011}. Many different sensors monitoring different measurements can be the key to better understanding processes and preventing early failures and the consequent downtime. However, the complex relationships between a potentially large number of sensors and their effects on the machinery demand advanced, non-linear modeling of the remaining useful life. Thus, to fully exploit the information obtained from sensors, interpretability is of great use. Our structured-effect neural network bridges the gap between these key specifications.
\subsection{Limitations and potential for future research}
Recently, dropout as a Bayesian approximation has been proposed as a simple, yet efficient means to obtain uncertainty estimates for neural networks \citep{Gal.2016}. This approach leads to models with fewer parameters, which generally facilitates optimization. Further, the computational costs of optimization are lower compared to variational Bayesian inference. However, Bayesian approximation via dropout does not provide uncertainty estimates for the coefficients in our model. Additionally, it cannot include prior information about the coefficients in the model. As the latter is particularly important for predictive maintenance, where expert knowledge is indispensable, we decided to utilize variational Bayesian inference.
\subsection{Concluding remarks}
Decision support as a field has developed a variety of approaches to improve the cost efficiency of maintenance, especially by predicting the remaining useful life of machinery and linking operational decision-making to it. Common approaches for predictive maintenance include statistical models based on probability density functions or machine learning, which further incorporates sensor data. While the former still serves as widespread common practice due to its reliability and interpretability, the latter has shown considerable improvements in prediction accuracy.
This research develops a new model that combines both advantages. Our suggested structured-effect neural network achieves accountability by combining a simple distribution-based RUL model as its primary component with a linear combination of sensor measurements. The remaining variance is then described by a recurrent neural network from the field of deep learning, which is known for its flexibility in adapting to non-linear relationships. For this purpose, all parameters are modeled as latent variables and we propose variational Bayesian inference for their estimation, minimizing the Kullback-Leibler divergence. Our findings reveal that our structured-effect neural network outperforms traditional machine learning models and still allows one to draw interpretable conclusions about the sources of the deterioration process.
\section{\uppercase{Introduction}}
The ability to accurately classify the sentiment of short sentences such as Facebook posts or tweets is essential to natural language understanding. In recent years, more and more users share information about their customer experience on social media pages related to (and managed by) the equivalent firms/companies. The generated data attracts a lot of research towards sentiment analysis, with many applications in political science, social sciences, business, education, etc. \cite{ortigosa2014sentiment}, \cite{feldman2013techniques}, \cite{troussas2013sentiment}.
Customer experience (CX) represents a holistic perspective on customer encounters with a firm's products or services. Thus, the more managers can understand about the experiences customers have with their product and service offerings, the better they can manage them in the future to influence purchase decisions. The rise of social media analytics \cite{fan2014power} offers managers a tool to manage this process, with customer opinion data being widely available on social media. Analysing Facebook posts can help firm managers handle posts better, by allowing customer care teams to reply faster to unsatisfied customers or even to delegate posts to employees based on their expertise. It would also be possible to estimate how the reply to a post affects the reactions of other customers. To our knowledge, no previous research work on predicting reactions to Facebook posts exists.
The main goals and contributions of this paper are the following: (a) contribute a dataset which can be used for predicting reactions to Facebook posts, useful for both machine learners and marketing experts, and (b) perform sentiment analysis and emotion mining on Facebook posts and comments of several supermarket chains by predicting the distribution of the user reactions. Firstly, sentiment analysis and emotion mining baseline techniques are utilized in order to analyse the sentiment/emotion of a post and its comments. Afterwards, neural networks with pretrained word embeddings are used in order to accurately predict the distribution of reactions to a post. The combination of the two approaches gives a working final ensemble and leaves promising directions for future research.
The remainder of the paper is organized as follows. Section \ref{sec:rw} presents related work about sentiment and emotion analysis on short informal text like from Facebook and Twitter. The used dataset is described in Section \ref{sec:dataset}, followed by the model (pipeline) description in Section \ref{sec:predictionsystem}. Section \ref{sec:experiments} presents the experimental results and finally, Section \ref{sec:conclusion} concludes the paper and presents future research directions.
\section{\uppercase{Related Work}}
\label{sec:rw}
Deep learning based approaches have recently become more popular for sentiment classification since they automatically extract features based on word embeddings. Convolutional Neural Networks (CNN), originally proposed in \cite{lecun1998gradient} for document recognition, have been extensively used for short sentence sentiment classification. \cite{Kim14f} uses a CNN and achieves state-of-the-art results in sentiment classification. They also highlight that one CNN layer in the model's architecture is sufficient to perform well on sentiment classification tasks. Recurrent Neural Networks (RNN) and more specifically their variants Long Short Term Memory (LSTM) networks \cite{hochreiter1997long} and Gated Recurrent Units (GRU) networks \cite{chung2014empirical} have also been extensively used for sentiment classification, since they are able to capture long-term relationships between words in a sentence while avoiding the vanishing and exploding gradient problems of standard recurrent network architectures \cite{hochreiter1998vanishing}. \cite{wang2014sentiment} shows that combining different architectures, such as CNN and GRU, in an ensemble learner improves the performance of individual base learners for sentiment classification, which makes it relevant for this research work as well.
Most of the work on short text sentiment classification concentrates around Twitter and different machine learning techniques \cite{wang2011topic}, \cite{kouloumpis2011twitter}, \cite{saif2012semantic}, \cite{sarlan2014twitter}. These are some examples of the extensive research already done on Twitter sentiment analysis. Not many approaches for Facebook posts exist, partly because it is difficult to get a labeled dataset for such a purpose.
Emotion lexicons like EmoLex \cite{emolex} can be used in order to annotate a corpus, however, results are not satisfactory and this is the reason that bootstrapping techniques have been attempted in the past. For example, \cite{bootstrap} propose such a technique which enhances EmoLex with synonyms and then combines word vectors \cite{mikolov2013efficient} in order to annotate more examples based on sentence similarity measures.
Recently, \cite{tian2017facebook} presented some first results which associate Facebook reactions with emojis, but their analysis stopped there. \cite{pool2016distant} utilized the actual reactions on posts in a distant supervised fashion to train a support vector machine classifier for emotion detection, but did not attempt to actually predict the distribution of reactions.
Moreover, the analysis of customer feedback is an area which has gained interest among companies over the years. Given the amount of text feedback available, there are many approaches around this topic; however, none of them handle the increasing amount of information available through Facebook posts. For the sake of completeness, we highlight some of these approaches here. Sentiment classification (\cite{pang2002thumbs}, \cite{glorot2011domain}, \cite{socher2013recursive}) deals only with sentiment analysis (usually mapping sentiments to positive, negative and neutral, or to another 5-scale classification) and, similarly, emotion classification (\cite{yang2007emotion}, \cite{wen2014emotion}) only considers emotions. Some work exists on Twitter data \cite{pak2010twitter} but does not take into account the reactions of Facebook. Moreover, work has been conducted towards customer review analysis (\cite{yang2004online}, \cite{hu2004mining}, \cite{cambria2013new}), but none of it deals with the specific nature of Facebook (or social media in general).
In this work, we combine sentiment analysis and emotion mining techniques with neural network architectures in order to predict the distribution of reactions on Facebook posts and actually demonstrate that such an approach is feasible.
\section{\uppercase{Dataset Construction}}
\label{sec:dataset}
Our dataset consists of Facebook posts on the customer service pages of 12 large US/UK supermarket/retail chains, namely Tesco, Sainsbury, Walmart, AldiUK, The Home Depot, Target, Walgreens, Amazon, Best Buy, Safeway, Macys and Publix. The vast majority of these posts are initiated by customers of these supermarkets. In addition to the written text of the posts, we also fetch Facebook's reaction counts\footnote{\url{http://newsroom.fb.com/news/2016/02/reactions-now-available-globally/}} as well as the comments attached to each post made by other users. Such reactions belong only to the initial post and not to replies to the post, since the feature to post a reaction on a reply was only introduced very recently (May 2017) and would result in either a very small or an incomplete dataset. These reactions include \textit{like}, \textit{love}, \textit{wow}, \textit{haha}, \textit{sad}, \textit{angry}, as shown in Figure \ref{fig:fbreactions}. This form of communication was introduced by Facebook on February 24th, 2016 and allows users to express an `emotion' towards the posted content.
\begin{figure}[h!]
\centering
\scalebox{0.9}{
\includegraphics[width=\columnwidth]{reactions-image-en_us.png}}
\caption{The Facebook reaction icons that users are able to select for an original post.}
\label{fig:fbreactions}
\end{figure}
In total, there were more than 70,000 posts without any reaction, and these were excluded from the dataset. Apart from this problem, people use the `like' reaction not only to show that they like what they see/read, but also simply to tell others that they have seen a post or to show sympathy. This results in the `like' reaction being used far too often, which is why likes can be ignored in the constructed dataset. So, instead of using all crawled data, the developed models are trained on posts that have at least one reaction other than a like. After applying this threshold, the size of the training set was reduced from 70,649 to 25,969. The threshold of 1 is still not optimal, since it leaves much space for noise in the data (e.g. mis-clicked reactions), but using a higher threshold would lead to an extreme loss of data. Statistics on the dataset and on how many posts `survive' under different thresholds can be seen in Figure \ref{fig:total_post_thresholds}.
\begin{figure}[!h]
\centering
\scalebox{1}{
\includegraphics[width = \columnwidth]{total_posts.png}}
\caption{Amount of survived posts for different thresholds including/excluding likes}
\label{fig:total_post_thresholds}
\end{figure}
Exploratory analysis of the dataset shows that people tend to agree in their reactions to Facebook posts (which is convenient for building a prediction system), i.e. whenever a post receives more than one type of reaction, the reactions match to a large degree (over 80\%), as can be seen in Figure \ref{fig:react}. In addition, Figure \ref{fig:equal} shows that even when excluding the \textit{like} reaction, which seems to dominate all posts, the distribution of the reactions remains the same, even if the threshold on the minimum number of reactions increases. Using all the previous insights and the fact that there are 25,969 posts with at least one reaction, of which the \textit{like} reaction dominates, we chose to include posts with at least one reaction which is not a \textit{like}, leading to a final count of 8,103 posts. The full dataset is available\footnote{\url{https://github.com/jerryspan/FacebookR}} and is being curated and validated at the time of submission.
\begin{figure}[h!]
\centering
\scalebox{1}{
\includegraphics[width=\columnwidth]{reactions.png}}
\caption{Reaction match when there is more than one type}
\label{fig:react}
\end{figure}
\begin{figure}[h!]
\centering
\scalebox{1}{
\includegraphics[width=\columnwidth]{equalfreq.png}}
\caption{Distribution of reactions with different minimum thresholds}
\label{fig:equal}
\end{figure}
\subsection{Pre-processing}
Pre-processing on the dataset is carried out using the Stanford CoreNLP parser \cite{corenlp} and includes the following steps:
\begin{itemize}
\item Convert everything to lower case
\item Replace URLs with ``\_\_URL\_\_'' as a generic token
\item Replace user/profile links with ``\_\_AT\_USER\_\_'' as a generic token
\item Remove the hash from a hashtag reference (e.g. \#hashtag becomes ``hashtag'')
\item Replace three or more occurrences of one character in a row with the character
itself (e.g. ``looooove'' becomes ``love'')
\item Remove sequences containing numbers (e.g. ``gr34t'')
\end{itemize}
Afterwards, each post is split using a whitespace-based tokenizer and, after some stop-word filtering, the final list of distinct tokens is derived. Since pre-processing of short text has attracted much attention recently \cite{singh2016role}, we also demonstrate its effect on the developed models in the Experiments section.
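The following is a rough sketch of the normalization steps above using regular expressions; the actual pipeline builds on the Stanford CoreNLP parser and may differ in detail:
\begin{verbatim}
import re

def preprocess(post):
    """Apply the normalization steps listed above to one raw post."""
    text = post.lower()
    text = re.sub(r"https?://\S+", "__URL__", text)   # generic URL token
    text = re.sub(r"@\w+", "__AT_USER__", text)       # generic user token
    text = re.sub(r"#(\w+)", r"\1", text)             # drop the hash sign
    text = re.sub(r"(.)\1{2,}", r"\1", text)          # "looooove" -> "love"
    return " ".join(w for w in text.split()
                    if not re.search(r"\d", w))       # drop "gr34t" etc.
\end{verbatim}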
\section{\uppercase{Reaction distribution prediction system pipeline}}
\label{sec:predictionsystem}
In this Section, the complete prediction system is described. There are three core components: emotion mining applied to Facebook comments, artificial neural networks that predict the distribution of the reactions for a Facebook post and a combination of the two in the final prediction of the distribution of reactions.
\subsection{Emotion mining}
\label{sec:emotionmining}
The overall pipeline of the emotion miner can be found in Figure \ref{fig:miner}.
\begin{figure*}[h!]
\centering
\scalebox{0.9}{
\includegraphics[width=\textwidth]{emotionminer.png}}
\caption{Emotion miner pipeline}
\label{fig:miner}
\end{figure*}
The emotion lexicon that we utilize was created by \cite{emolex} and is called the NRC Emotion Lexicon (EmoLex). This lexicon consists of 14,181 words, with eight basic emotions (anger, fear, anticipation, trust, surprise, sadness, joy, and disgust) associated with each word in the lexicon. It is possible that a single word is associated with more than one emotion. An example can be seen in Table \ref{tab:emolex}. Annotations were performed manually via crowd-sourcing.
\begin{table}[H]
\caption{Examples from EmoLex showing the emotion association to the words abuse and shopping.}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
& \textbf{Anger} & \textbf{Anticipation} & \textbf{Disgust} & \textbf{Fear} & \textbf{Joy}& \textbf{Sadness}& \textbf{Surprise}& \textbf{Trust} \\
\hline
abuse & \textbf{1} & 0 & \textbf{1}& \textbf{1} & 0 & \textbf{1}& 0 & 0 \\
\hline
shopping & 0 & \textbf{1} & 0 & 0 & \textbf{1} & 0 & \textbf{1} & \textbf{1} \\
\hline
\end{tabular}}
\label{tab:emolex}
\end{table}
Inspired by the approach of \cite{bootstrap}, EmoLex is extended by using WordNet \cite{wordnet}: for every synonym found, new entries are introduced in EmoLex having the same emotion vector as the original words. By applying this technique, the original database increased in size from 14,181 to 31,485 words that are related to an emotion vector. The lexicon can then be used to determine the emotion of the comments to a Facebook post. For each sentence in a comment, the emotion is determined by looking up all words in the emotion database, and the found emotion vectors are added to the sentence emotion vector. By merging and normalizing all emotion vectors, the final emotion distribution for a particular Facebook post, based on its comments, can be computed. However, this naive approach yielded poor results, thus several enhancements were considered and implemented; they are described in subsections \ref{sec:negation}-\ref{sec:svm}.
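A minimal sketch of this lookup-and-normalize step, assuming \texttt{lexicon} maps a word to its length-8 EmoLex vector (all names are illustrative):
\begin{verbatim}
import numpy as np

EMOTIONS = ["anger", "anticipation", "disgust", "fear",
            "joy", "sadness", "surprise", "trust"]

def sentence_emotions(tokens, lexicon):
    """Sum the EmoLex vectors of all known words and normalize."""
    total = np.zeros(len(EMOTIONS))
    for word in tokens:
        if word in lexicon:
            total += lexicon[word]
    return total / total.sum() if total.sum() > 0 else total
\end{verbatim}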
\subsubsection{Negation Handling}
\label{sec:negation}
The first technique that was used to improve the quality of the mined emotions is negation handling. By detecting negations in a sentence, the ability to `turn' the sentiment or emotion is provided. In this paper only basic negation handling is applied, since the majority of the dataset contains only short sentences, and this proved to be sufficient for our goal. The following list of negations and pre- and suffixes is used for detection (based on the work of \cite{neg}):
\begin{table}[h]
\centering
\caption{Negation patterns}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c|p{2.0in}|}
\hline
Negations & no, not, rather, wont, never, none, nobody, nothing, neither, nor, nowhere, cannot, without, n't \\
\hline
Prefixes & a, de, dis, il, im, in, ir, mis, non, un \\
\hline
Suffixes & less \\
\hline
\end{tabular}}
\label{tab:negations}%
\end{table}%
The following two rules are applied:
\begin{enumerate}
\item The first rule is used when a negation word is immediately followed by an emotion-word (which is present in our emotion database); a code sketch of this rule is given after Table \ref{tab:mappingemotions} below.
\item The second rule tries to handle adverbs and past particle verbs (Part-of-Speech (POS) tags: RB, VBN).
If a negation word is followed by one or more of these POS-tags and a following emotion-word, the emotion-word's value will be negated.
For example this rule would apply to `not very happy'.
\end{enumerate}
\noindent There are two ways to obtain the emotions of a negated word:
\begin{enumerate}
\item Look up all combinations of negation pre- and suffixes together with the word in our emotion lexicon.
\item If there is no match in the lexicon a manually created mapping is used between the emotions and their negations.
This mapping is shown in Table \ref{tab:mappingemotions}.
\end{enumerate}
\begin{table}[H]
\caption{Mapping between emotion and negated emotions.}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
& \textbf{Anger} & \textbf{Anticipation} & \textbf{Disgust} & \textbf{Fear} & \textbf{Joy}& \textbf{Sadness}& \textbf{Surprise}& \textbf{Trust} \\
\hline
\textbf{Anger} & 0 & 0 & 0 & 0 & \textbf{1}& 0 & 0 & 0 \\
\hline
\textbf{Anticipation}& 0 & 0 & 0 & 0 & \textbf{1}& 0 & \textbf{1} & 0 \\
\hline
\textbf{Disgust} & 0 & 0 & 0 & 0 & \textbf{1}& 0 & 0 & \textbf{1} \\
\hline
\textbf{Fear} & 0 & 0 & 0 & 0 & \textbf{1}& 0 & 0 & \textbf{1} \\
\hline
\textbf{Joy} & \textbf{1} & 0 & \textbf{1} & \textbf{1 }& 0& \textbf{1} & 0 & 0 \\
\hline
\textbf{Sadness} & 0 & 0 & 0 & \textbf{1} & 0& 0 & 0 & 0 \\
\hline
\textbf{Surprise} & 0 & \textbf{1} & 0 & 0 & 0& 0 & 0 &\textbf{1} \\
\hline
\textbf{Trust} & 0 & 0 &\textbf{1} & 0 & 0& 0 & \textbf{1} & 0 \\
\hline
\end{tabular}}
\label{tab:mappingemotions}
\end{table}
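As an illustration, the first rule can be sketched as follows, where \texttt{negation\_matrix} is the mapping of Table \ref{tab:mappingemotions} applied as a linear map (all names are illustrative):
\begin{verbatim}
import numpy as np

def apply_rule_one(tokens, lexicon, negations, negation_matrix):
    """Rule 1: a negation word immediately followed by an emotion word."""
    vectors = []
    for i, word in enumerate(tokens):
        vec = lexicon.get(word)
        if vec is None:
            continue                      # not an emotion word
        if i > 0 and tokens[i - 1] in negations:
            vec = negation_matrix @ np.asarray(vec)
        vectors.append(np.asarray(vec))
    return vectors
\end{verbatim}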
\subsubsection{Sentence similarity measures}
\cite{bootstrap}'s approach uses word vectors \cite{mikolov2013efficient} in order to calculate similarities between sentences and further annotate sentences. In the context of this paper, a more recent approach was attempted \cite{sentence2vec}, together with an averaging word vector approach for comparison. \cite{sentence2vec} creates a representation for a whole sentence instead of only for one word, as word2vec does. The average word vector approach sums up the word vectors of all words and then takes the mean of this sum. To find the similarity between two sentences, one then uses the cosine similarity. Surprisingly, both approaches return comparable similarity scores. One main problem which occurred here is that two sentences with different emotions but with the same structure are measured as `similar'. This problem is illustrated with an example:
\begin{verbatim}
Sentence 1: "I really love your car."
Sentence 2: "I really hate your car."
Sentence2Vec similarity: 0.9278
Avg vector similarity: 0.9269
\end{verbatim}
\noindent
This high similarity is problematic, since the emotions of the two sentences are completely different. Also, one can see that the two models output almost the same result and that there is no advantage in using the approach of \cite{sentence2vec} over the simple average word vector approach. Hence, the sentence similarity method for annotating more sentences is not suited to this emotion mining task, because one would assign positive emotions to a negative sentence; it was therefore not adopted for further use.
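For reference, the average word vector baseline can be sketched as follows, where \texttt{word\_vectors} stands for any pretrained embedding lookup (e.g. a word2vec model):
\begin{verbatim}
import numpy as np

def avg_vector(tokens, word_vectors):
    """Mean of the word vectors of all in-vocabulary tokens."""
    vecs = [word_vectors[w] for w in tokens if w in word_vectors]
    return np.mean(vecs, axis=0)

def cosine_similarity(a, b):
    return float(np.dot(a, b) /
                 (np.linalg.norm(a) * np.linalg.norm(b)))
\end{verbatim}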
\subsubsection{Classification of not annotated sentences}
\label{sec:svm}
If, after performing these enhancement steps, there remain any non-emotion-annotated sentences, then a Support Vector Machine (SVM) is used to estimate the emotions of these sentences based on the existing annotations. The SVM is trained as a one-versus-all classifier with a linear kernel (8 models are trained, one for each emotion of EmoLex) and the TF-IDF model \cite{salton1988term} is used for providing the input features. The input consists of a single sentence as data (transformed using the TF-IDF model) and an array of 8 values representing the emotions as a label. With a training/test split of 80\%/20\%, the average precision is about 0.93. Full results of the SVM training can be seen in Figure \ref{fig:precisionrecall}, together with the precision-recall curve for all emotions. The result was judged to be satisfactory enough to utilize it for the next step, which is the reaction prediction (a compact sketch of the classifier follows the figure).
\begin{figure}[h!]
\centering
\scalebox{1}{
\includegraphics[width=\columnwidth]{precisionrecall.png}}
\caption{Precision-recall curves using a linear SVM in a one-versus-all classifier}
\label{fig:precisionrecall}
\end{figure}
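A compact sketch of this one-versus-all setup with scikit-learn, using two toy sentences in place of the annotated corpus:
\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

sentences = ["i love this store",
             "terrible service made me angry"]
labels = [[0, 1, 0, 0, 1, 0, 1, 1],   # anticipation, joy, surprise, trust
          [1, 0, 1, 1, 0, 1, 0, 0]]   # anger, disgust, fear, sadness

features = TfidfVectorizer().fit_transform(sentences)
classifier = OneVsRestClassifier(LinearSVC()).fit(features, labels)
print(classifier.predict(features))
\end{verbatim}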
\subsection{Reaction distribution predictor}
In order to predict the distribution of the post reactions, neural networks are built and trained using Tensorflow \cite{abadi2016tensorflow}. Two networks were tested, based on literature research: a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) that uses LSTMs.
Both networks start with a word embedding layer. Since the analysed posts were written in English, the GloVe \cite{pennington2014glove} pretrained embeddings (with 50 as a vector dimension) were used. Moreover, posts are short texts and informal language is expected, thus we opted for using embeddings previously trained on Twitter data instead of the Wikipedia versions.
\subsubsection{CNN}
The CNN model is based on existing successful architectures (see \cite{Kim14f}) but is adapted to give a distribution of reactions as an output. An overview of the used architecture is provided in Figure \ref{fig:cnn1}.
The first issue to be handled with CNNs is that input sentences have variable length, so padding is needed to ensure that all posts have the same length. In our case, we padded all posts to the maximum post length, which also allows efficient batching of the data. In the example of Figure \ref{fig:cnn1} the length of the sentence is 7 and each word $x_i$ is represented by the equivalent word vector (of dimension 50).
The convolutional layer is the core building block of a CNN. Common patterns in the training data are extracted by applying the convolution operation, which in our case is limited to 1 dimension: we adjust the height of the filter, i.e. the number of adjacent rows (words) that are considered together (see also the red arrows in Figure \ref{fig:cnn1}). These patterns are then fed to a pooling layer. The primary role of the pooling layer is to reduce the spatial dimensions of the learned representations (which is why this layer is also said to perform downsampling). This is beneficial, since it controls over-fitting and also allows for faster computations. Finally, the output of the pooling layer is fed to a fully-connected layer (with dropout) which has a softmax as output, with each node corresponding to one predicted reaction (thus we have six nodes initially). However, due to discarding the \textit{like} reaction at a later research stage, the effective number of output nodes was decreased to 5 (see Experiments). The softmax classifier computes a probability distribution over all possible reactions, thus providing a probabilistic and intuitive interpretation. A sketch of this architecture is given after Figure \ref{fig:cnn1}.
\begin{figure}[h!]
\centering
\scalebox{1}{
\includegraphics[width=\columnwidth]{cnnarch.png}}
\caption{Convolutional network architecture example}
\label{fig:cnn1}
\end{figure}
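The following Keras sketch mirrors the description above; the maximum post length is illustrative, while the number of filters, the filter heights, the activation and the softmax output follow our setup:
\begin{verbatim}
import tensorflow as tf

MAX_LEN, EMB_DIM, N_REACTIONS = 100, 50, 5   # MAX_LEN is illustrative

inputs = tf.keras.Input(shape=(MAX_LEN, EMB_DIM))  # padded GloVe vectors
pools = []
for height in (3, 4, 5):                           # filter heights
    conv = tf.keras.layers.Conv1D(40, height,
                                  activation="relu")(inputs)
    pools.append(tf.keras.layers.GlobalMaxPooling1D()(conv))
merged = tf.keras.layers.Concatenate()(pools)
merged = tf.keras.layers.Dropout(0.5)(merged)
outputs = tf.keras.layers.Dense(N_REACTIONS,
                                activation="softmax")(merged)
model = tf.keras.Model(inputs, outputs)
\end{verbatim}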
\subsubsection{RNN}
Long short-term memory (LSTM) networks were proposed by \cite{hochreiter1997long} in order to address the issue of learning long-term dependencies. The LSTM maintains a separate memory cell inside it that updates and exposes its content only when deemed necessary, thus making it possible to retain information as needed. The implementation used here is inspired by \cite{graves2013generating} and an overview is provided in Figure \ref{fig:rnn1}.
An LSTM unit (at each time step $t$) is defined as a collection of vectors: the input gate ($i_t$), the forget gate ($f_t$), the output gate ($o_t$), a memory cell ($c_t$) and a hidden state ($h_t$). Input is provided sequentially in terms of word vectors ($x_t$) and, at each time step $t$, the information from the previous time step is used as input. Intuitively, the forget gate controls the amount by which each unit of the memory cell is replaced by new information, the input gate controls how much each unit is updated, and the output gate controls the exposure of the internal memory state.
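In the standard formulation \cite{hochreiter1997long}, these quantities are computed as
\[
\begin{array}{ll}
i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i), & f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f), \\
o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o), & \tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c), \\
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, & h_t = o_t \odot \tanh(c_t),
\end{array}
\]
where $\sigma$ denotes the logistic sigmoid, $\odot$ the element-wise product, and $W$, $U$ and $b$ are the learned weight matrices and bias vectors of each gate.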
In our case, the RNN model utilizes one recurrent layer (with 50 LSTM cells) and the rest of the parameters are chosen based on commonly used working architectures. The output then comes from a fully connected softmax layer with 6 (or 5, depending on the number of reactions) output classes. Figure \ref{fig:rnn1} illustrates the idea of the recurrent architecture based on an input sequence of words.
\begin{figure}[h!]
\centering
\scalebox{1}{
\includegraphics[width=\columnwidth]{rnnarch.png}}
\caption{Recurrent network architecture example}
\label{fig:rnn1}
\end{figure}
\subsection{Prediction ensemble}
The final reaction ratio prediction is carried out by a combination of the neural networks and the emotions mined from the post/comments. For a given post, both networks provide an estimate of the distribution; these are then averaged and normalised. Next, emotions from the post and the comments are extracted following the process described in Section \ref{sec:emotionmining}. The estimated ratios and emotions are combined into a single vector, which is then fed through a simple linear regression model that re-estimates the predicted reaction ratios. The whole pipeline combining the emotion miner and the neural networks can be seen in Figure \ref{fig:pipeline} (a sketch of the combination step follows the figure) and experimental results are presented in the next Section.
\begin{figure*}[h!]
\centering
\scalebox{0.8}{
\includegraphics[width=\textwidth]{prediction_pipeline.png}}
\caption{Pipeline for final prediction of reaction distributions}
\label{fig:pipeline}
\end{figure*}
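A sketch of this final combination step, using random stand-ins for the component outputs (all names and data are illustrative):
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
cnn_pred = rng.dirichlet(np.ones(5), n)   # stand-in for CNN ratios
rnn_pred = rng.dirichlet(np.ones(5), n)   # stand-in for RNN ratios
emotions = rng.dirichlet(np.ones(8), n)   # stand-in for mined emotions
true_ratios = rng.dirichlet(np.ones(5), n)

averaged = (cnn_pred + rnn_pred) / 2.0
averaged /= averaged.sum(axis=1, keepdims=True)  # renormalize

features = np.hstack([averaged, emotions])
ensemble = LinearRegression().fit(features, true_ratios)
final_pred = ensemble.predict(features)   # re-estimated reaction ratios
\end{verbatim}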
\section{\uppercase{Experiments}}
\label{sec:experiments}
Several experiments were conducted in order to assess different effects on the reaction distribution prediction. Firstly, the effect of pre-processing the posts is examined in subsection \ref{sec:preprocessing}. Since Facebook reactions were introduced relatively recently, many posts in the dataset still contain primarily \textit{like} reactions. This might lead to uninteresting results, as described in the Dataset Section and in Subsection \ref{sec:exLikes}. Finally, Subsection \ref{sec:mse} discusses the training with respect to the mean squared error (MSE) for the CNN and RNN models, as well as the effect of the ensembled approach.
As mentioned before, both networks utilized the GloVe pre-trained embeddings (with size 50). Batch size was set to 16 for the CNN and 100 for the RNN/LSTM.
CNN used 40 filters for the convolution (with varying height sizes from 3 to 5), stride was set to 1 and padding to the maximum post length was used. Rectified Linear Unit (ReLU) \cite{glorot2011deep} activation function was used.
The learning rate was set to 0.001, dropout was applied to both networks, and training minimized the cross-entropy loss between scores and labels with L2-regularization \cite{masnadi2009design}. The mean squared error (MSE) is used to assess successful classifications (every misclassified prediction contributes a squared error of 1), so in the end the MSE is just the misclassification rate of the predictions.
\subsection{Raw vs Pre-processed Input}
\label{sec:preprocessing}
In order to assess the effect of pre-processing on the quality of the trained models, two versions of each neural network were trained. One instance was trained without pre-processing the dataset and the other instance was trained with the pre-processed dataset. Results are cross-validated and the average values are reported here. Figure \ref{fig:preproc} indicates that overall the error was decreasing or staying close to equal (which is applicable to both CNN and RNN). The x-axis represents the minimum number of `non-like' reactions required for a post to be included in the dataset. It should be noted that these models were trained on the basis of having 6 outputs (one for each reaction), thus the result might be affected by the skewed distribution over many `like' reactions. This is the reason that the pre-processed version of the CNN performs very well for posts with 5 minimum reactions and very poorly for posts with 10 minimum reactions. In addition, the variance over the different cross-validation results was high. In the next subsection we explore what happens after the removal of `like' reactions.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{preeffect.png}
\caption{Effect of pre-processing on different models}
\label{fig:preproc}
\end{figure}
\subsection{Exclusion of like reactions}\label{sec:exLikes}
Early results showed that including the original \textit{like} reaction in the models would lead to meaningless results: the heavily imbalanced dataset led to predicting a ratio of nearly 100\% for the \textit{like} reaction. In order to tackle this issue, the \textit{like} reactions are not fed into the models during the training phase (moreover, the \textit{love} reaction can be used for equivalent purposes, since the two express similar emotions). Figure \ref{fig:nolikes} shows an increase of the error when the likes are ignored. This increase is explained by the heavily unbalanced distribution of \textit{like} reactions: although there is an increase in the error, the predictions are now more meaningful than always predicting a like ratio close to 100\%. After all, it is the relative reaction distribution that we are interested in predicting.
\begin{figure}[h]
\includegraphics[width=\columnwidth]{likeseffect.png}
\caption{Effect of inclusion/exclusion of likes on different models}
\label{fig:nolikes}
\end{figure}
\subsection{Ensemble performance}\label{sec:mse}
Table \ref{tab:networks} summarizes the testing error for the CNN and RNN with respect to the same dataset split, also taking the validation error into account. One can see that the RNN performs better than the CNN, although it requires additional training time. Results are cross-validated over 10 different runs and the corresponding deviations are presented in the table as well.
\begin{table}[h]
\centering
\caption{RNN and CNN comparison after cross-validation}
\scalebox{1}{
\begin{tabular}{c|c|c}
\hline
& \multicolumn{1}{l}{MSE} & \multicolumn{1}{l}{\# Epochs} \\
\hline
CNN & 0.186 ($\pm 0.023$) & 81 \\
\hline
RNN & 0.159 ($\pm 0.017$) & 111 \\
\hline
\end{tabular}}
\label{tab:networks}
\end{table}
Combined results for each of the networks and the emotion miner can be seen in Figure \ref{fig:predictresult}. The networks by themselves have the worst results, but an average combination of both achieves a better result. The optimal result is achieved by the emotions + CNN combination, although the difference from the other combinations is not significant. These results can be boosted by optimizing the hyperparameters of the networks and also by varying the number of posts used. In conclusion, one can say that combining mined emotions with the neural network output improves the prediction results.
\begin{figure}[h]
\centering
\includegraphics[width = \columnwidth]{mseall.png}
\caption{Performance results for different combinations of the neural networks and emotions.}
\label{fig:predictresult}
\end{figure}
Finally, we present a simple, yet effective visualization environment which highlights the results of the current paper and can be seen in Figure \ref{fig:vis}. In this figure, one can see the input field for the Facebook post at the top, followed by four result panels: the first shows the reaction distribution, the second shows the proportions of the eight emotions, the third highlights the emotion-bearing words (by hovering, one can see the overall emotion distribution as a vector of eight values) and the fourth shows the highlighting of the sentiments.
\begin{figure*}[!h]
\centering
\includegraphics[width = \textwidth]{visualization.png}
\caption{Example visualisation}
\label{fig:vis}
\end{figure*}
\section{Conclusion}
\label{sec:conclusion}
In this paper, a framework for predicting the reaction distribution of Facebook posts was presented, trained on a customer service dataset of Facebook posts from several supermarkets. This study revealed that a baseline sentiment miner can be used in order to detect the sentiment/emotion of a post. Afterwards, these results can be combined with the output of neural network models to predict the Facebook reactions. While there has been a lot of research around sentiment analysis, emotion mining is still mostly uncharted territory, and this work also contributes in this direction. The used dataset is available to other researchers and can also serve as a baseline for further experiments. In addition, a more accurate evaluation of the emotion miner could be conducted by using the MPQA corpus \cite{mpqa}.
Facebook reaction predictions can clearly enhance customer experience analytics. Most companies are drowning in social media posts, thus a system that identifies the emotion/reaction prediction of a post in almost real time can be used to provide effective and useful feedback to customers and to improve their experience. So far, the reaction of the page owner has not been included in the dataset, but this could be useful information on how the post was addressed (or could be addressed).
Future work includes refining the architectures of the neural networks used. Moreover, one of the next steps is to implement a network that predicts the (absolute) number of reactions (and not just the ratio). This number is of course susceptible to external parameters (e.g. popularity of the post/poster, inclusion of other media like images or external links, etc.), so another direction would be to include this information as well. More specifically, the combination of images and text can reveal possible synergies between the vision and language domains for sentiment/emotion related tasks.
\bibliographystyle{apalike}
\section{Introduction}
Models in which conformal matter of central charge $c$ is coupled to 2d quantum
gravity have attracted considerable interest. Much progress has been
made for the case $c \le 1$, using both continuum and discrete
approaches. The former has yielded
the KPZ formulae~\cite{KPZ,Dav1,DK}, which give as functions of the
central charge the modification of critical exponents due to gravity.
However, these formulae only apply for $c \le 1$; for larger
values of $c$ they predict that the string susceptibility
$\gamma_{str}$ is complex, which does not make much sense.
The discrete methods involve using dynamical
triangulations coupled to matter fields. Matrix model techniques have
proven to be extremely valuable for studying $c\le 1$
models~\cite{BreKaz,GroMig,DouShe}
and perturbative expansions allow one to investigate numerically the
behaviour of $\gamma_{str}$ for $c>1$~\cite{BreHik,HikBre,Hik}.
The model in which a single
Ising spin is attached to the face of each triangle has a central
charge of one half and has been solved
exactly~\cite{Meh,Kaz1,Kaz2,BouKaz1,BurJur}, yielding results for the
critical exponents that agree with those of the KPZ formulae.
Recently some remarkable
analytical progress has been made for the $c \! \to \! \infty$ limit by
considering the low temperature expansion~\cite{Wex1,Wex2,ADJ}.
There is evidence
that in this limit tree-like or branched polymer
graphs dominate the behaviour of the model
and this is supported by our results. Despite
much effort no one has managed to solve analytically any other models for $c
\ge 1$; however such models are well-defined and there is no obvious
pathology at $c=1$.
Many Monte Carlo simulations have been
performed~\cite{BaiJoh1,BaiJoh2,ADJT,BFHM,AmbTho,KowKrz} in an attempt
to investigate the behaviour near the $c=1$ barrier and to discover
whether the breakdown of KPZ theory is due to some change in the
geometry, such as a transition to a branched polymer phase.
So far these simulations have failed to produce any convincing
evidence of such a phase transition at $c=1$.
In this paper we study the properties of a model for which $p$
independent Ising spins are attached to the face of each triangle,
giving a central charge of $c=p/2$. In section 2 we define precisely three
slightly different models and in section 3 we
examine the limit of large $p$, identifying most of the dominant
graphs.
A version of the model in which the free energy is truncated is used
in section 4 to study the transition
to behaviour typical of large $p$,
in the limit of small $\beta$. In section~\ref{sec:mag} we
study the properties of the magnetization transition,
deriving a bound
on the critical value of the coupling constant ($\beta_c \ge 0.549$)
and looking at the nature of the transition in the limit $p \to 0$.
In section~\ref{sec:conc} we conclude by discussing possible forms of
the phase diagram and by relating our work to the results from various
computer simulations and to analytical work carried out by other authors.
\section{Definition of the model}
The model is one of $p$ independent Ising
spins on each vertex of a random
$\phi^3$ graph (which is the dual graph of a triangulated random
surface). For a fixed $N$-vertex $\phi^3$ graph, $G$,
with a single spin on each vertex,
the Ising partition function is
$$ Z_G = \frac{1}{Z_0}
\sum_{\{S\}} \exp \left( \beta \sum_{<i j>} S_i S_j \right) \ , \eqno ( \number\mycount ) \global\advance\mycount by 1$$
with
$$ Z_0 = 2^N \left( \cosh \beta \right)^\frac{3N}{2}, \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
where $S_i$ is the spin on the $i$-th vertex of the graph, the sum over
$<i j>$ is a sum over nearest neighbours (by which we mean vertices
that are connected by a line in the graph) and
$\beta$ is the coupling constant. The factor of ${Z_0}^{-1}$ is
introduced in order to simplify the formulae later on.
We then sum over a set of $\phi^3$
graphs, with $N$ vertices and a fixed genus, $g$, to get the partition
function,
\eqsum=\mycount
$$ Z_N(p)= \sum_G \frac{1}{{s_{\scriptscriptstyle G}}} (Z_G)^p. \eqno ( \number\mycount ) \global\advance\mycount by 1$$
Each graph is weighted by the symmetry factor ${{s_{\scriptscriptstyle G}}}^{-1}$,
which is the reciprocal of the order of the symmetry
group for that graph.
The symmetry factors are inserted because they occur in the
matrix model solution of the $p=0$ and $p=1$ cases. However, we could equally
well take ${s_{\scriptscriptstyle G}}=1$ in the above definition, giving us a
slightly different model, which is nonetheless expected to have
identical properties in the thermodynamic limit $N \to \infty$.
Consequently we will often ignore the symmetry factors, especially in the large $N$ limit, when we expect the graphs not to be very symmetric and hence to have ${s_{\scriptscriptstyle G}} \approx 1$ anyway.
In this paper we work with planar diagrams (ie $g=0$) and
consider three different versions of this model, which differ with
respect to the sets of graphs that are used. In model~{I}, $G$ runs over all
the planar connected $\phi^3$ graphs with $N$ vertices.
Model~{II}~is the same as model~{I}, except that tadpoles are
excluded so that the graphs are one-particle irreducible.
For model~{III}, tadpoles and self-energy terms are excluded
giving two-particle irreducible graphs.
In the thermodynamic limit the partition function
has the asymptotic form,
\eqasy=\mycount
$$ Z_N(p) = e^{\mu(p,\beta) N} N^{\gamma_{str} - 3}
\left( a_0 + \frac{a_1}{N} +
\cdots \right) . \eqno ( \number\mycount ) \global\advance\mycount by 1$$
The free energy $\mu(p,\beta)$ is defined
as
$$ \mu(p,\beta) = \lim_{N \rightarrow \infty} \frac{1}{N} \log(Z_N(p)) \eqno ( \number\mycount ) \global\advance\mycount by 1$$
and discontinuities in its derivatives indicate the presence of a
phase transition.
Similarly, $\mu_G$ for a given graph, $G$, is defined by,
$$ \mu_G(\beta) = \lim_{N \to \infty} \frac{1}{N} \log(Z_G). \eqno ( \number\mycount ) \global\advance\mycount by 1$$
The string exponent $\gamma_{str}$ is believed to be universal and to depend
only upon certain general characteristics of the model.
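As an illustrative aside, not part of the original derivation, $Z_G$ can be
checked by brute force for very small graphs. The following Python sketch,
with the graph and the value of $\beta$ chosen purely for illustration,
enumerates all spin configurations of the tetrahedron, the smallest $\phi^3$
graph without tadpoles or self-energies, and compares the result with the
elementary loop expansion $Z_G=1+4t^3+3t^4$, $t=\tanh\beta$, obtained by
expanding $\prod_{<ij>}(1+tS_iS_j)$:
\begin{verbatim}
import itertools, math

def z_g(edges, n, beta):
    # Z_G = Z_raw / (2^N cosh(beta)^(3N/2)) for a 3-regular (phi^3)
    # graph given as an edge list; tadpoles may be encoded as (i, i).
    z_raw = sum(math.exp(beta * sum(s[i] * s[j] for i, j in edges))
                for s in itertools.product((-1, 1), repeat=n))
    return z_raw / (2**n * math.cosh(beta)**(3 * n / 2))

tet = [(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)]   # tetrahedron, N = 4
beta = 0.3
t = math.tanh(beta)
print(z_g(tet, 4, beta), 1 + 4*t**3 + 3*t**4)  # the two values agree
\end{verbatim}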
\subsection{Matrix model results ($p=0$ and $p=1$)}
The case $p=0$, where there are no Ising spins,
just corresponds to enumerating $\phi^3$ graphs.
The number of graphs ${{\cal G}^{(1)}(N)}$ in model~{I}~can be calculated, for
example by matrix model methods~\cite{BIPZ} (see
also~\cite{Tutte,Koplik}) with the result,
\eqb=\mycount
$$ {{\cal G}^{(1)}(N)} = \frac{8^\frac{N}{2} \Gamma(\frac{3N}{4})}{2
(\frac{N}{2}+2)! \ \Gamma(\frac{N}{4} +1)}
\similar
e^{\frac{1}{2} \log(12 \sqrt{3}) N} N^{-\frac{7}{2}} . \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
For models~{II}~and~{III}, the number of $N$-vertex graphs are denoted
by ${{\cal G}^{(2)}(N)}$ and ${\cal G}^{(3)} (N)$ respectively and can also be calculated giving,
$$ {{\cal G}^{(2)}(N)} = \frac{2^\frac{N}{2} \left( \frac{3N}{2} -1 \right)!}{
\left(\frac{N}{2}\right)! \left(N+2\right)!} \similar
e^{\frac{1}{2} \log \left( \frac{27}{2} \right) N} N^{- \frac{7}{2}}, \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
\eqa=\mycount
$$ {\cal G}^{(3)} (N) = \frac{(2N-3)!}{\left(\frac{N}{2}\right)!
\left(\frac{3N}{2}\right)!} \similar e^{\frac{1}{2} \log(\frac{256}{27}) N}
N^{-\frac{7}{2}}. \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
It should be noted that all these models yield the asymptotic
form given in (\number\eqasy) with $\gamma_{str}= -\frac{1}{2}$.
The $p=1$ case has been solved analytically~\cite{Meh,Kaz1}
and has a third order
phase transition from a disordered to a magnetized state. For
model~{I}, the critical value of the coupling constant is given
by~\cite{BouKaz1},
$$ \beta_c = - \frac{1}{2} \log \left( \frac{1}{27} \left( 2 \sqrt{7} -1 \right)
\right) \approx 0.9196, \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
and for model~{III}~\cite{BurJur},
$$ \beta_c = \frac{1}{2} \log \left( \frac{108}{23} \right) \approx 0.7733 .\eqno ( \number\mycount ) \global\advance\mycount by 1 $$
In both cases, the critical exponents are
$\alpha = -1$ , $\beta = \frac{1}{2}$, $\gamma = 2$,
$\delta =5$, $\nu = \frac{3}{d_H}$ and $\eta= 2 - \frac{2d_H}{3}$,
where $d_H$ is some unknown dimension depending on the geometry of the graphs.
Also, $\gamma_{str}=-\frac{1}{2}$ everywhere, except at the critical point,
where $\gamma^{*}_{str}=-\frac{1}{3}$.
These results should be compared with those for a fixed regular
lattice, such as the hexagonal lattice, for which,
$$ \beta_c = \tanh^{-1}\left( \frac{1}{\sqrt{3}} \right) \approx 0.658
\eqno ( \number\mycount ) \global\advance\mycount by 1 $$
and there is a second order magnetization transition with critical
exponents: $\alpha=0$, $\beta=\frac{1}{8}$, $\gamma=\frac{7}{4}$,
$\delta=15$, $\nu=1$, $\eta=\frac{1}{4}$. Introducing the sum over
triangulations changes the universality class, but the precise nature
of the sum is not important.
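These numerical values are straightforward to reproduce; a trivial check,
added here for convenience:
\begin{verbatim}
import math
print(-0.5 * math.log((2 * math.sqrt(7) - 1) / 27))  # model I:   0.9196...
print( 0.5 * math.log(108 / 23))                     # model III: 0.7733...
print(math.atanh(1 / math.sqrt(3)))                  # hexagonal: 0.6585...
\end{verbatim}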
\section{Dominant graphs in the limit $p \to \infty$}
\subsection{Concavity of $\mu(p,\beta)$}
In this section we will ignore the symmetry factors, however it is
easy to show that all the results follow if we include them.
Using the Cauchy-Schwarz inequality on (\number\eqsum) it follows that,
$$ \left( Z_N \left( \frac{p+q}{2} \right) \right)^2 \leq Z_N(p)
Z_N(q) \eqno ( \number\mycount ) \global\advance\mycount by 1$$
and hence that $\mu(p,\beta)$ is concave with respect to $p$,
$$\mu\left( \frac{p+q}{2}, \beta \right) \leq \frac{1}{2}
\left[ \mu\left(p,\beta\right) + \mu\left(q,\beta\right) \right]. \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
(A slight modification of the standard argument shows that $\mu$ is
also concave with respect to $\beta$).
It is trivial that for $p \geq 1$,
$$ Z_N(p) = \sum_G (Z_G)^p \leq \left( \sum_G Z_G \right)^p , \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
so that,
$$ \mu(p,\beta) \leq p \mu(1,\beta), \qquad p \ge 1.\eqno ( \number\mycount ) \global\advance\mycount by 1 $$
We also have that,
$$
{\partial\mu(p,\beta) \over \partial p} = \lim_{N \rightarrow \infty} \frac{1}{N} \frac{1}{Z_N}
{\partial \over \partial p} \left( \sum_G (Z_G)^p \right) \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
$$
\qquad = \lim_{N \rightarrow \infty} \frac{1}{Z_N} \sum_G (Z_G)^p \mu_G > 0
$$
and hence that $\mu(p,\beta)$ is a concave monotonic increasing function
with respect to $p$.
However, $\mu$ is bounded by a linear function
and hence it must be asymptotically linear in $p$ as $p \to \infty$.
Since the number of graphs with given $Z_G$ does not depend upon $p$
it must be the case that, as $p \to \infty$, $Z_N(p)$ is dominated by those
graphs $G_0$ with largest $Z_G$ so that,
$$ \mu(p \rightarrow \infty) \sim p \mu_{G_0} \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
and it is therefore of some interest to identify these maximal graphs.
This identification alone is not sufficient to solve the models at
large $p$ because the number of such graphs and fluctuations around
them must be included to find the sub-asymptotic behaviour and, in
particular, the value of $\gamma_{str}$.
In the following section we identify most of the maximal graphs for
models~{I}, {II}~and~{III}. For a given value of $\beta$ we might
suppose that there is a value $p_c(\beta)$ of $p$ above which the
maximal graphs dominate. (Of course it may be that the transition to
dominance by maximal graphs is seamless and that there is no critical
value or that $p_c(\beta)=\infty$ for all $\beta$.)
In section~\ref{sec:trunc} we begin to address the question of
the behaviour of $p_c(\beta)$.
\subsection{Model~{I}}
\begin{figure}[b]
\caption[l]{Tree-like graph}
\label{fig:tree}
\begin{picture}(100,35)(-35,0)
\epsfbox{tree.eps}
\end{picture}
\end{figure}
In this section we prove that the maximal graphs for model~{I}~are
tree graphs with tadpoles at the ends of the branches (see
fig~\ref{fig:tree} for an example).
Starting with an arbitrary $N$ vertex $\phi^3$ graph, select a point D and
reconnect the links to it as shown in fig~\ref{fig:treeblob}. The
resulting graph still has $N$ vertices, and provided that the link
DC lay on a closed loop in the original graph, the new graph will still be
connected. Repeat this procedure taking care not to disconnect the graph.
Eventually there are no further links that can be cut and the
graph is tree-like (ie there are no closed loops except for the
tadpoles at the ends of the branches).
\begin{figure}[htb]
\caption{Partition functions}
\label{fig:treeblob}
\begin{picture}(100,45)(-10,0)
\epsfbox{treeblob3.eps}
\end{picture}
\end{figure}
Before the link DC is cut the partition function is
\setcounter{equation}{\mycount}
\addtocounter{equation}{-1}
\global\advance\mycount by 2
\vbox{
\begin{eqnarray}
Z_1 &=& \frac{1}{2 C^3}
\sum_{S_a S_b S_c S_d} Z(S_a,S_b,S_c) \, \exp\beta(S_a S_d+S_b
S_d+S_c S_d)\\
&=& \sum_{S_a S_b S_c} Z(S_a,S_b,S_c) \, (1 + t^2 (S_a S_b+S_b
S_c+S_c S_a)),
\end{eqnarray}}
\noindent where $Z(S_a,S_b,S_c)$~$(\ge 0)$ represents the partition
function for the remainder of the graph with boundary spins
$S_a$, $S_b$, $S_c$ and $C= \cosh \beta$, $t=\tanh \beta$. After
reconnecting the links, the new partition function is
$$ Z_2 =\sum_{S_a S_b S_c} Z(S_a,S_b,S_c) \, (1 + t)(1+t S_a S_b)
\eqno ( \number\mycount ) \global\advance\mycount by 1$$
so that,
$$ Z_2 - Z_1 = \sum_{S_a S_b S_c} Z(S_a,S_b,S_c) \, t (1+S_a S_b)
(1-t S_b S_c) \ge 0 .\eqno ( \number\mycount ) \global\advance\mycount by 1 $$
Thus at each step the partition function increases
(equality occurs for $\beta =0$ and $\beta = \infty$). The partition
function takes the same value,
$$ Z_{tree} =
(1+\tanh \beta)^{\frac{N}{2}+1} \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
for all tree graphs with $N$ vertices, so we have proved that the set
of tree graphs is maximal for all finite non-zero $\beta$. This set of
graphs does not magnetize at finite $\beta$. We will later need
the number of tree graphs with $N$ vertices, which is given by~\cite{BKKM}
$${\cal G}_{tree}(N) = \frac{(N-2)!}{(\frac{1}{2} N +1)! (\frac{1}{2} N -1)!}
\similar
e^{N \log 2} N^{-\frac{5}{2}} . \eqno ( \number\mycount ) \global\advance\mycount by 1$$
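This value of $Z_{tree}$ is easy to confirm by brute force, reusing the
function \texttt{z\_g} from the earlier sketch; as an illustrative check,
added by us, the $N=4$ star tree, a central vertex joined to three
tadpole-terminated vertices, indeed gives $(1+t)^3$:
\begin{verbatim}
star4 = [(0,1),(0,2),(0,3),(1,1),(2,2),(3,3)]  # three branches, tadpole ends
beta = 0.3
t = math.tanh(beta)
print(z_g(star4, 4, beta), (1 + t)**(4/2 + 1))  # both equal (1+t)^3
\end{verbatim}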
In considering the contributions of different graphs to $Z_N(p)$ at
finite $p$ (which we will do in sections~\ref{sec:mag}
and~\ref{sec:conc}) it is useful
to examine the ratio,
$$\frac{Z_2}{Z_1} = \left(1 - \frac{t}{(1+t)^2} \left< 1 + S_a S_b - t
(S_b S_c +S_a S_c) \right>_{G_2} \right)^{-1}, \eqno ( \number\mycount ) \global\advance\mycount by 1$$
where
$$ \left< Q \right>_{G_2} \equiv \frac{1}{Z_2} \sum_{S_a S_b S_c}
Z(S_a,S_b,S_c) \, (1+t) (1+t S_a S_b) Q .\eqno ( \number\mycount ) \global\advance\mycount by 1$$
For small $t$, $\left< S_a S_b \right>=1$ or is of order $t$ (depending
on whether or not A and B are distinct points), while $\left<S_b S_c
\right>$, $\left< S_a S_c \right> \sim O(t^m)$ with $m \ge 1$; thus
$Z_2/Z_1$ increases with $t$ at small $t$ so that tree-like graphs are
becoming more important. Assuming that $p_c(\beta)$ is finite,
it is decreasing with $\beta$ in this region. On the other hand,
for large enough $t$, $\left< S_a S_b \right>$, $\left< S_b
S_c \right>$,
$\left< S_a S_c \right> \approx 1$ and $Z_2/Z_1$ decreases towards one
as $t \to 1$, so that tree-like graphs become less important again.
The position of any minimum of the curve $p_c(\beta)$ must lie between
these two regimes.
\subsection{Model~{II}}
\label{sect:ring}
For model~{II}, the maximal graphs are ring graphs
(fig~\ref{fig:ring}) and again this is true for any value of $\beta$.
The proof, which is very
similar to that for model~{I}, consists of two parts. Firstly, we
show that any graph in this model can be converted into a ring graph
by a series of steps, where none of the intermediate graphs
contain tadpoles. Secondly, we show that the partition function
increases at each step and thus that ring graphs are maximal for all
$\beta$.
\begin{figure}[htbp]
\caption{Ring graph}
\label{fig:ring}
\begin{picture}(100,40)(-40,0)
\epsfbox{ring.eps}
\end{picture}
\end{figure}
For $\phi^3$ graphs of genus zero the number of faces, $F$, is related to
the number of edges, $E$, through $F=2+\frac{1}{3} E$.
Defining $f_i$ to be the number of faces with
$i$ sides, then $F=\sum f_i$ and $E=\frac{1}{2} \sum i f_i$ so that,
$$ \sum_i f_i (1-\frac{i}{6})=2. \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
Since $f_i \ge 0$, in order for the equation to be satisfied we must
have $f_i \ne 0$ for some $i < 6$. Thus, any $\phi^3$ graph in this model
must contain a 2-loop (ie a loop of length two), a triangle,
a square or a pentagon.
Starting with a graph $G$, containing no tadpoles, replace any
subgraphs such as that in figure~\ref{fig:dress}a (which we will call
dressed propagators) with bare propagators (fig~\ref{fig:dress}b),
yielding a new $\phi^3$ graph, $G'$, to which the above
theorem applies. Putting the dressed propagators back we recover $G$ and have
shown that it contains at least one of the following: a dressed
2-loop, or a dressed or bare triangle, square or pentagon. The only
exception is the ring graph, which upon making the replacement shown
in fig~\ref{fig:dress} just gives a circle for graph $G'$. Thus, any
graph, except for the ring, contains one of the subgraphs in the above
list.
Dressed propagators containing $n$ 2-loops will be drawn as in
fig~\ref{fig:dress}c. We are going to replace the subgraphs
fig~\ref{fig:loop}a to fig~\ref{fig:pent}a with subgraphs
fig~\ref{fig:loop}b to fig~\ref{fig:pent}b respectively. The number of
vertices is unchanged and the number of 2-loops is increased by these
replacements. In appendix~\ref{app:ring} we show that the partition
function increases,
for any choice of dressed propagators in the original subgraph (ie for
all choices of $j,k,l,m,n \ge 0$; for the 2-loop case,
fig~\ref{fig:loop}a, $n,m$ are not both zero).
Thus by choosing the orientation of the
replacement, we can create a new graph, which is connected,
has no tadpoles, has the same number of vertices as the original and
has a larger partition function. By repeatedly
eliminating the subgraphs in the above list
we will eventually end up with a ring
graph,
proving that the ring graphs are maximal for all $\beta$. The
partition function for the ring graph is given by,
$$ Z_{ring} = \left( 1+t^2
\right)^\frac{N}{2} + \left( 2 t^2 \right)^\frac{N}{2} \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
and this graph does not magnetize for any finite $\beta$.
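As with the tree graphs, a brute-force check, added here for illustration,
confirms the ring formula in the smallest case $N=4$, taking the ring to be
two 2-loops joined into a cycle by two single links and reusing
\texttt{z\_g} from the earlier sketch:
\begin{verbatim}
ring4 = [(0,1),(0,1),(2,3),(2,3),(0,2),(1,3)]  # two 2-loops + two links
beta = 0.3
t = math.tanh(beta)
print(z_g(ring4, 4, beta), (1 + t**2)**2 + (2 * t**2)**2)  # values agree
\end{verbatim}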
\begin{figure}[bhp]
\caption{(a) Dressed (b) Bare (c) Renormalized propagators}
\label{fig:dress}
\begin{picture}(170,45)(-5,0)
\epsfbox{dress3.eps}
\end{picture}
\end{figure}
\begin{figure}[pt]
\caption{(a) 2-loop (b) Replacement subgraph or equivalently (c)}
\label{fig:loop}
\begin{picture}(170,35)(5,0)
\epsfbox{loop3.eps}
\end{picture}
\end{figure}
\begin{figure}[ph]
\caption{(a) Triangle (b) Replacement subgraph}
\label{fig:tri}
\begin{picture}(170,55)(5,0)
\epsfbox{tri4.eps}
\end{picture}
\end{figure}
\begin{figure}[pht]
\caption{(a) Square (b) Replacement subgraph}
\label{fig:squ}
\begin{picture}(170,50)(5,0)
\epsfbox{squ4.eps}
\end{picture}
\end{figure}
\begin{figure}[pht]
\caption{(a) Pentagon (b) Replacement subgraph}
\label{fig:pent}
\begin{picture}(170,65)(5,0)
\epsfbox{pent4.eps}
\end{picture}
\end{figure}
\break
\subsection{Model~{III}}
In this case we have not found a procedure which converts any given
graph into a maximal graph; essentially this is because the form of
the maximal graph now depends on $\beta$.
We have identified the maximal graphs for the limits
of small and large $\beta$, but not for intermediate values of $\beta$.
A high temperature expansion $(\beta
\rightarrow 0)$ of the partition
function for a given graph gives,
\hight=\mycount
$$ Z_G = 1 + \sum_l n_l t^l , \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
where $n_l$ is the number of closed, but possibly disconnected,
non-self-intersecting loops in the graph that contain $l$ links.
For model~{III}, $n_1=n_2=0$
because we have eliminated tadpoles and self-energies and the first
non-zero coefficient is $n_3$, the number of triangular loops.
The maximum possible value of $n_3$ is the integer part of
$\frac{N}{3}$ which we will denote by
$\left[\frac{N}{3}\right]$. Suppose that $N$ is divisible by
three, then taking any graph with $\frac{1}{3} N$ points and replacing
each point with a triangle (fig~\ref{fig:fractet}a),
gives an $N$ vertex graph with $n_3=
\frac{N}{3}$. If $N$ is not divisible by three, replace all except one or
two of the points giving $n_3=\left[\frac{N}{3}\right]$. To
show that this really is the maximum possible number,
first note that the only case
which has two triangles back to back is the tetrahedron
(fig~\ref{fig:fractet}b).
Then, ignoring this case, given an $N$ vertex graph we can collapse all
of its triangular loops to points and get a
graph with $N'$ vertices and no tadpoles or self-energies.
Clearly $N' \geq n_3$ (since each collapsed triangle yields a vertex)
and $N' = N - 2 n_3$ (as collapsing a triangle removes two vertices).
Thus $N-2n_3 \geq n_3$
and so $n_3 \leq \frac{N}{3}$. Hence the maximum value is
$n_3 = \left[\frac{N}{3}\right]$ for $N>4$.
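The point-to-triangle replacement is simple to implement. The following
sketch, added for illustration, applies it to the tetrahedron and counts
triangles in the resulting twelve-vertex graph, confirming $n_3=N/3$; the
corner bookkeeping is one arbitrary choice among equivalent ones:
\begin{verbatim}
import itertools

def triangulate(edges, n):
    # Replace vertex v by corners 3v, 3v+1, 3v+2; the k-th edge
    # incident at v is reattached to corner 3v+k.
    new_edges, slot = [], [0] * n
    for u, v in edges:
        new_edges.append((3*u + slot[u], 3*v + slot[v]))
        slot[u] += 1; slot[v] += 1
    for v in range(n):
        new_edges += [(3*v, 3*v+1), (3*v+1, 3*v+2), (3*v, 3*v+2)]
    return new_edges, 3 * n

def count_triangles(edges, n):
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    return sum(1 for a, b, c in itertools.combinations(range(n), 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

g, m = triangulate([(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)], 4)
print(m, count_triangles(g, m))   # 12 vertices, n_3 = 4 = N/3
\end{verbatim}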
\begin{figure}[htb]
\caption{(a) Replacement (b) Tetrahedron}
\label{fig:fractet}
\begin{picture}(100,30)(0,0)
\epsfbox{frac1b.eps}
\end{picture}
\end{figure}
Consider graphs, $G$, for which the total
number of points is $N=2 \times 3^n$, where $n$ is some sufficiently
large integer. In order to maximize $Z_G$ in the $\beta \to 0$ limit,
we maximize each coefficient in turn. Having maximized $n_3$ as
described above by replacing each point of $G'$ (with $\frac{1}{3} N$
points) with a triangle we now choose $G'$ in order to
maximize the next coefficients. The replacement of points with
triangles doubles the
number of edges bordering each face of $G'$, which being a
model~{III}~graph has a smallest loop length of three. Any such loops
will be doubled to length six; thus $n_4=n_5=0$. Now, $n_6 = n_6^c +
\frac{1}{2} n_3(n_3-1)$ (where $n_6^c$ is the number of connected loops of
length six), so that we need to maximize $n_6^c$ next.
To do this we must maximize $n_3$ of $G'$ (since loops of length three
in $G'$ become those of length six in $G$). So we take a graph with
$\frac{1}{9} N$ points and make the replacement of points with
triangles to get $G'$. Carrying
on in this fashion we end up with a fractal graph (see fig~\ref{fig:fractal}).
If $\frac{1}{2} N$ is not a power of three, then the graph will not quite be
regular, but will still be essentially fractal-like.
\begin{figure}[htb]
\caption{Fractal graph}
\label{fig:fractal}
\begin{picture}(100,50)(-35,0)
\epsfbox{fract2.eps}
\end{picture}
\end{figure}
At large $\beta$, graphs which magnetize can be studied in the low
temperature expansion for which,
$$ Z_G = \frac{1}{Z_0}
2 e^{\frac{3N}{2} \beta} \left( 1 + \sum_{r=3}^{\infty} m_r
x^r \right) \sim \frac{1}{Z_0}
\exp N \left(\frac{3}{2}\beta + \sum_{s=3}^{\infty}
a_s x^s \right) , \eqno ( \number\mycount ) \global\advance\mycount by 1$$
where $x=e^{-2 \beta}$ and $m_r$ is the number of
domain boundaries that cross $r$ links.
For graphs with no tadpoles or self-energies any
domain boundary must cross at least three links so that the sum starts
at $r=3$ and the sum for the free energy also starts at $s=3$.
However, this does not apply to graphs that do not magnetize; the
ladder graph (fig~\ref{fig:ladder}) has $m_3 \propto N$, but $m_4 =
\frac{1}{2} \frac{N}{2} \left( \frac{N}{2} -1 \right) $ so we might expect
it to exponentiate to give a series starting at $s=2$ with $a_2=
\frac{1}{2}$. It is straightforward to check that this is the case from the
ladder free energy, which is given by,
$$ \mu_{ladder} = \frac{1}{2} \log \left( \frac{1}{2} \left(1 + t^2 +
\sqrt{(1- t^2)^2 + 4 t^4}\right) \right) . \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
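The closed form for $\mu_{ladder}$ can be verified with a $4\times4$
transfer matrix. The sketch below is our illustrative addition; it assumes
a circular ladder of $L=N/2$ rungs, so that $Z_G=\mathrm{Tr}\,T^L/4^L$ and
$\mu=\frac{1}{2}\log(\lambda_{max}/4)$, boundary conditions being
irrelevant in the limit $N\to\infty$:
\begin{verbatim}
import numpy as np

def mu_ladder_formula(t):
    return 0.5 * np.log(0.5 * (1 + t**2 + np.sqrt((1 - t**2)**2 + 4*t**4)))

def mu_ladder_transfer(t):
    s = [(1, 1), (1, -1), (-1, 1), (-1, -1)]   # states (S_a, S_b) on a rung
    T = np.array([[(1 + t*a*b) * (1 + t*a*a2) * (1 + t*b*b2)
                   for (a2, b2) in s] for (a, b) in s])
    lam = max(np.linalg.eigvals(T).real)
    return 0.5 * np.log(lam / 4)   # mu = (1/N) log Z_G, N = 2L vertices

for t in (0.2, 0.5, 0.8):
    print(mu_ladder_formula(t), mu_ladder_transfer(t))   # pairs agree
\end{verbatim}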
Because these graphs are one-particle irreducible it is not possible
to get $m_2 \propto N^2$ and $a_1$ is always zero. We conclude that at
large $\beta$ the ladder graphs, whose free energy starts at $O(x^2)$,
will dominate magnetizable graphs, whose free energy starts at
$O(x^3)$. Furthermore it is easy to check that $\mu_{ladder}$ is
bigger than that of the fractal graph for large $\beta$.
What happens at intermediate $\beta$ is not clear but is probably
quite complicated, involving graphs which in some sense interpolate
between the fractal and the ladder graphs; whether the transition from
fractal to ladder is continuous or discontinuous we cannot say, but
the former seems more likely on grounds of universality with
models~{I}~and~{II}, though perhaps this system is not universal.
Neither the fractal nor the ladder magnetizes, so probably the
intermediate graphs do not magnetize either and as $p \to \infty$ the
model only magnetizes at $\beta \to \infty$.
\begin{figure}[htb]
\caption{Ladder graph}
\label{fig:ladder}
\begin{picture}(100,35)(-25,0)
\epsfbox{ladd.eps}
\end{picture}
\end{figure}
It is interesting to note that for a Gaussian model on a random
triangulation embedded in $D$
dimensions~\cite{BKKM,KKM,ADFO,David,ADF3} the maximal graphs (for $D>0$)
are those with the minimal number of spanning trees and that the
corresponding $\phi^3$ graphs are tree graphs, ring graphs and ladders for
models~{{I}} to~{{III}} respectively.
Thus the maximal Ising graphs at
large $\beta$ are the same as the maximal graphs for the Gaussian
model, but they apparently differ at small $\beta$ in the case of model~{III}.
Reference~\cite{BKKM} lists the maximal graphs, but gives a more symmetric
version of the ladder
graph; presumably that is because
fig~\ref{fig:ladder} is excluded from the relevant model even though it has no
tadpoles or self-energies.
\section{Truncated models}
\label{sec:trunc}
\subsection{Model~{I}}
To get some indication of how large $p$ must be before the maximal
graphs become dominant we now consider a truncated model in the weak
coupling regime. For small $\beta$,
$$\mu_G =
\frac{1}{N} \left[ n_1 t + \left( n_2 - \frac{1}{2} {n_1}^2\right) t^2 +
\left(n_3 - n_1 n_2 +
\frac{1}{3} {n_1}^3 \right) t^3 + \cdots \right] , \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
so we consider a model for which $\mu_G$ is truncated to
$$\mu_G^T =\frac{n_1}{N} t . \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
The grand canonical partition function for this model is then,
$$ {\cal Z}(\mu,pt) = \sum_{N=2 \atop even}^{\infty} e^{- \mu N}
\sum_{n_1 =0}^{\infty} {\cal G}^{(1)}(N,n_1) e^{n_1 p t}, \eqno ( \number\mycount ) \global\advance\mycount by 1$$
where ${\cal G}^{(1)}(N,n_1)$ is the number of graphs with $N$ vertices and
$n_1$ tadpoles (loops of length one) \break ($0 \le n_1 \le \frac{1}{2} N +1$). We
show in appendix~\ref{app:tad} that ${\cal G}^{(1)}(N,n_1)$ satisfies the
recurrence relation,
\rrb=\mycount
$$ {\cal G}^{(1)} (N+2,n_1+1) = \left( \frac{3N - 2 n_1}{n_1+1}
\right) {\cal G}^{(1)} (N,n_1)+ 2 {\cal G}^{(1)} (N,n_1+1) \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
and the total number of graphs is known so
$$\sum_{n_1=0}^{\frac{1}{2} N +1} {\cal G}^{(1)} (N,n_1) = {{\cal G}^{(1)}(N)}, \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
where ${{\cal G}^{(1)}(N)}$ is given in (\number\eqb). Using the recurrence relation
(\number\rrb) and putting $y=e^{pt}$, $x=e^{-\mu}$ we find that $\cal Z$
satisfies the differential equation,
$$ \frac{\partial {\cal Z}}{\partial y} (1 + 2x^2 (y -1)) = 3 x^3 \frac{\partial {\cal Z}}{\partial x} + x^2 y \ , \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
which has the solution,
$$ {\cal Z} = \frac{1}{12 x^2} \left( h^{\frac{3}{2}} -
1 \right) + \frac{1}{2} (y-1) - \frac{1}{4} \log h +
\sum_{ N = 2 \atop even}^{\infty} {{\cal G}^{(1)}(N)} x^N h^{- \frac{3N}{4}} , \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
where $h = (1- 4x^2 (y-1))$. The asymptotic behaviour at large $N$ is
thus given by
\eqlargn=\mycount
$${\cal Z} \sim \sum_{N} {{\cal G}^{(1)}(N)} x^N h^{-\frac{3N}{4}} \sim
\sum_{N} N^{-\frac{7}{2}} \exp -N
\left( \mu + \frac{3}{4} \log h - \frac{1}{2} \log(12 \sqrt{3}) \right), \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
so the thermodynamic free energy $\mu_c(pt)$ obeys the cubic equation,
$$ \mu_c + \frac{3}{4} \log \left(1-4 e^{-2\mu_c} (y-1) \right)
- \frac{1}{2} \log (12 \sqrt{3}) =0. \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
The solution is
\vbox{
$$ \mu_c(pt) = \log 2 + \frac{1}{2} \log (y-1) - \frac{1}{2} \log \Biggl[ 1 -
\frac{9}{(y-1)^2} - 3 \left( \frac{1}{2 (y-1)^2} \right)^\frac{1}{3} \times $$
$$ \sum_{\sigma = \pm 1} \omega^{\sigma}
\left( 1- \frac{18}{(y-1)^2} +
\frac{54}{(y-1)^4} + \sigma \sqrt{1 - \frac{4}{(y-1)^2}}
\right)^\frac{1}{3} \Biggr] , \eqno ( \number\mycount ) \global\advance\mycount by 1 $$}
\noindent with $\omega = e^{\frac{2 \pi i}{3}}$ for $y<3$
and $\omega =1$ for $y>3$ (the solution and its derivatives are
actually continuous across $y=3$). The solution is plotted in
fig~{{12}}a where it is compared to ${\mu_c}' = \frac{1}{2}
pt + \log 2$, which is what we would expect if trees were totally dominant. We
see that $\mu_c(pt)$ approaches ${\mu_c}'$ quite rapidly as
$pt$ increases, but it is necessary to have $p \to \infty$ for the
trees to dominate completely. It is also interesting to examine the
ratio $r \equiv { \langle n_1 \rangle}/{N}$, which is shown in
fig~{{12}}b; for $pt >6$ the ratio is very close to one half and
the fact that the gradient changes very quickly near this point leads
us to suspect that the full model might develop a discontinuity in one
of its derivatives at some finite value of $p$ (ie $p_c(\beta)$).
A formula for $r$ can be derived from that for $\cal Z$;
putting $b = 2 e^{- p t}$,
$\lambda = \frac{4 b^2}{(3b-2) (b+2)}$,
$$ r = \frac{1}{2-b} \left[ 1 + \lambda^\frac{1}{3} \left( \omega \left(1 -
\sqrt{1- \lambda} \right)^\frac{1}{3} + \omega^2 \left(1 + \sqrt{1- \lambda}
\right)^\frac{1}{3} \right) \right] , \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
where $\omega$ is a cube root of unity and we need to use $\omega=1$
for $0 \leq b < \frac{2}{3}$ and
$\omega = e^{\frac{2 \pi}{3} i}$ for $\frac{2}{3} < b \leq 2$
(the solution and derivatives are continuous across $b=\frac{2}{3}$).
As is shown by (\number\eqlargn) the string susceptibility
$\gamma_{str}=-\frac{1}{2}$ for all finite $p$ in this truncated model; only for
$p=\infty$ does $\gamma_{str}$ change to $\frac{1}{2}$, but, again, in the full
model this change may occur at some finite $p_c(\beta)$.
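Equivalently, writing $u=e^{-2\mu_c}$, the cubic reads
$(1-4u(y-1))^3=432\,u^2$ with $432=(12\sqrt{3})^2$. The following numerical
sketch, our addition, selects the physical root (the one with $h>0$) and
displays the approach to the tree value ${\mu_c}'=\frac{1}{2}pt+\log 2$:
\begin{verbatim}
import numpy as np

def mu_c(pt):
    y = np.exp(pt)
    a = 4.0 * (y - 1.0)
    # (1 - a u)^3 = 432 u^2,  u = exp(-2 mu_c); keep the root with h > 0
    roots = np.roots([-a**3, 3.0*a**2 - 432.0, -3.0*a, 1.0])
    u = min(r.real for r in roots
            if abs(r.imag) < 1e-9 and r.real > 0 and 1.0 - a*r.real > 0)
    return -0.5 * np.log(u)

for pt in (0.0, 2.0, 6.0, 10.0):
    print(pt, mu_c(pt), 0.5 * pt + np.log(2.0))  # approaches the tree value
\end{verbatim}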
\begin{figure}[hptb]
\caption{Truncated model (model I)}
\begin{picture}(200,110)(40,100)
\epsfbox{fig12.eps}
\end{picture}
\end{figure}
\subsection{Model~{III}}
This calculation can be repeated for model~{III}, which has,
$$ \mu_G = \frac{1}{N} (n_3 t^3 +
n_4 t^4 + n_5 t^5 + (n_6 - \frac{1}{2} n_3^2 ) t^6 + \cdots ) . \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
We truncate $\mu_G$ to
$$ \mu_G^T = \frac{n_3}{N} t^3 \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
where $0 \le n_3 \le \left[\frac{N}{3}\right]$.
The grand canonical partition function is defined as
$$ {\cal Z}(\mu,pt^3) = \sum_{N=6 \atop even}^{\infty} e^{-\mu N}
\sum_{n_3 =0}^{\infty} {\cal G}^{(3)}(N,n_3) e^{ n_3 p t^3}, \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
where ${\cal G}^{(3)}(N,n_3)$ is the number of graphs in model~{{III}} with $N$
vertices and $n_3$ loops of length three. This obeys a recurrence
relation, derived in appendix~\ref{app:tri},
\rra=\mycount
$$ {\cal G}^{(3)} (N+2,n_3 +1)= \left( \frac{N-3n_3}{n_3+1} \right) {\cal G}^{(3)} (N,n_3) +
3 {\cal G}^{(3)} (N,n_3+1), \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
which holds for $N \geq 6$.
The recurrence relation (\number\rra) gives,
putting $y=e^{pt^3}$ and $x=e^{-\mu}$,
$$ \frac{\partial {\cal Z}}{\partial y} \left(1 + 3 x^2 \left( y-1
\right) \right) = x^3 \frac{\partial {\cal Z}}{\partial x} + \frac{1}{3}
x^6 y . \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
The solution is
$$ {\cal Z}(\mu,pt^3) = - \frac{1}{3} \left( h x - \frac{1}{2} x^2 + \frac{1}{4} x^4
\right) + \sum_{N=2 \atop even}^{\infty} {\cal G}^{(3)} (N) h^N , \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
where ${\cal G}^{(3)}(N)$ is given by (\number\eqa) and
$ h = x + x^3 (y-1) .$
Thus for large $N$,
$$ {\cal Z} \sim \sum_N {\cal G}^{(3)} (N) h^N \sim \sum_N N^{-\frac{7}{2}} \exp N \left(
\frac{1}{2} \log \left(
\frac{256}{27} \right) + \log \left( x + x^3 (y-1) \right) \right).
\eqno ( \number\mycount ) \global\advance\mycount by 1 $$
Solving for $\mu_c(pt^3)$ as in the previous case gives
$$ \mu_c(pt^3) = - \frac{1}{2} \log 3 + \frac{5}{3}\log 2 + \frac{1}{3} \log
(y-1) - \log \left[\sum_{\sigma= \pm 1}
\left( 1 + \sigma \sqrt{1 + \left( \frac{32}{27} \right)^2
\frac{1}{y-1} } \right)^\frac{1}{3} \right] . \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
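Setting the exponent in the asymptotic form to zero gives the cubic
$x+x^3(y-1)=\sqrt{27/256}$ with $x=e^{-\mu_c}$, which is easily solved
numerically; the sketch below, our addition, compares the result with the
linear asymptote $\frac{1}{3}pt^3+\frac{1}{6}\log\left(\frac{256}{27}\right)$:
\begin{verbatim}
import numpy as np

def mu_c_3(pt3):
    y = np.exp(pt3)
    roots = np.roots([y - 1.0, 0.0, 1.0, -np.sqrt(27.0 / 256.0)])
    x = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    return -np.log(x)

for pt3 in (0.0, 4.0, 8.0, 12.0):
    print(pt3, mu_c_3(pt3),
          pt3 / 3.0 + np.log(256.0 / 27.0) / 6.0)  # approaches the asymptote
\end{verbatim}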
We can also calculate $\langle n_3 \rangle/N$,
$$ \frac{\langle n_3 \rangle}{N} =
\frac{1}{a} \left[ 1 - \left( \frac{1}{2} \left( \frac{3 -
a}{12 - c a} \right) \right)^\frac{1}{3} \sum_{\sigma = \pm 1}
\left( 1+ \sigma \sqrt{1- 4 \left( \frac{3 - a}{12 - c a}
\right) } \right)^\frac{1}{3} \right] , \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
where $ a = 3(1-e^{-pt^3}) $ and $c=295/256$.
Again, this saturates as $p$ is increased (see fig~{{13}}b).
Looking at the graph (fig~{{13}}a) of $\mu_c(pt^3)- \frac{1}{3} pt^3 -
\frac{1}{6} \log\left(\frac{256}{27}\right)$ (where the last term
takes account of the exponential number of graphs with
$\frac{n_3}{N}=\frac{1}{3}$) we see that for $p t^3 > 8$
the behaviour is essentially linear, because after this point
the integral is being dominated by a single type of graph (ie those
graphs with $n_3 \approx \frac{1}{3} N$). However, it should be noted that in
this truncated model we again
need $p=\infty$ in order to actually reach $ n_3/N =\frac{1}{3}$ and the set
of such graphs is quite large
(not just fractal-like graphs).
This is due to the fact that $\mu_G$ has been truncated to the
first term ($n_3$), but higher order terms are needed to show that fractals
are maximal for $\beta \to 0$. Again, $\gamma_{str} =-\frac{1}{2}$ for all
finite values of $p$.
\begin{figure}[hp]
\caption{Truncated model (model III)}
\begin{picture}(200,110)(40,0)
\epsfbox{fig13.eps}
\end{picture}
\end{figure}
\section{Magnetization transition}
\label{sec:mag}
\subsection{Derivation of a bound on $\beta_c$}
In this section we derive a bound on the critical value
of the coupling constant, $\beta_c$, for the case of a single Ising
spin ($p=1$) on a fixed
graph $G$. This implies a bound on the critical coupling for
magnetization related phase transitions in models~{I}, {II}~and~{III}.
The high temperature expansion of $Z_G$ (\number\hight)
gives,
$$ Z_G = 1+ \sum_l n_l t^l =
\exp \left({ \sum_{l=1}^{\infty} a_l t^l} \right) . \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
Now consider,
$$ Z_{G'} = \exp \left( \sum_{l=1}^\infty n_l^c t^l \right), \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
where $n_l^c$ is the number of connected,
non-backtracking closed loops of length $l$.
Expanding the exponential yields all closed non-intersecting loops,
but in addition it gives loops in which some links are used more than
once so that $Z_G < Z_{G'}$ and therefore
$$ \mu_G < \frac{1}{N} \sum_{l=1}^{\infty}n_l^c t^l .\eqno ( \number\mycount ) \global\advance\mycount by 1$$
However the number of non-backtracking closed loops of length $l$
originating at a given point is less than $2^l$ so
$$ \mu_G < \sum_{l=1}^\infty (2t)^l . \eqno ( \number\mycount ) \global\advance\mycount by 1$$
Thus the high temperature series for $\mu_G$ must converge if
$t<\frac{1}{2}$; it follows that any phase transition on $G$ must occur at a
$t_c \ge \frac{1}{2}$ (ie $\beta_c \ge 0.549$).
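Numerically, as a trivial check added for convenience:
\begin{verbatim}
import math
print(math.atanh(0.5))      # 0.5493..., the bound on beta_c
print(0.5 * math.log(2.0))  # 0.3466..., the dual-lattice value used below
\end{verbatim}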
A slight modification of
this argument excludes the possibility of permanently magnetized graphs.
Fix one spin $S_+$ to be $+1$, so the magnetization $M_G(\beta)$ is given by,
$$ M_G(\beta) = \frac{1}{N} \sum_{+}
\frac{1}{N} \frac{1}{Z_G} \frac{1}{2^{N-1}} \sum_{\{S\}} \sum_i
S_i \prod_{\langle a,b \rangle} ( 1 + t
S_a S_b) , \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
where the $\sum\limits_{+}$ runs over all possible locations of the
fixed spin (this is necessary because unlike the regular lattice model
any two given spins may be almost disconnected from one another).
The only contribution to the numerator comes from paths connecting
$S_i$ to $S_+$, so
$$ M_G(\beta) = \frac{1}{N^2} \ \frac{ \sum\limits_{+} 2^{N-1}
\sum\limits_i \sum\limits_l d_l(i,+)t^l}
{\sum\limits_{\{S\}}
\prod\limits_{\langle a,b \rangle} (1+t S_a S_b)} , \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
where $d_l(i,+)$ is the number of paths of length $l$ from $S_i$ to $S_+$
(which can be disconnected, but are non-self-intersecting and
non-backtracking). However,
$$\sum_l d_l(i,+) t^l = \sum_l w_l(i,+) t^l ( 1 + b_1 t + b_2 t^2 + \cdots
) , \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
where $w_l(i,+)$ is the number of connected paths of length $l$ from $S_i$
to $S_+$ and the $b_l$ series gives the contributions from the closed
loops, which do not intersect the path. The denominator, however,
contains contributions from all closed loops and hence is larger than
$2^{N-1} \sum_l b_l t^l$ so
$$ M_G(\beta) < \frac{1}{N^2} \sum_+\sum_i \sum_l w_l(i,+) t^l .\eqno ( \number\mycount ) \global\advance\mycount by 1 $$
However, $\sum_i w_l(i,+)$ is just the number of connected paths of length $l$
from $S_+$ and we know that this is less than $2^l$. Thus,
$$ M_G(\beta) < \frac{1}{N} \sum_l (2t)^l = \frac{1}{N}
\frac{1}{1-2t} , \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
for $t<\frac{1}{2}$ and hence $ M_G(\beta) \to 0$ as $N \to \infty$.
If more than one spin is fixed to be $+1$, then the
right hand side of this equation is multiplied by the number of such
spins, so that to avoid $ M_G(\beta) \to 0$ a number of spins
proportional to $N$ must be fixed. That is, we need to fix a thermodynamically
significant number of spins in order to get the system to magnetize
and thus there can be no spontaneously magnetized state for $t<\frac{1}{2}$.
This result was derived for a single graph, but obviously still applies
when we perform a summation over graphs; no phenomena associated with
magnetization can occur at $t<\frac{1}{2}$.
For the $\phi^3$ graphs the high temperature ($\beta \to 0$) expansion
converges for $\tanh \beta < \frac{1}{2}$; this implies that on the dual
triangulation the low temperature ($\beta \to \infty$) expansion
converges for $e^{-2\beta} < \frac{1}{2} $ (ie for $\beta > \frac{1}{2} \log 2
\approx 0.347$). Thus any critical value of $\beta$ must satisfy
$\beta_c \leq \frac{1}{2} \log 2$, on the triangulated surface. In contrast
with $\phi^3$ graphs, there exist both graphs which never magnetize
and those which are permanently magnetized (see fig~\ref{fig:dual}).
The argument
showing that there are no permanently magnetized $\phi^3$ graphs, can not
be used for the dual triangulation because the relation $\sum_i w_l(i,+)
< 2^l $ no longer holds. This is due to the fact that the vertices can
have coordination numbers greater than three (in fact the permanently
magnetized graph shown has points with coordination numbers of order
$N$).
\begin{figure}[htb]
\caption{Triangulations: (a) never magnetizes; (b) permanently magnetized}
\label{fig:dual}
\begin{picture}(100,45)(22,0)
\epsfbox{dual.eps}
\end{picture}
\end{figure}
\subsection{Mechanisms of magnetization}
\label{sec:geom}
For this model the magnetization can be written as
$$M(p,\beta) = \frac{\sum\limits_G \frac{1}{{{\scriptstyle s}_{\scriptscriptstyle G}}} M_G (Z_G)^p}{\sum\limits_G
\frac{1}{{{\scriptstyle s}_{\scriptscriptstyle G}}} (Z_G)^p} , \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
where $M_G(\beta)$ is the magnetization for a given $N$-vertex graph
$G$, and we are taking the limit $N \to \infty$.
\noindent
In the limit $p \to 0$ we effectively have a quenched magnetization,
$$M_0(\beta) \equiv M(0,\beta)= \frac{1}{{\cal G}(N)} \sum_G \frac{1}{{s_{\scriptscriptstyle G}}}
M_G(\beta) , \eqno ( \number\mycount ) \global\advance\mycount by 1$$
where ${\cal G}(N)$ is the number of graphs with $N$ vertices in whichever
model is being looked at. This case has recently been investigated
numerically~\cite{BHJ}. The critical value for the magnetization
of this model will be denoted by $\beta_c$ and the critical coupling
constant for a given graph $G$ by $\beta_G^*$. One can show that the
behaviour of $M_0(\beta)$ at $\beta_c$ only depends on those graphs
which magnetize near $\beta_c$. We will illustrate this by
proving the result for the case in which graphs $G$ undergo a first
order phase transition; in this case only those graphs for which
$\beta_c - \epsilon < \beta_G^* \le \beta_c$, (where $\epsilon$
is an arbitrary small positive number) contribute to $M_0(\beta_c)$.
In actual fact the phase transitions of
individual graphs are second order; however, the extension of the
result to this case is relatively straightforward and is omitted.
Now,
\eqtriv=\mycount
$$ M_0(\beta) = \frac{1}{{\cal G}(N)} \left[ \sum_{G:\beta_G^* \le \beta -
\epsilon} \! \! \! M_G(\beta) \ + \sum_{G:\beta_G^* > \beta - \epsilon}
\! \! \! M_G(\beta) \ \right], \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
where the notation $G:\beta_G^* \le \beta - \epsilon$ means that we
are summing over graphs $G$ for which the inequality is satisfied and
we have absorbed the symmetry factors into the summations. Since the
system magnetizes at $\beta_c$,
\eqzero=\mycount
\defG:\beta_G^* \le \beta_c - \epsilon{G:\beta_G^* \le \beta_c - \epsilon}
$$M_0(\beta_c -\epsilon) = \frac{1}{{\cal G}(N)} \sum_{G:\beta_G^* \le \beta_c - \epsilon} \! \! \! M_G(\beta_c -
\epsilon) \ \ \mathop\to\limits_{\scriptscriptstyle N \to \infty} \ 0 .\eqno ( \number\mycount ) \global\advance\mycount by 1 $$
For a first order transition in which each graph jumps to $M_G^0$ at
its critical point,
$$\frac{1}{{\cal G}(N)} \sum_{G:\beta_G^* \le \beta_c - \epsilon} \! \! \! M_G(\beta_c -\epsilon) \ \ge \
\frac{1}{{\cal G}(N)} \sum_{G:\beta_G^* \le \beta_c - \epsilon} \! \! \! M_G^0 \ \ge
\min(M_G^0) \
\frac{1}{{\cal G}(N)} \sum_{G:\beta_G^* \le \beta_c - \epsilon} \! \! \! \! \! 1 \ \ . \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
While,
$$ \frac{1}{{\cal G}(N)} \sum_{G:\beta_G^* \le \beta_c - \epsilon} \! \! \! M_G(\beta_c) \ \le
\frac{1}{ {\cal G}(N)} \sum_{G:\beta_G^* \le \beta_c - \epsilon} \! \! \! \! \! 1 \ \
\le \frac{1}{
\min(M_G^0)}
\frac{1}{{\cal G}(N)} \sum_{G:\beta_G^* \le \beta_c - \epsilon} \! \! \! M_G(\beta_c - \epsilon) \ , \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
which tends to zero by (\number\eqzero).
Hence (\number\eqtriv) gives
$$M_0(\beta_c)= \frac{1}{{\cal G}(N)} \sum_{G:\beta_G^* > \beta_c -
\epsilon} \! \! \! M_G(\beta_c) \ = \frac{1}{{\cal G}(N)} \sum_{G: \beta_c -
\epsilon < \beta_G^* \le \beta_c} \! \! \! \! \! M_G(\beta_c) \ , \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
so that only graphs which magnetize near $\beta_c$ contribute to
$M_0(\beta_c)$. As it stands the proof cannot be used for
second order transitions (where $M_G^0=0$), but careful consideration
shows that in this case only graphs for which $\beta_c - \epsilon <
\beta_G^* \le \beta_c + \delta$ (where $\epsilon$, $\delta$ are small
positive numbers) can contribute to $M_0(\beta_c+\delta)$.
Thus the critical
exponents can only depend on the behaviour of graphs which are
magnetizing near the critical point $\beta_c$ and the magnetization is
as a result of spin-ordering of these graphs.
For non-zero $p$, a different geometric type of transition is possible
and this seems to be what occurs for large $p$. In this case the
magnetization is caused by the changing of the relative weights
between different graphs. This can best be understood by looking at a
simple model in which there are only two
types of graph; suppose that there are
$n_1$ unmagnetized graphs ($n_1 \sim \exp f_1 N$) with partition
function $Z_1 \sim \exp \mu_1 N$ and $n_2$ magnetizable graphs ($n_2
\sim \exp f_2 N$) with magnetization $m(\beta)$ and partition
function $Z_2 \sim \exp \mu_2 N$. Then assuming that the magnetizable
graphs are the more numerous ($f_2>f_1$) and have smaller partition
functions ($\mu_2 < \mu_1$), which is certainly the case if we are
looking at transitions to tree graphs in model~{I}, the partition
function is
$$ Z_N(p) = n_1 {Z_1}^p + n_2 {Z_2}^p \similar e^{(f_1 + p \mu_1) N} +
e^{(f_2 + p \mu_2) N} \eqno ( \number\mycount ) \global\advance\mycount by 1 $$
and the magnetization,
$$ M(p,\beta)
\similar \frac{1}{Z_N(p)} \ m(\beta) \ e^{(f_2 + p \mu_2) N} .\eqno ( \number\mycount ) \global\advance\mycount by 1 $$
In the thermodynamic limit the magnetization $M(p,\beta)$ is zero if $f_1+ p
\mu_1 > f_2 + p \mu_2$, that is, if $p (\mu_1 -\mu_2) >(f_2 - f_1)$.
As $\beta$ is varied the difference $\mu_1 - \mu_2$
changes and at the point for which the inequality no longer holds ($\beta_c$)
the magnetization jumps from zero to $m(\beta_c)$. Thus there is a
first order transition.
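The toy model is easy to explore numerically at large but finite $N$. In
the sketch below, our addition, all parameter values, including the
stand-in magnetization $m(\beta)=\tanh 2\beta$ and the linear growth of
$\mu_2$ with $\beta$, are invented purely for illustration; the
magnetization jumps near the $\beta$ at which $f_1+p\mu_1=f_2+p\mu_2$:
\begin{verbatim}
import numpy as np

def magnetization(p, beta, N=200):
    f1, f2 = 0.2, 0.5            # magnetizable graphs more numerous: f2 > f1
    mu1 = 0.3                    # unmagnetized free energy (assumed constant)
    mu2 = 0.1 + 0.3 * beta       # magnetizable graphs catch up with beta
    m = np.tanh(2.0 * beta)      # stand-in for m(beta)
    delta = (f1 + p * mu1) - (f2 + p * mu2)
    return m / (1.0 + np.exp(delta * N))

for beta in np.arange(0.0, 1.01, 0.1):
    print(round(beta, 1), magnetization(5.0, beta))  # jump near beta ~ 0.47
\end{verbatim}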
In the large $p$ limit for which
$\beta_c \to \infty$ it is possible in model~{I}~to have
magnetized graphs with $r \equiv n_1/N$
arbitrarily close to $\frac{1}{2}$,
so that even if the
transition is generically first order the discontinuity,
$\Delta r$, could be zero in this
limit (ie perhaps $\Delta r \to 0$ as $p \to \infty$).
For $p \to \infty$ there is such a
transition, for model~{I}, between unmagnetized tree graphs and
magnetized non-tree-like graphs, with no discontinuity~\cite{Wex1,Wex2}.
\section{Conclusion}
\label{sec:conc}
In figure~\ref{fig:phase} we have drawn possible forms of the phase
diagram for model~{I}. The magnetized region is labelled M, the
tree-like region T and the remaining unmagnetized non-tree-like region
U. We would expect similar diagrams for models~{II}~and~{III}, but in
the latter case it is not clear what is happening in the T region,
as we have not identified the maximal graphs for intermediate values
of $\beta$.
\begin{figure}[htbp]
\caption{Possible phase diagrams}
\label{fig:phase}
\begin{picture}(100,140)(15,0)
\epsfbox{phase.eps}
\end{picture}
\end{figure}
For models~{I}~and~{II}, we know that the partition functions of
graphs increase as we replace closed loops with disconnected
structures. Since the graphs are becoming
less well-connected one might expect
that they tend to magnetize at higher values of $\beta$ and have
lower magnetizations at a given value of $\beta$. As $p$ is increased
in this model, the graphs with the larger partition functions make a
relatively larger contribution to $Z_N(p)$ and so we
expect that $\beta_c(p)$ will increase. There is some evidence for this
from numerical simulations~\cite{BaiJoh2,ADJT,BFHM,KowKrz}. We might
also expect the magnetization, for a fixed $\beta$, to decrease as $p$
is increased and again there is some evidence for this from
Monte Carlo simulations~\cite{ADJT}. It should of course be noted that
these simulations were for model~{III}, for which we do not know the
maximal graphs. However, since all the maximal graphs that we {\it have}
identified are not
well-connected and do not magnetize, it is quite likely that this also
applies for model~{III} at intermediate values of $\beta$; it also
follows that in none of the models do we expect $\beta_c(p \! \to \!
\infty)$ to be finite.
In the papers of Wexler~\cite{Wex1,Wex2} the $p \to \infty$ limit of
model~{I}~is studied in the low temperature (large $\beta$)
expansion. In this limit there is a critical coupling $e^{-2\beta_c}
\sim 1/p$ below which the dominant graphs are tree-like being made up
of large numbers of almost disconnected baby universes within which
the spins are all aligned. This phase has $\gamma_{str}=\frac{1}{2}$ and our results
indicate that this behaviour must persist beyond the large $\beta$
expansion, in fact for all $\beta >0$. Above $\beta_c$ the dominant
graphs contain few baby universes each of whose volume (number of
vertices) is very large and $\gamma_{str}=-\frac{1}{2}$;
thus it seems that the properties of the M region do
not depend strongly on p. At the critical point, where both the number
of baby universes and their volume diverge, it is found that
$\gamma_{str}=\frac{1}{3}$. This result is also found in the paper of Ambj{\o}rn et
al~\cite{ADJ} who considered a restricted model in which only phase
interfaces of minimum length are allowed; they obtain the result that
the exponent at the critical point $\gamma^{*}$ is related to that in
the large $\beta$ phase, $\overline{\gamma}$, by $\gamma^{*} = \overline{\gamma} / (\overline{\gamma} -1)$ so
long as $\overline{\gamma}<0$. This phase transition at $p \to \infty$ is
geometrical, as discussed in section~\ref{sec:geom}, rather than one
of spin alignment; it occurs because the partition functions of
magnetized graphs are catching up with the others. It seems highly
likely that this phenomenon will persist to finite $p$ although a
calculation of the $1/p$ corrections (or equivalently the contribution
of domain boundaries of length greater than two in the model
of~\cite{ADJ}) is needed to rule out the phase diagram of~\ref{fig:phase}b
analytically.
The extent of the tree-region T is not yet fully understood. We know
that at $\beta=0$, where the Ising partition function is one, the
model is independent of $p$ and has $\gamma_{str}=-\frac{1}{2}$; in addition, the
absence of extra transitions in models with $c<1$ suggests that the line
AC separating the U and T regions does not go below $p=2$. We cannot
rule out the possibility that $p_c(\beta)=\infty$ as shown in
figure~\ref{fig:phase}a. However, the truncated models do display a
region of relatively rapid changeover to the T region and it seems
likely that the full models have a phase transition (with $p_c t
\approx 6$, $p_c t^3 \approx 8$ for models~{I}~and~{III}~respectively
at small enough $t$).
For model~{I},
$x \equiv \frac{1}{2} - r$ (where $r=n_1/N$ is the ratio of loops
of length one to vertices) serves as an order parameter. In the region T,
$x=0$ and approaching the line AC from below at fixed
$\beta$, we would expect $x$ to decrease continuously to zero, that is
a second (or higher) order phase transition.
This is because, just below the critical line,
the largest contributions to $Z_N(p)$ come from diagrams which are
tree-like except for a number of closed loops. As $p$ is increased the
number of closed loops (excluding loops of length one)
decreases to zero in a continuous fashion (see for
example fig~{{12}}b, which shows how $\left< r \right>$ approaches
a half as $p$ is
varied, for the truncated model).
The transition to T from M across CB is rather different.
Then we know that just below the
critical line the partition function is dominated by magnetized graphs
(which cannot be tree-like or almost tree-like, as such graphs do not
magnetize for $t \ne 1$) and such graphs have $r \ne \frac{1}{2}$. Thus we
might expect there to be a first order transition with $r$ jumping
discontinuously. It is tempting to hypothesise that
for the line DC between the M and U regions there is a continuous
spin-ordering transition whose characteristics are determined
by the graphs which are
actually magnetizing at the critical point
$\beta_c(p)$, as happens for the quenched case ($p \to 0$).
On the other hand the line CB between the T and M regions is
a geometrical phase transition in which the transition is caused
by the changes in geometry due to the changing relative weights
(ie ${Z_G}^p$) of the magnetized and tree-like graphs.
For model~{III}, there is direct
numerical evidence from computer simulations. In this case it is not
as clear what we should use as an order parameter for the U--T
transition, but $n_5/N$ (the
ratio of closed loops of length five to vertices) seems a
plausible candidate. For both the fractal and ladder graphs this
quantity is zero and if, as seems possible, the intermediate maximal
graphs consist of mixtures of fractals arranged in a ladder-like
fashion, then it would be zero in the whole T region. Unfortunately,
none of the numerical studies looked at this quantity in detail.
Reference~\cite{ADJT} measures this quantity for $p=16$ (in fig 12 of
that paper), but only at the critical point $\beta_c$, where it is
very close to its pure gravity value.
However, references~\cite{BaiJoh1,BaiJoh2,ADJT} do give graphs of $n_3/N$ for
various values of $\beta$. This quantity peaks at some value of
$\beta$ below $\beta_c$ and drops to the pure gravity value as $\beta
\to 0$ or $\beta \to \beta_c$. The height of the peak grows with $p$
in an apparently linear fashion,
but we know that it is bounded by the relation $n_3/N \le \frac{1}{3}$. It would be
interesting to see how this quantity saturates as $p$ is increased,
for $\beta$ near $\beta_c$, and to compare it with the results we have
from the truncated model in the limit of small $\beta$.
It is interesting to examine the average maximal radius, $r_{max}$,
of the ensemble. For a given graph
$r_{max}$ is defined in the following fashion. Take an arbitrary
point on the graph (called the ``centre''), mark all of its neighbours
as being at a distance one from the centre, mark all the unmarked
neighbours of these points as being at distance two
and so on. The maximal extension is then the distance of the furthest
point from the centre and averaging over different centres gives
$r_{max}$ on the graph. This procedure can be carried out on either
the $\phi^3$ graph or the dual triangulation, giving two different
definitions, which we will denote $r_{max}^{G}$ and $r_{max}^{T}$
respectively. In figure~7 of~\cite{ADJT} $r_{max}^{T}$ is plotted for various
different models; the curve has a minimum for values of $\beta$ just
below $\beta_c$ and its depth increases with $p$ up to the maximum
value considered, $p=16$.
Both fractals and ladders have very small values of
$r_{max}^T$ and so as the T phase is
approached one would expect $r_{max}^T$ to fall.
Let $p^*$ be the value of $p$ at the point C in fig~\ref{fig:phase}c
and now suppose that $p$ is held fixed at $p<p^*$ and we approach the
magnetization transition from the U region; then as $\beta$ increases
we also get closer to the T region and thus might expect to see a
decrease in $r_{max}^T$, which disappears when the system magnetizes. As
$p \uparrow p^*$ this effect would become more pronounced until at $p$
just above $p^*$ we expect to see a sharp transition into a branched
polymer-like phase which immediately disappears as $\beta$ increases
and the system magnetizes; $r_{max}^T$ would decrease sharply as the
critical region around C is approached and then rebound when the
system magnetizes. This behaviour seems very like that observed
in~\cite{ADJT} which suggests to us that even the largest value of $p$
used in the simulation is probably no greater than $p^*$. If $p$ is
significantly greater than $p^*$ we would expect the minimum value of
$r_{max}^T$, when the ensemble is most branched polymer-like, to occur
somewhere in the middle of the T region rather than at the
magnetization transition; so as $p$ increases beyond $p^*$ the minimum
of $r_{max}^T$ will move away from the magnetization transition. It is
difficult to be entirely certain in interpreting the behaviour of
$r_{max}^T$ without knowing how the minimum value scales with $N$; it
would be useful to have simulation data on this and also at much
larger $p$ to be certain of our interpretation.
Although we have worked entirely with Ising models many of our
considerations probably extend to other spin systems which have second
order phase transitions on regular lattices; in particular the maximal
graphs and the general behaviour of the truncated models are likely to
be the same.
\bigskip
We would like to thank Th\'ordur J\'onsson for giving us advanced details
of~\cite{ADJ} and to acknowledge the support of the SERC under grant GR/J21354
and through the research studentship GR/H01243.
\section{A good news: a brief paper without answers}
\begin{enumerate}
\item Does quantum theory describe physical reality?
\item Is the wavefunction only a mathematical expression for evaluating probabilities?
\item Is the wavefunction not an objective entity?
\item Does the time dependence of the wavefunction fail to represent the
evolution of a physical system?
\item What does the collapse of the wave function mean?
\item What causes it?
\item What is a measurement?
\item How does physics describe reality?
\item Is interpretation merely philosophical bias, and
therefore not part of physics?
\item Is the wave function said to refer exclusively to a human
mind and not to any physical system external to that mind?
\item Are theoretical terms never directly observable?
\item Does determinism have a connection with physical reality?
\item Why can we ask about the problem of hidden variables? Is
this question admissible?
\item Is the above question supported by some tacit assumption?
Before asking the question, do we already have in mind a picture of
physical reality?
\item Do we understand physical reality better with hidden variables?
\item Is it possible to have hidden variables of the hidden variables? In that case, would nature be more intelligible?
\item Is the link between determinism and hidden variables admissible?
\item What is the nature of probability?
\item Why do we introduce the notion of probability?
\item What is the probability of a single object?
\item Is probability merely a collective term?
\item How do we introduce the properties of an object?
\item Are the properties of a single object the same as those of an ensemble of identical objects?
\item Do we need to assume the existence of observers?
\item How is the information content of a system quantified?
\item How is information transferred?
\item What is the physical status of information?
\item What role, if any, can an information-theoretic analysis of a physical phenomenon
play in an explanation of a physical phenomenon?
\item Is information something physical (an ontological problem)?
\item Is any theorem valid only for a fixed system of axioms? Why?
\item Is it possible to have another axiomatization of quantum mechanics?
\item Where does the GRW collapse occur?
\item The standard view: an ontic measurement of an epistemic reality?
\item Does classical ensemble probability fail for ensembles of quantum systems?
\item What is the origin of the randomness?
\item What is the status of quantum realism?
\item Is a holistic nature of the wavefunction admissible?
\item Is a distinction within a quantum state between
ontic and epistemic elements possible?
\item Is the brain a quantum system?
\item Do quantum mechanically isolated systems have a physical meaning?
\item Could geometry eliminate the observers?
\item Does quantum information exist?
\item Does probability require indistinguishability?
\item Physical systems vs.\ conceptual systems?
\end{enumerate}
\section{Conclusion}
The pretext of these 44 questions is only to affirm the importance of
``questions'' in our research; probably some of the questions listed
above will be (a) inadmissible, (b) of a metaphysical nature, or
(c) answerable in a simple way.
\end{document}
\section{Introduction}
\label{intro}
In a~systematic study of low-energy states of polymers adsorbed to a~stringlike substrate, we recently found a~variety of different
conformational phases by just changing two basic substrate
properties~\cite{tv10prl}. Among these structures are spherical
droplets attached to the string as well as barrellike monolayer
conformations surrounding the string. The latter conformations exhibit
similarities to single-walled
carbon nanotubes.
Generally, the adsorption of polymers on material surfaces or
substrates is a~crucial and nontrivial process in nature and
nanotechnology. It is also known, for example, that the adsorption
process or, in fact, the potential of a~polymer to adsorb at
a~semiconductor surface depends essentially on details like the exact
position of single monomers in the primary structure of
a~hetero\-polymer~\cite{bachmann10acie}. However, specific hybrid
systems composed of inorganic matter and polymers potentially
facilitate the development of completely new nanotechnological devices
like sensors for specific single molecules or devices for ultrafast
photonics~\cite{gao03ea,hasan09am}.
Hence, fundamental investigations are indispensable for a~better understanding of such systems. Computational studies of
coarse-grained systems have proven to be adequate and quite useful
for this purpose in recent years, both for predicting and
interpreting specific and basic behavior of polymer
adsorption~\cite{milchev01jcp,bachmann05prl,bachmann06pre1,monika09jpcb,bachmann10acie}.
Particularly interesting are systems with a~cylindrical substrate,
like carbon nanotubes. Many of the special properties of these
structures, that make them potential candidates for technological applications,
can be controlled, influenced or amplified by coating them
with polymeric material~\cite{gao03ea,hasan09am}. In previous
computational works, for example, the wetting of cylindrical fibers
with polymers or the helical wrapping of single polymers around
nanocylinders were studied~\cite{milchev02jcp,srebnik07cpl}.
In a~recent study, we have revealed a~more general
picture of the adsorption of polymers at different ultrathin
cylindrical substrates~\cite{tv10prl} by performing generalized-ensemble
Monte Carlo simulations~\cite{bergneuh91plb,bergneuh92prl,wangl01prl}
for a~general coarse-grained model of this hybrid system. Here, after
describing technical details (Sects.~\ref{model} and~\ref{meths}), we focus on
specific conformational transitions at very low temperatures
(Sect.~\ref{gs}), and discuss in a~thermodynamic analysis the adsorption
transition at comparatively high temperatures (Sect.~\ref{thermo}).
\section{The model}
\label{model}
For our adsorption study, we employ a~coarse-grained off-lattice model, where
the polymer consists of identical monomers which are represented by beads
without internal structure. These are connected sequentially by stiff bonds
of unity length. In order to facilitate future enhancements and the
comparison with previous studies (see, e.g.,~\cite{monika09jpcb}), we
introduce a~weak stiffness between the bonds, i.e., the polymer is not
strictly flexible. The polymer is placed into a~simulation box which also
contains an attractive thin string located in its center. Its orientation defines the $z$-axis.
The edge
lengths of this box in $x$ and $y$ directions with
periodic boundary conditions are chosen to be twice as large as the length of the completely
stretched polymer. We note
that the polymer is not grafted to the string and
may move freely in space.
The total energy $E$ of the polymer consists of contributions from the
Lennard-Jones interaction $V_\mathrm{LJ}$ between all pairs of nonadjacent
monomers, a~weak
bending stiffness $V_\mathrm{bend}$ and the monomer--string interaction
$V_\mathrm{string}$:
\begin{equation}
\begin{split}
E=&\sum_{i=1,j>i+1}^{N-2}V_\mathrm{LJ}(r_{ij})+\sum_{i=2}^{N-1}V_\mathrm{bend}(\cos\theta_i)\\
&+\sum_{i=1}^{N}V_\mathrm{string}(r_{\mathrm{z};i})\,,
\end{split}
\end{equation}
with
\begin{align}
&V_\mathrm{LJ}(r_{ij})=4\epsilon_\mathrm{m}\left[\left(\frac{\sigma_\mathrm{m}}{r_{ij}}\right)^{12}-\left(\frac{\sigma_\mathrm{m}}{r_{ij}}\right)^{6}\right],\tag{1a}\label{eq:1_lj}\\
&V_\mathrm{bend}(\cos\theta_i)=\kappa\,(1-\cos\theta_i)\,,\tag{1b}\\
&V_\mathrm{string}(r_{\mathrm{z};i})=\pi\,\eta_\mathrm{f}\epsilon_\mathrm{f}\left(\frac{63\,\sigma_\mathrm{f}^{12}}{64\,r_{\mathrm{z};i}^{11}}-\frac{3\,\sigma_\mathrm{f}^{6}}{2\,r_{\mathrm{z};i}^5}\right),\tag{1c}\label{eq:1_str}
\end{align}
where $r_{ij}$ is the geometrical distance between two mono\-mers $i$
and $j$, $\theta_i$ is the angle between the two bonds connected to
monomer $i$, and $r_{\mathrm{z};i}$ is the distance of the $i$th
monomer perpendicular to the string. The
parameters are set as follows:
$\epsilon_\mathrm{m}=\sigma_\mathrm{m}=1$, such that
$V_\mathrm{LJ}(2^{1/6})=-1$. The bending
stiffness is chosen to be comparatively weak, $\kappa=0.25$~\cite{stefan07prl,monika09jpcb}.
The interaction $V_\mathrm{string}$ between the string and the
mono\-mers is also based on a~simple Lennard-Jones potential, where
the wire is assumed to have a~homogeneous ``charge''
distribution~\cite{milchev02jcp,srebnik07cpl,monika09jpcb}.
The string potential can then be considered as the
limiting case of the potential of a~cylinder~\cite{milchev02jcp} in
the limit of vanishing radius at fixed overall
charge~\cite{tvlong10tbp}.
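For concreteness, the three energy contributions of Eqs.~(1a)--(1c) can be
coded up directly. The following Python sketch sums them for a given chain
configuration; it is a minimal illustration of the model, not the simulation
code used in this work, and it assumes positions that obey the unit
bond-length constraint.
\begin{verbatim}
import numpy as np

EPS_M, SIG_M, KAPPA = 1.0, 1.0, 0.25   # LJ parameters and bending stiffness

def v_lj(r):
    return 4.0 * EPS_M * ((SIG_M / r)**12 - (SIG_M / r)**6)

def v_bend(cos_theta):
    return KAPPA * (1.0 - cos_theta)

def v_string(rz, sigma_f=1.0, eps_f=1.0):
    eta_f = 0.53 / sigma_f             # scaled density: potential minimum -eps_f
    return np.pi * eta_f * eps_f * (63.0 * sigma_f**12 / (64.0 * rz**11)
                                    - 3.0 * sigma_f**6 / (2.0 * rz**5))

def total_energy(pos, sigma_f=1.0, eps_f=1.0):
    """pos: (N,3) array of monomer positions; the string is the z axis."""
    n, e = len(pos), 0.0
    for i in range(n - 2):             # LJ between nonadjacent monomer pairs
        for j in range(i + 2, n):
            e += v_lj(np.linalg.norm(pos[i] - pos[j]))
    for i in range(1, n - 1):          # bending at the inner monomers
        b1, b2 = pos[i] - pos[i - 1], pos[i + 1] - pos[i]
        e += v_bend(b1 @ b2 / (np.linalg.norm(b1) * np.linalg.norm(b2)))
    e += np.sum(v_string(np.hypot(pos[:, 0], pos[:, 1]), sigma_f, eps_f))
    return e
\end{verbatim}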
\begin{figure}[b!]
\includegraphics[width=\columnwidth]{string_potential.eps}
\caption{The interaction potential between a~monomer and the
string. Note that by scaling the string ``charge'' density with
$\sigma_{\mathrm{f}}^{-1}$, the minimum value of the potential is
$-1$ independently of $\sigma_{\mathrm{f}}$.}
\label{fig1}
\end{figure}
Alternatively, the Lennard-Jones potential for the interaction between
a~monomer and the string can be integrated out along the string axis
to yield~(\ref{eq:1_str}),
\begin{equation}
V_\mathrm{string}(r_{\mathrm{z}})=4\,\eta_\mathrm{f}\epsilon_\mathrm{f}\int_{-\infty}^{\infty}\mathrm{d}z
\left[\frac{\sigma_\mathrm{f}^{12}}{(r_{\mathrm{z}}^2+z^2)^{6}}-\frac{\sigma_\mathrm{f}^{6}}{(r_{\mathrm{z}}^2+z^2)^{3}}\right]\,,
\end{equation}
where $\sigma_\mathrm{f}$ is the van der Waals radius of the string and
can be considered as its effective ``thickness''. It is related
to the minimum distance $r_{\mathrm{z}}^\mathrm{min}$ of the string
potential via
\begin{equation}
\label{eq:r_vs_sigma}
r_{\mathrm{z}}^\mathrm{min}(\sigma_\mathrm{f})=\left(\frac{693}{480}\right)^{1/6}\sigma_\mathrm{f}\approx1.06\,\sigma_\mathrm{f}\,.
\end{equation}
For convenience, we scale the string ``charge'' density
$\eta_\mathrm{f}$ in such a~way that the minimum value of the
potential is $V_\mathrm{string}(r_{\mathrm{z}}^\mathrm{min})=-1$
independently of $\sigma_\mathrm{f}$, i.e., we set $\eta_{\rm
f}\approx 0.53\,\sigma_f^{-1}$. Fig.~\ref{fig1} shows the
correspondingly scaled string potential for different values of
$\sigma_\mathrm{f}$.
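As a quick numerical cross-check of Eq.~(\ref{eq:r_vs_sigma}) and of this
density scaling, one can locate the potential minimum directly. A sketch,
for $\sigma_\mathrm{f}=\epsilon_\mathrm{f}=1$ and with $\eta_\mathrm{f}$ as
the free rescaling factor:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def v_string(rz, sigma_f=1.0, eta_f=1.0):      # eps_f = 1 here
    return np.pi * eta_f * (63.0 * sigma_f**12 / (64.0 * rz**11)
                            - 3.0 * sigma_f**6 / (2.0 * rz**5))

res = minimize_scalar(v_string, bounds=(0.5, 3.0), method="bounded")
print(res.x, (693.0 / 480.0)**(1.0 / 6.0))     # both ~1.0631
print(-1.0 / v_string(res.x))                  # eta_f ~ 0.528 yields V(r_min) = -1
\end{verbatim}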
\section{Simulational details}
\label{meths}
Polymer systems are known to possess highly nontrivial, rugged
free-energy landscapes~\cite{janke08buch}. For the simulation, we
have, therefore, applied generalized-ensemble Monte Carlo methods. The
ground-state energies have been estimated by using various, but
conceptually similar, stochastic methods such as parallel tempering,
Wang--Landau, and multicanonical
sampling~\cite{bergneuh91plb,bergneuh92prl,janke98physA,wangl01prl},
as well as especially designed optimizing approaches like energy
landscape paving, where the energy landscape is deformed irreversibly
during the simulation~\cite{wenzel99prl,hansm99epjb,hansm02prl}. The efficiency of all
stochastic methods strongly depends on the conformational update set
used (and, of course, on the fine-tuning of each method). If a~move
set is chosen reasonably well, all methods lead to comparable results
in similar times. In addition, we have refined low-energy states by
standard deterministic optimization techniques such as the conjugate
gradient method.
For the estimation of the density of states and hence all
thermodynamic quantities, we employed the Wang-Landau
method~\cite{wangl01prl,zhoubhatt05pre} for the determination of the
multicanonical weights and performed a~final production run in the
multicanonical
ensemble~\cite{bergneuh91plb,bergneuh92prl,janke98physA}. Independently
of the values of the potential parameters, we partition the simulated
energy interval into 10\,000 bins in each simulation. The actual bin
size hence depends on the energy range delimited by the putative
ground-state energy and a~fixed upper boundary at high energies.
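To make the procedure concrete, a bare-bones Wang--Landau iteration for
estimating $\ln g(E)$ on such a binned energy window is sketched below in
Python. This is a schematic textbook version, not our production code; the
{\tt energy} and {\tt propose\_move} routines are placeholders, and the
initial state is assumed to lie inside the window.
\begin{verbatim}
import numpy as np

def wang_landau(energy, propose_move, state, e_min, e_max,
                nbins=10000, ln_f_final=1e-8, flatness=0.8):
    """Estimate ln g(E) on [e_min, e_max) in nbins bins."""
    ln_g = np.zeros(nbins)
    ln_f = 1.0                                 # initial modification factor
    width = (e_max - e_min) / nbins
    e_old = energy(state)
    while ln_f > ln_f_final:
        hist, steps = np.zeros(nbins), 0
        while True:
            trial = propose_move(state)
            e_new = energy(trial)
            if e_min <= e_new < e_max:
                b_old = int((e_old - e_min) / width)
                b_new = int((e_new - e_min) / width)
                # accept with probability min(1, g(E_old)/g(E_new))
                if np.log(np.random.rand()) < ln_g[b_old] - ln_g[b_new]:
                    state, e_old = trial, e_new
            b = int((e_old - e_min) / width)
            ln_g[b] += ln_f                    # penalize the visited bin
            hist[b] += 1
            steps += 1
            if steps % 100000 == 0:            # occasional flatness check
                visited = hist[hist > 0]
                if visited.min() > flatness * visited.mean():
                    break
        ln_f /= 2.0                            # refine the modification factor
    return ln_g
\end{verbatim}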
\begin{figure}[b!]
\includegraphics{fig2_updates.252pt.eps}
\caption{Conformational update moves used in our
simulations. (a)~local crankshaft move, (b) slithering snake or
reptation move, and (c) global spherical cap update.}
\label{fig2}
\end{figure}
As mentioned above, the choice of the update scheme is
crucial for the efficiency of any Monte Carlo simulation in
general. For conformational changes, we apply here a~variety of update
moves, see Fig.~\ref{fig2}, including local crankshaft
[Fig.~\ref{fig2}(a)] and slithering-snake moves [Fig.~\ref{fig2}(b)],
as well as global spherical-cap [Fig.~\ref{fig2}(c)] and translation
moves. Sets including these steps have been found to work quite well
in previous studies,
too~\cite{schnabel09cpl,schnabel09jcp,taylor09jcp}. The crankshaft
move is just a~rotation of a~single monomer around the virtual bond
between its neighbors, or, if the monomer is an end-monomer, around
the elongation of the one neighboring bond. For the slithering-snake
update, we cut a~monomer from one end and paste it at the other end
keeping the bond vector fixed. Both updates induce small
conformational changes of the whole chain, whereas the latter one
enables the polymer to leave very dense, adsorbed conformations.
The spherical-cap update consists of the shift of $1\leq n<N$ monomers by
a~small constant vector keeping the bond lengths at the $n$th monomer
fixed. It hence allows for larger steps in the conformational space
compared to the former local updates. The global translation update
finally allows for a~direct displacement of the chain relative to the
string, which in the entire simulation remains fixed in the box. In
any Monte Carlo step, we choose the different moves randomly with
equal weight. We convinced ourselves that this provides a~reasonable
sampling of the conformational space of this model system on large
scales as well as locally. In general, it performs no worse than, and for
the ground-state search even better than, a~procedure consisting of a~few
global moves followed by many more local moves, where the optimal ratio
depends on the system size. However, such a~procedure has been found to be
favorable for other problems~\cite{taylor09jcp}.
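For illustration, the crankshaft move for an inner monomer can be sketched
in a few lines of Python. The rotation uses Rodrigues' formula and, by
construction, preserves the two bond lengths adjacent to monomer $i$; the
end-monomer case (rotation about the elongated neighboring bond) is omitted
here.
\begin{verbatim}
import numpy as np

def crankshaft(pos, i, max_angle=np.pi):
    """Rotate inner monomer i about the axis through monomers i-1 and i+1."""
    new = pos.copy()
    axis = pos[i + 1] - pos[i - 1]
    u = axis / np.linalg.norm(axis)
    theta = np.random.uniform(-max_angle, max_angle)
    v = pos[i] - pos[i - 1]
    # Rodrigues' rotation: distances to both neighbors remain unchanged
    v_rot = (v * np.cos(theta) + np.cross(u, v) * np.sin(theta)
             + u * (u @ v) * (1.0 - np.cos(theta)))
    new[i] = pos[i - 1] + v_rot
    return new
\end{verbatim}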
Alternative, more sophisticated update
moves like bond-bridging or monomer-jump
moves~\cite{deutsch97jcp,schnabel09jcp,taylor09jcp,reith10cpc} have
not been included in our update set as they are rather time consuming
and would apparently not improve the principal findings of the present
study. They are, however, necessary for studying structural transitions
in the dense and crystalline
regime~\cite{schnabel09cpl,schnabel09jcp}, or for the investigation of
much larger systems. In the following, we discuss the structural
properties of a~polymer with $N=100$ monomers.
\section{Ground states of the system}
\label{gs}
\begin{figure}
\includegraphics{l100_eps1_pics.252pt.eps}
\caption{Low-energy conformations for $\epsilon_{\mathrm{f}}=1$ and (a)
$\sigma_{\mathrm{f}}=0.55$, (b) $0.573$, (c) $0.647$, (d) $1.0$, and (e)
$1.5$. (a) and (b) correspond to phase Gi, conformations in (c)--(e)
belong to phase Ge.}\vskip-.2\baselineskip
\label{fig3}
\end{figure}
There are two parameters in the interaction potential between the
string and monomers [Eq.~(\ref{eq:1_str}) and Fig.~\ref{fig1}], the
van der Waals radius proportional to the effective thickness of the
string, $\sigma_{\mathrm{f}}$, and the adsorption strength
$\epsilon_{\mathrm{f}}$. By varying these two parameters, we have
recently constructed the complete conformational phase diagram of
low-energy structures~\cite{tv10prl,tv10procathens}. By introducing
suitable observables, four major structural phases have been
identified. For small values of $\epsilon_{\mathrm{f}}$ and
$\sigma_{\mathrm{f}}$, i.e., for very thin and weakly attracting
strings, we find globular or spherical polymer droplets enclosing the
string (phase Gi). In this case, the polymer structures are similar to
those in bulk under poor solvent conditions. The string does not
influence the shape of these structures but affects only the internal
structure of the droplet.
Figure~\ref{fig3} shows conformations with
$\epsilon_{\mathrm{f}}=\epsilon_{\mathrm{m}}=1$. In
Figs.~\ref{fig3}(a) and~\ref{fig3}(b), droplets enclosing the string
are visualized. When increasing the van der Waals radius of the
string, monomer--monomer bonds inside the droplet will be broken and,
hence, the string is excluded from the droplet but remains
attached to it (phase Ge). The radius at which this rearrangement
occurs depends on $\epsilon_{\mathrm{f}}$. For
$\epsilon_{\mathrm{f}}\lesssim\epsilon_{\mathrm{m}}$, i.e., where the
string attraction is not significantly stronger than the
monomer--monomer attraction, the transition occurs, roughly, when the
diameter of the string becomes comparable to the equilibrium distance
between two monomers. When further increasing the string radius, the
string moves outward and the structure approaches the bulk
conformation. See Figs.~\ref{fig3}(c)--(e) for examples of
conformations at different $\sigma_\mathrm{f}$ values, thus
visualizing the described ``process''.
\begin{figure*}[t]
\includegraphics{Fig4_rad_distr_hists.522pt.eps}
\caption{Top: Radial distribution functions of low-energy states with
$\sigma_{\mathrm{f}}=5/3$ and (a) $\epsilon_{\mathrm{f}}=1.5$, (b)
$3.0$, (c) $3.5$, and (d) $4.5$. Bottom: Visualizations of the
respective structures. Different colors or shapes encode different
monomer layers, i.e., regions within certain distances to the string.}
\label{fig4}
\end{figure*}
When the string attraction strength $\epsilon_{\mathrm{f}}$ is increased
at any given value of $\sigma_{\mathrm{f}}$, the globular structures (Gi, Ge)
start to deform and to lose their spherical symmetry at
$\epsilon_{\mathrm{f}}\gtrsim3$. For $\sigma_{\mathrm{f}}\lesssim1.5$,
we observe a~transition from phase Gi directly to the barrel phase B,
which is characterized by closed, stretched conformations with
cylinderlike shape wrapping around the string. In the extreme case of
very strong string attraction, polymers form monolayer tubes.
For $\sigma_{\mathrm{f}}\gtrsim1.5$, the polymer first
adopts clamshell-like conformations (phase~C) before it forms barrel
structures in phase B. Interestingly, the evolution from spherical
droplets to monolayer tubes involves the formation of distinguishable
monomer layers.
For illustration, we plot in the upper row in Fig.~\ref{fig4} the
radial distribution of monomers with respect to the string of certain
low-energy structures. One finds accumulations of monomers at
different distances from the string, i.e., in different layers. The
position of the first layer corresponds to the van der Waals radius of
the string [cp.\ Eq.~(\ref{eq:r_vs_sigma}),
$\sigma_{\mathrm{f}}=5/3$], whereas the location of the higher-order
layers is connected to the equilibrium distance between the monomers,
corresponding to $\sigma_{\mathrm{m}}$. In the lower row, respective
structures are depicted. In Fig.~\ref{fig4}(a), we plot the radial
distribution of monomers in a~conformation from phase Ge with
$\epsilon_{\mathrm{f}}=1.5$. The emergence of different peaks in that
function can be observed. A~clear 3-layer structure can be identified
in Fig.~\ref{fig4}(b), where a~typical conformation in phase C is
shown ($\epsilon_{\mathrm{f}}=3$). In Fig.~\ref{fig4}(c), a~two-layer
barrel-shaped conformation is shown ($\epsilon_{\mathrm{f}}=3.5$)
which transforms into a~monolayer tube at
$\epsilon_{\mathrm{f}}\gtrsim4.5$, as depicted in
Fig.~\ref{fig4}(d). We would like to note here that, in particular,
these monolayer tubes exhibit interesting similarities to other
structures in nature like, for example, carbon
nanotubes~\cite{tv10prl,tv10procathens,tvmbtmja10tbp1,tvmbtmja10tbp2}.
\section{Thermodynamics of the adsorption}
\label{thermo}
\begin{figure}[t!]
\includegraphics{fig5.eps}
\caption{(a) Adsorption peaks in the canonical heat capacities of the
100-mer for different values of $\epsilon_{\mathrm{f}}$ at
$\sigma_{\mathrm{f}}=3/2$ [cp.\ the peak temperatures with
temperatures from the microcanonical analysis in
Fig.~\ref{fig6}(b)]. (b) ``Phase diagram'', i.e., transition line
between adsorbed and desorbed phases. Constructed from peak
positions in~(a).}\vskip-.3\baselineskip
\label{fig5}
\end{figure}
\begin{figure*}[t]
\includegraphics{fig6.eps}
\caption{(a) Logarithm of the density of states (proportional to the
microcanonical entropy) for the system with $N=100$ monomers,
$\sigma_{\mathrm{f}}=3/2$ and different string adsorption strengths
$\epsilon_{\mathrm{f}}$. The convex intruder clearly emerges and
becomes larger for increasing values of $\epsilon_{\mathrm{f}}$.
(b) The derivatives of the functions in (a) with respect to $E$
(proportional to the inverse microcanonical temperature). The lines
mark the respective microcanonical adsorption temperatures obtained
by the Maxwell construction in the backbending
region.}
\label{fig6}
\end{figure*}
Finally, we comment briefly on thermodynamic properties of the
adsorption transition~\cite{tvlong10tbp}. To estimate the
finite-system transition temperature, we first identify the peak positions
in the canonical specific-heat curves that are associated with the
transition. In Fig.~\ref{fig5}(a) we plot these peaks for a~polymer
consisting of $N=100$ monomers and a~string with
$\sigma_{\mathrm{f}}=3/2$ for various values of the substrate adhesion
strength $\epsilon_{\mathrm{f}}$. In Fig.~\ref{fig5}(b), we plot the
transition temperatures corresponding to the peak positions depending
on $\epsilon_{\mathrm{f}}$. We find an almost linear increase of the
adsorption temperature with adsorption strength, a~behavior that was
also observed in a~recent study of this transition
for the same polymer model interacting with planar
substrates~\cite{monika09jpcb}.
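All canonical curves of this kind follow directly from the estimated
density of states. For example, the specific heat can be computed from the
tabulated $\ln g(E)$ by a numerically stable reweighting; a generic sketch
(units with $k_\mathrm{B}=1$):
\begin{verbatim}
import numpy as np

def heat_capacity(e_bins, ln_g, temps):
    """C(T) = (<E^2> - <E>^2) / T^2 from ln g(E), stabilized via log-sum-exp."""
    c = np.empty_like(temps)
    for k, t in enumerate(temps):
        ln_w = ln_g - e_bins / t
        ln_w -= ln_w.max()                  # avoid overflow in exp
        w = np.exp(ln_w)
        z = w.sum()
        e_mean = np.sum(e_bins * w) / z
        e2_mean = np.sum(e_bins**2 * w) / z
        c[k] = (e2_mean - e_mean**2) / t**2
    return c
\end{verbatim}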
A~more adequate description of the thermodynamic behavior of small
finite-size systems is provided by the micro\-canonical
analysis~\cite{gross01buch,christoph06prl,christoph08jcp,christoph09epl,taylor09jcp,monika10arxiv},
based on the fact that all the relevant information about the system
is encoded in its density of states $g(E)$. In Fig.~\ref{fig6}(a), we
plot the logarithm of this function, which is proportional to the
micro\-canonical entropy [$S(E)=k_\mathrm{B}\ln g(E)$]. The adsorption transition
is represented by the convex region in the micro\-canonical entropy. The
energetic width of this convex part is a~measure for the latent heat
which is nonzero for a~first-order-like transition. The derivative of
the micro\-canonical entropy with respect to the energy yields the
inverse micro\-canonical temperature $\beta(E)=dS(E)/dE$. It is plotted
in Fig.~\ref{fig6}(b) and exhibits a~nonmonotonic behavior in the
transition region (therefore called ``backbending effect''). The
microcanonical transition temperature is obtained by a~Maxwell
construction in that region, indicated by horizontal lines in
Fig.~\ref{fig6}(b). We note, in particular, that for
$\epsilon_{\mathrm{f}}=1$ and $2$, backbending does not occur. The
inflection points in this energy region indicate second-order-like
transitions and correspond to the $\Theta$
transition in the bulk. This agrees
with previous observations that the ``strength'' of the first-order
signal decreases with decreasing substrate adsorption
strength~\cite{monika10arxiv}. A more detailed analysis of the structural
transitions in the polymer--wire system by means of microcanonical
thermodynamics will be presented in a forthcoming paper~\cite{tvlong10tbp}.
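The Maxwell construction itself can be phrased compactly as a double-tangent
(concave-hull) construction on $S(E)$: the hull segment bridging the convex
intruder has slope $1/T_\mathrm{ads}$, and its energetic extent is the latent
heat. A schematic Python implementation for a single first-order-like
transition (with $k_\mathrm{B}=1$; this mirrors, but is not, our analysis
code):
\begin{verbatim}
import numpy as np

def maxwell_construction(e_bins, ln_g):
    """Upper concave hull of S(E) = ln g(E); the bridged gap marks the transition."""
    pts = list(zip(e_bins, ln_g))
    hull = []                              # indices of hull points, left to right
    for i, (x, y) in enumerate(pts):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = pts[hull[-2]], pts[hull[-1]]
            if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) >= 0:
                hull.pop()                 # last point lies below the new chord
            else:
                break
        hull.append(i)
    gaps = np.diff([pts[i][0] for i in hull])
    j = int(np.argmax(gaps))               # widest gap spans the convex intruder
    (ea, sa), (eb, sb) = pts[hull[j]], pts[hull[j + 1]]
    beta_t = (sb - sa) / (eb - ea)         # inverse transition temperature
    return 1.0 / beta_t, eb - ea           # T_ads and latent heat
\end{verbatim}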
\paragraph{Acknowledgment}
This work is supported by the Umbrella program under Grant No.~SIM6
and by supercomputer time provided by the Forschungszentrum J\"ulich under Pro\-ject
Nos.~jiff39 and jiff43.
\vspace{-6.6pt}
\section{Introduction}
Collider experiments at the TeV scale, in particular
the LHC experiments, are
expected to shed light on the mechanism of electroweak symmetry breaking
and to open up new horizons concerning our understanding of elementary particle
interactions.
In order to achieve these goals, expected signal as well as background rates
should be well under control, which implies
that a multitude of scattering processes should be
known at next-to-leading order (NLO) accuracy.
Over the last years, enormous efforts have been made to calculate NLO corrections,
in QCD as well as in the electroweak sector. For a review
see e.g. \cite{Bern:2008ef}.
These calculations in general involve two parts, the treatment of extra real emission
and the calculation of virtual corrections, i.e. one-loop amplitudes.
While the calculation of one-loop amplitudes with up to four external particles
has reached a quite mature state meanwhile, and automated tools
have been developed already some time ago~\cite{vanOldenborgh:1989wn,Mertig:1990an,Hahn:1998yk,Yuasa:1999rg,Hahn:2000kx},
the calculation of processes with five or more external legs
required and boosted new developments in various directions,
for recent developments see
e.g. \cite{Bern:2008ef,Binoth:2005ff,Denner:2005fg,Denner:2005nn,Ellis:2005zh,Ossola:2006us,Binoth:2006hk,Anastasiou:2006gt,Bern:2007dw,Ellis:2007br,Kilgore:2007qr,Britto:2008vq,Britto:2008sw,Giele:2008ve,Mastrolia:2008jb,Catani:2008xa,Ellis:2008ir,Glover:2008ffa}.
Initially, NLO calculations have mostly been done on a process-by-process basis,
but fortunately we are moving towards automation also
for multi-particle processes, as can be seen from the
tools which have been constructed recently.
For the automated calculation of one-loop amplitudes with more
than four external legs, there are
the publicly available programs
{\tt FormCalc/Loop\-Tools}\,\cite{Hahn:1998yk,Hahn:2000kx} which recently have been extended to
5-point processes \cite{Hahn:2006ig,Hahn:2006qw} and
the program {\tt CutTools} \cite{Ossola:2007ax} which is
based on a numerical unitarity formalism~\cite{Ossola:2006us,Ellis:2007br,Giele:2008ve,Mastrolia:2008jb,Ellis:2008ir}.
Further, there are the programs
{\tt BlackHat}~\cite{Berger:2008sj} and {\tt Rocket}~\cite{Giele:2008bc},
relying also on cutting techniques.
Concerning the generation of subtraction terms for real radiation,
automated tools have become publicly available recently as well~\cite{Gleisberg:2007md,Seymour:2008mu,Hasegawa:2008ae,Frederix:2008hu}.
Integral libraries for massive~\cite{vanOldenborgh:1989wn,Hahn:2006qw} as well as
infrared divergent~\cite{Ellis:2007qk} scalar
integrals also exist.
As already mentioned, a public program for the reduction of tensor integrals so far is available only for
infrared-finite integrals and for up to five external legs~\cite{Hahn:1998yk,Hahn:2006qw}.
In this paper we present a program for the numerical reduction of tensor integrals
with up to six external legs.
In the present version, we focus mainly on massless QCD applications,
i.e. processes with massless internal particles.
The master integrals are implemented in the code
to be valid in all kinematic regions. Infrared divergences are
regulated dimensionally, i.e.\ the loop momenta live in $n=4-2\epsilon$ dimensions.
The output for a specific kinematic point
is a set of six numbers representing the real and imaginary parts of the
coefficients of the Laurent series in $\epsilon$,
i.e. the coefficients of the $1/\epsilon^2, 1/\epsilon$ poles and the finite part.
The reduction formalism is valid for massless as well as massive external and internal
particles. However, the basis integrals for processes involving {\it internal} massive
particles will be implemented in a forthcoming version.
We would like to emphasize that the program can be used not only for tensor reduction,
but also to calculate basis integrals, with or without Feynman parameters in the numerator,
and therefore is also of interest for calculations where the integral coefficients have
been determined by unitarity cut techniques: {\tt golem95} can be used as a library for
master integrals.
The paper is organised as follows. In section 2, we review shortly the theoretical background.
Section 3 contains a brief summary of the software structure, while section 4 contains a detailed
description of the individual components of the program.
The installation instructions are given in section 5, and section 6 contains the descriptions
of three different test runs:
the calculation of a form factor for a rank five five-point function,
the calculation of all form factors for rank one six-point functions in one go,
and finally the calculation of a full amplitude: the helicity amplitudes for
light-by-light scattering.
We give an outlook on future versions in section 7 and explain technical details in appendices
\ref{landau} and \ref{onedimint}.
The code comes with a number of demonstration programs, illustrating for example
the behaviour near a scattering singularity or the relation to LoopTools notation.
All these demonstration programs are listed in Appendix \ref{demos}.
\section{Theoretical background}
The program is an implementation of the formalism developed in Ref.~\cite{Binoth:2005ff}.
Here we will summarize only its main features relating to the {\tt golem95} program,
for further details we refer to \cite{Binoth:2005ff}.
\subsection{Form Factors}
\begin{figure}[ht]
\unitlength=1mm
\begin{picture}(150,60)
\put(55,5){\includegraphics[width=5cm, height=5cm]{n_point.eps}}
\put(50,15){$p_{N-2}$}
\put(60,5){$p_{N-1}$}
\put(85,5){$p_{N}$}
\put(100,15){$p_{1}$}
\put(102,34){$p_{2}$}
\put(92,50){$p_{3}$}
\put(70, 54){$p_{4}$}
\put(87,21){\footnotesize $N$}
\put(90,28){\footnotesize $1$}
\put(87,35){\footnotesize $2$}
\put(79,38){\footnotesize $3$}
\end{picture}
\caption{General $N$-point one-loop graph with momentum and propagator labelling.}
\label{fig1}
\end{figure}
A general one-loop tensor integral of rank $r$ can be written as
\begin{eqnarray}
I^{n,\,\mu_1\ldots\mu_r}_N(a_1,\ldots,a_r) =
\int \frac{d^n k}{i \, \pi^{n/2}}
\; \frac{q_{a_1}^{\mu_1}\,\dots q_{a_r}^{\mu_r}}{
(q_1^2-m_1^2+i\delta)\dots (q_N^2-m_N^2+i\delta)}
\label{eq0}
\end{eqnarray}
where $q_a=k+r_a$, and $r_a$ is a combination of external momenta.
For the diagram in Fig.~\ref{fig1}, $r_i=\sum_{j=1}^i p_j$.
Our method is defined in $n=4-2\epsilon$ dimensions and thus is
applicable to general scattering processes with arbitrary
propagator masses.
Taking integrals of the form (\ref{eq0}), i.e. with $q_{a}^{\mu}$
instead of just $k^\mu$ in the numerator,
as building blocks has two advantages: first, combinations of
loop and external momenta appear naturally in Feynman rules,
second, it allows for a formulation of the tensor reduction which
manifestly maintains
the invariance of the integral under a shift $k \to k+r_0$ in the loop momentum.
Such a shift can be
absorbed into a redefinition of the $r_j, \,r_j \to r_j - r_0$.
By setting $a_1,\ldots, a_r=N$, and using momentum conservation to set $r_N=0$,
we can always retrieve the commonly used form
\begin{equation}
I^{n,\,\mu_1\ldots\mu_r}_N(N,\dots,N) =
\int \frac{d^n k}{i \, \pi^{n/2}}\,
\frac{k^{\mu_1}\ldots k^{\mu_r}}{(q_1^2-m_1^2+i\delta)\dots (q_N^2-m_N^2+i\delta)}\;.
\label{conventional}
\end{equation}
The Lorentz structure of the integral (\ref{eq0}) is carried by
tensor products of the metric $g^{\mu\nu}$ and the difference vectors
\begin{equation}
\Delta_{ij}^\mu=
r_i^\mu - r_j^\mu\;,
\end{equation}
which are shift invariant.
Therefore, tensor integrals are expressible by linear combinations
of such Lorentz tensors and
{\it form factors} $A^{N,r}_{l_1\cdots l_r}$,
$B^{N,r}_{l_1\cdots l_{r-2}}$, $C^{N,r}_{l_1\cdots l_{r-4}}$,
defined by
\begin{eqnarray}
\lefteqn{I^{n,\,\mu_1\ldots\mu_r}_N(a_1,\ldots,a_r;\,S) =}
\nonumber \\
& &
\;
\sum_{j_1\cdots j_{r}\in S} \;\;\;
\left[
\Delta_{j_1\cdot}^{\cdot} \cdots \Delta_{j_r\cdot}^{\cdot}
\right]^{\{\mu_1\cdots\mu_r\}}_{\{a_1\cdots a_r\}} \, A_{j_1 \ldots ,j_{r}}^{N,r}(S)
\nonumber\\
&+&
\sum_{j_1\cdots j_{r-2}\in S} \,
\left[
g^{\cdot\cdot} \Delta_{j_1\cdot}^{\cdot} \cdots \Delta_{j_{r-2}\cdot}^{\cdot}
\right]^{\{\mu_1\cdots\mu_r\}}_{\{a_1\cdots a_{r}\}}\, B_{j_1 \ldots,j_{r-2}}^{N,r}(S)
\nonumber\\
&+&
\sum_{j_1\cdots j_{r-4}\in S} \,
\left[
g^{\cdot\cdot}g^{\cdot\cdot} \Delta_{j_1\cdot}^{\cdot} \cdots
\Delta_{j_{r-4}\cdot}^{\cdot}
\right]^{\{\mu_1\cdots\mu_r\}}_{\{a_1\cdots a_{r}\}}\, C_{j_1 \ldots ,j_{r-4}}^{N,r}(S)
\label{fofageneral}
\end{eqnarray}\noindent
where $[\cdots]^{\{\mu_1\cdots\mu_r\}}_{\{a_1\cdots a_r\}}$
denotes the distribution of the $r$ Lorentz indices $\mu_i$, and momentum
labels $a_i$ to the vectors $\Delta_{j\,a_i}^{\mu_i}$ in all distinguishable ways.
$S$ denotes an ordered
set of propagator labels, related to
the kinematic matrix ${\cal S}$, defined by
\begin{eqnarray}
\mbox{$\cal S$}_{ij} &=& (r_i-r_j)^2-m_i^2-m_j^2\;\quad ; \;\quad i,j\in\{1,\ldots,N\}\;.
\label{eqDEFS}
\end{eqnarray}\noindent
There is a one-to-one correspondence between $\mbox{$\cal S$}_{ij}$
and the set $S=\{1,\ldots,N\}$.
We recall that standard form factor representations can be simply obtained
by replacing $a_j=N$ for all $j$, together with $r_N=0$.
This also shows that the form factors do {\em not} depend on the introduction
of the difference vectors $\Delta_{ij}^\mu$.
The form factors are shift invariant by themselves.
Therefore the program {\tt golem95} can be used without ever introducing
difference vectors, if the user prefers not to do so.
Due to the fact that for $N\geq 5$,
four linearly independent external vectors form a basis
of Minkowski space, the tensor reduction for $N\geq 6$
can be done in such a way that only form factors for $N\le 5$
are needed.
Therefore, the Lorentz structure of $(N>5)$-point rank $r$ tensor integrals
does not require the introduction of additional factors of $g^{\mu\nu}$
as compared to the $N=5$ case, only additional external vectors
appear. We note that for $N=5$,
one could already express
the metric by external momenta, but this would introduce
inverse Gram determinants.
In \cite{Bern:1993kr}, it is shown that all tensor
five-point functions can
be reduced to some basis integrals without generating higher dimensional five-point
functions. In \cite{Binoth:2005ff}, a formal proof of this fact can be found,
as well as a reduction method which avoids both inverse Gram determinants
and spurious higher dimensional five-point functions.
A method where inverse Gram determinants
in the reduction from five-point to four-point integrals are absent is also presented in
Ref. \cite{Denner:2005nn}.
The form factors are linear combinations of reduction coefficients and
basis\footnote{We call them ``basis integrals'' because
they are the endpoints of our reduction, although
they do not form a basis in the mathematical sense.}
integrals, where our basis integrals are not necessarily scalar
integrals, as explained in section \ref{basisints}.
The reduction coefficients are derived from the kinematic matrices
${\cal S}$, where we define
\begin{eqnarray}
b_i&=&\sum_{k\in S}{\cal S}_{ki}^{-1}\;,\; B=\sum_{i \in S} b_i \;.
\label{bi}
\end{eqnarray}\noindent
The quantity $B$ is related to the Gram determinant by
\begin{equation}
B\;\det{\cal S}= (-1)^{N+1}\det G\;.\label{sumB}
\end{equation}
The form factors are all given explicitly in \cite{Binoth:2005ff}.
As an example, a rank two pentagon integral is represented as
\begin{eqnarray}
I_5^{n,\mu_1 \mu_2}(a_1,a_2;S)
&=&
\sum_{l_1,l_2 \in S} \;
\Delta^{\mu_1}_{l_1 \, a_1} \; \Delta^{\mu_2}_{l_2 \, a_2} \,
A^{5,2}_{l_1 \, l_2}(S) + g^{\mu_1 \, \mu_2} \, B^{5,2}(S)
\label{eqNpr2}\\
&&\nonumber\\
B^{5,2}(S)&=&- \frac{1}{2} \, \sum_{j \in S} \, b_j \,
I_4^{n+2}(S\setminus\{j\})\nonumber\\
A^{5,2}_{l_1 \, l_2}(S)&=&\sum_{j \in S} \,
\left( \, \icalst{j \, l_1} \, b_{l_2} + \icalst{j \, l_2} \,
b_{l_1} - 2 \, \icalst{l_1 \, l_2} \, b_{j} + b_{j} \,
{\cal S}^{\{j\}-1}_{l_1 \, l_2} \right) \, I^{n+2}_4(S\setminus\{j\}) \nonumber \\
& & \mbox{} + \frac{1}{2} \, \sum_{j \in S} \,
\sum_{k \in S\setminus\{j\}} \, \left[ \icalst{j \, l_2} \,
{\cal S}^{\{j\}-1}_{k \, l_1} + \icalst{j \, l_1} \, {\cal S}^{\{j\}-1}_{k \, l_2}
\right]
I_3^{n}(S\setminus\{j,k\}) \nonumber
\end{eqnarray}\noindent
and it is the form factors like $A^{5,2}_{l_1\,l_2}(S), B^{5,2}(S)$ which are implemented in
{\tt golem95}.
The program {\tt golem95} can be used for amplitude calculations in several ways:
One approach, which aims to avoid tensor integrals of high rank,
is to cancel the reducible numerators of an expression before
interfacing to {\tt golem95} to calculate the irreducible tensor integrals.
However, as all form factors for maximal rank (in a renormalisable gauge)
are implemented, the expression for an amplitude can also be
interfaced to {\tt golem95} without performing any cancellations
between numerators and propagators. This has the advantage that
the required algebraic manipulations are minimal, and that even the
Dirac traces, which often appear as coefficients
of the form factors, can be done numerically.
\subsection{Feynman parameter representations}\label{Feynpar}
The {\tt golem95} program uses the fact that tensor integrals
are related to Feynman parameter integrals with Feynman parameters in the numerator.
The basic object is the set $S$, containing the labels of the propagators
which define the integral.
A scalar integral, after Feynman parametrisation, can be written as
\begin{eqnarray}
I^n_N(S) &=& (-1)^N\Gamma(N-\frac{n}{2})\int \prod_{i=1}^N dz_i\,
\delta(1-\sum_{l=1}^N z_l)\,\left(R^2\right)^{\frac{n}{2}-N}\nonumber\\
&& R^2 =
-\frac{1}{2} \sum\limits_{i,j=1}^N z_i\,\mbox{$\cal S$}_{ij} z_j\,\,-i\delta
\;.
\label{isca2}
\end{eqnarray}\noindent
In general, a one-loop $N$-point amplitude will contain
$N$-point integrals as well as $(N-1),(N-2),\ldots, (N-M)$-point integrals
with tree graphs attached to some of the external legs of the loop integral.
The latter are characterised by the omission (``pinch'') of some propagators
(say $j_1,\ldots,j_m$) of the ``maximal" one loop $N$-point graph,
and therefore correspond to a subset of $S$ where certain
propagator labels are missing,
$S\setminus\{j_1,\ldots,j_m\}$. The program {\tt golem95} is based on this concept of
sets characterising the integrals.
The general relation between tensor integrals and parameter integrals
with Feynman parameters in the numerator is
well known~\cite{Davydychev:1991va,Tarasov:1996br,Bern:1992em,Binoth:1999sp}
\begin{eqnarray}
&&I^{n,\,\mu_1\ldots\mu_r}_N(a_1,\ldots,a_r\,;S)
=
(-1)^r \sum_{m=0}^{[r/2]} \left( -\frac{1}{2} \right)^m\nonumber\\
&&
\sum_{j_1\cdots j_{r-2m}=1}^N \left[
(g^{..})^{\otimes m}\,\Delta_{j_1\cdot}^{\cdot} \cdots \Delta_{j_{r-2m}\cdot}^{\cdot}
\right]^{\{\mu_1\cdots\mu_r\}}_{\{a_1\cdots a_r\}}
\;
I_N^{n+2m}(j_1 \ldots ,j_{r-2m}\,;S)\;,
\label{eq32}
\end{eqnarray}\noindent
where $I_N^{n+2m}(j_1 \ldots ,j_{r-2m}\,;S)$
is an integral with Feynman parameters in the numerator.
$[r/2]$ stands for the largest integer less than or equal to $r/2$ and the symbol
$\otimes m$ indicates
that $m$ powers of the metric tensor are present.
Feynman parameter integrals corresponding to
diagrams where propagators $l_1,\dots,l_m$ are pinched
with respect to the ``maximal'' topology
can be defined as
\begin{eqnarray}
&&I^n_N(j_1,\dots,j_r;S\setminus \{l_1,\dots,l_m\}) =(-1)^N\Gamma(N-\frac{n}{2})
\nonumber\\
&&
\int \prod_{i=1}^N dz_i\,
\delta(1-\sum_{k=1}^N z_k)\,
\delta(z_{l_1})\dots \delta(z_{l_m})z_{j_1}\dots z_{j_r}\left(R^2\right)^{n/2-N}\;.
\label{isca_pinch}
\end{eqnarray}\noindent
\subsection{Basis integrals}\label{basisints}
The basis integrals, i.e. the endpoints of our reduction,
are 4-point functions in 6 dimensions
$I_4^6$, which are IR and UV finite, UV divergent 4-point functions in
$n+4$ dimensions, and various 2-point and 3-point functions, some of
the latter with Feynman parameters in the numerator. This provides us with a very
convenient separation of IR/UV divergences, as the IR poles are
exclusively contained in
the triangle functions. Explicitly, our reduction
basis is given by integrals of the type
\begin{eqnarray}\label{basis_integral}
I^{n}_3(j_1, \ldots ,j_r) &=&
-\Gamma \left(3-\frac{n}{2} \right) \, \int_{0}^{1}
\prod_{i=1}^{3} \, d z_i \, \delta(1-\sum_{l=1}^{3} z_l)
\, \frac{z_{j_1} \ldots z_{j_r}}{
(-\frac{1}{2}\, z \cdot \mbox{$\cal S$}
\cdot z-i\delta)^{3-n/2}}\;,\nonumber\\
I^{n+2}_3(j_1) &=&
-\Gamma \left(2-\frac{n}{2} \right) \, \int_{0}^{1}
\prod_{i=1}^{3} \, d z_i \, \delta(1-\sum_{l=1}^{3} z_l)
\, \frac{z_{j_1}}{
(-\frac{1}{2}\, z \cdot \mbox{$\cal S$}
\cdot z-i\delta)^{2-n/2}}\;,\nonumber\\
I^{n+2}_4(j_1, \ldots ,j_r) &=&
\Gamma \left(3-\frac{n}{2} \right) \, \int_{0}^{1}
\prod_{i=1}^{4} \, d z_i \, \delta(1-\sum_{l=1}^{4} z_l)
\, \frac{z_{j_1} \ldots z_{j_r}}{
(-\frac{1}{2}\, z \cdot \mbox{$\cal S$}
\cdot z-i\delta)^{3-n/2}}\;,\nonumber\\
I^{n+4}_4(j_1) &=&
\Gamma \left(2-\frac{n}{2} \right) \, \int_{0}^{1}
\prod_{i=1}^{4} \, d z_i \, \delta(1-\sum_{l=1}^{4} z_l)
\, \frac{z_{j_1}}{
(-\frac{1}{2}\, z \cdot \mbox{$\cal S$}
\cdot z-i\delta)^{2-n/2}}\;,\nonumber
\end{eqnarray}
where $r^{\rm{max}}=3$, as well as $I^{n}_3,I^{n+2}_3,I^{n+2}_4,I^{n+4}_4$
with no Feynman parameters in the numerator,
and two-point functions.
Note that $I^{n+2}_3$ and $I^{n+4}_4$ are UV divergent, while $I^{n}_3$
can be IR divergent. In the code, the integrals are represented as
arrays containing the coefficients of their Laurent expansion in $\epsilon=(4-n)/2$.
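To illustrate this representation: each result is a triple $(P_2,P_1,P_0)$
of complex coefficients of $1/\epsilon^2$, $1/\epsilon$ and $\epsilon^0$,
and the operations needed when assembling amplitudes from form factors,
addition and rescaling by $\epsilon$-independent coefficients, act
componentwise. A minimal Python analogue (the library itself provides this
through a Fortran type, see section 4; the numerical values below are
hypothetical):
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Laurent:
    """Coefficients of 1/eps^2, 1/eps and eps^0 of a one-loop result."""
    p2: complex
    p1: complex
    p0: complex

    def __add__(self, other):
        return Laurent(self.p2 + other.p2, self.p1 + other.p1,
                       self.p0 + other.p0)

    def __rmul__(self, c):   # c: eps-independent (complex) coefficient
        return Laurent(c * self.p2, c * self.p1, c * self.p0)

# assembling c1*FF1 + c2*FF2 from two (hypothetical) form-factor results:
ff1 = Laurent(0.0, 1.5 + 0.5j, -2.0 + 1.0j)
ff2 = Laurent(0.0, 0.0, 0.25j)
amp = (2.0 + 0.0j) * ff1 + (1.0 / 3.0) * ff2
\end{verbatim}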
Further reduction of these integrals to scalar basis integrals (i.e. integrals with
no Feynman parameters in the numerator) introduces factors of $1/B$, i.e.
inverse Gram determinants. A particular feature of {\tt golem95} is the fact that
the above integrals are {\it not} reduced to scalar basis integrals
in cases where $B$ becomes small, thus avoiding problems with small inverse determinants.
In these cases, the above integrals are evaluated numerically.
As $B= (-1)^{N+1}\det(G)/\det({\cal S})$ is a dimensionful quantity,
the switch to the numerical evaluation of the basis integrals is
implemented such that the value of the dimensionless parameter $\hat{B}$ is tested, where
\begin{equation}
\hat{B}=B\times (\rm{largest \; entry \; of \;} {\cal S})\;.
\label{bhat}
\end{equation}
If $\hat{B}>\hat{B}^{\rm{cut}}$, the reduction is performed; otherwise, the program
switches to the direct numerical evaluation of the integral.
The default value is $\hat{B}^{\rm{cut}}=0.005$.
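These quantities are straightforward to obtain from a numerical kinematic
matrix. The following Python sketch computes $b_i$, $B$ and $\hat{B}$
according to Eqs.~(\ref{bi}) and~(\ref{bhat}), using for illustration the
five-point matrix of the test run in section 6.1; it mimics, but is not,
the {\tt golem95} implementation, and the ``largest entry'' is taken in
magnitude:
\begin{verbatim}
import numpy as np

S = np.array([[ 0.,  0., -3., -4.,  0.],
              [ 0.,  0.,  0.,  6., 15.],
              [-3.,  0.,  0.,  0.,  2.],
              [-4.,  6.,  0.,  0.,  0.],
              [ 0., 15.,  2.,  0.,  0.]])

S_inv = np.linalg.inv(S)
b = S_inv.sum(axis=0)              # b_i = sum_k (S^-1)_{ki}
B = b.sum()
B_hat = B * np.abs(S).max()        # dimensionless test quantity

B_HAT_CUT = 0.005                  # default value
use_reduction = abs(B_hat) > B_HAT_CUT
print(b, B, B_hat, use_reduction)
\end{verbatim}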
A major improvement with respect to the numerical evaluation method
used in \cite{Binoth:2005ff} is the following: while in \cite{Binoth:2005ff}
the numerical evaluation of box integrals was based on three-dimensional
parameter representations, we use a certain one-dimensional parameter representation
here, obtained after performing two integrations analytically, as outlined in Appendix \ref{onedimint}.
In this way one can use deterministic integration routines, leading to a fast and precise numerical evaluation. This has been done for box integrals with up to three off-shell legs and all triangle
integrals.
The relative error to be achieved in the numerical integration has been set to
the default value $10^{-8}$. If this precision has not been reached, the program will
write a message to the file {\tt error.txt}.
In some cases, calculating in double precision Fortran may not be sufficient.
The code is designed such that it can be compiled in quadruple precision as well.
We would like to emphasize that the program also can be used as a library for
master integrals with massless internal particles. For example, the scalar box integrals
in $n$ dimensions, with up to four off-shell external legs, can be calculated by
just calling the form factor $A^{4,0}$. Depending on the kinematics, the
program will call the appropriate box type automatically.
The scalar box integrals in $n+2$ and
$n+4$ dimensions are related to
the form factors by $B^{4,2}=-I^{n+2}_4/2$, $C^{4,4}=I^{n+4}_4/4$, analogous for $N=3$.
\section{Overview of the software structure}
The structure of the {\tt golem95} program is the following:
There are four main directories:
\begin{enumerate}
\item {\bf src:} the source files of the program
\item {\bf demos:} some programs for demonstration
\item {\bf doc:} documentation which has been created with robodoc~\cite{robodoc}
\item {\bf test:} supplements the
demonstration programs, containing files to produce form factors with user-defined kinematics.
The user can specify the rank, numerator, numerical point
etc. via a steering file.
\end{enumerate}
\section{Description of the individual software components}
Here we give a short summary of the contents of the individual modules.
A detailed description of the usage, dependencies and output of each
module is given at the beginning of each file of the program
and can also be read in {\it html} format by loading
the file {\tt masterindex.html} from the subdirectory {\tt doc}
into the browser and following the various links.\\
The program is written in Fortran95 and is downwards compatible
to Fortran90.
The directory {\bf src} contains the subdirectories
\begin{itemize}
\item
{\bf form\_factor:} contains five modules to compute the form factors
for two-point to six-point functions:\\
{\tt form\_factor\_2p.f90,
form\_factor\_3p.f90,
form\_factor\_4p.f90},\\
{\tt form\_factor\_5p.f90,
form\_factor\_6p.f90}.
\item {\bf integrals: } contains the subdirectories {\bf four\_point, three\_point, two\_point}.
\begin{description}
\item[four\_point:] contains six modules to compute the four-point functions
with $p_i^2\not=0$ holding for four, three, two, one or none of the external legs:
{\tt function\_4p1m.f90,
function\_4p2m\_opp.f90,
function\_\-4p2m\-\_adj.f90,
function\_4p3m.f90,
function\_4p4m.f90,
generic\_function\_4p.\-f90}.
\item[three\_point:] contains six modules to compute the three-point functions
with three, two or one external legs off-shell: \\
{\tt function\_3p1m.f90,
function\_3p2m.f90,
function\_3p3m.f90,}\\
{\tt gene\-ric\_\-function\_3p.f90,
mod\_h0.f90, mod\_hf.f90, mod\_he.f90}.
\item[two\_point:] contains one module to compute the two-point functions: \\
{\tt generic\_function\_2p.f90}.
\end{description}
\item {\bf kinematic:} contains two modules to compute the matrix ${\cal S}$
and its inverse
and to compute the reduction coefficients $b_i$: \\
{\tt matrice\_s.f90, inverse\_matrice.f90}.\\
The definition of $\hat{B}$ (see eq.~(\ref{bhat})\,) is contained in {\tt matrice\_s.f90}.
\item {\bf module: } contains auxiliary functions/subroutines and the
definition of some default parameters: \\
The file {\tt parametre.f90} contains the parameters defining the switch
between the reduction
down to scalar basis integrals (which are implemented in analytic form)
versus the numerical evaluation of integrals
(with or without Feynman parameters in the numerator),
as explained in section \ref{basisints}.
The default value for $\hat{B}$ for three-point as well as four-point functions
has been set to 0.005.
The other default parameters for the numerical integration
are also fixed in {\tt parametre.f90}.
Further, there is a switch to calculate the rational parts of amplitudes
only. The default is {\tt tot} to calculate the
complete form factors. If {\tt tot} is replaced by {\tt rat}, only the
rational parts
will be calculated.
\noindent
The auxiliary functions will not all be listed here, we only point to
the most important ones:
\begin{itemize}
\item
Polylogarithms and other special functions are defined in
{\tt z\_log.f90, zdilog.f90, kronecker.f90, constante.f90}.
\item
{\tt spinor.f90} contains functions to compute scalar products of four-momenta,
spinorial products and totally antisymmetric epsilon tensors.
\item
The files {\tt preci\_double.f90} and
{\tt preci\_quad.\-f90} are needed for the switch between double precision and
quadruple precision. The default is double precision.
If quadruple precision should be used,
one has to define {\tt \$precision = "quadruple"} in the file {\tt configure.pl}.
Note that quadruple precision is at present only supported by the {\tt ifort} compiler.
\item
The file {\tt form\_factor\_type.f90} defines a type {\it form\_factor}
such that form factors, which are arrays of three complex numbers,
can be involved in algebraic manipulations.
\item
{\tt cache.f90} is used to reserve memory in order to store results for three-point or
four-point functions which already have been computed.
\end{itemize}
\item {\bf numerical: } contains two modules for the numerical integration : \\
{\tt mod\_adapt\_\-gauss.\-f90, mod\_numeric.f90}.
\end{itemize}
Concerning the numerical integration, the following features should be pointed out:
\begin{itemize}
\item The user can change the integration method for the numerical integration
of the one-dimensional parameter integrals by
changing the module {\tt numerical\_evaluation} in the file {\tt mod\_numeric.f90 }
in the directory \\
{\tt src/nu\-me\-ri\-cal}.
\item The values for the cuts defining the switch to a one-dimensional numerical
integration of the basis integrals are given in {\tt parametre.f90}
and can be changed easily by the user.\\
{\em Note:} If the user wants to change the
default values defined in {\tt parametre.\-f90}, it is
{\em not} necessary to recompile the library.
If the desired values are defined in the main program,
the default values will be overwritten.
The command {\tt use parametre} still has to be included in the header of the main program.
\item For boxes with 4 off-shell external legs, the expressions
for one-dimensional numerical integrations are not worked out
in this version. Here the program will always reduce numerically to scalar
basis integrals, irrespective of the size of the Gram determinants.
\end{itemize}
\section{Installation instructions}
The program can be downloaded as a .tar.gz archive from the following URL:
{\tt http://lappweb.in2p3.fr/lapth/Golem/golem95\_v1.0.tar.gz}.
The installation instructions given below also can be found in the {\tt Readme} file
coming with the code.
To install the {\tt golem95} library, type the following commands:\\
{\tt ./configure.pl [--install\_path=mypath] [--compiler=mycompiler]}\\
{\tt make}\\
{\tt make install}
Please note that {\tt mypath} must be the absolute path of the directory
where you would like the library to be installed.
If no option for {\tt install\_path} is given,
a subdirectory of
the current directory with the name {\tt libgolem}
will be created and the library will be installed in
this subdirectory.
For example, if you want to put the library into the directory \\
{\tt /home/myname/lib/libgolem} and use the compiler {\tt g95}, then type:\\
{\tt ./configure.pl --install\_path=/home/myname/lib/libgolem --compi\-ler=g95}\\
{\tt make}\\
{\tt make install}
The directory {\tt /home/myname/lib/libgolem} will then contain a collection of
files of type {\tt .mod} plus a file named {\tt libgolem.a} which is the {\tt golem95} library.
If no option for the compiler is specified, the installation script will search for
fortran 95 compilers installed on your system and will take the first
matching compiler found.
The program has been tested with the GNU compilers g95 and gfortran,
the intel compiler ifort,
the dec compiler f95, the NAG compiler f95 and
the portland compiler pgf95.
\section{Test run description}
The program comes with several demonstration programs located in the subdirectory
{\tt demos}.
A list of all options contained in the {\tt demos} directory is given in Appendix \ref{demos}.
We will describe some selected examples in the following.
\subsection{Rank five five-point form factor}
As an example for a test run, we first describe the calculation of
a form factor for a rank five 5-point integral, $A^{5,5}_{j_1\ldots j_5}$.
We choose $j_i=i$, i.e. $z_1\ldots z_5$ in the numerator.
Further, we choose the following numerical point (in terms of entries
of the kinematic matrix ${\cal S}$ containing the invariants):
\begin{equation}
{\cal S} =
\left( \begin{array}{ccccc}
0&p_2^2&s_{23}&s_{51}&p_1^2\\
p_2^2&0&p_3^2&s_{34}&s_{12}\\
s_{23}&p_3^2&0&p_4^2&s_{45}\\
s_{51}&s_{34}&p_4^2&0&p_5^2\\
p_1^2&s_{12}&s_{45}&p_5^2&0
\end{array}\right)
=\left( \begin{array}{ccccc}
0&0&-3&-4&0\\
0&0&0&6&15\\
-3&0&0&0&2\\
-4&6&0&0&0\\
0&15&2&0&0
\end{array}\right)
\end{equation}
These values are already implemented in the file {\tt demo\_5point.f90}
in the subdirectory {\tt demos}.
All the user has to do is the following:
\begin{itemize}
\item go to the subdirectory {\tt demos}
\item type``perl configure.pl". The shell will prompt for the choice of
the demo to be run:\\
{\tt Choose which demo program you want to run:\\
1) three-point functions\\
2) four-point functions\\
3) five-point functions\\
4) six-point functions\\
5) 4-photon helicity amplitudes\\
6) numerical stability demo: $\det G\to 0$\\
7) numerical stability demo: $\det S\to 0$\\
8) Golem $\leftrightarrow$ LoopTools conventions
}
\item
Choosing option 3 will produce the following output:\\
{\tt
you have chosen option 3: five-point functions\\
The Makefile has been created\\
Please run:\\
make\\
./comp.exe\\
}
\item Running ``make" will produce the executable {\tt comp.exe} where the
{\tt demo*.f90} files matching the choice above will be compiled automatically.
Running {\tt comp.exe} will prompt for the rank of the form factor to be calculated:\\
{\tt
Choose what the program should compute:\\
0) form factor for five-point function, rank 0\\
1) form factor for five-point function, rank 3 (z1*z2*z4)\\
2) form factor for five-point function, rank 5 (z1*z2*z3*z4*z5)\\
3) form factor for diagram with propagator 3 pinched, rank 0\\
4) form factor for diagram with propagators 1 and 4 pinched, rank 0
}
\item Choosing option 2 will produce the result
which will be written to the file {\tt test5point.txt} and looks as follows:
The kinematics is:
\begin{picture}(150,60)
\put(135,-150){\includegraphics[width=4cm, height=4cm]{5ptGolem.eps}}
\end{picture}
\begin{eqnarray*}
&& p_1+p_2+p_3+p_4+p_5 = 0\\
&&S(1,3) = (p_2+p_3)^2=-3.\\
&&S(2,4) = (p_3+p_4)^2=6.\\
&&S(2,5) = (p_1+p_2)^2=15.\\
&&S(3,5) = (p_4+p_5)^2=2.\\
&&S(1,4) = (p_1+p_5)^2=-4.\\
&&S(1,2) = p_2^2=0.\\
&&S(2,3) = p_3^2=0.\\
&&S(3,4) = p_4^2=0.\\
&&S(4,5) = p_5^2=0.\\
&&S(1,5) = p_1^2=0.
\end{eqnarray*}
A factor $\Gamma(1+\epsilon) \Gamma(1-\epsilon)^2/\Gamma(1-2 \epsilon)\,(4\pi\,\mu^2)^{\epsilon}$
is factored out from the result.\\
\begin{tabular}{ll}
result=& 1/$\epsilon^2$ * (0.0000000000E+00 + I* 0.0000000000E+00)\\
&+ 1/$\epsilon$ * (0.0000000000E+00 + I* 0.0000000000E+00)\\
&+ (-0.8615520644E-04 + I* 0.1230709464E-03)\\
CPU time=& 7.999000000000001E-003
\end{tabular}
\end{itemize}
We recall that we use the integral measure as in eq.~(\ref{eq0}).
The factor
$r_\Gamma=\Gamma(1+\epsilon) \Gamma(1-\epsilon)^2/\Gamma(1-2 \epsilon)\,(4\pi\,\mu^2)^{\epsilon}$
has been extracted from the integrals to comply with the conventions
of ref.~\cite{Binoth:2005ff} and the
$\overline{\rm{MS}}$ subtraction scheme.
Note that it may be advantageous to call {\tt golem95} with rescaled, dimensionless
invariants (e.g. $s_{ij}/\mu^2$), for example in cases where most of the invariants
have very small numerical values in all kinematic regions.
\subsection{Calculating all possible numerators at once}
In the previous example, the numerical point (and the type of numerator
for tensor integrals) has been fixed in the demo programs.
If the user would like to give the numerical point and the
Feynman parameters in the numerator as an {\it input}, he
can use the file {\tt param.input} in the subdirectory {\tt test}.
This setup also allows to calculate all possible numerators for a certain rank
in one go.
A typical example looks as follows:
Assume we would like to calculate the form factors for rank one six-point functions
for all possible numerators $z_j, j=1\ldots 6$, for the following numerical point
($p_i=(E_i,x_i,y_i,z_i)$):
\begin{eqnarray}
p_1&=&(0.5,0.,0., 0.5)\nonumber\\
p_2&=&(0.5,0.,0.,-0.5)\nonumber\\
p_3&=&(-0.19178191 ,-0.12741180,-0.08262477,-0.11713105)\nonumber\\
p_4&=&(-0.33662712, 0.06648281, 0.31893785, 0.08471424)\nonumber\\
p_5&=&(-0.21604814, 0.20363139,-0.04415762, -0.05710657)\nonumber\\
p_6&=&-\sum_{i=1}^5 p_i \nonumber
\end{eqnarray}\noindent
To use these momenta, go to the subdirectory {\tt test} and
edit\footnote{Alternatively, random momenta can be generated using the
program mom\_rambo.f, adapted from \cite{Kleiss:1985gy} and
also contained in the subdirectory {\tt test}.}
the file
{\tt momenta.\-dat}, writing each component of the above momenta
into a single line.
To calculate an $N$-point function,
the program will use the first $N$ momenta found in
{\tt momenta.dat} (respectively the momenta file specified in {\tt param.input}).
For $N=5$ and $N=6$, it is important that momentum
conservation, i.e. $\sum_{i=1}^N p_i = 0$, is fulfilled because
momentum conservation has been assumed to hold true in the reduction.
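Such checks are easily scripted. For instance, the following Python snippet
verifies momentum conservation and prints the invariant masses $p_i^2$ for
the momenta listed above (which all vanish for this particular point):
\begin{verbatim}
import numpy as np

p = np.array([[ 0.5,         0.,          0.,          0.5       ],
              [ 0.5,         0.,          0.,         -0.5       ],
              [-0.19178191, -0.12741180, -0.08262477, -0.11713105],
              [-0.33662712,  0.06648281,  0.31893785,  0.08471424],
              [-0.21604814,  0.20363139, -0.04415762, -0.05710657]])
p = np.vstack([p, -p.sum(axis=0)])             # p6 = -(p1+...+p5)

metric = np.diag([1., -1., -1., -1.])
print(p.sum(axis=0))                           # momentum conservation: ~0
print(np.einsum('ia,ab,ib->i', p, metric, p))  # p_i^2: ~0 here
\end{verbatim}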
To generate results, the user only has to do the following:
\begin{itemize}
\item edit the file {\tt param.input} to choose the number of legs, rank and numerator.
If only a particular numerator should be calculated,
give the labels of Feynman parameters, else put {\tt all} into the numerator field,
\item type {\tt perl maketest.pl}.
\end{itemize}
The program will automatically compile the corresponding functions
and run the executable.
The following output will be produced:
\begin{enumerate}
\item a separate output file called N[nb of legs][rank]\_[pt].out
for each individual numerator
([pt] denotes the ``label" of a particular numerical point,
which can be chosen by the user to distinguish results
for different numerical points)
\item a file called N[nb of legs][rank]\_[pt].numbers,
where all form factors that have been calculated
for a particular rank and number of legs and numerical point
are {\it appended}.
For example, if the option {\tt all} has been chosen to
calculate all possible combinations of Feynman parameters
in the numerator, this file will contain the results for all
these numerators.
The format is such that it can be read by {\tt Mathematica},
to allow direct comparisons to results obtained from algebraic programs.
If the result is $P_2/\epsilon^2+P_1/\epsilon+P_0$ for a
rank $r$ $N$-point form factor of type $A$,
the output will be a list
$a_N[j_1,\ldots,j_r]=\{{\cal R}e[P_2],{\cal I}m[P_2],{\cal R}e[P_1],{\cal I}m[P_1],{\cal R}e[P_0],{\cal I}m[P_0]\}$.
\end{enumerate}
For example, for rank one six-point functions at the numerical
point given above (``pt1"), having chosen {\tt all} in {\tt param.input}
to calculate all six possible
numerators, the program produces seven output files:
{\tt N6rank1zi\_pt1.out} for $i=1\ldots 6$ and the file
{\tt N6rank1\_pt1.numbers}.
While the files {\tt N6rank1zi\_pt1.out} contain, in addition to the result
for the particular numerator, also the kinematic point and CPU time information,
the file {\tt N6rank1\_pt1.numbers} just lists the results.
Note that for $N\ge 5$, individual form factors
are not uniquely defined because the metric tensor
$g_{\mu\nu}$ can be expressed by external
momenta, such that individual terms can be shifted between
form factors of type $A,B$ or $C$.
\subsection{Calculation of the 4-photon helicity amplitudes}
In order to show how {\tt golem95} can be embedded into the
calculation of full one-loop amplitudes, we give here the
calculation of the light-by-light scattering amplitude in massless QED
as a pedagogical example.
This amplitude is defined by
six Feynman diagrams where the four photons are attached to a closed
fermion loop in all possible ways. Diagrams
which differ by the charge flow only lead to the same value,
which leaves us with three different topologies defined by the
photon orderings $1243$, $1234$ and $1324$, respectively.
Each diagram is IR finite and UV divergent. The UV divergence only
cancels in the sum of the diagrams.
The results for the three independent helicity amplitudes
$++++$, $+++-$, $++--$ are well known, see for example
\cite{Binoth:2002xg,Bernicot:2008th}.
For completeness we list the analytic formulae, omitting
the irrelevant phases
\begin{eqnarray}
{\cal A}^{++++} &=& 8 \quad , \quad
{\cal A}^{+++-} = -8 \;,\nonumber \\
{\cal A}^{++--} &=& -8 \Bigl[ 1 + \frac{t-u}{s} \log\left(\frac{t}{u}\right)
+ \frac{t^2+u^2}{2 s^2} \Bigl( \log\left(\frac{t}{u}\right)^2 + \pi^2 \Bigr)\Bigr] \;\; .
\end{eqnarray}
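For cross-checking the numerical output of the demo against this closed
form, the nontrivial $++--$ amplitude is easily evaluated; a short sketch
(principal-branch logarithm, the $i\delta$ prescription is not tracked
here, and the kinematic point is arbitrary with $u=-s-t$):
\begin{verbatim}
import numpy as np

def a_pp_mm(s, t):
    u = -s - t
    l = np.log(complex(t / u, 0.0))
    return -8.0 * (1.0 + (t - u) / s * l
                   + (t**2 + u**2) / (2.0 * s**2) * (l**2 + np.pi**2))

print(a_pp_mm(1.0, -0.3))          # e.g. s = 1, t = -0.3, u = -0.7
\end{verbatim}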
The analytic expressions in terms of
form factors and Mandelstam invariants
which are given in the demo program {\tt demo\_4photon.f90} were obtained as
follows.
After working out the trace of gamma matrices one finds
for each graph a polynomial in scalar products of polarization
vectors, $\varepsilon_j$, external momenta $p_j$
and the $n=4-2\epsilon$ dimensional loop momentum $k$.
All reducible scalar products, i.e. those which can be written in terms of inverse propagators,
were cancelled directly. The remaining expressions, containing only
irreducible scalar products, are proportional to tensor integrals
which are transformed to form factors using eq.~(\ref{fofageneral}).
Each form factor now has scalar coefficients containing
polarisation vectors and external momenta, i.e.
$\varepsilon_i \cdot \varepsilon_j$, $\varepsilon_i \cdot p_j$,
$s=2 p_1 \cdot p_2$, $t=2 p_2\cdot p_3$ and $u=2 p_1\cdot p_3$,
where we defined all external momenta as incoming.
Using spinor helicity methods
one can map these coefficients to polynomials in the Mandelstam variables $s$, $t$ and $u=-s-t$.
Choosing reference momenta $p_2$, $p_1$, $p_4$,
$p_3$ for the polarization vectors $\varepsilon_1$, $\varepsilon_2$,
$\varepsilon_3$, $\varepsilon_4$ respectively,
one easily can show the following relations~\cite{Binoth:2003xk}
relevant for the $++++$ amplitude
\begin{eqnarray}
\varepsilon_1^+ \cdot \varepsilon_2^+ = -\frac{2 s}{tu} \varepsilon_1^+ \cdot p_3\,\varepsilon_2^+ \cdot p_3 &,&
\varepsilon_1^+ \cdot \varepsilon_3^+ = \frac{2 }{t } \varepsilon_1^+ \cdot p_4\,\varepsilon_3^+ \cdot p_1 \;,\nonumber\\
\varepsilon_1^+ \cdot \varepsilon_4^+ = \frac{2 }{u } \varepsilon_1^+ \cdot p_3\,\varepsilon_4^+ \cdot p_1 &,&
\varepsilon_2^+ \cdot \varepsilon_3^+ = \frac{2 s}{u } \varepsilon_2^+ \cdot p_4\,\varepsilon_3^+ \cdot p_2 \;,\nonumber\\
\varepsilon_2^+ \cdot \varepsilon_4^+ = \frac{2 s}{t } \varepsilon_2^+ \cdot p_3\,\varepsilon_4^+ \cdot p_2 & ,&
\varepsilon_3^+ \cdot \varepsilon_4^+ = -\frac{2 s}{tu} \varepsilon_3^+ \cdot p_1\,\varepsilon_4^+ \cdot p_1 \;,\nonumber\\
\varepsilon_1^+ \cdot p_3\,\varepsilon_2^+ \cdot p_3 \,\varepsilon_3^+ \cdot p_1\, \varepsilon_4^+ \cdot p_1
&=& \left(\frac{tu}{2s}\right)^2 \frac{[21][43]}{\langle 12 \rangle \langle 34\rangle}\;.
\label{4p}
\end{eqnarray}
The phase factor in the last line is irrelevant for observables and thus can be dropped.
All coefficients of the form factors of the $++++$ amplitude are now rational polynomials in $s$, $t$ and $u=-s-t$.
For the $+++-$ amplitude one needs instead of eq.\,(\ref{4p})
\begin{eqnarray}
\varepsilon_1^+ \cdot \varepsilon_4^- = \frac{2 }{t } \varepsilon_1^+ \cdot p_4\,\varepsilon_4^+ \cdot p_1 &,&
\varepsilon_2^+ \cdot \varepsilon_4^- = \frac{2 }{u } \varepsilon_2^+ \cdot p_4\,\varepsilon_4^+ \cdot p_2 \;,\nonumber\\
\varepsilon_3^+ \cdot \varepsilon_4^- = 0\qquad\qquad\qquad \;,\nonumber\\
\varepsilon_1^+ \cdot p_3\,\varepsilon_2^+ \cdot p_3 \,\varepsilon_3^+ \cdot p_1\,\varepsilon_4^- \cdot p_1
&=& \left(\frac{tu}{2s}\right)^2 \frac{[21] \langle 14 \rangle [31]}{\langle 12 \rangle [41] \langle 13 \rangle}
\end{eqnarray}
and for the $++--$ amplitude
\begin{eqnarray}
\varepsilon_1^+ \cdot \varepsilon_3^- = \frac{2 }{u } \varepsilon_1^+ \cdot p_3\,\varepsilon_3^+ \cdot p_1 &,&
\varepsilon_1^+ \cdot \varepsilon_4^- = \frac{2 }{t } \varepsilon_1^+ \cdot p_4\,\varepsilon_4^+ \cdot p_1 \;,\nonumber\\
\varepsilon_2^+ \cdot \varepsilon_3^- = \frac{2 }{t } \varepsilon_2^+ \cdot p_3\,\varepsilon_3^+ \cdot p_2 &,&
\varepsilon_3^- \cdot \varepsilon_4^- = -\frac{2 s}{tu} \varepsilon_3^- \cdot p_1\,\varepsilon_4^- \cdot p_1 \;,\nonumber\\
\varepsilon_1^+ \cdot p_3\,\varepsilon_2^+ \cdot p_3 \,\varepsilon_3^- \cdot p_1\,\varepsilon_4^- \cdot p_1
&=& \left(\frac{tu}{2s}\right)^2 \frac{[21]\langle 34 \rangle }{\langle 12 \rangle [ 34 ]}\;.
\end{eqnarray}
These relations define the different coefficients of the
form factors present in the file {\tt demo\_4photon.f90} which evaluates the
four photon amplitude\footnote{We note that the three helicity amplitudes can be evaluated in a much simpler way.
By applying spinor helicity methods at an earlier stage, one can achieve a representation without
any tensor four-point function. The given representation should only illustrate
a generic form factor representation of an amplitude.}.
The form
factors for each momentum ordering have to be evaluated only once.
We have compared our numerical result with the well-known results for these amplitudes
and find perfect agreement.
The program {\tt demo\_4photon.f90} can be used as a guideline for
expressing any amplitude with massless
loops in terms of form factors and scalar coefficients before evaluating it
with {\tt golem95}.
\section{Conclusions and Outlook}
We have presented the Fortran 95 program {\tt golem95} for the numerical evaluation of
tensor integrals up to rank six six-point functions.
The program is based on a form factor representation of tensor integrals and
performs the reduction to a certain set of basis integrals numerically.
The basis integrals are implemented in analytic form.
If during the reduction process an inverse determinant becomes small, the program
switches to a numerical evaluation of the (tensor-)integral without further
reduction, thus avoiding small denominators.
The numerical evaluation is based on one-dimensional parameter integral representations
for most of the basis integrals, allowing for a fast and precise numerical integration.
The results are given as
a set of three complex numbers representing the
coefficients of the Laurent series in the dimensional regularisation parameter $\epsilon$,
i.e. the coefficients of the $1/\epsilon^2, 1/\epsilon$ poles and the finite part.
The program can also be used as a library for master integrals (including infrared
divergent ones), as the form factors with no Feynman parameter labels
directly correspond to scalar integrals.
In the current version, master integrals with massive {\it internal} particles
are not implemented yet. They will be available in a forthcoming version.
There is no restriction on the number of massive {\it external } legs.
A future version will also combine the {\tt golem95} code for the form factor evaluation
with a code for the generation of amplitudes, thus moving towards a
full automatisation of the calculation of one-loop amplitudes.
\section*{Acknowledgements}
We would like to thank A.~Guffanti and G.~Sanguinetti for collaboration at an earlier stage of this work.
TB, GH and TR would like to thank the LAPTH for hospitality
while part of this work was carried out.
This research was supported by the UK Science and Technology Facilities Council
(STFC) and the Scottish Universities Physics Alliance (SUPA).
\renewcommand \thesection{\Alph{section}}
\setcounter{section}{0}
\section{Appendices}
\renewcommand{\theequation}{\Alph{section}.\arabic{equation}}
\setcounter{equation}{0}
\subsection{Landau singularities}\label{landau}
Besides the spurious appearance of powers of inverse Gram determinants caused
by the decomposition on scalar integrals in the reduction process, which can be
avoided e.g. with the method advocated here and in \cite{Binoth:2005ff},
another source of problems in the numerical
evaluation of scattering amplitudes may be caused by the occurrence of
{\em actual} kinematic singularities, the so-called Landau singularities
\cite{ELOP}.
The latter may appear in some diagrams contributing to the considered amplitude
whenever the determinant of the kinematic matrix ${\cal S}$ associated with
these diagrams - or with reduced diagrams obtained by one or several pinches
- vanishes. Typical cases of such singularities are
threshold singularities in loop calculations with internal and external masses,
collinear and infra-red singularities with massless internal and external lines.
Another type is that of scattering singularities
\cite{Bern:2008ef,Nagy:2006xy}, for which $(\det G)\to 0$ and
$\det S$ becomes proportional to $(\det G)^{2}$,
such that both
vanish simultaneously. The occurrence of these particular cases of
vanishing Gram determinants should not be confused with the spurious ones.
Individual
diagrams lead to infinities at such kinematic configurations, where a mass
singularity and a scattering singularity coincide.
In addition, it should be noted that for a given diagram, the reduction
algorithm breaks down\footnote{In the example of double parton scattering for $2$(massless)
$\to 2$ massive legs with no internal mass, it can be checked that for the four
leg diagram with two opposite masses, the equations determining $B$ and $b_{4}$
in the notations of ref. \cite{Binoth:2005ff} have no solution,
because $(\delta v).H.(\delta v) = 0$, hence the equation for $B$ becomes
$0\times B=1$, which obviously has no solution. For a discussion of
the reduction in exceptional kinematic configurations
see also \cite{Duplancic:2003tv}.} at such a scattering singularity,
as inferred from the relation $B \propto \det G / \det S \to \infty$.
As one combines the diagrams into scattering amplitudes,
gauge cancellations may occur analytically, which in general
reduces the degree of singularity as compared to individual
diagrams, or even makes the singularity bounded, as e.g. observed in
the 6 photon amplitudes\footnote{{\em Singularity} means {\em non-analyticity};
the latter can be either infinite - integrable or
not - or bounded.} \cite{Bern:2008ef,Bernicot:2007hs}.
On the other hand, the numerical combination
of the singularities from separate diagrams is expected to be problematic, and
leads to instabilities even in cases of expected finiteness. Note that this
problem is common to all methods based on the reduction of diagrams, so it
is not specific to our reduction formalism.
We note that the problem of large numerical cancellations
is inherent to any method based on the reduction to scalar master integrals
like $I_3^n, I_4^n$, as the latter may become linearly dependent near such singularities.
Depending on the inclusiveness of the observable to be calculated,
and the degree of the singularity, possible cures could be to
resort to multiple precision in some vicinity of the kinematic singularities,
and/or place a hole in the phase space around the singularity together with a smooth
interpolation over it. Certainly, in specific cases,
when the observable would be controlled
by the singularity, such methods would be inadequate.
\subsection{One-dimensional integral representations}\label{onedimint}
In this appendix we will derive representations of
IR finite box- and triangle integrals as one-dimensional
Feynman parameter integrals.
These representations have the advantage that they can be integrated
numerically in a very fast and precise way using deterministic
numerical integration routines. This approach is similar to the one in~\cite{Binoth:2002xh},
where one of the parameter integrations has been carried out analytically.
The program switches to this numerical evaluation if
$\hat{B}$ becomes smaller than a value defined in {\tt module/parametre.f90}
(the default is 0.005).
\subsubsection{Four-point functions}\label{secOMFPF}
Our starting point is given by the higher dimensional four-point functions $I^{n+2}_4,I^{n+4}_4$
given by
\begin{eqnarray}
I^{n+2}_4(j_1, \ldots ,j_r) &=&
\Gamma \left(3-\frac{n}{2} \right) \, \int_{0}^{1}
\prod_{i=1}^{4} \, d z_i \, \delta(1-\sum_{l=1}^{4} z_l)
\, \frac{z_{j_1} \ldots z_{j_r}}{
(-\frac{1}{2}\, z \cdot \mbox{$\cal S$}
\cdot z-i\delta)^{3-n/2}}\;,\nonumber\\
I^{n+4}_4(j_1) &=&
\Gamma \left(2-\frac{n}{2} \right) \, \int_{0}^{1}
\prod_{i=1}^{4} \, d z_i \, \delta(1-\sum_{l=1}^{4} z_l)
\, \frac{z_{j_1}}{
(-\frac{1}{2}\, z \cdot \mbox{$\cal S$}
\cdot z-i\delta)^{2-n/2}}\;,\nonumber
\end{eqnarray}
The reduction of these integrals to integrals with no Feynman parameters in
the numerator introduces inverse Gram determinants.
Therefore it can be advantageous to evaluate these integrals without further
reduction. To do so, we proceed as follows:\\
First, to get rid of the $\delta$ distribution, we make the following change of variables:
\begin{eqnarray}
z_1 & = & w \, (1-x)\; ,\;
z_2 = w \, x \, y \, z \; ,\;
z_3 = w \, x \, y \, (1-z) \; ,\;
z_4 = w \, x \, (1-y)
\label{eqCHANGEV}
\end{eqnarray}
Now, instead of computing directly the three-dimensional integrals numerically as
proposed in \cite{Binoth:2005ff}, we perform analytically the integration over $x$
and $y$ and integrate numerically over the leftover variable $z$, using
an adaptive Gauss-Kronrod method~\cite{numrecipes}.
For the cases treated in the {\tt golem95} library (no internal masses), the $x$ and $y$
integration for the six- and eight-dimensional four-point functions can be computed
using two basis integrals:
\begin{eqnarray}
\int^1_0 \, dx \, \frac{x^n}{A + B \, x} &=& J(n,A,B)\;,
\label{eqCAS1}\\
\int^1_0 \, dx \, x^n \, \ln(A + B \, x) &=& K(n,A,B)
\label{eqCAS2}
\end{eqnarray}\noindent
which obey to the following relations:
\begin{eqnarray}
J(n,A,B) & = & \frac{1}{n \,B} - \frac{A}{B} \, J(n-1,A,B) \\
J(0,A,B) & = & \frac{\ln(A+B) - \ln(A) }{B} \\
K(n,A,B) & = & \frac{ (A+B) \, \ln(A+B) - n \, A \, K(n-1,A,B) }{(n+1) \, B} \nonumber \\
& & \mbox{} - \frac{1}{(n+1)^2} \\
K(0,A,B) & = & \frac{ (A+B) \, \ln(A+B) - A \, \ln(A) -B }{B}
\label{eqPROP}
\end{eqnarray}
Here we assume that $B \ne 0$; if $B=0$ the integrations are trivial.
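For illustration, these recursions can be coded directly; a minimal Fortran sketch, with
complex arguments to accommodate the $+i\,\delta$ prescription (the names are ours and do
not correspond to actual {\tt golem95} routines), reads:
\begin{verbatim}
module jk_integrals
  implicit none
  integer, parameter :: ki = kind(1.0d0)
contains
  ! J(n,A,B) = int_0^1 dx x^n/(A+B*x), via the recursion given above
  recursive function j_int(n, a, b) result(res)
    integer, intent(in) :: n
    complex(ki), intent(in) :: a, b
    complex(ki) :: res
    if (n == 0) then
      res = (log(a + b) - log(a))/b
    else
      res = 1.0_ki/(real(n, ki)*b) - (a/b)*j_int(n - 1, a, b)
    end if
  end function j_int
  ! K(n,A,B) = int_0^1 dx x^n ln(A+B*x), via the recursion given above
  recursive function k_int(n, a, b) result(res)
    integer, intent(in) :: n
    complex(ki), intent(in) :: a, b
    complex(ki) :: res
    if (n == 0) then
      res = ((a + b)*log(a + b) - a*log(a) - b)/b
    else
      res = ((a + b)*log(a + b) - real(n, ki)*a*k_int(n - 1, a, b)) &
            /(real(n + 1, ki)*b) - 1.0_ki/real((n + 1)**2, ki)
    end if
  end function k_int
end module jk_integrals
\end{verbatim}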
When the first two integrations have been done, we are left with the $z$ integration.
To explain how we proceed, we treat the case of the six-dimensional
three-mass four-point function as an example.
After integration over $x$ and $y$, we are left with the following structure:
\begin{eqnarray}
I & = & \frac{ - h \, \ln(h) + e \, \ln(e) }{f \, g} +
\frac{ h \, \ln(h) - c \, \ln(c) }{f \, d}
\label{eqZDEPEND}
\end{eqnarray}
with
\begin{eqnarray}
c & = & z \, \mbox{$\cal S$}_{12}+(1-z) \, \mbox{$\cal S$}_{13}\nonumber \\
f & = & z \, (\mbox{$\cal S$}_{24}-\mbox{$\cal S$}_{12})+(1-z) \, (\mbox{$\cal S$}_{34}-\mbox{$\cal S$}_{13})\nonumber \\
g & = & z \, (1-z) \, \mbox{$\cal S$}_{23}-z \, \mbox{$\cal S$}_{24}-(1-z) \, \mbox{$\cal S$}_{34}\nonumber \\
d & = & z \, (1-z) \, \mbox{$\cal S$}_{23}-z \, \mbox{$\cal S$}_{12}-(1-z) \, \mbox{$\cal S$}_{13}\nonumber \\
e & = & z \, \mbox{$\cal S$}_{24}+(1-z) \, \mbox{$\cal S$}_{34}\nonumber \\
h & = & z \, (1-z) \, \mbox{$\cal S$}_{23}
\label{eqDEFCFGDEH}
\end{eqnarray}
where $\mbox{$\cal S$}_{ij}$ are the elements of the kinematic matrix $\mbox{$\cal S$}$; they must be understood
as $\mbox{$\cal S$}_{ij} + i \, \delta$.
The first thing to note is that $I$ has no poles. All six-dimensional
four-point functions are infrared finite, and the UV pole of the
eight-dimensional four-point functions is contained in the overall
$\Gamma$-function.
Indeed, it is easy to see that: $g=h-e$, $d=h-c$ and $f=e-c$, so when
$g \rightarrow 0$ or $d \rightarrow 0$ or $f \rightarrow 0$, the numerator
of $I$ goes to zero.
To compute the $z$ integral numerically, we use a contour deformation:
we complexify the $z$ variable
\begin{equation}
z = u - i \, \epsilon \, g(u)
\label{eqCHANGEZ}
\end{equation}
i.e. we have to compute the following integrals:
\begin{equation}
\int^1_0 \, dz \, f(z) = \int^1_0 \, du \, C \, f(u - i \, \epsilon \, g(u))
\label{eqTRANSZU}
\end{equation}
where $C$ is the Jacobian of the transformation:
$C = 1 - i \, \epsilon \, dg/du$ and $\epsilon = \pm1$.
The function $g$ has the following properties: $g(0) = g(1) = 0$ and $g(u) > 0$ for $u \in (0,1)$.
For practical applications, we took $g(u) = u \, (1-u)$.
As the numerator of $I$ contains some logarithms, some care has to be taken in order to avoid a clash between the cut of the logarithm and the contour. To analyse that, let us consider the following example:
\begin{equation}
E = \int^1_0 \, dz \, \frac{\ln(a+b \, z + i \, s_1 \, \lambda)}{c + d \, z + i \, s_2 \, \lambda}
\label{eqEXAMPLEE}
\end{equation}
with $a$, $b$, $c$ and $d \in \mathbb{R}$, $s_1, \, s_2 = \pm1$ and $\lambda > 0$. Making the change of variable (\ref{eqCHANGEZ}), we get:
\begin{equation}
E = \int^1_0 \, du \, C \, \frac{\ln(a+b \, u + i \, (s_1 \, \lambda - b \, \epsilon g(u)) )}{c + d \, u + i \, (s_2 \, \lambda - d \, \epsilon \, g(u)) }
\label{eqEXAMPLEE1}
\end{equation}
By choosing $\epsilon = - \, s_1 \, \mbox{sign}(b)$, the imaginary part of the argument of the
logarithm keeps a constant sign, the sign of $s_1$. This choice of $\epsilon$ defines
the contour but the important point is that by varying $u$ (walking on the contour) the cut
of the logarithm is never crossed. The pole is located at:
\begin{equation}
z_0 = - \frac{c}{d} - i \, \frac{s_2}{d} \, \lambda
\label{eqLOCATIONOFTHEPOLE}
\end{equation}
Using the Cauchy theorem, we arrive at the following relation:
\begin{eqnarray}
\int^1_0 \, dz \, f(z) & = & \int^1_0 \, du \, C \, \frac{\ln(a+b \, u + i \, (s_1 \, \lambda - b \, \epsilon g(u)) )}{c + d \, u + i \, (s_2 \, \lambda - d \, \epsilon \, g(u)) } \nonumber \\
& & \mbox{} - 2 \, i \, \pi \, R \, \epsilon \, \Theta \left(1+\frac{c}{d}\right) \, \Theta \left(-\frac{c}{d}\right) \, \delta^{\rm{sign}(\epsilon)}_{\rm{sign}(s_2/d)}
\label{eqRESFINEX}
\end{eqnarray}
where $R$ is the residue of $f(z)$ at $z = z_0$.
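To make this concrete, a schematic Fortran evaluation of the example (\ref{eqEXAMPLEE})
along the deformed contour, including the residue term of eq.~(\ref{eqRESFINEX}), could
look as follows; we use a plain midpoint rule here instead of the adaptive
Gauss-Kronrod integration of the library, and all names are illustrative:
\begin{verbatim}
function e_contour(a, b, c, d, s1, s2, lam, n) result(e)
  implicit none
  double precision, intent(in) :: a, b, c, d, s1, s2, lam
  integer, intent(in) :: n
  complex(kind(1.d0)) :: e, z, jac, res
  double precision :: u, g, dg, eps
  integer :: i
  eps = -s1*sign(1.d0, b)  ! keeps the cut of the log away from the contour
  e = (0.d0, 0.d0)
  do i = 1, n              ! midpoint rule along z = u - i*eps*g(u)
     u  = (dble(i) - 0.5d0)/dble(n)
     g  = u*(1.d0 - u)
     dg = 1.d0 - 2.d0*u
     z   = cmplx(u, -eps*g, kind(1.d0))
     jac = cmplx(1.d0, -eps*dg, kind(1.d0))
     e = e + jac*log(a + b*z + cmplx(0.d0, s1*lam, kind(1.d0))) &
              /(c + d*z + cmplx(0.d0, s2*lam, kind(1.d0)))/dble(n)
  end do
  ! residue term: the pole z0 = -(c + i*s2*lam)/d is crossed if
  ! 0 < -c/d < 1 and sign(eps) = sign(s2/d)
  if (-c/d > 0.d0 .and. -c/d < 1.d0 .and. eps*s2*d > 0.d0) then
     z   = -cmplx(c, s2*lam, kind(1.d0))/d
     res = log(a + b*z + cmplx(0.d0, s1*lam, kind(1.d0)))/d
     e = e - cmplx(0.d0, 2.d0*acos(-1.d0), kind(1.d0))*res*eps
  end if
end function e_contour
\end{verbatim}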
This is the way we proceed to compute numerically the two terms of eq.~(\ref{eqZDEPEND}).
We compute the two terms separately, although each term has a pole when $f \rightarrow 0$
while the sum does not, because they contain two different logarithms ($\ln(e)$ and $\ln(c)$),
and the choice of $\epsilon$ which keeps the contour away from the cut of one logarithm
does not, in general, do so for the other.
For the case where there are Feynman parameters in the numerator, everything works
like the preceding example: we always split the integrand of the $z$ integration into
two pieces (each piece having more terms than the scalar case) by separating the two
kinds of logarithms. For the other types of four-point functions, we proceed in an
analogous way.
\subsubsection{Three-mass three-point functions}\label{secT3MTPF}
In the case of the three-point functions with three off-shell legs,
after making a change of variables of type (\ref{eqCHANGEV}),
we are left with two-dimensional integrals.
One parameter is integrated out analytically using (\ref{eqCAS1}),
the remaining integral is computed numerically, using the same techniques as
for the four-point functions.
\subsubsection{Two-mass three-point functions}\label{secTMTPF}
The two-mass three-point functions are written in terms of functions $H_i$ \cite{Binoth:2005ff},
which are defined such that in the numerically problematic case where $X\to Y$,
their evaluation is numerically stable.
The functions $H_0$, $H_1$, $H_2$, $H_3$ and $H_4$ are given by:
\begin{eqnarray}
H_0(X,\alpha) & = & \frac{\bar{X}^{\alpha}}{X} \\
\label{eqH0}
H_1(X,Y,\alpha) & = & \frac{\bar{X}^{\alpha}-\bar{Y}^{\alpha}}{X-Y} \\
\label{eqH1}
H_2(X,Y,\alpha) & = & \frac{\bar{Y}^{\alpha}}{Y-X}+\frac{1}{1+\alpha} \,
\frac{\bar{Y}^{1+\alpha}-\bar{X}^{1+\alpha}}{(Y-X)^2} \\
\label{eqH2}
H_3(X,Y,\alpha) & = & \frac{\bar{Y}^{\alpha}}{Y-X}+\frac{2}{1+\alpha} \,
\frac{\bar{Y}^{1+\alpha}}{(Y-X)^2} \nonumber \\
& & \mbox{}+\frac{2}{(1+\alpha) \, (2+\alpha)} \,
\frac{\bar{Y}^{2+\alpha}-\bar{X}^{2+\alpha}}{(Y-X)^3} \\
\label{eqH3}
H_4(X,Y,\alpha) & = & \frac{\bar{Y}^{\alpha}}{Y-X}+\frac{3}{1+\alpha} \,
\frac{\bar{Y}^{1+\alpha}}{(Y-X)^2}+\frac{6}{(1+\alpha) \, (2+\alpha)} \,
\frac{\bar{Y}^{2+\alpha}}{(Y-X)^3} \nonumber \\
& & \mbox{}+\frac{6}{(1+\alpha) \, (2+\alpha) \, (3+\alpha)} \,
\frac{\bar{Y}^{3+\alpha}-\bar{X}^{3+\alpha}}{(Y-X)^4}\label{eqH4}\\
\bar{X}&=&-X-i\,\delta\nonumber
\end{eqnarray}
For each function $H_i(X,Y,\epsilon)$, one can define
\begin{equation}
H_i(X,Y,\epsilon) = \epsilon \, H_{E_i}(X,Y) + \frac{\epsilon^2}{2} \, H_{F_i}(X,Y)\;,
\end{equation}
and one can show that
\begin{eqnarray}
H_{E_n}(X,Y) & = & \int^1_0 dz \, z^{(n-1)} \; \frac{1}{z \, \bar{X}+(1-z) \, \bar{Y}} \label{eqDEFNMHE}\\
H_{F_n}(X,Y) & = & \int^1_0 dz \, z^{(n-1)} \; \frac{\ln(z \, \bar{X}+(1-z) \, \bar{Y})}{z \, \bar{X}+(1-z) \, \bar{Y}}
\label{eqDEFNMHF}
\end{eqnarray}
From this definition, it is easy to show that
\begin{eqnarray}
H_{E_n}(X,Y) & = & \frac{1}{\bar{X}-\bar{Y}} \, \left( \frac{1}{n-1} - \bar{Y} \,H_{E_{n-1}}(X,Y) \right)
\label{eqHEREC}
\end{eqnarray}
The equations (\ref{eqDEFNMHE}) and (\ref{eqDEFNMHF}) are used to compute numerically the functions $H_{E_n}$ and $H_{F_n}$.
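A minimal recursive sketch of relation (\ref{eqHEREC}), with the combinations
$\bar{X}$, $\bar{Y}$ passed directly as complex arguments (illustrative names only,
not the library routines), is:
\begin{verbatim}
! he_n = int_0^1 dz z^(n-1)/(z*xb + (1-z)*yb), with xb = -X-i*delta etc.
recursive function he_n(n, xb, yb) result(res)
  implicit none
  integer, intent(in) :: n
  complex(kind(1.d0)), intent(in) :: xb, yb
  complex(kind(1.d0)) :: res
  if (n == 1) then
    res = (log(xb) - log(yb))/(xb - yb)
  else
    res = (1.d0/dble(n - 1) - yb*he_n(n - 1, xb, yb))/(xb - yb)
  end if
end function he_n
\end{verbatim}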
\subsection{Contents of the demonstration programs}\label{demos}
The demo programs calculate the following examples, listed also in the file {\tt DemoContents}
in the subdirectory {\tt demos}:
\begin{enumerate}
\item three-point functions
\item four-point functions
\item five-point functions
\item six-point functions
\item calculation of 4-photon helicity amplitudes
\item numerical stability demo: $\det G\to 0$
\item numerical stability demo: $\det S\to 0$
\item Golem $\leftrightarrow$ LoopTools conventions
\end{enumerate}
The items above contain the following options:
\begin{itemize}
\item Three-point functions:
\begin{enumerate}
\item one off-shell leg
\item two off-shell legs
\item three off-shell legs\\
For each of the three options above, one can choose to calculate:
\begin{enumerate}
\item scalar three-point function in n dimensions
\item three-point function in n dimensions with one Feynman parameter $(z_1)$ in the numerator
\item three-point function in n dimensions with two Feynman parameters $(z_1\,z_2)$
\item three-point function in n dimensions with three Feynman parameters $(z_1^2\,z_3)$
\item scalar three-point function in n+2 dimensions
\item three-point function in n+2 dimensions with one Feynman parameter $(z_2)$
\end{enumerate}
\end{enumerate}
\item Four-point functions:
\begin{enumerate}
\item no off-shell leg
\item one off-shell leg
\item two opposite off-shell legs
\item two adjacent off-shell legs
\item three off-shell legs
\item four off-shell legs\\
For each of the six options above, one can choose to calculate:
\begin{enumerate}
\item scalar four-point function in n dimensions
\item four-point function in n dimensions with one Feynman parameter $(z_1)$
\item four-point function in n dimensions with two Feynman parameters $(z_1\,z_4)$
\item four-point function in n dimensions with three Feynman parameters $ (z_1^2\,z_3)$
\item four-point function in n dimensions with four Feynman parameters $(z_1\,z_2\,z_3\,z_4)$
\item scalar four-point function in n+2 dimensions
\item four-point function in n+2 dimensions with two Feynman parameters $(z_1\,z_2)$
\item scalar four-point function in n+4 dimensions
\end{enumerate}
\end{enumerate}
\item Five-point functions:
\begin{enumerate}
\item form factor for five-point function, rank 0
\item form factor for five-point function, rank 3 ($z_1\,z_2\,z_4$ in numerator)
\item form factor for five-point function, rank 5 ($z_1\,z_2\,z_3\,z_4\,z_5$ in numerator)
\item form factor for a pinched 5-point diagram (propagator 3 missing), rank 0
\item form factor for a doubly pinched 5-point diagram (propagators 1 and 4 missing), rank 0
\end{enumerate}
\item Six-point functions:
\begin{enumerate}
\item form factor for six-point function, rank 0
\item form factor for six-point function, rank 4 ($z_1^2\,z_2\,z_3$ in numerator)
\item form factor A5 for pinched diagram, propagator 3 missing, rank 0
\item form factor for double pinched diagram, propagators 2,5 missing, rank 0
\item form factor for triple pinched diagram, propagators 2,4,6 missing, rank 0
\end{enumerate}
\item Calculation of 4-photon helicity amplitudes: \\
the purpose of this example is to demonstrate how to use {\tt golem95} for the
calculation of full amplitudes.
It calculates three different helicity configurations of the
on-shell 4-photon amplitude for a certain kinematic point.
\item Numerical stability demo: $\det G\to 0$:\\
calculates a rank three four-point function (in 6 dimensions)
in a region where $|B|=\det G/\det S$ becomes small,
i.e. where a representation based on the reduction to scalar integrals would fail.
The Feynman parameters in the numerator are $z_1\,z_2^2$.
The example follows closely the one described in section 7.2 of \cite{Binoth:2005ff}
and is also described in the golem95 manuscript:
The program makes 30 iterations where $B=-\det G/\det S$ becomes smaller in
each iteration.
The results for real and imaginary parts of $I_4^6(z_1\,z_2^2)$
are written to the file {\tt demo\_detG.dat} as a function of $x$,
where $|B| \sim x^2$ for small $x$.
The files {\tt plotDetG\_Re.gp} and {\tt plotDetG\_Im.gp} can be used to
plot the result with gnuplot by {\tt load 'plotDetG\_Re/Im.gp' }.
One can see from the plots that the results remain numerically stable as $\det G\to 0$.
The file {\tt demo\_detG.txt} contains the details of the kinematics
for each iteration.
\item Numerical stability demo: $\det S\to 0$:\\
tests the rank 5 five-point tensor coefficient $A^{5,5}(1,1,1,1,1)$
with respect to its behaviour
when a sub-determinant $\det S \sim (\det G)^2 \to 0$.
The results for real and imaginary parts of the $\epsilon^0$ part of $A^{5,5}$
are written to the file {\tt demo\_a55\_dets\_sing.dat} as a function
of the transverse momentum of particle 5
and can be plotted with gnuplot by {\tt load 'plot\_demo\_A55.gp'}.
\item Relation between Golem output and LoopTools format:\\
produces Golem output for four-point functions up to rank four
and gives the relation to LoopTools conventions.
If LoopTools is linked, the lines containing the call of LoopTools
functions can be uncommented to produce LoopTools output in parallel.
\end{itemize}
\include{golemBib}
\end{document}
\def\nsection#1{\section{#1}\setcounter{equation}{0}}
\def\nappendix#1{\vskip 1cm\no{\bf Appendix #1}
\def\thesection{#1}
\setcounter{equation}{0}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\textwidth = 16truecm
\textheight = 24truecm
\begin{document}
\hoffset = -1truecm
\voffset = -2truecm
\thispagestyle{empty}
\begin{flushright}
{\large \bf DFTUZ/94/26}\\
{\large \bf hep-th/9412137}
\end{flushright}
\vskip 3truecm
\begin{center}
{\large \bf DIRAC VERSUS REDUCED PHASE SPACE QUANTIZATION\footnote{Talk
presented at the Geometry of Constrained Dynamical Systems Conference,
Cambridge, 15-18 June 1994}}\\
\vskip0.8cm
{ \bf
Mikhail S. Plyushchay${}^a$\footnote{On leave from the
Institute for High Energy Physics,
Protvino, Moscow Region, Russia; E-mail: mikhail@cc.unizar.es}
and Alexander V. Razumov${}^b$\footnote{E-mail: razumov@mx.ihep.su}
}\\[0.3cm]
{\it ${}^a$Departamento de F\'{\i}sica Te\'orica, Facultad de Ciencias}\\
{\it Universidad de Zaragoza, Zaragoza 50009, Spain}\\
{\it ${}^b$ Institute for High Energy Physics, Protvino}\\
{\it Moscow Region, 142284 Russia}\\
\vskip2cm
{\bf Abstract}
\end{center}
The relationship between the Dirac and reduced phase space quantizations is
investigated for spin models belonging to the class of Hamiltonian systems
having no gauge conditions. It is shown that the two quantization
methods may give similar, or essentially different physical results, and,
moreover, that there is a class of constrained systems which can
be quantized only by the Dirac method. A possible interpretation of the
gauge degrees of freedom is given.
\vfill
\newpage
\nsection{Introduction}
There are two main methods to quantize the Hamiltonian systems with first
class constraints: the Dirac quantization \cite{Dir} and the reduced phase
space quantization \cite{Fad69}, whereas two other methods, the path
integral method \cite{FPo67,Fad69} and the BRST quantization \cite{brst},
the latter being the most popular method for the
covariant quantization of gauge-invariant systems, are based on and
proceed from them \cite{Fad69,sun}. The basic idea of the Dirac
method consists in imposing quantum mechanically the first class
constraints as operator conditions on the states for singling out the
physical ones \cite{Dir}. The reduced phase
space quantization first identifies
the physical degrees of freedom at the classical level by the
factorization of the constraint surface with respect to the action of the
gauge group, generated by the constraints. Then the resulting Hamiltonian
system is quantized as a usual unconstrained system \cite{Fad69}.
Naturally, the problem of the relationship of these two methods arises. It
was discussed in different contexts in literature \cite{kuc}, and
there is an opinion that the differences
between the two quantization methods can be traced out to a choice of
factor ordering in the construction of various physical operators.
We investigate the relationship of the two methods of
quantization for the special class of Hamiltonian systems with first class
constraints corresponding to different physical models of spinning
particles. The characteristic property of the examples of
constrained systems considered here is the following:
their constraints generate SO(2)
transformations and, hence, the corresponding gauge orbits are topologically
one-spheres $S^{1}$. This fact implies that these systems {\it do not
admit gauge conditions}, and, therefore, for the construction
of their reduced phase spaces
we shall use a general geometrical approach to
the Dirac--Bergmann theory of the constrained systems \cite{AbM78,PR}.
\nsection{Plane Spin Model}
The first model we are going to consider is the plane spin model,
which is a subsystem of the (3+1)--dimensional models of massless
particles with arbitrary helicity \cite{ply2},
and of the (2+1)--dimensional relativistic
models of fractional spin particles \cite{ply3}.
The initial phase space of the model is a cotangent bundle
$T^* S^1$ of the one--dimensional sphere $S^1$, that is a cylinder
$S^{1}\times {\bf R}$. It can be described {\it locally}
by an angular variable
$0\le \varphi < 2\pi$ and the conjugate momentum $S\in {\bf R}$.
The symplectic
two--form $\omega$ in terms of the local variables $\varphi$, $S$ has the
form
$
\omega = dS \wedge d \varphi,
$
and, thus, we have locally
$
\{\varphi, S\} = 1.
$
Actually, any $2\pi$--periodic function of the variable $\varphi$, the latter
regarded as a variable taking values in ${\bf R}$, can be considered
as a function on the phase space, i.e., as an observable, and any
observable corresponds to such a $2\pi$--periodic
function. Therefore, we can introduce the functions
$q_1 = \cos \varphi,$ $q_2 = \sin \varphi$,
$q_1^2 + q_2^2 = 1$,
as the dependent functions on the phase space of the system.
For these functions we have
$\{q_1, q_2\}= 0,$
$\{q_{1}, S\} = -q_{2},$
$\{q_{2}, S\} = q_{1}.$
Any function on the phase space can be considered as a function
of the dependent coordinates $q_1$, $q_2$ and $S$, which will be
taken below as the quantities forming a restricted set of observables
whose quantum analogs have commutators in direct
correspondence with their Poisson brackets.
We come to the plane spin model by introducing the `spin' constraint
\begin{equation}
\psi = S - \theta = 0, \label{2.9}
\end{equation}
where $\theta$ is an arbitrary real constant.
Let us consider the Dirac quantization of the system.
To this end we take
as the Hilbert space the space of complex $2\pi$--periodic functions of
the variable $\varphi$ with the scalar pro\-duct
$(\Phi_1,\Phi_2) = \frac{1}{2\pi} \int_0^{2\pi}
\overline{\Phi_1(\varphi)}\Phi_2 (\varphi)\, d\varphi.$
The operators $\hat q_1$ and $\hat q_2$, corresponding to the functions
$q_1$ and $q_2$, are the operators of multiplication by the functions
$\cos \varphi$ and $\sin\varphi$, respectively, whereas
the operator $\hat S$ is defined by
$
\hat S \Phi = (-id/d \varphi + c)\Phi,
$
where $c$ is an arbitrary real constant. The operators
$\hat q_1$, $\hat q_2$ and $\hat S$ are Hermitian operators with respect
to the introduced scalar product, and they satisfy the
relation $[\hat{A},\hat{B}]=i\{A,B\}$, $A,B=q_1 ,q_2 ,S$.
The quantum analog of the constraint (\ref{2.9}) gives the equation for
the physical state wave functions:
$(\hat S~-~\theta)\Phi_{phys} = 0.$
Decomposing the function $\Phi_{phys}(\varphi)$ over the orthonormal basis,
formed by the functions $e^{ik\varphi}$, $k\in {\bf Z}$,
we find this equation has a nontrivial solution only when
$c = \theta + n,$
where $n$ is some fixed integer, $n \in {\bf Z}.$ In this case the
corresponding physical normalized wave function is
$\Phi_{phys}(\varphi) = e^{-in\varphi}.$
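Indeed, one checks directly that
$\hat S\, e^{-in\varphi} = (-i\,d/d\varphi + c)\, e^{-in\varphi}
= (-n + \theta + n)\, e^{-in\varphi} = \theta\, e^{-in\varphi}$.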
The only physical operator \cite{sun}, i.e., an operator commuting
with the quantum constraint $\hat \psi$, is here $\hat S$,
which reduces to the
constant $\theta$ on the physical subspace.
Now we come back to the classical theory in order to construct the reduced
phase space of the model. Let us show that for the surface, defined by
Eq.~(\ref{2.9}), there is no `good' gauge condition, but, nevertheless, the
reduced phase space of the system can be constructed. Indeed, it is clear
that the one--parameter group of transformations, generated by the
constraint $\psi$, consists of the rotations of the phase space. This group
acts transitively on the constraint surface, and we have only one gauge
orbit, which is the constraint surface itself. The gauge conditions must
single out one point of an orbit. In our case we have to define only one
gauge condition, let us denote it by $\chi$. The function $\chi$ must be
such that the pair of equations
$\psi = 0,$
$\chi = 0$
would determine a set, consisting of only one point, and in this point we
should have
$\{\psi, \chi\} \ne 0.$
Recall that any function on the phase space of the system
under consideration can be considered as a function of the variables
$\varphi$ and $S$, which is $2\pi$--periodic with respect to $\varphi$.
Thus, we require the $2\pi$--periodic function
$\chi(\varphi, S)$
to vanish at only one point
$\varphi = \varphi_0$ from the interval $0 \le \varphi < 2\pi$
when $S=\theta$. Moreover,
we should have
$
\{\psi, \chi\}(\varphi_0, \theta) = - \left. \partial \chi(\varphi,
\theta)/\partial \varphi\right|_{\varphi = \varphi_0} \ne 0.
$
It is clear that such a function does not exist: a continuous
$2\pi$--periodic function vanishing at a single point with a nonzero
derivative there would change sign at that point and, by continuity, would
have to vanish at least once more. Nevertheless, here we have
the reduced phase space that consists of only one point. Therefore,
the reduced space quantization is trivial: the physical operator $\hat S$
takes here the constant value $\theta$, in correspondence with the results
obtained by the Dirac quantization method. When the described plane spin
model is a subsystem of some other system, the reduction means simply that
the cylinder $T^{*}S^{1}$ is factorized into a point, where $S = \theta$,
and that wave functions do not depend on the variable $\varphi$.
Let us point out one interesting analogy in the interpretation of the
situation with the nonexistence of a global gauge condition. Here the
condition of $2\pi$--periodicity can be considered as a `boundary'
condition. If for a moment we forget about it, we can take as a gauge
function any monotonic function $\chi(\varphi, S)$, $\chi \in {\bf R}$,
such that $\chi(\varphi_{0}, \theta)=0$ at some point $\varphi =
\varphi_{0}$, and, in particular, we can choose the function
$\chi(\varphi, S) = \varphi$. The `boundary' condition excludes all such
global gauge conditions. In this sense the situation is similar to the
situation in the non--Abelian gauge theories where without taking into
account the boundary conditions for the fields it is also possible to find
global gauge conditions, whereas taking them into account leads, in the end, to
the nonexistence of global gauge conditions \cite{sin}.
\nsection{Rotator Spin Model}
Let us consider now the rotator spin model \cite{ply4}. The
initial phase space of the system is described by a spin three--vector
${\bm S}$ and a unit vector ${\bm q}$,
$
{\bm q}^2=1,
$
being orthogonal one to the other,
$
{\bm q}{\bm S} = 0.
$
The variables $q_i$ and $S_i$, $i=1,2,3$, can be considered as dependent
coordinates in the phase space of the system. The Poisson brackets for
these coordinates are
$\{q_i, q_j\} = 0,$ $\{S_i, S_j\} = \epsilon_{ijk}S_k,$
$\{S_i,q_j\} = \epsilon_{ijk} q_k.$
Using these Poisson brackets,
we find the following expression for the symplectic two--form:
$
\omega = d p_i \wedge d q_i = d(\epsilon_{ijk} S_j q_k) \wedge d q_i.
$
Introducing the spherical angles $\varphi$, $\vartheta$ ($0 \le \varphi <
2\pi,\, 0 \le \vartheta \le \pi)$ and the corresponding momenta
$p_\varphi, p_\vartheta \in {\bf R}$, we can write the
parameterization for the vector $\bm q$,
$\bm q=(\cos \varphi \sin \vartheta,
\sin \varphi \sin \vartheta,\cos \vartheta)$
and corresponding parameterization for the vector $\bm S=\bm S(\vartheta,
\varphi, p_\varphi, p_\vartheta)$, whose explicit form we do not
write down here (see Ref.~\cite{PR}).
Then for the symplectic two--form we get the expression
$
\omega = d p_\vartheta \wedge d \vartheta + d p_\varphi \wedge d\varphi.
$
{}From this relation we conclude that the initial phase space of the system
is symplectomorphic to the cotangent bundle $T^*S^2$ of the
two--dimensional sphere $S^2$, furnished with the canonical symplectic
structure.
The rotator spin model is obtained from the initial phase space by
imposing the constraint
\begin{equation}
\psi = \frac{1}{2}(\bm S^2 - \rho^2) = 0, \qquad \rho > 0, \label{r4}
\end{equation}
fixing the spin of the system.
Using the Dirac method, we quantize the model in the following way.
The state space is a space of the square integrable functions on the
two--dimensional sphere. The scalar product is
$
(\Phi_1, \Phi_2) = \int_{S^2} \overline{\Phi_1(\varphi, \vartheta)}
\Phi_2(\varphi, \vartheta) \sin \vartheta d\vartheta d\varphi.
$
The above mentioned parameterization
allows us to use as the operator $\hat {\bm S}$
the usual orbital angular momentum operator expressed via spherical
angles. The wave functions as the functions on a sphere are
decomposable over the complete set of the spherical harmonics:
$
\Phi(\varphi, \vartheta) =
\sum_{l=0}^{\infty} \sum_{m=-l}^{l} \Phi_{lm} Y^l_m(\varphi, \vartheta),
$
and, therefore, the quantum analog of the first class constraint (\ref{r4}),
\begin{equation}
(\hat {\bm S}{}^2 - \rho^2) \Phi_{phys} = 0, \label{r9}
\end{equation}
leads to the quantization condition for the constant $\rho$:
\begin{equation}
\rho^2 = n(n+1), \label{r10}
\end{equation}
where $n > 0$ is an integer.
Only in this case equation (\ref{r9}) has a nontrivial solution of the form
$
\Phi_{phys}^n(\vartheta, \varphi) = \sum_{m=-n}^n \Phi_{nm} Y^n_m
(\varphi, \vartheta),
$
i.e., with the choice of (\ref{r10}) we get the states with spin equal to $n$:
$
\hat{\bm S}{}^2 \Phi_{phys}^n = n(n+1) \Phi_{phys}^n.
$
Thus, we conclude that the Dirac quantization leads to the quantization
(\ref{r10}) of the parameter $\rho$ and, as a result, the quantum system
describes the states with integer spin $n$.
Let us turn now to the construction of the reduced phase space of the system.
The constraint surface of the model can be considered as a set composed of
the points specified by two orthonormal three--vectors. Each pair of
such vectors can be supplemented by a unique third three--vector,
defined in such a way that we get an oriented orthonormal basis in three
dimensional vector space. It is well known that the set of all oriented
orthonormal bases in three dimensional space can be smoothly parameterized
by the elements of the Lie group SO(3). Thus, the
constraint surface in our case is diffeomorphic to the group manifold of
the Lie group SO(3).
The one--parameter group of canonical
transformations, generated by the constraint $\psi$,
acts in the following way:
$
\bm q(\tau) = \bm q \cos(S \tau) +
(\bm S \times \bm q) S^{-1}\sin(S \tau),
$
$
\bm S(\tau) = \bm S,
$
where
$
S =\sqrt{\bm S^2}.
$
Hence, we see that the gauge transformations are the
rotations about the direction, given by the spin vector. Thus, in the case
of a general position the orbits of the one--parameter group of
transformations under consideration are one--dimensional spheres. Note
that only the orbits belonging to the constraint surface, where $S = \rho
\ne 0$, are of interest to us. It is clear that an orbit is uniquely
specified by the direction of the spin three--vector $\bm S$ whose length
is fixed by the constraint $\psi$. As a result of our consideration, we
conclude that the reduced phase space of the rotator spin model is the
coset space SO(3)/SO(2), which is diffeomorphic to the two--dimensional
sphere $S^2$. Due to the reasons discussed for the preceding
model there is no gauge condition in this case either. In fact, since
SO(3) is a nontrivial fiber bundle over $S^2$, we cannot find a
continuous mapping from $S^2$ to SO(3) whose image would be diffeomorphic to the
reduced phase space. In other words, in this case the reduced phase space
cannot be considered as a submanifold of the constraint surface.
Our next goal is to write an expression for the symplectic two--form on
the reduced phase space. We can consider the variables
$S_i$ as dependent coordinates in the reduced phase space, and the
symplectic two--form on it may be expressed in terms of them.
With the help of an orthonormal basis formed by
the vectors $\bm q$, $\bm s = \bm S/S$ and $\bm q \times \bm s$,
we get for the symplectic two-form on the reduced phase space the
following expression \cite{PR}:
\begin{equation}
\omega = - \frac{1}{2\rho^2} (\bm S \times d \bm S) \wedge d \bm S.
\label{omega1}
\end{equation}
Thus, we see that the dependent coordinates $S^i$
in the reduced phase space of the system provide a realization of the basis
of the Lie algebra so(3):
\begin{equation}
\{S_i, S_j\} = \epsilon_{ijk} S_k.
\label{sss}
\end{equation}
The quantization on the reduced phase space can be performed with the help
of the geometric quantization method proceeding from the classical relations
(\ref{omega1}),
(\ref{sss}) and $\bm S^2 = \rho^2$.
This was done in detail, e.g., in
Ref.~\cite{pl6}, and we write here the final results of
this procedure. The constant $\rho$ is quantized:
\begin{equation}
\rho = j, \qquad 0 < 2j \in {\bf Z}, \label{r22}
\end{equation}
i.e., it can take only integer or half-integer values,
and the Hermitian operators, corresponding to the components of the spin
vector, are realized in the form:
$
\hat S_1 =\frac{1}{2}(1-z^{2})d/dz+jz,
$
$
\hat S_2 =\frac{i}{2}(1+z^{2})d/dz -ijz,
$
$
\hat S_{3} =zd/dz-j,
$
where
$
z = e^{-i\varphi} \tan \vartheta/2,
$
or, in terms of the dependent coordinates,
$
z = (S_1 - iS_2)/(\rho~+~S_3).
$
Operators $\hat{S}_{i}$ act in the space of holomorphic functions $f(z)$
with the scalar product
$
(f_1,f_2) = \frac{2j+1}{\pi} \int\int \overline{f_1(z)}f_2(z)(1 +
\vert z\vert^{2})^{-(2j+2)} d^{2}z,
$
in which the functions
$
\psi^{m}_{j}\propto z^{j+m},
$
$
m=-j,-j+1,...,j,
$
form the set of eigenfunctions of the operator $\hat S_{3}$ with the
eigenvalues $s_{3}=m$.
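Indeed, a one--line check confirms the eigenvalues:
$\hat S_{3}\, z^{j+m} = (z\,d/dz - j)\, z^{j+m} = (j+m-j)\, z^{j+m} = m\, z^{j+m}$.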
These operators satisfy the relation
$
\hat{\bm S}{}^2 = j(j+1),
$
and, therefore,
we have the $(2j+1)$--dimensional irreducible representation $D_j$ of the
Lie group SU(2).
Thus, we see that for the rotator spin model the reduced phase space
quantization method leads to the states with integer or
half--integer spin, depending on the choice of the quantized parameter
$\rho$, and gives in general results physically different from those
obtained with the help of the Dirac quantization method.
Let us stress once again here that within the Dirac quantization method
in this model the spin operator $\hat{\bm S}$ has the nature of
an orbital angular momentum operator,
and it is this nature that does not allow spin to
take half-integer values \cite{bied}.
\nsection{Top Spin Model}
Let us consider now the top spin model \cite{ply5}. The initial phase
space of the model is described by the spin three--vector $\bm S$, and by
three vectors $\bm e_i$ such that
$
\bm e_i \bm e_j = \delta_{ij},
$
$
\bm e_i \times \bm e_j =
\epsilon_{ijk} \bm e_k.
$
Denote the components of the vectors $\bm e_i$
by $E_{ij}$. The components
$S_i$ of the vector $\bm S$ and the quantities $E_{ij}$ form a set of
dependent coordinates in the phase space of the system. The corresponding
Poisson brackets are
\begin{equation}
\{E_{ij}, E_{kl}\} = 0,\quad
\{S_i, E_{jk}\} = \epsilon_{ikl}E_{jl}, \quad \{S_i, S_j\} =
\epsilon_{ijk} S_k.\label{5.3}
\end{equation}
The vectors $\bm e_i$ form a right orthonormal
basis in ${\bf R}^3$. The set of all such bases can be identified with the
three--dimensional rotation group. Taking into account
Eqs.~(\ref{5.3}) we conclude that the initial phase space
is actually the cotangent bundle $T^*{\rm SO(3)}$, represented as the
manifold ${\bf R}^3 \times {\rm SO(3)}$.
Using Eqs.~(\ref{5.3}), one can get the
following expression for the symplectic two--form $\omega$ on the initial
phase space:
$
\omega = \frac{1}{2} d (\bm S \times \bm e_l) \wedge d \bm e_l =
\frac{1}{2}d(\epsilon_{ijk} S_j E_{lk})\wedge d E_{li}.
$
It is useful to introduce the variables $J_i = \bm e_i \bm S = E_{ij} S_j$.
For these variables we have the following Poisson brackets:
$
\{J_i, E_{jk}\} = - \epsilon_{ijl} E_{lk},
$
$
\{J_i, J_j\} =
-\epsilon_{ijk} J_k.
$
Note that we have the equality
$
S_i S_i = J_i J_i.
$
The phase space of the top spin model is obtained from the phase space,
described above, by introducing two first class constraints
\begin{equation}
\psi = \frac{1}{2}(\bm S^2 - \rho^2) = 0,\qquad
\chi = \bm S \bm e_3 - \kappa = 0,
\end{equation}
where
$\rho > 0,$
$|\kappa| < \rho.$
Consider now the Dirac quantization of the model.
Let us parameterize the matrix $E$, which can be identified
with the corresponding rotation matrix,
by the Euler angles, $E = E(\alpha,
\beta, \gamma)$, and use the representation where the operators
corresponding to these angles are diagonal. In this representation state
vectors are functions of the Euler angles, and the operators $\hat S_i$ and
$\hat J_i$ are realized as linear differential operators, acting on such
functions \cite{var}. The quantum analogs of the constraints $\psi$ and
$\chi$ turn into the equations for the physical states of the system:
\begin{equation}
(\hat{\bm S}{}^2 - \rho^2) \Phi_{phys} = 0,\qquad
(\hat J_3 - \kappa) \Phi_{phys} = 0.\label{5.10}
\end{equation}
An arbitrary state vector can be decomposed over the set of the Wigner
functions, corresponding to either integer or half--integer spins
\cite{var}:
$
\Phi(\alpha, \beta, \gamma) = \phi_{jmk} D^j_{mk}(\alpha, \beta, \gamma),
$
where $j = 0, 1, \ldots$, or $j = 1/2, 3/2, \ldots$, and $k, m = -j, -j+1,
\ldots, j$. The Wigner functions $D^j_{mk}$ have the properties:
$
\hat{\bm S}{}^2 D^j_{mk} = j(j+1) D^j_{mk},
$
$
\hat S_3 D^j_{mk} = m D^j_{mk},
$
$
\hat J_3 D^j_{mk} = k D^j_{mk}.
$
Using the decomposition of the state vector, we see that
Eqs.~(\ref{5.10}) have nontrivial solutions only when
$\rho^2 = j(j+1)$, and $\kappa = k$, for some integer or half--integer
numbers $j$ and $k$, such that $-j \le k \le j$. In other words we get the
following quantization condition for the parameters of the model:
\[
\rho^{2} = j(j+1),\quad
\kappa = k,\qquad
-j \le k \le j,\quad
0 < 2j \in{\bf Z}.
\]
The corresponding physical state vectors have the form
\[
\Phi_{phys}(\alpha,\beta,\gamma) = \sum_{m=-j}^j \varphi_m
D^j_{mk}(\alpha,\beta,\gamma).
\]
Thus, we see that the Dirac quantization of the top spin model leads to
an integer or half--integer spin system.
We proceed now to the construction of the reduced phase space of the system.
As the constraints $\psi$ and $\chi$ have zero Poisson bracket, we can
consider them consecutively. Let us start with the constraint $\psi$.
{}From the expressions for the Poisson brackets (\ref{5.3})
it follows that the group of gauge transformations, generated by the
constraint $\psi$, acts in the initial phase space variables as follows:
\[
\bm e_i(\tau) = \bm e_i \cos (S \tau) + (\bm S \times \bm e_i) S^{-1}
\sin (S \tau) + \bm S (\bm S \bm e_i)S^{-2} (1 - \cos (S \tau)),
\quad
\bm S (\tau) = \bm S,
\]
where $S = \sqrt{\bm S^2}$.
We see that the transformations under consideration have the sense of
rotations by the angle $S\tau$ about the direction of the spin vector.
Let us consider
the initial phase space of the system, which is diffeomorphic to
${\bf R}^3 \times {\rm SO(3)}$,
as a trivial
fibre bundle over ${\bf R}^3$ with the fibre SO(3).
The gauge transformations
act in fibres of this bundle. It is
clear that the constraint surface, defined by the constraint $\psi$, is a
trivial fibre subbundle $S^2 \times {\rm SO(3)}$. Since ${\rm SO(3)/SO(2)} =
S^2$, after the reduction over the action of the gauge group we come
to a fibre bundle over $S^2$ with the fibre $S^2$. As follows from the
general theory of fibre bundles \cite{Hus66}, this fibre bundle is again
trivial. Thus the reduced phase space, obtained using only the constraint
$\psi$, is the direct product $S^2 \times S^2$. The
symplectic two--form on this reduced space can be written in the form
\cite{PR}:
$
\omega = - (2 \rho^{2})^{-1} (\epsilon_{ijk} S_i dS_j \wedge dS_k -
\epsilon_{ijk} J_i dJ_j \wedge dJ_k).
$
Here the quantities $S_i$ and $J_i$ form a set of
dependent coordinates in the reduced phase space under consideration:
$S_i S_i = J_i J_i = \rho^2$.
Let us turn our attention to the constraint $\chi$. It is easy to
verify that the transformations of the gauge group, generated by this
constraint, act in the initial phase space in the following way:
$
\bm e_i(\tau)=\bm e_i \cos\tau+(\bm e_3\times \bm e_i)\sin\tau,
$
$i=1,2,$
$
\bm e_3(\tau)=\bm e_3,
$
$
\bm S(\tau) = \bm S.
$
So, we see that the gauge group, generated by the constraint $\chi$, acts only
in one factor of the product $S^2 \times S^2$, which is the reduced phase
space obtained after the reduction with the help of the constraint
$\psi$. Thus we can consider only that factor, which is evidently
described by the quantities $J_i$. From this point of view, the constraint
surface, defined by the constraint $\chi$, is a one dimensional sphere
$S^1$, where the group of gauge transformations acts transitively. Hence,
after reduction we get only one point. Thus, the final reduced phase space
is a two--dimensional sphere $S^2$, and
the symplectic two--form on the reduced phase space has the form given by
Eq. (\ref{omega1}).
Therefore, the reduced phase space we have obtained
coincides with the reduced phase
space for the rotator spin model. Hence the geometric quantization method
gives again the quantization condition (\ref{r22}) for the parameter $\rho$,
while the parameter $\kappa$ remains unquantized here. Therefore, although
for this model, unlike the previous one, the two methods of quantization both lead
to a quantum system describing either integer or half-integer spin
states, the corresponding quantum systems are nevertheless different: the
Dirac method gives discrete values for the observable $\hat J_3$, whereas
the reduced phase space quantization allows it to take any value $\kappa$,
such that $\kappa^2 < j^{2}$ for a system with spin $j$.
Let us note here one interesting property of the system. We can use a
combination of the Dirac and reduced phase space quantization methods.
After the first reduction with the help of the constraint $\psi$, the system,
described by the spin vector and the `isospin' vector \cite{ply5} with
the components $I_i = -J_i$, $S_i S_i = I_i I_i$, can be quantized
according to Dirac by imposing the quantum analog of the constraint $\chi$
on the state vectors for singling out the physical states. In this case
we have again the quantization of the parameter $\kappa$ as in the pure
Dirac quantization method, and, therefore, here the observable $\hat J_3$
can take only integer or half--integer values. Hence, in this sense, such
a combined method gives the results coinciding with the results of the
Dirac quantization method.
\nsection{Discussion and conclusions}
The first considered model
gives an example of the classical
constrained system with a finite number of degrees of freedom for which
there is no gauge condition, but nevertheless, the reduced phase space can
be represented as a submanifold of the constraint surface.
As we have seen, Dirac and reduced phase space quantization methods lead to
the coinciding physical results for this plane spin model.
Moreover, we have revealed
an interesting analogy between the
situation with the nonexistence of a global gauge condition for this simple
constrained system and the situation taking place in non-Abelian
gauge theories \cite{sin}.
The rotator and top spin models
give examples of the classical systems, in which
there is no global section of the space of gauge orbits. In
spite of impossibility to impose gauge conditions such systems admit the
construction of the reduced phase space.
These two models demonstrate that the reduced
phase space and the Dirac quantization methods
can give essentially different physical results.
Thus, for
Hamiltonian systems with first class constraints we encounter two related
problems.
The first problem consists in the choice of a `correct'
quantization method for such systems. From the mathematical point of view
any quantization leading to a quantum system, which has the initial system
as its classical limit, should be considered as a correct one, but
physical reasoning may distinguish between different quantization methods.
Consider, for example, the above mentioned systems. The rotator spin
model, quantized according to the Dirac method, represents by itself the
orbital angular momentum system with additional condition (\ref{r9})
singling out the states with a definite eigenvalue of angular momentum
operator $\hat{\bm S}{}^{2}$. This eigenvalue, in turn, is defined by the
concrete value of the quantized parameter of the model: $\rho^2 =
n(n+1)>0$. On the other hand, the reduced phase space quantization of the
model gives either integer or half--integer values for the spin of the
system. If we suppose that the system under consideration is to describe
orbital angular momentum, we must take only integer values for the
parameter $\rho$ in the reduced phase space quantization method.
But in this case we must, nevertheless, conclude that the reduced phase
space quantization method of the rotator spin model describes a more general
system than the quantum system obtained as a result of the Dirac
quantization of that classical system.
The Dirac quantization of the top spin model, or its combination
with the reduced phase space quantization, gives us the possibility of
interpreting this system as a system having spin and isospin degrees of
freedom (with equal spin and isospin: $\hat{\bm S}^2=\hat{I}_i\hat{I}_i =
j(j+1)$), but in which the isospin degrees of freedom are
`frozen' by means of the condition $\hat{I}_{3}\Phi_{phys} = -k
\Phi_{phys}$. On the other hand, as we have seen, the reduced space
quantization method does not allow one to have such an interpretation of the
system since it allows the variable $I_{3}$ to take any (continuous) value
$-\kappa$ restricted only by the condition $\kappa^2 < j^2$, i.e.,
the operator $\hat{I}_{3}$ (taking here only one value) cannot be
interpreted as a component of the isospin vector operator.
{}From this point of view a `more correct' method of quantization is the
Dirac quantization method.
In this respect it is worth pointing out that there is
a class of physical models, for which it is impossible to get
the reduced phase space description, and which, therefore, can be
quantized only by the Dirac method.
Indeed, there are various pseudoclassical models containing first class
nilpotent constraints of the form \cite{spin1}--\cite{cor}:
\begin{equation}
\psi = \xi_{i_{1}}...\xi_{i_{n}}G^{i_{1}...i_{n}} = 0, \label{7.1}
\end{equation}
where $\xi_{i_{k}}$ are real Grassmann variables with the Poisson brackets
$
\{\xi_{k},\xi_{l}\}=-ig_{kl},
$
$g_{kl}$ being a real nondegenerate symmetric constant matrix. Here
it is supposed that
$G^{i_{1}...i_{n}}$, $n \ge 2$, are some functions of other variables,
antisymmetric in their indices, and
all the terms in a sum have simultaneously either even or odd
Grassmann parity.
For our considerations it is important that constraints
(\ref{7.1}) are constraints nonlinear in the Grassmann variables,
and that they have zero projection onto the unit of the Grassmann algebra.
In the simplest example of relativistic massless vector particle in
(3+1)--dimensional space--time \cite{spin1} the odd part of the
phase space is described by two Grassmann vectors $\xi_{\mu}^{a}$,
$a=1,2$, with brackets
$
\{\xi_{\mu}^{a},\xi_{\nu}^{b}\}=-i\delta^{ab}g_{\mu\nu},
$
and the corresponding nilpotent first class constraint has the form:
\begin{equation}
\psi = i\xi_{\mu}^{1}\xi_{\nu}^{2}g^{\mu\nu} = 0, \label{7.4}
\end{equation}
where $g_{\mu\nu}= \mbox{diag}(-1,1,1,1)$.
This constraint is the generator of
the SO(2)--rotations in the `internal isospin' space:
$
\xi_\mu^1 (\tau) = \xi_\mu^1 \cos \tau + \xi_\mu^2 \sin \tau,
$
$
\xi_\mu^2 (\tau) = \xi_\mu^2 \cos\tau - \xi_\mu^1 \sin \tau.
$
The specific property of this transformation is
that having $\xi^a_\mu(\tau)$ and $\xi_\mu^a$, we cannot determine the
rotation angle $\tau$ because there is no notion of the inverse element
for an odd Grassmann variable. Another specific feature of the nilpotent
constraint (\ref{7.4}) is the impossibility of introducing any, even local,
gauge condition for it. In fact, we cannot find a gauge condition $\chi$
such that the Poisson bracket $\{\psi,\chi\}$ would be invertible.
Actually, it is impossible {\it in principle} to construct the
corresponding reduced phase space for such a system.
Obviously, the same situation arises for the constraint of general form
(\ref{7.1}). It is necessary to note here that in the case when the
constraint $\psi$ depends on even variables of the total phase space (see,
e.g., ref. \cite{cor}), and, therefore, generates also transformations of
some of them, we cannot fix the transformation parameter (choose a
point in the orbit) from the transformation law of those even variables,
because the corresponding parameter is present in them with a
noninvertible factor, nonlinear in Grassmann variables. Therefore, the
pseudoclassical systems containing the constraints of form (\ref{7.1}) can
be quantized only by the Dirac method, as was done in
the original papers \cite{spin1}--\cite{cor}.
Let us come back to the discussion of the revealed difference
between the two methods of quantization, and point out that
the second related problem is clarifying the meaning of the gauge degrees of
freedom. The difference appearing under the Dirac and reduced phase
space quantization methods can be understood as the one proceeding from the
quantum `vacuum' fluctuations corresponding to the `frozen' (gauge)
degrees of freedom. Though these degrees of freedom are `frozen' by the
first class constraints, they reveal themselves through quantum
fluctuations, and in the Dirac quantization method they cannot be
completely `turned off' due to the quantum uncertainty principle. Thus, we
can suppose that the gauge degrees of freedom serve not simply for
`covariant' description of the system but have `hidden' physical meaning,
in some sense similar to the compactified degrees of freedom in the
Kaluza--Klein theories. If we adopt such a point of view, we have to use
only the Dirac quantization method. Further, the gauge principle cannot
be considered then as a purely technical principle. From here we arrive also
at the conclusion that the Dirac separation of the constraints into first
and second class constraints is not technical, and nature `distinguishes'
these two cases as essentially different, since gauge degrees of freedom,
corresponding to the first class constraints, may reveal themselves at the
quantum level (compare with the point of view advocated in Ref.~\cite{jac}).
$\ $
The work of M.P. was supported in part by MEC-DGICYT, Spain.
\subsection{Irrelevance of short ranged interactions at QCP}\vskip-.25cm
The imaginary-time action describing spinless fermion with short ranged interaction at the QCP ($\mu=0$)
takes the form
\begin{align}
S=\int d\tau\int dx \left\{
\bar{\psi} \left(
\partial_\tau - {\hbar^2\over 2m} \partial^2_x
\right) \psi + V \bar{\psi} ( \partial_x \bar \psi ) ( \partial_x
\psi ) \psi
\right\}
\end{align}
The derivatives in the interaction term reflect the Pauli principle,
$\psi^2=0$. From this action one can directly read off the scaling
dimensions of the field, $[\psi]\sim L^{-1/2}$, and of the (imaginary)
time, $[\tau]\sim L^2$, for $[x]\sim L$. This simple power counting
shows that the interaction strength scales to smaller values, $V\to V/\lambda$,
as the distance between fermions is increased, $x\to \lambda x$, or as the temperature is
lowered, $T \to T/\lambda^2$. Therefore the interactions are irrelevant at the QCP.
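Explicitly, under the rescaling $x\to\lambda x$, $\tau\to\lambda^{2}\tau$,
$\psi\to\lambda^{-1/2}\psi$ the quadratic part of $S$ is left invariant,
while the interaction term picks up the net factor
$$
\underbrace{\lambda^{3}}_{d\tau\,dx}\;
\underbrace{\lambda^{-2}}_{\text{four fields}}\;
\underbrace{\lambda^{-2}}_{\text{two derivatives}}
=\lambda^{-1},
$$
so that the effective coupling indeed flows as $V\to V/\lambda$.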
This result also implies that the quasiparticle picture and
therefore the Boltzmann equation becomes asymptotically exact upon
approaching the QCP.\vskip-.25cm
\subsection{Conservative splitting method}\vskip-.25cm
For the numerical implementation of the Boltzmann equation we follow
Ref.~\cite{SAristov} and solve the time-dependent Boltzmann equation by
splitting the time evolution into a free-flow stage and a relaxation stage.
The advantage of the splitting procedure is that the distribution
obtained after a relaxation step can be corrected such that conservation
laws are fulfilled in each collision; see Ref.~\cite{SAristov} for details.
Free flow and relaxation steps were implemented by a finite
difference scheme, i.e. by discretizing two-dimensional phase-space.
We used a first order implicit-explicit upwind scheme to model the
free propagation step, and an implicit scheme for the relaxation
step~\cite{SAristov}. The steady state is reached when the currents of
the conserved quantities, $j_c$, $j_p$ and $j_E$, are constant along
the wire. For the linear response calculation we parametrize
$f_{x,p}=f^0_{p} +\delta f_{x,p}$, where $f^0_{p}$ is the
distribution at zero bias and $\delta f_{x,p}$ is linear in $V$, and
linearize the collision integral in $\delta f$. For the numerically
more demanding calculations employing the full collision integral
we used meshes with $22$ and $42$ points in momentum space and
various homogeneous and inhomogeneous discretizations of
space with $\sim100$ grid points. We checked that our results are
independent of the discretization.
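As an illustration of the splitting idea (and only as an illustration, not
the code used for our figures), one time step can be sketched in Python as
follows; the periodic boundary conditions and the relaxation-time form of
the collision step, with rate $\gamma$ towards a conserving local
equilibrium, are simplifying assumptions:
\begin{verbatim}
import numpy as np

def splitting_step(f, v, dt, dx, gamma, local_equilibrium):
    """One step: first-order upwind free flow, then implicit relaxation.

    f : (Nx, Np) distribution on the space/momentum grid (periodic in x)
    v : (Np,) group velocities
    local_equilibrium : returns, per grid point, the equilibrium with the
        same charge, momentum and energy as f, so that the conservation
        laws are fulfilled in each collision step
    """
    # free-flow stage: upwind differences chosen according to sign(v)
    backward = f - np.roll(f, 1, axis=0)
    forward = np.roll(f, -1, axis=0) - f
    f = f - dt / dx * v * np.where(v > 0, backward, forward)
    # relaxation stage: implicit step towards the local equilibrium
    feq = local_equilibrium(f)
    return (f + dt * gamma * feq) / (1.0 + dt * gamma)
\end{verbatim}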
\subsection{Linear conductance in long wires}\vskip-.25cm
A method to calculate the linear conductance in long wires
$\ell_{\rm eq}\ll L \ll |\ell_V|$, based on conservation laws and
the distribution of fully equilibrated electrons Eq.~(6) in the main
text, was originally introduced in Refs.~\cite{Sfeq,Speq}. As
described in the main text, we need to calculate the ratios
$r_1=j_E/j_c$, $r_2=\partial_x\delta T/\partial_x\delta\mu$ and
$r_3=\partial_x j_E^R/\partial_x j_c^R$. The current $j_c$ in
response to the applied voltage (to linear order $V$) is then found
from
\begin{align}
\label{r3}
r_3={r_1j_c-j_E^0\over j_c-j_c^0}
\end{align} where $j_c^0$ and $j_E^0$ are the (linear response)
charge and energy currents of non-interacting electrons, which are
directly given by Eq.~(2) of the main text,
\begin{align}
j_c^0 &= {e^2V\over h} \alpha_0,
\quad
j^0_E = {eV\over h}
\left(
\mu \alpha_0 + T \alpha_1
\right),
\quad
\alpha_k = \langle \xi^k \rangle_{z}
\end{align}
Here and in the following $\langle ... \rangle_{z}= - \int_{-{z}}^{\infty} d\xi (...) {d f^0_\xi \over d\xi}$,
$f^0_{\xi} ={1\over 1 + e^{\xi} }$, and $z= \mu/T$.
With these definitions we first calculate $r_1$. Using the
equilibrium distribution Eq.~(6) of the main text
we find
\begin{align}
r_1={j_E\over j_c} = {1\over e} \left( \mu + T\kappa \right),
\quad
\kappa = { \langle \xi \sqrt{1 + \xi/ z} \rangle_{z}
\over
\langle \sqrt{1 + \xi/ z }\rangle_{z}}
\end{align}
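For concreteness, the averages $\langle\dots\rangle_{z}$ and the ratio
$\kappa$ can be evaluated by numerical quadrature. The following short
Python script is our own illustration (it assumes $z=\mu/T>0$) and uses
$-df^0_\xi/d\xi = 1/[4\cosh^2(\xi/2)]$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def avg(g, z):
    # <g>_z = int_{-z}^infty g(xi) / (4 cosh^2(xi/2)) dxi
    return quad(lambda xi: g(xi) * 0.25 / np.cosh(xi / 2.0)**2,
                -z, np.inf)[0]

def kappa(z):
    num = avg(lambda xi: xi * np.sqrt(1.0 + xi / z), z)
    den = avg(lambda xi: np.sqrt(1.0 + xi / z), z)
    return num / den

alphas = [avg(lambda xi, k=k: xi**k, z=1.0) for k in (0, 1, 2)]
\end{verbatim}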
To calculate $r_2$ we use that momentum conservation implies
homogeneity of the momentum current in the steady state. The latter can
again be calculated with the help of Eq.~(6) of the main text by
expanding $T(x)=T+\delta T(x)$, $\mu(x)=\mu+\delta \mu(x)$ for
small $V$. To linear order in $V$, i.e., for small $\delta T$ and
$\delta \mu$, we find
\begin{align}
const. =
j_p(x) = n \delta\mu(x)
+
n \kappa \delta T(x) + const.,
\end{align}
resulting in
\begin{align}
r_2={\partial_x \delta T\over \partial_x \delta\mu }= -\kappa^{-1}
\end{align}
To obtain $r_3$ we can directly use the definition of $j_c^R$ and
$j_E^R$ given in the main text combined with the equilibrium
distribution function~(6) of the main text
\begin{align}
\label{dnr}
\partial_x j^R_c
= {e \alpha_0\over h} \partial_x \delta\mu
+ {e \alpha_1\over h} \partial_x \delta T
\end{align} and
\begin{align}
\label{der}
\partial_x j^R_E
=
{\mu\over e} \partial_x j^R_c
+ {T\over h} \left( \alpha_1 \partial_x \delta\mu
+ \alpha_2 \partial_x \delta T \right).
\end{align} For the ratio $r_3$ we therefore obtain
\begin{align}
r_3 = {\partial_x j^R_E\over \partial_x j^R_c}
= {\mu\over e} + {T\over e}{\alpha_1\kappa - \alpha_2\over \alpha_0\kappa - \alpha_1}
\end{align}
Inserting the above expressions into \eqref{r3} and solving for $j_c$
gives Eq.~(13) of the main text.
\subsection{Measuring voltage profiles}\vskip-.25cm
To measure the voltage drop across a quantum wire one can, for
example, use a weakly coupled tunneling contact realized by the tip of
a scanning tunneling microscope. Assuming a constant
tunneling matrix element $M$ and a constant density of states $\nu_0$ of the
tunneling tip, the charge current $I_c(x)$ and the energy current
$I_E(x)$ through the tip located at position $x$ are given by
\begin{eqnarray}\label{ic}
I_c(x)&=& {4\pi e\nu_0\over \hbar} |M|^2 \int dp\, (f_{x,p} - f^{\rm tip}_p(\tilde \mu,\tilde T)) \\
I_E(x)&=& {4\pi e\nu_0\over \hbar} |M|^2 \int dp\, \epsilon_p (
f_{x,p} - f^{\rm tip}_p(\tilde \mu,\tilde T)) \nonumber
\end{eqnarray}
Here $f_{x,p}$ is the distribution function of the wire and
$f^{\rm tip}_{x,p}=f^{\rm tip}_p(\tilde \mu(x),\tilde T(x))$
is the Fermi distribution describing the
occupation of the states in the tip.
The latter is parametrized by the chemical potential $\tilde \mu$ and the temperature $\tilde T$.
The local chemical potential $\mu(x)$ and the local temperature $T(x)$
of the quantum wire are defined by the condition that particle and
energy currents flow through the tip only if there is a difference
in chemical potential or temperature; that is, $T(x)$ and $\mu(x)$
are obtained from the condition
\begin{eqnarray}\label{def}
T(x)=\tilde T, \quad \mu(x)=\tilde \mu \quad {\rm for}\ I_c(x)=I_E(x)=0
\end{eqnarray}
Note that this definition can
be used for distribution functions far from equilibrium (and the usual
result is obtained in equilibrium).
Eq.~(\ref{ic}) implies that the local chemical potential and
temperature can be obtained directly from the local charge and energy
densities of the system.
In Fig.~\ref{fig5} we show, as symbols, the chemical potential and temperature profiles obtained
from the definitions (\ref{ic},\ref{def}) for a wire of length $L=10 \ell_{\rm eq}$
with $eV/T=0.4$ at the QCP ($\mu=0$). The separate points
at $\pm L/2$ denote the chemical potential in the two leads.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=82mm]{fig5a.eps}
\end{center}\vskip-.55cm
\caption{Local chemical potential $\mu(x)$ (blue) and temperature
$T(x)=T+\delta T(x)$ (red) (in units $|eV|$) for a wire of length
$L=10\ell_{\rm eq}$ and voltage $eV/T=0.4$. Symbols: Profiles
obtained using Eq.~(\ref{ic},\ref{def}), i.e. for voltage contacts in the
tunneling regime. Solid lines: Corresponding profiles obtained from an alternative fitting procedure (used in the main text) defined by Eq.~(\ref{defT2}). Both definitions give similar results for the parameters used in the paper.} \label{fig5}
\end{figure}
In the insets of Fig.~1 of the main text we also show $\mu(x)$ and
$T(x)$ but in this case we use a {\em different} definition of these
quantities, which is more appropriate to illustrate our analytical
arguments.
In the main text, we fit the local charge, energy {\em and} momentum
densities to the equilibrium distribution given by Eq.~(6) of the main text, which depends not only on $\mu$ and $T$ but also on
the average velocity $u$. At each point $x$, we therefore define the three space-dependent functions $u(x),\mu(x)$, and $T(x)$ by
\begin{eqnarray}\label{defT2}
\sum_p f_{x,p}&=&\sum_p f^{\rm eq}_p(u,\mu,T) \\
\sum_p \epsilon_p f_{x,p}&=&\sum_p \epsilon_p f^{\rm eq}_p(u,\mu,T) \nonumber \\
\sum_p p f_{x,p}&=&\sum_p p f^{\rm eq}_p(u,\mu,T) \nonumber
\end{eqnarray}
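A possible numerical implementation of this fitting procedure is sketched
below; purely for illustration we assume a boosted Fermi form
$f^{\rm eq}_p(u,\mu,T)=1/(1+e^{(\epsilon_p-pu-\mu)/T})$ for Eq.~(6) of the
main text, a quadratic dispersion with $m=1$, and a uniform momentum grid:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

p = np.linspace(-6.0, 6.0, 401)   # momentum grid (illustrative)
eps = 0.5 * p**2                  # dispersion with m = 1

def f_eq(u, mu, T):
    return 1.0 / (1.0 + np.exp((eps - p * u - mu) / T))

def fit_local(f_xp, guess=(0.0, 0.0, 1.0)):
    """Solve the three moment equations for (u, mu, T) at one point x."""
    target = [np.sum(f_xp), np.sum(eps * f_xp), np.sum(p * f_xp)]
    def residual(params):
        g = f_eq(*params)
        return [np.sum(g) - target[0],
                np.sum(eps * g) - target[1],
                np.sum(p * g) - target[2]]
    return fsolve(residual, guess)
\end{verbatim}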
The two definitions, (\ref{ic},\ref{def}) and
(\ref{defT2}), are different: momentum is not conserved by the tunneling tip, so in the first case we fit to an equilibrium function $f^{\rm tip}$ which depends only on the chemical potential and the temperature but not on the velocity.
For the range of applied voltages discussed in this paper, however, both
methods lead to nearly identical profiles. This is shown in
Fig.~\ref{fig5}, where temperature and chemical-potential profiles
obtained from Eqs.~ (\ref{ic},\ref{def}) (symbols) are compared to the
corresponding curves from Eq.~(\ref{defT2}) (lines) which are also used in the main text.
In conclusion, we have discussed an experimental procedure which allows one to measure the local chemical potential and the local temperature using tunneling contacts. While in the main text we used a different definition (the one needed for our analytic arguments), the resulting profiles are almost identical, which guarantees that the voltage and temperature profiles shown in the main text can be measured.
\section{Introduction}
Censoring and endogeneity are common problems in data analysis. For
example, income survey data are often top-coded and many economic
variables such as hours worked, wages and expenditure shares are
naturally bounded from below by zero. Endogeneity is also a
ubiquitous phenomenon both in experimental studies due to partial
noncompliance (Angrist, Imbens, and Rubin, 1996), and in
observational studies due to simultaneity (Koopmans and Hood, 1953),
measurement error (Frisch, 1934), sample selection (Heckman, 1979) or
more generally to the presence of relevant omitted variables. \
Censoring and endogeneity often come together. For example, we
motivate our analysis with the estimation of Engel curves for
alcohol -- the relationship between the share of expenditure on
alcohol and the household's budget. For this commodity, more than
15\% of the households in our sample report zero expenditure, and
economic theory suggests that total expenditure and its composition
are jointly determined in the consumption decision of the household.
Either censoring or endogeneity lead to inconsistency of traditional
mean and quantile regression estimators by inducing correlation
between regressors and error terms. We introduce a quantile
regression estimator that deals with both problems and name this
estimator the censored quantile instrumental variable (CQIV)
estimator.
Our procedure deals with censoring semiparametrically through the
conditional quantile function following Powell (1986). \ This
approach avoids the strong parametric assumptions of traditional
Tobit estimators. \ The key ingredient here is the equivariance
property of quantile functions to monotone transformations such as
censoring. \ Powell's censored quantile regression estimator,
however, has proven to be difficult to compute. \ We address this
problem using the computationally attractive algorithm of
Chernozhukov and Hong (2002). \ An additional advantage of focusing
on the conditional quantile function is that we can capture
nonadditive heterogeneity in the effects of the regressors across
the distribution of the response variable by computing CQIV at
different quantiles (Koenker, 2005). The traditional Tobit
framework rules out this heterogeneity by imposing a location shift
model.
We deal with endogeneity using a control variable approach. \ The
basic idea is to add a variable to the regression such that, once we
condition on this variable, regressors and error terms become
independent. \ This so-called control variable is usually
unobservable and needs to be estimated in a first stage. \ Our main
contribution here is to allow for semiparametric models with
infinite dimensional parameters and nonadditive error terms, such as
quantile regression and distribution regression, to model and
estimate the first stage and back out the control variable. \ This
part of the analysis constitutes the main theoretical difficulty
because the first stage estimators do not live in spaces with nice
entropic properties, unlike, for example, in Andrews (1994) or Newey
(1994). To overcome this problem, we develop a new technique to
derive asymptotic theory for two-stage procedures with plugged-in
first stage estimators that, while not living in Donsker spaces
themselves, can be suitably approximated by random functions that
live in Donsker spaces.\ The CQIV estimator is therefore obtained in
two stages that are nonadditive in the unobservables. The first
stage estimates the control variable, whereas the second stage
estimates a nonadditive censored quantile regression model for the
response variable of interest, including the estimated control
variable to deal with endogeneity.
We analyze the theoretical properties of the CQIV estimator in large
samples. \ Under suitable regularity conditions, CQIV is
$\sqrt{n}$-consistent and has a normal limiting distribution. \ We
characterize the expression of the asymptotic variance. \ Although
this expression can be estimated using standard methods, we find it
more convenient to use resampling methods for inference.\ We focus
on weighted bootstrap because it has practical advantages over
nonparametric bootstrap to deal with discrete regressors with small
cell sizes (Ma and Kosorok, 2005, and Chen and Pouzo, 2009). \ We
give regularity conditions for the consistency of weighted bootstrap
to approximate the distribution of the CQIV estimator. \ For our
leading cases of quantile and distribution regression estimation of
the control variable, we provide more primitive assumptions that
verify the regularity conditions for asymptotic normality and
weighted bootstrap consistency. \ The verification of these
conditions for quantile and distribution regression estimators of
the first stage is new to the best of our knowledge.
The CQIV estimator is simple to compute using standard statistical
software. \ We demonstrate its implementation through Monte-Carlo
simulations and an empirical application to the estimation of Engel
curves for alcohol. The results of the Monte-Carlo exercise
demonstrate that the performance of CQIV is comparable to that of
Tobit IV in data generated to satisfy the Tobit IV assumptions, and
it outperforms Tobit IV under heteroskedasticity. The results of the
application to Engel curves demonstrate the importance of accounting
for endogeneity and censoring in real data. Another application of
our CQIV estimator to the estimation of the price elasticity of
expenditure on medical care appears in Kowalski (2009).
\subsection{Literature review.}
There is an extensive previous literature on the control variable
approach to deal with endogeneity in models without censoring.
Hausman (1978) and Wooldridge (2010) discussed parametric triangular
linear and nonlinear models. \ Newey, Powell, and Vella (1999)
described the use of this approach in nonparametric triangular
systems of equations for the conditional mean, but limited the
analysis to models with additive errors both in the first and the
second stage. \ Lee (2007) set forth an estimation strategy using a
control variable approach for a triangular system of equations for
conditional quantiles with an additive nonparametric first stage. \
Imbens and Newey (2002, 2009) extended the analysis to triangular
nonseparable models with nonadditive error terms in both the first
and second stage. They focused on identification and nonparametric
estimation rates for average, quantile and policy effects. Our paper
complements Imbens and Newey (2002, 2009) by providing inference
methods and allowing for censoring. \ Chesher (2003) and Jun (2009)
considered local identification and semiparametric estimation of
uncensored triangular quantile regression models with a nonseparable
control variable. \ Relative to CQIV, these local methods impose
less structure in the model at the cost of slower rates of
convergence in estimation. \ While the previous papers focused on
triangular models, Blundell and Matzkin (2010) have recently derived
conditions for the existence of control variables in nonseparable
simultaneous equations models. \ We refer also to Matzkin (2007) for
an excellent comprehensive review of results on nonparametric
identification of triangular and simultaneous equations models.
Our work is also closely related to Ma and Koenker (2006). \ They
considered identification and estimation of quantile effects without
censoring using a parametric control variable. \ Their parametric
assumptions rule out the use of nonadditive models with infinite
dimensional parameters in the first stage, such as quantile and
distribution regression models in the first stage. \ In contrast,
our approach is specifically designed to handle the latter, and in
doing so, it puts the first stage and second stage models on the
equally flexible footing. \ Allowing for a nonadditive infinite
dimensional control variable makes the analysis of the asymptotic
properties of our estimator very delicate and requires developing
new proof techniques. \ In particular, we need to deal with control
variable estimators depending on random functions that do not live
in Donsker classes. \ We address this difficulty approximating these
functions with sufficient degree of accuracy by smoother functions
that live in Donsker classes. \ In the case of quantile and
distribution regression, we carry out this approximation by
smoothing the empirical quantile regression and distribution
regression processes using third order kernels.
For models with censoring, the literature is more sparse. Smith and
Blundell (1986) pioneered the use of the control variable approach
to estimate a triangular parametric additive model for the
conditional mean. \ More recently, Blundell and Powell (2007)
proposed an alternative censored quantile instrumental variable
estimator that assumes additive errors in the first stage. Our
estimator allows for a more flexible nonadditive first stage
specification.
\subsection{Plan of the paper.} The rest of the paper is organized as
follows. In Section \ref{model}, we present the CQIV model, and
develop estimation and inference methods for the parameters of
interest of this model. In Section \ref{montecarlo}, we describe the
associated computational
algorithms and present results from a Monte-Carlo simulation exercise. In Section \ref%
{engel}, we present an empirical application of CQIV to Engel
curves. In Section \ref{conclusion}, we provide conclusions and
discuss potential empirical applications of CQIV. The proofs of the
main results, and additional details on the computational algorithms
and numerical examples are given in the Appendix.
\section{Censored Quantile Instrumental Variable Regression}
\label{model}
\subsection{The Model}
We consider the following triangular system of quantile equations:
\begin{eqnarray}
Y &=& \max (Y^{\ast },C), \label{2.1} \\
Y^{\ast } &=& Q_{Y^{\ast }}(U \mid D,W,V), \label{2.2} \\
D &=& Q_{D}(V \mid W,Z). \label{2.3}
\end{eqnarray}%
In this system, $Y^{\ast }$ is a continuous latent response
variable, the observed variable $Y$ is obtained by censoring
$Y^{\ast }$ from below at the level determined by the variable $C$,
$D$ is the continuous regressor of interest, $W$ is a vector of
covariates, possibly containing $C$, $V$ is a latent unobserved
regressor that accounts for the possible endogeneity of $D$, and $Z$
is a vector of ``instrumental variables'' excluded from
(\ref{2.2}).\footnote{We focus on left censored response variables
without loss of generality. If $Y$ is right censored at $C$, $Y =
\min(Y^{\ast },C)$, the analysis of the paper applies without change
to $\widetilde Y = - Y$, $\widetilde Y^{\ast} = - Y^{\ast}$,
$\widetilde C = - C$, and $Q_{\widetilde Y^{\ast }} = - Q_{Y^{\ast
}}$, because $\widetilde Y = \max(\widetilde Y^{\ast }, \widetilde
C)$.} Further, $u\mapsto Q_{Y^{\ast }}(u \mid D,W,V)
$ is the conditional quantile function of $Y^{\ast }$ given $(D,W,V)$; and $%
v\mapsto Q_{D}(v \mid W,Z)$ is the conditional quantile function of
the regressor $D$ given $(W,Z)$. $\ $Here, $U$ is a Skorohod
disturbance for $Y$ that satisfies the independence assumption
\begin{equation*}
U\sim U(0,1) \mid D,W,Z,V,C,
\end{equation*}%
and $V$ is a Skorohod disturbance for $D$ that satisfies
\begin{equation*}
V\sim U(0,1) \mid W,Z,C.
\end{equation*}%
In the last two equations, we make the assumption that the censoring
variable $C$ is independent of the disturbances $U$ and $V$. This
variable can, in principle, be included in $W$.
To recover the conditional quantile function of the latent response
variable in equation (\ref{2.2}), it is important to condition on an
unobserved regressor $V$ which plays the role of a \textquotedblleft
control variable.\textquotedblright \ Equation (\ref{2.3}) allows us
to recover this unobserved regressor as a residual that explains
movements in the variable $D$, conditional on the set of instruments
and other covariates. \
In the Engel curve application, $Y$ is the expenditure share in
alcohol, bounded from below at $C=0$, $D$ is total expenditure on
nondurables and services, $W$ are household demographic
characteristics, and $Z$ is labor income measured by the earnings of
the head of the household. Total expenditure is likely to be jointly
determined with the budget composition in the household's allocation
of income across consumption goods and leisure. Thus, households
with a high preference to consume ``non-essential'' goods such as
alcohol tend to spend a higher proportion of their income and
therefore to have a higher total expenditure. The control variable $V$ in
this case is the marginal propensity to consume, measured by the
household ranking in the conditional distribution of expenditure
given labor income and household characteristics. This propensity
captures unobserved preference variables that affect both the level
and composition of the budget. Under the conditions for a two stage
budgeting decision process (Gorman, 1959), where the household first
divides income between consumption and leisure/labor and then decides
the consumption allocation, some sources of income can provide
plausible exogenous variation with respect to the budget shares. For
example, if preferences are weakly separable in consumption and
leisure/labor, the consumption budget shares do not depend on labor
income given the consumption expenditure (see, e.g., Deaton and
Muellbauer, 1980). This justifies the use of labor income as an
exclusion restriction.
An example of a structural model that has the triangular representation
(\ref{2.2})-(\ref{2.3}) is the following system of equations:
\begin{eqnarray}
Y^{\ast } &=& g_Y(D,W,\epsilon), \label{eq: normal1} \\
D &=& g_{D}(W,Z, V), \label{eq: normal2}
\end{eqnarray}%
where $g_Y$ and $g_D$ are increasing in their third arguments, and
$\epsilon \sim U(0,1)$ and $V \sim U(0,1)$ independent of $(W,Z,C)$.
By the Skorohod representation for $\epsilon$, $\epsilon =
Q_{\epsilon}(U \mid V) = g_{\epsilon}(V,U)$, where $U \sim U(0,1)$
independent of $(D,W,Z,V,C)$. The corresponding conditional
quantile functions have the form of (\ref{2.2}) and (\ref{2.3}) with
\begin{eqnarray*}
Q_{Y^{\ast }}(u\mid D,W,V) &=& g_{Y}(D,W, g_{\epsilon}(V,u)), \\
Q_{D}(v\mid W,Z) &=& g_{D}(W,Z,v).
\end{eqnarray*}
In the Engel curve application, we can interpret $V$ as the marginal
propensity to consume out of labor income and $U$ as the unobserved
household preference to spend on alcohol relative to
households with the same characteristics $W$ and marginal
propensity to consume $V$.
In the system of equations (\ref{2.1})--(\ref{2.3}), the observed
response variable has the quantile representation
\begin{equation}
Y=Q_{Y}(U\mid D,W,V,C)=\max (Q_{Y^{\ast }}(U\mid D,W,V),C),
\label{2.5}
\end{equation}%
by the equivariance property of the quantiles to monotone
transformations.
For example, the quantile function for the observed response in the
system of equations (\ref{eq: normal1})--(\ref{eq: normal2}) has the
form:
\begin{equation*}
Q_{Y}(u\mid D,W,V,C)= \max \{g_{Y}(D,W, g_{\epsilon}(V,u)),C\}.
\end{equation*}%
Whether the response of interest is the latent or observed variable
depends on the source of censoring (e.g., Wooldridge, 2010). When
censoring is due to data limitations such as top-coding, we are
often interested in the conditional quantile function of the latent
response variable $Q_{Y^{\ast}}$ and marginal effects derived from
this function. For example, in the system (\ref{eq:
normal1})--(\ref{eq: normal2}) the marginal effect of the endogenous
regressor $D$ evaluated at $(D,W,V,U) = (d,w,v,u)$ is
$$\partial_d Q_{Y^{\ast}}(u \mid d,w,v) =
\partial_d g_{Y}(d,w, g_{\epsilon}(v,u)),$$ which corresponds
to the ceteris paribus effect of a marginal change of $D$ on the
latent response $Y^{\ast}$ for individuals with $(D,W,\epsilon) =
(d,w,g_{\epsilon}(v,u))$. When the censoring is due to economic or
behavioral reasons such as corner solutions, we are often
interested in the conditional quantile function of the observed
response variable $Q_{Y}$ and marginal effects derived from this
function. For example, in the system (\ref{eq: normal1})--(\ref{eq:
normal2}) the marginal effect of the endogenous regressor $D$
evaluated at $(D,W,V,U,C) = (d,w,v,u,c)$ is
$$\partial_d Q_{Y}(u \mid d,w,v,c) =
1\{g_{Y}(d,w, g_{\epsilon}(v,u)) > c\} \partial_d g_{Y}(d,w,
g_{\epsilon}(v,u)),$$ which corresponds to the ceteris paribus
effect of a marginal change of $D$ on the observed response $Y$ for
individuals with $(D,W,\epsilon,C) = (d,w,g_{\epsilon}(v,u),c)$.
Since either of the marginal effects might depend on individual
characteristics, average marginal effects or marginal effects
evaluated at interesting values are often reported.
\subsection{Generic Estimation}
To make estimation both practical and realistic, we impose a
flexible semiparametric restriction on the functional form of the
conditional quantile function in (\ref{2.2}). In particular, we
assume that
\begin{equation}\label{define 1 model}
Q_{Y^{\ast} }(u\mid D,W,V)=X^{\prime }\beta_0 (u),\ \ X = x(D,W,V),
\end{equation}%
where $x(D,W,V)$ is a vector of transformations of the initial
regressors $(D,W,V)$. \ The transformations could be, for example,
polynomial, trigonometric, B-spline or other basis functions that
have good approximating properties for economic problems.
An important property of this functional form is linearity in
parameters, which is very convenient for computation.
The resulting conditional quantile function of the censored random
variable
$$
Y= \max(Y^*,C),
$$
is given by
\begin{equation}\label{define 2 model}
Q_{Y}(u\mid D,W,V,C)=\max (X^{\prime }\beta_0 (u),C).
\end{equation}%
This is the standard functional form for the censored quantile
regression (CQR) first derived by Powell (1984) in the exogenous
case.
Given a random sample $\{Y_i,D_i,W_i,Z_i,C_i\}_{i = 1}^{n}$, we form
the estimator for the parameter $\beta_0(u)$ as
\begin{equation}\label{eq: cqiv}
\widehat{\beta }(u)=\arg \min_{\beta \in \mathbb{R}^{\dim(X)}}\frac{1}{n}\sum_{i=1}^{n} 1(%
\widehat{S}_{i}^{\prime }\widehat{\gamma}>\varsigma)\rho
_{u}(Y_{i}-\widehat X_{i}^{\prime }\beta),
\end{equation}%
where $\rho _{u}(z)=(u-1(z<0))z$ is the asymmetric absolute loss
function of
Koenker and Bassett (1978), $\widehat{X}_{i}=x(D_{i},W_{i},\widehat{V_{i}})$%
, $\widehat{S}_{i}=s(\widehat{X}_{i},C_i),$ $s(X,C)$ is a vector
of transformations of $(X,C)$, and $\widehat{V_{i}}$ is an estimator of $%
V_{i}$. This estimator adapts the algorithm for the CQR estimator
developed in Chernozhukov and Hong (2002) to deal with endogeneity.
We call the multiplier $1(\widehat{S}_{i}^{\prime
}\widehat{\gamma}>\varsigma)$ the selector, as its purpose is to
predict the subset of individuals for which the probability of
censoring is sufficiently low to permit using a linear -- in place
of a censored linear -- functional form for the conditional
quantile. We formally state the conditions on the selector in the
next subsection. The estimator in $(\ref{eq: cqiv})$ may be seen as
a computationally attractive approximation to the Powell estimator
applied to our case:
\begin{equation*}
\widehat{\beta }_{p}(u)=\arg \min_{\beta \in \mathbb{R}^{\dim(X)}}\frac{1}{n}%
\sum_{i=1}^{n} \rho _{u}[Y_{i}-\max (\widehat{X}_{i}^{\prime}\beta ,
C_i)].
\end{equation*}
The CQIV estimator will be computed using an iterative procedure
where each step will take the form specified in equation (\ref{eq:
cqiv}). We start selecting the set of ``quantile-uncensored''
observations for which the conditional quantile function is above
the censoring point. We implement this step by estimating the
conditional probabilities of censoring using a flexible binary
choice model. Quantile-uncensored observations have probability of
censoring lower than the quantile index $u$. We estimate the linear
part of the conditional quantile function, $X_i'\beta_0(u)$, on the
sample of quantile-uncensored observations by standard quantile
regression. Then, we update the set of quantile-uncensored
observations by selecting those observations with conditional
quantile estimates that are above their censoring points and
iterate. We provide more practical implementation details in the
next section.
The control variable $V$ can be estimated in several ways. Note that
if $Q_D(v \mid W,Z)$ is invertible in $v$, the control variable has
several equivalent representations:
\begin{equation}\label{define 3 model}
V= \vartheta_0(D,W,Z)\equiv F_{D}(D \mid W,Z) \equiv
Q_{D}^{-1}(D\mid W,Z)\equiv \int_{0}^{1}1\{Q_{D}(v\mid W,Z)\leq
D\}dv.
\end{equation}%
For any estimator of $F_{D}(D \mid W,Z)$ or $Q_{D}(V\mid W,Z)$,
denoted by $\widehat F_{D}(D \mid W,Z)$ or $\widehat Q_{D}(V\mid
W,Z)$, based on any parametric or semi-parametric functional form,
the resulting estimator for the control variable is
\begin{equation*}
\widehat{V}=\widehat{\vartheta}(D,W,Z)\equiv \widehat{F}_{D}(D\mid
W,Z) \text{ or } \widehat{V}=\widehat{\vartheta}(D,W,Z)\equiv
\int_{0}^{1}1\{\widehat{Q}_{D}(v\mid W,Z)\leq D\}dv.
\end{equation*}%
Here we consider several examples: in the classical additive
location model, we have that $Q_{D}(v\mid W,Z)=R^{\prime }\pi_0
+Q_{V}(v),$ where $Q_{V}$ is a quantile function, and $R = r(W,Z)$
is a vector collecting transformations of $W$ and $Z$. The control
variable is
\begin{equation*}
V=Q_{V}^{-1}(D-R^{\prime }\pi_0 ),
\end{equation*}%
which can be estimated by the empirical CDF of the least squares
residuals. Chernozhukov, Fernandez-Val and Melly (2009) developed
asymptotic theory for this estimator. If $D \mid W,Z \sim N(R'\pi_0,
\sigma^2)$, the control variable has the common parametric form $V =
\Phi^{-1}([D-R^{\prime }\pi_0]/\sigma)$, where $\Phi^{-1}$ denotes
the quantile function of the standard normal distribution. This
control variable can be estimated by plugging in estimates of the
regression coefficients and residual variance.
In a non-additive quantile regression model, we have that
$Q_{D}(v\mid W,Z)=R^{\prime }\pi_0 (v),$ and
\begin{equation*}
V= Q_{D}^{-1}(D\mid W,Z) = \int_{0}^{1}1\{R^{\prime }\pi_0 (v)\leq
D\}dv.
\end{equation*}%
The estimator takes the form
\begin{equation}
\widehat{V}=\int_{0}^{1}1\{R^{\prime }\widehat{\pi }(v)\leq D\}dv,
\label{equation: fe_qr_estimator}
\end{equation}%
where $\widehat{\pi }(v)$ is the Koenker and Bassett (1978) quantile
regression estimator and the integral can be approximated
numerically using a finite grid of quantiles. The use of the
integral to obtain a generalized inverse is convenient to avoid
monotonicity problems in $v \mapsto R^{\prime }\widehat{\pi }(v)$
due to misspecification or sampling error. Chernozhukov,
Fernandez-Val, and Galichon (2010) developed asymptotic theory for
this estimator.
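For illustration, the grid approximation to (\ref{equation: fe_qr_estimator}) only requires a standard quantile regression routine. A
minimal Python sketch, using the \texttt{QuantReg} implementation in
\texttt{statsmodels} and an arbitrary grid of $99$ quantiles, is:
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

def control_qr(D, R, grid=np.arange(0.01, 1.0, 0.01)):
    """V_i = (1/|grid|) sum_v 1{R_i' pi_hat(v) <= D_i}.

    D : (n,) endogenous regressor; R : (n, k) transformations of
    (W, Z), including a constant.
    """
    indicator = np.zeros(len(D))
    for v in grid:
        pi_v = sm.QuantReg(D, R).fit(q=v).params
        indicator += (R @ pi_v <= D)
    return indicator / len(grid)
\end{verbatim}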
We can also estimate $\vartheta_0$ using distribution regression. In
this case we consider a semiparametric model for the conditional
distribution of $D$ to construct a control variable
$$
V = F_D(D \mid W,Z) = \Lambda(R' \pi_0 (D)),
$$
where $\Lambda$ is a probit or logit link function. The estimator
takes the form
\begin{equation*}
\widehat V = \Lambda(R' \widehat{\pi}(D)),
\end{equation*}
where $\widehat{\pi}(d)$ is the maximum likelihood estimator of
$\pi_0(d)$ at each $d$ (see, e.g., Foresi and Peracchi, 1995, and
Chernozhukov, Fernandez-Val and Melly, 2009). Chernozhukov,
Fernandez-Val and Melly (2009) developed asymptotic theory for this
estimator.
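Analogously, a minimal sketch of the distribution regression estimator
with a probit link is given below; the grid of thresholds and the rule of
evaluating each observation at the threshold nearest to its own $D_i$ are
discretization choices:
\begin{verbatim}
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def control_dr(D, R, n_grid=99):
    """V_i = Lambda(R_i' pi_hat(D_i)) with probit link Lambda."""
    grid = np.quantile(D, np.linspace(0.01, 0.99, n_grid))
    coefs = np.array([sm.Probit((D <= d).astype(float), R).fit(disp=0).params
                      for d in grid])
    # evaluate each observation at the grid threshold closest to D_i
    nearest = np.abs(D[:, None] - grid[None, :]).argmin(axis=1)
    return norm.cdf(np.einsum('ij,ij->i', R, coefs[nearest]))
\end{verbatim}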
\subsection{Regularity Conditions for Estimation}
In what follows, we shall use the following notation. We let the
random vector $A= (Y,D,W,Z,C,X,V)$ live on some probability space
$(\Omega_0, \mathcal{F}_0, P)$. Thus, the probability measure $P$
determines the law of $A$ or any of its elements. We also let
$A_1,...,A_n$, i.i.d. copies of $A$, live on the complete
probability space $(\Omega, \mathcal{F}, \Pr)$, which contains the
infinite product of $(\Omega_0, \mathcal{F}_0, P)$. Moreover, this
probability space can be suitably enriched to carry also the random
weights that will appear in the weighted bootstrap. The distinction
between the two laws $P$ and $\Pr$ is helpful to simplify the
notation in the proofs and in the analysis. Calligraphic letters such
as $\mathcal{Y}$ and $\mathcal{X}$ denote the support of $Y$ and
$X$; and $\mathcal{YX}$ denotes the joint support of $(Y,X)$. Unless
explicitly mentioned, all functions appearing in the statements are
assumed to be measurable.
We now state formally the assumptions. The first assumption is our
model.
\begin{condition}[Model] We have $\{Y_{i},D_{i},W_{i},Z_{i},C_{i}\}_{i =
1}^{n}$, a sample of size $n$ of independent and identically
distributed observations from the random vector $(Y,D,W,Z,C)$ which
obeys the model assumptions stated in equations (\ref{define 1
model}) - (\ref{define 3 model}), i.e.
\begin{eqnarray*}
&& Q_{Y}(u \mid D,W,Z,V,C) = Q_{Y}(u \mid X,C) = \max(X'\beta_0(u),C), \ \ X = x(D,W,V), \\
&& V = \vartheta_0(D,W,Z) \equiv F_{D}(D \mid W,Z) \sim U(0,1) \mid
W,Z.
\end{eqnarray*}
\end{condition}
The second assumption imposes compactness and smoothness conditions.
Compactness can be relaxed at the cost of more complicated and
cumbersome proofs, while the smoothness conditions are fairly tight.
\begin{condition}[Compactness and smoothness] \ \ (a) The set
$\mathcal{YDWZCX}$ is compact. (b) The endogenous regressor $D$ has
a continuous conditional density $f_{D}(\cdot \mid w,z)$ that is
bounded above by a constant uniformly in $(w,z) \in \mathcal{WZ}$. (c) The random variable $Y$ has a
conditional density $f_{Y}(y \mid x,c)$ on $(c,\infty)$ that is
uniformly continuous in $y \in (c,\infty)$ uniformly in $(x,c) \in
\mathcal{XC}$, and bounded above by a constant uniformly in $(x,c)
\in \mathcal{XC}$. (d) The derivative vector $\partial_v x(d,w,v)$
exists and its components are uniformly continuous in $v\in [0,1]$
uniformly in $(d,w) \in \mathcal{DW}$, and are bounded in absolute
value by a constant, uniformly in $(d,w,v) \in \mathcal{DWV}$.
\end{condition}
The following assumption is a high-level condition on the
function-valued estimator of the control variable. We assume that it
has an asymptotic functional linear representation. Moreover, this
functional estimator, while not necessarily living in a Donsker
class, can be approximated by a random function that does live in a
Donsker class. We will fully verify this condition for the case of
quantile regression and distribution regression under more primitive
conditions.
\begin{condition}[Estimator of the control variable]\label{condition:control}
We have an estimator of the control variable of the form
$\widehat{V}=\widehat{\vartheta}(D,W, Z),$ such that uniformly over
$(d,w,z) \in \mathcal{DWZ}$, (a)
\begin{equation*}
\sqrt{n}(\widehat{\vartheta}(d,w,z)-\vartheta_0(d,w,z))= \frac{1}{\sqrt{n}}%
\sum_{i=1}^{n} \ell(A_i, d,w,z) + o_{\Pr}(1), \ \ {\mathrm{E}}_{P}[\ell(A,
d,w,z)]=0,
\end{equation*}%
where ${\mathrm{E}}_{P}[\ell(A, D,W,Z)^2] < \infty$ and $\| \frac{1}{\sqrt{n}}%
\sum_{i=1}^{n} \ell(A_i, \cdot)\|_{\infty} = O_{\Pr}(1)$, and (b)
$$
\|\widehat \vartheta - \widetilde \vartheta\|_{\infty} = o_{\Pr}(1/\sqrt{n}), \ \ \text{ for } \ \ \widetilde \vartheta \in \Upsilon,
$$
where the entropy of the function class $\Upsilon$ is not too high, namely
$$
\log N (\epsilon, \Upsilon, \|\cdot\|_{\infty}) \lesssim 1/(\epsilon \log^2 (1/
\epsilon)), \text{ for all } 0 < \epsilon < 1.
$$
\end{condition}
The following assumptions are on the selector. The first part is a
high-level condition on the estimator of the selector. The second
part is a smoothness condition on the index that defines the
selector. We shall verify that the CQIV estimator can act as a
legitimate selector itself. Although the statement is involved, this
condition can be easily satisfied as explained below.
\begin{condition}[Selector]\label{condition: selector}
(a) The selection rule has the form
\begin{equation*}
1[s(x(D,W,\widehat V),C)^{\prime }\widehat{\gamma}>\varsigma],
\end{equation*}%
for some $\varsigma > 0$, where $\widehat{\gamma}\rightarrow
_{\Pr}\gamma_0$ and, for some $\epsilon'>0$,
\begin{equation*}
1[S^{\prime }\gamma_0 >\varsigma/2] \leq 1[X'\beta_0(u)>C + \epsilon']
\leq 1[X'\beta_0(u)>C ] \text{ $P$-a.e.,}
\end{equation*}%
where $S = s(X,C)$ and $1[X'\beta_0(u)>C ]\equiv 1[P(Y = C \mid
Z,W,V)<u ]$. (b) The set $\mathcal{S}$ is compact. (c) The density
of the random variable $s(x(D,W,\vartheta(D,W,Z)),C)'\gamma$ exists
and is bounded above by a constant, uniformly in $\gamma \in
\Gamma$
and in $\vartheta \in \Upsilon$, where $\Gamma$ is an open neighborhood
of $\gamma_0$ and $\Upsilon$ is defined in Assumption \ref{condition:control}. (d) The
components of the derivative vector $\partial_v s(x(d,w,v),c)$ are uniformly continuous at each
$v\in [0,1]$, uniformly in $(d,w,c) \in \mathcal{DWC}$, and are
bounded in absolute value by a constant, uniformly in $(d,w,v,c) \in
\mathcal{DWVC}$.
\end{condition}
The next assumption is a sufficient condition to guarantee local
identification of the parameter of interest as well as
$\sqrt{n}$-consistency and asymptotic normality of the estimator.
\begin{condition}[Identification and non-degeneracy]
(a) The matrix
\begin{equation*}
J(u) := {\mathrm{E}}_P [f_{Y}(X^{\prime}\beta_0(u) \mid X,C) X X^{\prime} \
1( S' \gamma_0 >\varsigma )]
\end{equation*}
is of full rank. (b) The matrix
\begin{equation*}
\Lambda (u) := \mathrm{Var}_P[ f(A) + g(A) \ ],
\end{equation*}
is finite and is of full rank, where
$$
f(A) := \{1(Y< X^{\prime}\beta_0(u)) - u\} X 1 (S^{\prime}\gamma_0
> \varsigma),
$$
and, for $\dot{X} = \partial_{v} x(D, W, v)|_{v= V}$,
$$
g(A) := {\mathrm{E}}_P[ f_{Y} (X'\beta_0(u) \mid X, C) X \dot X'\beta_0(u) 1
( S'\gamma_0 > \varsigma) \ell(a,D,W,Z) ] \big|_{a=A}.
$$
\end{condition}
Assumption 4(a) requires the selector to find a subset of the
quantile-uncensored observations, whereas Assumption 5 requires the
selector to find a nonempty subset. Given $\widehat \beta^0(u)$, an
initial consistent estimator of $\beta_0(u)$, we can form the
selector as $1[s(x(D,W,\widehat V),C)'\widehat \gamma > \varsigma],$
where $s(x(D,W,\widehat V),C) = [x(D,W,\widehat V)', C]'$, $\widehat
\gamma = [\widehat \beta^0(u)', -1]'$, and $\varsigma$ is a small
fixed cut-off that ensures that the selector is asymptotically
conservative but nontrivial. To find $\widehat \beta^0(u)$, we use
a selector based on a flexible model for the probability of
censoring. This model does not need to be correctly specified under
a mild separating hyperplane condition for the quantile-uncensored
observations (Chernozhukov and Hong, 2002). Alternatively, we can
estimate a fully nonparametric model for the censoring
probabilities. We do not pursue this approach to preserve the
computational appeal of the CQIV estimator.
\subsection{Main Estimation Results}
The following result states that the CQIV estimator is consistent,
converges to the true parameter at a $\sqrt{n}$ rate, and is
normally distributed in large samples.
\begin{theorem}[Asymptotic distribution of CQIV]
Under the stated assumptions
\begin{equation*}
\sqrt{n}(\widehat\beta(u) - \beta_0(u)) \to_d N(0, J^{-1}(u) \Lambda
(u) J^{-1}(u)).
\end{equation*}
\end{theorem}
We can estimate the variance-covariance matrix using standard
methods and carry out analytical inference based on the normal
distribution. Estimators for the components of the variance can be
formed, e.g., following Powell (1991) and Koenker (2005). However,
this is not very convenient for practice due to the complicated form
of these components and the need to estimate conditional densities.
Instead, we suggest using weighted bootstrap (Chamberlain and
Imbens, 2003, Ma and Kosorok, 2005, Chen and Pouzo, 2009) and prove
its validity in what follows.
We focus on weighted bootstrap because it has practical advantages
over nonparametric bootstrap to deal with discrete regressors with
small cell sizes and the proof of its consistency is not overly
complex, following the strategy set forth by Ma and Kosorok (2005).
Moreover, a particular version of the weighted bootstrap, with
exponentials acting as weights, has a nice Bayesian interpretation
(Chamberlain and Imbens, 2003).
To describe the weighted bootstrap procedure in our setting, we
first introduce the ``weights''.
\begin{condition}[Bootstrap weights] The weights
$(e_1, ..., e_n)$ are i.i.d. draws from a random variable $e \geq
0$, with ${\mathrm{E}}_P[e] = 1$ and $\mathrm{Var}_P[e] = 1$, living on the
probability space $(\Omega, \mathcal{F},\Pr)$ and are independent of
the data $\{Y_{i},D_{i},W_{i},Z_{i},C_i\}_{i=1}^n$ for all $n$.
\end{condition}
\noindent \textbf{Remark 1} (Bootstrap weights). The chief and
recommended example of bootstrap weights is given by $e$ set to be
the standard exponential random variable. Note that for other
positive random variables with ${\mathrm{E}}_P[e] = 1$ but $\mathrm{Var}_P[e]
> 1$, we can take the transformation $\tilde e = 1 + (e -
1)/\mathrm{Var}_P[e]^{1/2}$, which satisfies ${\mathrm{E}}_P[\tilde e] =1$ and
$\mathrm{Var}_P[\tilde e]=1$; moreover, $\tilde e \geq 1 -
\mathrm{Var}_P[e]^{-1/2} \geq 0$, since $e \geq 0$ and $\mathrm{Var}_P[e] > 1$.
\medskip
The weights act as sampling weights in the bootstrap procedure. In
each repetition, we draw a new set of weights $(e_1,\ldots, e_n)$
and recompute the CQIV estimator in the weighted sample. We refer to
the next section for practical details, and here we define the
quantities needed to verify the validity of this bootstrap scheme.
Specifically, let $\widehat V_{i}^e$ denote the estimator of the
control variable for observation $i$ in the weighted sample, such as
the quantile regression or distribution regression based estimators
described in the next section. The CQIV estimator in the weighted
sample solves
\begin{equation}\label{eq: weighted cqiv}
\widehat{\beta }^{e}(u) =\arg \min_{\beta \in \mathbb{R}^{\dim(X)}}\frac{1}{n}\sum_{i=1}^{n} e_i 1(%
\widehat{\gamma}^{\prime}\widehat{S}_{i}^{e} >\varsigma)\rho
_{u}(Y_{i}-\beta^{\prime}\widehat X_{i}^{e }),
\end{equation}
where $\widehat{X}_{ i}^e = x(D_i, W_i, \widehat V_{ i}^e)$,
$\widehat{S}_{ i}^e = s(\widehat{X}_{ i}^e, C_i)$,
and
$\widehat{\gamma}$ is a consistent estimator of the selector. Note
that we do not need to recompute $\widehat \gamma$ in the weighted
samples, which is convenient for computation.
We make the following assumptions about the estimator of the control
variable in the weighted sample.
\begin{condition}[Weighted estimator of control variable]
Let $(e_1, \ldots, e_n)$ be a sequence of weights that satisfies
Assumption 6. We have an estimator of the control variable of the
form $\widehat{V}^e=\widehat{\vartheta}^e(D,W, Z),$ such that
uniformly over $\mathcal{DWZ}$,
\begin{equation*}
\sqrt{n}(\widehat{\vartheta}^e(d,w,z)-\vartheta_0(d,w,z))= \frac{1}{\sqrt{n}}%
\sum_{i=1}^{n} e_i \ell(A_i, d,w,z) + o_{\Pr}(1), \ \ {\mathrm{E}}_P[\ell(A,
d,w,z)]=0,
\end{equation*}%
where ${\mathrm{E}}_P[\ell(A, D,W,Z)^2] < \infty$ and $\| \frac{1}{\sqrt{n}}%
\sum_{i=1}^{n} e_i \ell(A_i, \cdot)\|_{\infty} = O_{\Pr}(1)$, and
$$
\|\widehat \vartheta^e - \widetilde \vartheta^e\|_{\infty} =
o_{\Pr}(1/\sqrt{n}), \ \ \text{ for } \ \ \widetilde \vartheta^e
\in \Upsilon,
$$
where the entropy of the function class $\Upsilon$ is not too high, namely
$$
\log N (\epsilon, \Upsilon, \|\cdot\|_{\infty}) \lesssim
1/(\epsilon \log^2 (1/ \epsilon)), \text{ for all } 0 < \epsilon <
1.
$$
\end{condition}
Basically this is the same condition as Assumption 3 in the
unweighted sample, and therefore both can be verified using
analogous arguments. Note also that the condition is stated under
the probability measure $\Pr$, i.e. unconditionally on the data,
which actually simplifies verification. We give primitive conditions
that verify this assumption for quantile and distribution regression
estimation of the control variable in the next section.
The following result shows the consistency of
weighted bootstrap to approximate the asymptotic distribution of the
CQIV estimator.
\begin{theorem}[Weighted-bootstrap validity for CQIV]
Under the stated assumptions, conditionally on the data
\begin{equation*}
\sqrt{n}(\widehat\beta^{e} (u) - \widehat \beta(u)) \to_d N(0,
J^{-1}(u) \Lambda(u) J^{-1}(u)),
\end{equation*}
in probability under $\Pr$.
\end{theorem}
Note that the statement above formally means that the distance
between the law of $\sqrt{n}(\widehat\beta^{e} (u) - \widehat
\beta(u))$ conditional on the data and the law of the normal vector
$N(0, J^{-1}(u) \Lambda(u) J^{-1}(u))$, as measured by any metric
that metrizes weak convergence, converges in probability to zero.
More specifically,
$$
d_{BL}\{ \mathcal{L}[\sqrt{n}(\widehat\beta^{e} (u) - \widehat
\beta(u))| \text{data}], \mathcal{L}[N(0, J^{-1}(u) \Lambda(u)
J^{-1}(u))] \} \to_{\Pr} 0,
$$
where $d_{BL}$ denotes the bounded Lipschitz metric.
In practice, we approximate
numerically the distribution of $ \sqrt{n}(\widehat\beta^{e} (u) -
\widehat \beta(u))$ conditional on the data by simulation. For $b =
1,\ldots,B,$ we compute $\widehat\beta^{e}_{b} (u)$ solving the
problem (\ref{eq: weighted cqiv}) with the data fixed and a set of
weights $(e_{1b}, ..., e_{nb})$ randomly drawn for a distribution
that satisfies Assumption 6. By Theorem 2, we can use the empirical
distribution of $\sqrt{n}(\widehat\beta^{e}_b (u) - \widehat
\beta(u))$ to make asymptotically valid inference on $\beta_0(u)$
for large $B$.
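In code, the procedure reduces to a simple loop. In the sketch below,
\texttt{cqiv\_fn} stands for any user-supplied routine implementing the
weighted estimator (\ref{eq: weighted cqiv}) (the unweighted estimator is
sketched in the next section); standard exponential weights satisfy
Assumption 6:
\begin{verbatim}
import numpy as np

def bootstrap_draws(cqiv_fn, data, u, B=500, seed=0):
    """Return B weighted-bootstrap draws of the CQIV estimator at u."""
    rng = np.random.default_rng(seed)
    n = len(data)
    return np.array([cqiv_fn(data, u, weights=rng.exponential(size=n))
                     for _ in range(B)])
\end{verbatim}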
\subsection{Quantile and distribution regression estimation of the control variable}
One of the main contributions of this paper is to allow for quantile
and distribution regression estimation of the control variable. The
difficulties here are multifold, since the control variable depends
on the infinite dimensional function $\pi_0(\cdot)$, and more
importantly the estimated version of this function, $\widehat
\pi(\cdot)$, does not seem to lie in any class with good entropic
properties. We overcome these difficulties by demonstrating that the
estimated function can be approximated with sufficient degree of
accuracy by a random function that lies in a class with good
entropic properties. To carry out this approximation, we smooth the
empirical quantile regression and distribution regression processes
by third order kernels, after suitably extending the processes to
deal with boundary issues. Such kernels can be obtained by
reproducing kernel Hilbert space methods or via twicing kernel
methods (Berlinet, 1993, and Newey, Hsieh, and Robins, 2004). In the
case of quantile regression, we also use results of the asymptotic
theory for rearrangement-related operators developed by
Chernozhukov, Fern\'andez-Val and Galichon (2010). Moreover, all the
previous arguments carry over to weighted samples, which is relevant
for the bootstrap.
\subsubsection{Quantile regression} We impose the following
condition:
\begin{condition}[QR control variable]\label{Ass: QR} (a) The conditional
quantile function of $D$ given $(W,Z)$ follows the quantile
regression model, i.e., $$Q_D(\cdot \mid W,Z) = Q_D(\cdot \mid R) =
R'\pi_0(\cdot), \ \ R = r(W,Z),$$ where the coefficients $v \mapsto
\pi_0(v)$ are three times continuously differentiable with uniformly
bounded derivatives, and $\mathcal{R}$ is compact; (b) The
conditional density $f_{D}(\cdot \mid R)$ is uniformly bounded by a
constant $P$-a.e., and is continuous at $R'\pi_0(v)$ uniformly in $v
\in (0,1)$ $P$-a.e. (c) The Gram matrix $E[RR']$ is finite and has
full rank.
\end{condition}
For $\rho_v(z):= (v - 1(z<0))z$, let
$$
\widehat \pi^e(v) \in \arg \min_{\pi \in \mathbb{R}^{\dim(R)}}
\frac{1}{n} \sum_{i=1}^n e_i \rho_v(D_i - R_i'\pi),
$$
where either $e_i=1$ for the unweighted sample, to obtain the
estimates; or $e_i$ is drawn from a positive random variable with
unit mean and variance for the weighted sample, to obtain bootstrap
estimates. Then set
$$
\vartheta_0(d,r) = \int_{(0,1)} 1\{r'\pi_0(v) \leq d\} d v ; \ \widehat \vartheta^e (d,r) = \int_{(0,1)} 1\{r'\widehat \pi^e(v) \leq d\} d v.
$$
The following result verifies that our main high-level conditions
for the control variable estimator in Assumptions 3 and 7 hold under
Assumption 8. The verification is done simultaneously for weighted
and unweighted samples by including weights that can be equal to the
trivial unit weights, as mentioned above.
\begin{theorem}[Validity of Assumptions 3 \& 7 for QR] Suppose that Assumption \ref{Ass: QR} holds.
(1) We have that
\begin{eqnarray*}
\sqrt{n}(\widehat \vartheta^e(d,r) - \vartheta_0(d,r)) &=& \frac{1}{\sqrt{n}} \sum_{i=1}^n e_i \ell(A_i, d,r) + o_\Pr(1) \rightsquigarrow \Delta^e(d,r) \text{ in } \ell^{\infty}(\mathcal{DR}), \\
\ell(A, d,r) &:=& f_{D}(d \mid r) r' {\mathrm{E}}_P \left[
f_{D}(R'\pi_0(\vartheta_0(d,r)) \mid R) R R'\right]^{-1} \times\\
&& \times [ 1\{D \leq R'\pi_0(\vartheta_0(d,r))\}- \vartheta_0(d,r)] R, \\
{\mathrm{E}}_P [\ell(A, d,r)] &=& 0, \ {\mathrm{E}}_P [\ell(A, D,R)^2] < \infty,
\end{eqnarray*}
where $\Delta^e(d,r)$ is a Gaussian process with continuous paths
and covariance function given by ${\mathrm{E}}_P[\ell(A, d,r)\ell(A,
\tilde{d},\tilde{r})']$. (2) Moreover, there exists $\widetilde
\vartheta^e: \mathcal{DR} \mapsto [0,1]$ that obeys the same first order
representation, is close to $\widehat \vartheta^e$ in the sense that
$\|\widetilde \vartheta^e - \widehat \vartheta^e\|_{\infty}=o_\Pr(1/\sqrt{n})$,
and, with probability approaching one, belongs to a bounded function
class $\Upsilon$ such that
$$
\log N (\epsilon, \Upsilon, \|\cdot\|_{\infty}) \lesssim \epsilon^{-1/2}, \ \ 0 < \epsilon <1.
$$
Thus, Assumption 3 holds for the case $e_i=1$, and Assumption 7
holds for the case of $e_i$ being drawn from a positive random
variable with unit mean and variance as in Assumption 6. Thus, the
results of Theorem 1 and 2 apply for the QR estimator of the control variable.
\end{theorem}
\subsubsection{Distribution regression} We impose the following
condition:
\begin{condition}[DR control variable]
(a) The conditional distribution function of $D$ given $(W,Z)$
follows the distribution regression model, i.e., $$F_D(\cdot \mid
W,Z) = F_D(\cdot \mid R) = \Lambda(R'\pi_0(\cdot)), \ \ R =
r(W,Z),$$ where $\Lambda$ is either the probit or logit link
function, the coefficients $d \mapsto \pi_0(d)$ are three times
continuously differentiable with uniformly bounded derivatives; (b)
$\mathcal{D}$ and $\mathcal{R}$ are compact; (c) The Gram matrix
$E[RR']$ has full rank.
\end{condition}
Let
$$
\widehat \pi^e(d) \in \arg \min_{\pi \in \mathbb{R}^{\dim(R)}}
\frac{1}{n} \sum_{i=1}^n e_i \{ 1(D_i \leq d ) \log \Lambda( R_i'\pi
) + 1(D_i>d) \log [1- \Lambda( R_i'\pi )]\},
$$
where either $e_i=1$ for the unweighted sample, to obtain the
estimates; or $e_i$ is drawn from a positive random variable with
unit mean and variance for the weighted sample, to obtain bootstrap
estimates. Then set
$$
\vartheta_0(d,r) = \Lambda(r'\pi_0(d)); \ \widehat \vartheta^e (d,r) = \Lambda(r'\widehat \pi^e(d)).
$$
The following result verifies that our main high-level conditions
for the control variable estimator in Assumptions 3 and 7 hold under
Assumption 9. The verification is done simultaneously for weighted
and unweighted samples by including weights that can be equal to the
trivial unit weights.
\begin{theorem}[Validity of Assumptions 3 \& 7 for DR] Suppose that Assumption 9 holds. (1) We have that
\begin{eqnarray*}
\sqrt{n}(\widehat \vartheta^e(d,r) - \vartheta_0(d,r)) &=& \frac{1}{\sqrt{n}} \sum_{i=1}^n e_i \ell(A_i, d,r) + o_\Pr(1) \rightsquigarrow \Delta^e(d,r) \text{ in } \ell^{\infty}(\mathcal{DR}), \\
\ell(A,d,r) &:=& \partial \Lambda(r'\pi_0(d)) r' {\mathrm{E}}_P \left[
\frac{\partial \Lambda(R'\pi_0(d))^2}{\Lambda(R'\pi_0(d))[1
- \Lambda(R'\pi_0(d))]} RR' \right]^{-1} \times \\
&& \times \frac{1 \{D \leq d\} - \Lambda(R'\pi_0(d)) }{
\Lambda(R'\pi_0(d))[1- \Lambda(R'\pi_0(d))]} \partial
\Lambda(R'\pi_0(d))R, \\ {\mathrm{E}}_P [\ell(A,d,r)] &=& 0, {\mathrm{E}}_P
[\ell(A,D,R)^2] < \infty,
\end{eqnarray*}
where $\Delta^e(d,r)$ is a Gaussian process with continuous paths
and covariance function given by ${\mathrm{E}}_P[\ell(A, d,r)\ell(A,
\tilde{d},\tilde{r})']$, and $\partial \Lambda$ is the derivative of
$\Lambda$. (2) Moreover, there exists $\widetilde \vartheta^e:
\mathcal{DR} \mapsto [0,1]$ that obeys the same first order
representation, is close to $\widehat \vartheta^e$ in the sense that
$\|\widetilde \vartheta^e - \widehat \vartheta^e\|_{\infty}=o_\Pr(1/\sqrt{n})$
and, with probability approaching one, belongs to a bounded function
class $\Upsilon$ such that
$$
\log N (\epsilon, \Upsilon, \|\cdot\|_{\infty}) \lesssim
\epsilon^{-1/2}, \ \ 0 < \epsilon < 1.
$$
Thus, Assumption 3 holds for the case $e_i=1$, and Assumption 7
holds for the case of $e_i$ being drawn from a positive random
variable with unit mean and variance as in Assumption 6. Thus, the
results of Theorem 1 and 2 apply for the DR estimator of the control variable.
\end{theorem}
\section{Computation and Numerical Examples}\label{montecarlo}
This section describes the numerical algorithms to compute the CQIV
estimator and weighted bootstrap confidence intervals, and shows the
results of a Monte Carlo numerical example.
\subsection{CQIV Algorithm}
The algorithm to obtain CQIV estimates is similar to Chernozhukov
and Hong (2002). We add an initial step to estimate the control
variable $V$. We label this step Step 0 to facilitate comparison
with the Chernozhukov and Hong (2002) 3-Step CQR algorithm. \
\medskip
\begin{algorithm}[CQIV]
For each desired quantile $u$, perform the following steps:
\begin{itemize}
\item[0.] Obtain an estimate of the control variable for each individual, $\widehat{V}_i$,
and construct $\widehat{X}_i=x(D_i,W_i, \widehat{V}_i)$.
\item[1.] Select a subset of quantile-uncensored observations, $J_{0}$,
whose conditional quantile function is likely to be above the
censoring point, namely select a subset of $\{i: X_i'\beta_0(u) >
C_i\}$. To find these observations, we note that $X'\beta_0(u) > C$ is
equivalent to $P(Y>C \mid X,C) > 1- u.$ Hence we predict the
quantile-uncensored observations using a flexible binary choice
model:
\begin{equation*}
P(Y>C \mid X,C) = \Lambda(S_i'\delta_0), \ \ S_i = s(X_i, C_i),
\end{equation*}%
where $\Lambda$ is a known link function, typically a probit or a
logit. In estimation, we replace $S_i$ by
$\widehat S_i = s(\widehat X_i, C_i)$.
Then, we select the sample $J_{0}$ according to the following criterion:
\begin{equation*}
J_{0}=\{i:\Lambda(\widehat{S}_{i}^{\prime }\widehat{\delta
})>1-u+k_0\}.
\end{equation*}
\item[2.] Estimate a standard quantile regression on the
subsample defined by $J_{0}$:
\begin{equation}\label{eq: QR}
\widehat{\beta }^0(u) = \arg \min_{\beta \in \mathbb{R}^{\dim(X)}} \sum\limits_{i \in J_{0}}\rho _{u}(Y_{i}-%
\widehat{X}_{i}^{\prime }\beta).
\end{equation}%
Next, using the predicted values, select another subset of quantile-uncensored observations, $%
J_{1}$, from the full sample according to the following criterion:
\begin{equation}\label{eq: selector}
J_{1}=\{i:\widehat{X}_{i}^{\prime }\widehat{\beta }^0(u)>C_{i} +
\varsigma_{1}\}.
\end{equation}%
\item[3.] Estimate a standard quantile regression on the
subsample defined by $J_{1}$.
Formally, replace $J_{0}$ by $J_{1}$ in (\ref{eq: QR}). The new estimates, $%
\widehat{\beta }^1(u)$, are the 3-Step CQIV coefficient estimates.
\item[4.] (Optional) With the results from the previous step,
select a new sample $J_{2}$ replacing $\widehat{\beta }^0(u)$ by
$\widehat{\beta }^1(u)$ in (\ref{eq: selector}). Iterate this and
the previous step a bounded number of times.
\end{itemize}
\end{algorithm}
\noindent \textbf{Remark 2} (Step 0). A simple additive strategy is
to estimate the control variable using the empirical CDF of the
residuals from the first stage OLS regression of $D$ on $W$ and $Z$.
\ More flexible non-additive strategies based on quantile regression
or distribution regression are described in the previous section.
\medskip
\noindent \textbf{Remark 3} (Step 1). To predict the
quantile-uncensored observations,
a probit, logit, or any other model that
fits the data well can be used. Note that the model does not need
to be correctly specified; it suffices that it selects a nontrivial
subset of observations with $X_i'\beta_0(u)>C_i$. To choose the
value of $k_0$, it is advisable that a constant fraction of
observations satisfying $\Lambda(\widehat{S}_{i}^{\prime
}\widehat{\delta })>1-u$ are excluded from $J_{0}$ for each
quantile. To do so, set $k_0$ as the
$q_{0}$th quantile of $\Lambda(\widehat{S}_{i}^{\prime }\widehat{%
\delta })$ conditional on $\Lambda(\widehat{S}_{i}^{\prime
}\widehat{\delta})>1-u$, where $q_{0}$ is a percentage (10\% worked
well in our simulation). The empirical value of $k_0$ and the
percentage of observations retained in $J_{0}$ can be computed as
simple robustness diagnostic tests at each quantile.
\medskip
\noindent \textbf{Remark 4} (Step 2). To choose the cut-off
$\varsigma_{1}$, it is advisable that a constant
fraction of observations satisfying $\widehat{X}_{i}^{\prime }\widehat{%
\beta }^0(u)>C_{i}$ are excluded from $J_{1}$ for each quantile.
To do so,
set $\varsigma_{1}$ to be the $q_{1}$th quantile of $\widehat{X}%
_{i}^{\prime }\widehat{\beta }^0(u) - C_i$ conditional on $\widehat{X}_{i}^{\prime }%
\widehat{\beta }^0(u)>C_{i}$, where $q_{1}$ is a percentage less
than $q_{0}$
(3\% worked well in our simulation). In practice, it is desirable that $%
J_{0}$ $\subset $ $J_{1}$. If this is not the case, we recommend
altering $q_{0}$, $q_{1}$, or the specification of the regression
models. At each quantile, the empirical value of $\varsigma_1$, the
percentage of observations from the full sample retained in $J_{1}$,
the percentage of observations from $J_{0}$ retained in $J_{1}$, and
the number of observations in $J_{1}$ but not in $J_{0}$ can be
computed as simple robustness diagnostic tests. The estimator $%
\widehat{\beta }^0(u)$ is consistent but will be inefficient
relative to the estimator obtained in the subsequent step.
\medskip
\noindent \textbf{Remark 5} (Steps 1 and 2). In the notation of
Assumption \ref{condition: selector}, the selector of Step 1 can be
expressed as $1(\widehat{S}_i'\widehat{\gamma} > \varsigma_0)$,
where $\widehat{S}_i'\widehat{\gamma} =
\widehat{S}_i'\widehat{\delta} - \Lambda^{-1}(1-u)$ and $\varsigma_0
= \Lambda^{-1}(1-u+k_0) - \Lambda^{-1}(1-u)$. The selector of Step 2
can also be expressed as $1(\widehat{S}_i'\widehat{\gamma} >
\varsigma_1),$ where $\widehat{S}_i = (\widehat{X}_i', C_i)'$ and
$\widehat{\gamma} = (\widehat{\beta}^{0}(u)',-1)'$.
\medskip
\noindent \textbf{Remark 6} (Steps 2, 3 and 4). Beginning with Step
2, each successive iteration of the algorithm should yield estimates
that come closer to minimizing the Powell objective function. As a
simple robustness diagnostic test, we recommend computing the
Powell objective function using the full sample and the estimated
coefficients after each iteration, starting with Step 2. This
diagnostic test is computationally straightforward because computing
the objective function for a given set of values is much simpler
than minimizing it. In practice, this test can be used to determine
when to stop the CQIV algorithm for each quantile. If the Powell
objective function increases from Step $s$ to Step $s+1$ for $s\geq
2$, estimates from Step $s$ can be retained as the coefficient
estimates.
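This diagnostic is inexpensive to code; a minimal Python sketch of the Powell objective evaluation (with hypothetical array names) is:
\begin{verbatim}
import numpy as np

def powell_objective(beta, y, X, C, u):
    # Censored check-function loss: sum of rho_u(y - max(x'beta, C)).
    r = y - np.maximum(X @ beta, C)
    return np.sum(r * (u - (r < 0)))
\end{verbatim}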
\medskip
\subsection{Weighted Bootstrap Algorithm}
We recommend obtaining confidence intervals through a weighted
bootstrap procedure, though analytical formulas can also be used. If
the estimation runs quickly on the desired sample, it is
straightforward to rerun the entire CQIV algorithm $B$ times
weighting all the steps by the bootstrap weights. To speed up the
computation, we propose a procedure that uses a one-step CQIV
estimator in each bootstrap repetition.
\begin{algorithm}[Weighted bootstrap CQIV] For $b = 1, \ldots, B$, repeat the following steps:
\begin{itemize}
\item[1.] Draw a set of weights $(e_{1b}, \ldots, e_{nb})$
i.i.d from a random variable $e$ that satisfies Assumption 6. For
example, we can draw the weights from a standard exponential
distribution.
\item[2.] Reestimate the control variable in the weighted
sample, $\widehat V_{ib}^{e} = \widehat \vartheta_{b}^e(D_i, W_i,
Z_i)$, and construct $\widehat X_{ib}^{e} = x(D_i, W_i, \widehat
V_{ib}^{e})$.
\item[3.] Estimate the weighted quantile regression:
\begin{equation*}
\widehat{\beta }_{b}^{e}(u) = \arg \min_{\beta \in \mathbb{R}^{\dim(X)}} \sum\limits_{i \in J_{1b}} e_{ib} \rho _{u}(Y_{i}-%
\beta^{\prime}\widehat{X}_{ib}^{e}), \label{QR}
\end{equation*}
where $J_{1b} = \{i:\widehat{\beta }(u)^{\prime}\widehat{X}_{ib}^{e}
> C_{i} + \varsigma_{1} \},$ and $\widehat{\beta }(u)$ is a consistent estimator
of $\beta_0(u)$, e.g., the 3-Step CQIV estimator $\widehat
\beta^1(u)$.
\end{itemize}
\end{algorithm}
\medskip
\noindent \textbf{Remark 7} (Step 2). The estimate of the control
function $ \widehat \vartheta_{b}^{e}$ can be obtained by weighted
least squares, weighted quantile regression, or weighted
distribution regression.
\medskip
\noindent \textbf{Remark 8} (Step 3). A computationally less
expensive alternative is to set $J_{1b} = J_{1}$ in all the
repetitions, where $J_1$ is the subset of selected observations in
Step 2 of the CQIV algorithm.
\medskip
We can construct an asymptotic $(1-\alpha)$-confidence interval for
a function of the parameter vector $g(\beta_0(u))$ as $[\widehat
g_{\alpha/2}, \widehat g_{1-\alpha/2}]$, where $\widehat g_{\alpha}$
is the sample $\alpha$-quantile of $[g(\widehat \beta^{e}_{1}(u)),
\ldots, g(\widehat \beta^{e}_{B}(u))]$. For example, the 0.025 and
0.975 quantiles of $(\widehat{\beta }_{1, k}^{e}(u), \ldots,
\widehat{\beta }_{B,k}^{e}(u))$ form a 95\% asymptotic confidence
interval for the $k$th coefficient $\beta_{0,k}(u)$.
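The following Python sketch illustrates the bootstrap under two simplifications: the selector $J_1$ is kept fixed across repetitions (Remark 8) and the control variable is not re-estimated in each repetition. It exploits the positive homogeneity of the check function, $e\,\rho_u(r)=\rho_u(e\,r)$ for $e>0$, so that each weighted fit reduces to an unweighted quantile regression on rescaled data; variable names are hypothetical.
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

def cqiv_bootstrap_ci(y, X, J1, u, B=200, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    draws = np.empty((B, X.shape[1]))
    for b in range(B):
        e = rng.exponential(1.0, size=len(y))   # weights per Assumption 6
        # e*rho_u(y - x'b) = rho_u(e*y - (e*x)'b), so rescale and refit.
        draws[b] = sm.QuantReg(e[J1] * y[J1],
                               e[J1][:, None] * X[J1]).fit(q=u).params
    # Percentile interval for each coefficient.
    return np.quantile(draws, [alpha / 2, 1 - alpha / 2], axis=0)
\end{verbatim}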
\subsection{Monte-Carlo illustration}
The goal of the following numerical example is to compare the
performance of CQIV relative to tobit IV and other quantile
regression estimators in finite samples. We generate data according
to a normal design that satisfies the tobit parametric assumptions
and a design with heteroskedasticity in the first stage equation for
the endogenous regressor $D$ that does not satisfy the tobit
parametric assumptions. To facilitate the comparison, in both
designs we consider a location model for the response variable
$Y^{\ast}$, where the coefficients of the conditional expectation
function and the conditional quantile function are equal (other than
the intercept), so that tobit and CQIV estimate the same parameters.
A comparison of the dispersion of the tobit estimates to the
dispersion of the CQIV estimates at each quantile in the first
design serves to quantify the relative efficiency of CQIV in a case
where tobit IV can be expected to perform as well as possible. The
appendix provides a more detailed description of the designs.
We consider two tobit estimators for comparison. Tobit-iv is the
full information maximum likelihood estimator developed by Newey
(1987), which is implemented in Stata with the command
\verb"ivtobit". Tobit-cmle is the conditional maximum likelihood
tobit estimator developed by Smith and Blundell (1986), which uses
least squares residuals as a control variable. For additional
comparisons, we present results from the censored quantile
regression (cqr) estimator of Chernozhukov and Hong (2002), which
does not address endogeneity; the quantile instrumental variables
estimator (qiv-ols) of Lee (2007) with parametric first and second
stage, which does not account for censoring; and the quantile
regression (qr) estimator of Koenker and Bassett (1978), which does
not account for endogeneity nor censoring. For CQIV we consider
three different methods to estimate the control variable: cqiv-ols,
which uses least squares; cqiv-qr, which uses quantile regression;
and cqiv-dr, which uses probit distribution regression. The appendix
also provides technical details for all CQIV estimators, as well as
diagnostic test results for the cqiv-ols estimator.
We focus on the coefficient on the endogenous regressor $D$. We
report mean bias and root mean square error (rmse) for all the
estimators at the $\{.05, .10, ..., .95\}$ quantiles.
For the homoskedastic design, the bias results are reported in the
upper panel of Figure \ref{fig: homos} and the rmse results are
reported in the lower panel. In this figure, we see that tobit-cmle
represents a substantial improvement over tobit-iv in terms of mean
bias and rmse.
Even though tobit-iv is theoretically efficient in this design, the
CQIV estimators out-perform tobit-iv, and compare well to
tobit-cmle.
The figure also demonstrates that the CQIV estimators out-perform
the other quantile estimators at all estimated quantiles.
All of our qualitative
findings hold when we consider unreported alternative measures of
bias and dispersion such as median bias, interquartile range, and
standard deviation.
The similar performance of tobit-cmle and cqiv can be explained by
the homoskedasticity in the first stage of the design. Figure
\ref{fig: heteros} reports mean bias and rmse results for the
heteroskedastic design. Here cqiv-qr outperforms cqiv-ols and
cqiv-dr at every quantile, which is expected because cqiv-ols and
cqiv-dr are both misspecified for the control variable. Cqiv-dr has
lower bias than cqiv-ols because it uses a more flexible
specification for the control variable.
Cqiv-qr also outperforms all other quantile estimators. Most
importantly, at every quantile, cqiv-qr outperforms both tobit
estimators, which are no longer consistent given the
heteroskedasticity in the design of the first stage. In summary,
CQIV performs well relative to tobit in a model that satisfies the
parametric assumptions required for tobit-iv to be efficient, and it
outperforms tobit in a model with heteroskedasticity.
\section{Empirical Application: Engel Curve Estimation}
\label{engel}
In this section, we apply the CQIV estimator to the estimation of
Engel curves. The Engel curve relationship describes how a
household's demand for a commodity changes as the household's
expenditure increases. Lewbel (2006) provides a recent survey of the
extensive literature on Engel curve estimation. For comparability to
the recent studies, we use data from the 1995 U.K. Family
Expenditure Survey (FES) as in Blundell, Chen, and Kristensen (2007)
and Imbens and Newey (2009). Following Blundell, Chen, and
Kristensen (2007), we restrict the sample to 1,655 married or
cohabitating couples with two or fewer children, in which the head
of household is employed and between the ages of 20 and 55. The FES
collects data on household expenditure for different categories of
commodities. We focus on estimation of the Engel curve relationship
for the alcohol category because 16\% of families in our data report
zero expenditure on alcohol. Although zero expenditure on alcohol
arises as a corner solution outcome, and not from bottom coding,
both types of censoring motivate the use of censored estimators such
as CQIV.
Endogeneity in the estimation of Engel curves arises because the
decision to consume a particular category of commodity may occur
simultaneously with the allocation of income between consumption and
savings. Following the literature, we rely on a two-stage budgeting
argument to justify the use of labor income as an instrument for
expenditure. Specifically, we estimate a
quantile regression model in the first stage, where the logarithm of total expenditure, $%
D$, is a function of the logarithm of gross earnings of the head of
the household, $Z$, and demographic household characteristics, $W$.
The control variable, $V$, is obtained using the CQIV-QR estimator
in (\ref{equation: fe_qr_estimator}), where the integral is
approximated by a grid of 100 quantiles. For comparison, we also
obtained control variable estimates using least squares and probit
distribution regression. We do not report these comparison estimates
because the correlation between the different control variable
estimates was virtually 1, and all the methods resulted in very
similar estimates in the second stage.
In the second stage we focus on the following quantile specification for
Engel curve estimation:
\begin{equation*}
Y_{i}=\max (X_{i}^{\prime }\beta_0 (U_{i}),0),\
X_{i}=(1,D_{i},D_{i}^{2},W_{i},\Phi^{-1}(V_{i})),\ U_{i}\sim
U(0,1)\mid X_{i},
\end{equation*}%
where $Y$ is the observed share of total expenditure on alcohol
censored at zero, $W$ is a binary household demographic variable
that indicates whether the family has any children, and $V$ is the
control variable. We define our binary demographic variable
following
Blundell, Chen and Kristensen (2007).\footnote{%
Demographic variables are important shifters of Engel curves. In
recent literature, ``shape invariant'' specifications
for demographic variables have become popular. For comparison with
this literature, we also estimate an unrestricted version of shape
invariant specification in which we include a term for the
interaction between the logarithm of expenditure and our demographic
variable. The results from the shape invariant specification are
qualitatively similar but less precise than the ones reported in
this application.}
To choose the specification, we rely on recent studies in Engel
curve estimation. Thus, following Blundell, Browning, and Crawford
(2003) we impose separability between the control variable and other
regressors. Hausman, Newey, and Powell (1995) and Banks, Blundell,
and Lewbel (1997) show that the quadratic specification in
log-expenditure gives a better fit than the linear specification
used in earlier studies. In particular, Blundell, Duncan, and
Pendakur (1998) find that the quadratic specification gives a good
approximation to the shape of the Engel curve for alcohol. To check
the robustness of the specification to the linearity in the control
variable, we also estimate specifications that include nonlinear
terms in the control variable. The results are very similar to the
ones reported.
Figure \ref{coefficients} reports the estimated coefficients $u
\mapsto \widehat \beta (u)$ for a variety of estimators. In addition
to reporting results for CQIV with a quantile estimate of the
control variable (cqiv), as in the previous numerical examples, we
report estimates from the censored quantile regression (cqr) of
Chernozhukov and Hong (2002), the quantile instrumental variables
estimator with a quantile regression estimate of the control
variable (qiv) of Lee (2007), and the quantile regression (qr)
estimator of Koenker and Bassett (1978). We also estimate a model
for the conditional mean with the tobit-cmle of Smith and Blundell
(1986) that incorporates a least squares estimate of the control
variable. The tobit-iv algorithm implemented in Stata does not
converge in this application. Given the level of censoring, we focus
on conditional quantiles above the .15 quantile.
In the panels that depict the coefficients of expenditure and its
square, the importance of controlling for censoring is especially
apparent. Comparison between the censored quantile estimators (cqiv
and cqr), plotted with thick light lines, and the uncensored
quantile estimators (qiv and qr), plotted with thin dark lines,
demonstrates that the censoring attenuates the uncorrected estimates
toward zero at most quantiles in this application. In particular,
censoring appears very important at the lowest quantiles. Relative
to the tobit-cmle estimate of the conditional mean, cqiv provides a
richer picture of the heterogeneous effects of the variables.
Comparison of the quantile estimators that account for endogeneity
(cqiv and qiv), plotted with solid lines, and those that do not (cqr
and qr), plotted with dashed lines, shows that endogeneity also
influences the estimates, but the pattern is more difficult to
interpret. The estimates of the coefficient of the control variable
indicate that the endogeneity problem is more severe in the upper
half of the distribution. This is consistent with a situation where
a strong preference to consume alcohol raises total household
expenditure.
Our quadratic quantile model is flexible in that it permits the
expenditure elasticities to vary across quantiles of the alcohol
share and across the level of total expenditure. These quantile
elasticities are related to the coefficients of the model by
\begin{equation*}
\partial_d Q_Y(u \mid x) = 1\{
x'\beta_0(u) > 0\} \{\beta_{01}(u) + 2 \beta_{02}(u) \ d\},
\label{elasform}
\end{equation*}
where $\beta_{01}(u)$ and $\beta_{02}(u)$ are the coefficients of
$D$ and $D^2$, respectively. Figure \ref{elasticities} reports point
and interval estimates of average quantile elasticities as a
function of the quantile index $u$, i.e., $u \mapsto
{\mathrm{E}}_P[\partial_d Q_Y(u \mid X)]$. Here we see that accounting for
endogeneity and censoring also has important consequences for these
economically relevant quantities. The difference between the
estimates is more pronounced along the endogeneity dimension than it
is along the censoring dimension. The right panel plots 95\%
pointwise confidence intervals for the cqiv quantile elasticity
estimates obtained by the weighted bootstrap method described in
Section 3 with standard exponential weights and $B=200$ repetitions.
Here we can see that there is significant heterogeneity in the
expenditure elasticity across quantiles. Thus, alcohol passes from
being a normal good for low quantiles to being an inferior good for
high quantiles. This heterogeneity is missed by conventional mean
estimates of the elasticity.
In Figure \ref{engelfamily} we report families of Engel
curves based on the cqiv coefficient estimates. We predict the value
of the alcohol share, $Y$, for a grid of values of log expenditure
using the cqiv coefficients at each quartile. The subfigures depict
the Engel curves for each quartile of the empirical values of the
control variable, for individuals with and without kids, that is
$$
d \mapsto \max\{(1,d,d^2,w,\Phi^{-1}( v))'\widehat{\beta}(u), 0 \}
$$
for $(w,\Phi^{-1}( v), u)$ evaluated at $w \in \{0,1\}$, the
quartiles of $\widehat V$ for $v$, and $u \in \{0.25, 0.50, 0.75\}$.
Here we can see that controlling for censoring has an important
effect on the shape of the Engel curves even at the median. The
families of Engel curves are fairly robust to the values of the
control variable, but the effect of children on alcohol shares is
more pronounced. The presence of children in the household produces
a downward shift in the Engel curves at all the levels of
log-expenditure considered.
\bigskip
\section{Conclusion}
\label{conclusion}
In this paper, we develop a new censored quantile instrumental
variable estimator that incorporates endogenous regressors using a
control variable approach. Censoring and endogeneity abound in
empirical work, making the new estimator a valuable addition to the
applied econometrician's toolkit. For example, Kowalski (2009) uses
this estimator to analyze the price elasticity of expenditure on
medical care across the quantiles of the expenditure distribution,
where censoring arises because of the decision to consume zero care
and endogeneity arises because marginal prices explicitly depend on
expenditure. Since the new estimator can be implemented using
standard statistical software, it should prove useful to applied
researchers in many applications.
\section{Introduction}
A leading approach to quantum information processing is via linear-optical quantum computing (LOQC),
first proposed in 2001 by Knill, Laflamme and Milburn \cite{KLM}. Major progress has been made both on the
theoretical and experimental fronts towards implementation of LOQC. Modifications have been proposed that greatly
reduce the overhead costs \cite{KokRMP}, a quantum error correction protocol has been introduced
\cite{Dawson06,Varnava}, and experimental implementation of primary gates has been demonstrated \cite{KLMexp}.
In spite of this progress, practical LOQC is still out of reach. Many of the difficulties arise because the
single-photon sources required for LOQC, as well as computational circuits themselves, suffer from losses. Although a
certain degree of tolerance to losses does exist in some LOQC schemes \cite{Varnava}, the efficiency of existing
single-photon sources \cite{NJPissue} as well as the quality of individual circuit elements and waveguides are far
below the required minima.
Under these circumstances it appears beneficial to develop a procedure that would reverse the effect of losses,
perhaps at a cost of introducing extra resources. It would be useful, for example, to employ the outputs of $N$
imperfect single-photon sources to obtain $\nomod<N$ single-photon sources of improved quantum efficiency. Accomplishing this task
would be straightforward if nonlinear-optical interactions with single photons were readily available: for example, one could employ non-demolition photon number measurements to select only those modes that contain photons.
However, achieving such interactions is extremely technically challenging
\cite{KimblePhotonGate}, whereas linear-optical (LO) processing is easily achieved in the laboratory.
It is therefore important to investigate whether elimination of losses is possible under
LO processing. By such processing we mean arbitrary interferometric transformations and
conditioning on the results of arbitrary \emph{destructive} measurements on some of the optical modes involved. The
efforts to construct such a scheme began in 2004, mostly ending with various no-go results
\cite{Berry04a,Berry04b,Berry06,Berry07}. The most general result to date was obtained in Ref.~\cite{BL}.
In that work, we quantified the \emph{efficiency} of a quantum optical state by the amount of loss that state might have experienced.
We then proved that the efficiency in any single-mode optical state obtained through LO processing cannot exceed the quantum efficiency of the best available single-mode input \cite{BL}.
However, those previous results had limited application to multimode states.
First, as we show below, extending the definition of the efficiency of a quantum state to the multimode case is not straightforward, particularly when the loss has been ``mixed'' among the modes by interferometric transformations.
Second, our earlier results do not provide any information on how the efficiencies can be distributed among the output modes, aside from the general upper bound mentioned above.
For example, they leave open the possibility of a ``catalytic'' scheme, in which some high-efficiency single photons are used to obtain additional high-efficiency single photons.
In the present work we generalize our study of the dynamics of optical losses to the multimode case. We introduce the notion of quantum
efficiency of a (possibly entangled) multimode state which quantifies the amount of loss this state may have
experienced. We show that this efficiency cannot increase under LO processing. That is, any loss that has occurred at
the input can neither be removed nor redistributed so as to improve the efficiency in some of the modes at the
expense of lower-efficiency modes. This means that there is a majorization relation between the efficiencies at the input and the output. The LO processing can act to average the efficiencies, but not to concentrate them. This rules out, in particular, any possibility of catalytic efficiency improvement.
\section{Single-mode measures of efficiency}
Before describing multimode measures of efficiency, we discuss the properties and relationships for single-mode measures of efficiency that have been previously proposed.
Usually efficiency is used to describe a process for producing a state.
However, it is also convenient to regard efficiency as a measure on the state itself, regardless of the process used to produce it \cite{Berry04a,Berry04b,Berry06,Berry07,BL}.
Specifically, Ref.~\cite{BL} uses the efficiency to quantify \emph{the maximum amount of loss an optical mode carrying the given state might have previously experienced}:
\begin{equation}\label{mostgen}
E(\hat\rho) := \inf \left\{ p |\ \exists \hat \rho_0\ge 0 ~:~ {\cal E}_{p} (\hat \rho_0)=\hat \rho \right\},
\end{equation}
where ${\cal E}_{p}$ is a loss channel with transmissivity $p$. That is, one considers all hypothetical methods of producing the given state $\hat\rho$ via loss from some valid initial quantum state $\hat\rho_0$.
We emphasize that the loss is just a way of mathematically quantifying the efficiency of the state.
The state need not actually have been created by such a process.
The efficiency is a measure on the state, and should not be regarded as an intrinsic feature of the mode.
Let us study a few examples. Ideally, single photon sources would produce a single photon state $\ket 1$ on demand.
In practice, such sources may with some probability fail to produce a photon, and there is no way to detect this failure without destructive measurement. Therefore the state produced by a generic single photon source may be approximated as
\begin{equation}
\label{eq:mix}
\hat\rho = p \ket{1}\bra{1} + (1-p) \ket{0}\bra{0}.
\end{equation}
Here the quantity $p$ is commonly referred to as the efficiency of the single photon source. In the context of our definition, state \eqref{eq:mix} can be obtained from the single-photon state by transmitting a (perfect) single photon through a loss channel with transmissivity $p$, and hence its efficiency equals $p$. In this way, the efficiency of state \eqref{eq:mix} according to our new definition is consistent with the traditional definition of the efficiency of a single-photon source.
Coherent states have efficiency exactly equal to zero, regardless of their amplitude.
This is because coherent states remain coherent states under loss.
A coherent state of amplitude $\alpha$ can be obtained from one of amplitude $\alpha/\sqrt{p}$ under a loss channel of transmissivity $p$. Although one cannot take $p=0$ (because complete loss always results in the vacuum state), possible values of $p$ form an open set with zero infimum.
On the other hand, any pure state \emph{other} than a coherent state (or the vacuum state) must have efficiency 1.
This is because a state under loss is a mixture of the original state, and the state with different numbers of photons lost.
That is, a pure state $\ket{\chi}$ becomes a mixture of $\ket{\chi}$, $\hat a\ket{\chi}$, $\hat a^2\ket{\chi}$, and so forth.
The only way in which the state after loss can remain pure is if $\ket{\chi}\propto\hat a\ket{\chi}$.
The only states for which this is true are eigenstates of the annihilation operator; i.e.\ coherent states.
Determining the efficiency of a known single-mode state is a straightforward computational task.
The loss channel ${\cal E}_{p}$ corresponds to a linear transformation known as the generalized Bernoulli transformation.
Provided the state $\hat\rho$ can be obtained via loss channel ${\cal E}_{p}$ from some initial operator, we can define the inverse map ${\cal E}_{p}^{-1}$, which can be calculated as in Ref.\ \cite{Herzog}.
Therefore, we need to find the infimum of the values of $p$ such that the inverse Bernoulli mapping ${\cal E}_{p}^{-1}(\hat\rho)$ exists and yields a valid quantum state, i.e.\ can be represented by a positive semidefinite density matrix.
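As a concrete illustration, the following Python sketch performs this computation for phase-insensitive (diagonal) states. Two simplifying assumptions are made: the Fock space is truncated so that the candidate preimages live on the same finite photon-number support, and the infimum over $p$ is approximated by a grid scan.
\begin{verbatim}
import numpy as np
from math import comb

def bernoulli_matrix(p, N):
    # L[n, m] = C(m, n) p^n (1-p)^(m-n): photon statistics under loss p.
    L = np.zeros((N, N))
    for m in range(N):
        for n in range(m + 1):
            L[n, m] = comb(m, n) * p**n * (1 - p)**(m - n)
    return L

def efficiency(pnum):
    # pnum: photon-number distribution of rho on the truncated space.
    N = len(pnum)
    for p in np.linspace(1e-3, 1.0, 1000):      # scan p upwards
        pre = np.linalg.solve(bernoulli_matrix(p, N), pnum)
        if np.all(pre >= -1e-10):               # preimage is a valid state
            return p
    return 1.0

# Example: rho = 0.6|1><1| + 0.4|0><0| has efficiency (approximately) 0.6.
print(efficiency(np.array([0.4, 0.6])))
\end{verbatim}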
A further interesting feature of a state's efficiency is that it equals zero if and only if the state is classical, i.e.\ it can be written as a statistical mixture of coherent states, or, equivalently, its Glauber-Sudarshan $P$ function has the properties of a probability density. As discussed above, any coherent state has efficiency zero, and hence so does any statistical mixture of coherent states. To prove the converse, let us suppose there exists a nonclassical state $\hat\rho$ such that $E(\hat\rho)=0$. Let $\Phi_{\hat\rho}(\eta)$ denote the Fourier transform of this state's $P$ function $P(\alpha)$ over the phase space. According to Bochner's theorem \cite{Bochner}, because $P(\alpha)$ is not a probability density, there exist two sets of $n$ complex numbers $\eta_k$ and $z_k$, such that
\begin{equation}\label{bochnereq}
\sum\limits_{i,j=1}^n \Phi_{\hat\rho}(\eta_i-\eta_j)z_iz_j^*<0.
\end{equation}
Because $E(\hat\rho)=0$, for any $p>0$ there exists a state $\hat\rho_0$ such that $\hat \rho$ is obtained from $\hat\rho_0$ by means of attenuation by factor $p$. Because attenuation corresponds to ``shrinkage'' of the $P$ function in the phase space \cite{Leonhardt}, we have $\Phi_{\hat\rho_0}(\eta)=\Phi_{\hat\rho}(\eta/\sqrt p)$ and hence
\begin{equation}\label{bochnereq0}
\sum\limits_{i,j=1}^n \Phi_{\hat\rho_0}(\eta'_i-\eta'_j)z_iz_j^*<0,
\end{equation}
where $\eta'_k=\eta_k\sqrt p$. By choosing $p$ close to zero, the moduli of the arguments of the function $\Phi_{\hat\rho_0}$ in the above equation can be bounded by an arbitrarily small value $A$.
Now recall that the Husimi $Q$ function of any quantum state must be non-negative. This means that the Fourier transform $\Psi_{\hat\rho_0}(\eta)$ of the $Q$ function of state $\hat\rho_0$ must obey
\begin{equation}\label{bochnerpsi}
\sum\limits_{i,j=1}^n \Psi_{\hat\rho_0}(\eta'_i-\eta'_j)z_iz_j^*\ge 0.
\end{equation}
But the $Q$ function is obtained from the $P$ function by convolving the latter with a Gaussian, $e^{-|\alpha|^2}/\pi$ \cite{Leonhardt}. This means that the Fourier transforms of these functions are connected by multiplication,
\begin{equation}\label{}
\Psi_{\hat\rho}(\eta)=\Phi_{\hat\rho}(\eta)e^{-|\eta|^2}.
\end{equation}
By choosing $p$ close to zero, one can make the factor $e^{-|\eta|^2}$ arbitrarily close to 1 within radius $A$. Accordingly, the left-hand sides of Eqs.~\eqref{bochnereq0} and \eqref{bochnerpsi} are equal in the limit $p\to 0$. We arrive at a contradiction, which means that any nonclassical state $\hat \rho$ must have a nonzero efficiency $E(\hat\rho)>0$.
\section{Multimode measures of efficiency}
\label{sec:mul}
Let us now generalize the notion of efficiency to an optical state carried by multiple modes. A direct generalization can be obtained by assuming that each mode has propagated through its own loss channel, and taking the sum of the transmissivities:
\begin{equation}\label{ED}
\Ed(\hat\rho,\nomod) := \inf \left\{ \sum_{\ell=1}^\nomod p^\downarrow_{\ell}\ |\ \exists \hat \rho_0\ge 0 ~:~ {\cal E}_{\vec p} (\hat \rho_0)=\hat \rho \right\}.
\end{equation}
The notation $p^\downarrow_\ell$ indicates the elements of the vector $\vec p$ sorted in non-increasing order. The value of $\nomod$ can be less than the number of modes constituting state $\hat\rho$. In this way, the efficiency is defined not only for the entire state, but also for a subset of $\nomod$ modes with the lowest losses. This extension facilitates comparison of efficiencies of states with different number of modes.
A drawback of this definition is that it does not adequately take account of loss that has been mixed between modes. For example, consider two polarization modes carrying a single-photon qubit in the state $\ket{\psi}=\ket{1_H}\ket {0_V}$. The efficiency of the state in the horizontally polarized mode is 1, and that in the vertically polarized mode 0, so $\Ed(\ketbra\psi\psi,2)=1$. On the other hand, writing the same state in terms of diagonal polarization modes, we find $\ket{\psi'}=(\ket{1_{+45^\circ}}\ket{0_{-45^\circ}}+\ket{0_{+45^\circ}}\ket{1_{-45^\circ}})/\sqrt 2$. This state cannot be obtained by independent loss in the two modes, and would have a different efficiency, $\Ed(\ketbra{\psi'}{\psi'},2)=2$, even though its utility for quantum information processing is exactly the same as that of $\ket\psi$.
An alternative approach to quantifying the efficiency is to treat each mode separately, and calculate the sum of single-mode efficiencies for $\nomod$ highest-efficiency modes:
\begin{equation}\label{Ei}
\Es(\hat\rho,\nomod):= \sum_{\ell=1}^\nomod E(\tr_{\forall k\ne \ell}\hat\rho)^\downarrow.
\end{equation}
This definition is also problematic. First, similarly to the d-efficiency \cite{FootNoteNom}, it depends on the choice of the mode basis. For the example above, $\Es(\ketbra\psi\psi,1)=1$, but $\Es(\ketbra{\psi'}{\psi'},1)=1/2$. Second, it may underestimate the efficiency in many cases. For example, the s-efficiency of the state $\ket\phi=\sqrt{1-p}\ket{00}+\sqrt{p}\ket{11}$ equals $\Es(\ketbra\phi\phi,1)=p$, and can be very small. On the other hand, conditioning on detection of a photon in one of the modes of $\ket\phi$ results in a perfect single photon in the other mode, as is the case with producing heralded single photons via parametric down-conversion. State $\ket\phi$ is thus much more useful than, for example, single-mode state $\hat\sigma=(1-p)\ketbra{0}{0}+p\ketbra{1}{1}$, which has the same s-efficiency but cannot be processed to produce a high-quality single photon.
We aim to provide a definition of efficiency that would be invariant with respect to transformation of modes and adequately reflect the state's value for quantum information purposes.
To this end we modify the definition $\Ed$ by including an optimization over interferometers.
That is, we consider simultaneous loss channels on each of the modes ${\cal E}_{\vec p}$, followed by an arbitrary interferometer $W$, as shown in Fig.\ \ref{fig:int1}(a).
The efficiency is then the sum of the $\nomod$ largest values of $p_\ell$:
\begin{equation}
\label{eq:kef} \Ep(\hat\rho,\nomod):= \inf \left\{ \!\sum_{\ell=1}^\nomod p^\downarrow_{\ell} \Big| \exists
\hat \rho_0\ge 0, W :W{\cal E}_{\vec p} (\hat \rho_0)=\hat \rho \right\}.
\end{equation}
An important property of the u-efficiency \eqref{eq:kef} is its invariance with respect to interferometric transformation of modes. Indeed, if state $\hat\rho'$ can be obtained from state $\hat\rho$ by applying interferometric transformation $U$, so that $\hat\rho'=U\hat\rho$, and we have $W{\cal E}_{\vec p} (\hat \rho_0)=\hat \rho$ in the context of \eeqref{eq:kef}, we also have $UW{\cal E}_{\vec p} (\hat \rho_0)=\hat \rho'$. But transformation $UW$ can be treated as a single interferometer, which means that $\Ep(\hat\rho',\nomod)\le\Ep(\hat\rho,\nomod)$. But because interferometric transformations are reversible, we also have $\Ep(\hat\rho,\nomod)\le\Ep(\hat\rho',\nomod)$ and hence $\Ep(\hat\rho',\nomod)=\Ep(\hat\rho,\nomod)$.
Similar to the case for the efficiency $E$, the \nomodf-efficiency can be calculated via inverting the channel.
In finite dimension, the channel given by the loss followed by the unitary operation $W$ may be represented by a matrix, which may be inverted to find $\hat\rho_0$.
The efficiency can then be found by a minimization over $\vec p$ and $W$ such that $\hat\rho_0$ is a valid quantum state.
For the other two efficiencies, the calculation is simpler.
For the d-efficiency, one only needs to minimize over $\vec p$, and for the s-efficiency one can just determine the single-mode efficiencies for the reduced density matrices in the individual modes.
Let us evaluate the multimode efficiency of the example states studied above. State $\ket\psi$ is a tensor product and has $\Es(\ketbra\psi\psi,2)=\Es(\ketbra\psi\psi,1)=1$. As we show below, the d-, s-, and u-efficiencies coincide for tensor product states, so we also have $\Ep(\ketbra\psi\psi,2)=\Ep(\ketbra\psi\psi,1)=1$. Since the u-efficiency is invariant under interferometric transformations, state $\ket{\psi'}$ has the same u-efficiency. Analyzing each of the modes of state $\ket{\psi'}$ on its own, we find them to carry the state $(\ket{1}\bra{1} + \ket{0}\bra{0})$/2, so $\Es(\ketbra{\psi'}{\psi'},1)=1/2$ and $\Es(\ketbra{\psi'}{\psi'},2)=1$. For state $\ket\phi$, both the u- and d-efficiencies equal 2. This is because, even if subjected to an interferometric transformation, it is a pure state that is not coherent, and hence cannot be obtained by attenuating another state.
\section{Proof that the \nomodf-efficiency cannot increase under LO processing}
In this section we show that it is impossible to increase the \nomodf-efficiency using LO processing.
A general LO scheme is shown in Fig.\ \ref{fig:int1}(b).
The input state $\hat\rho$, carried by $N$ optical modes with annihilation operators $\hat a_1,\ldots,\hat a_N$, is passed through a general interferometer which performs a unitary operation $Y$ on these mode operators.
We retain $\noout$ of the output modes $\hat a'_i$, and the remaining $N-\noout$ modes are subjected to a generalized destructive quantum measurement.
We consider postselection on a particular result of this measurement, and determine the \nomodf-efficiency of the state $\hat\rho_{\rm out}$ carried by the remaining output modes.
Our goal is to prove that
\begin{equation}\label{noincr}
\Ep(\hat\rho_{\rm out},\nomod)\le \Ep(\hat\rho,\nomod)
\end{equation}
for any $\nomod\le \noout$.
\begin{figure}[b]
\center{\includegraphics[width=\columnwidth]{Ints.eps}} \caption{\label{fig:int1} A general setup for LO processing.
(a) To determine the efficiency of the input state $\hat\rho$, we find an initial state $\hat\rho_0$, such that $\hat\rho$ may be obtained by attenuation and interferometer $W$ according to \eeqref{eq:kef}.
(b) LO processing of the input state.
The modes pass through a general interferometer, and all but $\noout$ of the output modes are detected via a measurement.
The state $\hat\rho_{\rm out}$ of the remaining $\noout$ modes can be conditioned on a particular measurement result.
(c) The upper limit on the efficiency of the output state is established by choosing an interferometer, $X$, through which this state can be transmitted such that the resulting state, carried by modes $\hat a''_m$, can be obtained by multimode attenuation of another state.}
\end{figure}
In accordance with definition \eqref{eq:kef}, we model the state $\hat\rho$ as being obtained from some initial state $\hat\rho_0$ by combining each of its modes, $\hat b_j$, with vacuum $\hat w_j$ on a beam splitter with transmissivity $p_j$ [Fig.\ \ref{fig:int1}(a)]:
\begin{equation}
\hat a^0_j = \sqrt{p_j} \hat b_j + \sqrt{1-p_j} \hat w_j,
\end{equation}
followed by interferometer $W$.
We assume that the settings are chosen such that, for some $\epsilon>0$,
\begin{equation}\label{Ep}
\sum_{\ell=1}^\nomod p^\downarrow_\ell \le \Ep(\hat\rho,\nomod)+\epsilon.
\end{equation}
The introduction of $\epsilon$ takes account of the possibility that there does not exist a setting which achieves the infimum.
Because interferometers $W$ and $Y$ are adjacent to each other, we can without loss of generality treat them as a single interferometer, corresponding to unitary transformation $U=YW$. The action of this interferometer can be written as
\begin{equation}\label{IntAct}
\hat a'_i = \sum_{j=1}^N U_{ij} \hat a^0_j =\sum_{j=1}^N U_{ij}\sqrt{p_j} \hat b_j + \sum_{j=1}^N U_{ij}\sqrt{1-p_j}
\hat w_j.
\end{equation}
We see that each vacuum mode contributes to each of the output modes, including those that are subjected to conditional measurements. These measurements may ``compromise'' the vacuum contributions to the output state \cite{FootNoteComp}, so the output efficiency cannot be calculated directly from the matrix elements $U_{ij}$. We address this issue by performing an RQ decomposition on the matrix
$U_{ij}\sqrt{1-p_j}$ such that
\begin{equation}\label{Rmat}
U_{ij}\sqrt{1-p_j} = \sum_{\ell=1}^N R_{i\ell} Q_{\ell j},
\end{equation}
where $Q$ is unitary and $R$ is an upper triangular matrix, so $R_{i\ell}=0$ for $\ell<i$. Then we get
\begin{equation}
\label{eq:out} \hat a'_i = \sum\limits_{\ell=1}^N U_{i\ell}\sqrt{p_\ell} \hat b_\ell + \sum\limits_{\ell=1}^N
R_{i\ell} \hat v_\ell,
\end{equation}
where
\begin{equation}
\hat v_\ell := \sum_{j=1}^N Q_{\ell j}\hat w_j
\end{equation}
are obtained by transforming modes $\hat w_j$ in a fictitious interferometer $Q$. Because all the $\hat w_j$
correspond to vacuum states, so do the $\hat v_\ell$. The subset $\{\hat v_1,\ldots,\hat v_M\}$ of these modes does not contribute to the set of output modes $\{\hat a'_{M+1},\ldots, \hat a'_N\}$ that is subjected to measurement, and thus directly leads to the loss of efficiency in the output state.
Without loss of generality, we append another interferometer, $X$, acting on the $\noout$ output modes.
Because the \nomodf-efficiency is independent of linear interferometers, this interferometer does not affect the \nomodf-efficiency at the output.
To determine the interferometer to use, we perform a singular value decomposition on the upper left $\noout\times \noout$ block of $R$ such that
\begin{equation}\label{XRpQp}
R = X^\dagger R' Q',
\end{equation}
where the upper left $\noout\times \noout$ block of $R'$ is diagonal, and unitaries $X$ and $Q'$ are equal to the identity outside the upper left $\noout\times \noout$ block.
We choose the unitary matrix $X$ for the final interferometer to be that given by this decomposition.
Denoting the annihilation operators for the modes after the interferometer $X$ by $\hat a''_k$, we have, for $k\le\noout$,
\begin{align}
\label{eq:out2} & \hat a''_k = \sum_{i=1}^\noout X_{ki} \hat a'_i \nn
&=\sum_{i=1}^\noout X_{ki} \left( \sum_{\ell=1}^N U_{i\ell}\sqrt{p_\ell} \hat b_\ell
+ \sum_{\ell=1}^N R_{i\ell} \hat v_\ell \right) \nn
&=\sum_{i=1}^\noout \sum_{\ell=1}^N X_{ki}U_{i\ell}\sqrt{p_\ell} \hat b_\ell
+\sum_{i=1}^\noout \sum_{\ell=\noout+1}^N X_{ki} R_{i\ell} \hat v_\ell \nn
& \quad + \sum_{i=1}^\noout \sum_{\ell=1}^\noout X_{ki} \sum_{k',n=1}^\noout [X^\dagger]_{ik'} R'_{k'n} Q'_{n\ell} \hat v_\ell \nn
&= \sum_{i=1}^K\sum\limits_{\ell=1}^N X_{ki} U_{i\ell}\sqrt{p_\ell} \hat b_\ell
+ \sum_{i=1}^\noout \sum_{\ell=\noout+1}^N X_{ki} R_{il} \hat v_{\ell}
+ R'_{kk} \hat v''_k,
\end{align}
where
\begin{equation}
\hat v''_k := \sum_{\ell=1}^N Q'_{k\ell} \hat v_\ell.
\end{equation}
As the set $\{\hat v''_k\}$ may be regarded as being obtained from initial vacuum modes $\{\hat w_k\}$ via a unitary transformation, they represent an orthonormal set of bosonic modes in the vacuum state. Furthermore, those $\hat v''_k$ that contribute to $\hat a''_k$ do not contain any contribution from the ``compromised'' vacuum modes. Indeed, they only contain contributions from $\hat v_\ell$ for $\ell\le M$, whereas the operators for the measured modes only contain contributions from $\hat v_\ell$ for $\ell>M$. As a result, these vacuum contributions are equivalent to loss.
To make this result explicit, we write the annihilation operator in the form $\hat a''_k = \hat B''_k + \hat
V''_k$, where
\begin{equation}
\hat B''_k = \sum_{i=1}^\noout\sum\limits_{\ell=1}^N X_{ki} U_{i\ell}\sqrt{p_\ell} \hat b_\ell + \sum_{i=1}^\noout
\sum_{\ell=\noout+1}^N X_{ki} R_{il} \hat v_{\ell} ,
\end{equation}
and
\begin{equation}
\label{eq:vdef} \hat V''_k = R'_{kk} \hat v''_k.
\end{equation}
We then find that
\begin{align}
[\hat V''_k , (\hat V''_{k'})^\dagger ] &= \delta_{kk'}|R'_{kk}|^2, \\ [\hat B''_k , (\hat B''_{k'})^\dagger ] &=
\delta_{kk'}(1-|R'_{kk}|^2).
\end{align}
The first line follows immediately from Eq.\ \eqref{eq:vdef}. The second line is obtained because $\hat B''_k =
\hat a''_k - \hat V''_k$ and $[\hat a''_k , (\hat a''_{k'})^\dagger ] = \delta_{kk'}$.
Defining
\begin{equation}
p''_k := 1-|R'_{kk}|^2, \qquad \hat b''_k := \hat B''_k/\sqrt{p''_k},
\end{equation}
we have
\begin{equation}
\hat a''_k = \sqrt{p''_k} \hat b''_k + \sqrt{1-p''_k}\hat v''_k.
\end{equation}
Therefore, the output state may be obtained by an interferometer that produces the modes with annihilation
operators $\hat b''_k$, then combining with vacua on beam splitters with transmissivities $p''_k$, as shown in Fig.\ \ref{fig:int2}.
\begin{figure}[b]
\center{\includegraphics[width=\columnwidth]{figx.eps}} \caption{\label{fig:int2} A rearrangement of the interferometer.
The vacuum modes $\{\hat v''_1,\ldots,\hat v''_\noout,\hat v_{\noout+1}\ldots\hat v_N\}$ can be obtained via an interferometer (not shown) from the original vacuum modes $\hat w_\ell$.
The modes $\hat b_\ell$ and $\{\hat v_{\noout+1},\ldots,\hat v_N\}$ are combined in the interferometer to produce $\{\hat b''_1,\ldots,\hat b''_\noout\}$, as well as $\{\hat a'_{\noout},\ldots,\hat a'_N\}$, which are measured, and some modes which are discarded.
The modes $\{\hat b''_1,\ldots,\hat b''_\noout\}$ are then combined with vacuum modes $\{\hat v''_1,\ldots,\hat v''_\noout\}$ to generate the output state.}
\end{figure}
Without loss of generality, we can assume $X$ and $Q'$ to have been chosen such that the numbers $p''_k$ are in non-increasing
order.
The \nomodf-efficiency at the output is therefore upper bounded by
\begin{equation}\label{Epout}
\Ep(\hat\rho_{\rm out},\nomod) \le \sum_{k=1}^\nomod p''_k.
\end{equation}
To determine the sum \eqref{Epout}, we can define the unitaries
\begin{equation}
U' := XU, \qquad Q'' := Q' Q.
\end{equation}
It follows from \eeqref{XRpQp} that $R' = X R (Q')^\dagger$. Therefore, according to \eeqref{Rmat},
\begin{equation}
R'_{k\ell} = \sum_{m=1}^N U'_{km} \sqrt{1-p_m} (Q''_{\ell m})^*.
\end{equation}
Then we obtain
\begin{align}
&\sum_{k=1}^\nomod p''_k \le \nomod-\sum_{k=1}^\nomod\sum_{\ell=1}^\nomod |R'_{k\ell}|^2 \nn
&= \nomod-\sum_{k=1}^N\sum_{\ell=1}^\nomod\sum_{m,j=1}^N U'_{km}\sqrt{1-p_m} (Q''_{\ell m})^* \nn &\quad \times (U'_{kj})^*\sqrt{1-p_j} Q''_{\ell j} \nn
&= \nomod-\sum_{\ell=1}^\nomod\sum_{m,j=1}^N \delta_{m j}\sqrt{1-p_m} (Q''_{\ell m})^*\sqrt{1-p_j} Q''_{\ell j} \nn
&= \nomod-\sum_{\ell=1}^\nomod\sum_{j=1}^N (1-p_j) |Q''_{\ell j}|^2 \nn &= \sum_{\ell=1}^\nomod\sum_{j=1}^N p_j |Q''_{\ell j}|^2 \le \sum_{\ell=1}^\nomod p^\downarrow_\ell.
\label{longproof}
\end{align}
The last inequality in \eeqref{longproof} is obtained because $Q''$ is unitary, so $|Q''_{\ell j}|^2$ is a doubly stochastic matrix, and thus the vector $\vec p$ majorizes \cite{NielsenChuang} the vector $\vec q$ with components
\begin{equation}
q_\ell := \sum_{j=1}^N p_j |Q''_{\ell j}|^2.
\end{equation}
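As a quick numerical sanity check of this majorization step (an illustration only, with a random unitary and random efficiencies):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 6
# Random unitary from the QR decomposition of a complex Gaussian matrix.
Q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
D = np.abs(Q) ** 2                       # doubly stochastic
p = np.sort(rng.uniform(size=N))[::-1]   # efficiencies, non-increasing
q = np.sort(D @ p)[::-1]
# Partial sums of sorted q never exceed those of sorted p (majorization):
print(np.all(np.cumsum(q) <= np.cumsum(p) + 1e-12))   # True
\end{verbatim}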
Now, according to Eqs.~\eqref{Ep}, \eqref{Epout}, \eqref{longproof} and because we can choose $\epsilon$ to be arbitrarily close to zero, we obtain
\begin{equation}
\label{EKleEK}
\Ep(\hat\rho_{\rm out},\nomod) \le \Ep(\hat\rho,\nomod).
\end{equation}
This is the main result of this work: the universal measure of quantum efficiency of a multimode state, the
\nomodf-efficiency, cannot increase under LO processing.
\section{Comparison of efficiency measures}
We now use the above result to prove some additional properties of the different measures of multimode efficiency defined in Sec.~III. First, we show that these efficiencies are related according to
\begin{equation}\label{EffIneq}
\Es(\hat\rho,\nomod)\le \Ep(\hat\rho,\nomod)\le \Ed(\hat\rho,\nomod).
\end{equation}
To examine the s-efficiency, we can again assume that state $\hat\rho$ is obtained via a set of beam splitters with transmissivities
$p_j$ and an interferometer $W$ as in Fig.\ \ref{fig:int1}(a), such that the sum of the $\nomod$ largest values of
$p_j$ is no more than $\Ep(\hat\rho,K)+\epsilon$.
Then the operators for the state $\hat\rho$ are given by
\begin{equation}
\hat a_j = \hat B_j + \hat V_j,
\end{equation}
with
\begin{equation}
\hat B_j :=\sum_{\ell=1}^N W_{j\ell}\sqrt{p_\ell}\hat b_\ell, \qquad \hat V_j :=\sum_{\ell=1}^N
W_{j\ell}\sqrt{1-p_\ell}\hat w_\ell,
\end{equation}
corresponding to operators carrying signal and vacuum fields, respectively.
To determine the s-efficiency, we determine the efficiency for each mode individually.
When determining the efficiency for mode $\hat a_j$, we can regard modes $\hat a_k$ for $k\ne j$ as being discarded.
The vacuum operators $\hat V_k$ for $k\ne j$ are not orthogonal to $\hat V_j$; however, since those modes are discarded, the addition of vacuum $\hat V_j$ is equivalent to loss.
Therefore, the efficiency of the state in mode $\hat a_j$ is no greater than
\begin{equation}
p'_j:=[\hat B_j,\hat B_j^\dagger]=\sum_{\ell=1}^N |W_{j\ell}|^2 p_\ell.
\end{equation}
The sum of the $K$ largest values of $p'_j$ upper bounds the s-efficiency; that is,
\begin{equation}
\Es(\hat\rho,\nomod)\le \sum_{j=1}^\nomod {p'}^\downarrow_j.
\end{equation}
Because $W$ is unitary, $|W_{j\ell}|^2$ is a doubly stochastic matrix, and the vector of values $\vec p$
majorizes $\vec p'$. That means that
\begin{equation}
\sum_{j=1}^\nomod {p'}^\downarrow_j \le \sum_{j=1}^\nomod p_j^\downarrow \le \Ep(\hat\rho,\nomod)+\epsilon.
\end{equation}
Because this holds for all $\epsilon>0$, we have $\Es(\hat\rho,\nomod)\le \Ep(\hat\rho,\nomod)$.
The second inequality in \eeqref{EffIneq} is because the definition of $\Ep(\hat\rho,\nomod)$ in \eeqref{eq:kef} looks for the minimum in a larger set of states than that of $\Ed(\hat\rho,\nomod)$ in \eeqref{ED}.
For tensor product states, the s- and d-efficiencies are the same.
To see this, use the definition \eqref{Ei} on the tensor product state
\begin{equation}
\hat \rho = \bigotimes_{j=1}^N \hat\rho_j.
\end{equation}
One obtains
\begin{equation}
\Es(\hat\rho,\nomod)= \sum_{\ell=1}^\nomod E(\hat\rho_{\ell})^\downarrow.
\end{equation}
Therefore, there exists a set of states $\hat\rho^0_j$ and transmissivities $p_j$, such that the sum of the $\nomod$ largest values of $p_j$ is no more than $\Es(\hat\rho,\nomod)+\epsilon$, and the final states $\hat\rho_j$ may be obtained via loss channels with transmissivities $p_j$ from initial states $\hat\rho^0_j$.
This would also provide a scheme for producing $\hat\rho$ for the definition of $\Ed(\hat\rho,\nomod)$, and therefore
\begin{equation}
\Ed(\hat\rho,\nomod) \le \sum_{j=1}^\nomod p_j^\downarrow \le \Es(\hat\rho,\nomod)+\epsilon.
\end{equation}
Because this is true for all $\epsilon>0$, we obtain $\Ed(\hat\rho,\nomod) \le \Es(\hat\rho,\nomod)$.
Combining this with \eeqref{EffIneq}, we find that $\Ed(\hat\rho,\nomod) = \Es(\hat\rho,\nomod)$ for tensor product states, and all inequalities in \eqref{EffIneq} saturate.
This result leads us to an important conclusion. Suppose we start with $N$ separable states (for example, imperfect single photons as in \eeqref{eq:mix}) with efficiencies $p_\ell$, which we subject to LO processing, resulting in a set of modes in which the efficiencies, when analyzed separately, are given by $p'_\ell$.
Using the result that LO processing cannot increase the u-efficiency, and \eeqref{EffIneq}, we have for any integer $K$,
\begin{equation}
\label{eq:cat}
\sum_{\ell=1}^\nomod p'_\ell \le \sum_{\ell=1}^\nomod p^\downarrow_\ell .
\end{equation}
In other words, the LO processing can act to average the efficiencies, but not to concentrate them.
One consequence is the exclusion of any possibility for ``catalytic'' efficiency improvement, in which some
highly efficient sources are used to increase the efficiency in other optical modes, without themselves suffering
from loss.
These results do not rule out increases in the individual efficiencies; for example, if the largest efficiency is decreased, it is possible for the second largest efficiency to be increased.
\section{Summary}
We have introduced a number of measures that enable us to quantify the efficiency in multimode systems.
The \nomodf-efficiency is a powerful general measure that takes account of how loss may have been mixed between the different modes.
It is unchanged under linear interferometers, and cannot increase under more general LO processing with destructive measurements.
We have used this result to show that catalytic improvement of photon sources is not possible with LO processing.
If one starts with independent optical sources (which produce a tensor product of states), then the efficiencies in the individual output modes are weakly majorized by the efficiencies in the input.
This means that it is not possible to concentrate the efficiencies, such that the sum of the highest $\nomod$ output efficiencies is greater than the sum of the highest $\nomod$ input efficiencies.
It is clearly possible to increase the \nomodf-efficiency if one uses \emph{non}linear optical elements.
For example, a standard method of producing single photons is via parametric downconversion (a nonlinear process), and postselection on detection of a photon in one of the output modes.
The initial beam is coherent, with efficiency zero, but the final output (ideally) has unit efficiency.
\acknowledgments
This work has been supported by NSERC, AIF, CIFAR and Quantum{\it Works}. We thank B.\ C.\ Sanders and H.\ M.\ Wiseman for stimulating
discussions.
\section{Introduction}
\label{sec:intro}
We consider the following composite linear ill-posed operator
equation~$A x = y$ with
\begin{equation} \label{eq:opeq}
\begin{CD}
A:\; @. X @> D >> Z @> B>> Y
\end{CD}
\end{equation}
where $A=B \circ \E: X \to Y$ denotes the compact linear operator with infinite dimensional range~$\Rg(A)$. This forward operator $A$ is a
composition of a compact linear operator $\E: X \to Z$ with infinite dimensional range $\Rg(\E)$ and a bounded non-compact linear operator $B: Z \to Y$ with
non-closed range $\Rg(B) \not=\overline{\Rg(B)}^Y$. Here~$X,Y$ and $Z$
denote three infinite dimensional separable real Hilbert spaces. In the nomenclature of Nashed \cite{Nashed87}, the inner problem is a linear operator equation
\begin{equation} \label{eq:inner}
\E\, x\,=\,z\,,
\end{equation}
which is ill-posed of type~II due to the compactness of $\E$, whereas the outer problem
\begin{equation} \label{eq:outer}
B\, z\,=\,y
\end{equation}
is ill-posed of type~I, since $B$ is non-compact.
Operator equations with non-compact operators possessing a non-closed range are
often assumed to be less ill-posed (ill-posedness of
type~I), and we refer to M.~Z. Nashed in~\cite[p.~55]{Nashed87} who states that ``\dots an equation
involving a bounded non-compact operator with non-closed range is
`less' ill-posed than an equation with a compact operator with
infinite-dimensional range.'' For compact operator equations it is
common to measure the \emph{degree of ill-posedness} in terms of the decay rate of the
singular values, and the above composite operator~(\ref{eq:opeq}) is of this type
despite of the non-compact factor~$B$.
In our subsequent analysis we mainly compare
the following cases, which are of the above type and might be expected to have similar
properties. The compact factor~$D$ is given either
\begin{itemize}
\item[--] as the simple integration operator
\begin{equation}\label{eq:J}
[J x](s):=\int_0^s x(t)dt\qquad(0 \le s \le 1)
\end{equation}
mapping in $L^2(0,1)$, or
\item[--] as the natural (compact) embedding
\begin{equation}
\label{eq:embk}
\mathcal E^{(k)}\colon H^{k}(0,1) \hookrightarrow L^{2}(0,1)
\end{equation}
from the Sobolev space $H^{k}(0,1)$ of order $k \in \N$ to $L^2(0,1)$.
\end{itemize}
This will be composed with~$B$ being either
\begin{itemize}
\item[--] a bounded linear multiplication operator
\begin{equation}\label{eq:multioperator}
[B^{(M)}x](t):=m(t)\,x(t) \qquad (0 \le t \le 1)
\end{equation}
with a multiplier function $m \in L^\infty(0,1)$ possessing essential
zeros, or
\item[--] the Hausdorff moment operator $B^{(H)}: Z=L^2(0,1) \to Y=\ell^2$ defined as
\begin{equation} \label{eq:Haus}
[B^{(H)}z]_j:= \int_0^1 t^{j-1}z(t)dt \qquad (j=1,2,...).
\end{equation}
\end{itemize}
The inner operators~(\ref{eq:J}) and (\ref{eq:embk}) are known to be compact, even Hilbert-Schmidt, and
the decay rates of their singular values $\sigma_i(J)$ and $\sigma_i(\mathcal{E}^{(k)})$ to zero are available.
Both the above outer operators~(\ref{eq:multioperator}) and~(\ref{eq:Haus}) are known to be non-compact with non-closed range.
The composition~$B^{(M)}\circ J$ was studied
in~\cite{Freitag05,HW05,HW09,VuGo94}. Recent studies of the Hausdorff moment problem, which goes back to
Hausdorff's paper \cite{Hausdorff23}, have been presented
in~\cite{GHHK21}. In particular, we refer to ibid. Theorem~1 and
Proposition~13,
which yield assertions for the composition of type $B^{(H)}\circ \mathcal{E}^{(k)}$.
The question that we are going to address is the following:
What is, in terms of the decay of the singular values $\sigma_i(B \circ D)$ of the composite operator $B \circ D$ from (\ref{eq:opeq}), the impact of the non-compact outer
operator $B$?
In the case of $B:= B^{(M)}$ and $D:=J$ results are known.
For several classes of multiplier functions~$m,$ including $m(t)=t^\theta$ for all $\theta>0$,
it was seen that the singular values of the composite operator $A$ obey the equivalence~
\begin{equation}
\label{eq:SVDcon}
\sigma_{i}(A) = \sigma_{i}(B^{(M)}\circ J)\; \asymp
\footnote{We shall measure the decay rates of the singular values
asymptotically; thus for decreasing sequences~$s_{i}\geq 0$
and~$t_{i}\geq 0$ we say that~$s_{i}\asymp t_{i}\;$ as $\;i\to\infty$
if there are constants $0<\underline c \le \overline c < \infty$
such that the inequalities
$$ \underline{c} \,s_{i} \le t_{i} \le \overline{c} \,s_{i} \qquad (i=1,2,...)$$
are valid.}\; \sigma_{i}(J)\asymp
\frac 1 i\quad {\rm as} \; i\to\infty,
\end{equation}
which means that $B^{(M)}$ does not `destroy' the degree of ill-posedness of $J$ by composition.
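To make the equivalence~(\ref{eq:SVDcon}) tangible, the following small Python sketch (our own illustration, not taken from the cited works) discretizes $J$ by the midpoint rule, composes it with the multiplier $m(t)=t$, and prints the ratios $\sigma_i(B^{(M)}\circ J)/\sigma_i(J)$, which can be inspected to remain bounded away from zero and infinity for moderate indices~$i$.
\begin{verbatim}
import numpy as np

# Midpoint-rule discretization of J on an N-point grid; the matrix
# h * tril(ones) maps grid values of x to grid values of Jx, and its
# singular values approximate sigma_i(J) = 1/((i - 1/2) * pi).
N = 2000
h = 1.0 / N
t = (np.arange(N) + 0.5) * h

J = h * np.tril(np.ones((N, N)))
BJ = np.diag(t) @ J                    # (B^(M) o J) with m(t) = t

sJ = np.linalg.svd(J, compute_uv=False)
sBJ = np.linalg.svd(BJ, compute_uv=False)

for i in [1, 10, 50, 200]:
    print(i, sBJ[i - 1] / sJ[i - 1])   # ratios: bounded above and below
\end{verbatim}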
\begin{remark}
The right-hand inequalities $\sigma_i(B \circ J) \le \overline{c} \,\sigma_i(J)$, for example required in~(\ref{eq:SVDcon}), are
trivially satisfied if $B$ is bounded. We have
$\sigma_{i}(B^{(M)}\circ J) \leq C\, \sigma_{i}(J)$ with~$C:=
\norm{B^{(M)}}{L^{2}(0,1) \to L^{2}(0,1)} $. Clearly, the same reasoning applies to
the composition operator $B^{(H)} \circ J$, and we have with $C:=
\norm{B^{(H)}}{L^{2}(0,1) \to \ell^2}= \sqrt{\pi}$
(cf.~\cite{Inglese92})
the upper estimate $\sigma_{i}(B^{(H)} \circ J) \leq
\sqrt{\pi}\,\sigma_{i}(J)\;(i=1,2,...)$.
\end{remark}
To the best of our knowledge, by now no examples are known that show a
violation of $\sigma_i(B \circ D) \asymp \sigma_i(D)$.
In the present study we shall show that~$\sigma_{i}(B^{(H)}\circ J)/\sigma_{i}(J)
\leq C \,i^{-1/2} \;(i=1,2,...)$ with some positive constant $C$, i.e., the non-compact Hausdorff moment
operator~$B^{(H)}$ enlarges the degree of ill-posedness of $J$ by an
increment of at least~$1/2$ in the decay exponent.
We shall start in
Section~\ref{sec:general} with some results for general operators,
relating conditional stability estimates to the decay of the singular
numbers of the composition~$B \circ D$.
Conditional stability estimates for the composition with the
Hausdorff moment operator are given in Section~\ref{sec:HMP}
, both for the embedding operator and the integration
operator. According to Theorem~\ref{thm:general} we
derive lower bounds for the decay rates of the
compositions~$B^{(H)}\circ \mathcal E^{(k)}$ and~$B^{(H)} \circ J$,
respectively.
The composite operators, both,~$A^*A$ for~$A=B^{(H)}\circ J$, and
$\widetilde A^* \widetilde A$ for~$\widetilde A=B^{(M)}\circ J$ are Hilbert-Schmidt operators, because the
factor~$J$ is such. In particular these may be expressed as linear Fredholm
integral operators acting in~$L^{2}(0,1)$ with symmetric positive kernels~$k$
and~$\widetilde k$, respectively. There are well-known results which
state that certain type of kernel smoothness yields a minimum decay
rate of the corresponding singular values of the integral
operator. Therefore, in Section~\ref{sec:kernel} we establish the
form of the kernels~$k$ and~$\widetilde k$, and we study their
smoothness. In particular, for the composition~$B^{(H)}\circ J$ we
shall see that the known results are not applicable, whereas in
case~$B^{(M)}\circ J$ these known results are in alignment with~$\sigma_{i}(B^{(M)}\circ J) \asymp \sigma_{i}(J)\asymp \frac 1 i$.
Finally, in Section~\ref{sec:bounding-sing-numbers}, we
improve the upper bounds for
the decay of the singular values of the composition~$B^{(H)}\circ J$,
giving the first example that violates~$\sigma_{i}(B \circ D)\asymp
\sigma_{i}(D)$ as~$i\to\infty$ in the context of a non-compact outer
operator~$B$. This approach bounds the singular values by means of
bounds for the Hilbert-Schmidt norm of the composition~$\norm{\lr{ B^{(H)}\circ J}(I -
Q_{n})}{HS}$, where $Q_n$ is a projection on the $n$-dimensional subspace of
adapted Legendre polynomials in $L^2(0,1)$. We continue to discuss the
obtained result in Section~\ref{sec:discussion}. An appendix completes the paper.
\section{Results for general operators} \label{sec:general}
We start with a general theorem explaining the interplay of
conditional stability estimates and upper bounds for the degree of
ill-posedness. To this end we shall use results from the theory
of~$s$-numbers, and we refer to
the monograph~\cite[Prop.~2.11.6]{Pie87}. In particular, for a compact
operator, say~$T\colon X \to Y$ the singular
values~$\sigma_{i}(T)$ coincide with the corresponding (linear)
approximation numbers~$a_{i}(T)$, and hence the identities
\begin{equation}
\label{eq:s-nums}
\sigma_{i}(T) = \|T(I-P_{i-1})\|_{X \to Y} = \inf\{\|T - L\|_{X \to Y}: {\mathrm{dim}}(\mathcal{R}(L))<i\}
\end{equation}
hold for all $i=1,2,\dots\,$.
Above, we denote by $\{\sigma_i(T),u_i,v_i\}_{i=1}^\infty$ with
$T u_i=\sigma_i(T)v_i,\ (i=1,2,...)$ the well-defined (monotonic) singular system of
the compact operator $T$, and~$P_n: X \to X\;(n=1,2,...)$
the orthogonal projection onto~${\rm span}(u_1,...,u_n)$, the $n$-dimensional subspace of $X$, where we assign~$P_0=0: X \to X$.
The main estimate is as follows:
\begin{theorem} \label{thm:general}
Let~$\E: X \to Z$ and~$A: X \to Y$ be compact linear operators between the infinite dimensional Hilbert spaces $X,\,Y$ and
$Z$ with non-closed ranges $\mathcal{R}(\E)$ and
$\mathcal{R}(A)$. Suppose that there exists an index function $\Psi:(0,\infty) \to (0,\infty)$
such that for $0<\delta \le \|A\|_{X \to Y}$ the conditional stability estimate
\begin{equation} \label{eq:condstab}
\sup\{\,\|\E x\|_Z:\,\|A x\|_Y \le \delta,\;\|x\|_X \le 1\} \le \Psi(\delta)
\end{equation}
holds. Then we have
\begin{equation} \label{eq:eststab}
\sigma_i(\E) \le \Psi(\sigma_i(A)) \qquad (i=1,2,...)
\end{equation}
and also
\begin{equation} \label{eq:estinv}
\Psi^{-1}(\sigma_i(\E)) \le \sigma_i(A) \qquad (i=1,2,...).
\end{equation}
If the operators~$\E^{\ast}\E\colon X \to X$ and~$A^{\ast}A\colon X
\to X$ commute, and if the index function~$t\mapsto \Psi^{2}(\sqrt t),\ t>0$ is concave then the
converse holds true in the sense that~(\ref{eq:eststab}) implies the stability estimate~(\ref{eq:condstab}).
\end{theorem}
\begin{proof}
Suppose that~(\ref{eq:condstab}) holds true. Then for every $u \in
X,\ \|u\|_X \le 1$, we see that
\begin{equation} \label{eq:impli}
\|Au\|_Y \le \delta \quad \mbox{implies that} \quad \|\E u\|_Z \le \Psi(\delta) \qquad (\delta>0).
\end{equation}
Consider the singular projections~$P_{i}$ for the operator~$A$.
For arbitrarily chosen~$x \in X$ with $\|x\|_X \le 1$ we see that
$$
\|A(I-P_{i-1})x\|_Y \le \|A(I-P_{i-1})\|_{X \to Y}\,\|x\|_X
\le \sigma_i(A).
$$
Applying~(\ref{eq:impli}) with $u:=(I-P_{i-1})x$ and $\delta:=\sigma_i(A)$ yields
$\|\E(I-P_{i-1})x\|_Z \le \Psi(\sigma_i(A))$. Since $x \in X$ with
$\|x\|_X \le 1$ was chosen arbitrarily, we even arrive at
$\|\E(I-P_{i-1})\|_{X \to Z} \le \Psi(\sigma_i(A))$.
By virtue of (\ref{eq:s-nums}) we find for
$$\sigma_i(\E) = \inf\{\|\E - L\|_{X \to Z}: {\mathrm{dim}}(\mathcal{R}(L))<i\}$$
that
\begin{equation} \label{eq:Qj}
\sigma_i(\E) \leq \|\E(I-P_{i-1})\|_{X \to Z} \le \Psi(\sigma_i(A)),
\end{equation}
which proves~(\ref{eq:eststab}). Since the inverse of an index function exists and is also an index function, hence
monotonically increasing, the estimate~(\ref{eq:estinv}) is a
consequence of~(\ref{eq:eststab}).
Next, suppose that the operators~$\E^{\ast}\E$ and~$A^{\ast}A$
commute, and hence they share the same singular
functions~$u_{1},u_{2},\dots$ Clearly, for~$x=0$ we have
that~$\norm{\E x}{Z}=0\leq \Psi(\delta)$, so we may and do assume that~$x\neq 0$.
Assume that~(\ref{eq:eststab}) holds. We abbreviate~$f(t):=
\Psi^{2}(\sqrt t),\ t>0$.
First, if~$\norm{x}{X}=1$ then we bound
\begin{align*}
\norm{\E x}{Z}^{2} & = \sum_{i=1}^{\infty}
\sigma_{i}^{2}(\E)\abs{\scalar{x}{u_{i}}}^{2} \leq \sum_{i=1}^{\infty}
f\lr{\sigma_{i}^{2}(A)}\abs{\scalar{x}{u_{i}}}^{2} \\
& \leq
f\lr{\sum_{i=1}^{\infty}\sigma_{i}^{2}(A)\abs{\scalar{x}{u_{i}}}^{2}}
= f\lr{\norm{Ax}{Y}^{2}},
\end{align*}
where we used Jensen's Inequality for~$f$. Hence~$\norm{\E x}{Z} \leq
\Psi\lr{\norm{Ax}{Y}}$. Consequently, for~$x\in X, \norm{x}{X}>0$ this
extends to
\begin{equation}
\label{eq:square-bound-normneq1}
\frac{\norm{\E x}{Z}}{\norm{x}{X}}\leq
\Psi\lr{\frac{\norm{Ax}{Y}}{\norm{x}{X}}},\quad x\neq 0.
\end{equation}
For the concave index function~$f$ we see that~$f(at) \geq a f(t),\ t>0$
whenever~$a\leq 1$. Thus for~$a:= \norm{x}{X}^{2}\leq 1$ and~$t:= \lr{\frac{\norm{Ax}{Y}}{\norm{x}{X}}}^{2}$ we find that
$$
\norm{\E x}{Z} \leq \Psi(\norm{Ax}{Y}),\quad x\neq 0,
$$
which in turn yields the validity of~(\ref{eq:condstab}), and this
completes the proof.
\end{proof}
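\begin{remark}
As a simple illustration of Theorem~\ref{thm:general}, consider a power-type index function $\Psi(\delta)=c\,\delta^{\kappa}$ with $c>0$ and $0<\kappa\le 1$. Then~(\ref{eq:eststab}) reads $\sigma_i(\E) \le c\,\sigma_i^{\kappa}(A)$, and~(\ref{eq:estinv}) yields the lower bound $\sigma_i(A) \ge \left(\sigma_i(\E)/c\right)^{1/\kappa}\;(i=1,2,...)$. Moreover, the function $t\mapsto \Psi^{2}(\sqrt t)=c^{2}\,t^{\kappa}$ is concave precisely when $\kappa\le 1$, in accordance with the converse part of the theorem.
\end{remark}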
\begin{remark} \label{rem:sufficient}
If the conditional stability estimate \eqref{eq:condstab} is not valid for all $\delta>0$, but for sufficiently small $\delta>0$, then the estimates \eqref{eq:eststab} and \eqref{eq:estinv} are
not valid for all $i \in \mathbb{N}$, but for $i$ sufficiently large. Hence, the corresponding assertions about the singular value asymptotics do not change.
\end{remark}
\begin{remark} \label{rem:modul}
We mention here that the term $$\sup\{\,\|\E x\|_Z:\,\|A x\|_Y \le \delta,\;\|x\|_X \le 1\},$$ occurring in formula \eqref{eq:condstab}, is a special case of the modulus of continuity
\begin{equation} \label{eq:modul}
\omega_M(\delta):= \sup\{\,\|\E x\|_Z:\,\|A x\|_Y \le \delta,\;x \in M\}
\end{equation}
with some closed and bounded set $M \subset X$ such that $\E M$ represents a compact subset of $Z$. This is due to the compactness of the operator $\E: X \to Z$.
Note that $\omega_M(\delta)$ is increasing in $\delta>0$ with the limit condition
$\lim_{\delta \to 0\,} \omega_M(\delta)=0$. Moreover, we have for constants $E>1$ and
centrally symmetric and convex sets $M$ that $\omega_{E M}(\delta)=E\,\omega_M(\delta/E)$.
For further details of this concept we refer, for example, to \cite{BHM13,HMS08}.
In general, one is interested in bounding the modulus of continuity by a majorant index function $\Psi$ as in formula \eqref{eq:condstab},
which leads to conditional stability estimates. Precisely in \eqref{eq:condstab} we have the situation of a centrally symmetric and convex
$M=\{x \in X: \|x\|_X \le 1\}$ under consideration with associated majorant index function $\Psi$. Consequently, this also yields for $E>1$
$$
\sup\{\,\|\E x\|_Z:\,\|A x\|_Y \le \delta,\;\|x\|_X \le E\} \le E\,\Psi(\delta/E).
$$
It is known from approximation theory, and it was highlighted
in~\cite[Prop.~2.9]{BHM13}, that there is always a concave
majorant for the modulus of continuity, such that without loss of
generality we may assume~$\Psi$ to be concave.
\end{remark}
\begin{remark}
It is shown in~\cite[Thm.~4.1]{BHM13}
that the required concavity of the function~$t\mapsto \Psi^{2}(\sqrt
t),\ t>0$ automatically holds true whenever~$\E^{\ast}\E$ is a function of~$A^{\ast}A$,
i.e.,\ $\E^{\ast}\E = \varphi(A^{\ast}A)$ for an index function~$\varphi$. For power type functions~$\Psi$ the
concavity assertion holds true if and only if~$\Psi$ is concave, see the end of
Remark~\ref{rem:modul}.
\end{remark}
\section{Compositions with the Hausdorff moment operator} \label{sec:HMP}
In order to apply Theorem~\ref{thm:general} to compositions with the
Hausdorff moment operator~$B^{(H)}$ from~(\ref{eq:Haus}), we formulate
appropriate conditional stability estimates.
\begin{theorem} \label{thm:Hausdorff}
There are constants~$C_k>0$ depending on $k=0,1,2,...$ such that
\begin{enumerate}
\item[(a)] \label{it:stability-E}
For the composite problem~$B^{(H)}\circ \mathcal E^{(k)}$ the bound
\begin{equation} \label{eq:supH}
\sup\{\,\|x\|_{L^2(0,1)}:\,\|B^{(H)}(\mathcal E^{(k)} x)\|_{\ell^2} \le
\delta,\;\|x\|_{H^k(0,1)} \le 1\}
\le \frac{C_k}{(\ln(1/\delta))^k}
\end{equation}
holds for sufficiently small $\delta>0$.
\item[(b)]\label{it:stability-J} For the composite problem~$B^{(H)}\circ J$ the bound
\begin{equation} \label{eq:supJ}
\sup\{\,\|J x\|_{L^2(0,1)}:\,\|B^{(H)}(J x)\|_{\ell^2} \le \delta,\;\|x\|_{L^2(0,1)} \le 1\} \le \frac{C_0}{\ln(1/\delta)}
\end{equation}
holds for sufficiently small $\delta>0$.
\end{enumerate}
\end{theorem}
The proof will be along the lines of~\cite{Tal87}, and we shall state
the key points here. The analysis will be based on the (normalized) shifted
Legendre polynomials~ $\{L_j\}_{j=1}^\infty$ with the explicit representation
\begin{equation}
\label{eq:legendre}
L_j(t)=\frac{\sqrt{2j-1}}{(j-1)!}\left(\frac{d}{dt} \right)^{j-1} t^{j-1}(1-t)^{j-1} \qquad (t \in [0,1],\;j=1,2,...).
\end{equation}
The system $\{L_j\}_{j=1}^\infty$ is the result of the Gram-Schmidt
orthonormalization process of the system $\{t^{j-1}\}_{j=1}^\infty$ of
monomials. Consequently, we have
\begin{equation}
\label{eq:span-span}
{\operatorname{span}}(1,t,...,t^{N-1})={\operatorname{span}}(L_1,L_2,...,L_N).
\end{equation}
These polynomials form an orthonormal basis in~$L^{2}(0,1)$. We denote by~$Q_{n}$ the orthogonal
projection onto the span~$\mathcal D(Q_{n})\subset L^2(0,1)$ of the
first~$n$ Legendre polynomials, and by~$P_{n}$ the projection onto the
first~$n$ unit basis vectors in~$\ell^{2}$.
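As a quick numerical cross-check (our own illustration), the representation~(\ref{eq:legendre}) coincides, up to sign, with $\sqrt{2j-1}\,P_{j-1}(2t-1)$ for the classical Legendre polynomials $P_n$, and the following Python sketch verifies the orthonormality of the system on a fine midpoint grid.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import legval

# Shifted, normalized Legendre polynomials; up to sign this realizes
# the Rodrigues-type formula (eq:legendre).
def L(j, t):
    c = np.zeros(j); c[j - 1] = 1.0
    return np.sqrt(2 * j - 1) * legval(2 * t - 1, c)

t = (np.arange(4000) + 0.5) / 4000     # midpoint grid on (0,1)
G = np.array([[np.mean(L(i, t) * L(j, t)) for j in range(1, 6)]
              for i in range(1, 6)])
print(np.round(G, 6))                  # approximately the 5x5 identity
\end{verbatim}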
\begin{lemma}\label{lem:PAQ}
For the Hausdorff moment operator~$B=B^{(H)}$ from~(\ref{eq:Haus}) the following holds true.
\begin{enumerate}
\item[(I)] $P_{n} B Q_{n} = P_{n}B$,
\item[(II)] $P_{n} B B^{\ast}P_{n} = H_{n}$
with~$H_{n}\colon \ell^{2}_{n}\to \ell^{2}_{n}$ being the
$n$-dimensional segment of the Hilbert matrix,
\item[(III)]
$\norm{Q_{n}x}{L^{2}(0,1)} \leq \frac{\norm{P_{n} B
x}{\ell^{2}}}{\sigma_{n}(P_{n}B)}$, and
\item[(IV)]
$\sigma_{n}(P_{n}B) = \norm{H_{n}^{-1}}{\ell^{2}_{n}\to
\ell^{2}_{n}}^{-1/2}$.
\end{enumerate}
Consequently we have
that~$\norm{Q_{n}x}{L^{2}(0,1)}
\leq{\norm{H_{n}^{-1}}{\ell^{2}_{n}\to
\ell^{2}_{n}}^{1/2}}{\norm{P_{n} B x}{\ell^{2}}}$.
\end{lemma}
\begin{proof}
The first assertion (I) is easily checked and it results from the fact
that the Gram-Schmidt matrix for turning from the monomials to the
Legendre coefficients, see~(\ref{eq:span-span}), is lower triangular. The second assertion (II) was
shown in~\cite[Prop.~4]{GHHK21}. The third assertion (III) is a
re-statement of
$$
\sigma_{n}(P_{n}B) \leq \frac{\norm{P_{n} B
x}{\ell^{2}}}{\norm{Q_{n}x}{L^{2}(0,1)}},\quad x\neq 0.
$$
In view of the first item (I) it is enough to prove that
$$
\sigma_{n}(P_{n}B) \leq \inf_{0 \neq z\in \mathcal
D(Q_{n})}\frac{\norm{P_{n} B z}{\ell^{2}}}{\norm{z}{L^{2}(0,1)}}.
$$
It is well known from approximation theory that
$$
\sigma_{n}(P_{n}B) = \sup_{X_{n}:\, \dim(X_{n})=n}\;\inf_{0 \neq z\in
X_{n}}\frac{\norm{P_{n} B
z}{\ell^{2}}}{\norm{z}{L^{2}(0,1)}}.
$$
Indeed, the right-hand side above corresponds to the definition of
the \emph{Bernstein numbers}, which constitute an $s$-number,
see~\cite[Thm.~4.5]{Pie74}. By item (I) the operator $P_{n}B$ vanishes on the
orthogonal complement of $\mathcal D(Q_{n})$, so that the supremum is attained
at~$X_{n}=\mathcal D(Q_{n})$, and this proves item (III). The last
item (IV) follows from
$$
\sigma_{n}^{2}(P_{n}B) = \sigma_{n}(P_{n} B B^{\ast}P_{n}) =
\sigma_{n}(P_{n} H P_{n}) = \sigma_{n}(H_{n}) = \frac 1
{\norm{H_{n}^{-1}}{{\ell^{2}_{n}\to
\ell^{2}_{n}}}},
$$
which in turn yields the final assertion. The proof is complete.
\end{proof}
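The quantity $\norm{H_{n}^{-1}}{\ell^{2}_{n}\to\ell^{2}_{n}}$ appearing in Lemma~\ref{lem:PAQ} grows exponentially in $n$, which will be quantified in the proof of Theorem~\ref{thm:Hausdorff} below. The following Python sketch (our illustration) computes the spectral norm of the inverse Hilbert matrix for small $n$; the logarithmic growth rate per $n$ is compatible with the constant $4\ln(1+\sqrt{2})\approx 3.53$ quoted there.
\begin{verbatim}
import numpy as np
from scipy.linalg import invhilbert

# Spectral norm of the inverse of the n-dimensional Hilbert matrix.
for n in range(2, 13, 2):
    nrm = np.linalg.norm(invhilbert(n), 2)
    print(n, nrm, np.log(nrm) / n)     # third column -> about 3.5
\end{verbatim}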
The next result concerns the approximation power of smooth functions
by Legendre polynomials.
\begin{lemma}\label{lem:AP-Legendre}
For functions~$x\in H^{k}(0,1)$ there is a constant~$K_{k}$ such
that
\begin{equation} \label{eq:rN} \|(I - Q_{n})x\|_{L^2(0,1)} \le
K_k\, \frac 1 {n^{k}}\quad (n\in\N).
\end{equation}
For~$k=1$, and hence~$x\in H^{1}(0,1)$, this may be specified as
$$
\|(I - Q_{n}) x\|_{L^2(0,1)} \le
\frac{\|x^\prime\|_{L^2(0,1)}}{2n}
\quad (n\in\N).
$$
\end{lemma}
\begin{remark}In~\cite[Thm.~4.1]{Angbook02} the proof of~(\ref{eq:rN})
is given for~$k=1$. In ibid.~Remark~4.1 the extension for other values of~$k$
is stated without explicit proof. In~\cite[Thm.~2.5]{Wang12} a proof
is given for the Legendre polynomials on the interval~$(-1,1)$,
based on ibid.~Theorem~2.1 which describes the decay rates of the
expansions in terms of Legendre polynomials for functions with
Sobolev type smoothness. The specification in the second bound is
taken from~\cite[Eq.~(27)]{Tal87}.
\end{remark}
Based on the above preparations we turn to the
\begin{proof}[Proof of Theorem~\ref{thm:Hausdorff}]
For both assertions (a) and (b) we are going to use a decomposition of the form
\begin{equation}
\label{eq:z-deco}
\norm{z}{L^{2}(0,1)} \leq \norm{Q_{n}z}{L^{2}(0,1)} + \norm{(I - Q_{n})z}{L^{2}(0,1)}
\end{equation}
where~$Q_{n}$ is the orthogonal projection on the span of the first~$n$
Legendre polynomials.
For the first assertion (a) we let~$z:= x$, and we bound each summand.
Recall that here~$\mathcal E^{(k)}$ is the natural embedding with $\mathcal
E^{(k)} x = x$ for all $x \in H^k(0,1)$. Thus, by Lemma~\ref{lem:PAQ} the first summand
is bounded as
$$
\norm{Q_{n}x}{L^{2}(0,1)} \leq \delta \norm{H_{n}^{-1}}{\ell^{2}_{n}\to
\ell^{2}_{n}}^{1/2}.
$$
From~\cite{Todd54,Wilf70} and \cite{Beckermann00} we know that there
is a constant~$\hat C$, independent of $n$, for which
$$\|H_n^{-1}\|_{\ell^{2}_{n}\to \ell^{2}_{n}} \le \hat
C\exp(4\ln(1+\sqrt{2})\,n) \le \hat C\exp(4n).
$$
This yields
\begin{equation} \label{eq:firstsum} \|Q_{n} x\|_{L^2(0,1)} \le
\sqrt{\hat C}\exp(2n)\,\delta.
\end{equation}
The second summand in~(\ref{eq:z-deco}) is bounded in
Lemma~\ref{lem:AP-Legendre}, and altogether we find that
\begin{equation} \label{eq:total} \|x\|_{L^2(0,1)} \le \sqrt{\hat
C}\exp(2n)\,\delta + K_k\,\frac{1}{n^{k}}.
\end{equation}
We choose an integer~$N = n(\delta)$ adapted to the two terms on the right-hand
side of the estimate \eqref{eq:total}, namely
$$
N = \lfloor \frac 1 4 \ln(1/\delta)\rfloor+1,\quad 0 < \delta \leq \exp(-4).
$$
With this choice we have $\exp(2N)\,\delta \le e^{2}\,\sqrt{\delta}$, which decays
faster than any negative power of $\ln(1/\delta)$, whereas
$K_k\, N^{-k} \le 4^{k} K_k\,(\ln(1/\delta))^{-k}$.
Substituting $n:=N$ in~\eqref{eq:total} thus yields for sufficiently small
$\delta>0$
the final estimate
$$\|x\|_{L^2(0,1)} \le \frac{C_k}{\left(\ln \left( 1/\delta\right)\right)^k}, $$
with some positive constant
$C_k$ depending on $k$.
For proving the second assertion (b) we assign~$z:= Jx$. Then the first
summand in~(\ref{eq:z-deco}) allows for an estimate of the form
$$\|Q_{n}(Jx)\|_{L^2(0,1)}\le \sqrt{\hat C} \exp(2n)\, \delta, $$
again for some constant $\hat C>0$. For bounding~$\norm{(I -
Q_{n})(Jx)}{L^{2}(0,1)}$ we use the second estimate in
Lemma~\ref{lem:AP-Legendre} which gives, for~$\|x\|_{L^2(0,1)} \le 1$,
the bound
$$
\|(I - Q_{n}) (Jx)\|_{L^2(0,1)} \le \frac{\|x\|_{L^2(0,1)}}{2n} \leq
\frac 1 {2n}.
$$
Then we can proceed as for the first assertion
in order to
complete the proof of the second assertion, and of the theorem.
\end{proof}
The proof formulated above is an alternative to the proof of
\cite[Theorem~1]{GHHK21} for $k=1$ and an extension to the cases
$k=2,3,...$. Consequences of Theorems~\ref{thm:general} and
\ref{thm:Hausdorff} for the singular value decay rate of the Hausdorff
moment composite operator $A:=B^{(H)} \circ \mathcal{E}^{(k)}$ are
summarized in the following corollary.
\begin{corollary} \label{cor:Hausdorff} For the composite Hausdorff
moment problem~$B^{(H)}\circ \mathcal E^{(k)}$
there exist
positive constants $C_k,\; \underline{C}$ and $\overline C$ such
that
\begin{equation*}
\exp(-\underline{C}\,i) \le
\exp\left(-\left(\frac{C_k}{ \sigma_i(\mathcal
E^{(k)})}\right)^{\frac1k} \right)
\le \sigma_i(B^{(H)}\circ \mathcal E^{(k)}) \le \sqrt{\pi}\,\sigma_i\lr{\mathcal E^{(k)}} \le
\frac{\overline{C}}{i^k}
\end{equation*}
is valid for sufficiently large indices $i \in \mathbb{N}$.
\end{corollary}
\begin{proof}
Taking into account the well-known
singular value asymptotics $\sigma_i\lr{\mathcal E^{(k)}} \asymp
i^{-k}$ as~$i\to\infty$ (cf.~\cite[\S3.c]{Koenig86}) and the norm
$\|B^{(H)}\|_{L^2(0,1)\to \ell^2}=\sqrt{\pi}$, we simply find
for the composition~$A = B^{(H)} \circ \mathcal E^{(k)}$ the estimates from above
$$\sigma_i(B^{(H)}\circ \mathcal E^{(k)}) \le \sqrt{\pi}\,\sigma_i\lr{\mathcal E^{(k)}}\, \le
\frac{\overline{C}}{i^k},$$ with some positive constant
$\overline{C}$.
We need to show the lower bounds, and we are going to apply
Theorem~\ref{thm:general} in combination with the estimate \eqref{eq:supH} from
Theorem~\ref{thm:Hausdorff}.
To do so we set
$X:=H^k(0,1)$, $Z:=L^2(0,1)$,
$Y:=\ell^2$, as well as~$\E:=\mathcal E^{(k)}$, $A:=B^{(H)}\circ
\mathcal E^{(k)}$, and~$\Psi(\delta):= \frac{C_k}{(\ln(1/\delta))^k}$ for sufficiently
small $\delta>0$. This function has the inverse~$\Psi^{-1}(t)=\exp\left(-\left(\frac{C_k}{t}\right)^{1/k}\right).$
Then the conditional stability estimate \eqref{eq:condstab} attains
the form~\eqref{eq:supH}, and we derive from \eqref{eq:estinv} that
$$ \exp\left(-\left(\frac{C_k}{ \sigma_i(\E)}\right)^{1/k} \right)
=\Psi^{-1}(\sigma_i(\mathcal E^{(k)})) \le \sigma_i\lr{B^{(H)}\circ
\mathcal E^{(k)}} $$
for sufficiently large indices $i \in \mathbb{N}$. This completes the
proof.
\end{proof}
Theorem~\ref{thm:general} also applies to the composition~$B^{(H)}\circ
J$, and yields along the lines of the proof of
Corollary~\ref{cor:Hausdorff} the following result.
\begin{corollary} \label{cor:HausdorffJ}
For the composite Hausdorff moment problem~$B^{(H)}\circ J$ there exist positive constants $\underline{C}$ and $\overline C$ such that
\begin{equation} \label{eq:rough1}
\exp(-\underline{C}\,i) \le \sigma_i\lr{B^{(H)}\circ J} \le \frac{\overline{C}}{i}
\end{equation}
is valid for sufficiently large indices $i \in \N$.
\end{corollary}
The gap between the lower and upper bounds for the singular values~$\sigma_i\lr{B^{(H)}\circ \mathcal E^{(k)}}$ and~$\sigma_i\lr{B^{(H)}\circ J}$
expressed in Corollaries~\ref{cor:Hausdorff} and~\ref{cor:HausdorffJ},
respectively, is quite large.
\section{Discussion of kernel smoothness}
\label{sec:kernel}
The composite operators that were considered so far are
Hilbert-Schmidt operators, because their compact factors $J$ and $\mathcal
E^{(k)}$, respectively, have this property. Hilbert-Schmidt operators
acting in~$L^{2}(0,1)$ are integral operators, and hence these can be
given in the form of a Fredholm integral operator~$[G(x)](s):=\int \limits_0^1
k(s,t) x(t) dt\;(0 \le s \le 1)$ with kernel
$k=k(s,t) \in L^2((0,1) \times (0,1))$.
It is well-known that decay rates of the singular values grow with the smoothness of the kernel
$k$, and we refer in this context to the following result.
\begin{lemma}[{see~\cite{Chang52}}] \label{lem:Allen}
Consider in $L^2(0,1)$ the Fredholm integral operator~$[G(x)](s):=\int \limits_0^1
k(s,t) x(t) dt\;(0 \le s \le 1)$
and assume that the kernel~$k$, and the derivatives~$\frac{\partial k}{\partial
s}$,...,$\frac{\partial^{l-1}k}{\partial s^{l-1}}$ exist and are
continuous in $s$ for almost all~$t$.
Moreover, assume that there exist~$g \in L^2((0,1) \times (0,1))$
and $V \in L^1(0,1)$ such that
\begin{equation} \label{eq:H1}
\frac{\partial^{l}k(s,t)}{\partial s^{l}}= \int \limits_0^s g(\tau,t)\, d\tau + V(t).
\end{equation}
Then we have
\begin{equation} \label{eq:Fredholmrates}
\sigma_i(G)=o\left( i^{-l-1.5}\right) \quad \mbox{as} \quad i \to \infty.
\end{equation}
\end{lemma}
We emphasize that Lemma~\ref{lem:Allen} provides us with upper rate bounds,
corresponding to a minimum speed of the decay to zero of the singular
values. If, in particular, the kernel is infinitely smooth on the whole unit square,
then the decay rate of the associated singular values is faster than
$\mathcal{O}(i^{-\eta})$ for arbitrarily large~$\eta>0$. Consequently
an exponential-type decay of the singular values can take place.
Lower bounds cannot be expected in general, as the simple rank-one
example~$k(s,t) = (s-1/2)_{+}\times (t-1/2)_{+}\; (0\leq s,t\leq1)$ shows:
it exhibits low smoothness, but its sequence of singular values, with $\sigma_{1}>0$
and $\sigma_i=0\;(i=2,3,...)$, decays at any rate.
However, non-smoothness aspects like non-differentiability, non-Lipschitz and occurring poles
in the kernel give limitations for the decay rate of the singular values.
So we are not aware of examples of exponentially ill-posed
linear problems with kernel~$k$ that does not belong to~$C^\infty([0,1]\times [0,1])$.
Below, we shall determine the kernels $k$ and $\tilde k$ of the self-adjoint
companions~$A^{\ast}A$ and~$\widetilde A^{\ast}\widetilde A$
of the compositions~$A:= B^{(H)}\circ J\colon L^{2}(0,1)\to \ell^{2}$
(with the Hausdorff moment operator),
and~$\widetilde A:= B^{(M)} \circ J\colon L^{2}(0,1) \to L^{2}(0,1)$ (with
a multiplication operator), respectively.
For the first composition we have the following proposition, the proof
of which is given in the appendix.
\begin{proposition} \label{pro:kernel}
The kernel $k$ of the Fredholm integral operator $A^*A$ mapping in~$L^2(0,1)$
with $A=B^{(H)}\circ J$ attains the form
\begin{equation} \label{eq:kernel}
k(s,t)= \sum \limits_{j=1}^\infty \frac{(1-s^j)(1-t^j)}{j^2} \qquad (0 \le s,t \le 1).
\end{equation}
\end{proposition}
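\begin{remark}
Expanding the product in~\eqref{eq:kernel} and using $\sum_{j=1}^\infty u^j/j^2=\mathrm{Li}_2(u)$, the kernel admits the closed dilogarithm form
$$
k(s,t)=\frac{\pi^2}{6}-\mathrm{Li}_2(s)-\mathrm{Li}_2(t)+\mathrm{Li}_2(st) \qquad (0 \le s,t \le 1).
$$
In particular, $k(1,t)=k(s,1)=0$, since $\mathrm{Li}_2(1)=\pi^2/6$.
\end{remark}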
The second composition~$\widetilde A:= B^{(M)}\circ J$ with multiplier function $m(t)=t^\theta$ for $\theta>0$ constitutes a linear Volterra
integral operator. However, it can be rewritten as a linear Fredholm integral operator
\begin{equation}\label{eq:tildeA}
[\widetilde A x](s)= \int \limits _0^1 \kappa(s,t) x(t) dt,\quad \mbox{with} \quad \kappa(s,t)=\left\{\begin{array}{cl}
s^\theta & \; (0\leq t\leq s\leq 1)\\
0 & \; (0\leq s < t\leq 1)
\end{array}
\right..
\end{equation}
and we refer to~\cite{Freitag05,HW05} for further
investigations. Taking into account that $\kappa(t,s)$ with switched
variables is the kernel of the adjoint integral operator $\widetilde
A^*$, we have that the kernel $\tilde k$ of the operator $\widetilde A^*
\widetilde A$ mapping in $L^2(0,1)$ is given as
$$
\tilde k(s,t)=\int_0^1 \kappa(\tau,s)\kappa(\tau,t) d\tau.
$$
This yields the following proposition for the second composition case.
\begin{proposition} \label{pro:multip}
The kernel $\tilde k$ of the Fredholm integral operator $\widetilde A^* \widetilde A$ mapping in $L^2(0,1)$ with $\widetilde A$ from \eqref{eq:tildeA} attains the form
\begin{equation}\label{eq:kerneltilde}
\tilde k(s,t)=\int \limits _{\max(s,t)}^1 \tau^{2\theta}d \tau =1-\frac{\max(s,t)^{2 \theta+1}}{2\theta+1} \quad (0 \le s,t \le 1).
\end{equation}
\end{proposition}
We are going to discuss the implications of Lemma~\ref{lem:Allen} on
the decay rates of the singular values of both~$A^{\ast}A$
and~$\widetilde A^{\ast}\widetilde A$. We start with the latter.
The kernel~$\tilde k$ from \eqref{eq:kerneltilde} is continuous and satisfies for all $\theta>0$ the Lipschitz condition $\tilde k \in Lip_1([0,1] \times [0,1])$, which means that there is a constant $L>0$ such that for all $s,\hat s,t,\hat t \in [0,1]$
$$|\tilde k(s,t)-\tilde k(\hat s,\hat t)| \le L\,(|s-\hat s|+|t-\hat t|). $$
The author in~\cite{Reade83} proves that in this case we can guarantee
the decay rate
$$
\sigma_i(\widetilde A^* \widetilde A)= \mathcal{O}\left(i^{-2}\right)
\quad \mbox{as} \quad i \to \infty.
$$
The kernel $\tilde k$ from \eqref{eq:kerneltilde}, containing a maximum term,
is not differentiable at the diagonal of the unit square. If it were
continuously differentiable on $[0,1] \times [0,1]$ then the decay
rate would even be improved to $\sigma_i(\widetilde A^* \widetilde
A)=o\left(i^{-2}\right)$, and we refer to~\cite{Read83}.
Indeed, the exact asymptotics~$\sigma_i(\widetilde A^*
\widetilde A) \asymp i^{-2}$ for all $\theta>0$ was shown in~\cite{HW05} .
We turn to discussing the singular values of the
operator~$A^{\ast}A$ with kernel $k$ from~\eqref{eq:kernel}.
Since the series~$\sum _{j=1}^\infty \frac{(1-s^j)(1-t^j)}{j^2}$ of continuous functions
is uniformly absolutely convergent, the kernel $k$ actually belongs to the
space~$C([0,1]\times [0,1])$ of continuous functions. This allows for
partial differentiation with respect to $s$ as
\begin{equation} \label{eq:kernel1}
k_s(s,t)= \sum \limits_{j=1}^\infty \frac{-s^{j-1}(1-t^j)}{j}.
\end{equation}
Figure \ref{fig:kernels} presents a plot of the kernel $k$ and its first partial derivative $k_s$.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.49\linewidth]{pics/kernel.eps}
\includegraphics[width=0.49\linewidth]{pics/kernelderiv.eps}
\caption{Plot of the kernel $k$ and its derivative $k_s$} \label{fig:kernels}
\end{center}
\end{figure}
{\parindent0em The} right picture shows that the partial derivative has a pole at the right boundary with $s=1$ of the unit square. This pole implies that $k \notin Lip_1([0,1]
\times [0,1])$. On the other hand, $k_s$ is smooth elsewhere
and allows for further partial differentiation with a second partial derivative
\begin{equation} \label{eq:kernel2}
k_{ss}(s,t)= \sum \limits_{j=2}^\infty \frac{-(j-1)s^{j-2}(1-t^j)}{j}\,,
\end{equation}
which also has a pole at $s=1$.
We note that the order of the pole there is growing by one for every higher partial differentiation step
with respect to $s$.
Based on \eqref{eq:kernel2} one can derive, in light of formula~\eqref{eq:H1} from Lemma~\ref{lem:Allen}, that
$$\frac{\partial k(s,t)}{\partial s}= \int \limits_0^s g(\tau,t)\, d\tau + V(t)$$
with $g(\tau,t)=\sum _{j=2}^\infty \frac{-(j-1)\tau^{j-2}(1-t^j)}{j}$
and $V(t)=t-1$. Notice that~$g \notin L^2((0,1) \times (0,1))$, which
prevents the application of Lemma~\ref{lem:Allen} even in the
case~$l=1$. Consequently, we cannot draw inferences on the decay rates of the singular values
by means of kernel smoothness.
\begin{remark}
We have not found assertions in the literature, which handle the situation
of such poles in light of decay rates of singular values.
\end{remark}
In summary, the smoothness of the kernel~$k$ from \eqref{eq:kernel} is
strongly limited. In particular we have $k \notin C^\infty([0,1]\times [0,1])$.
This makes an exponential decay rate of the singular
values~$\sigma_i(A)$ appear rather unlikely. However, at present we
have no analytical approach to check this in more detail.
\section{Bounding the singular values of the composite operator~$B^{(H)}
\circ J$}
\label{sec:bounding-sing-numbers}
Our aim in this section is to improve the upper bound
in~\eqref{eq:rough1} for the singular values $\sigma_i(A)=\sigma_i\lr{B^{(H)}
\circ J}$ of the composite operator
\begin{equation} \label{eq:BH-J-graph}
\begin{CD}
A:\;
@. L^2(0,1) @> J >> L^2(0,1) @> B^{(H)} >> \ell^2
\end{CD}
\end{equation}
We emphasize that this composition constitutes a
Hilbert-Schmidt operator, since its component~$J$ is Hilbert-Schmidt, and our argument
will be based on bounding the Hilbert-Schmidt norm
$$\norm{A}{HS}= \left(\sum \limits_{i=1}^{\infty} \sigma^{2}_{i}(A) \right)^{1/2}.$$
The main result will be the following.
\begin{theorem} \label{thm:improvedrate}
For the composite Hausdorff moment problem~$B^{(H)} \circ
J$ with operators~$J$ from
\eqref{eq:J} and~$B^{(H)}$ from \eqref{eq:Haus}, there exists a positive constant $C$ such that
\begin{equation} \label{eq:rough2}
\sigma_i\lr{B^{(H)}\circ J}\le \frac{C}{i^{3/2}}\quad (i \in \N).
\end{equation}
Consequently, there is some constant~$K>0$ such that
$$
\sigma_{i}(B^{(H)} \circ J)/\sigma_{i}(J) \leq
\frac{K}{i^{1/2}}\quad (i \in \N).
$$
\end{theorem}
For its proof we make the following preliminary considerations.
We recall the definition of the shifted Legendre
polynomials~$L_{j}\;(j=1,2,\dots)$ from~(\ref{eq:legendre}), as well
as~$Q_{n}\colon L^{2}(0,1) \to L^{2}(0,1)$, being the orthogonal
projection onto the $n$-dimensional subspace of the polynomials up to
degree $n-1$.
For the further estimates the next result is important. Here, we denote by~$\{(\sigma_i(A),u_i,v_i)\}_{i=1}^\infty$ the singular system
of the compact operator~$A=B^{(H)} \circ J$.
\begin{proposition} \label{pro:Baumeister}
Let~$Q_{n}$ denote the projections onto $\operatorname{span}\set{L_{1},\dots,L_{n}}$ of the Legendre polynomials up to degree $n-1$, and
let~$P_{n}$ be the singular projection
onto~$\operatorname{span}\set{u_{1},\dots,u_{n}}$ of the first $n$ singular functions of $A$. Then we have
for~$A=B^{(H)} \circ J$ that
$$
\sum_{i=n+1}^{\infty} \sigma^{2}_{i}(A) = \norm{A(I - P_{n})}{HS}^{2} \leq \norm{A(I - Q_{n})}{HS}^{2}.
$$
\end{proposition}
\begin{proof}
We shall use the \emph{additivity} of the singular values,
i.e.,\ it holds true that
$$
\sigma_{n+i+1}(K+L) \leq \sigma_{n+1}(K) + \sigma_{i+1}(L),\quad
\text{for all} \ n \in \N, \ i\geq 1.
$$
In particular we see that
$$
\sigma_{n+i+1}(A) \le \sigma_{n+1}(AQ_{n}) + \sigma_{i+1}(A(I - Q_{n})) = \sigma_{i+1}(A(I - Q_{n})),
$$
because $\sigma_{n+1}(AQ_{n})$ vanishes, as ${\rm rank}(AQ_{n})\leq n$.
Consequently we can bound
\begin{align*}
\sum_{i=n+1}^{\infty} \sigma^{2}_{i}(A) &= \sum_{i=0}^{\infty}
\sigma^{2}_{n+i+1}(A) \leq
\sum_{i=1}^{\infty}\sigma^{2}_{i}(A(I
- Q_{n}))\\
&= \norm{A(I - Q_{n})}{HS}^{2},
\end{align*}
with equality for $Q_n$ being the singular projections~$P_{n}$.
\end{proof}
Finally we mention the following technical result, which is
well-known. For the sake of completeness we add a brief proof.
\begin{lemma}\label{lem:sj-kappa}
Let~$s_{i} \;(i \in \N)$ be non-increasing, and let~$\kappa>0$. Suppose that there is a
constant~$C_{1}<\infty$ such that~$\sum_{i=n+1}^{\infty}
s_{i}^{2}\leq C_{1} n^{-2\kappa}$ for $n=1,2,\dots$. Then there is a
constant~$C_{2}$ such that~$s_{i}^{2} \leq C_{2} i^{-(1+2\kappa)}$ for $i=1,2,\dots.$
\end{lemma}
\begin{proof}
We can estimate as
$$n\, s_{2n}^2 \le \sum_{i=n+1}^{2n}
s_{i}^{2} \leq C_{1} n^{-2\kappa}, $$
which gives $s_{2n}^2 \le C_{1} n^{-(1+2\kappa)}$ and proves the lemma.
\end{proof}
Let us introduce the normalized functions
$$
h_{i}(s):= \sqrt{2i+1}s^{i}\,\in L^{2}(0,1) \quad (i=0,1,2,\dots).
$$
\begin{lemma}\label{lem:ji}
For each~$i\geq 1,\ j\geq 2$ we have that
$$
\scalar{A L_{j}}{e_{i}}_{\ell^2} = -\frac{1}{i\sqrt{2i+1}}\scalar{h_{i}}{L_{j}}.
$$
\end{lemma}
\begin{proof} We have $\scalar{A
L_{j}}{e_{i}}_{\ell^2}=\scalar{B^{(H)}(J \,L_{j})}{e_{i}}_{\ell^2}
$.
Using the formula (\ref{eq:BJ}) with~$x:= L_{j}\;(j=2,3,\dots)$, and since~$L_{j}
\perp 1$ for~$j\geq 2$, we see that
$$
\scalar{A L_{j}}{e_{i}}_{\ell^2}
=
-\frac{1}{i}\left[\int_{0}^{1}s^i\, L_{j}(s) \; ds \right] =
-\frac{1}{i} \scalar{s^{i}}{L_{j}}_{L^2(0,1)} = -\frac{1}{i\sqrt{2i+1}} \scalar{h_{i}}{L_{j}}_{L^2(0,1)}.
$$
This completes the proof.
\end{proof}
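For a quick plausibility check of Lemma~\ref{lem:ji} one may compare both sides numerically; the following Python sketch (our illustration) does so on a midpoint grid for the hypothetical choice $i=4$, $j=5$.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import legval

N = 20000
s = (np.arange(N) + 0.5) / N

def L(j):                          # shifted normalized Legendre (up to sign)
    c = np.zeros(j); c[j - 1] = 1.0
    return np.sqrt(2 * j - 1) * legval(2 * s - 1, c)

i, j = 4, 5
Lj = L(j)
JLj = np.cumsum(Lj) / N            # (J L_j)(s) by the midpoint rule
lhs = np.mean(s ** (i - 1) * JLj)  # i-th Hausdorff moment of J L_j
rhs = -np.mean(s ** i * Lj) / i
print(lhs, rhs)                    # the two numbers agree closely
\end{verbatim}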
\begin{proof}
[Proof of Theorem~\ref{thm:improvedrate}]
Since the system $\{L_j\}_{j=1}^\infty$ of shifted Legendre
polynomials is an orthonormal basis in $L^2(0,1)$, we have by virtue of~\cite[Thm.~15.5.5]{Pie78} that
\begin{equation}
\label{eq:HS-expand}
\norm{A (I- Q_{n})}{HS}^{2} = \sum_{j=1}^{\infty} \norm{A (I-
Q_{n})L_{j}}{\ell^{2}}^{2}
= \sum_{j=n+1}^{\infty} \norm{A L_{j}}{\ell^{2}}^{2},
\end{equation}
and, using Lemma~\ref{lem:ji} together with the observation that $\scalar{h_{i}}{L_{j}}_{L^{2}(0,1)}=0$ for $i<j-1$ (the polynomial $h_{i}$ has degree $i$, so only indices $i\geq n$ contribute), we find that
\begin{align} \label{eq:hope}
\norm{A (I- Q_{n})}{HS}^{2} &=
\sum_{i=n}^{\infty}\frac{1}{i^{2}(2i+1)}
\sum_{j=n+1}^{\infty}\abs{\scalar{h_{i}}{L_{j}}_{L^{2}(0,1)}}^{2}\\
&= \sum_{i=n}^{\infty}\frac{1}{i^{2}(2i+1)}\norm{(I - Q_{n}) h_{i}}{L^{2}(0,1)}^{2}.\label{it:hope}
\end{align}
The norm square within the above sum is less than or equal to one,
so that we arrive at
$$
\norm{A (I- Q_{n})}{HS}^{2} \leq
\sum_{i=n}^{\infty}\frac{1}{i^{2}(2i+1)} \leq \frac 1 2 \sum_{i=n}^{\infty}\frac{1}{i^{3}}.
$$
The sum on the right is known to be minus one half of the second derivative
$\psi^{(2)}(n)$ of the digamma function, see
\cite[(6.4.10)]{Handbook64}. Thus we have
\begin{equation} \label{eq:psi1}
\norm{A (I- Q_{n})}{HS}^{2} \le \frac{-\psi^{(2)}(n)}{4}.
\end{equation}
Moreover, from the series expansion of the digamma function, see~\cite[(6.4.13)]{Handbook64},
we see that~$\lim \limits _{n \to \infty} n^2 \,\psi^{(2)}(n)=-1$,
which implies
\begin{equation} \label{eq:psi2}
\norm{A (I- Q_{n})}{HS}^{2} \le \frac{1}{3.999\,n^2}
\end{equation}
for sufficiently large $n$.
Finally, applying Proposition~\ref{pro:Baumeister} and
Lemma~\ref{lem:sj-kappa} (with~$\kappa=1$ and
$s_i=\sigma_i(A)\;(i \in \mathbb{N})$)
we see that~$\sigma_i(A) \le \frac{C}{i^{1.5}}$ for some constant
$C>0$. This completes the proof.
\end{proof}
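A rough numerical companion to Theorem~\ref{thm:improvedrate} (our illustration; discretization and series truncation limit the accuracy for larger $i$) evaluates the kernel~\eqref{eq:kernel} in its dilogarithm form, diagonalizes the discretized operator $A^{\ast}A$, and checks that $i^{3}\lambda_{i}$ with $\lambda_{i}\approx\sigma_{i}^{2}(A)$ stays bounded.
\begin{verbatim}
import numpy as np
from scipy.special import spence        # spence(x) = Li_2(1 - x), x >= 0

def Li2(u):
    return spence(1.0 - u)

N = 1200
s = (np.arange(N) + 0.5) / N
S, T = np.meshgrid(s, s, indexing="ij")
K = np.pi ** 2 / 6 - Li2(S) - Li2(T) + Li2(S * T)   # kernel of A*A
lam = np.linalg.eigvalsh(K / N)[::-1]   # eigenvalues, descending
for i in [1, 5, 20, 100]:
    print(i, i ** 3 * lam[i - 1])       # bounded, per the theorem
\end{verbatim}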
\section{Discussion}
\label{sec:discussion}
We extend the previous discussions in a few aspects. As it is seen
from Corollary~\ref{cor:HausdorffJ} and Theorem~\ref{thm:improvedrate}
there is a gap for the composition~$B^{(H)}\circ J$ between the
obtained decay rate of the order~$i^{-3/2}$ of the singular values and the
available lower bound of the order~$\exp(-\underline{C}\,i)$ as $i\to\infty$. We
shall dwell on this further, and we highlight the main points that are
responsible for the lower and upper bounds, respectively.
The overall results are entirely based on considering the Legendre
polynomials~$L_j$ as means for approximation.
Clearly, these play a prominent role in our handling of compositions that contain
the operator~$B^{(H)}$.
In particular, the normalized polynomials $L_j$ constitute an orthonormal basis in~$L^{2}(0,1)$,
and the upper bounds from Lemma~\ref{lem:AP-Legendre} show that these are
suited for approximation. However, as a consequence of using the
Legendre polynomials we arrive at the $n$-sections of the Hilbert
matrix~$H_{n}$, see Lemma~\ref{lem:PAQ}. As emphasized in the proof of
Theorem~\ref{thm:Hausdorff}, the condition numbers of the
Hilbert matrix~$H_{n}$ are of the order~$\exp(4n)$, and this in turn
yields the lower bound, after applying
Theorem~\ref{thm:general}. Despite the fact that this general
result may not be sharp for non-commuting operators in the composition, we may argue that
using $n$-sections~$H_{n}$ is not well suited for obtaining sharp
lower bounds. So, it may well be that the lower bounds could be
improved by using other orthonormal bases than the Legendre
polynomials.
The obtained upper bound is based on the approximation of~$B^{(H)}\circ
J$ by Legendre polynomials in the Hilbert-Schmidt norm, and we refer to the inequality
\eqref{eq:psi2}. There are indications in our analysis, for example in the context of \eqref{it:hope}, that this
bound cannot be improved, but what happens when other bases are used?
Another aspect may be interesting. While we established an improved
rate for the composition~$B^{(H)}\circ J$, this is not possible for
the composition~$B^{(M)}\circ J$, see the discussion in
Section~\ref{sec:intro}. In the light of the spectral theorem, and we
omit details, the
operator~$B^{(H)}$ is orthogonally equivalent to a multiplication
operator $M_{f}$ mapping in $L^2(0,1)$, with a multiplier function~$f$ possessing zero as an
accumulation point of its range; that is, there are isometries~$U\colon \ell^{2} \to L^{2}(0,1)$
and~$V\colon L^{2}(0,1) \to L^{2}(0,1)$
for which we have $B^{(H)} = U^{\ast} M_{f} V$. This implies that
$$
B^{(H)} \circ J = U^{\ast} \circ M_{f}\circ V \circ J.
$$
Clearly we have that~$\sigma_{i}( U^{\ast} \circ M_{f}\circ V \circ
J) = \sigma_{i}(M_{f}\circ V \circ J)$, which looks very similar to
the problem of the composition $B^{(M)}\circ J$, where
$
\sigma_{i}(B^{(M)} \circ V \circ J)\asymp \sigma_{i}(B^{(M)}\circ J),
$
but with the intermediate
isometry~$V$. Therefore, we may search for isometries~$V\colon
L^{2}(0,1) \to L^{2}(0,1)$ such that we arrive at $$
\sigma_{i}(B^{(H)} \circ V \circ J)\asymp \sigma_{i}(B^{(H)}
\circ J).
$$
Clearly, this holds true for the identity, and this does
not hold true for~$V$ from above connected with the Hilbert matrix. Because isometries map orthonormal
bases onto each other, we are again faced with the problem of which
orthonormal basis is best suited as a means of
approximation in the composition~$B^{(H)}\circ J$. Thus, the results
presented here are only a first step for better understanding the
problem of approximating a composition of a compact mapping followed by
a non-compact one.
\section*{Acknowledgment}
The authors express their deep gratitude to Daniel Gerth (TU Chemnitz,
Germany) for fruitful discussions and that he kindly provided Figure~\ref{fig:kernels}.
We also express our thanks to Robert Plato (Univ.~of Siegen, Germany) for his hint on studying the Fredholm integral operator kernel
of $A^*A$ in $L^2$, which gives additional motivation. Bernd Hofmann is supported by the German Science Foundation (DFG) under the grant~HO~1454/13-1 (Project No.~453804957).
\section{Introduction}
\label{sec:intro}
Correlated Dirac semimetals are one of the most fundamental systems not only in
condensed matter physics but also in high energy physics.
They exhibit semimetal-insulator transitions at some critical strength of interactions $V=V_c>0$
at zero temperature,
and magnetic/charge ordered states are stabilized for stronger interactions $V>V_c$
~\cite{Sorella1992,Assaad2013,Wang2014,Wang2016,Li_PRB2015,Li2015,
Hesselmann2016,Huffman2017,Hohenadler2014,Toldin2015,Otsuka2016,Otsuka2018,su4_2018,
Corboz2018,Braun2012,Rosenstein1993,Wetterich2001,Herbut2006,Herbut2009,Herbut2014,Ihrig2018,
DiracQCP}.
These ordered phases correspond to the dynamically massive states with broken chiral symmetry in high energy physics.
Interestingly, criticality of the quantum phase transitions are qualitatively different from those of
conventional magnetic/charge orders in purely bosonic systems,
which is dubbed as fermionic criticality.
In these criticalities, bosonic order parameter fluctuations are intimately coupled with gapless Dirac fermions,
which results in non-trivial quantum critical behaviors depending on fermionic degrees of freedom
in addition to the order parameter symmetry and dimensionality of the system
~\cite{longrange}.
The fermionic criticality has been discussed extensively by various theoretical methods such as
lattice model simulations
~\cite{Sorella1992,Assaad2013,Wang2014,Wang2016,Li_PRB2015,Li2015,Hesselmann2016,Huffman2017,
Hohenadler2014,Toldin2015,Otsuka2016,Otsuka2018,su4_2018,
Corboz2018} and
renormalization group calculations
~\cite{Braun2012,Rosenstein1993,Wetterich2001,Herbut2006,Herbut2009,Herbut2014,Ihrig2018},
and now the basic understanding of these systems has been well established.
Correlation effects in a Dirac system become even more significant in presence of an applied
magnetic field.
It is known that an infinitesimally small magnetic field induces
a magnetic/charge order for any non-zero interaction $V$, which is called the ``magnetic catalysis"
~\cite{Shovkovy2013book,Miransky2015review,RMP2016,Fukushima2019review,
Gusynin1996,Gusynin1996NPB,Fukushima2012,Scherer2012,QCD1,QCD2,QCD3,QCD4,
graphite2001,Gorbar2002,Semenoff1998,
Roy2008,Roy2011,Roy2014,Boyda2014,DeTar2016,DeTar2017}.
A uniform magnetic field $B$ will effectively reduce spatial dimensionality $d$ of the system via
the Landau quantization, $d\rightarrow d-2$.
Therefore, the system becomes susceptible to formation of a bound state by interactions.
For example in the $(2+1)$-dimensional Gross-Neveu-Yukawa type models, it is shown that in
the limit of the large number of fermion flavors $N_f$
corresponding to
a mean field approximation,
the order parameter behaves as $M(B)\sim B$ for weak interactions $V\ll V_c$,
$M(B)\sim \sqrt{B}$ near the critical point $V=V_c$, and $M(B)-M(0)\sim B^2$ for strong interactions
$V\gg V_c$.
Although the magnetic catalysis was first studied in high energy physics,
it was also discussed in condensed matter physics,
especially for graphene and related materials~\cite{graphite2001,Gorbar2002,Semenoff1998,
Roy2008,Roy2011,Roy2014,Boyda2014,DeTar2016,DeTar2017}.
Recently, there are a variety of candidate Dirac materials with strong electron correlations
~\cite{Hirata2017,CaIrO2019,Sow2017,KondoSemimetal2017,synthetic2011}
which could provide a platform for an experimental realization of the magnetic catalysis,
and a detailed understanding of this phenomenon is an important issue.
However, most of the previous theoretical studies for systems near quantum criticality
are based on perturbative approximations
~\cite{Shovkovy2013book,Miransky2015review,RMP2016,Fukushima2019review,
Gusynin1996,Gusynin1996NPB,Fukushima2012,Scherer2012,QCD1,QCD2,QCD3,QCD4,
graphite2001,Gorbar2002,Semenoff1998,
Roy2008,Roy2011,Roy2014,Boyda2014,DeTar2016,DeTar2017,MCmagcata},
and the true critical behaviors beyond the large $N_f$ limit are
rather poorly explored.
This is in stark contrast to the correlated Dirac systems without a magnetic field, for which
there are extensive numerical simulations in addition to the field theoretical calculations,
and critical behaviors have been well established
~\cite{Sorella1992,Assaad2013,Wang2014,Wang2016,Li_PRB2015,Li2015,Hesselmann2016,Huffman2017,
Hohenadler2014,Toldin2015,Otsuka2016,Otsuka2018,su4_2018,Corboz2018,
Braun2012,Rosenstein1993,Wetterich2001,Herbut2006,Herbut2009,Herbut2014,Ihrig2018,DiracQCP}.
Therefore, further theoretical developments are required for clarifying the genuine nature of
the quantum critical magnetic catalysis.
In this work, we study quantum criticality of
the field induced charge density wave (CDW) order in spinless Dirac fermions
on the two-dimensional $\pi$-flux square lattice, which is one of the simplest realizations of the magnetic catalysis.
We use a non-perturbative numerical technique,
infinite density matrix renormalization group (iDMRG) which can directly describe
spontaneous ${\mathbb Z}_2$ symmetry breaking of the CDW order
~\cite{White1992,DMRG_review1,DMRG_review2,DMRG_review3,TenPy1,TenPy2}.
It is found that the order parameter exhibits an anomalous critical behavior,
which characterizes the fermionic criticality as clarified by a scaling argument with respect to the magnetic length.
Based on these observations, we establish
a global phase diagram for the ground state near the quantum critical point.
\section{Model}
We consider spinless fermions on a $\pi$-flux square lattice at half-filling under a uniform magnetic field.
There are two Dirac cones in the Brillouin zone and each Dirac fermion has two (sublattice) components,
which corresponds to a case where the total number of Dirac fermion components is four,
similarly to the honeycomb lattice model
~\cite{Wang2014,Wang2016,Li_PRB2015,Li2015,Hesselmann2016,Huffman2017}.
The Hamiltonian is given by
\begin{align}
H=-\sum_{\langle i,j\rangle}t_{ij}c^{\dagger}_ic_j+V\sum_{\langle i,j\rangle}n_in_j,
\label{eq:H}
\end{align}
where $\langle i,j\rangle$ is a pair of nearest neighbor sites and the energy unit is $t=1$.
The hopping is $t_{ij}=te^{i\pi y_i}\exp (iA_{ij})$
along the $x$-direction on the $y=y_i$ bond and $t_{ij}=t\exp (iA_{ij})$ along the $y$-direction.
The vector potential is given in the string gauge~\cite{Hatsugai1999} with the period $L_x'\times L_y$
where $L_x'$ is
the superlattice unit period used in the iDMRG calculations for the system size $L_x\times L_y=\infty\times L_y$.
Typically, we use $L_x'=20$ for $L_y=6$ and $L_x'=10$ for $L_y=10$.
See also Appendix~\ref{app:iDMRG}.
$A_{ij}=0$ corresponds to the conventional $\pi$-flux square lattice without an applied field,
while $A_{ij}\neq0$ describes
an applied magnetic field for a plaquette $p$, $B_p=\sum_{\langle ij\rangle\in p}A_{ij}$.
The magnetic field is spatially uniform and an integer multiple
of a unit value $B=n\times \delta B \quad (n=1,2,\cdots, L_x'L_y)$ allowed by the superlattice size, where
$\delta B=2\pi/L_x'L_y$.
The lattice constant $a$ as a length unit and the electric charge $e$ have been set as $a=1,e=1$,
and the magnetic field is measured in the unit of $B_0=2\pi$.
The $V$-term
is a repulsive nearest neighbor interaction leading to the CDW state, and the quantum phase transition at $B=0$
takes place at $V=V_c\simeq 1.30t$ according to the quantum Monte Carlo calculations for
the bulk two dimensional system, where
the criticality belongs to the $(2+1)$-dimensional chiral Ising universality class~\cite{Wang2014,Wang2016,Li_PRB2015,Li2015,Hesselmann2016,Huffman2017}.
On the other hand, our cylinder system is anisotropic and the CDW quantum phase transition
at $B=0$ is simply (1+1)-dimensional Ising transition and critical interaction strength depends
on the system size $L_y$, which may be regarded as a finite size effect~\cite{Tada2019}.
However, the system can be essentially two-dimensional in space under a magnetic field
when the magnetic length $l_B=1/\sqrt{B}$ becomes shorter than the system size $L_y$.
We will use this property to discuss the $(2+1)$-dimensional criticality.
Note that the critical interaction strength $V_c\simeq 1.30t$ will be confirmed later within the present framework.
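A minimal free-fermion ($V=0$) sketch of the single-particle problem is given below (our illustration; it uses a simple Landau gauge on an open cluster rather than the string gauge employed in the actual iDMRG setup, and the flux value is an arbitrary example). It constructs the $\pi$-flux hopping matrix with an extra uniform flux $\phi$ per plaquette and displays the Landau-level-like spectrum near zero energy.
\begin{verbatim}
import numpy as np

Lx, Ly, phi = 24, 24, 1.0 / 48          # phi in units of B_0 = 2*pi
idx = lambda x, y: x * Ly + y
H = np.zeros((Lx * Ly, Lx * Ly), complex)
for x in range(Lx):
    for y in range(Ly):
        if x + 1 < Lx:                   # x-bond: pi-flux + Landau gauge
            H[idx(x + 1, y), idx(x, y)] = -np.exp(1j * np.pi * y
                                                  + 2j * np.pi * phi * y)
        if y + 1 < Ly:                   # y-bond
            H[idx(x, y + 1), idx(x, y)] = -1.0
H += H.conj().T
E = np.linalg.eigvalsh(H)
print(E[Lx * Ly // 2 - 3: Lx * Ly // 2 + 3])   # levels near E = 0
\end{verbatim}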
In the following, we focus on the CDW order parameter,
\begin{align}
M=\left| \frac{1}{L_x'L_y}\sum_i(-1)^{|i|}n_i \right|,
\end{align}
where the summation runs over the superlattice unit cell.
In the iDMRG calculation, we introduce a finite bond dimension $\chi$ up to $\chi=1600$ as a cut-off
to approximate the ground state wavefunction in the form of a matrix product state,
and we can obtain the true ground state by a careful extrapolation to
$\chi\rightarrow\infty$ from the finite $\chi$ results
~\cite{White1992,DMRG_review1,DMRG_review2,DMRG_review3,TenPy1,TenPy2}
(see also Appendix~\ref{app:iDMRG}).
Generally, iDMRG with finite $\chi$ gives a good approximation especially when the system considered
is gapful.
As we will show, an extrapolation to $\chi\rightarrow\infty$ works well,
because our system has a gap in the presence of
a non-zero $B$ due to the magnetic catalysis of the broken discrete ${\mathbb Z}_2$ symmetry,
for which there is no gapless Nambu-Goldstone mode.
For a comparison,
we also discuss a two-leg ladder system with $L_y=2$ in Appendix~\ref{app:a}.
\section{Away from quantum critical point}
\label{sec:nonQCP}
Before discussing quantum criticality,
we investigate the magnetic catalysis when the system is away from the critical point.
Firstly, we consider a weak interaction $V=0.50t<V_c=1.30t$ for which the system at $B=0$ is a Dirac semimetal
renormalized by the interaction.
As exemplified in Fig.~\ref{fig:extrap}, dependence of $M(B)$ on the bond dimension $\chi$ used in the calculation
is negligibly small for $L_y=6$, and it can be safely extrapolated to $\chi\rightarrow\infty$ even for
$L_y=10$.
Standard deviations of the extrapolations are less than 1\% and within the symbols.
Such an extrapolation can be done also for other values of $V$ as mentioned before,
and all results shown in this study are extrapolated ones.
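The extrapolation itself is a standard least-squares fit; the following Python sketch (with mock numbers in place of actual iDMRG data) illustrates the power-law form used in Fig.~\ref{fig:extrap}.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

chi = np.array([200, 400, 800, 1600], dtype=float)  # bond dimensions
M = np.array([0.1482, 0.1461, 0.1450, 0.1444])      # hypothetical M(chi)

f = lambda x, M_inf, a, b: M_inf + a * x ** (-b)    # power-law ansatz
popt, pcov = curve_fit(f, chi, M, p0=(0.14, 1.0, 1.0))
print(popt[0], np.sqrt(pcov[0, 0]))                 # M(chi->inf) +/- std
\end{verbatim}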
\begin{figure}[tbh]
\includegraphics[width=5.5cm]{V05_extrap-crop.pdf}
\caption{Extrapolation of the CDW order parameter $M(B)$ for the $\chi\rightarrow\infty$ limit
at $V=0.50t$.
The blue (red) symbols are for $L_y=6 (L_y=10)$ and the curves are power law fittings.
Each curve corresponds to a magnetic field in the range $0\leq B\leq 0.06B_0$.
}
\label{fig:extrap}
\end{figure}
In Fig.~\ref{fig:MV052} (a), we show the CDW order parameter $M$ extrapolated to $\chi\rightarrow\infty$
for the system sizes $L_y=6$ and $L_y=10$
at $V=0.50t$.
The calculated results almost converge for $L_y=6,10$ and are independent of the system size,
except for $B=0$ where there is a finite size effect due to $l_B=\infty$,
although there is some accidental deviation around $B\simeq0.1B_0$.
Therefore, these results give the CDW order parameter essentially in the thermodynamic limit $L_y\rightarrow\infty$.
In order to understand impacts of quantum fluctuations,
we also performed a mean field calculation for a comparison~\cite{MF}.
The critical interaction within the mean field approximation is found to be $V_c=0.78t$
and the interaction $V=0.30t$ corresponds to
the same coupling strength in terms of the normalized interaction $g=(V-V_c)/V_c=-0.62$.
The iDMRG results of $M$ (blue symbols) are larger than the corresponding mean field results (red symbols),
$M>M_\textrm{MF}$,
which suggests that quantum fluctuations enhance the order parameter even for the present weak $V$.
It is noted that the order parameter behaves roughly as $M(B)\sim B$ as seen for small $B$,
which is consistent with the large $N_f$ field theory
~\cite{Shovkovy2013book,Miransky2015review,RMP2016,Fukushima2019review,
Gusynin1996,Gusynin1996NPB,Fukushima2012,Scherer2012}.
\begin{figure}[tbh]
\includegraphics[width=5.5cm]{MV052_-crop.pdf}
\caption{
(a) The CDW order parameter $M$ at a weak coupling.
The blue symbols are the iDMRG results at $V=0.50t<V_c$ for
$L_y=6$ (squares) and $L_y=10$ (circles),
while the red symbols are the mean field results ($V=0.30t$) for the same system sizes.
(b) $M$ at a strong coupling $V=2.0t>V_c$ calculated by iDMRG (blue) and
$V=1.20t$ by the mean field approximation (red).
The interactions for iDMRG and the mean field approximation correspond to
the same value of the normalized coupling constant $g$.
}
\label{fig:MV052}
\end{figure}
Similarly to the weak interaction case, the CDW order parameter $M$ calculated by iDMRG (blue symbols)
is enhanced at
a strong interaction $V=2.0t>V_c$ compared to the mean field result $M_\textrm{MF}$ (red symbols)
at the corresponding interaction
$V=1.20t$ (or equivalently $g= 0.54$) as shown in Fig.~\ref{fig:MV052} (b).
However, this is governed by the $B=0$ values and increase of $M(B)$ by the magnetic field
is roughly comparable to that of $M_\textrm{MF}(B)$.
The result that $M>M_\textrm{MF}$ already at $B=0$ is because they behave as $M(V,B=0)\sim g^{\beta}$ with
$\beta\simeq 0.5\sim 0.6<1$~\cite{Wang2014,Wang2016,Li_PRB2015,Li2015,Hesselmann2016,Huffman2017}
while $M_\textrm{MF}(V,B=0)\sim g^{\beta_\textrm{MF}}$ with $\beta_\textrm{MF}=1$ near the quantum critical point,
and these critical behaviors essentially determine magnitudes of
the CDW order parameters away from the critical points.
For $B\neq 0$, the order parameter behaves roughly as $M(B)-M(0)\sim B^2$ in agreement with the
large $N_f$ field theory~\cite{Shovkovy2013book,Miransky2015review,RMP2016,Fukushima2019review,
Gusynin1996,Gusynin1996NPB,Fukushima2012,Scherer2012}.
\section{Near quantum critical point}
In this section, we discuss quantum criticality of the magnetic catalysis based on a variant of
finite size scaling ansatzes.
Then, we establish a global phase diagram around the quantum critical point
in the interaction-magnetic field plane,
in close analogy with
the well-known finite temperature phase diagram near a quantum critical point.
\subsection{Scaling argument}
The enhancement of $M(B)$ by the quantum fluctuations can be
even more pronounced
near the quantum critical point.
\begin{figure}[tbh]
\includegraphics[width=5.5cm]{MV13_-crop.pdf}
\caption{
(a) The CDW order parameter $M$ at the quantum critical point $V=V_c=1.30t$ calculated by iDMRG
together with the mean field result at $V=0.78t$ corresponding to $g=0$.
Definitions of the symbols are the same as in Fig.~\ref{fig:MV052}.
(b) $M$ in the log-log plot. The black solid line is the power law fitting $M\sim B^{0.355}$, while
the black dashed line is the large $N_f$ result $M\sim\sqrt{B}$ shown for the eyes.
}
\label{fig:MV13}
\end{figure}
Figure~\ref{fig:MV13} shows the CDW order parameter at $V=V_c=1.30t$ (blue symbols)
together with the mean field result
for $V=0.78t$ (red symbols), corresponding to $g= 0$.
Clearly, the iDMRG result is significantly larger than the mean field result,
and the enhancement is much stronger than that in the weak interaction case.
There are some deviations between the results for $L_y=6$ and $L_y=10$ for small magnetic fields,
$B\lesssim 0.01B_0$, due to a long magnetic length $l_B$,
and the CDW order gets more strongly stabilized when the system size $L_y$ increases from $L_y=6$ to $L_y=10$.
This should be a general tendency
since the CDW phase at $B=0$ extends to a smaller interaction region when the system size
increases~\cite{Tada2019}.
From this observation, we can discuss scaling behaviors of the CDW order parameter
in the thermodynamic limit as
a function of $B$ near the quantum critical point.
Indeed, as shown in Fig.~\ref{fig:MV13} (b), the calculated $M$ except for the smallest values of $B$ converge
for different system sizes $L_y=6, 10$, and $M(B)$ exhibits a power law behavior
for $0.02B_0\lesssim B\lesssim 0.1B_0$.
The finite size effects are negligible in this range of the magnetic field,
and furthermore the scaling behavior would hold for smaller magnetic fields down to $B=0$
in a thermodynamic system $L_y\rightarrow\infty$, since $M(L_y=10)$ shows the scaling behavior
in a wider region of $B$ than $M(L_y=6)$ does.
If we focus on $0.02B_0\lesssim B\lesssim 0.1B_0$ in Fig.~\ref{fig:MV13},
we obtain the anomalous scaling behavior
$M(B)\sim B^{0.355(6)}$ by power law fittings for different sets of data points.
This is qualitatively different from the mean field (or equivalently large $N_f$ limit) result $M_\textrm{MF}\sim \sqrt{B}$,
which eventually leads to the strong enhancement of $M(B)$ compared to $M_\textrm{MF}(B)$.
The calculated magnetic field dependence of the CDW order parameter
near $V= V_c$ implies a scaling relation characteristic of the quantum criticality.
Here, we propose a scaling ansatz for the leading singular part of
the ground state energy density of a thermodynamically large $(2+1)$-dimensional system,
\begin{align}
\varepsilon_\textrm{sing}(g,h,l_B^{-1})=b^{-D}\varepsilon_\textrm{sing}(b^{y_g}g,b^{y_h}h,bl_B^{-1}),
\label{eq:ansatz}
\end{align}
where $D=2+z=2+1=3$ with $z=1$ being the dynamical critical exponent and $h$ is the conjugate field
to the CDW order parameter $M$.
The exponents $y_{g,h}$ are corresponding scaling dimensions, and
the scaling dimension of the inverse magnetic length $l_B^{-1}$ is assumed to be one, as will be confirmed later.
For a thermodynamic system, the magnetic length $l_B$ will play a role of
a characteristic length scale similarly to a finite system size $L$.
Then, a standard argument similar to that for a finite size system at $B=0$ applies, leading to
\begin{align}
M(g=0,l_B^{-1})\sim (l_B^{-1})^{\beta/\nu}\sim B^{\beta/2\nu},
\label{eq:M0}
\end{align}
where $\beta$ and $\nu$ are the critical exponents at $B=0$ for the order parameter
$M(g,l_B^{-1}=0)\sim g^{\beta}$ and the correlation length $\xi(g,l_B^{-1}=0)\sim g^{-\nu}$.
One sees that this coincides with the familiar finite size scaling if we replace $l_B$ with
a system size $L$~\cite{Cardy}.
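For completeness, the intermediate step can be spelled out explicitly. Differentiating Eq.~\eqref{eq:ansatz} with respect to $h$ gives
\begin{align}
M(g,0,l_B^{-1})=b^{y_h-D}M(b^{y_g}g,0,bl_B^{-1}),
\end{align}
and choosing $b=l_B$ at $g=0$ yields $M(0,0,l_B^{-1})=l_B^{-(D-y_h)}M(0,0,1)$; the standard relations $y_g=1/\nu$ and $D-y_h=\beta/\nu$ then reproduce Eq.~\eqref{eq:M0}.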
The critical exponents of the CDW quantum phase transition in $(2+1)$-dimensions
are $\beta=\nu=1$ in the mean field approximation,
and the resulting $M\sim B^{0.5}$ is consistent with our mean field numerical calculations
~\cite{hyperscaling}.
The true
critical exponents for the present $(2+1)$-dimensional chiral Ising universality class
with four Dirac fermion components
have been obtained by the quantum Monte Carlo simulations at $B=0$,
and are given by $(\beta=0.53, \nu=0.80)$~\cite{Wang2014,Wang2016},
which was further supported by the infinite projected entangled pair state calculation~\cite{Corboz2018}.
Other quantum Monte Carlo studies with different schemes and system sizes
give $(\beta=0.63, \nu=0.78)$~\cite{Li_PRB2015,Li2015},
$(\beta=0.47,\nu=0.74)$~\cite{Hesselmann2016}, and $(\beta=0.67,\nu=0.88)$~\cite{Huffman2017}.
These exponents lead to $\beta/2\nu=0.33, 0.40, 0.32, 0.38$ respectively,
and the scaling behavior of $M(B)$ found in our study
falls into this range and is consistent with them.
The homogeneity relation Eq.~\eqref{eq:ansatz} and the critical exponent can be further confirmed
by performing a data collapse.
According to Eq.~\eqref{eq:ansatz},
the CDW order parameter for general $g$ is expected to behave as
\begin{align}
M(g,l_B^{-1})= l_B^{-\beta/\nu}\Phi(gl_B^{1/\nu}),
\end{align}
where $\Phi(\cdot)$ is a scaling function.
This is a variant of the finite size scaling similarly to Eq.~\eqref{eq:M0}.
When performing a data collapse,
we use the results for $0.02B_0\lesssim B\lesssim 0.1B_0$
so that finite size effects are negligible.
As shown in Fig.~\ref{fig:scaling},
the calculated data collapse well onto a single curve and the critical exponents are evaluated as
$\beta=0.54(3),\nu=0.80(2)$ with $V_c=1.30(2)t$.
This gives $\beta/2\nu= 0.34(2)$, which
is consistent with $\beta/2\nu=0.36$ obtained from $M(V=V_c,B)$ at the quantum critical point (Fig.~\ref{fig:MV13}).
Our critical exponents are compatible with those obtained previously by the numerical calculations as mentioned above
and roughly with those by the field theoretic calculations
~\cite{Sorella1992,Assaad2013,Wang2014,Wang2016,Li_PRB2015,Li2015,
Hesselmann2016,Huffman2017,Hohenadler2014,Toldin2015,Otsuka2016,Otsuka2018,su4_2018,
Corboz2018,Braun2012,Rosenstein1993,Wetterich2001,Herbut2006,Herbut2009,Herbut2014,Ihrig2018,DiracQCP}.
Our numerical calculations for the $(2+1)$-dimensional criticality
are limited to rather small magnetic lengths $l_B$ bounded by the system size $L_y$,
and we expect that more accurate evaluations of the critical exponents would be possible
for larger $L_y$ with controlled extrapolations to $\chi\rightarrow \infty$.
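As an illustration of how such a collapse can be quantified (a minimal sketch; the function and data layout are hypothetical, not the actual analysis script used here), one rescales each data point according to the relation above and minimizes the spread of the rescaled points:
\begin{verbatim}
import numpy as np

def collapse_spread(params, data):
    """Spread of the rescaled data; zero for a perfect collapse onto a
    single scaling curve Phi. data is an array of (V, B, M) rows."""
    beta, nu, Vc = params
    V, B, M = data.T
    g = (V - Vc) / Vc
    x = g * B**(-1.0 / (2.0 * nu))    # g * l_B^(1/nu),  l_B = 1/sqrt(B)
    y = M * B**(-beta / (2.0 * nu))   # M * l_B^(beta/nu)
    order = np.argsort(x)
    # crude collapse measure: summed jumps between neighboring points
    return np.sum(np.diff(y[order])**2)

# The exponents are then estimated by minimizing collapse_spread over
# (beta, nu, Vc), e.g. with scipy.optimize.minimize.
\end{verbatim}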
\begin{figure}[tbh]
\includegraphics[width=6.5cm]{scaling-crop.pdf}
\caption{Scaling plot of the CDW order parameter $M(V,B)$ in terms of $g=(V-V_c)/V_c$ and $l_B=1/\sqrt{B}$.
The blue squares are for $L_y=6$ and red circles for $L_y=10$.
}
\label{fig:scaling}
\end{figure}
The successful evaluation of the critical exponents
strongly verifies the scaling ansatz Eq.~\eqref{eq:ansatz}.
Although the scaling ansatz may be intuitively clear and similar relations were discussed for the bosonic
Ginzburg-Landau-Wilson theory in the context of the cuprate high-$T_c$ superconductivity
~\cite{Fisher1991,Lawrie1997,Tesanovic1999},
its validity is {\it a priori} non-trivial and
there have been no non-perturbative analyses even for the well-known bosonic criticality.
This is in stark contrast to the conventional finite system size scaling at $B=0$ which
has been well established for various systems~\cite{Cardy}.
The present study is a first non-perturbative analysis of the $l_B$-scaling relation,
providing a clear insight from a statistical physics point of view for the quantum critical magnetic catalysis.
Besides, the scaling ansatz could be used as a theoretical tool for investigating some critical phenomena similarly to
the recently developed finite correlation length scaling in tensor network states
(see also Appendix~\ref{app:a})~\cite{Corboz2018,Tada2019,Rader2018,Pollmann2009}.
Based on this observation,
one could evaluate critical behaviors of the magnetic catalysis in other universality classes in $(2+1)$-dimensions,
such as $\mathrm{SU}(2)$ and $\mathrm{U}(1)$ symmetry breaking with a general number of Dirac fermion components,
by using the critical exponents obtained for the phase transitions at $B=0$
~\cite{Sorella1992,Assaad2013,Wang2014,Wang2016,Li_PRB2015,Li2015,
Hesselmann2016,Huffman2017,Hohenadler2014,Toldin2015,Otsuka2016,Otsuka2018,su4_2018,
Corboz2018,Braun2012,Rosenstein1993,Wetterich2001,Herbut2006,Herbut2009,Herbut2014,Ihrig2018,DiracQCP}.
It would be a future problem to clarify
the exact condition for the $l_B$-scaling to hold in general cases.
\subsection{Phase diagram}
The above discussions can be summarized into a global phase diagram near the quantum critical point
in the $V$-$B$ plane at zero temperature
as shown in Fig.~\ref{fig:phase_diagram}.
Here, we mainly focus on the critical behaviors of the order parameter but not on phase boundaries.
In this phase diagram, there are two length scales; one is the correlation length of the CDW order parameter $\xi$
at $B=0$,
and the other is the magnetic length $l_B$.
One can compare it with the familiar finite temperature phase diagram near a quantum critical point
~\cite{QCPreview1997,MoriyaUeda2000,QCPreview2007}.
The length scale $l_B$ in our case corresponds to
a system size along the imaginary time, $L_{\tau}=1/T$,
in a standard quantum critical system at finite temperature $T$.
In a finite temperature system, anomalous finite temperature behaviors are seen when
the dynamical correlation length $\xi_{\tau}\sim \xi^z$ becomes longer than the temporal
system size, $\xi_{\tau}\gg L_{\tau}$, so that the critical singularity is cut off by $L_{\tau}$ in the imaginary time
direction~\cite{QCPreview1997,MoriyaUeda2000,QCPreview2007}.
Similarly in the present system at $T=0$,
physical quantities will exhibit anomalous $B$-dependence
when the spatial correlation length $\xi$ is longer than the magnetic length, $\xi\gg l_B$,
and the critical singularity is cut off by $l_B$ in the spatial direction.
In this way, we can understand the scaling behavior $M\sim l_B^{-\beta/\nu}\sim B^{\beta/2\nu}$ in close analogy with
the finite temperature scaling behaviors associated with a quantum critical point at $B=0$.
On the other hand, the order parameter shows conventional $B$-dependence,
$M(B)\sim B$ or $M(B)-M(0)\sim B^2$, when the system is away from the quantum critical point,
$\xi\ll l_B$.
We note that our phase diagram would be qualitatively applicable to an interacting Dirac system
with a general flavor number $N_f$
including $N_f\rightarrow\infty$ with $\beta=\nu=1$~\cite{Shovkovy2013book,Miransky2015review}.
It is also noted that the Dirac semimetal phase will be extended to a $B\neq 0$ region at finite low temperature
~\cite{Shovkovy2013book,Miransky2015review,QCD1,QCD2,QCD3,QCD4,Boyda2014,DeTar2016,DeTar2017},
and the critical behaviors can be modified as will be briefly discussed later.
\begin{figure}[tbh]
\includegraphics[width=5.5cm]{phase_diagram-crop.pdf}
\caption{Schematic phase diagram in the $V$-$B$ plane at zero temperature
and the $B$-dependence of $M(V,B)$ for fixed $V$ in each region.
The CDW state at $B=0$ is denoted as CDW$_0$ and
$M_0(V)=M(V,B=0)\sim(V-V_c)^{\beta}$.
The crossover boundaries (dashed lines) are characterized by
$l_B\simeq \xi$.
}
\label{fig:phase_diagram}
\end{figure}
We would also expect that a similar phase diagram could be seen even in a system with long-range interactions such as
QED-like theories in the massless limit,
because it is considered that criticality of a quantum phase transition in a $(2+1)$-dimensional Dirac system
driven by a short-range interaction
is not affected by the long-range Coulomb interaction~\cite{Hohenadler2014,Herbut2009}.
It is noted that, while the Coulomb interaction is (marginally) irrelevant at the transition point,
it will play an important role in the weak coupling regime, and the order parameter
could behave as $M\sim \sqrt{B}$ even for arbitrarily small coupling~\cite{Shovkovy2013book,Miransky2015review}.
\subsection{Discussions}
In this section, we discuss several issues in the magnetic catalysis including possible future studies.
{\it Comparison with conventional finite size effects}---
In the previous section, we have discussed the effects of a finite $l_B$ in analogy with
the temporal size $L_{\tau}$.
Here, we make a comparison of the magnetic catalysis as a finite size effect in spatial directions
with the conventional finite size effects.
In a finite size Dirac system with an isotropic linear system size $L$ in absence of a magnetic field,
an order parameter $M$ (more precisely, a long range order $M=\sqrt{\langle \hat{M}^2\rangle}$)
is usually overestimated when compared with the thermodynamic value,
and it shows a smooth crossover over a wide range of interaction strengths when the system size is fixed
~\cite{Sorella1992,Assaad2013,Wang2014,Wang2016,Li_PRB2015,Li2015,Hesselmann2016,Huffman2017,
Hohenadler2014,Toldin2015,Otsuka2016,Otsuka2018,su4_2018}.
For different system sizes, it behaves as $M\sim L^{-\beta/\nu}$ at the critical point based on the
conventional finite size scaling ansatz.
Similar scaling relations hold also for an infinite system within a framework of tensor network states where
the system size $L$ is replaced by the correlation length due to a finite bond dimension
(see also Appendix~\ref{app:a})~\cite{Corboz2018,Tada2019}.
In this sense, at least formally,
the enhanced $M$ by the magnetic field in the present study is analogous
to the overestimated $M$ in a conventional finite size system without a magnetic field.
Furthermore, these two phenomena share a common physical origin, namely the dimensional reduction.
As mentioned in Sec.~\ref{sec:intro},
a magnetic field reduces the spatial dimensionality $d\rightarrow d-2$ via the Landau quantization.
Similarly, a small system size quantizes the spatial degrees of freedom and possible
wavenumbers are discretized.
Consequently, the density of states at low energy can become larger than that in the thermodynamic limit
and correlation effects can be amplified,
which would lead to enhanced/overestimated $M$.
Therefore, the magnetic catalysis can be regarded as a finite size effect and is expected to be a quite universal
phenomenon.
However,
there is a crucial difference that the finite $l_B$ effect can be observed in an experiment as
an anomalous $B$-dependence $M(B)\sim l_B^{-\beta/\nu}
\sim B^{\beta/2\nu}$,
in contrast to the familiar finite size scaling, $M\sim L^{-\beta/\nu}$.
{\it Ground state energy density}---
Although we have been focusing on the CDW order parameter,
scaling behaviors will also be seen in other quantities such as the ground state energy density $\varepsilon$ itself.
According to Eq.~\eqref{eq:ansatz}, $\varepsilon$ of a thermodynamically large system is expected to behave as
\begin{align}
\varepsilon(g,l_B^{-1})=\varepsilon(g,0) +\frac{\varepsilon_\textrm{sing}(g l_B^{1/\nu})}{l_B^3}+\cdots.
\end{align}
At the quantum critical point $g=0$ (i.e. $V=V_c$),
the prefactor in front of $l_B^{-3}$ might be factorized as $\varepsilon_\textrm{sing}(0)=C_0v$ with a constant $C_0$
and the ``speed of light" $v$ characterizing the underlying field theory with
the Lorentz invariance
~\cite{Rader2018}.
Away from the quantum critical point, the mean field behaviors will be qualitatively correct
as we have seen in the CDW order parameter $M$ (Sec.~\ref{sec:nonQCP}).
Indeed, our iDMRG and mean field calculations suggest that,
for a small magnetic field $l_B^{-1}\rightarrow0$,
$\varepsilon_\textrm{sing}(g l_B^{1/\nu}\ll -1)\sim $ const $>0$
in the Dirac semimetal regime $g<0$ (i.e. $V< V_c$),
while $\varepsilon_\textrm{sing}(g l_B^{1/\nu}\gg 1)\sim l_B^{-1}>0$ in the ordered phase $g>0$ (i.e. $V> V_c$),
which is in agreement with the large $N_f$ field theory~\cite{Shovkovy2013book,Miransky2015review}.
Consequently, the orbital magnetic moment $m_\textrm{orb}=-\partial \varepsilon/\partial B$ will be
$m_\textrm{orb}\sim -\sqrt{B}$ for the former (and also at the critical point), and
$m_\textrm{orb}\sim -B$ for the latter.
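These power laws follow from simple power counting: with $l_B^{-1}=\sqrt{B}$, the singular correction to $\varepsilon$ scales as $l_B^{-3}\propto B^{3/2}$ in the semimetal regime and at the critical point, giving $m_\textrm{orb}\propto-\sqrt{B}$, while in the ordered phase it scales as $l_B^{-1}\times l_B^{-3}\propto B^{2}$, giving $m_\textrm{orb}\propto-B$.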
Details of the ground state energy density and the diamagnetic orbital magnetic moment will be discussed elsewhere.
{\it Finite temperature correction}---
Finally, we briefly touch on finite temperature effects around $T= 0$.
At finite temperature, the new length scale $L_{\tau}$ is introduced and
we expect an anomalous $T/\sqrt{B}$ scaling in our system,
by following a scaling hypothesis for the singular part of the free energy density,
$f_\textrm{sing}(g,h,l_B^{-1},L_{\tau}^{-1})=b^{-D}f_\textrm{sing}(b^{y_g}g,b^{y_h}h,bl_B^{-1},b^zL_{\tau}^{-1})$ with $z=1$.
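With $z=1$, the two infrared cutoffs enter this hypothesis only through the ratio of the corresponding lengths, $l_B/L_{\tau}=T/\sqrt{B}$ (in units where the velocity is set to one), which is the origin of the $T/\sqrt{B}$ scaling variable.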
For example, the CDW order parameter would have a finite temperature correction given by
$M(B,T)= B^{\beta/2\nu}\Psi(T/\sqrt{B})$ at the critical point $g=0$,
where $\Psi(\cdot)$ is a scaling function with the property $\Psi(x\rightarrow0)=$ const.
Since finite temperature effects are important in experiments,
detailed investigations of them would be an interesting future problem.
\section{summary}
We have discussed quantum criticality of the magnetic catalysis
in spinless fermions
on the $\pi$-flux square lattice by non-perturbative calculations with iDMRG.
We found the scaling behavior of the CDW order parameter $M(B)$
characteristic of the $(2+1)$-dimensional chiral Ising
universality class, and established a global phase
diagram near the quantum critical point.
The present study is a first non-perturbative investigation of fermionic quantum criticality under a magnetic field,
and could provide a firm basis for deeper understandings of other related systems.
\section*{acknowledgements}
We thank F. Pollmann for introducing the open source code TenPy for the iDMRG calculations.
We are also grateful to Y. Fuji, M. Oshikawa, and K. Fukushima for valuable discussions.
The numerical calculations were performed at Max Planck Institute for the Physics of Complex Systems.
This work was supported by JSPS KAKENHI Grant No. JP17J05736,
No. JP17K14333, KAKENHI on Innovative Areas ``J-Physics''
[No. JP18H04318].
\section{Introduction}
It has become clear by now that the standard Poisson-Boltzmann (PB) theory used to describe and understand the
electrostatic interactions in colloidal systems has severe limitations and can sometimes give qualitatively unreliable
if not outright wrong answers \cite{Naji2010}. There are several distinct reasons why the PB theory cannot describe some salient features of highly charged Coulomb systems.
First and most notably, the PB theory is a mean-field theory, and thus completely misses the important effects of ionic correlations that have recently been the focus of much research in Coulomb fluids \cite{Kanduc2009}. The correlation effect, first observed in simulations \cite{Guldbrand1984}, exposes the limitations of the mean-field \emph{ansatz} in quite a drastic manner, since for highly charged systems the interactions mediated by mobile ions between equally charged interfaces can become attractive. However, general theorems demand the interaction to be repulsive at the mean-field level \cite{Neu1999,Sader1999,Trizac1999}. Several lines of thought were spawned by simulations and converged into a paradigm shift that allowed for a simple conceptual understanding of why the mean-field picture breaks down for highly charged systems and how to formulate a theory that would be valid in these circumstances \cite{Naji2005}. This paradigm shift led to a dichotomy between the weak and strong-coupling approaches that delimit the exact behavior of a Coulomb system at any value of electrostatic coupling \cite{Kanduc2010}.
Another drawback of the PB theory is the physical model on which it is based -- point charged particles -- that neglects all ion-specific effects except for the ion valency. It is thus a \emph{one parameter theory} where the ions differ only in the amount of charge they bear. One straightforward way to amend this drawback, sharing some of the conceptual simplicity with the original Poisson-Boltzmann formulation, is to take into account the excess static ionic polarizability of the ions \cite{Ben-Yaakov2011,Ben-Yaakov2011b,Frydel2011} proportional to the volume of the cavity created by the ion in the solvent. Static excess ionic polarizability is then a second parameter that differentiates between different but equally charged ionic species and thus presents an important step towards more {\em civilized} models of Coulomb fluids.
Studies of the excess ionic polarizability have a venerable history and go all the way back to the classical book by Debye on polar molecules (see discussion on pages 111-115 in Ref. \cite{Debye1929}), where he already discussed cavities around ions having a different value of dielectric constant compared to the surrounding solution. These cavities in fact represent excess polarization of the ions in aqueous solvent. Since due to saturation effects for most salts the interior dielectric constant should be taken much smaller than the aqueous one, the corresponding (static) dielectric constant of the salt should then be smaller than for pure solvent. This corresponds to negative excess ionic polarizability. While in Debye's analysis the effect scales as the volume of the ionic cavity, there are indications that for large enough solutes it should actually scale with the area of the cavity \cite{Chandler2005}.
One of the moot points of Debye's analysis is exactly how to pick the right size of the cavity, an issue that has continued unabated ever since \cite{Conway1981}. The changes in the effective dielectric constant of ionic solutions due to ionic polarizability were later picked up by Bikerman \cite{Bikerman1942} who, among other things, acknowledged that a realistic treatment of ions in aqueous solution should take their finite size into account (see also the discussion in \cite{Hatlo2012}) as well as their excess polarizability. The effects of ionic polarizability and the associated dielectric decrement on the interactions between charged macromolecular surfaces in the presence of mobile counterions have been investigated in more recent times starting from the fundamental work of Netz \cite{Netz2001} and continuing with a steady stream of works \cite{Ben-Yaakov2011, Ben-Yaakov2011b,Frydel2011,Hatlo2012}.
Some facets of the ionic and colloid polarizability were discussed starting from the weak-coupling level by generalising the zero Matsubara frequency van der Waals term and modifying the appropriately formulated linearised Debye-H\"uckel theory \cite{Netz2001,Netz2007}. Levin and coworkers \cite{Levin2009a,Levin2009} dealt with polarizability in the context of (ideally) polarizable ions in the vicinity of the dielectric interface. They also formulated a theory of monovalent and multivalent counterions in suspensions of polarizable colloids or nanoparticles \cite{Bakhshandeh2011}\ which in some respects complements our work where the mono or polyvalent counterions themselves are treated as polarizable.
The main conceptual fulcrum of our present work is the dielectric decrement of ionic solutions that has been attributed to various sources, which underlie the changes in the dielectric response of the solution, but can be universally quantified by an excess ionic polarizability \cite{Ben-Yaakov2011,Ben-Yaakov2011b}. It is proportional to the derivative of the (static) dielectric constant of a salt solution with respect to the concentration of the ions. Numerically this last coefficient, $\tilde\beta$ \cite{Ben-Yaakov2011}, turns out to be between $-7\,{\rm M}^{-1}$ and $-20\,{\rm M}^{-1}$ for most of the common salts \cite{Hasted1973}.
Here we shall proceed with the analysis of effects of the excess static ionic polarizability of ions by formulating consistent weak- and strong-coupling approaches that will lead to a two parameter -- charge and static excess polarizability -- theory of a Coulomb fluid. We thus reformulate the basic model of a Coulomb fluid and investigate its consequences. This is accomplished by first incorporating the excess ionic polarizability effect in a consistent way into a microscopic model and then solving the corresponding theory at the mean-field weak-coupling level as well as at the strong coupling level.
It further turns out that the radius of the ions (more precisely of their hydration shell or cavity) must be introduced, leading to an even more civilized three-parameter theory. The presented theory thus has a very broad parameter space that we cannot analyze in complete detail. We point to some salient features and leave most of the details for future work.
\section{Model}
We are interested in the behavior of mobile charges (counterions)
immersed in a planar slab of thickness $L$ filled by aqueous solvent of permittivity $\epsilon\ind{w}$. The slab is assumed to be confined between two semi-infinite regions of permittivity $\epsilon_\textrm{ext}$ that bear fixed charges of opposite sign to the sign of the mobile charges with surface charge density $\sigma_0$. Counterions have a radius $R$, a charge $e = q e_0$, where $e_0$ is the elementary charge of the electron and $q$ is their valency, and an excess polarizability $\alpha$. This latter quantity is defined precisely as the difference between the aqueous solvent polarizability and the proper ionic polarizability, and may thus be negative as surmised by Debye \cite{Debye1929}. In fact experimentally this is the standard behavior observed for many salts, see Ref \cite{Ben-Yaakov2011} for details. We will denote the whole space as $E$ and the volume of the slab as $V$. A schematic representation of the geometry of our model is given in Fig. \ref{f_system}.
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=0.5\linewidth]{schematic.eps}
\end{center}
\caption{Polarizable counterions of excess polarizability $\alpha$ between two charged plates. The solvent in between has a permittivity $\epsilon\ind{w}$, while the two semi-infinite regions $z < 0$ and $z > L$ have permittivity $\epsilon_\textrm{ext}$. The two surfaces at $z = 0, L$ bear a surface charge density $\sigma$. $R$ is the radius of the ions.}
\label{f_system}
\end{figure}
\subsection{Field-action}\label{}
The partition function for $N$ counterions is
\begin{eqnarray}\label{zn}
Z_N & = & \frac{1}{N!}\int[d\phi]\prod_{j=1}^N d{\boldsymbol{x}}_j \\ & &\times \exp\left(-\frac{\beta\epsilon_0}{2}\int_E\epsilon({\boldsymbol{x}})(\nabla\phi({\boldsymbol{x}}))^2 d{\boldsymbol{x}}+\beta\sum_j\left[ie\phi({\boldsymbol{x}}_j)-\frac{\alpha}{2}(\nabla\phi({\boldsymbol{x}}_j))^2\right]-i\beta\int_{\partial V}\sigma({\boldsymbol{x}})\phi({\boldsymbol{x}})d'{\boldsymbol{x}}\right),\nonumber
\end{eqnarray}
where $\beta = (k_BT)^{-1}$ and $d'{\boldsymbol{x}}$ denotes the integration over the bounding surfaces $\partial V$. The standard field-theoretical representation of the Coulomb fluid partition function in terms of the fluctuating electrostatic potential has been used \cite{Dean2009b}, properly extended by the fact that the counterion energy in an electrostatic field contains the point charge contribution $ie\phi({\boldsymbol{x}}_j)$ as well as the term due to its excess polarizability $\frac{\alpha}{2}(\nabla\phi({\boldsymbol{x}}_j))^2$.
Note that in this general expression, the surface charge may not be uniform, although we will restrict ourselves to the case $\sigma({\boldsymbol{x}})=\sigma_0$.
The grand canonical partition function for a given fugacity $\lambda$ is then given by
\begin{equation} \label{xi}
\mathcal{Z}=\sum_{N=0}^\infty \lambda^N Z_N= \int \exp(-\beta S[\phi])[d\phi],
\end{equation}
where the field-action $S[\phi]$ is given by
\begin{equation}
\beta S[\phi]=\frac{\beta\epsilon_0}{2}\int_E\epsilon({\boldsymbol{x}})(\nabla\phi({\boldsymbol{x}}))^2 d{\boldsymbol{x}} - \lambda\int_V \exp\left(-\beta\left[\frac{\alpha}{2}(\nabla\phi({\boldsymbol{x}}))^2-ie\phi({\boldsymbol{x}})\right]\right) d{\boldsymbol{x}} - i\beta\int_{\partial V} \sigma({\boldsymbol{x}})\phi({\boldsymbol{x}})d'{\boldsymbol{x}}.
\label{action}
\end{equation}
This is the fundamental expression that we will evaluate; we will specifically concentrate on its dependence on the separation between charged plane-parallel boundaries.
\subsection{Dimensionless field-action}\label{}
The field-action can be rewritten in terms of dimensionless parameters. Of course this analysis holds only in 3D; in other dimensions, the characteristic lengths that define the dimensionless parameters would have to be defined differently \cite{Dean2009b}. The dimensionless form of the action itself suggests various approximations that allow an explicit and exact evaluation of the grand canonical partition function \cite{Naji2005}.
First, we recall the definition of the Bjerrum and Gouy-Chapman lengths, $l\ind{B} = {\beta e_0^2}/{4\pi \epsilon\ind{w}\epsilon_0} $ and $ l\ind{GC} = {1}/ {2\pi q l\ind{B}\sigma_S}$, where $\sigma_0 = e_0 \sigma_S$ is chosen to be positive. The electrostatic "coupling constant" is then defined as the ratio \cite{Boroudjerdi2005}
\begin{equation}
\Xi = \frac{q^2 l\ind{B}}{l\ind{GC}} = 2\pi q^3 l\ind{B}^2\sigma_S = q^3 ~\Xi_0.
\end{equation}
Above we specifically decomposed the coupling parameter into its $q$ and $\sigma_S$ dependence. The dimensionless length, field, permittivity and surface charge can then be expressed as $\tilde {\boldsymbol{x}} = {\boldsymbol{x}}/l\ind{GC}$, $ \tilde\phi = \beta e \phi$, $\varepsilon({\boldsymbol{x}}) = \epsilon({\boldsymbol{x}})/\epsilon\ind{w}$, $s({\boldsymbol{x}}) = -\sigma({\boldsymbol{x}})/\sigma_0$. One can also introduce a rescaled polarizability defined as
\begin{equation}
\tilde \alpha = \frac{\beta }{(\beta q e_0 l\ind{GC})^2} \alpha.
\end{equation}
Usually instead of using the excess polarizability one can use the dielectric decrement $\tilde\beta$ in units of inverse Mole per liter \cite{Ben-Yaakov2011}, defined as $\alpha = \epsilon_0 \tilde\beta$. Typically the dielectric decrement for various salts is negative.
The dimensionless polarizability represents an additional independent parameter of the theory. Finally we define the dimensionless fugacity as $ \tilde \lambda=2\pi\Xi l\ind{GC}^3\lambda$.
We can estimate the numerical values for all these parameters and obtain typical values for monovalent counterions that are of the order of: $l\ind{B} \simeq 1\textrm{nm}$, $ \Xi \simeq 1$, $\tilde\alpha \simeq 10^{-2}$ and $\varepsilon\ind{ext} \simeq 5\times 10^{-2}$.
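These order-of-magnitude estimates are easily reproduced; the following sketch (the surface charge density and dielectric decrement are assumed, representative values, not fitted ones) evaluates the characteristic lengths and dimensionless parameters:
\begin{verbatim}
import scipy.constants as const

T = 300.0                    # temperature (K)
eps_w = 80.0                 # relative permittivity of water
q = 1                        # counterion valency
sigma_S = 1.0 / 3e-18        # assumed: one charge per 3 nm^2 (m^-2)
decrement = -10.0            # assumed dielectric decrement (M^-1)

beta = 1.0 / (const.k * T)
l_B = beta * const.e**2 / (4 * const.pi * eps_w * const.epsilon_0)
l_GC = 1.0 / (2 * const.pi * q * l_B * sigma_S)
Xi = q**2 * l_B / l_GC
alpha = const.epsilon_0 * decrement * 1e-3 / const.N_A  # per ion, SI units
alpha_t = beta * alpha / (beta * q * const.e * l_GC)**2
print(f"l_B = {l_B*1e9:.2f} nm, Xi = {Xi:.2f}, alpha_t = {alpha_t:.3f}")
# -> l_B ~ 0.7 nm, Xi ~ 1, alpha_t ~ -0.05, in line with the text
\end{verbatim}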
We can now derive the grand canonical partition function in the form
\begin{equation}
\label{dl_pf}
\mathcal{Z}=\int \exp\left(-\frac{S[\phi]}{\Xi}\right)[d\phi],
\end{equation}
where the field action can be obtained as
\begin{equation}
\label{dl_action}
S[\phi]= \frac{1}{8\pi}\int_E\varepsilon({\boldsymbol{x}})(\nabla\phi({\boldsymbol{x}}))^2 d{\boldsymbol{x}} - \frac{\lambda}{2\pi}\int_V \exp\left(-\frac{\alpha}{2}(\nabla\phi({\boldsymbol{x}}))^2+i\phi({\boldsymbol{x}})\right) d{\boldsymbol{x}} + \frac{i}{2\pi}\int_{\partial V}s({\boldsymbol{x}}) \phi({\boldsymbol{x}})d'{\boldsymbol{x}}.
\end{equation}
Here, in order not to proliferate the notation, we simply renamed all the "$\;\tilde {}\;$" quantities back to their un-"$\;\tilde {}\;$" symbols, because in what follows we will work only with the dimensionless action.
This expression is then the point of departure for the evaluation of the free energy and pressure of the system. One should note here that the partition function (\ref{dl_pf}) depends on two parameters: the coupling constant $\Xi$ as well as the dimensionless polarizability $\alpha$, \emph{i.e.} it is a two-parameter function.
\subsection{Density and electroneutrality}\label{}
In unscreened systems with long range Coulomb interactions the stability is ensured only if the system as a whole is electroneutral. This is a particularity of long range interactions that becomes irrelevant for all finite range interaction potentials \cite{Kanduc2011}. Special care then needs to be taken in order to guarantee this stability, which is given as a
condition on the one-particle ionic density. The latter is defined by the operator
\begin{equation}
n({\boldsymbol{x}})= \lambda \exp\left(-\frac{\alpha}{2}(\nabla\phi({\boldsymbol{x}}))^2+i\phi({\boldsymbol{x}})\right)\mathbf{1}_V({\boldsymbol{x}}),
\end{equation}
where $\mathbf{1}_V({\boldsymbol{x}})$ is the indicator function of the volume $V$ defined by $\int f({\boldsymbol{x}})\mathbf{1}_V({\boldsymbol{x}})d{\boldsymbol{x}}=\int_V f({\boldsymbol{x}})d{\boldsymbol{x}}$. We will also use the indicator function of the surface $\partial V$, given by $\int f({\boldsymbol{x}})\mathbf{1}_{\partial V}({\boldsymbol{x}})d{\boldsymbol{x}}=\int_{\partial V}f({\boldsymbol{x}})d'{\boldsymbol{x}}$.
While the true density is actually given by $\frac{n}{2\pi\Xi}$, the above expression is easier to use in the mean field approximation since it does not involve $\Xi$.
We now impose average electroneutrality in the system by stipulating that
\begin{equation}\label{electroneutrality}
\int \langle n({\boldsymbol{x}})\rangle d{\boldsymbol{x}}=\int_{\partial V}s({\boldsymbol{x}})d'{\boldsymbol{x}}.
\end{equation}
This zero-moment gauge condition ensures that the system remains stable for any configuration of the charges. Electroneutrality needs to be formulated as an additional condition on the density function only for unscreened interactions, see \cite{Kanduc2011} for details.
\subsection{Grand potential, free energy, and pressure}\label{presdisc}
The grand canonical thermodynamic potential is defined by
\begin{equation}
J_{\lambda}=-\ln\mathcal{Z}_{\lambda}.
\end{equation}
We write explicitly the dependence on $\lambda$ since it will feature prominently in our analysis. The fugacity is not a physical parameter, so the pressure should not depend on it. To solve this issue, we have to know how $J_\lambda$ depends on $\lambda$; by differentiating (\ref{dl_pf}), we get
\begin{equation}
\frac{dJ_{\lambda}}{d\lambda}=-\frac{N}{2\pi\Xi\lambda},
\end{equation}
where $N$ has no subscript $\lambda$ because it does not depend on it as a consequence of electroneutrality (\ref{electroneutrality}). Now, it is clear that the free energy defined by
\begin{equation}\label{def_free_en}
F_{\lambda}=J_{\lambda} + \frac{N}{2\pi\Xi}\ln\lambda
\end{equation}
does not depend on $\lambda$, \emph{i.e.} $F_{\lambda} = F$, while $\ln\lambda$ is the chemical potential. This means that the free energy can be safely used to compute the pressure:
\begin{equation}
P=-\frac{\partial F}{\partial L}.
\label{defpress1}
\end{equation}
Note that all the energies defined above are energies per unit area because of the transverse extensivity of our system. Furthermore the above pressure is in dimensionless units; the physical pressure is thus $p=P/(\beta l\ind{GC}^3)$.
\section{Weak coupling approximation}
Depending on the strength of the Coulomb coupling as parameterized by the coupling constant $\Xi$ the grand canonical partition function exhibits two well defined limiting laws \cite{Boroudjerdi2005,Kanduc2009}. For vanishing values of the coupling constant, $\Xi \rightarrow 0$, the partition function can be well approximated by its saddle-point value and fluctuations around it. In fact the saddle-point is known to correspond exactly to the mean-field Poisson-Boltzmann expression while the Gaussian fluctuations around the mean-field correspond to the zero Matsubara frequency van der Waals or thermal Casimir interactions \cite{Podgornik1988}.
We will first derive the mean-field equations for our field-action, equivalent to those derived elsewhere \cite{Ben-Yaakov2011b,Frydel2011}, and then evaluate the Gaussian fluctuations around the mean-field and their dependence on the separation between the bounding surfaces.
\subsection{Mean-field}
We start with the general saddle-point equation satisfied at equilibrium
\begin{equation}\label{mf_eq}
0=\left\langle \frac{\delta S}{\delta\phi}\right\rangle=\frac{1}{2\pi}\left[-\nabla\cdot\left\langle\left[\frac{\varepsilon({\boldsymbol{x}})}{2}+\alpha n({\boldsymbol{x}})\right]\nabla\phi({\boldsymbol{x}})\right\rangle-i\langle n({\boldsymbol{x}})\rangle + i s({\boldsymbol{x}})\mathbf{1}_{\partial V}({\boldsymbol{x}})\right].
\end{equation}
The mean-field is more often written in terms of the (real) electrostatic potential proportional to $\psi=-i\phi$, with the corresponding field-action $\tilde S[\psi]$, than in terms of the fluctuating potential $\phi$. For this new variable,
the mean-field configuration is evaluated from the saddle-point condition
\begin{equation}
\frac{\delta \tilde S[\psi_\textrm{MF}]}{\delta\psi({\boldsymbol{x}})}=0.
\end{equation}
The grand canonical potential is then approximated by
\begin{equation}
J\simeq J_\textrm{MF}=\frac{\tilde S[\psi_\textrm{MF}]}{\Xi}.
\end{equation}
From the saddle-point equation the mean-field equation can be rewritten in its Poisson-Boltzmann form as \cite{Ben-Yaakov2011b,Frydel2011}
\begin{equation}\label{mf_bulk}
\nabla\cdot\left[\left(\frac{\varepsilon}{2} + \alpha n\right)\nabla\psi_\textrm{MF}\right]=- n + s \mathbf{1}_{\partial V},
\end{equation}
where the density is given by
\begin{equation}\label{def_n}
n= \lambda \exp\left(\frac{\alpha}{2}(\nabla\psi_\textrm{MF})^2-\psi_\textrm{MF}\right)\mathbf{1}_V.
\end{equation}
In these two equations, it is clear that the fugacity can be absorbed into the electrostatic potential: this change will modify the grand potential but not the free energy. We can thus assume $\lambda=1$ for the mean-field as well as for fluctuations around it.
\subsection{Pressure in the plane-parallel geometry}
In 1D, which is also the case of two charged plane parallel surfaces since the mean potential depends only on the transverse coordinate $z$, the Poisson-Boltzmann equation has the form
\begin{equation}\label{mf_bulk1}
\left[\left(\frac{\varepsilon}{2} + \alpha n(z)\right)\psi_\textrm{MF}(z)'\right]'=-n(z) + s \mathbf{1}_{\partial V}(z),
\end{equation}
with
\begin{equation}
n(z)= \lambda \exp\left(\frac{\alpha}{2}\psi_\textrm{MF}(z)'^2- \psi_\textrm{MF}(z)\right)\mathbf{1}_V(z).
\end{equation}
We used the notation $f'(z) = \frac{df}{dz}(z)$. In this case it can be shown that the pressure in the system is a constant given by the contact value theorem \cite{Ben-Yaakov2011}
\begin{equation}
P=\frac{1}{2\pi\Xi}\left[ n(z)-\left(\frac{\varepsilon}{4}+\alpha n(z)\right)\psi\ind{MF}'(z)^2 \right]=\text{const}.
\label{presform}
\end{equation}
It can be easily checked that this quantity is actually equal to the pressure obtained equivalently by the standard thermodynamic definition $P=-\frac{1}{\Xi}\frac{\partial \tilde S[\psi\ind{MF}]}{\partial L}$. The above form for the interaction pressure contains an osmotic van't Hoff term, the first one in Eq. \ref{presform}, that contains the effects of the polarizability implicitly, \emph{i.e.} through the variation of the density profile on the polarizability, and a Maxwell stress term, the second one in Eq. \ref{presform}, that contains the polarizability effects explicitly.
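For illustration, the contact value expression is straightforward to evaluate once a mean-field profile is available (a minimal sketch; the profile arrays are assumed to come from a separate solver of Eq. \ref{mf_bulk1}):
\begin{verbatim}
import numpy as np

def contact_pressure(psi, dpsi, alpha, lam, Xi, eps=1.0):
    """Pressure of Eq. (presform), in dimensionless units, from a
    mean-field profile psi(z) and its derivative dpsi(z). At equilibrium
    the returned array is constant in z -- a useful check of the solver."""
    n = lam * np.exp(0.5 * alpha * dpsi**2 - psi)  # counterion density
    return (n - (eps / 4.0 + alpha * n) * dpsi**2) / (2.0 * np.pi * Xi)
\end{verbatim}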
\subsection{Second order fluctuations correction}
The grand potential can be computed to the next order by taking into account fluctuations around the mean-field solution. This is done by expanding $S$ around $\phi\ind{MF}=i\psi_\textrm{MF}$ to the second order, obtaining
\begin{equation}
S[\phi_\textrm{MF}+\theta] = \tilde S[\psi_\textrm{MF}] + \frac{1}{2}\int \frac{\delta^2 S}{\delta\phi(x)\delta\phi(y)}[\phi_\textrm{MF}] \theta(x)\theta(y)dx dy=\tilde S[\psi_\textrm{MF}]+S^{(2)}[\theta].
\end{equation}
In this case the grand potential is given by
\begin{equation}
J\simeq J_\textrm{MF}^{(1)}=\frac{\tilde S[\psi_\textrm{MF}]}{\Xi}-\ln\mathcal{Z}^{(2)} = \frac{\tilde S[\psi_\textrm{MF}]}{\Xi}-\ln\left[\int\exp\left(-\frac{S^{(2)}[\theta]}{\Xi}\right)[d\theta]\right],
\end{equation}
where $\mathcal{Z}^{(2)}$ is the contribution of the fluctuations to the partition function.
The effective action for the fluctuations $S^{(2)}[\theta]$ is straightforward to compute, yielding
\begin{eqnarray}
S^{(2)}[\theta]&=&
\frac{1}{4\pi}\int\left[\left(\frac{\varepsilon}{2}+\alpha n\right)(\nabla \theta)^2+ n \alpha^2(\nabla\psi_\textrm{MF}\cdot\nabla \theta)^2 - \left(\frac{1}{2}\nabla\cdot(\varepsilon\nabla\psi_\textrm{MF}) - s\mathbf{1}_{\partial V}\right)\theta^2\right],
\label{act_fluc}
\end{eqnarray}
where we have used the mean-field Eq. (\ref{mf_bulk}).
Note that $\varepsilon\nabla\psi_\textrm{MF}$ is not continuous, and thus leads to a surface term. As expected, this action does not depend on the fugacity but on the mean-field density, which is the only physically meaningful quantity.
In our model we consider parallel plates of constant surface charge, so the mean field problem is one dimensional. We can therefore split the coordinates into a one dimensional coordinate $z$ perpendicular to the plates, and a two dimensional one parallel to the plates: ${\boldsymbol{x}}=(z,{\boldsymbol{r}})$.
\subsection{Pressure}
Fourier transforming the fluctuations in the direction parallel to the plates we obtain
\begin{equation}
\theta(z,{\boldsymbol{r}})=\int \exp(i{\boldsymbol{k}}\cdot{\boldsymbol{r}})\tilde\theta(z,{\boldsymbol{k}})\frac{d{\boldsymbol{k}}}{(2\pi)^2},
\end{equation}
where $\tilde\theta(z,-{\boldsymbol{k}})=\tilde\theta(z,{\boldsymbol{k}})^*$ because $\theta$ is real. This decomposition furthermore allows us to write the fluctuations action (\ref{act_fluc}) as
\begin{equation}
S^{(2)}[\theta]=\int S_{\boldsymbol{k}}^{(2)}\left[\tilde\theta(\cdot,{\boldsymbol{k}})\right]\frac{d{\boldsymbol{k}}}{(2\pi)^2},
\end{equation}
where the one dimensional action is
\begin{eqnarray}
\label{act_fluc_k}
S_{\boldsymbol{k}}^{(2)}[\theta]&=&\frac{1}{4\pi}\int\left(\left[\frac{\varepsilon}{2}+\alpha n+ \alpha^2 n\psi_\textrm{MF}'^2\right]\theta'^2+\left[-\frac{1}{2}(\varepsilon \psi_\textrm{MF}')'+\left(\frac{\varepsilon}{2}+\alpha n\right) k^2\right]\theta^2\right) \nonumber \\
&&\quad+\frac{1}{4\pi} \left[\theta(0)^2+\theta(L)^2\right] \nonumber\\
&=& S^{(2)}_{{\boldsymbol{k}},\textrm{b}}+S^{(2)}_{{\boldsymbol{k}},\textrm{s}}.
\end{eqnarray}
This action thus has a bulk part $S^{(2)}_{{\boldsymbol{k}},\textrm{b}}$ and a surface part $S^{(2)}_{{\boldsymbol{k}},\textrm{s}}$. The surface action actually contains another term due to the fact that $\varepsilon \psi_\textrm{MF}'$ is discontinuous across the bounding surfaces, so that finally
\begin{equation}
S^{(2)}_{{\boldsymbol{k}},\textrm{s}}[\theta]=\frac{C}{2}\left(\theta(0)^2+\theta(L)^2\right), \qquad {\rm where} \qquad C=\frac{1}{4\pi}\left[\varepsilon\psi\ind{MF}'\right]_{0^+}^{0^-}+\frac{1}{2\pi},
\end{equation}
with the notation $[f(x)]_{x_1}^{x_2}=f(x_2)-f(x_1)$. We also used the symmetry $z\leftrightarrow L-z$ of our system. The partition function for the fluctuations can be written as a product of path-integrals,
\begin{equation}\label{pi_prod}
\mathcal{Z}^{(2)}=\prod_{\boldsymbol{k}} \int \exp\left(- \frac{S^{(2)}_{\boldsymbol{k}}[\theta]}{\Xi} \right)[d\theta].
\end{equation}
These path-integrals are computed in appendix \ref{ap1}, leading to
\begin{equation}
\mathcal{Z}^{(2)}_{\boldsymbol{k}}=\exp \left(\frac{kL}{2}\right)\sqrt{\frac{2\pi b^{\boldsymbol{k}}(0,L)}{\left[a\ind{f}^{\boldsymbol{k}}(0,L)+\frac{ C+\varepsilon_\textrm{ext} k/4\pi}{\Xi}\right]^2-b^{\boldsymbol{k}}(0,L)^2}},
\end{equation}
where the functions $b^{\boldsymbol{k}}$ and $a\ind{f}^{\boldsymbol{k}}$ are defined in the appendix.
The total free energy of the mean field configuration and fluctuations around it is then obtained as
\begin{equation}
F\ind{MF}^{(1)} = \frac{\tilde S[\psi\ind{MF}]}{\Xi}-\frac{1}{2\pi}\int_0^\infty \ln\left(\mathcal{Z}^{(2)}_k\right)k\,dk.
\label{pressure1}
\end{equation}
We note here that the structure of the free energy $F\ind{MF}^{(1)} $ does not look like a mean-field term independent of the counterion polarizability plus a zero frequency van der Waals term that stems from the polarizability of the counterions. Though this kind of decomposition is sometimes assumed in the literature \cite{Ninham1997,Edwards2004}, it clearly does not correspond to the weak-coupling approximation.
We can see numerically that the integral over the transverse Fourier modes in Eq. (\ref{pressure1}) diverges; this comes from our model of point-like dipoles. Taking into account the size $R$ of the polarizable ions (more precisely, $R$ is the radius of their hydration shell), the integral is regularized by the dimensionless cut-off
\begin{equation}
k\ind{max}=\frac{\pi l\ind{GC}}{R}.
\end{equation}
Physically the cut-off arises because electric fields which fluctuate on length scales shorter than the size of the
polarizable ion cannot polarize it.
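Schematically, the regularized mode integral entering Eq. \ref{pressure1} then takes the form below (a sketch; the per-mode $\ln\mathcal{Z}^{(2)}_k$ is assumed to have been tabulated beforehand, e.g. from the Pauli-van Vleck result of appendix \ref{ap1}):
\begin{verbatim}
import numpy as np

def fluctuation_term(lnZ2_k, k, R):
    """Transverse-mode integral of the fluctuation free energy, cut off
    at k_max = pi / R (lengths in units of l_GC); lnZ2_k, k are arrays."""
    mask = k <= np.pi / R
    return -np.trapz(lnZ2_k[mask] * k[mask], k[mask]) / (2.0 * np.pi)
\end{verbatim}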
The interaction pressure on this level of approximation is then obtained by taking into account Eq. \ref{defpress1}, leading to
\begin{equation}
P^{(1)}=-\frac{\partial F^{(1)}\ind{MF}}{\partial L}.
\label{defpress2}
\end{equation}
The results for the fluctuations-corrected interaction pressure from Eq. \ref{defpress2} on the weak coupling approximation level are shown on Fig. \ref{f_L_P} for $\Xi=1$, $\varepsilon\ind{ext}=0.05$ and $R=1$, for various values of the counterion polarizability $\alpha$. The fluctuations correction in $P^{(1)}$ is quite small compared to the mean-field value, but can become substantial as the polarizability $\alpha$ decreases, \emph{i.e.} becomes more negative. This correction reduces the interaction pressure between the surfaces. This indicates that ions with nominally equal charge (of equal valency) but differing in the polarizability will mediate markedly different interactions when confined between charged dielectric interfaces even at the weak-coupling level.
\begin{figure*}[t!]
\begin{center}
\begin{minipage}[b]{0.485\textwidth}
\begin{center}
\includegraphics[width=\textwidth]{L_P_bigL.eps}
\end{center}\end{minipage} \hskip0.25cm
\begin{minipage}[b]{0.485\textwidth}\begin{center}
\includegraphics[width=\textwidth]{L_P.eps}
\end{center}\end{minipage} \hskip0.25cm
\caption{Pressure $P^{(1)}(L)$ from Eq. \ref{defpress2} as a function of the plate separation $L$ for $\varepsilon\ind{ext}=0.05$, $\Xi=1$, and $R=1$. The dashed lines are the mean field result, the solid lines include the fluctuations.
\emph{Left}: for $\alpha=-0.1$ and large plate separation, the difference is barely distinguishable.
\emph{Right}: for small plate separation $L$ and various values of the polarizability $\alpha$. The effect of fluctuations can be quite important for large counterion polarizabilities.}
\label{f_L_P}
\end{center}\end{figure*}
\subsection{Density}\label{}
We now consider the ion density by taking into account the mean field solution as well as the fluctuations around the mean field. From (\ref{def_n}), the ion density is
\begin{eqnarray}
\rho_1({\boldsymbol{x}}) & = & \left\langle \exp\left(-\frac{\alpha}{2}(\nabla\phi({\boldsymbol{x}}))^2+i\phi({\boldsymbol{x}})\right)\right\rangle_1 \\
& = & (2\pi\alpha)^{-3/2}\int \exp\left(-\frac{p^2}{2\alpha}\right)\left\langle \exp\left(i{\boldsymbol{p}}\cdot\nabla\phi({\boldsymbol{x}})+i\phi({\boldsymbol{x}})\right)\right\rangle_1 d{\boldsymbol{p}},
\end{eqnarray}
where we used a Hubbard-Stratonovich transformation to obtain the last expression. In this way we have only terms linear in $\phi$ in the exponential. The subscript 1 denotes that we take into account the first order of the fluctuations. We notice that electroneutrality should also hold on average at equilibrium, so that $\rho_1$ will satisfy it. As a consequence, we only need $\rho_1$ up to a multiplicative constant, and this constant will be set by electroneutrality.
The interpretation of the above formula is that the local ion density is the average over a fluctuating dipolar moment vector of a Coulomb fluid characterized by ions with a charge and a dipolar moment. We then decompose $\phi$ into a mean-field term plus Gaussian fluctuations
\begin{equation}
\phi=i\psi\ind{MF}+\theta,
\end{equation}
obtaining
\begin{equation}\label{rho1_p}
\rho_1({\boldsymbol{x}})=(2\pi\alpha)^{-3/2}\int \exp\left(-\frac{p^2}{2\alpha}-{\boldsymbol{p}}\cdot\nabla\psi\ind{MF}({\boldsymbol{x}})-\psi\ind{MF}({\boldsymbol{x}})\right)\left\langle \exp\left(i{\boldsymbol{p}}\cdot\nabla\theta({\boldsymbol{x}})+i\theta({\boldsymbol{x}})\right)\right\rangle_1 d{\boldsymbol{p}}.
\end{equation}
The average is now easy to compute,
\begin{equation}
\left\langle \exp\left(i{\boldsymbol{p}}\cdot\nabla\theta({\boldsymbol{x}})+i\theta({\boldsymbol{x}})\right)\right\rangle_1=\exp\left(-\frac{1}{2}\left\langle({\boldsymbol{p}}\cdot\nabla\theta({\boldsymbol{x}})+\theta({\boldsymbol{x}}))^2\right\rangle_1\right),
\end{equation}
and then, introducing the correlator of the fluctuations
\begin{equation}
G({\boldsymbol{x}},{\boldsymbol{x}}')=\langle\theta({\boldsymbol{x}})\theta({\boldsymbol{x}}')\rangle_1,
\end{equation}
we can write it as
\begin{equation}
\left\langle \exp\left(i{\boldsymbol{p}}\cdot\nabla\theta({\boldsymbol{x}})+i\theta({\boldsymbol{x}})\right)\right\rangle_1=\exp\left(-\frac{1}{2}{\boldsymbol{p}}^T\nabla\nabla'^TG({\boldsymbol{x}},{\boldsymbol{x}}){\boldsymbol{p}}-\frac{1}{2}G({\boldsymbol{x}},{\boldsymbol{x}})-\frac{1}{2}{\boldsymbol{p}}\cdot\bar\nabla G({\boldsymbol{x}},{\boldsymbol{x}})\right).
\end{equation}
We used the notation $\nabla$ for the gradient with respect to the first variable of $G({\boldsymbol{x}},{\boldsymbol{x}}')$, $\nabla'$ for the second variable, and $\bar\nabla$ for the sum of the two gradients. We can now insert this expression into (\ref{rho1_p}), and are left with a Gaussian integral
\begin{equation}
\rho_1({\boldsymbol{x}})=(2\pi\alpha)^{-3/2}\int \exp\left(-\frac{1}{2}{\boldsymbol{p}}^T{\boldsymbol{\alpha}}_{1+}^{-1}({\boldsymbol{x}}){\boldsymbol{p}}-{\boldsymbol{p}}\cdot\nabla\psi_1({\boldsymbol{x}})-\psi_1({\boldsymbol{x}})\right)d{\boldsymbol{p}},
\end{equation}
where we introduced a renormalized polarizability (which is now a position-dependent matrix) and a renormalized field
\begin{eqnarray}
{\boldsymbol{\alpha}}_{1+}^{-1}({\boldsymbol{x}}) & = & \alpha^{-1}+\nabla\nabla'^T G({\boldsymbol{x}},{\boldsymbol{x}}),\\
\psi_1({\boldsymbol{x}}) & = & \psi\ind{MF}({\boldsymbol{x}}) + \frac{1}{2}G({\boldsymbol{x}},{\boldsymbol{x}}).
\end{eqnarray}
Performing the integral gives
\begin{equation}\label{rho1+}
\rho_{1+}({\boldsymbol{x}}) = \sqrt{\det\left(\frac{{\boldsymbol{\alpha}}_{1+}({\boldsymbol{x}})}{\alpha}\right)}\exp\left(\frac{1}{2}[\nabla\psi_1({\boldsymbol{x}})]^T{\boldsymbol{\alpha}}_{1+}({\boldsymbol{x}})\nabla\psi_1({\boldsymbol{x}})-\psi_1({\boldsymbol{x}})\right).
\end{equation}
The index "$+$" means that our computation works only for $\alpha>0$. In the more common case where $\alpha<0$, the computation is the same up to some factors of $i$, and we get
\begin{equation}
{\boldsymbol{\alpha}}_{1-}^{-1}({\boldsymbol{x}}) = |\alpha|^{-1}-\nabla\nabla'^T G({\boldsymbol{x}},{\boldsymbol{x}}),
\end{equation}
and
\begin{equation}
\rho_{1-}({\boldsymbol{x}})= \sqrt{\det\left(\frac{{\boldsymbol{\alpha}}_{1-}({\boldsymbol{x}})}{\alpha}\right)}\exp\left(-\frac{1}{2}[\nabla\psi_1({\boldsymbol{x}})]^T{\boldsymbol{\alpha}}_{1-}({\boldsymbol{x}})\nabla\psi_1({\boldsymbol{x}})-\psi_1({\boldsymbol{x}})\right).
\end{equation}
Now we have to compute $G({\boldsymbol{x}},{\boldsymbol{x}})$ and $\nabla\nabla'^T G({\boldsymbol{x}},{\boldsymbol{x}})$ at each point. To do this, we will use the same technique we used to compute the pressure: we Fourier transform the fluctuations in the direction parallel to the plates and use the Pauli-van Vleck formula.
Since the fluctuations action (\ref{act_fluc}) is a sum over different transversal modes, two modes with different wave vectors are uncorrelated and we can write the correlator as an integral over the modes,
\begin{equation}
G({\boldsymbol{x}},{\boldsymbol{x}}') = \int \exp(i{\boldsymbol{k}}\cdot[{\boldsymbol{r}}-{\boldsymbol{r}}'])G_{\boldsymbol{k}}(z,z')\frac{d{\boldsymbol{k}}}{(2\pi)^{d-1}},
\end{equation}
where $G_{\boldsymbol{k}}(z,z')$ is the one dimensional correlator for the action in Eq. (\ref{act_fluc_k}). More precisely, we need $G_{\boldsymbol{k}}(z,z)$ as well as $\partial\partial'G_{\boldsymbol{k}}(z,z)$. These functions are computed in appendix \ref{ap2}. Then we can write
\begin{equation}
G({\boldsymbol{x}},{\boldsymbol{x}})=\int G_{\boldsymbol{k}}(z,z)\frac{d{\boldsymbol{k}}}{(2\pi)^{2}}=\frac{1}{2\pi}\int_0^\infty G_k(z,z)k\,dk,
\end{equation}
and the matrix
\begin{equation}
\nabla\nabla'^TG({\boldsymbol{x}},{\boldsymbol{x}})=\int \begin{pmatrix}\partial\partial' & -i{\boldsymbol{k}}^T\partial \\ i{\boldsymbol{k}}\partial' & {\boldsymbol{k}}\kk^T\end{pmatrix}G_{\boldsymbol{k}}(z,z)\frac{d{\boldsymbol{k}}}{(2\pi)^2}=\frac{1}{(2\pi)}\int_0^\infty \begin{pmatrix}\partial\partial' & 0 \\ 0 & \frac{k^2}{2}\mathbf{1}_2\end{pmatrix}G_k(z,z) k\,dk,
\end{equation}
where $\mathbf{1}_2$ is the two dimensional identity matrix.
In conclusion, we have an algorithm to compute the pressure and the density: for each mode, we integrate the Pauli-van Vleck formula and then compute its contributions to $G({\boldsymbol{x}},{\boldsymbol{x}})$ and $\nabla\nabla'^TG({\boldsymbol{x}},{\boldsymbol{x}})$ and add them to the contributions of the previous modes. Finally, we compute the new density at each point and renormalize it using electroneutrality.
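A compact sketch of the accumulation step (array names are illustrative; the diagonal correlators $G_k(z,z)$ and $\partial\partial'G_k(z,z)$ are assumed to be tabulated on grids of $n_k$ modes and $n_z$ positions):
\begin{verbatim}
import numpy as np

def self_correlators(Gk_diag, dGk_diag, k):
    """Mode integrals for G(x,x) and the diagonal blocks of the gradient
    correlator; Gk_diag, dGk_diag have shape (n_k, n_z), k shape (n_k,)."""
    w = k / (2.0 * np.pi)
    G = np.trapz(w[:, None] * Gk_diag, k, axis=0)              # G(x,x)
    Gzz = np.trapz(w[:, None] * dGk_diag, k, axis=0)           # zz block
    Grr = np.trapz(w[:, None] * (k[:, None]**2 / 2) * Gk_diag, k, axis=0)
    return G, Gzz, Grr  # Grr: each of the two transverse diagonal entries
\end{verbatim}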
The mean field and first order densities can be compared on Fig. \ref{f_z_rho} (left). First of all, we observe that the effect of the fluctuations is small and depends on the values of the parameters. For higher $\Xi$, e.g. $\Xi=1$ on the figure, the ions get preferentially included in the region close to the dielectric boundaries. This is not a mean-field effect since the mean-field density does not depend on the coupling parameter.
The $\alpha$ dependence of the counterion density in the slab is shown on Fig. \ref{f_z_rho_alpha} (left). The mean field density depends strongly on $\alpha$ \cite{Ben-Yaakov2011,Kanduc2009}, and this dependence remains after one adds the fluctuation contribution. The inset shows that the deviation from the mean-field density increases with $\alpha$: the effect of the fluctuations is enhanced by the polarizability.
Overall, the counterions are attracted to the boundaries, and most of this effect has a mean-field nature. The polarizability tends to weaken this attraction at the mean-field level, but to strengthen it at the fluctuations level.
\begin{figure*}[t!]
\begin{center}
\begin{minipage}[b]{0.485\textwidth}
\begin{center}
\includegraphics[width=\textwidth]{z_rho_mf.eps}
\end{center}\end{minipage} \hskip0.25cm
\begin{minipage}[b]{0.485\textwidth}\begin{center}
\includegraphics[width=\textwidth]{z_rho_sc.eps}
\end{center}\end{minipage} \hskip0.25cm
\caption{Counterions density profile within the slab - dependence on the coupling constant $\Xi$. \emph{Left}: Weak coupling density close to the left electrode taking into account the fluctuations around the mean field as a function of the position within the slab ($z\in\left[0,\frac{L}{2}\right]$), for $R=0.3$, $\alpha=-0.3$, $\varepsilon\ind{ext}=0.05$ and $\Xi=0.3$ (dashed line) or $\Xi=1$ (solid line). The mean-field density itself is presented by the dotted line. \emph{Right}: Strong coupling density as a function of the position for $R=2$, $\alpha=-0.01$, $\varepsilon\ind{ext}=0.05$ and $\Xi=10$ (solid line) or $\Xi=50$ (dashed line).
}
\label{f_z_rho}
\end{center}\end{figure*}
\begin{figure*}[t!]
\begin{center}
\begin{minipage}[b]{0.485\textwidth}
\begin{center}
\includegraphics[width=\textwidth]{z_rho_mf_alpha_dev.eps}
\end{center}\end{minipage} \hskip0.25cm
\begin{minipage}[b]{0.485\textwidth}\begin{center}
\includegraphics[width=\textwidth]{z_rho_sc_alpha.eps}
\end{center}\end{minipage} \hskip0.25cm
\caption{Counterions density profile within the slab -- dependence on the polarizability $\alpha$. \emph{Left}: Weak coupling density close to the left electrode taking into account the fluctuations around the mean field as a function of the position within the slab ($z\in\left[0,\frac{L}{2}\right]$), for $R=0.3$, $\varepsilon\ind{ext}=0.05$, $\Xi=0.3$ and $\alpha=-0.3$ (solid line), $\alpha=-0.1$ (dashed line) or $\alpha=-10^{-6}$ (dotted line). \emph{Inset}: Deviation from the mean-field density. \emph{Right}: Strong coupling density as a function of the position within the dielectric slab for $R=1$, $\varepsilon\ind{ext}=0.05$, $\Xi=10$ and $\alpha=-0.05$ (solid line), $\alpha=-0.01$ (dashed line) or $\alpha=-10^{-6}$ (dotted line).
}
\label{f_z_rho_alpha}
\end{center}\end{figure*}
\section{Strong coupling}
The strong coupling approximation \cite{Boroudjerdi2005,Kanduc2009} is obtained in the limit of asymptotically large coupling parameter, $\Xi \rightarrow \infty$. In this limit it turns out that the statistical mechanical description of the system is equivalent to a properly normalized one-body description. This means that we can treat the system as composed of bounding surfaces and a single polarizable charge between them.
We will first derive the strong coupling form for the partition function, equivalent to the first order virial expansion, and then evaluate the density profile and the interaction pressure.
\subsection{Formulation}
The strong coupling limit formally corresponds to the $\lambda \ll 1$ limit. To the lowest non-trivial order in $\lambda$, the partition function is given by
\begin{equation}
\mathcal{Z} \simeq Z_0+\frac{\lambda}{2\pi\Xi} Z_1=Z_0\left(1+\frac{\lambda}{2\pi\Xi} U\right).
\end{equation}
We are thus interested in the evaluation of
\begin{equation}
Z_0 = \int[d\phi] \exp\left(-\frac{1}{2\pi\Xi}\left[\frac{1}{4}\int\varepsilon({\boldsymbol{x}})(\nabla\phi({\boldsymbol{x}}))^2 d{\boldsymbol{x}} + i\int_{\partial V} s({\boldsymbol{x}})\phi({\boldsymbol{x}})d'{\boldsymbol{x}}\right]\right).
\end{equation}
and $Z_1=\int z_1({\boldsymbol{x}}_0)d{\boldsymbol{x}}_0$, with
\begin{eqnarray}
z_1({\boldsymbol{x}}_0) & = & \int[d\phi] \exp\left(-\frac{1}{2\pi\Xi}\left[\frac{1}{4}\int\varepsilon({\boldsymbol{x}})(\nabla\phi({\boldsymbol{x}}))^2 d{\boldsymbol{x}}+i\int_{\partial V}s({\boldsymbol{x}})\phi({\boldsymbol{x}})d'{\boldsymbol{x}}\right]\right. \label{SC_z1} \\
&&\phantom{\int[d\phi] \exp()}
\left.-\left[\frac{\alpha}{2}(\nabla\phi({\boldsymbol{x}}_0))^2-i\phi({\boldsymbol{x}}_0)\right]\right). \nonumber
\end{eqnarray}
The quantity $\lambda z_1({\boldsymbol{x}}_0)/\mathcal{Z}\simeq \lambda z_1({\boldsymbol{x}}_0)/Z_0$ is the ionic density at ${\boldsymbol{x}}_0$. The total number of ions thus follows by stipulating that $\lambda U=N$, so that $\lambda$ can be tuned to satisfy electroneutrality.
As in the mean-field approximation, we will be interested in the density and the pressure. For the density, we will be specifically interested in the ${\boldsymbol{x}}_0$ dependent part of $z_1({\boldsymbol{x}}_0)$, whereas for the pressure we need the $L$ dependent part of $Z_0$ and $Z_1$. In this sense the density is easier to compute, so that we address this question first.
\subsection{Density}\label{}
We introduce an auxiliary vector ${\boldsymbol{p}}$ together with a Hubbard-Stratonovich decomposition and perform the integration over $\phi$ to write down Eq. \ref{SC_z1}\ as
\begin{eqnarray}
z_1({\boldsymbol{x}}_0) & = & (2\pi\alpha)^{-3/2}\det\left(-\frac{\varepsilon\nabla^2}{4\pi\Xi}\right)^{-1/2} \nonumber \\
& & \times \int d{\boldsymbol{p}} \exp\left(-\frac{{\boldsymbol{p}}^2 }{2\alpha}-\frac{1}{2}\left\langle\left( -\frac{1}{2\pi\Xi}\int s({\boldsymbol{x}})\phi({\boldsymbol{x}})d'{\boldsymbol{x}}+{\boldsymbol{p}}\cdot\nabla\phi({\boldsymbol{x}}_0)+\phi({\boldsymbol{x}}_0)\right)^2\right\rangle_{0}\right),
\end{eqnarray}
where $\langle \dots\rangle_0$ denotes the Gaussian average over $\phi$
with the action $S_0[\phi]=\frac{1}{8\pi}\int\varepsilon({\boldsymbol{x}})(\nabla\phi({\boldsymbol{x}}))^2 d{\boldsymbol{x}}$.
In this way the average can be written in terms of the correlator, $G({\boldsymbol{x}},{\boldsymbol{x}}')=\langle\phi({\boldsymbol{x}})\phi({\boldsymbol{x}}')\rangle_0$, as
\begin{equation}
\left\langle\left(-\frac{1}{2\pi\Xi}\int s({\boldsymbol{x}})\phi({\boldsymbol{x}})d'{\boldsymbol{x}}+{\boldsymbol{p}}\cdot\nabla\phi({\boldsymbol{x}}_0)+\phi({\boldsymbol{x}}_0)\right)^2\right\rangle_0={\boldsymbol{p}}^T\AA({\boldsymbol{x}}_0){\boldsymbol{p}}+2{\boldsymbol{p}}\cdot{\boldsymbol{B}}({\boldsymbol{x}}_0)+C({\boldsymbol{x}}_0),
\end{equation}
where
\begin{eqnarray}
\AA({\boldsymbol{x}}_0) & = & \nabla\nabla'^TG({\boldsymbol{x}}_0,{\boldsymbol{x}}_0),\label{def_A}\\
{\boldsymbol{B}}({\boldsymbol{x}}_0) & = & \nabla \left(G({\boldsymbol{x}}_0,{\boldsymbol{x}}_0) - \frac{1}{2\pi\Xi}\int s({\boldsymbol{x}})G({\boldsymbol{x}}_0,{\boldsymbol{x}})d'{\boldsymbol{x}}\right), \\
C({\boldsymbol{x}}_0) & = & G({\boldsymbol{x}}_0,{\boldsymbol{x}}_0) - \frac{1}{\pi\Xi}\int s({\boldsymbol{x}})G({\boldsymbol{x}}_0,{\boldsymbol{x}})d'{\boldsymbol{x}}+\frac{1}{(2\pi\Xi)^2}\int s({\boldsymbol{x}})s({\boldsymbol{x}}')G({\boldsymbol{x}},{\boldsymbol{x}}')d'{\boldsymbol{x}} d'{\boldsymbol{x}}' \nonumber\\
&=& C'({\boldsymbol{x}}_0)+\frac{1}{(2\pi\Xi)^2}\int s({\boldsymbol{x}})s({\boldsymbol{x}}')G({\boldsymbol{x}},{\boldsymbol{x}}')d'{\boldsymbol{x}} d'{\boldsymbol{x}}' ,
\end{eqnarray}
and $\nabla$ and $\nabla'$ denote respectively the gradient with respect to the first and second variable. We can now perform the explicit integration over ${\boldsymbol{p}}$, obtaining
\begin{equation}\label{z1}
z_1({\boldsymbol{x}}_0) = Z_0 \det\left(1+\alpha\AA({\boldsymbol{x}}_0)\right)^{-1/2} \times \exp\left(\frac{1}{2}{\boldsymbol{B}}({\boldsymbol{x}}_0)^T\left(\frac{1}{\alpha}+\AA({\boldsymbol{x}}_0)\right)^{-1}{\boldsymbol{B}}({\boldsymbol{x}}_0)-\frac{C'({\boldsymbol{x}}_0)}{2}\right),
\end{equation}
where
\begin{equation}\label{sc_z0}
Z_0 = \det\left(-\frac{\varepsilon\nabla^2}{4\pi\Xi}\right)^{-1/2}\times\exp\left(-\frac{1}{2(2\pi\Xi)^2}\int s({\boldsymbol{x}})s({\boldsymbol{x}}')G({\boldsymbol{x}},{\boldsymbol{x}}')d'{\boldsymbol{x}} d'{\boldsymbol{x}}'\right).
\end{equation}
As we noted in the weak-coupling treatment, the Hubbard-Stratonovich transform depends on the sign of $\alpha$. However, it is easy to see here that the final expressions (\ref{def_A}-\ref{z1}) remain the same if $\alpha$ is negative.
We should mention that a problem arises in Eq. (\ref{z1}) if an eigenvalue of $1+\alpha\AA({\boldsymbol{x}}_0)$ is negative. This happens notably if $\alpha$ is too negative, leading to a negative effective permittivity of the hydration shell of the ion, and thus to an instability for the field. The problematic value of $\alpha$ therefore strongly depends on the radius of the hydration shell.
In order to be more explicit, we need the expression for the correlator. Again, we Fourier transform the field in the direction parallel to the plates:
\begin{equation}
\phi(z,{\boldsymbol{r}})=\int \exp(i{\boldsymbol{k}}\cdot{\boldsymbol{r}})\tilde\phi(z,{\boldsymbol{k}})\frac{d{\boldsymbol{k}}}{(2\pi)^{d-1}},
\end{equation}
and the correlator for the ${\boldsymbol{k}}$ mode is relatively easy to determine and is given in \cite{Kanduc2007}. To make the symmetry $z\rightarrow L-z$ more explicit, we switch to coordinates where the plates are located at $-L/2$ and $L/2$; in this case the correlator is given by
\begin{equation}\label{correl_k}
G_{\boldsymbol{k}}(z,z')=4\pi\Xi\left[\frac{\exp(-k|z-z'|)}{2k}+ \frac{\cosh(k(z+z'))+\Delta \exp(-kL)\cosh(k(z-z'))}{\Delta^{-1}\exp(kL)-\Delta \exp(-kL)}\right],
\end{equation}
where
\begin{equation}
\Delta=\frac{1-\varepsilon\ind{ext}}{1+\varepsilon\ind{ext}}.
\end{equation}
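For concreteness, the correlator (\ref{correl_k}) is straightforward to evaluate numerically. The following minimal Python sketch implements it directly in the rescaled units used throughout the text; the function name and the chosen parameter values are illustrative only.
\begin{verbatim}
import numpy as np

def correlator_k(z, zp, k, L, Xi, eps_ext):
    """Transverse-mode correlator G_k(z, z') of Eq. (correl_k).

    Plates sit at -L/2 and +L/2; all quantities are in the
    dimensionless units of the text.  Valid for k > 0 only, since
    the k = 0 mode is ill-defined (see the pressure computation).
    """
    Delta = (1.0 - eps_ext) / (1.0 + eps_ext)
    bulk = np.exp(-k * np.abs(z - zp)) / (2.0 * k)           # free part
    denom = np.exp(k * L) / Delta - Delta * np.exp(-k * L)   # image series
    image = (np.cosh(k * (z + zp))
             + Delta * np.exp(-k * L) * np.cosh(k * (z - zp))) / denom
    return 4.0 * np.pi * Xi * (bulk + image)

# example: equal-point correlator at the mid-plane for a few modes
for k in (0.5, 1.0, 2.0):
    print(k, correlator_k(0.0, 0.0, k, L=2.0, Xi=10.0, eps_ext=0.05))
\end{verbatim}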
We will write $\AA({\boldsymbol{x}}_0)$, ${\boldsymbol{B}}({\boldsymbol{x}}_0)$ and $C'({\boldsymbol{x}}_0)$ using this expression for the correlator. Divergences may appear, but for the density itself we can drop (almost) all the ${\boldsymbol{x}}_0$-independent terms. In fact we find
\begin{equation}\label{AA}
\AA({\boldsymbol{x}}_0) = \frac{1}{2\pi}\int_0^{k\ind{max}}\begin{pmatrix}\partial\partial' &0 \\ 0 & \frac{k^2}{2}\mathbf{1}_2 \end{pmatrix} G_k(z_0,z_0) k\, dk,
\end{equation}
where we need a cut-off as in the weak-coupling limit, and
\begin{equation}
\partial\partial'G_k(z_0,z_0) = 4\Xi\left[q\ind{max}(k)-k\arctan\left(\frac{q\ind{max}(k)}{k}\right)\right]+4\pi\Xi k^2\frac{\cosh(2kz_0)-\Delta \exp(-kL)}{\Delta^{-1}\exp(kL)-\Delta \exp(-kL)},
\end{equation}
where $q\ind{max}(k)$ is defined by (\ref{defqmax}), $q\ind{max}(k)^2+k^2=k\ind{max}^2$. Then, for ${\boldsymbol{B}}({\boldsymbol{x}}_0)$, we will keep
\begin{equation}\label{BB}
{\boldsymbol{B}}({\boldsymbol{x}}_0) = 2\Xi\begin{pmatrix}
1\\ \mathbf{0} \end{pmatrix} \int_0^{k\ind{max}}\frac{k^2\sinh(2kz_0)}{\Delta^{-1}\exp(kL)-\Delta \exp(-kL)}dk,
\end{equation}
and we can drop the second term in $C'({\boldsymbol{x}}_0)$,
\begin{equation}
C'({\boldsymbol{x}}_0) = 2\Xi\int_0^{k\ind{max}}\frac{\cosh(2kz_0)}{\Delta^{-1}\exp(kL)-\Delta \exp(-kL)}k\, dk.
\end{equation}
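As an illustration of how Eqs.~(\ref{AA}), (\ref{BB}) and the expression for $C'({\boldsymbol{x}}_0)$ above combine into the unnormalized density of Eq.~(\ref{z1}), the following Python sketch performs the quadratures directly. It is only a schematic transcription of the formulas -- the function names, the quadrature scheme and the small lower integration bound used to avoid the $k=0$ endpoint are our own choices, not part of the derivation.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def density_sc(z0, L, Xi, eps_ext, alpha, k_max, eps=1e-6):
    """Unnormalized strong-coupling density of Eq. (z1):
    n(z0) ~ det(1+alpha*A)^(-1/2) exp(B^T (1/alpha+A)^(-1) B/2 - C'/2).
    Plates at -L/2, +L/2; k_max is the ultraviolet cutoff (ion size)."""
    Delta = (1 - eps_ext) / (1 + eps_ext)
    den = lambda k: np.exp(k * L) / Delta - Delta * np.exp(-k * L)

    def Gk(k):   # equal-point correlator, Eq. (correl_k)
        return 4*np.pi*Xi*(1/(2*k)
                           + (np.cosh(2*k*z0) + Delta*np.exp(-k*L))/den(k))

    def dGk(k):  # dd'G_k(z0,z0): regularized bulk part plus image part
        qm = np.sqrt(k_max**2 - k**2)
        return (4*Xi*(qm - k*np.arctan(qm/k))
                + 4*np.pi*Xi*k**2*(np.cosh(2*k*z0)
                                   - Delta*np.exp(-k*L))/den(k))

    A_zz = quad(lambda k: dGk(k)*k, eps, k_max)[0] / (2*np.pi)  # Eq. (AA)
    A_xx = quad(lambda k: 0.5*k**3*Gk(k), eps, k_max)[0] / (2*np.pi)
    A = np.diag([A_zz, A_xx, A_xx])
    B = np.array([2*Xi*quad(lambda k: k**2*np.sinh(2*k*z0)/den(k),
                            0, k_max)[0], 0.0, 0.0])            # Eq. (BB)
    Cp = 2*Xi*quad(lambda k: np.cosh(2*k*z0)*k/den(k), 0, k_max)[0]

    # det(1+alpha*A) < 0 signals the field instability discussed above
    M = np.eye(3)/alpha + A
    w = 0.5 * B @ np.linalg.solve(M, B) - 0.5 * Cp
    return np.linalg.det(np.eye(3) + alpha*A)**-0.5 * np.exp(w)
\end{verbatim}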
The result is shown in Fig.~\ref{f_z_rho} (right), where we used electroneutrality to normalize the strong-coupling result which, as we have seen, is defined only up to a constant.
We see that the counterions are completely excluded from the region close to the interfaces and pushed towards the middle of the dielectric slab. This effect increases with the coupling parameter $\Xi$. Fig.~\ref{f_z_rho_alpha}\ shows that, in contrast to the weak-coupling limit, the dependence on the polarizability is weak. Fig.~\ref{f_z_rho_eps}\ shows that the strong-coupling density is governed by the images: without them, the density would be constant within the slab \cite{Boroudjerdi2005,Jho2008,Kanduc2009,Naji2005}.
In conclusion, the polarizability has only a small effect at the strong-coupling level.
\subsection{Pressure}\label{}
Using its definition (\ref{def_free_en}) and the condition $\lambda U=N$, we can write the $L$-dependent part of the free energy, to first order in $\lambda$, as
\begin{equation}
F=J+\frac{N}{2\pi\Xi}\ln \lambda = -\ln\mathcal{Z}-\frac{N}{2\pi\Xi}\ln U \simeq -\ln Z_0-\frac{N}{2\pi\Xi}\ln U.
\label{fren}
\end{equation}
Of course we need to take into account the zero-moment gauge condition (electroneutrality, Eq.~\ref{electroneutrality}) when evaluating the above expression, which to the lowest order eliminates the $\Xi$ dependence.
Let us first compute the $L$-dependent part of $J_0=-\ln Z_0$. Using (\ref{sc_z0}) we get
\begin{equation}
J_0=\frac{1}{2}\ln \left[\det\left(-\frac{\varepsilon\nabla^2}{4\pi\Xi}\right)\right]+\frac{1}{2(2\pi\Xi)^2}\int s({\boldsymbol{x}})s({\boldsymbol{x}}')G({\boldsymbol{x}},{\boldsymbol{x}}')d'{\boldsymbol{x}} d'{\boldsymbol{x}}'.
\end{equation}
The first term is the thermal Casimir fluctuation free energy, and the second the electrostatic interaction between the plates. Using the decomposition of the correlator in orthogonal modes, we can write
\begin{equation}
\int s({\boldsymbol{x}})s({\boldsymbol{x}}')G({\boldsymbol{x}},{\boldsymbol{x}}')d'{\boldsymbol{x}} d'{\boldsymbol{x}}'=2\left(G_0(0,0)+G_0(0,L)\right),
\end{equation}
where we have taken $\int d{\boldsymbol{r}}=1$ to obtain the energy per unit area. Since the orthogonal mode $k=0$ is ill-defined in (\ref{correl_k}), we differentiate with respect to $L$ before taking the limit $k\rightarrow 0$. We get
$\frac{d}{dL}G_k(0,0) \underset{k\rightarrow 0}{\rightarrow} 0$ and
$\frac{d}{dL}G_k(0,L) \underset{k\rightarrow 0}{\rightarrow} -2\pi\Xi$, so that, up to an $L$-independent constant, we can set $G_0(0,L)=-2\pi\Xi L$. We thus get the final expression for the grand potential,
\begin{equation}
J_0 =\frac{1}{4\pi}\int \ln\left(1-\Delta^2\exp(-2kL)\right)k dk -\frac{L}{2\pi\Xi }.
\end{equation}
Within the strong-coupling virial expansion this term corresponds to the free energy of the system without any ion.
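Numerically, the two contributions to $J_0$ are easily evaluated. The following Python sketch (schematic, in the dimensionless units of the text, with the upper integration limit an arbitrary choice made safe by the exponential decay of the integrand) illustrates this:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def J0(L, Xi, eps_ext, k_cut=50.0):
    """L-dependent part of the ion-free grand potential (per unit
    area): thermal Casimir term plus the plate-plate attraction."""
    Delta = (1 - eps_ext) / (1 + eps_ext)
    casimir = quad(lambda k: np.log(1 - Delta**2 * np.exp(-2*k*L)) * k,
                   0, k_cut)[0] / (4 * np.pi)
    return casimir - L / (2 * np.pi * Xi)
\end{verbatim}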
Now we need to compute $U$, remembering that we only need the $L$ dependent terms in $\ln U$. Starting from
expression (\ref{z1}), we get
\begin{equation}\label{u}
z_1({\boldsymbol{x}}_0)=
{Z_0} \exp{(- W({\boldsymbol{x}}_0))},
\end{equation}
where $W({\boldsymbol{x}}_0)$ is an effective one-body potential given by
\begin{equation}\label{W}
W({\boldsymbol{x}}_0) = - \frac{1}{2}{\boldsymbol{B}}({\boldsymbol{x}}_0)^T\left(\frac{1}{\alpha}+\AA({\boldsymbol{x}}_0)\right)^{-1}{\boldsymbol{B}}({\boldsymbol{x}}_0) + \frac{C'({\boldsymbol{x}}_0)}{2} + {\textstyle\frac12} {\rm Tr }\log{\left(1+\alpha\AA({\boldsymbol{x}}_0)\right)}.
\end{equation}
The $Z_0$ term and the last term in the exponent of the above equation can be rearranged and interpreted in the following way: keeping only terms proportional to $(\nabla \phi)^2$ in the exponential of (\ref{SC_z1}), we can show that
\begin{equation}
\det\left(-\frac{\varepsilon\nabla^2}{4\pi\Xi}\right)^{-1/2}\det\left(1+\alpha\AA({\boldsymbol{x}}_0)\right)^{-1/2}=\det \left(-\nabla \left[\frac{\varepsilon({\boldsymbol{x}})}{4\pi\Xi}+\alpha\delta({\boldsymbol{x}}-{\boldsymbol{x}}_0)\right]\nabla\right)^{-1/2}.
\label{effdielfghdj}
\end{equation}
This means that these two functional determinants represent the thermal Casimir partition function for a system composed of a finite extension dielectric slab, two semi-infinite dielectric regions outside of it and a single polarizable ion within the slab. The delta function in the expression for the effective dielectric response function on the r.h.s. of (\ref{effdielfghdj}) needs to be regularized to avoid a divergence in the case of a point ion. It is clear that the last term in the effective one-body potential (\ref{W}) describes the thermal Casimir or zero-frequency van der Waals interaction between the polarizable particle and the dielectric interfaces in the system. It is given explicitly by
\begin{equation}
{\textstyle\frac12} {\rm Tr }\log{(1+\alpha\AA({\boldsymbol{x}}_0))} = {\textstyle\frac12} {\rm Tr }\log\left(1+\alpha \nabla\nabla'^TG({\boldsymbol{x}}_0,{\boldsymbol{x}}_0)\right) \simeq {\textstyle\frac12} \alpha {\rm Tr}\left[\nabla\nabla'^TG({\boldsymbol{x}}_0,{\boldsymbol{x}}_0)\right].
\end{equation}
In the asymptotic regime of large ${\boldsymbol{x}}_0$ we obtain the scaling ${\boldsymbol{x}}_0^{-3}$ which corresponds to the zero-frequency van der Waals interaction between the polarizable particle and a single dielectric discontinuity \cite{Parsegian2005}. Our results are thus completely consistent with everything else we know about the polarizable particles and their zero-frequency van der Waals interactions with dielectric discontinuities \cite{Ninham1997}.
At the end of the calculation, the $L$-dependent interaction free energy (\ref{fren}) contains first the usual extensive term, $\frac{L}{2\pi\Xi}$, giving rise to an attractive force between the plates that is independent of $L$. The other terms cannot be evaluated analytically but are easily calculated numerically: the computation of $\AA$ follows from (\ref{AA}), ${\boldsymbol{B}}$ is obtained from (\ref{BB}) and finally $C'({\boldsymbol{x}}_0)$ takes the form
\begin{equation}
C'({\boldsymbol{x}}_0)=2\Xi\int_0^{k\ind{max}}\frac{\cosh(2kz_0)}{\Delta^{-1}\exp(kL)-\Delta \exp(-kL)}k\, dk + 2L,
\end{equation}
where we differentiated with respect to $L$ before taking the limit $k\rightarrow 0$ in order to get the last term that represents the attraction between the ion and each of the bounding dielectric surfaces of the slab. Finally $U$ is given by integrating (\ref{u}).
The pressure obtained from the interaction free energy is shown in Fig.~\ref{f_L_P_sc_alpha_R} for different values of $\alpha$ and $R$. The dependence on the polarizability is very weak and non-monotonic, contrary to what we observed in the weak-coupling limit. The results are however very sensitive to the size of the ions, especially for large ions.
\begin{figure*}[t!]
\begin{center}
\begin{minipage}[b]{0.485\textwidth}
\begin{center}
\includegraphics[width=\textwidth]{L_P_sc_alphadep.eps}
\end{center}\end{minipage} \hskip0.25cm
\begin{minipage}[b]{0.485\textwidth}\begin{center}
\includegraphics[width=\textwidth]{L_P_sc_Rdep.eps}
\end{center}\end{minipage} \hskip0.25cm
\caption{Strong coupling pressure as a function of the plate separation $L$.
\emph{Left}: $R=2$, $\varepsilon\ind{ext}=0.05$ and $\Xi=10$ for different values of $\alpha$.
\emph{Right}: $\alpha=-0.01$, $\varepsilon\ind{ext}=0.05$ and $\Xi=10$ for different values of $R$.}
\label{f_L_P_sc_alpha_R}
\end{center}\end{figure*}
\begin{figure*}[t!]
\begin{center}
\begin{minipage}[b]{0.485\textwidth}
\begin{center}
\includegraphics[width=\textwidth]{z_rho_mf_eps.eps}
\end{center}\end{minipage} \hskip0.25cm
\begin{minipage}[b]{0.485\textwidth}\begin{center}
\includegraphics[width=\textwidth]{z_rho_sc_eps.eps}
\end{center}\end{minipage} \hskip0.25cm
\caption{Effect of the outer permittivity on the counterion density distribution.
\emph{Left}: Weak coupling density with $\alpha=-0.3$, $R=0.3$, $\Xi=0.3$ and $\varepsilon\ind{ext}=0.05$ (solid line) or $\varepsilon\ind{ext}=1$ (dashed line).
\emph{Right}: Strong coupling density with $\alpha=-0.01$, $R=2$, $\Xi=10$ and $\varepsilon\ind{ext}=0.05$ (solid line) or $\varepsilon\ind{ext}=1$ (dashed line).
}
\label{f_z_rho_eps}
\end{center}\end{figure*}
\begin{figure*}[t!]
\begin{center}
\begin{minipage}[b]{0.48\textwidth}
\begin{center}
\includegraphics[width=\textwidth]{L_P_eps.eps}
\end{center}\end{minipage} \hskip0.25cm
\begin{minipage}[b]{0.495\textwidth}\begin{center}
\includegraphics[width=\textwidth]{L_P_sc_eps.eps}
\end{center}\end{minipage} \hskip0.25cm
\caption{Effect of the outer permittivity on the interaction pressure.
\emph{Left}: Weak coupling pressure with $\alpha=-0.1$, $R=1$, $\Xi=0.3$ and $\varepsilon\ind{ext}=0.05$ (solid line) or $\varepsilon\ind{ext}=1$ (dashed line).
\emph{Right}: Strong coupling pressure with $\alpha=-0.01$, $R=2$, $\Xi=10$ and $\varepsilon\ind{ext}=0.05$ (solid line) or $\varepsilon\ind{ext}=1$ (dashed line).
}
\label{f_L_P_eps}
\end{center}\end{figure*}
\section{Discussion and Conclusions}\label{}
In this paper we have formulated a theory of Coulomb fluids that, apart from the charge of the mobile counterions, includes also their static excess polarizability. This leads to a possibility of ion-specific effects even for ions with nominally equal valency \cite{Ben-Yaakov2011}. Instead of starting from the phenomenological description of the ionic effects on the local dielectric function, an endeavor pursued in Refs.~\cite{Ben-Yaakov2011,Ben-Yaakov2011b}, we rather implemented the effect of ionic polarizability at the level of the field action, deriving the appropriate field-theoretic representation of the model. Though this variation in the approach results in the same form of the model at the mean-field level, the formulation presented here is ultimately more general and better suited for further analysis and for the implementation of the weak- and strong-coupling asymptotic limits.
After formulating the model and casting it into a field-theoretic form, we derive the pressure and the ionic density in the mean-field level approximation -- corresponding to the saddle-point of the field-theoretic action. We then add the effects of Gaussian fluctuations of the local electrostatic potential around the mean field. This constitutes the weak-coupling approximation of the complete field theory.
The effect of the fluctuations around the mean-field saddle point is found to be rather small. In the pressure itself it is barely discernible, see Fig.~\ref{f_L_P}, although it becomes stronger as the polarizability of the ions is increased. The density profile shows an effect only very close to the boundaries of the system, where the ionic density is enhanced depending on the coupling parameter, see Fig.~\ref{f_z_rho}. This modification of the ionic density in the region close to the dielectric boundaries of the system is partly due to the image effects \cite{Kanduc2007,Jho2008} and partly due to the ionic polarizability.
We then formulated a full strong-coupling theory which formally corresponds to a single-particle level description and derived its consequences in detail. In the strong-coupling limit the ions are expelled from the vicinity of the dielectric boundaries, see Fig. \ref{f_z_rho}. The origin of this effect lies in the dielectric image interactions that lead to a vicinal exclusion of the ions close to the dielectric discontinuities \cite{Kanduc2007,Jho2008}.
The results derived here, both for the mean-field plus fluctuations and for the strong-coupling regime, exhibit a dependence on the ion polarizability as well as on the size of the ions, Figs.~\ref{f_L_P} and \ref{f_L_P_sc_alpha_R}. The effect of ionic polarizability on the interaction pressure is connected partly with the changes in the density profile, leading to changes in the osmotic van't Hoff component of the interaction pressure, and partly with their contribution to the Maxwell stress term in Eq.~\ref{presform}. The ionic size dependence comes from divergences naturally present for point dipoles. We have to stress here that what we refer to as the {\em size of the ions} is actually the size of the ionic cavity in the solvent, which includes also their hydration shell \cite{Ben-Yaakov2011}. Despite the fact that the field theory arising from our model is a priori independent of the ionic size, we see that if one leaves the domain of mean-field theory, by either taking into account fluctuations or going to the strong-coupling limit, calculated thermodynamic quantities exhibit ultra-violet, or short distance, divergences. These divergences are associated with the inclusion of ionic polarizability as they do not arise in strong coupling or in the mean-field fluctuations for non-polarizable ions. We have argued that the length scale used to cut off ultra-violet divergences is thus the {\em size} of the polarizable molecules. Our results are thus in line with Bikerman \cite{Bikerman1942} who long ago argued for the role of the ionic size. The effects of the ion size on the interaction pressure are shown in Fig.~\ref{f_L_P_sc_alpha_R}. The size dependence mediated by the polarizability of the ions has nothing to do with steric effects and has not been seen before for non-polarizable ions or for polarizable ions at the mean-field level.
The effect of dielectric images, \emph{i.e.} of the outer permittivity, is shown for the density in Fig.~\ref{f_z_rho_eps}\ and for the pressure in Fig.~\ref{f_L_P_eps}. The weak-coupling limit is only weakly affected by the images, which is to be expected since this regime is dominated by the mean field that does not depend on $\varepsilon\ind{ext}$ \cite{Kanduc2007}. On the other hand, the strong-coupling limit is strongly affected, this time because images add a non-negligible term to the correlator (\ref{correl_k}).
Due to the polarizability of the ions, it is also clear that our two approximations break down if the parameters are too extreme, but for different reasons. This is easy to analyze for the strong coupling result (\ref{z1}): here extreme parameter values correspond either to ions which are too small or have too high a (negative) polarizability.
In this case, the effective permittivity around the ion may turn negative, leading to a field instability that shows up in the partition function.
For the weak coupling limit, on the other hand, if the dielectric function $1+\alpha n(z)$ becomes negative on the mean field level, a divergence appears for the fluctuations about the mean field and the system becomes unstable.
This can be interpreted in the sense that the validity of the strong coupling vs. weak coupling description no longer depends on a single coupling parameter, but actually on three parameters. More work would thus be needed to explore different regions of the parameter space and assess the validity of the WC-SC dichotomy in each of them. Our present work can only be seen as a first step towards this complicated endeavor.
One general conclusion stemming from the present work is that the contribution of polarizable counterions to the total partition function is in general non-additive, contrary to what is sometimes assumed \cite{Ninham1997,Edwards2004,LoNostro}. It is in fact highly non-additive at the weak-coupling level, whereas it can be reduced to an additive contribution to the free energy at the strong-coupling level only if the polarizability is large enough. Simply adding a van der Waals ion-polarizability-dependent contribution to the electrostatic potential of mean force is thus wrong.
A final note is in order about possible computational verifications of our analytical calculations \emph{via} coarse-grained simulations, which we did not attempt in this work. Since polarizability is a non-pairwise-additive effect, the simulation of the present model presents a considerable challenge: one would need to include the image interactions as well as the polarizability couplings to all orders, which would appear to be no small accomplishment. Until such simulations are actually performed, our analytical calculations will remain the sole means to assess the consequences of our model of Coulomb fluids.
\section{Acknowledgments}
VD and DSD would like to thank R.R. Horgan for a discussion about the divergences appearing in the fluctuations about the mean-field. DSD acknowledges support from the Institut Universitaire de France. RP acknowledges support of the The Leverhulme Trust and of ARRS through research program P1-0055 and research project J1-0908.
\bibliographystyle{h-physrev}
|
2,869,038,153,816 | arxiv | \section{Introduction}
The interplay between correlated electron physics and topology offers tantalizing new functionality where emergent phenomena from many-body interactions are protected at elevated temperatures and energy scales \cite{WKrempa2014,Rau2016,Tokura2017,Keimer2017}. Theoretical investigations into the kagome lattice have found a band structure with topological Dirac bands along with high density of states from electronic flat bands and van Hove saddle points, which offer unusual electronic instabilities and an opportunity for investigating this overlap \cite{Yu2012,Kiesel2012,Wang2013,Parameswaran2013,Mazin2014,Lee2016}. Depending on the band filling and many-body interactions generating instabilities at the Fermi level, phases such as density waves and superconductivity have been predicted \cite{Kiesel2012,Wang2013,Isakov2006,Guo2009,Wen2010,Kiesel2013,Ko2009}. The band structure, band filling, and influence of electronic correlations are directly observable using ARPES \cite{Lu2012,Sobota2021}.
Experimental observation of band renormalizations influencing magnetic Weyl fermions in Mn$_3$Sn as well as the observation of the kagome band structure, including Dirac points and flat bands, plus magnetic interactions in Fe$_3$Sn$_2$, revealed kagome physics is possible in actual materials \cite{Kuroda2017,Ye2018,Yin2018,Lin2018}. Recently, the band fillings of AV$_3$Sb$_5$ (A = K, Rb, Cs) \cite{Ortiz2019} and RMn$_6$Sn$_6$ (R = rare earth) \cite{Venturini1991,Ma2021} families of kagome materials have been discovered to be more supportive of enhanced electronic instabilities. Flat bands and saddle points have been observed for YMn$_6$Sn$_6$ while the interplay between magnetism and topology has been shown in TbMn$_6$Sn$_6$ \cite{Li2021,Yin2020}. The kagome electronic structure along with a charge density wave (CDW) and superconductivity have been observed in kagome CsV$_3$Sb$_5$ \cite{Ortiz2020,Neupert2022,Nakayama2021,Liu2021,Kang2022}.
The recent discovery of a CDW in ScV$_6$Sn$_6$ offers new opportunities to understand the origins of electronic instabilities in these topological kagome systems \cite{Arachchige2022}. In particular, the investigation of vanadium kagome layers in AV$_3$Sb$_5$ and RV$_6$Sn$_6$ (R = Y, Gd-Tm, and Lu) has provided crucial insights into these unique properties. The stacked kagome layers in AV$_3$Sb$_5$ and two kagome sheets separated by alternating RSn$_2$ and Sn$_4$ layers per unit cell in RV$_6$Sn$_6$ have both shown the potential to exhibit exotic electronic phases.
Here we report the temperature-dependent electronic structure changes of ScV$_6$Sn$_6$, a novel kagome metal material that accommodates a CDW phase below the critical temperature (T$_c$). A Lifshitz transition is identified in the ARPES spectra that is related to the saddle point moving across the Fermi level at T$_c$. This result shows the CDW behavior may be connected to nesting of the saddle point, similar to related materials~\cite{Kang2022}. However, no energy gap is observed at the Fermi level and thus the CDW is not a typical Fermi surface nesting scenario. In addition, our ARPES spectra, STM, and first-principles calculations identified the appearance of a new band below the CDW T$_c$ attributed to the surface kagome layer. This phenomenon has not been observed previously in kagome metal systems, and since it occurs conspicuously at the critical temperature of the CDW, it will play an essential role in understanding the CDW of ScV$_6$Sn$_6$.
\section{Results and Discussion}
ScV$_6$Sn$_6$ crystallizes in the P6/mmm space group. The unit cell consists of two kagome layers composed of V atoms, which are enclosed by Sn layers and SnSc layers along the out-of-plane direction, similar to other RV$_6$Sn$_6$ compounds (R = Gd, Ho, Y, Tb, etc.) \cite{Peng2021,Pokharel2021,Rosenberg2022,Pokharel2022} (Fig.~\ref{fig:bulk}a). The electronic band structures of ScV$_6$Sn$_6$, determined by first-principles density functional theory (DFT) calculations, show the characteristic features of the kagome lattice, such as a flat band due to the confinement of electrons caused by quantum interference in the kagome lattice, a Dirac point (DP) at the K point, and a saddle point (SP) at the M point from the hexagonal crystal symmetry (Fig.~\ref{fig:bulk}c,d). All bands are doubly degenerate due to the spatial inversion symmetry. The two kagome layers in the unit cell therefore also give rise to doubly degenerate flat bands (the gray shaded area in Fig.~\ref{fig:bulk}c). Similar features in the band structures are reported for other kagome metals~\cite{Pokharel2021, Rosenberg2022}.
Our orbital-resolved electronic structure calculations further confirm one complete set of kagome-lattice features with $d_{z^2}$ orbital composition. This set includes a Dirac point at the K point, with a small gap due to spin-orbit coupling, located 0.45~eV below the Fermi level ($E_F$), a saddle point near $E_F$, and a flat band located approximately 0.3~eV above $E_F$ across the entire Brillouin zone in the $k_z=0$ plane (Fig.~\ref{fig:bulk}c).
The bands characterized mainly by the $d_{z^2}$ orbital component (red points in Fig.~\ref{fig:bulk}c) are not continuous along the M-K line due to the interaction with bands of non-orthogonal orbital compositions of $d_{xz}$ and $d_{yz}$. The importance of the orbital character of the Dirac fermions is emphasized in several other reports on kagome metals \cite{Li2021, Peng2021, Yang2022,Liu2020}.
\begin{figure}[htb!]
\includegraphics[width=0.5\textwidth]{Fig1.pdf}
\caption{Crystal structure and electronic band structure of kagome metal ScV$_6$Sn$_6$ in the normal state. a, Top and side views of ScV$_6$Sn$_6$ crystal structure. The kagome layers consisting of V atoms are sandwiched by ScSn and Sn atomic layers. b, Brillouin zone of ScV$_6$Sn$_6$. c, Orbital projected band structure of ScV$_6$Sn$_6$ for the d orbitals of V atoms. d, The characteristic features of the kagome lattice on ScV$_6$Sn$_6$, such as a flat band, a Dirac point (DP) at the K point, and a saddle point (SP) at the M point. The gray shaded area indicates the flat bands, the DP and the SP are marked and magnified.}
\label{fig:bulk}
\end{figure}
The electronic structure of ScV$_6$Sn$_6$ is investigated using DFT and ARPES; the Fermi surface in the normal state (T = 124~K) and in the CDW state (T = 20~K), measured at the same photon energy $h\nu$ = 98~eV, is shown in Fig.~\ref{fig:FS}. Interestingly, the normal state Fermi surface is similar to previous reports of the kagome termination Fermi surface for GdV$_6$Sn$_6$ while the Fermi surface in the low temperature CDW state resembles that of the Sn termination Fermi surface for GdV$_6$Sn$_6$~\cite{Peng2021}. Scanning the photon beam across the sample did not yield variations in the spectral intensity and STM images, shown in the Supplemental Information Fig.~\ref{fig:SI-STM}. This reveals that the ARPES data come from mixed terminations (Fig.~\ref{fig:SI-STM}a) due to the small facets of the difficult-to-cleave ScV$_6$Sn$_6$ crystals. To investigate the $k_z$ dispersion, photon energy dependent scans were performed as shown in Fig.~\ref{fig:SI-kz}. No $k_z$ dispersion is observed at the Fermi level, similar to previous measurements of YMn$_6$Sn$_6$ and RbV$_3$Sb$_5$~\cite{Li2021,Liu2021}.
Previous investigations of ScV$_6$Sn$_6$ have revealed a CDW transition with T$_c$ = 92 K \cite{Arachchige2022}. Neutron and X-ray studies have reported CDW structural distortions with an unusual $q_{CDW} = (\frac{1}{3},\frac{1}{3},\frac{1}{3})$ corresponding to a distinct decrease in resistivity and magnetic susceptibility on cooling through T$_c$ \cite{Arachchige2022}.
These findings suggest changes to the Fermi surface are expected across T$_c$. While there are differences as noted above, the Fermi surfaces do not show any prominent folded electronic bands in the low temperature data due to the CDW phase. In the T = 124~K normal state Fermi surface, there is intensity at the $\bar{\text{M}}$ point as well as a point near $\bar{\text{K}}$ that lies on the $\bar{\Gamma}$-$\bar{\text{K}}$ line (Fig.~\ref{fig:FS}a,b). For the T = 20~K Fermi surface, in the CDW phase, the intensity at the $\bar{\text{M}}$ point and the intensity on the $\bar{\Gamma}$-$\bar{\text{K}}$ line is suppressed while enhanced intensity near $\bar{\text{K}}$, but on the $\bar{\text{K}}$-$\bar{\text{M}}$ line, is observed (Fig.~\ref{fig:FS}c). While this suppression of intensity could be due to electronic gaps from the CDW formation, we do not find gaps in the electronic structure at the Fermi level. It should be noted that a similar electronic structure and similar asymmetry to the intensity on the $\bar{\text{K}}$-$\bar{\text{M}}$ line due to photoemission matrix elements has been observed for GdV$_6$Sn$_6$ where no CDW is known to exist \cite{Peng2021}.
\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{Fig2.pdf}
\caption{The Fermi surfaces of ScV$_6$Sn$_6$ using DFT and ARPES. a, The theoretical Fermi surface of ScV$_6$Sn$_6$ for the normal state structure. b, The experimental Fermi surface of ScV$_6$Sn$_6$ at T = 124 K (normal state) using photon energy $h\nu$ = 98~eV with the Brillouin zone and high symmetry points highlighted. c, Experimental Fermi surface at T = 20 K (CDW state) using photon energy $h\nu$ = 98~eV with high symmetry points and high symmetry cuts shown in subsequent figures highlighted.}
\label{fig:FS}
\end{figure}
We further analyze the electronic structure of the high-temperature phase and show the theoretical and experimental band structures in Fig.~\ref{fig:RTband} along the two high symmetry lines defined in Fig.~\ref{fig:FS}c. Our band structures calculated for the bulk structure agree well with the band dispersion measured by ARPES. Dirac dispersion is indicated along $\bar{\text{M}}$-$\bar{\text{K}}$ at -0.2~eV below $E_F$ and the electron pocket is seen at $\bar{\text{M}}$ (Fig.~\ref{fig:RTband}a). This is seen clearly by ARPES in Fig.~\ref{fig:RTband}c,e at approximately the same energy. At $\bar{\text{K}}$ the Dirac cone at $E$-$E_F=-0.45$~eV, which is of $d_{z^2}$ character and one of the major features of the kagome band structure, is again clear in the bulk calculation (Fig.~\ref{fig:RTband}a,b), and part of the lower Dirac cone is especially bright along $\bar{\text{K}}$-$\bar{\Gamma}$ (Fig.~\ref{fig:RTband}c,e).
At $\bar{\text{K}}$ the occupied bands and electron pocket $E$-$E_F=-0.1$~eV is clear in the calculation but less prominent in the ARPES data. In Fig.~\ref{fig:RTband}c the linear Dirac dispersion of the occupied bands is faintly discernible. DFT calculations show that this Dirac dispersion and electron pocket consist of the orbital components $d_{x^2-y^2}$, $d_{xy}$, $d_{yz}$, and $d_{xz}$.
\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{Fig3.pdf}
\caption{Normal state band structures. a, DFT band structure for a semi-infinite slab along the $\bar{\text{M}}-\bar{\text{K}}-\bar{\Gamma}-\bar{\text{K}}-\bar{\text{M}}$ path and b, along the $\bar{\Gamma}-\bar{\text{K}}-\bar{\text{M}}-\bar{\text{K}}-\bar{\Gamma}$ path. c,d, ARPES band structures along the same paths and e,f, curvature analysis for the same.}
\label{fig:RTband}
\end{figure}
\begin{figure*}[t!]
\includegraphics[width=1.0\textwidth]{Fig5.pdf}
\caption{Temperature dependence of electronic structure and formation of the Lifshitz transition. a-d, ARPES data (top) and curvature analysis image (bottom) along Cut 2 shown in Fig.~\ref{fig:FS}c for T = 124~K, 93~K, 65~K, and 20~K, respectively. All data taken with photon energy $h\nu$ = 82~eV. Brillouin zone high symmetry positions shown in top left panel. Green dashed lines and $\bar{\text{M}}$ marker are guides to the eye to highlight the upward motion of the bands at the $\bar{\text{M}}$ point in the data. Orange arrow highlights band crossing that also moves up in energy when the temperature is reduced.}
\label{fig:lifshitz}
\end{figure*}
To understand the electronic structure and its implications for the CDW, Fig.~\ref{fig:lifshitz} shows the temperature dependence of the electronic structure along Cut 2 outlined in Fig.~\ref{fig:FS}c. For these figures, centered at $\bar{\text{M}}$ along the $\bar{\text{K}}$-$\bar{\text{M}}$ direction, a systematic upward motion in energy is observed for the band structure as the temperature is lowered across the CDW transition. At the $\bar{\text{M}}$ point, this band shift makes the bands touching $E_F$ appear to change from an electron-like dispersion above T$_c$ to a hole-like dispersion below T$_c$, as highlighted by the green dashed curves in Fig.~\ref{fig:lifshitz}. This change in the Fermi surface contour is indicative of a Lifshitz transition due to the van Hove saddle point moving above $E_F$ as the temperature is lowered below T$_c$~\cite{Lifshitz1960}.
For the related kagome CsV$_3$Sb$_5$ and RbV$_3$Sb$_5$ systems, the formation of the charge order is linked to the van Hove singularities near the Fermi level~\cite{Liu2021, Kang2022,YHu2022}. While it is suggested the van Hove singularities in CsV$_3$Sb$_5$ create a Fermi surface nesting scenario, in ScV$_6$Sn$_6$ we do not find any energy gaps at the Fermi level below T$_c$.
On the other hand, the energy shift of the bands and Lifshitz transition does change the shape of the Fermi surface. The shift of spectral weight from the $\bar{\text{M}}$ point towards the $\bar{\text{K}}$ point across the CDW transition is similar to what is observed in the Fermi surface plots (Fig.~\ref{fig:FS}b, c). However, we do not find any evidence that the CDW gaps the Fermi surface. Hence, while the van Hove saddle point likely plays a role in the CDW formation, this is not a prototypical Peierls-type instability. Nonetheless, the relocation of a large density of states at the Fermi level correlates with the reduced resistivity of the material as the CDW forms~\cite{Arachchige2022}.
In our ARPES experiments the electronic structure in the normal and CDW phases are very similar, with one notable difference. In the low-temperature CDW phase an additional band near the Fermi level is observed that does not appear in the normal state as shown in Fig.~\ref{fig:newband}. Temperature dependent STM/S data also support this observation.
Shown in Fig.~\ref{fig:SI-STM}c is the comparison of the local density of states (LDOS) below (4.6~K, blue) and above (120~K, black) the CDW transition temperature from the kagome termination. There is a distinct peak around 50~mV below the Fermi level in the 4.6~K dI/dV map, but it is absent in the 120~K data.
To find the origin of the additional band in the CDW phase, we first evaluated the band structures for the low-temperature bulk structure. The band structures are investigated at $k_z=\frac{n}{6}$ ($n$ = 0, 1, 2, 3) because the CDW triples the structural periodicity along the $c$-axis. No additional bands were identified near the Fermi level, and the band structure remains similar to that of the high-temperature (normal state) structure (see Fig.~\ref{fig:SI-2}).
We present the band structures of two kinds of surface terminations for the CDW state slab structures in the Supplementary Information (Fig.~\ref{fig:SI-slab}). To compare the electronic structures of the normal state between the calculation and ARPES experiments, we investigated unfolded band structures along the same high symmetry lines. For the Sn-termination (Fig.~\ref{fig:SI-slab}b,d), the dangling bond state of Sn on the surface appears above the Fermi level, but there is no additional band near the Fermi level. For the kagome termination (Fig.~\ref{fig:SI-slab}c,e), interestingly, we can clearly see an additional surface band analogous to the extra band observed in the ARPES experiment near the Fermi level.
We also confirm that the orbital contribution of the additional band is the $d_{z^2}$ orbital of surface V atoms. We further analyze the evolution of the additional bands by comparing the electronic structure for the kagome termination in the normal state and in the CDW state. In the band structures for the kagome termination in the normal state, the additional bands already exist, with one unoccupied and one occupied band (see Fig.~\ref{fig:SI-1}g,h). As the temperature decreases below T$_c$ and the CDW is induced, these additional bands move down below the Fermi level. Since this is a metal-to-metal CDW transition, there is no energy gain associated with the opening of a band gap. Nevertheless, the occupied band gains energy as the CDW pushes it further down in energy.
\begin{figure*}[t!]
\includegraphics[width=1.0\textwidth]{new-Fig4.pdf}
\caption{Emergence of new bands from room temperature (normal state) to low temperature (CDW) structures. a, DFT bands of the normal state and b, CDW state. c,d, ARPES data at T = 124~K in the normal state and at T = 20~K in the CDW state, respectively. e,f, Curvature analysis for the normal and CDW states, respectively. The red dashed lines are to guide the eye to the absence and location of the extra band in the normal state that is present in the CDW state (red arrows). }
\label{fig:newband}
\end{figure*}
Figures~\ref{fig:newband}c-f show Cut 1 with $h\nu$ = 98~eV at temperatures below and above the CDW transition, which emphasize the existence of the extra band in the CDW state. The extra band matches that found in the DFT calculation (Fig.~\ref{fig:newband}a,b). The differences are clearer in curvature method plots (Fig.~\ref{fig:newband}e,f) \cite{Zhang2011}. The extra band is missing in the normal state above T$_c$ as highlighted in Fig.~\ref{fig:newband}c.
While this additional band in the CDW phase is the most prominent difference, there are more subtle changes in the electronic structure observed in the curvature method plots near the $\bar{\text{M}}$ point at $E$-$E_F$ = -0.4~eV and near the $\bar{\text{K}}$ point at $E$-$E_F$ = -0.2~eV as highlighted in Fig.~\ref{fig:SI-nogap}. Additional data for the evolution of the electronic structure at intermediate temperatures is provided in the Supplementary Information Figs.~\ref{fig:SI-3},~\ref{fig:SI-4},~and ~\ref{fig:SI-5}.
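For reference, the curvature analysis used in these figures follows Ref.~\cite{Zhang2011} and amounts to a regularized second derivative of the intensity map. The following minimal Python sketch is our own illustrative implementation; the scaling of the free regularization constant is an arbitrary choice, not the one used to produce the figures.
\begin{verbatim}
import numpy as np

def curvature_1d(intensity, axis=0, c0_factor=1e-3):
    """1D curvature of an ARPES intensity map along `axis`
    (e.g. the energy axis): C = f'' / (C0 + f'^2)^(3/2)."""
    d1 = np.gradient(intensity, axis=axis)
    d2 = np.gradient(d1, axis=axis)
    c0 = c0_factor * np.max(d1**2)     # free regularization constant
    return d2 / (c0 + d1**2)**1.5      # band peaks show up as strong minima
\end{verbatim}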
\section{Conclusion}
Our study of the topological kagome metal, ScV$_6$Sn$_6$, has revealed a novel phenomenon of surface bands and Lifshitz transition which are evident at the critical temperature of the CDW state. This finding is significant as it provides new insights into the electronic properties of kagome materials and their CDW behavior. The identification of this new surface band could also lead to the design and development of new materials with unique functionality. Future studies on kagome materials may benefit from the findings of our investigation, which sheds light on the origin of electronic instabilities in these unique systems.
\section*{Methods}
Our ab \textit{initio} calculations are based on density functional theory (DFT) \cite{HK1964,KS1965} as implemented in the Vienna ab \textit{initio} simulation package (VASP) \cite{Kresse1993,Kresse1996} with projector augmented wave potentials \cite{PAW1994,Kresse1999} and spin-orbit coupling. The Perdew-Burke-Ernzerhof (PBE) form is employed for the exchange-correlation functional with the generalized gradient approximation (GGA) \cite{GGA1996}. The energy cutoff is set to 520~eV for all calculations.
For the crystal structures, we used the P6/mmm space group with lattice constants $a=b=5.4530$~\AA~and $c=9.2311$~\AA, and 13 atoms for the high-temperature (normal state) structure. For the low-temperature structure we use the experimental structure \cite{Arachchige2022} in a P6/mmm space group supercell ($\sqrt{3}\times\sqrt{3}\times$3) with lattice constants $a=b=9.4433$~\AA~and $c=27.7281$~\AA, and 117 atoms. The Brillouin zone is sampled using a 31$\times$31$\times$11 $\Gamma$-centered $k$-grid for the high-temperature structure and an 11$\times$11$\times$3 $\Gamma$-centered $k$-grid for the low-temperature structure. From the DFT result, we obtain the maximally localized Wannier functions for the 4$s$ and 3$d$-orbitals of V atoms and the 5$p$-orbitals of Sn atoms by using the WANNIER90 code \cite{Mostofi2014}, which are used to analyze the surface density of states. The surface projected local density of states was calculated with WANNIERTOOLS \cite{Wu2018}, which is based on the iterative Green's function technique \cite{Sancho1985}.
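For reproducibility, the input settings quoted above for the normal-state calculation can be collected as follows. This is a hypothetical minimal snippet restricted to the parameters stated in the text (plane-wave cutoff, PBE functional, spin-orbit coupling, and the $\Gamma$-centered $k$-grid); all other tags are left at their defaults.
\begin{verbatim}
# write minimal VASP input files for the normal-state calculation
incar = """SYSTEM  = ScV6Sn6 normal state
ENCUT   = 520        ! plane-wave cutoff (eV)
GGA     = PE         ! PBE exchange-correlation functional
LSORBIT = .TRUE.     ! include spin-orbit coupling
"""
kpoints = """Gamma-centered grid, normal-state cell
0
Gamma
31 31 11
0 0 0
"""
for name, text in (("INCAR", incar), ("KPOINTS", kpoints)):
    with open(name, "w") as fh:
        fh.write(text)
\end{verbatim}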
ScV$_6$Sn$_6$ crystals were grown from a Sn-rich melt as described in our recent paper~\cite{Arachchige2022}. The crystals grow as hexagonal blocks 0.4--3~mm in size. ScV$_6$Sn$_6$ exhibits relatively poor 001 cleavage in contrast to the excellent 001 cleavage we observe in LuV$_6$Sn$_6$, YV$_6$Sn$_6$ or RMn$_6$Sn$_6$ crystals.
Synchrotron based ARPES measurements were carried out at Beamline 4.0.3 at the Advanced Light Source utilizing a Scienta R8000 photoelectron analyzer allowing for an angular resolution less than 0.2$^{\circ}$ and an energy resolution better than 20~meV. Samples were cleaved in vacuum at a base pressure better than 5$\times$ 10$^{-11}$ Torr. Measurements were performed with both linearly horizontal and linearly vertical polarizations in the energy range $h\nu$ = 38 – 124~eV and a sample temperature range of T = 20 – 124~K.
Single crystals were cleaved in ultrahigh vacuum (UHV) at low temperature and then immediately transferred to the scanning tunneling microscopy/spectroscopy (STM/S) head which was precooled to 4.6~K or 78~K without breaking the vacuum. The STM/S experiments were carried out using a UHV STM with a base pressure better than 2 $\times$ 10$^{-10}$~Torr. Pt-Ir tips (electro-polished after mechanical grinding) were conditioned on Au(111) surface before each measurement.
\section*{Acknowledgement}
Theory work and ORNL-led synchrotron based ARPES measurements were supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division (J. W. V., M. Y., H. L. and R. M.), and by the U.S. Department of Energy (DOE), Office of Science, National Quantum Information Science Research Centers, Quantum Science Center (S.-H. K. and Q. L.).
This research used resources of the Advanced Light Source, which is a DOE Office of Science User Facility under contract No. DE-AC02-05CH11231. D.M. acknowledges support from the US Department of Energy, Office of Science, Basic Energy Sciences, Materials Science and Engineering Division. H.W.S.A and W.R.M. acknowledge support from the Gordon and Betty Moore Foundation’s EPiQS Initiative, Grant GBMF9069 to DM. STM/S research conducted at the Center for Nanophase Materials Sciences (CNMS), which is a US Department of Energy, Office of Science User Facility at Oak Ridge National Laboratory (S. H., H. J., and Z. G.). This research used resources of the Oak Ridge Leadership Computing Facility and the National Energy Research Scientific Computing Center, US Department of Energy Office of Science User Facilities.
This manuscript has been authored by UT-Battelle, LLC, under Contract No. DE-AC0500OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for the United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
\clearpage
\bibliographystyle{apsrev4-2}
|
2,869,038,153,817 | arxiv |
\section{Basics}
\label{sec-basics}
\noindent
\emph{Finite automata.}
We consider languages over a fixed finite alphabet $A=\{a,b,\ldots\}$
and finite automata (NFAs) of the form $\mathcal{A}=(Q,A,{\cdot},I,F)$ where
``$\cdot$'' denotes the transition function. For $p\in Q$ and $a\in
A$, $p\concatdot a$ is a subset of $Q$.
The transition function is extended to sets of
states $S\subseteq Q$ via $S\concatdot a=\bigcup_{p\in S}p\concatdot a$ and to
words by $S\cdot\epsilon = S$ and $S\concatdot (au)=(S\concatdot a)\concatdot u$. We
often write $p\step{u}q$ rather than $q\in (p\concatdot u)$. The language
recognized by $\mathcal{A}$ is $L(\mathcal{A})\stackrel{\mbox{\begin{scriptsize}def\end{scriptsize}}}{=} \{u\in A^* ~|~ (I\cdot u)\cap
F\neq\emptyset\}$.
$\mathcal{A}$ is deterministic (is a DFA) if $|I|\leq 1$ and $|p\cdot a|\leq 1$ for all
$p$ and $a$. It is complete if $|I|\geq 1$ and $|p\cdot a|\geq 1$ for
all $p$ and $a$.
The transition function induces a quasi-ordering on the states of
$\mathcal{A}$: $p\leq_\mathcal{A} q$ if there is a word $u$ such that
$p\step{u}q$, i.e., when $q$ can be reached from $p$ in the directed
graph underlying $\mathcal{A}$. The quasi-ordering is a partial ordering if
$\mathcal{A}$ is acyclic, i.e., $p\step{u}q\step{v}p$ implies $p=q$; or in
other words, when the only loops in $\mathcal{A}$ are self-loops. It is well
known that the $\mathcal{R}$-trivial languages are exactly the languages
accepted by (deterministic) acyclic automata~\cite{brzozowski80b}.
Regarding self-loops, we
say that $p$ is $a$-stable when $p\cdot a=\{p\}$, and that it is
$B$-stable, where $B\subseteq A$ is some subalphabet, if it is
$a$-stable for each $a\in B$. \\
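As a concrete illustration of this notion, the following Python sketch tests whether an automaton is acyclic in the above sense, i.e., whether every cycle is a self-loop; the encoding of the transition function as a dictionary from (state, letter) pairs to sets of states is our own illustrative choice.
\begin{verbatim}
def only_self_loops(states, alphabet, delta):
    """True iff every cycle of the automaton is a self-loop,
    i.e. p --u--> q --v--> p implies p = q."""
    # successor graph with self-loops removed
    succ = {p: {q for a in alphabet
                for q in delta.get((p, a), set()) if q != p}
            for p in states}
    done, on_path = set(), set()

    def dfs(p):
        on_path.add(p)
        for q in succ[p]:
            if q in on_path:                    # non-trivial cycle
                return False
            if q not in done and not dfs(q):
                return False
        on_path.discard(p)
        done.add(p)
        return True

    return all(p in done or dfs(p) for p in states)

# a self-loop alone is fine; a 2-cycle is not
assert only_self_loops({0, 1}, "ab", {(0, "a"): {0, 1}})
assert not only_self_loops({0, 1}, "ab", {(0, "a"): {1}, (1, "b"): {0}})
\end{verbatim}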
\noindent
\emph{Subwords and piecewise-testable languages.}
We write $u\preccurlyeq v$ when $u$ is a (scattered) subword of $v$,
i.e., can be obtained from $v$ by removing some of its letters
(possibly none, possibly all). A word $u=a_1a_2\cdots a_n$ generates
a principal filter in $(A^*,\preccurlyeq)$. This is the language $L_u
=\{v~|~u\preccurlyeq v\}$, also denoted by the regular expression
$A^*a_1A^*a_2\ldots A^*a_nA^*$. The example in the introduction
has $\mathit{Sol}(\psi)=
L_{\texttt{a}\texttt{b}}\cap L_{\texttt{b}\texttt{c}}\cap (A^*\setminus L_{\texttt{a}\texttt{c}})$.
For $k\in\mathbb{N}$, we write $u\sim_k v$ when $u$ and $v$ have the same
subwords of length at most $k$~\cite{simon72}. This equivalence is
called \emph{Simon's congruence} since $u\sim_k v$ implies $xuy\sim_k
xvy$ for all $x,y\in A^*$. Furthermore, $\sim_k$ partitions $A^*$ in a
finite number of equivalence classes.
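For small $k$ and short words, $\sim_k$ can be tested naively by enumerating all subwords of length at most $k$; the following Python sketch (exponential in the word lengths, for illustration only) does exactly that.
\begin{verbatim}
from itertools import combinations

def subwords_upto(u, k):
    """All (scattered) subwords of u of length at most k."""
    return {"".join(u[i] for i in idx)
            for r in range(k + 1)
            for idx in combinations(range(len(u)), r)}

def simon_equiv(u, v, k):
    """u ~_k v : u and v have the same subwords of length <= k."""
    return subwords_upto(u, k) == subwords_upto(v, k)

assert simon_equiv("aab", "ab", 1)      # same letters occur
assert not simon_equiv("aab", "ab", 2)  # "aa" tells them apart
\end{verbatim}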
\begin{definition}[Piecewise-testable languages]
\label{def-PT-multiple}
A language $L\subseteq A^*$ is piecewise-testable
if it satisfies one of the equivalent following
properties:\footnote{The last four characterizations
refer to notions that we do not redefine in this article because we do not
use them. See references for details.}
\begin{itemize}
\item $L$ is a finite boolean combination of principal filters,
\item $L$ is a union $[u_1]_k\cup\cdots\cup [u_\ell]_k$ of
$\sim_k$-classes for some $k\in\mathbb{N}$,
\item $L$ can be defined by a $\mathcal{B}\Sigma_1$-formula in the
first-order logic over words~\cite{DGK-ijfcs08},
\item the syntactic monoid of $L$ is finite and $\mathcal{J}$-trivial
(Simon's theorem)~\cite{simon72},
\item the minimal automaton for $L$ is finite, acyclic, and satisfies
the UMS property~\cite{simon75,stern85},
\item the minimal automaton for $L$ is finite, acyclic, and
locally confluent~\cite{klima2013}.
\end{itemize}
\end{definition}
The piecewise-testable languages over some $A$ form a variety and we
mentioned the associated closure properties in our introduction. Note
that piecewise-testable languages are not closed under alphabetic
morphisms, concatenations, or star-closures.
\\
\noindent
\emph{Shuffling languages.}
In this note we focus on the shuffle product of words and languages, and more
generally on their parameterized infiltration product. When
$C\subseteq A$ is a subalphabet and $u,v$ are two words, we let
$u\uparrow_C v$ denote the language of all words that are obtained by
shuffling $u$ and $v$ \emph{with possible sharing of letters from
$C$}. This is better defined via a notation for extracting subwords:
for a word $u=a_1a_2\cdots a_n$ of length $n$ and a subset
$K=\{i_1,\ldots,i_r\}\subseteq\{1,\ldots,n\}$ of positions in $u$
where $i_1<i_2<\cdots<i_r$, we write $u_K$ for the subword
$a_{i_1}a_{i_2}\cdots a_{i_r}$ of $u$. Then we let
\[
x\in u\uparrow_C v
\iff \left\{
\begin{array}{l}
\exists K,K':
K\cup K'= \{1,2,\ldots,|x|\}, \\
x_K=u, x_{K'}=v, \text{ and } x_{K\cap K'}\in C^*.
\end{array}
\right.
\]
The operation is lifted from words to languages in the standard way
via $L\uparrow_C L'=\bigcup_{u\in L}\bigcup_{v\in L'}u\uparrow_C v$.
This generalizes shuffle products and the interpolation products
$L\uparrow L'$ from~\cite{pin83,sakarovitch83} since $L\shuffle L' =
L\uparrow_\emptyset L'$ and $L\uparrow L' = L\uparrow_A L'$. Note that
$L\uparrow_C L'\subseteq L\uparrow_{C'}L'$ when $C\subseteq C'$.
Also note that $L\uparrow_C L'=L\shuffle L'$ when
$L$ or $L'$ is subword-closed.
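Operationally, $u\uparrow_C v$ admits a simple recursive computation: a word of $u\uparrow_C v$ starts with the first letter of $u$, or with the first letter of $v$, or, when these coincide and belong to $C$, with that letter shared by both. The following Python sketch (illustrative, not optimized) implements this recursion; note that $C=\emptyset$ gives the plain shuffle.
\begin{verbatim}
from functools import lru_cache

def infiltration(u, v, C):
    """The set of words obtained by shuffling u and v with possible
    sharing of letters from the subalphabet C (the parameterized
    infiltration product of the text)."""
    shared = set(C)

    @lru_cache(maxsize=None)
    def go(u, v):
        if not u or not v:
            return frozenset({u + v})
        words = {u[0] + w for w in go(u[1:], v)}   # take a letter of u
        words |= {v[0] + w for w in go(u, v[1:])}  # take a letter of v
        if u[0] == v[0] and u[0] in shared:        # share a common letter
            words |= {u[0] + w for w in go(u[1:], v[1:])}
        return frozenset(words)

    return set(go(u, v))

print(sorted(infiltration("ab", "b", "")))   # plain shuffle: abb, bab
print(sorted(infiltration("ab", "b", "b")))  # sharing b also yields ab
\end{verbatim}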
A
\emph{shuffle ideal} is any language of the form $L\shuffle A^*$. It
is well-known that shuffle ideals are finite unions of principal
filters~\cite{haines69,heam2002} hence they are piecewise-testable.
\begin{theorem}[Main result]
\label{thm-main}
If $L$ is regular and $X$-trivial (where $X$ can be $\mathcal{R}$, $\mathcal{L}$,
or $\mathcal{J}$) then $L\uparrow_C L'$ is regular and $X$-trivial when $L'$
is finite, or cofinite, or is a shuffle ideal.
\end{theorem}
Let us first note that, since $A$ is finite, Theorem~\ref{thm-main}
answers the question about $L\shuffle A$ raised in our introduction.
A proof of the Theorem is given in the next section after a
few observations that we now make.
Let us mention a few directions in which our main result cannot be
extended:
\begin{itemize}
\item The shuffle of two piecewise-testable languages is
star-free~\cite[Theorem~4.4]{castiglione2012} but is not always
piecewise-testable: for example $a^*\shuffle
ab^*$, being $a(a+b)^*$, is not piecewise-testable while $a^*$ and $ab^*$ are.
\item The concatenation $L{\cdot} F$ of a piecewise-testable $L$ and a
finite $F$ is not always piecewise-testable: $(a+b)^*$ is
piecewise-testable but $(a+b)^*a$ is not. Note that $L\concatdot F$ is
included in $L\shuffle F$ that we claim is piecewise-testable.
\item The scattered residual $L \dashrightarrow u$ of a
piecewise-testable $L$ by some word $u$ is not always
piecewise-testable. For example $ac(a+b)^* \dashrightarrow
c=a(a+b)^*$. (Recall that $w \dashrightarrow u$ is the set of all
words $v$ such that $w\in u\shuffle v$, obtained by removing the
subword $v$ somewhere along $w$~\cite{kari94}.)
\end{itemize}
Finally, there are some (admittedly degenerate) situations that are
not covered by Theorem~\ref{thm-main} and where the shuffle of
two piecewise-testable languages is piecewise-testable.
\begin{proposition}
If $L_1,\ldots,L_m\subseteq A^*$ are piecewise-testable then
$L_1\shuffle \cdots \shuffle L_m$ is piecewise-testable in any of the following
cases:
\begin{itemize}
\item the $L_i$'s are all complements of shuffle ideals, i.e., they are
subword-closed;
\item their subalphabets are pairwise disjoint.
\end{itemize}
\end{proposition}
The first claim is easy to see since the shuffle of subword-closed
languages is subword-closed, and the second claim\footnote{Already
given in the long version of~\cite{masopust2016b}.}
is a consequence of the following Lemma.
\begin{lemma}[{\protect See also~\cite[Lemma~6]{esik98}}]
\label{lem-disjoint-alpha}
Let $\mathfrak{F}$ be a family of languages over $A$ that is closed under
intersections and inverse morphisms. If $L_1,L_2\in\mathfrak{F}$ use
disjoint subalphabets, then $L_1\shuffle L_2$ is in $\mathfrak{F}$ too.
\end{lemma}
\begin{proof}
Write $e_B: A^*\to A^*$ for the erasing morphism that replaces all
letters from some subalphabet $B$ with $\epsilon$ and leaves other
letters unchanged. Assuming $L_1\subseteq A_1^*$ and $L_2\subseteq
A_2^*$, with furthermore $A_1\cap A_2=\emptyset$, one has
\begin{gather*}
L_1\shuffle L_2 = (L_1\shuffle A_2^*)\cap (L_2\shuffle A_1^*)
= e_{A_2}^{-1}(L_1) \cap e_{A_1}^{-1}(L_2)
\:.
\end{gather*}
The last equality shows that $L_1\shuffle L_2$ is in $\mathfrak{F}$.
\end{proof}
\section{The question of piecewise complexity}
\label{sec-complexity}
We write $h_A(L)$ for the \emph{piecewise complexity} of $L$, defined
as the smallest $k$ such that $L$ is $k$-$\ComplexityFont{PT}$, i.e., can be written as
a union $L=[u_1]_k \cup \cdots \cup [u_r]_k$ of $\sim_k$-classes over
$A^*$. We let $h_A(L)=\infty$ when $L$ is not piecewise-testable. For
notational convenience, we usually write $h(L)$ when the alphabet is
understood\footnote{The only situation where $A$ is relevant happens
for $h_A(A^*)=0<h_{A'}(A^*)=1$ when $A\subsetneq A'$.} and
write $h(u)$ for $h(\{u\})$ when $L=\{u\}$ is a singleton.
It was argued in~\cite{KS-csl2016} that $h(L)$ is an important, robust
and useful, descriptive complexity measure for $\ComplexityFont{PT}$ languages. In
this light, a natural question is to provide upper-bounds on
$h(L\shuffle L')$ as a function of $h(L)$ and $h(L')$.
Computing or bounding $h(L)$ has received little attention
until~\cite{KS-csl2016}, and the available toolset for these questions
is still primitive. In this section we provide some preliminary
answers for $L\shuffle L'$ and slightly enrich the available toolset.
\\
Before looking at simpler situations, let us
note that, in general, the piecewise-complexity of $L\shuffle w$ can
be much higher than $h(L)$ and $h(w)$.
\begin{proposition}[Complexity blowup]
\label{prop-blowup}
One cannot bound $h(L\shuffle w)$ with a
polynomial of $h(L)+h(w)$, even if we require $h(L)= 0$.
(NB: this statement assumes unbounded alphabets.)
\end{proposition}
\begin{proof}
Pick some $\lambda\in\mathbb{N}$ and let $U_n$ be a word over a $n$-letter
alphabet $A_n=\{a_1,\ldots,a_n\}$, given by $U_0=\epsilon$ and
$U_{i+1}=(U_i a_{i+1})^\lambda U_i$. It is known that
$h(U_n)=n\lambda+1$~\cite[Prop.~3.1]{KS-csl2016}.
On the other hand $h(A_n^*\shuffle
U_n)=h(L_{U_n})=|U_n|=(\lambda+1)^n-1$ since, for any word $u$,
$h(L_u)=|u|$~\cite[Prop.~4.1]{KS-csl2016}.
\end{proof}
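The words $U_n$ used in this proof are easy to generate, and the length claim $|U_n|=(\lambda+1)^n-1$ can be checked mechanically, as in the following Python sketch.
\begin{verbatim}
def U(n, lam, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """The words of the proof: U_0 = eps, U_{i+1} = (U_i a_{i+1})^lam U_i."""
    u = ""
    for i in range(n):
        u = (u + alphabet[i]) * lam + u
    return u

# |U_n| = (lam+1)^n - 1, hence h(A_n^* shuffle U_n) = |U_n| is exponential
for n in range(1, 5):
    assert len(U(n, lam=2)) == 3**n - 1
\end{verbatim}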
\subsection{Simple shuffles}
\begin{proposition}
\label{prop-h-disjoint-alpha}
Assume that $L_1$ and $L_2$ are two non-empty piecewise-testable
languages on disjoint alphabets. Then $h(L_1\shuffle L_2)=\max
(h(L_1),h(L_2))$.
\end{proposition}
\begin{proof}
Since $k$-$\ComplexityFont{PT}$ languages form a variety~\cite[Lemma~2.3]{therien81},
\Cref{lem-disjoint-alpha} applies and yields $h(L_1\shuffle
L_2)\leq \max(h(L_1),h(L_2))$.
To see that $h(L_1\shuffle L_2)\geq h(L_1)$, we write $k=h(L_1\shuffle
L_2)$ and show that $L_1$ and $L_2$ are closed under $\sim_k$:
Pick any word $u\in L_1$ and any $u'\in A_1^*$ with $u\sim_k
u'$. Since $L_2$ is not empty, there is some $v\in L_2$ and we obtain
$uv\in L_1\shuffle L_2$, and also $u'v\in L_1\shuffle L_2$ since
$uv\sim_k u'v$. Necessarily $u'\in L_1$ since $L_1$ and $L_2$ have
disjoint alphabets. Hence $L_1$
is closed under $\sim_k$, i.e., $h(L_1)\leq k$. The same
reasoning applies to $L_2$.
\end{proof}
\begin{proposition}
\label{prop-h-shuf-filters}
Assume that $L_u$ and $L_v$ are two principal filters.
Then $h(L_u\shuffle L_v)\leq h(L_u)+h(L_v)$.
\end{proposition}
\begin{proof}
Recall that $h(L_u)=|u|$ as noted above. We then observe
that $L_u\shuffle L_v=\bigcup_{w\in u\shuffle v}L_w$ and that
$|w|=|u|+|v|$ for all $w\in u\shuffle v$. Since each $L_w$ is
$|w|$-$\ComplexityFont{PT}$ and $k$-$\ComplexityFont{PT}$ languages are closed under unions, we get
$h(L_u\shuffle L_v)\leq |u|+|v|=h(L_u)+h(L_v)$.
\end{proof}
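The two observations in this proof are mechanical to check on small
instances, e.g.\ with the following brute-force sketch (ours) of the
word shuffle:
\begin{lstlisting}[language=Python]
from functools import lru_cache

@lru_cache(maxsize=None)
def word_shuffle(u, v):
    # the finite set of interleavings of u and v
    if not u or not v:
        return frozenset({u + v})
    return (frozenset(u[0] + w for w in word_shuffle(u[1:], v))
            | frozenset(v[0] + w for w in word_shuffle(u, v[1:])))

assert "acbd" in word_shuffle("ab", "cd")
assert all(len(w) == 4 for w in word_shuffle("ab", "cd"))
\end{lstlisting}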
The upper bound in \Cref{prop-h-shuf-filters} can be reached, an easy
example being $h(L_{a^n}\shuffle L_{a^m})=h(L_{a^{n+m}})=n+m$. The
inequality can also be strict, as exemplified by
\Cref{prop-h-disjoint-alpha}.
\subsection{Shuffling finitely many words}
Finite languages are piecewise-testable and closed under shuffle
products. Their piecewise complexity reduces to the case of individual
words
in view of the following (from~\cite{KS-csl2016}):
\begin{gather}
\label{eq-h-F}
h(F) = \max_{u\in F} h(u)
\quad
\text{when $F$ is finite}.
\end{gather}
\begin{lemma}
\label{lem-h-shuffle-words}
$h(u_1\shuffle u_2 \shuffle \cdots \shuffle u_m)\leq
1+\max_{a\in A}\bigl(|u_1|_a+\cdots+|u_m|_a\bigr)$.
\end{lemma}
\begin{proof}
Assume $A=\{a_1,\ldots,a_n\}$ and define $\ell_1,\ell_2,\ldots,\ell_n$
via $\ell_j=|u_1|_{a_j}+\cdots+|u_m|_{a_j}$. From
\[
u_1 \shuffle \cdots \shuffle u_m \:\subseteq\:
a_1^{\ell_1}\shuffle \cdots \shuffle a_n^{\ell_n}
\:,
\]
we deduce
\begin{align*}
h(u_1\shuffle \cdots \shuffle u_m)&\leq
h\bigl(a_1^{\ell_1}\shuffle \cdots \shuffle a_n^{\ell_n}\bigr)
\\
\shortintertext{by Eq.~\eqref{eq-h-F}}
&= \max
\bigl( h(a_1^{\ell_1}),\ldots,h(a_n^{\ell_n})\bigr)
\\
\shortintertext{by Prop.~\ref{prop-h-disjoint-alpha}}
&=
\max(1+\ell_1,\ldots,1+\ell_n)\:.\qedhere
\end{align*}
\end{proof}
We may now bound $h(u_1\shuffle u_2\shuffle \cdots)$ as a function of
$h(u_1),h(u_2),\ldots$.
\begin{theorem}[Upper bound for shuffles of words]
\label{thm-h-shuffle-words}
Assume $|A|=n$.\\
(1) $h(u_1\shuffle u_2 \shuffle \cdots \shuffle u_m)$
is in $O\bigl(\bigl[\sum_{i=1}^m h(u_i)\bigr]^n\bigr)$.
\\
(2)
This upper bound is tight: for every $\lambda\in\mathbb{N}$, there exist words
$u_1,\ldots,u_m$ with fixed $m=n$ and such that $h(u_1\shuffle \cdots \shuffle
u_m)=(\lambda+1)^n$ and $h(u_1)+\cdots+h(u_m)=n^2\lambda+n$.
\end{theorem}
\begin{proof}
(1)
By \Cref{lem-h-shuffle-words},
\begin{align*}
& h(u_1\shuffle u_2 \shuffle \cdots \shuffle u_m)-1
\\
\leq &
\max_{a\in A}\bigl(|u_1|_a+\cdots+|u_m|_a\bigr)
\leq
\sum_{i=1}^m |u_i|
\:.
\end{align*}
On the other hand, \cite[Prop.~3.8]{KS-csl2016} showed that
\[
|u|< \left(\frac{h(u)}{|A|}+2\right)^{|A|} \text{ for any word $u\in A^*$.}
\]
Thus, for fixed $A$, $|u|$ is $O(h(u)^{|A|})$ and
$\sum_i |u_i|$ is $O\bigl(\bigl[\sum_i h(u_i)\bigr]^{|A|}\bigr)$, which
establishes the upper bound claim.
\noindent
(2) We consider $U_n$ as defined in the proof of Proposition~\ref{prop-blowup} and, for
$j=1,\ldots,m$, let $u_{j}$ be $r^j(U_n)$ where $r:A^*\to A^*$ is the
circular renaming that replaces each $a_i$ by $a_{i+1}$ (counting
modulo $n$). Write $\ell$ for $|U_n|$, i.e., $\ell=(\lambda+1)^n-1$. We
saw that $h(u_{j})=h(U_n)=n\lambda+1$ so, fixing $m=n$, $\sum_{i=1}^m
h(u_i)=n^2\lambda+n$ as claimed. Let $L = u_{1}\shuffle
u_{2}\shuffle \cdots\shuffle u_{n}$. It remains to prove that
$h(L)=(\lambda+1)^n=\ell+1$.
We first observe that, for any letter $a_j$, $|u_{1}|_{a_j} + \cdots +
|u_{n}|_{a_j} = \ell$. Indeed, the circular renamings ensure that
\[
|r^1(u)|_{a_j}+\cdots+|r^n(u)|_{a_j}=
|u|_{a_{j-1}}+\cdots+|u|_{a_{j-n}}=|u|
\]
for any word $u\in A^*$. We then obtain $h(L)\leq \ell+1$ by \Cref{lem-h-shuffle-words}.
It remains to show $h(L)>\ell$. For this, we observe that, for any
$i=1,\ldots,\ell$, the $i$-th letters $u_{1}[i],\ldots,u_{n}[i]$ form
a permutation of $\{a_1,\ldots,a_n\}$. Thus we can obtain
$(a_1a_2\cdots a_n)^\ell$ by shuffling $u_{1},\ldots,u_{n}$, i.e.,
$(a_1a_2\cdots a_n)^\ell\in L$. However $(a_1a_2\cdots
a_n)^{\ell}a_1$ is not in $L$ (it is too long) and $(a_1a_2\cdots
a_n)^{\ell}a_1 \sim_{\ell} (a_1a_2\cdots a_n)^\ell$ (both words
contain all possible subwords of length $\leq\ell$). Thus $L$ is not
closed under $\sim_\ell$, which concludes the proof.
\end{proof}
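The combinatorial heart of part (2), namely that at every position the
letters of $u_1,\ldots,u_n$ form a permutation of $A$, is easy to verify
experimentally (a sketch, ours, reusing the construction of $U_n$):
\begin{lstlisting}[language=Python]
def build_U(n, lam, A):
    u = ""
    for i in range(n):
        u = (u + A[i]) * lam + u
    return u

def rename(u, A):
    # circular renaming r: a_i -> a_{i+1 (mod n)}
    t = {A[i]: A[(i + 1) % len(A)] for i in range(len(A))}
    return "".join(t[c] for c in u)

A, lam = "abc", 2
us, w = [], build_U(len(A), lam, A)
for _ in range(len(A)):          # u_j = r^j(U_n) for j = 1..n
    w = rename(w, A)
    us.append(w)
ell = len(us[0])                 # ell = (lam+1)^n - 1
assert all(sorted(u[i] for u in us) == sorted(A) for i in range(ell))
\end{lstlisting}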
\subsection{A general upper bound?}
As yet we do not have a good upper bound in the general case.
Recall
that the \emph{depth} of a complete DFA is the maximal length of an acyclic path
from the initial state to some reachable state. When $L$ is regular, we write
${\textit{dp}}(L)$ for the depth of the canonical DFA for $L$. Since
$h(L)\leq{\textit{dp}}(L)$ holds for all $\ComplexityFont{PT}$ languages~\cite{klima2013}, one
could try to bound
${\textit{dp}}(L\shuffle w)$ in terms of ${\textit{dp}}(L)$ and $w$.
This does not seem very promising: first, for $L$ fixed, ${\textit{dp}}(L\shuffle w)$ cannot be bounded by
$O(|w|)$. Furthermore, ${\textit{dp}}(L)$ can be much larger than $h(L)$: if
$L$ is $k$-$\ComplexityFont{PT}$ and $|A|=n$ then the depth of the minimal DFA for $L$
can be as large as $\binom{k+n}{k}-1$~\cite[Thm.~31]{masopust2017}. Finally, this approach would only
provide very large upper bounds, far above what we observe in
experiments.
\section{Conclusion}
\label{sec-concl}
We proved that $L\shuffle w$ is piecewise-testable when $L$ is (and
when $w$ is a word), relying on a little-used characterization of
piecewise-testable languages. This is part of a more general research
agenda: identify constructions that produce piecewise-testable
languages and compute piecewise complexity modularly. In this
direction, an interesting open problem is to identify sufficient
conditions that guarantee that a Kleene star $L^*$, or a concatenation
$L\concatdot L'$, is piecewise-testable. It is surprising that such
questions seem easier for shuffle product than for concatenation.
\section{Introduction}
\label{sec-intro}
Piecewise-testable languages, introduced in~\cite{simon72,simon75},
are an important variety of simple dot-depth one, hence star-free,
regular languages. As such they are closed
under boolean operations, left and right derivatives, and inverse
morphisms.
We prove in this paper that the shuffle product $L\shuffle F$ of $L$
with some finite language $F$ is piecewise-testable when $L$ is.
\\
\noindent
\emph{Some motivations.}
The question was raised by our investigations of $\FO(A^*,\preccurlyeq)$,
the first-order ``logic of subwords'', and its decidable two-variable
fragment~\cite{KS-csl2016,HSZ-lics2017}. Let us use $u\preccurlyeq v$ to
denote that $u$ is a (scattered) subword, or a subsequence, of $v$. For example,
$\texttt{simon}\preccurlyeq\texttt{stimulation}$ while
$\texttt{ordering} \not\preccurlyeq \texttt{wordprocessing}$. Given a formula
$\psi(x)$ with one free variable, e.g.,
\begin{gather}
\label{ex-psi}
\tag{$\psi(x)$}
\texttt{ab}\preccurlyeq x
\land \texttt{bc}\preccurlyeq x
\land\texttt{ac}\not\preccurlyeq x
\:,
\end{gather}
we write $\mathit{Sol}(\psi)$ for its set of solutions. In this example,
$\mathit{Sol}(\psi)$ is the set of all words that have $\texttt{ab}$,
$\texttt{bc}$, but not $\texttt{ac}$, among their subwords. If we
assume that the alphabet under consideration is
$A=\{\texttt{a},\texttt{b},\texttt{c}\}$, then
$\mathit{Sol}(\psi)$ is
the language described via
$\texttt{c}^*\texttt{b}^+\texttt{c}(\texttt{b}+\texttt{c})^*\texttt{a}^+\texttt{b}(\texttt{a}+\texttt{b})^*$,
a simple regular expression. It is shown in~\cite{KS-csl2016,HSZ-lics2017} how to
compute such solutions automatically. Let us extend the framework
with the predicate $\preccurlyeq_1$, defined via
\[
u\preccurlyeq_1 v \iff u\preccurlyeq v\land |u| = |v| - 1,
\]
where $|u|$ is the length of $u$, so that $\preccurlyeq$ is the reflexive transitive
closure of $\preccurlyeq_1$. Now an $\FO^2(A^*,\preccurlyeq,\preccurlyeq_1)$
formula of the form
\begin{gather}
\tag{$\phi(x)$}
\exists y: y\preccurlyeq_1 x\land \psi(y)
\end{gather}
has $\mathit{Sol}(\phi)= \mathit{Sol}(\psi)\shuffle A$ as set of solutions. This is
because $L\shuffle A$ is the union of all $u\shuffle a$ for $u\in L$
and $a\in A$ , and $u\shuffle a$ is the set of all words that can be
obtained by inserting the letter $a\in A$ somewhere in $u$.
Such
equalities provide an effective quantifier-elimination procedure for
(a fragment of) the logic. Extending the complexity analysis
from~\cite{KS-csl2016} requires proving that $\mathit{Sol}(\phi)$ is
piecewise-testable when $\mathit{Sol}(\psi)$ is. This
will be a consequence of the main result in this paper. \\
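Note that the basic predicate $\preccurlyeq$ is decidable by a simple
greedy scan. In Python (our sketch, not from the cited works):
\begin{lstlisting}[language=Python]
def is_subword(u, v):
    # u is a scattered subword of v ("in" consumes the iterator)
    it = iter(v)
    return all(c in it for c in u)

assert is_subword("simon", "stimulation")
assert not is_subword("ordering", "wordprocessing")
\end{lstlisting}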
\noindent
\emph{Through the mirror automaton.}
It took us some time to
find a simple proof that $L\shuffle A$ is piecewise-testable when
$L$ is. In particular, starting from any of the
well-known characterizations of piecewise-testable languages (see
Definition~\ref{def-PT-multiple} below) did not take us very far.
Neither could we use the approach developed for star-free languages
---see~\cite[Coro.~3.3]{castiglione2012}--- since piecewise-testable
languages are not closed under bounded shuffle.
We eventually found a simple proof based on a classic but little-used
characterization: \emph{a regular language $L$ is
piecewise-testable if, and only if, $L$ and its mirror image
$\mirror{L}$ are $\mathcal{R}$-trivial, that is, iff
the minimal DFAs for $L$ and for $\mirror{L}$ are
both acyclic}. This characterization is not explicitly mentioned in the
main references on piecewise-testable languages,
be they classic (e.g.,~\cite{sakarovitch83}) or
recent (e.g.,~\cite{masopust2017}).
As far as we know, it was first given explicitly by
Brzozowski~\cite{brzozowski76b}. Beyond that, we only saw it
in~\cite{schwentick2001,klima2012b} (and derived works).
\\
\noindent
\emph{Outline of the paper.} In Section~\ref{sec-basics} we recall
the necessary notions on automata, languages, piecewise-testability,
etc., state our main result and discuss extensions. In
Section~\ref{sec-main} we prove the main technical result: the class
of $\mathcal{R}$-trivial regular languages is closed under interpolation
products with finite languages. The proof is by inspecting the
(nondeterministic) shuffle automaton and checking that the standard
determinization procedure yields an acyclic automaton.
In Section~\ref{sec-complexity} we provide bounds on the piecewise complexity
of some shuffle languages.
In the
conclusion, we list some questions raised by this work.
\section{Shuffling acyclic automata}
\label{sec-main}
In this section we first prove Proposition~\ref{prop-main} by inspecting the shuffling
of automata.
\begin{proposition}
\label{prop-main}
If $L\subseteq A^*$ is regular and $\mathcal{R}$-trivial then $L\uparrow_C
w$ is too, for any $w\in A^*$ and $C\subseteq A$.
\end{proposition}
Let $\mathcal{A}=(Q,A,{\cdot},i,F)$ be an acyclic complete deterministic
automaton for $L$, and let $w=z_1\cdots z_m\in A^*$ be the word under
consideration. When building the shuffle automaton for $L\uparrow_C
w$, it is more convenient to consider the smallest automaton for $w$,
deterministic but not complete. Formally, we let
$\mathcal{B}=(Q',A,{\circ},i',F')$ given by $Q' = Q\times\{0,1,\ldots,m\}$,
$i'=(i,0)$, $F'=F\times \{m\}$, and a transition table given by
\begin{equation}
\label {eq-delta-B}
(p,k)\circ a=
\bigl\{
\;\;
(p\cdot a,k)
,\;\;
\obracew{(p,k+1)}{\text{if $a{=}z_{k+1}$}}
,
\obracew{(p\cdot a,k+1)}{\text{if furthermore $a\in C$}}
\bigr\}.
\end{equation}
This is a standard construction: $\mathcal{B}$ is nondeterministic in
general, and it is easy to see that it accepts exactly $L\uparrow_C
w$.
Observe that $\mathcal{B}$ too is acyclic: by Eq.~\eqref{eq-delta-B}, for
any transition $(p,k)\step{a}(q,\ell)$ one has $p\leq_\mathcal{A} q$ and
$k\leq\ell$ and this extends to any path $(p,k)\step{u}(q,\ell)$ by
transitivity. Thus $\leq_\mathcal{B}$ is included in the Cartesian product
of two partial orderings.
From $\mathcal{B}=(Q',A,{\circ},i',F')$ we derive a powerset automaton
$\mathcal{P}=(\bm{Q},A,{\bullet},\bm{i},\bm{F})$ in the standard way, i.e.,
$\bm{Q}=2^{Q'}=\{S~|~S\subseteq Q'\}$, $\bm{i}=\{i'\}$,
$\bm{F}=\{S\in\bm{Q}~|~S\cap F'\neq \emptyset\}$ and $S\bullet
a=\{S\circ a\}$. It is well known that $\mathcal{P}$ is
deterministic, complete, and accepts exactly the language accepted by
$\mathcal{B}$, i.e., $L\uparrow_C w$.
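For readers who prefer code, the transition tables of $\mathcal{B}$ and
$\mathcal{P}$ can be written down directly. A sketch (ours;
\texttt{dot} stands for the transition function of $\mathcal{A}$):
\begin{lstlisting}[language=Python]
def nfa_step(state, a, dot, w, C):
    # transitions of B from Eq. (eq-delta-B); state = (p, k)
    p, k = state
    succ = {(dot(p, a), k)}              # only the A-component moves
    if k < len(w) and a == w[k]:         # a = z_{k+1}
        succ.add((p, k + 1))
        if a in C:
            succ.add((dot(p, a), k + 1)) # both components move
    return succ

def powerset_step(S, a, dot, w, C):
    # one transition S . a of the powerset automaton P
    return frozenset(q for s in S for q in nfa_step(s, a, dot, w, C))
\end{lstlisting}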
\begin{lemma}
\label{lem-P-acyclic}
$\mathcal{P}$ is acyclic.
\end{lemma}
\begin{proof}
Let $S_0\step{a_1}S_1\step{a_2}S_2\cdots\step{a_n}S_n=S_0$ be a
non-empty cycle in $\mathcal{P}$ and write $S=\bigcup_{i=0}^n S_i$ and
$B=\{a_1,\ldots,a_n\}$ for the set of states (resp., set of letters)
appearing along the cycle.
We first claim that for any $(p,k)\in S_n$, $p$ is $B$-stable in $\mathcal{A}$,
which means that $p\cdot a_i=p$ for $i=1,\ldots,n$. We prove this by induction
on $\leq_\mathcal{B}$: so consider an arbitrary $(p,k)\in S_n$ and assume
that $p'$ is $B$-stable whenever there is some $(p',k')\in S_n$ with $(p',k')<_\mathcal{B}(p,k)$.
Since $S_0\step{a_1}S_1\cdots\step{a_n}S_n$ and $(p,k)\in S_n$,
$\mathcal{B}$ has a sequence of transitions
\[
(p_0,\ell_0)
\:\step{a_1}\: (p_1,\ell_1)
\:\step{a_2}\: (p_2,\ell_2)
\cdots
\:\step{a_n}\: (p_n,\ell_n) = (p,k)
\]
with $(p_i,\ell_i) \in S_i$ for all $i=0, \ldots, n$. Thus $p_0
\leq_\mathcal{A} p_1 \cdots \leq_\mathcal{A} p_n=p$ and $\ell_0 \leq \ell_1 \cdots
\leq \ell_n=k$. If $p_0\neq p$, then $p_0 = p_1 = \ldots = p_{i-1}
\neq p_i\leq_\mathcal{A} p_n$ for some $i$. Given $(p_{i-1},\ell_{i-1})
\step{a_i} (p_i,\ell_i)$ and $p_{i-1}\neq p_i$,
Eq.~\eqref{eq-delta-B} requires that $p_{i-1}\cdot {a_i}=p_{i}$ in
$\mathcal{A}$, hence $p_{i-1}$ is not $B$-stable, but this contradicts the
induction hypothesis since $p_{i-1}=p_0$, $(p_0,\ell_0)$ belongs to $S_n$, and
$(p_0,\ell_0)<_\mathcal{B}(p,k)$. Thus $p_0=p_1=\cdots=p_n=p$. If $\ell_0<\ell_n$, the
induction hypothesis applies and states that $p_0$ is $B$-stable. If
$\ell_0 = \ell_1 = \cdots = \ell_n$, then Eq.~\eqref{eq-delta-B}
requires that $p_{i-1} \cdot a_i=p_i$ for all $i=1,\ldots,n$, which proves the
claim.
Since we can change the origin of the cycle, we conclude that $p$ is
$B$-stable in $\mathcal{A}$ for any $(p,k)$ in $S$, not just in $S_n$. If
$p$ is $B$-stable, then $(p,k)\circ a_i\ni (p,k)$ by
Eq.~\eqref{eq-delta-B}. Thus $S_{i-1}\bullet a_i \supseteq S_{i-1}$ for
all $i=1,\ldots,n$. This entails $S_0 \subseteq S_1 \subseteq \cdots
\subseteq S_n=S_0$ and then $S_0 = S_1 = \ldots = S_n$. We have proved
that all cycles in $\mathcal{P}$ are self-loops, hence $\mathcal{P}$ is acyclic as
claimed.
This entails that $L\uparrow_C w$, the language recognized by $\mathcal{P}$,
is $\mathcal{R}$-trivial and concludes the proof of Proposition~\ref{prop-main}.
\end{proof}
\input{fig-power-automaton}
\begin{remark}
\label{rem-nfa-acyclic}
\Cref{lem-P-acyclic} needs a proof because determinizing an acyclic
NFA does not always yield an acyclic
DFA.\footnote{Indeed nondeterministic and deterministic acyclic
automata have different expressive powers, see~\cite{schwentick2001}.} For
example, the NFA obtained by shuffling DFAs for $a^*$ and for $b^*a$
is acyclic (see left of Fig.~\ref{fig-power-automaton}). However, its
powerset automaton and the minimal DFA are not (see right of the
figure). Indeed, $a^*\shuffle b^*a=(a+b)^*a$ is not $\mathcal{R}$-trivial.
\end{remark}
With Proposition~\ref{prop-main} it is easy to prove our main result.
\begin{proof}[Proof of Theorem~\ref{thm-main}]
We first assume that $L$ is $\mathcal{R}$-trivial
and consider several cases for $L'$:
\begin{itemize}
\item
If $L'$ is finite, we use distributivity of shuffle over unions:
$L\uparrow_C L'$ is $\mathcal{R}$-trivial since it is a finite union
$\bigcup_{w\in L'} L\uparrow_C w$ of $\mathcal{R}$-trivial languages.
\item
If $L'$ is a shuffle ideal, i.e., if $L'=L'\shuffle A^*=L'\uparrow_C
A^*$, then $L\uparrow_C L'$ is a shuffle ideal too in view of
\[
L\uparrow_C L'
=
L\uparrow_C (L'\uparrow_C A^*)
=
(L\uparrow_C L')\uparrow_C A^*
\:.
\]
Recall now that shuffle ideals are always $\mathcal{R}$-trivial.
\item
If $L'$ is cofinite, it is the union of a finite language and a
shuffle ideal, so this case reduces to the previous two cases by
distributing shuffle over union.
\end{itemize}
Once the result is proved for $X=\mathcal{R}$, it extends to $X=\mathcal{L}$ by
mirroring since $L$ is $\mathcal{L}$-trivial if, and only if, its mirror
$\mirror{L}$ is $\mathcal{R}$-trivial, and since $\mirror{(L\uparrow_C L')}=
\mirror{L}\uparrow_C\mirror{L'}$.
Finally, it extends to $X=\mathcal{J}$ since a finite monoid is
$\mathcal{J}$-trivial if, and only if, it is both $\mathcal{R}$- and $\mathcal{L}$-trivial.
\end{proof}
\begin{remark}
Masopust and Thomazo extended the UMS criterion to \emph{nondeterministic}
automata. They showed that $L$ is piecewise-testable if it is
recognized by a complete acyclic NFA with the UMS
property~\cite[Thm.~25]{masopust2017}. The NFA that one obtains by
shuffling minimal DFAs for $L$ and $w$ is indeed acyclic and
complete. However it does not satisfy the UMS property in general
(already with $a^*\shuffle a$) so this additional characterization of
piecewise-testable languages does not directly entail our main result.
\end{remark}
\section{Introduction}
Suppose you are shown a small, unfamiliar object and asked if it could fit through an M-shaped slot. How might you solve this task? One approach would be to ``rotate" the object in your mind's eye and see if, from some particular angle, the object's profile fits into an M. To put the object through the slot would then just require orienting it to that particular imagined angle. In their famous experiments on ``mental rotation", Shepard \& Metzler argued that this is the approach humans use when reasoning about the relative poses of novel shapes~\cite{shepard1971mental}. Decades of work in psychology have documented numerous other ways that ``mental images", i.e. pictures in our heads, can aid human cognition~\cite{richardson2013mental}. In this paper, we ask: can we give robots a similar ability, where they use mental imagery to aid their spatial reasoning?
Fortunately, the generic ability to perform imagined translations and rotations of a scene, also known as \textit{novel view synthesis}, has seen a recent explosion of research in the computer vision and graphics community~\cite{dellaert2020neural,xie2022neural,tewari2022advances}. Our work builds in particular upon Neural Radiance Fields (NeRFs)~\cite{mildenhall2020nerf}, which can render what a scene would look like from any camera pose. We treat a NeRF as a robot's ``mind's eye", a virtual camera it may use to imagine how the scene would look were the robot to reposition itself. We couple this ability with an affordance model \cite{zeng2019learning}, which predicts, from any given view of the scene, what actions are currently afforded. Then the robot must just search, in its imagination, for the mental image that best affords the action it wishes to execute, then execute the action corresponding to that mental image.
\begin{figure}[!t]
\centering
\includegraphics[trim={0cm, 0cm, 0cm, 0},clip,width=1\textwidth]{figures/method_v14.pdf}
\caption{\textbf{Overview of \ourmethod.} (a) Given a set of multi-view RGB images as input, we optimize a neural radiance field representation
of the scene via volume rendering with perspective ray casting.
(b) After the NeRF is optimized, we perform volume rendering with orthographic ray casting to render the scene from $V$ viewpoints. (c) The rendered orthographic images are fed into the policy for predicting pixel-wise action-values that correlate with picking and placing success. (d) The pixel with the highest action-value is selected, and its estimated depth and associated view orientation are used to parameterize the robot's motion primitive.
}
\label{fig:method}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[trim={0cm, 0cm, 0cm, 0},clip,width=1\textwidth]{figures/real_tasks_v2.pdf}
\caption{\textbf{Real-world Results.} MIRA is sample efficient in learning vision-based 6-DoF manipulation tasks: packing flosses (a-b), packing metal cubes (c-d), and putting the metal sphere into cups (e-f). These tasks are challenging for methods that rely on 3D sensors as objects contain thin structures or are composed of specular or semi-transparent material. MIRA is able to solve these tasks because it only requires RGB inputs.
}
\label{fig:real}
\end{figure}
\vspace{-0.5em}
We test this framework on 6-DoF rearrangement tasks~\cite{batra2020rearrangement}, where the affordance model simply predicts, for each pixel in a given camera view, the action-value of picking (or placing) at that pixel's coordinates.
Using NeRF as a virtual camera for this task has several advantages over prior works which used physical cameras:
\begin{itemize}[leftmargin=*]
\item \textbf{Out-of-plane rotation.} Prior works have applied affordance maps to 2-dimensional top-down camera views, allowing only the selection of top-down picking and placing actions~\cite{zeng2019learning,zeng2020transporter,shridhar2021cliport}. We instead formulate the pick-and-place problem as an action optimization process that searches across different novel synthesized views and their affordances of the scene. We demonstrate that this optimization process can handle the multi-modality of picking and placing while naturally supporting actions that involve out-of-plane rotations.
%
\item \textbf{Orthographic ray casting.} A NeRF trained with images from consumer cameras can be used to synthesize novel views from \emph{novel kinds of cameras} that are more suitable to action reasoning. Most physical cameras use perspective projection, in which the apparent size of an object in the image plane is inversely proportional to that object's distance from the camera --- a relationship that any vision algorithm must comprehend and disentangle. NeRF can instead create images under other rendering procedures; we show that orthographic ray casting, which corresponds to a non-physical ``camera" that is infinitely large and infinitely distant from the scene, is particularly useful. This yields images in which an object's size in the image plane is invariant to its distance from the camera, and its appearance is equivariant with respect to translation parallel to the image plane. In essence, this novel usage of NeRF allows us to generate ``blueprints" for the scene that complement the inductive biases of algorithms that encode translational equivariance (such as ConvNets).
\item \textbf{RGB-only.} Prior rearrangement methods \cite{manuelli2019kpam,simeonov2021neural,song2020grasping} commonly require 3D sensors (e.g. via structured light, stereo, or time-of-flight), and these are error-prone when objects contain thin structures or are composed of specular or semi-transparent materials---a common occurrence (see ~\figref{fig:real} for examples). These limitations drastically restrict the set of tasks, objects, and surfaces these prior works can reason over.
\end{itemize}
We term our method Mental Imagery for Robotic Affordances, or \ourmethod.
To test \ourmethod, we perform experiments in both simulation and the real world. For simulation, we extend the Ravens~\cite{zeng2020transporter} benchmark to include tasks that require 6-DoF~ actions. Our model demonstrates superior performance to existing state-of-the-art methods for object rearrangement~\cite{zakka2020form2fit,zeng2020transporter}, despite not requiring depth sensors.
Importantly, the optimization process with novel view synthesis and affordance prediction in the loop enables our framework to generalize to out-of-distribution object configurations, where the baselines struggle.
In summary, we contribute (i) a framework that uses NeRFs as the scene representation to perform novel view synthesis for precise object rearrangement, (ii) an orthographic ray casting procedure for NeRFs rendering that facilitates the policy's translation equivariance, (iii) an extended benchmark of 6-DoF manipulation tasks in Ravens~\cite{zeng2020transporter}, and (iv) empirical results on a broad range of manipulation tasks, validated with real-robot experiments.
\section{Related Works}
\label{sec:related}
\subsection{Vision-based Manipulation.}
\paragraph{Object-centric.}
Classical methods in visual perception for robotic manipulation mainly focus on representing instances with 6-DoF~ poses~\cite{zhu2014single,zeng2017multi,xiang2017posecnn,deng2020self,wang2019normalized,chen2020category,li2020category}. However, 6-DoF~ poses cannot represent the states of deformable objects or granular media, and cannot capture large intra-category variations of unseen instances~\cite{manuelli2019kpam}.
Alternative methods that represent objects with dense descriptors~\cite{florence2018dense,florence2019self,sundaresan2020learning} or keypoints~\cite{manuelli2019kpam,kulkarni2019unsupervised,liu2020keypose,you2021omnihang} improve generalization, but they require a dedicated data collection procedure (\eg configuring scenes with single objects).
\paragraph{Action-centric.} Recent methods based on end-to-end learning directly predict actions given visual observations~\cite{levine2016end,kalashnikov2018scalable,mahler2017dex,zeng2018robotic,ten2017grasp,mousavian20196}. These methods can potentially work with deformable objects or granular media, and do not require any object-specific data collection procedures. However, these methods are known to be sample inefficient and challenging to debug.
Recently, several works~\cite{zeng2020transporter,song2020grasping,wu2020spatial,james2021coarse,huang2022equivariant,wang2022so,zhu2022sample} have proposed to incorporate spatial structure into action reasoning for improved performance and better sample efficiency.
Among them, the closest work to ours is~\citet{song2020grasping} which relies on view synthesis to plan 6-DoF~ picking. Our work differs in that it 1) uses NeRF whereas \cite{song2020grasping} uses TSDF~\cite{curless1996volumetric}, 2) does not require depth sensors, 3) uses orthographic image representation, 4) does not directly use the camera pose as actions, and 5) shows results on rearrangement tasks that require both picking and placing.
\subsection{Neural Fields for Robotics}
Neural fields have emerged as a promising tool to represent 2D images~\cite{karras2021alias}, 3D geometry~\cite{park2019deepsdf,mescheder2019occupancy}, appearance~\cite{mildenhall2020nerf,sitzmann2019scene,sitzmann2021light}, touch~\cite{gao2021objectfolder}, and audio~\cite{sitzmann2020implicit,luo2022learning}.
They offer several advantages over classic representations (e.g., voxels, point clouds, and meshes) including reconstruction quality, and memory efficiency. Several works have explored the usage of neural fields for robotic applications including localization~\cite{yen2020inerf,moreau2022lens}, SLAM~\cite{Sucar:etal:ICCV2021,zhu2022nice,Ortiz:etal:iSDF2022}, navigation~\cite{adamkiewicz2022vision}, dynamics modeling~\cite{li20223d,2022-driess-compNerfPreprint,wi2022virdo,shen2022acid}, and reinforcement learning~\cite{driess2022reinforcement}.
\textcolor{black}{
For robotic manipulation, GIGA~\cite{jiang2021synergies} and NDF~\cite{simeonov2021neural} use occupancy networks' feature fields to help action prediction. However, both of them rely on depth cameras to perceive the 3D geometry of the scene, while MIRA does not require depth sensors and thus can handle objects with reflective or thin-structured materials. Dex-NeRF~\cite{IchnowskiAvigal2021DexNeRF} infers the geometry of transparent objects with NeRF and determines the grasp poses with Dex-Net~\cite{mahler2017dex}. However, it only predicts the 3-DoF grasping pose and does not provide a solution for pick-conditioned placing; NeRF-Supervision~\cite{yen2022nerfsupervision} uses NeRF as a dataset generator to learn dense object descriptors for picking but not placing. In contrast, MIRA is capable of predicting both 6-DoF picking and pick-conditioned placing. Model-based methods~\cite{li20223d,2022-driess-compNerfPreprint} use NeRFs as decoders to learn latent state representations for model predictive control; NeRF-RL~\cite{driess2022reinforcement} instead uses the learned latent state representations for downstream reinforcement learning. Compared to these works, MIRA uses imitation learning to acquire the policy and thus avoids the conundrum of constructing reward functions. Furthermore, MIRA enjoys better sample efficiency due to the approximate 3D rotational equivariance via novel-view synthesis. Last but not least, MIRA demonstrates real-world results on three 6-DoF kitting tasks, while these works primarily focus on simulation benchmarks.
}
\section{Method}
Our goal is to predict actions $a_t$, given RGB-only visual observations $o_t$, and trained from only a limited number of demonstrations. We parameterize our action space with two-pose primitives $a_t = (\mathcal{T}_{\text{pick}}, \mathcal{T}_{\text{place}})$, which are able to flexibly parameterize rearrangement tasks \cite{zeng2020transporter}.
This problem is challenging due to the high degrees of freedom of $a_t$ (12 degrees of freedom for two full SE(3) poses), a lack of information about the underlying object state (such as object poses), and limited data. Our method (illustrated in \figref{fig:method}) factorizes action reasoning into two modules: 1) a continuous neural radiance field that can synthesize virtual views of the scene at novel viewpoints, and 2) an optimization procedure which optimizes actions by predicting per-pixel affordances across different synthesized virtual pixels. We discuss these two modules in \secref{sec:neural_scene_representation} and \secref{sec:policy} respectively, followed by training details in \secref{sec:training}.
\subsection{Scene Representation with Neural Radiance Field}\label{sec:neural_scene_representation}
To provide high-fidelity novel-view synthesis of virtual cameras, we represent the scene with a neural radiance field (NeRF)~\cite{mildenhall2020nerf}. For our purposes, a key feature of NeRF is that it \textit{renders individual rays (pixels) rather than whole images}, which enables flexible parameterization of rendering at inference time, including camera models that are non-physical (e.g., orthographic cameras) and not provided in the training set.
To render a pixel, NeRF casts a ray $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$ from some origin $\mathbf{o}$ along the direction $\mathbf{d}$ passing through that pixel on an image plane. In particular, these rays are casted into a field $F_{\Theta}$ whose input is a 3D location $\textbf{x} = (x, y, z)$ and unit-norm viewing direction $\mathbf{d}$, and whose output is an emitted color $c = (r, g, b)$ and volume density $\sigma$.
Along each ray, $K$ discrete points $\{\mathbf{x}_k = \mathbf{r}(t_k)\}_{k=1}^K$ are sampled for use as input to $F_{\Theta}$, which outputs a set of densities and colors $\{\sigma_k, \mathbf{c}_k\}_{k=1}^K = \{F_{\Theta}(\mathbf{x}_k, \mathbf{d})\}_{k=1}^K$. Volume rendering~\cite{kajiya84} with a numerical quadrature approximation~\cite{max95} is performed using these values to produce the color $\hat{\textbf{C}}(\mathbf{r})$ of that pixel:
\newcommand{T}{T}
\begin{equation} \label{eq:volume_rendering}
\begin{split}
\hat{\textbf{C}}(\mathbf{r}) = \sum_{k=1}^{K} T_k \bigg(1- \exp\Big(-\sigma_k (t_{k+1} - t_k)\Big)\bigg) \textbf{c}_k\,, \quad T_k = \text{exp} \Big(\!-\!\sum_{k' < k} \sigma_{k'} (t_{k'+1} - t_{k'})\Big)\,.
\end{split}
\end{equation}
where $T_k$ represents the probability that the ray successfully transmits to point $\mathbf{r}(t_k)$. At the beginning of each pick-and-place, our system takes multi-view posed RGB images as input and optimizes $\Theta$ by minimizing a photometric loss $\mathcal{L_{\text{photo}}} = \sum_{\mathbf{r} \in \mathbfcal{R}} ||\hat{\mathbf{C}}(\mathbf{r}) - \mathbf{C}(\mathbf{r})||_2^2$,
using some sampled set of rays $\mathbf{r} \in \mathbfcal{R}$, where $\mathbf{C}(\mathbf{r})$ is the observed RGB value of the pixel corresponding to ray $\mathbf{r}$ in an input image. In practice, we use instant-NGP~\cite{mueller2022instant} to accelerate NeRF training and inference.
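For concreteness, the quadrature in Eq.~\eqref{eq:volume_rendering}
amounts to a few array operations per ray. A NumPy sketch (ours,
independent of the instant-NGP implementation; \texttt{t} holds $K{+}1$
sample depths):
\begin{lstlisting}[language=Python]
import numpy as np

def render_ray(sigma, rgb, t):
    # sigma: (K,) densities, rgb: (K, 3) colors, t: (K+1,) depths
    delta = np.diff(t)                  # t_{k+1} - t_k
    tau = sigma * delta                 # per-segment optical depth
    T = np.exp(-np.concatenate([[0.0], np.cumsum(tau[:-1])]))
    weights = T * (1.0 - np.exp(-tau))  # T_k (1 - exp(-sigma_k delta_k))
    return weights @ rgb                # predicted pixel color C(r)
\end{lstlisting}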
\paragraph{Orthographic Ray Casting.}
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{figures/perspective_vs_orthographic_v3.png}
\caption{\textbf{Perspective vs. Orthographic Ray Casting.}
(a) A 3D world showing two objects, with the camera located at the top. (b) The procedure of perspective ray casting and a perspective rendering of the scene. The nearby object is large, the distant object is small, and both objects appear "tilted" according to their position. (c) The procedure of orthographic ray casting and an orthographic rendering of the scene, which does not correspond to any real consumer camera, wherein the size and appearance of both objects are invariant to their distances and equivariant to their locations. By using NeRF to synthesize these orthographic images, which correspond to non-physical cameras, we are able to construct RGB inputs that are equivariant with translation.
}
\vspace{-0.4em}
\label{fig:perspective_vs_orthographic}
\end{figure}
In~\figref{fig:perspective_vs_orthographic} we illustrate the difference between perspective and orthographic cameras. Though renderings from a NeRF $F_{\Theta}$ are highly realistic, the perspective ray casting procedure used by default in NeRF's volume rendering, which we visualize in \figref{fig:perspective_vs_orthographic}(b), may cause scene content to appear distorted or scaled depending on the viewing angle and the camera's field of view --- more distant objects will appear smaller in the image plane.
Specifically, given a pixel coordinate $(u, v)$ and camera pose $(\mathbf{R}, \mathbf{t})$, NeRF forms a ray $\mathbf{r} = (\mathbf{o}, \mathbf{d})$ using the perspective camera model:
\begin{align}
\mathbf{o} = \mathbf{t}\,,
\quad\quad
\mathbf{d} = \textbf{R}
\begin{bmatrix}
(u-c_x)/f_x \\
(v-c_y)/f_y \\
1
\end{bmatrix}\,.
\end{align}
This model is a reasonable proxy for the geometry of most consumer RGB cameras, hence its use by NeRF during training and evaluation.
However, the distortion and scaling effects caused by perspective ray casting degrade the performance of the downstream optimization procedure that takes as input the synthesized images rendered by NeRF, as we will demonstrate in our results (\secref{sec:sim}).
To address this issue, we modify the rendering procedure of NeRF after it is optimized by replacing perspective ray casting with orthographic ray casting:
\begin{align}
\mathbf{o} = \mathbf{t} +
\textbf{R}
\begin{bmatrix}
(u-c_x)/f_x \\
(v-c_y)/f_y \\
0
\end{bmatrix}\,,
\quad \quad
\mathbf{d} = \textbf{R}
\begin{bmatrix}
0 \\
0 \\
1
\end{bmatrix}\,.
\end{align}
\textcolor{black}{To our knowledge, our work is the first to demonstrate that when a NeRF model is trained on perspective cameras, it may directly be utilized to render orthographic images. Such a result is both non-obvious and surprising because rays are now rendered using a different volume rendering procedure from the one used during training time, and the orthographic rendering process essentially corresponds to an out-of-distribution rendering test on the NeRF model.}
We visualize this procedure in~\figref{fig:perspective_vs_orthographic}(c). Orthographic ray casting marches parallel rays into the scene, so that each rendered pixel represents a parallel window of 3D space.
This property removes the dependence between an object's appearance and its distance to the camera: an object looks the same if it is either far or nearby. Further, as all rays for a given camera rotation $\textbf{R}$ are parallel, this provides equivariance to the in-plane camera center $c_x, c_y$.
These attributes allow downstream learning to be equivariant to objects' 3D locations and thereby encourage generalization.
As open-source instant-NGP~\cite{mueller2022instant}
did not support orthographic projection, we implemented it ourselves as a CUDA kernel (see Supp.).
Our decision of choosing an orthographic view of the scene draws inspiration from previous works that have used single-view orthographic scene representations \cite{zeng2017multi,zeng2020transporter}, but critically differs in the following two aspects: (a) we create orthographic scene representations by casting rays into a radiance field rather than by point-cloud reprojection, thereby significantly reducing image artifacts (see Supp.); (b) we rely on multi-view RGB images to recover scene geometry, instead of depth sensors.
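A Python transcription of the two camera models above (our sketch,
mirroring the CUDA kernel given in the supplementary material) makes the
contrast explicit:
\begin{lstlisting}[language=Python]
import numpy as np

def perspective_ray(u, v, R, t, fx, fy, cx, cy):
    # shared origin, pixel-dependent direction
    o = np.asarray(t, dtype=float)
    d = R @ np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return o, d / np.linalg.norm(d)

def orthographic_ray(u, v, R, t, fx, fy, cx, cy):
    # pixel-dependent origin, shared direction
    o = np.asarray(t, dtype=float) + R @ np.array([(u - cx) / fx, (v - cy) / fy, 0.0])
    d = R @ np.array([0.0, 0.0, 1.0])
    return o, d
\end{lstlisting}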
\subsection{Policy Representation: Affordance Raycasts in a Radiance Field}\label{sec:policy}
To address $\textit{SE}(3)$-parameterized actions, we formulate action selection as an optimization problem over synthesized novel-view pixels and their affordances. Using our NeRF-based scene representation, we densely sample $V$ camera poses around the workspace and render images $\hat{I}_{v_t} = F_{\Theta}(\mathcal{T}_{v_t})$ for each pose, $\forall\,v_t = 0, 1, \cdots, V$. One valid approach is to search for actions directly in the space of camera poses for the best $\mathcal{T}_{v_t}$, but orders of magnitude of computation may be saved by instead considering actions that correspond to \textit{each pixel} within each image, and sharing computation between all pixels in the image (e.g., by processing each image with just a single pass through a ConvNet). This (i) extends the paradigm of pixel-wise affordances \cite{zeng2019learning} into full 6-DoF, novel-view-enabled action spaces, and (ii) alleviates the search over poses due to translational equivariance provided by orthographic rendering (Sec.~\ref{sec:neural_scene_representation}).
Accordingly, we formulate each pixel in each synthesized view as parameterizing
a robot action, and we learn a dense action-value function $E$ which outputs per-pixel action-values of shape $\mathbb{R}^{\mathrm{H} \times \mathrm{W}}$ given a novel-view image of shape $\mathbb{R}^{\mathrm{H} \times \mathrm{W} \times 3}$.
Actions are selected by simultaneously searching
across all pixels $\mathbf{u}$ in all synthesized views $v_t$:
\begin{align}
\mathbf{u}^*_t, v^*_t = \argmin_{\mathbf{u}_t, v_t} \ E(\hat{I}_{v_t}, \mathbf{u}_t), \quad \forall\,v_t = 0, 1, \cdots, V
\end{align}
where the pixel $\mathbf{u}^*_t$ and the associated estimated depth $d(\mathbf{u}^*_t)$ from NeRF are used to determine the 3D translation, and the orientation of $\mathcal{T}_{v^*_t}$ is used to determine the 3D rotation of the predicted action.
Our approach employs \textcolor{black}{a single ConvNet that is shared across all views} and uses multiple strategies for equivariance: 3D translational equivariance is in part enabled by orthographic raycasting and synergizes well with translationally-equivariant dense model architectures for $E$ such as ConvNets \cite{lecun1995convolutional,long2015fully}; meanwhile, 3D rotational equivariance is also encouraged, as synthesized rotated views can densely cover novel orientations of objects.
While the formulation above may be used to predict the picking pose $\mathcal{T}_{\text{pick}}$ and the placing pose $\mathcal{T}_{\text{place}}$ independently, intuitively the prediction of $\mathcal{T}_{\text{pick}}$ affects the prediction of $\mathcal{T}_{\text{place}}$ due to the latter's geometric dependence on the former. We therefore decompose the action-value function into (i) picking and (ii) pick-conditioned placing, similar to prior work~\cite{zeng2020transporter}:
\begin{align} \label{eq:decompose}
\mathbf{u}^*_{\text{pick}}, v^*_{\text{pick}} &= \argmin_{\mathbf{u}_{\text{pick}}, v_{\text{pick}}} \ E_{\text{pick}}(\hat{I}_{v_{\text{pick}}}, \mathbf{u}_{\text{pick}}), \quad &\forall v_{\text{pick}} = 0, 1, \cdots, V
\\
\mathbf{u}^*_{\text{place}}, v^*_{\text{place}} &= \argmin_{\mathbf{u}_{\text{place}}, v_{\text{place}}} E_{\text{place}}(\hat{I}_{v_{\text{place}}}, \mathbf{u}_{\text{place}} | \mathbf{u}^*_{\text{pick}}, v^*_{\text{pick}}), \quad &\forall v_{\text{place}} = 0, 1, \cdots, V
\end{align}
where $E_{\text{place}}$ uses the Transport operation from~\citet{zeng2020transporter} to convolve the feature map of $\hat{I}_{v^*_{\text{pick}}}$ around $\mathbf{u}^*_{\text{pick}}$ with the feature maps of $\{\hat{I}_{v_{\text{place}}}\}_{v_{\text{place}}=1}^V$ for action-value prediction.
We refer readers to~\cite{zeng2020transporter} for details on this coupling.
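In pseudocode, the $\argmin$ above is a search over views with a shared
affordance network. A sketch (ours; \texttt{render} and
\texttt{affordance} are placeholders for the NeRF renderer and the
action-value ConvNet):
\begin{lstlisting}[language=Python]
import numpy as np

def select_action(render, affordance, view_poses):
    best_E, best_action = np.inf, None
    for T_v in view_poses:          # V candidate virtual views
        img = render(T_v)           # H x W x 3 orthographic image
        E = affordance(img)         # H x W per-pixel action-values
        u = np.unravel_index(np.argmin(E), E.shape)
        if E[u] < best_E:
            best_E, best_action = E[u], (u, T_v)
    return best_action              # pixel + view pose -> 6-DoF action
\end{lstlisting}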
\subsection{Training}\label{sec:training}
We train the action-value function with imitation learning.
For each expert demonstration, we construct a tuple $\mathcal{D} = \{\hat{I}_{v^*_{\text{pick}}}, \mathbf{u}^*_{\text{pick}}, \hat{I}_{v^*_{\text{place}}}, \mathbf{u}^*_{\text{place}}\}$, where $\hat{I}_{v^*_{\text{pick}}}$ and $\hat{I}_{v^*_{\text{place}}}$ are the synthesized images whose viewing directions are aligned with the end-effector's rotations; $\mathbf{u}^*_{\text{pick}}$ and $\mathbf{u}^*_{\text{place}}$ are the best pixels in those views annotated by experts.
We draw pixels $\{\hat{\mathbf{u}}_j | \hat{\mathbf{u}}_j \neq \mathbf{u}^*_{\text{pick}}, \hat{\mathbf{u}}_j \neq \mathbf{u}^*_{\text{place}} \}_{j=1}^{N_{\text{neg}}}$ from randomly synthesized images $\hat{I}_{\text{neg}}$ as negative samples. For brevity, we omit the subscript for pick and place and present the loss function that is used to train both action-value functions:
\begin{align} \label{eq:training}
\mathcal{L}(\mathcal{D}) = -\log p\Big(\mathbf{u}^* | \hat{I}, \hat{I}_{\text{neg}}, \mathcolor{orange}{\{\hat{\mathbf{u}}_j\}_{j=1}^{N_{\text{neg}}}}\Big), \quad p\Big(\mathbf{u}^* | \hat{I}, \hat{I}_{\text{neg}}, \mathcolor{orange}{\{\hat{\mathbf{u}}_j\}_{j=1}^{N_{\text{neg}}}}\Big) = \frac{e^{-E_{\theta}(\hat{I}, \mathbf{u}^*)}}{e^{-E_{\theta}(\hat{I}, \mathbf{u}^*)} + \mathcolor{orange}{\sum_{j=1}^{N_{\text{neg}}}} e^{-E_{\theta}(\hat{I}_{\text{neg}}, \mathcolor{orange}{\hat{\mathbf{u}}_j})}}
\end{align}
A key innovation of our objective function compared to previous works~\cite{zeng2020transporter,shridhar2021cliport,huang2022equivariant,seita2021learning} is the inclusion of negative samples $\mathcolor{orange}{\{\hat{\mathbf{u}}_j\}_{j=1}^{N_{\text{neg}}}}$ from imagined views $\hat{I}_{\text{neg}}$. We study the effects of ablating negative samples in~\secref{sec:sim} and show that they are essential for successfully training action-value functions.
\textcolor{black}{In practice, we batch the image that contains the positive pixel together with the other synthesized views for the forward pass. Every pixel in these images is treated as a sample when computing the loss.}
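The objective is an InfoNCE-style cross-entropy over one positive pixel
and $N_{\text{neg}}$ negatives. A PyTorch sketch (ours, illustrative
only):
\begin{lstlisting}[language=Python]
import torch
import torch.nn.functional as F

def affordance_loss(E_pos, E_neg):
    # E_pos: scalar energy of the expert pixel
    # E_neg: (N,) energies of negatives from other synthesized views
    logits = torch.cat([-E_pos.reshape(1), -E_neg.reshape(-1)]).unsqueeze(0)
    return F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))
\end{lstlisting}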
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{figures/all_tasks_v6.png}
\caption{\textbf{Simulation qualitative results.} \ourmethod only requires RGB inputs and can solve different 6-DoF~ tasks: (a) hanging-disks, (b) place-red-in-green, (c) stacking-objects, and (d) block-insertion.
}
\vspace{-0.4em}
\label{fig:sim}
\end{figure}
\section{Results}
\label{sec:result}
We execute experiments in both simulation and real-world settings to evaluate the proposed method across various tasks.
\subsection{Simulation Experiments}\label{sec:sim}
\paragraph{Environment.} We propose four new 6-DoF~ tasks based on Ravens~\cite{zeng2020transporter} and use them as the benchmark. We show qualitative examples of these tasks in~\figref{fig:sim} and summarize their associated challenges in the supplementary materials. \textcolor{black}{Notably, place-red-in-green is a relatively cluttered environment where 5-10 distractor objects are randomly spawned and placed; hanging-disks and stacking-objects require the policy to generalize to novel objects that are not seen during training.} All simulated experiments are conducted in PyBullet~\cite{coumans2016pybullet} using a Universal Robot UR5e with a suction gripper. The input observations for \ourmethod are 30 RGB images from different cameras pointing toward the center. For all the baselines, we additionally supply the corresponding noiseless depth images. Each image has a resolution of $640 \times 480$. The camera has focal length $f = 450$ and camera center $(c_x, c_y) = (320, 240)$. \textcolor{black}{Demonstrations are collected with a motion planner that can access the ground-truth states of objects.}
\paragraph{Evaluation.} For each task, we perform evaluations under two settings: \textit{in-distribution} configures objects with random rotations ($\theta_x, \theta_y \in [-\frac{\pi}{6}, \frac{\pi}{6}], \theta_z \in [-\pi, \pi]$). This is also the distribution we used to construct the training set. \textit{out-of-distribution} instead configures objects with random rotations $(\theta_x, \theta_y \in [-\frac{\pi}{4}, -\frac{\pi}{6}] \cup [\frac{\pi}{6}, \frac{\pi}{4}], \theta_z \in [-\pi, \pi])$. We note that these rotations are outside the training distribution and also correspond to larger out-of-plane rotations. Thus, this setting requires stronger generalization. We use a binary score (0 for failure and 1 for success) and report results on $100$ evaluation runs for agents trained with $n = 1, 10, 100$ demonstrations.
\paragraph{Baseline Methods.} Although our method only requires RGB images, we benchmark against published baselines that additionally require depth images as inputs.
Form2Fit~\cite{zakka2020form2fit} predicts the placing action by estimating dense descriptors of the scene for geometric matching. Transporter-$\textit{SE}(2)$ and Transporter-$\textit{SE}(3)$ are both introduced in~\citet{zeng2020transporter}. Although Transporter-$\textit{SE}(2)$ is not designed to solve manipulation tasks that require 6-DoF~ actions, its inclusion helps indicate what level of task success can be achieved on the shown tasks by simply ignoring out-of-plane rotations.
Transporter-$\textit{SE}(3)$ predicts 6-DoF~ actions by first using Transporter-$\textit{SE}(2)$ to estimate $\textit{SE}(2)$ actions, and then feeding them into a regression model to predict the remaining rotational $(r_x,r_y)$ and translational (z-height) degrees of freedom. Additionally, we benchmark against a baseline, GT-State MLP, that assumes perfect object poses. It takes ground truth state (object poses) as inputs and trains an MLP to regress two $\textit{SE}(3)$ poses for $\mathcal{T}_{\text{pick}}$ and $\mathcal{T}_{\text{place}}$.
\begin{table*}[ht]
\fontsize{8.5}{9.}\selectfont
\centering
\scriptsize
\resizebox{\textwidth}{!} {
\begin{tabular}{l x{0.54cm} x{0.54cm} x{0.54cm} x{0.54cm} x{0.54cm} x{0.54cm} x{0.54cm} x{0.54cm} x{0.54cm} x{0.54cm} x{0.54cm} x{0.54cm}}\toprule
&
\multicolumn{3}{c}{\begin{tabular}[c]{@{}c@{}}block-insertion\\\id{in-distribution}-poses\end{tabular}} &
\multicolumn{3}{c}{\begin{tabular}[c]{@{}c@{}}block-insertion\\\ood{out-of-distribution}-poses\end{tabular}} &
\multicolumn{3}{c}{\begin{tabular}[c]{@{}c@{}}place-red-in-greens\\\id{in-distribution}-poses\end{tabular}} &
\multicolumn{3}{c}{\begin{tabular}[c]{@{}c@{}}place-red-in-greens\\\ood{out-of-distribution}-poses\end{tabular}}
\\\cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} \cmidrule(lr){11-13}
Method & 1 & 10 & 100 & 1 & 10 & 100 & 1 & 10 & 100 & 1 & 10 & 100 \\\midrule
GT-State MLP & $0$ & $1$ & $1$ & $0$ & $0$ & $1$ & $0$ & $1$ & $3$ & $0$ & $1$ & $1$ \\
Form2Fit~\cite{zakka2020form2fit} & $0$ & $1$ & $10$ & $0$ & $0$ & $0$ & $\mathbf{35}$ & $79$ & $\mathbf{96}$ & $21$ & $30$ & $61$ \\
Transporter-$\textit{SE}(2)$~\cite{zeng2020transporter} & $25$ & $69$ & $73$ & $\mathbf{1}$ & $21$ & $20$ & $30$ & $74$ & $83$ & $\mathbf{25}$ & $18$ & $36$ \\
Transporter-$\textit{SE}(3)$~\cite{zeng2020transporter} & $\mathbf{26}$ & $70$ & $77$ &$0$ & $20$ & $22$ & $29$ & $77$ & $85$ & $23$ & $20$ & $38$ \\
\rowcolor[rgb]{0.792,1,0.792} Ours & $0$ & $\mathbf{84}$ & $\mathbf{89}$ & $0$ & $\mathbf{74}$ & $\mathbf{78}$ & $27$ & $\mathbf{89}$ & $\mathbf{96}$ & $22$ & $\mathbf{56}$ & $\mathbf{77}$ \\
\midrule
&
\multicolumn{3}{c}{\begin{tabular}[c]{@{}c@{}}hanging-disks\\\id{in-distribution}-poses\end{tabular}} &
\multicolumn{3}{c}{\begin{tabular}[c]{@{}c@{}}hanging-disks\\\ood{out-of-distribution}-poses\end{tabular}} &
\multicolumn{3}{c}{\begin{tabular}[c]{@{}c@{}}stacking-objects\\\id{in-distribution}-poses\end{tabular}} &
\multicolumn{3}{c}{\begin{tabular}[c]{@{}c@{}}stacking-objects\\\ood{out-of-distribution}-poses\end{tabular}}
\\\cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} \cmidrule(lr){11-13}
Method & 1 & 10 & 100 & 1 & 10 & 100 & 1 & 10 & 100 & 1 & 10 & 100 \\\midrule
GT-State MLP & $0$ & $0$ & $3$ & $0$ & $0$ & $1$ & $0$ & $1$ & $1$ & $0$ & $0$ & $0$ \\
Form2Fit~\cite{zakka2020form2fit} & $0$ & $11$ & $5$ & $\mathbf{4}$ & $1$ & $3$ & $1$ & $7$ & $12$ & $0$ & $4$ & $5$ \\
Transporter-$\textit{SE}(2)$~\cite{zeng2020transporter} & $6$ & $65$ & $72$ & $3$ & $32$ & $17$ & $0$ & $\mathbf{46}$ & $40$ & $\mathbf{1}$ & $\mathbf{18}$ & $35$ \\
Transporter-$\textit{SE}(3)$~\cite{zeng2020transporter} & $6$ & $66$ & $75$ & $0$ & $32$ & $20$ & $0$ & $42$ & $40$ & $0$ & $16$ & $34$ \\
\rowcolor[rgb]{0.792,1,0.792} Ours & $\mathbf{13}$ & $\mathbf{68}$ & $\mathbf{100}$ & $0$ & $\mathbf{43}$ & $\mathbf{71}$ & $\mathbf{13}$ & $21$ & $\mathbf{76}$ & $0$ & $3$ & $\mathbf{74}$ \\
\bottomrule
\end{tabular}
}
\vspace{1em}
\caption{\textbf{Quantitative results}. Task success rate (mean $\%$) vs. $\#$ of demonstration episodes (1, 10, 100) used in training. Tasks labeled with \id{in-distribution} configure objects with random rotations ($\theta_x, \theta_y \in [-\frac{\pi}{6}, \frac{\pi}{6}], \theta_z \in [-\pi, \pi]$). This is also the rotation distribution used for creating the training set. Tasks labeled with \ood{out-of-distribution} configure objects with rotations $(\theta_x, \theta_y \in [-\frac{\pi}{4}, -\frac{\pi}{6}] \cup [\frac{\pi}{6}, \frac{\pi}{4}], \theta_z \in [-\pi, \pi])$ that are (i) outside the training pose distribution and (ii) larger out-of-plane rotations.}
\label{tab:quantitative}
\end{table*}
\textbf{Results.} \figref{fig:average_plot} shows the average scores of all methods trained on different numbers of demonstrations. GT-State MLP fails completely; Form2Fit cannot achieve a 40\% success rate under any setting; both Transporter-\textit{SE}(2) and Transporter-\textit{SE}(3) are able to achieve a $\sim$70\% success rate when the object poses are sampled from the training distribution, but the success rate drops to $\sim$30\% when the object poses are outside the training distribution.
\begin{wrapfigure}{r}{0.65\textwidth}
\vspace{-1.5em}
\begin{center}
\includegraphics[width=0.65\textwidth]{figures/bar_plots_v2.png}
\end{center}
\caption{Average scores of all methods under both \id{in-distribution} and \ood{out-of-distribution} settings.
}
\label{fig:average_plot}
\vspace{-1em}
\end{wrapfigure}
\ourmethod outperforms all baselines by a large margin when there are enough demonstrations of the task. Its success rate is $\sim$90\% under the setting of \textit{in-distribution} and $\sim$80\% under the setting of \textit{out-of-distribution}. The performance improvement over baselines demonstrates its generalization ability thanks to the action optimization process with novel view synthesis and affordance prediction in the loop.
Interestingly, we found that \ourmethod sometimes performs worse than baselines when only 1 demonstration is provided. We hypothesize that this is because our action-value function needs more data to understand the subtle differences between images rendered from different views in order to select the best view. We show the full quantitative results in~\tabref{tab:quantitative}.
\textbf{Ablation studies.}
To understand the importance of different components within our framework, we benchmark against two variants: (i) ours w/ perspective ray casting, and (ii) ours w/o multi-view negative samples in Eq.~\eqref{eq:training}. We show the quantitative results in~\tabref{tab:ablation}.
We find that ours w/ perspective ray casting fails to learn the action-value functions because (a) the perspective images contain distorted appearances of the objects that are challenging for CNNs to comprehend and (b) the ground-truth picking or placing locations may be occluded by the robot arms. We visualize these challenges in the supplementary materials. Our method with orthographic ray casting circumvents both challenges by controlling the near/far planes to ignore occlusions, without worrying about distortion and scaling.
Ours w/o multi-view negative samples also fails to learn reliable action-value functions; this may be due to a distribution shift between training and testing: during training, it is only supervised to choose the best pixel given an image, yet at test time it must select both the best pixel and the best view.
\begin{table}[ht]
\begin{minipage}[c]{0.7\textwidth}
\centering
\scriptsize
\resizebox{\textwidth}{!} {
\begin{tabular}{l x{0.54cm} x{0.54cm} x{0.54cm} x{0.54cm} x{0.54cm} x{0.54cm}}\toprule
&
\multicolumn{3}{c}{\begin{tabular}[c]{@{}c@{}}block-insertion\\\id{in-distribution}-poses\end{tabular}} &
\multicolumn{3}{c}{\begin{tabular}[c]{@{}c@{}}block-insertion\\\ood{out-of-distribution}-poses\end{tabular}}
\\\cmidrule(lr){2-4} \cmidrule(lr){5-7}
Method & 1 & 10 & 100 & 1 & 10 & 100 \\\midrule
Ours w/ perspective & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
Ours w/o multi-view negatives & $0$ & $11$ & $13$ & $0$ & $2$ & $4$ \\
\rowcolor[rgb]{0.792,1,0.792} Ours & $0$ & $\mathbf{84}$ & $\mathbf{89}$ & $0$ & $\mathbf{74}$ & $\mathbf{78}$ \\
\bottomrule
\end{tabular}
}
\end{minipage}\hfill
\begin{minipage}[c]{0.28\textwidth}
\caption{\textbf{Ablation studies.} We study the effects of ablating orthographic ray casting or multi-view negative samples from our system.}
\label{tab:ablation}
\end{minipage}
\end{table}
\subsection{Real-world Experiments}
We validate our framework with \textcolor{black}{three} kitting tasks in the real world and show qualitative results in~\figref{fig:real}. \textcolor{black}{Additional qualitative results and video can be found in the supplementary material.}
Our system consists of a UR5 arm, a customized suction gripper, and a wrist-mounted camera.
We show that our method can successfully (i) pick up floss cases and pack them into transparent containers, (ii) pick up metal cubes and insert them into the cases, and (iii) pick up a stainless steel ice sphere and place it into 10+ different cups configured with random translations and out-of-plane rotations.
See~\figref{fig:real} for qualitative results.
These tasks are challenging because (i) they include objects with reflective or transparent materials, which makes these tasks
not amenable to existing works that require depth sensors~\cite{zeng2020transporter,simeonov2021neural,zakka2020form2fit},
and (ii) they require out-of-plane action reasoning.
The action-value functions are trained with 20 demonstrations using these cups. \textcolor{black}{Demonstrations are supplied by humans who teleoperate the robot through a customized user interface.}
At the beginning of each pick-and-place, our system gathers 30 $1280 \times 720$ RGB images of the scene with the wrist-mounted camera. \textcolor{black}{These 30 locations are sampled from a circular path on the table and the robot moves the camera to each of them sequentially through inverse kinematics.}
Each image's camera pose is derived from the robotic manipulator's end-effector pose and a calibrated transformation between the end-effector and the camera. This data collection procedure is advantageous as industrial robotic manipulators feature sub-millimeter repeatability, which provides accurate camera poses for building NeRF. In practice, we search through $V=121$ virtual views that uniformly cover the workspace and predict their affordances for optimizing actions. The optimization process currently takes around 2 seconds using a single NVIDIA RTX 2080 Ti GPU. This step can be straightforwardly accelerated by parallelizing the computations with multiple GPUs.
\vspace{-0.5em}
\section{Limitations And Conclusion}
\vspace{-0.5em}
In terms of limitations, our system currently requires training a NeRF of the scene for each step of the manipulation. An instant-NGP~\cite{mueller2022instant} requires approximately 10 seconds to converge using a single NVIDIA RTX 2080 Ti GPU, and moving the robot arm around to collect 30 multi-view RGB images of the scene takes nearly 1 minute. \textcolor{black}{This poses challenges for applying MIRA to tasks that require real-time visuomotor control.} We believe observing the scene with multiple mounted cameras or learning a prior over instant-NGP could drastically reduce the runtime.
In the future, we plan to explore the usage of mental imagery for other robotics applications such as navigation and mobile manipulation.
\section{CUDA Kernel for Orthographic Ray Casting}
\begin{lstlisting}[language=C++, label=code:orthographic, caption=CUDA kernel for orthographic ray casting in instant-NGP~\cite{mueller2022instant}.]
// Orthographic variant of instant-NGP's pixel-to-ray function: all rays
// share the camera viewing direction, and the pixel index only shifts the
// ray origin within the image plane. (spp, focus_z and dof are kept for
// signature compatibility with the perspective version; they are unused.)
inline __host__ __device__ Ray pixel_to_ray_orthographic(
uint32_t spp,
const Eigen::Vector2i& pixel,
const Eigen::Vector2i& resolution,
const Eigen::Vector2f& focal_length,
const Eigen::Matrix<float, 3, 4>& camera_matrix,
const Eigen::Vector2f& screen_center,
float focus_z = 1.0f,
float dof = 0.0f
) {
// Normalized pixel coordinates in [0, 1]^2.
auto uv = pixel.cast<float>().cwiseQuotient(resolution.cast<float>());
// Constant ray direction: the camera's +z axis, rotated into world space.
Eigen::Vector3f dir = {
0.0f,
0.0f,
1.0f
};
dir = camera_matrix.block<3, 3>(0, 0) * dir;
// Per-pixel origin offset along the camera's x-axis.
Eigen::Vector3f offset_x = {
(uv.x() - screen_center.x()) * (float)resolution.x() / focal_length.x(),
0.0f,
0.0f
};
offset_x = camera_matrix.block<3, 3>(0, 0) * offset_x;
// Per-pixel origin offset along the camera's y-axis.
Eigen::Vector3f offset_y = {
0.0f,
(uv.y() - screen_center.y()) * (float)resolution.y() / focal_length.y(),
0.0f
};
offset_y = camera_matrix.block<3, 3>(0, 0) * offset_y;
// Ray origin: camera position shifted within the image plane.
Eigen::Vector3f origin = camera_matrix.col(3);
origin = origin + offset_x + offset_y;
return {origin, dir};
}
\end{lstlisting}
\section{Impact of heavy resonances on the low-energy electroweak effective theory}
So far the Large Hadron Collider (LHC) has not found any trace of beyond
the Standard Model (BSM) states with masses below 1~TeV.
Likewise, no significant deviation
has been observed in the low-energy interactions between Standard Model (SM) particles.
Effective field theories are then the natural approach.
In this talk~\cite{Santos:2015,Pich-preparation} we discuss
the possibility of strongly-coupled BSM scenarios
with the approximate custodial symmetry invariance
of the SM, exact in the SM scalar sector.
We develop an
invariant Lagrangian under $\mathcal{G}=SU(2)_L\times SU(2)_R$, which spontaneously breaks down to
the custodial subgroup $\mathcal{H}=SU(2)_{L+R}$ and generates
the electroweak (EW) would-be Goldstone bosons $\varphi^a$,
described by a unitary $2\times 2$ matrix $U(\varphi)$.
In this (non-linear) EW chiral Lagrangian with a light Higgs (ECLh), the low-energy amplitude $\mathcal{M}$ has an expansion in powers of infrared scales $p$
(external momenta and SM masses)
of the form
(e.g., for $2\to 2$ processes)~\cite{Pich-preparation,Weinberg:1978kz,Georgi-Manohar,Buchalla:2013eza,Guo:2015},
\begin{eqnarray}
\mathcal{M} &\sim & \underbrace{ \Frac{p^2}{v^2} }_{\mbox{LO (tree)}}
\, + \, \bigg(
\, \underbrace{ a_{k}^r }_{\mbox{ NLO (tree) }} \quad -\quad
\underbrace{ \Frac{ \Gamma_{k} }{16\pi^2}\ln\Frac{p}{\mu} \quad +\quad ... }_{
\mbox{NLO (1-loop)} }
\quad
\bigg)
\,\,\, \Frac{p^4}{v^4}\, \,\, +\,\,\, {\cal O}(p^6) \, .
\label{eq.chiral-amp}
\end{eqnarray}
The EW effective theory (EWET) Lagrangian operators can be organized according to their chiral dimension:
\begin{eqnarray}
\mathcal{L}_{\rm EWET}\,=\, \mathcal{L}_2\, +\, \mathcal{L}_4\, +\, ...
\end{eqnarray}
where the operators in $\mathcal{L}_{\hat{d}}$ are of
${\cal O}(p^{\hat{d}})$~\cite{Pich-preparation,Weinberg:1978kz,Georgi-Manohar,Buchalla:2013eza}.
Covariant derivatives and masses are ${\cal O}(p)$~\cite{chpt}
and each fermion field scales like ${\cal O}(p^{1/2})$ in naive dimensional
analysis (NDA)~\cite{Pich-preparation,Georgi-Manohar,Buchalla:2013eza,Buchalla:2013rka}.
The $\mathcal{G}$--invariant operators in $\mathcal{L}_{\rm EWET}$ are built with the Goldstone tensors $U(\varphi)$,
functions $\mathcal{F}_k$ of the Higgs singlet $h$, its derivatives $\partial_{\mu_1}...\partial_{\mu_n} h$,
the gauge fields and the SM fermions $\psi$~\cite{Buchalla:2013rka,Longhitano:1980iz,Morales:94,SILH,Alonso:2012,Grinstein:2007iv}.
{}From the chiral counting point of view $\mathcal{L}^{\rm SM}$ would be ${\cal O}(p^2)$ but its underlying
renormalizable structure makes all $\Gamma_k=0$ and ensures the absence of
higher-dimension divergences~\cite{Guo:2015,Alonso:2015}.
The most important contributions to a given process
are given by the operators of lowest chiral dimension.
The leading order (LO) contribution is ${\cal O}(p^2)$ and is given by tree-level diagrams
with only $\mathcal{L}_2$ vertices.
Likewise, the one-loop contribution with only $\mathcal{L}_2$ vertices is ${\cal O}(p^4)$;
it is suppressed in~(\ref{eq.chiral-amp}) with respect to
the LO by a factor $p^2/\Lambda_{\rm NL}^2$,
with $\Lambda_{\rm NL}\sim 4\pi v\, \Gamma_k^{-1/2}\stackrel{>}{_\sim} 3$~TeV
(with $v=(\sqrt{2} G_F)^{-1/2}\approx 246$~GeV).
This suppression factor is related to the non-linearity of the ECLh
and $\Lambda_{\rm NL}\to \infty$ when the Higgs can be embedded in a complex doublet $\Phi$~\cite{Guo:2015}.~\footnote{
Ref.~\cite{Alonso:2015} provides a geometrical interpretation in terms of the curvature of
the metric of the internal weak space of the Higgs. In the flat-space limit
one has $\Lambda_{\rm NL}\to \infty$. Linear-Higgs scenarios with a complex Higgs doublet $\Phi$
correspond to this case. True ``non-linear models'' are defined by a non-zero curvature,
not by their (non-linear) representation.
}
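For numerical orientation, setting $\Gamma_k={\cal O}(1)$ in the estimate above yields the often-quoted scale (a back-of-the-envelope evaluation, not an additional bound):
\begin{equation*}
\Lambda_{\rm NL}\,\simeq\, 4\pi v\,=\, 4\pi\times 246~\mbox{GeV}\,\approx\, 3.1~\mbox{TeV}\, .
\end{equation*}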
In these proceedings~\cite{Santos:2015,Pich-preparation} we focus our attention on the tree-level
next-to-leading order (NLO) contributions. They are ${\cal O}(p^4)$ and are
provided by tree-level diagrams with
one $\mathcal{L}_4$ vertex with low-energy coupling $a_k$ (LEC) and an arbitrary number of $\mathcal{L}_2$ vertices.
They get contributions from tree-level heavy resonance exchanges.
At low energies, these ${\cal O}(p^4)$ terms in~(\ref{eq.chiral-amp}) are
typically suppressed with respect to the LO amplitude, ${\cal O}(p^2)$,
by a factor $a_kp^2/v^2 \sim p^2/M_R^2$~\cite{Santos:2015,Pich-preparation,Ecker:1988te,Ecker:1989yg}.
At high energies, one must include both the light degrees of freedom (dof; the SM particles)
and the possible composite resonances as active fields
in the Lagrangian~\cite{Santos:2015,Pich-preparation,pseudovector-Cata}:
\begin{eqnarray}
\mathcal{L}
&=& \mathcal{L}_{\rm non-res}
\, +\, \mathcal{L}_R\, ,
\end{eqnarray}
where
$\mathcal{L}_{\rm non-res}$ contains only SM fields and
$\mathcal{L}_R$ is the part of the Lagrangian that also contains resonances~\cite{Santos:2015}.
The part of the interaction Lagrangian $\mathcal{L}_R$ relevant for our analysis
of the $\mathcal{L}_4$ LECs is given by the terms linear in the resonance
fields,
$\Delta \mathcal{L}_R =\, \, R \,\,\mathbb{O}_{p^2}[\chi,\psi]$~\cite{Santos:2015,Pich-preparation,Ecker:1988te,Ecker:1989yg,pseudovector-Cata},
with $\chi$ ($\psi$) referring to the light bosonic (fermionic) fields.
The tensor $\mathbb{O}_{p^2}[\chi,\psi]$ that couples the heavy resonance $R$ to the
light dof is going to provide the first correction to the low-energy ECLh
by means
of diagrams where one has a heavy resonance propagator $\sim 1/M_R^2$ exchanged between
two vertices with $\mathbb{O}_{p^2}[\chi,\psi]$.
This gives an EWET operator of ${\cal O}(p^4)$.
At low energies, resonance operators with tensors $\mathbb{O}[\chi,\psi]$ of a higher order in $p$
or containing two or more $R$ fields contribute only to $\mathcal{L}_{\hat{d}}$ with $\hat{d}>4$.
The tree-level contribution to $\mathcal{L}_{\rm EWET}[\chi,\psi]$
is given by the underlying high-energy action $S[\chi,\psi, R]$
with the resonance fields $R$ evaluated at the classical solution $R_{\rm cl}(\chi,\psi)$
of their equations of motion (EoM).
Solving the resonance EoM and expanding their solutions in powers of momenta for $p\ll M_R$,
one can write the heavy fields as local operators of the EWET dof~\cite{Ecker:1988te}.
This prediction for the contribution to the low-energy ECLh
can be complemented through
the consideration of
ultraviolet-completion hypotheses (sum-rules~\cite{WSR,Peskin:92}, unitarity~\cite{Ecker:1989yg},
asymptotic form-factor counting rules~\cite{Brodsky-Lepage}...).
This imposes constraints on the resonance couplings that then turn into predictions
for the low-energy theory.
\section{Phenomenological example: vector form-factors}
Let us illustrate this with a basic example. We consider a colourless triplet vector resonance $V$
in a composite theory with the same symmetries as the scalar sector of
the SM (invariance under parity and charge conjugation),
with its high energy interaction provided by the Lagrangian~\cite{Santos:2015,Pich-preparation},
\begin{eqnarray}
\Delta\mathcal{L}_V^{(A)} &=& \langle \, V_{\mu\nu}\,\, \mathbb{O}_V^{\mu\nu}\,\rangle
\, ,
\qquad
\mathbb{O}_V^{\mu\nu} \,=\, \Frac{F_V}{2\sqrt{2}} f_+^{\mu\nu}
\,+\, \Frac{i G_V}{2\sqrt{2}} [u^\mu , u^\nu]
\,+\, \Frac{ c_1^V}{2}\,\left( \nabla^\mu J_V^\nu -\nabla^\nu J_V^\mu\right)/v^2
\, ,
\label{eq.example-L}
\end{eqnarray}
with $\langle \,...\,\rangle$ for the matrix trace, $u_\mu= i u (D_\mu U)^\dagger u$, the combinations
$f_\pm^{\mu\nu} = u^\dagger \hat{W}^{\mu\nu} u \pm u \hat{B}^{\mu\nu} u^\dagger$
of the left and right field-strength tensors $\hat{W}^{\mu\nu}$ and $\hat{B}^{\mu\nu}$,
respectively, and $U=u^2=\exp\{ i \varphi^a\sigma^a/v\}$~\cite{Rosell:2012,Pich:2012dv}.
The precise definition of the covariant derivatives $D_\mu$
and $\nabla_\mu$ can be found in~\cite{Rosell:2012,Pich:2012dv}. The tensor
$J_V^\mu = - {\rm Tr_D}\{ \xi \bar{\xi} \gamma^\mu\}$
introduces the fermionic vector current
in a covariant way, with $\xi=u \psi_R+ u^\dagger \psi_L$
given by the $SU(2)_{R,L}$ doublets $\psi_{R,L}=\frac{1}{2}(1\pm \gamma_5)\psi$,
with $\psi=(t,b)^T$ (other SM doublets can be also added~\cite{Guo:2015})
and the Dirac trace ${\rm Tr_D}$.
The superscript $(A)$ refers to the
antisymmetric tensor formulation employed for the spin--1 resonance~\cite{Ecker:1988te}.
The full Lagrangian may contain additional operators
not relevant for the form-factors analyzed in this talk~\cite{Pich-preparation}.
Integrating out $V$ one gets a contribution
to the EWET, which at lowest order is given by
\begin{eqnarray}
\Delta\mathcal{L}_{\rm EWET}^{\rm from\, V} &=&
\Frac{ \langle \,\mathbb{O}_V^{\mu\nu}\,\rangle^2 }{2M_V^2}
- \Frac{\langle \, \mathbb{O}_V^{\mu\nu} \mathbb{O}_{V\,\mu\nu}\,\rangle }{M_V^2}
= \underbrace{ -\,i\, \Frac{F_V G_V}{4 M_V^2} }_{=\, i\, \mathcal{F}_3/2}
\, \langle \, f_+^{\mu\nu} [u_\mu , u_\nu]\,\rangle
\,\,\,
\underbrace{\, -\,
\Frac{F_V c_1^V}{\sqrt{2} M_V^2} }_{=\,\mathcal{F}^{X\psi^2}}
\, \langle \, f_+^{\mu\nu} \nabla_\mu J_{V\,\, \nu}/v^2 \,\rangle \,\,
+\,\, ...
\label{eq.EWET}
\end{eqnarray}
with the dots standing for other effective operators not relevant in these proceedings.
For the Higgsless part, one has $\mathcal{F}_3=a_2-a_3$
in Longhitano's notation of~\cite{Longhitano:1980iz,Morales:94}. In what follows, we will focus
on the Higgsless sector and $\mathcal{F}_3,\mathcal{F}^{X\psi^2},F_V,G_V$ and $c_1^V$
simply represent coupling constants.
The resonance Lagrangian~(\ref{eq.example-L}) provides the vector form-factors
of the $L+R$ current into two-Goldstones and
into two-fermions~\cite{Pich-preparation,Barbieri:2008,Rosell:2012,Pich:2012dv}:
\begin{eqnarray}
\mathbb{F}^v_{\varphi\varphi}(q^2)\,=\, 1\,+\, \Frac{F_V G_V}{v^2}\, \Frac{q^2}{M_V^2-q^2}\, ,
\qquad\qquad
\mathbb{F}^v_{f\bar{f}}(q^2)\,=\, 1\,-\, \Frac{ \sqrt{2} F_V c_1^V }{v^2} \, \Frac{q^2}{M_V^2-q^2}\, ,
\label{eq.VFFs-A}
\end{eqnarray}
with momentum transfer $q^\mu$.
The squared form-factors $|\mathbb{F}^v_{ii}(s)|^2$
contribute to the $S$-parameter at one-loop
through the Peskin-Takeuchi sum-rule on the left-right correlator
$\Pi_{W^3B}$~\cite{Peskin:92}. If one requires that these form-factors give an ultraviolet-convergent
contribution
to the sum-rule, they must vanish at $q^2\to\infty$ and one obtains short-distance (SD)
constraints~\cite{Ecker:1989yg,Barbieri:2008,Rosell:2012,Pich:2012dv}
and predictions for the~LECs~\cite{Santos:2015,Pich-preparation,Ecker:1989yg}:
\begin{eqnarray}
F_V G_V\, =\, v^2 \quad&\longrightarrow&\quad
\mathcal{F}_3
=(a_2-a_3)
\,=\, -\, \Frac{F_V G_V}{2 M_V^2}
\quad \stackrel{{\rm SD \ constr.}}{=} \quad -\, \Frac{v^2}{2 M_V^2}
\, .
\label{eq.SD}
\end{eqnarray}
For $M_V>1.5$~TeV
one finds the bound
\begin{eqnarray}
-\, 1.3\cdot 10^{-2}\, \, < \,\,\, \mathcal{F}_3= (a_2-a_3) \, \, \, <\, \, 0\, \, .
\end{eqnarray}
One can obtain analogous bounds for the LEC $\mathcal{F}^{X\psi^2} =v^2/(2 M_V^2)$ by demanding
a similar SD behaviour
$\mathbb{F}^v_{f\bar{f}}(q^2)\stackrel{q^2\to \infty}{\longrightarrow} 0$
to the fermion form-factor, which would give $\sqrt{2} F_V c_1^V\, =\, -\, v^2$.
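As a quick numerical cross-check of these bounds, one may simply evaluate the expressions above with $v=0.246$~TeV and the reference mass $M_V=1.5$~TeV (a minimal sketch; variable names are illustrative):
\begin{verbatim}
v, M_V = 0.246, 1.5                  # TeV

F3     = -v**2 / (2 * M_V**2)        # F_3 = -v^2/(2 M_V^2)
FXpsi2 =  v**2 / (2 * M_V**2)        # F^{X psi^2} = v^2/(2 M_V^2)

print(F3, FXpsi2)                    # -> -0.0134, +0.0134
\end{verbatim}
reproducing, up to rounding, the quoted bound of $1.3\cdot 10^{-2}$, which shrinks quadratically for heavier resonances.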
\subsection{$\mathbb{F}^v_{\varphi\varphi}$ form-factor: S-parameter}
The impact of the bosonic form-factor $\mathbb{F}^v_{\varphi\varphi}$
on the oblique parameters $S$ and $T$
was studied in a dispersive one-loop resonance analysis~\cite{Barbieri:2008,Rosell:2012,Pich:2012dv},
where the lightest triplet vector ($V$) and axial-vector ($A$) resonances were taken into account.
Therein, the contribution from the Goldstone and Higgs absorptive channels was incorporated.
In particular, $\mathbb{F}^v_{\varphi\varphi}(q^2)$ determined the contribution from
the $\varphi\varphi$ and $B\varphi$ cuts to the $S$ and $T$ parameters,
respectively~\cite{Pich:2012dv}.
We studied asymptotically-free strongly coupled theories,
where $\Pi_{W^3B}$ satisfies the two Weinberg Sum Rules (WSRs),
and scenarios with weaker ultraviolet (UV) conditions (only the 1st WSR applies)
such as Conformal~\cite{Orgogozo:2012} or Walking~\cite{WTC} Technicolour,
obtaining the 68\% confidence level determinations~\cite{Pich:2012dv}:
\begin{eqnarray}
0.97\, <\, \kappa_W=M_V^2/M_A^2\, <1\, ,& \quad
M_V&\, >\, 5\, \mbox{TeV}
\quad (\mbox{1st \& 2nd WSR})\, ,
\\
0.84\, <\, \kappa_W\, <1.30\, ,& \quad
M_V&\, >\, 1.5\, \mbox{TeV}
\,\,\, (\mbox{only 1st WSR, for } 0.5<M_V/M_A<1 )
\, ,
\nonumber
\end{eqnarray}
where $\kappa_W$ denotes the $hWW$ (and $h\varphi\varphi$) coupling in SM units ($\kappa_W^{\rm SM} = 1$).
\subsection{$\mathbb{F}^v_{f\bar{f}}$ form-factor: $Z\to f\bar{f}$ anomalous couplings}
The $v_f$ and $a_f$ constants that parametrize
the $Z\to f\bar{f}$ decay have the form~\cite{Pich:2012sx},
\begin{eqnarray}
v_f \,=\, T_3^f\, -\, 2 \, Q_f\,\sin^2\theta_W\,
+ \, (\delta g_R^{Zf}+ \delta g_L^{Zf}) \, ,
\qquad \qquad
a_f \,=\, T_3^f
\, + \, (\delta g_R^{Zf} - \delta g_L^{Zf}) \, ,
\end{eqnarray}
with $T_3^t=+1/2$, $T_3^b=-1/2$, the electric charge $Q_f$,
the weak angle $\theta_W$ and the new physics parametrized through the $\delta g_{R,L}^{Zf}$,
given in our low-energy description by
\begin{eqnarray}
|\delta g_{R,L}^{Zf}|
\, =\,
|\mathcal{F}^{X\psi^2}|\, \cos(2\theta_W)\, m_Z^2/v^2
\, ,
\end{eqnarray}
in agreement with current bounds of ${\cal O}(10^{-3})$~\cite{deltag-exp}:
for the fermion coupling one gets
$ \mathcal{F}^{X\psi^2} \,\sim \, v^2/(2 M_V^2)< 1.3\cdot 10^{-2} $
from the previous resonance coupling estimate $\sqrt{2} F_V c_1^V=-v^2$
and the bound $M_V>1.5$~TeV~\cite{Pich:2012dv}, together with the experimental value
$\cos(2\theta_W) \, m_Z^2/v^2= 0.07$.
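Putting these numbers together gives a simple order-of-magnitude estimate of the expected effect,
\begin{equation*}
|\delta g_{R,L}^{Zf}|\,\lesssim\, 1.3\cdot 10^{-2}\times 0.07\,\approx\, 9\cdot 10^{-4}\, ,
\end{equation*}
comfortably within the quoted ${\cal O}(10^{-3})$ experimental bounds.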
\section{Equivalent Proca four-vector representation}
Through an appropriate duality transformation in the generating functional
it is possible to rewrite the underlying
resonance Lagrangian $\mathcal{L}^{(A)}$ in~(\ref{eq.example-L})
as a Proca Lagrangian $\mathcal{L}^{(P)}$ in terms of a four-vector field $\hat{V}_\mu$
and its field strength tensor
$\hat{V}_{\mu\nu}=\nabla_\mu \hat{V}_\nu - \nabla_\nu \hat{V}_\mu$.
A similar procedure~\cite{Pich-preparation,Ecker:1989yg,Bijnens:1995}
can be applied to models where the resonances are introduced
as gauge fields~\cite{gauge-resonances}.
In the process, additional non-resonant operators with only light dof
are generated, which guarantee a proper UV behaviour~\cite{Ecker:1989yg,Barbieri:2008,Bijnens:1995}.
On-shell, this duality can be read as $V^{\alpha\beta}=\hat{V}^{\alpha\beta}/M_V$
and $\nabla_\rho V^{\rho\mu}= - M_V\hat{V}^\mu$.
In our particular case, the duality transformation~\cite{Pich-preparation,Bijnens:1995}
changes the antisymmetric tensor
Lagrangian~(\ref{eq.example-L}) into
\begin{eqnarray}
\mathcal{L}^{(A)} \longrightarrow
\mathcal{L}^{(P)}
=&& \langle \, \hat{V}_{\mu\nu}\,\,\left( \Frac{f_{\hat{V}}}{2\sqrt{2}} f_+^{\mu\nu}
+\Frac{i g_{\hat{V}} }{2\sqrt{2}} [u^\mu , u^\nu]\right)
+ \hat{V}_{\mu}\,\,\left( \zeta_{\hat{V}}\, J_V^\mu/v^2 \,
\right)\,\rangle
\nonumber\\
&&
-\, \langle \, \left( \Frac{f_{\hat{V}}}{2\sqrt{2}} f_+^{\mu\nu}
\,+\, \Frac{i g_{\hat{V}} }{2\sqrt{2}} [u^\mu , u^\nu]\right)^2\,\rangle
\, ,
\label{eq.example-L-Proca}
\end{eqnarray}
with $f_{\hat{V}}=F_V/M_V$, $g_{\hat{V}}=G_V/M_V$ and $\zeta_{\hat{V}}= c_1^V M_V$.
In the low-energy limit $p\ll M_V$, Eq.~(\ref{eq.example-L-Proca}) leads to the same EWET,
\begin{eqnarray}
\mathcal{L}_{\rm EWET}
&=&
-\,i\, \Frac{f_{\hat{V}} g_{\hat{V}}}{4}
\, \langle \, f_+^{\mu\nu} [u_\mu , u_\nu]\,\rangle
\,\,\,
-\,
\Frac{f_{\hat{V}} \zeta_{\hat{V}}}{\sqrt{2} M_V^2}
\, \langle \, f_+^{\mu\nu} \nabla_\mu J_{V\,\, \nu}/v^2 \,\rangle \,\,
+\,\, ...
\end{eqnarray}
The same agreement is found for the two form-factors previously obtained in~(\ref{eq.VFFs-A}):
\begin{eqnarray}
\mathbb{F}^v_{\varphi\varphi}(q^2)\,=\, 1 \, +\, \Frac{f_{\hat{V}} g_{\hat{V}} }{v^2} q^2
\,+\, \Frac{f_{\hat{V}} g_{\hat{V}} }{v^2} \Frac{q^4}{M_V^2-q^2}
\, ,
\qquad
\mathbb{F}^v_{f\bar{f}}(q^2)\,=\,
1 \, - \, \Frac{ \sqrt{2} f_{\hat{V}} \zeta_{\hat{V}}}{v^2} \Frac{q^2}{M_V^2-q^2}\, .
\end{eqnarray}
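Once $f_{\hat{V}} g_{\hat{V}}=F_V G_V/M_V^2$ is substituted, the equality of the two Goldstone form-factors is an exact algebraic identity, not just a low-energy expansion; a small symbolic sketch verifying it (variable names are illustrative, with {\tt c} standing for $F_V G_V$):
\begin{verbatim}
import sympy as sp

q2, MV, c, v = sp.symbols('q2 M_V c v', positive=True)

F_A = 1 + c/v**2 * q2/(MV**2 - q2)               # antisym.-tensor rep.
F_P = (1 + (c/MV**2)/v**2 * q2
         + (c/MV**2)/v**2 * q2**2/(MV**2 - q2))  # Proca rep.

print(sp.simplify(F_A - F_P))                    # -> 0: identical
\end{verbatim}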
\section{Conclusions}
The EWET couplings can be predicted in terms
of resonance parameters; different resonance quantum numbers lead to different
patterns for the LECs~\cite{Santos:2015,Ecker:1988te,pseudovector-Cata}.
Further assumptions about the UV structure of the underlying theory
can be used to refine the predictions~\cite{Santos:2015,Pich:2012dv}.
In this talk we have provided a couple of examples
(oblique parameters $S$ and $T$ and the anomalous $Zf\bar{f}$ couplings) to show
that composite resonances with masses of a few TeV ($M_R\sim 4\pi v\approx 3$~TeV)
are compatible with present direct and indirect searches.
The $SU(2)_L\times SU(2)_R$ chiral invariance of the ECLh leads to
an appropriate low-energy suppression of tree-level NLO corrections
by factors $a_k p^2/v^2\sim p^2/M_R^2$
with respect to the LO prediction, ${\cal O}(p^2)$~\cite{Santos:2015,Ecker:1988te,Ecker:1989yg}.
Finally, we have shown the equivalence between the antisymmetric tensors $V^{\mu\nu}$
and Proca four-vectors $\hat{V}^\alpha$ representations for spin--1 fields~\cite{Ecker:1989yg,Bijnens:1995}.
\section{Introduction}
\label{SecIntro}
It is now widely agreed that the Large Magellanic Cloud (LMC),
the most luminous satellite of the Milky Way (MW), is at a particular
stage of its orbit. Its large Galactocentric velocity ($\sim 328$
km/s) is dominated by the tangential component ($\sim 320$ km/s) and
is much higher than all plausible estimates of the MW circular velocity
at its present distance of $\sim 50$ kpc \citep[see; e.g.,][and
references therein]{Kallivayalil2013,GaiaColl2018}. This implies that
the LMC is close to the pericentre of a highly eccentric orbit with
large apocentric distance and long orbital times. Together with the
presence of a clearly associated close companion \citep[the Small
Magellanic Cloud, SMC, see; e.g.,][]{Westerlund1990,DOnghia2016}, the
evidence strongly suggests that the Clouds are just past their first
closest approach to the Galaxy
\citep{Besla2007,BoylanKolchin2011a,Patel2017}.
The particular kinematic stage of the LMC, together with the
relatively high stellar mass of the Clouds
\citep[$M_*\sim 2.5\times 10^9\, M_\odot$;][]{Kim1998}, offer clues
about the MW virial\footnote{We shall refer to the virial boundary of
a system as the radius where the mean enclosed density is
$200\times$ the critical density for closure. We shall refer to
virial quantities with a ``200'' subscript.} mass and insight into
the hierarchical nature of galaxy clustering in the dwarf galaxy
regime.
Clues about the MW mass fall into two classes. One concerns
the relation between virial mass and satellite statistics; namely, the
more massive the Milky Way halo the higher the likelihood of hosting a
satellite as massive as the LMC. Empirically, observational estimates
suggest that up to $\sim 40\%$ of $L^*$ galaxies may host a satellite
as luminous as the LMC within $\sim 250$ kpc, and that there is up to a $10\%$ chance
of having one within $\sim 50$ kpc \citep{Tollerud2011}. This result
has been interpreted as setting a lower limit on the MW virial mass
of roughly $\sim 10^{12}$ M$_\odot$
\citep{Busha2011,BoylanKolchin2011a,Patel2017, Shao2018}.
The other class relates to kinematics; if the LMC is near its first
pericentric passage, its velocity, not yet affected substantially by
dynamical friction, should reflect the total acceleration experienced
during its infall. If, as seems likely, that infall originated far from the MW virial
boundary, then the LMC velocity would provide a robust estimate of the
MW escape velocity at its present location. This assumes, of course,
that the LMC is bound to the MW, an argument strongly supported by its
status as the most luminous and, hence, most massive
satellite. Unbound satellites are indeed possible, but they tend to
occur during the tidal disruption of groups of dwarfs, and to affect
{\it only} the least massive members of a group \citep[see;
e.g.,][]{Sales2007}.
A strong constraint on the MW escape velocity at $r\sim 50$ kpc,
$V^{\rm MW}_{\rm esc}$, could help to discriminate between competing
Galactic potential models by adding information at a distance where
other tracers are scarce and where commonly-used Galactic potential
models often disagree \citep[see; e.g.,][]{Irrgang2013,
Bovy2015,GaravitoCamargo2019,Errani2020}. For example,
$V^{\rm MW}_{\rm esc}$ at $50$ kpc varies between $\sim 450$ km/s and
$\sim 330$ km/s for the four Galactic models proposed in these
references.
The peculiar kinematic state of the LMC adds complexity to the
problem, but also offers unique opportunities. On the one hand, the
short-lived nature of a first pericentric passage implies that the MW
satellite population is in a transient state and out of
dynamical equilibrium. This compromises the use of simple equilibrium
equations to interpret the dynamics of the MW satellites, and reduces
the usefulness of the MW satellites as a template against which the
satellite populations of external galaxies may be contrasted.
However, it also offers a unique opportunity to study the satellites
of the LMC itself. If on first approach, most LMC-associated
dwarfs should still lie close to the LMC itself, as the Galactic tidal
field would not have had time yet to disperse them
\citep{Sales2011}. If we can disentangle the LMC satellite population
from that of the MW then we can directly study the satellite
population of a dwarf galaxy, with important applications to our ideas
of hierarchical galaxy formation \citep{DOnghia2008} and to the
relation between galaxy stellar mass and halo mass at the faint-end of
the galaxy luminosity function \citep{Sales2013}.
The issue of which MW satellites are ``Magellanic'' in origin has
been the subject of several recent studies, mainly predicated
on the idea that LMC satellites should today have positions and
velocities consistent with what is expected for the tidal debris of the
LMC halo \citep{Sales2011,Yozin2015,Jethwa2016}. One application of these
ideas is that LMC satellites should accompany the LMC orbital motion
and, therefore, should have orbital angular momenta roughly parallel
to that of the LMC.
Using such dynamical premises, current estimates
based on accurate proper motions from \textit{Gaia}-DR2 have suggested at least
four ultrafaint dwarfs (Car 2, Car 3, Hor 1, and Hydrus 1) as highly probable
members of the LMC satellite system
\citep{Kallivayalil2018,Fritz2018}, an argument supported and extended
further by semianalytic modeling of the ultrafaint population
\citep{Dooley2017b, Nadler2019}.
Taking into account the combined gravitational potential of the MW+LMC
system might bring two extra candidates (Phx 2 and Ret 2) into plausible
association with the LMC \citep{Erkal2020,Patel2020}. Revised kinematics
for the classical dwarfs have also led to suggestions that the Carina
and Fornax dSph could have been brought in together with the LMC
\citep{Jahn2019,Pardy2020}. Further progress requires refining and
extending membership criteria in order to establish the identity of the
true Magellanic satellites beyond doubt.
Much of the progress reported above has been made possible by LMC
models based on tailored simulations where the Milky Way and the LMC
are considered in isolation, or on dark matter-only cosmological
simulations where luminous satellites are not explicitly
followed. This paper aims at making progress on these issues by
studying the properties of satellite systems analogous to the LMC
identified in cosmological hydrodynamical simulations of Local Group
environments from the APOSTLE project.
The paper is organized as follows. We describe our numerical datasets
in Sec.~\ref{SecNumSims}, and the identification of LMC analogues in
APOSTLE in Sec.~\ref{SecLMCanalogs}. The satellites of such analogues,
and their effect on the primary satellite population, are explored in
Sec.~\ref{SecLMCAssocSats}. Finally, Sec.~\ref{SecLMCIdCrit} uses
these results to help identify Magellanic satellites in the MW and
Sec.~\ref{SecVesc} considers the constraints placed by the LMC on the
MW escape velocity and Galactic potential. We conclude with a brief
summary in Sec.~\ref{SecConc}.
\begin{figure*}
\includegraphics[width=0.7\linewidth]{figs/LMC_image_Azi.pdf}
\caption{ An image of an APOSTLE simulation volume that includes an
LMC analogue as defined in this work (labelled 1-1-1 in subsequent
figures and tables). The upper panel shows the dark matter
distribution of the Local group-like environment, with the M31
analogue in the upper right part of the panel, and the MW analogue in
the bottom left. The area enclosed in a rectangle, which includes
the MW and LMC analogues, is shown in the bottom-left and bottom-right
panels in stellar and gas density projections, respectively. The
LMC analogue is the object located on the lower right in the bottom
panels. Note the purely gaseous stream that emerges from it, with
no stellar counterpart, reminiscent of a ``Magellanic stream''. }
\label{fig:LMCimage}
\end{figure*}
\section{Numerical Simulations}
\label{SecNumSims}
All simulations used in this paper adopt a flat $\Lambda$CDM model
with parameters based on WMAP-7 \citep{Komatsu2011}:
$\Omega_{\rm m}=0.272$, $\Omega_{\Lambda}=0.728$,
$\Omega_{\rm bar}=0.0455$, $H_0=100\, h\, \kms \, {\rm Mpc}^{-1}$,
$\sigma_8=0.81$, with $h=0.704$.
\subsection{The DOVE simulation}
We use the DOVE cosmological N-body simulation to study the frequency
of massive satellites around Milky Way-mass haloes and possible
environmental effects in Local Group volumes. DOVE evolves a
$100^3 \Mpc^3$ cosmological box with periodic boundary conditions
\citep{Jenkins2013} with $1620^3$ collisionless particles with mass
per particle $m_{\rm p}= 8.8 \times 10^6$ $\; \rm M_\odot$. The initial
conditions for the box were made using \textsc{panphasia}
\citep{Jenkins2013} at $z=127$, and were evolved to $z=0$ using the
Tree-PM code \textsc{P-Gadget3}, a modified version of the publicly
available \textsc{Gadget-2} \citep{Springel2005b}.
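These numbers are mutually consistent, as a quick back-of-the-envelope sketch shows (assuming the standard value of the critical density):
\begin{verbatim}
h, Omega_m = 0.704, 0.272
rho_c = 2.775e11 * h**2              # critical density [Msun / Mpc^3]
m_p = Omega_m * rho_c * 100.0**3 / 1620**3
print(m_p)                           # -> ~8.8e6 Msun, as quoted
\end{verbatim}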
\subsection{The APOSTLE simulations}
The APOSTLE project is a suite of ``zoom-in'' cosmological
hydrodynamical simulations of twelve Local Group-like environments,
selected from the DOVE box \citep[][]{Sawala2016}. These Local Group
volumes are defined by the presence of a pair of haloes whose masses,
relative radial and tangential velocities, and surrounding Hubble flow
match those of the Milky Way-Andromeda pair \citep[see][for
details]{Fattahi2016}.
APOSTLE volumes have been run with the EAGLE (Evolution and Assembly
of GaLaxies and their Environments) galaxy formation code
\citep{Schaye2015,Crain2015}, which is a modified
version of the Tree-PM SPH code {\sc P-Gadget3}. The subgrid physics
model includes radiative cooling, star formation in regions denser
than a metallicity-dependent density threshold, stellar winds and
supernovae feedback, homogeneous X-ray/UV background radiation, as
well as supermassive black-hole growth and active galactic nuclei
(AGN) feedback (the latter has substantive effects only on very
massive galaxies and its effects are thus essentially negligible in
APOSTLE volumes).
The model was calibrated to approximate the stellar mass function of
galaxies at $z=0.1$ in the stellar mass range of
$M_{\rm star}= 10^8$-$10^{12}\, M_\odot$, and to yield realistic
galaxy sizes. This calibration means that simulated galaxies follow
fairly well the abundance-matching
relation of \citet{Behroozi2013} or \citet{Moster2013}
\citep[see][]{Schaye2015}.
Although dwarf galaxy sizes were not used to adjust the model,
they are nevertheless in fairly good agreement with observational
data \citep{Campbell2017}. Isolated dwarf galaxies follow
as well a tight $M_{\rm star}$-$V_{\rm max}$ relation \citep[see
Fig.~1 in][]{Fattahi2018}, consistent with extrapolations of
abundance-matching models. The APOSTLE simulations have been run
at three different levels of resolution, all using the ``Reference''
parameters of the EAGLE model. In this work we use the medium
resolution runs (labelled ``AP-L2''), with initial dark matter and gas
particle masses of $m_{\rm dm}\sim 5.9 \times 10^6 \Msun$ and
$m_{\rm gas}\sim 1.2\times10^5 \Msun$, respectively. As in DOVE,
haloes and subhaloes in APOSTLE are identified using a
friends-of-friends groupfinding algorithm \citep{Davis1985} and
\textsc{subfind} \citep{Springel2001}. These have been linked between
snapshots by means of merger trees, which allow us to trace individual
systems back in time \citep{Qu2017}.
\begin{figure}
\includegraphics[width=\columnwidth]{figs/vmax_mstar_24satLMC.pdf}
\caption{$V_{\rm max}$-$M_{*}$ relation for the most massive
satellites (crosses) of the 24 primaries (circles; i.e., MW and M31
analogues) from the 12 APOSTLE-L2 volumes at $z=0$. The shaded area delimits
the $M_{*}$ range around the LMC's observed stellar mass value (star
symbol) chosen to search for LMC-analogue candidates. The final LMC
analogues that were selected for analysis in this work (see
Sec.~\ref{SecLMCOrbits}), and their corresponding primaries, are
shown in red. A line shows the average
$V_{\rm max}$-$M_{*}$ relation for APOSTLE centrals from
\citet{Fattahi2018}. }
\label{fig:vmaxmstar}
\end{figure}
\subsection{Galaxy identification}
\label{SecGalID}
Particles in the simulations are grouped together using the
friends-of-friends algorithm \citep[FoF;][]{Davis1985}, with a linking
length of $0.2$ times the mean inter-particle separation. Self-bound
substructures within the FoF groups are identified
using \textsc{subfind} \citep{Springel2001}. We refer to the most massive subhalo
of a FoF group as ``central'' or ``primary'' and to the remainder as
``satellites''.
APOSTLE galaxies and haloes are identified as bound structures, found
by \textsc{subfind} within $3$ Mpc from the main pair barycentre. We
hereafter refer to the MW and M31 galaxy analogues as
``primaries''. Satellites are identified as galaxies located within
the virial radius of each of the primaries.
The objects of study in this paper have been assigned an identifier in
the form {\tt Vol-FoF-Sub}, where {\tt Vol} refers to the corresponding
APOSTLE volume \citep[ranging from 1 to 12, see Table~2 in][]{Fattahi2016},
and {\tt FoF} and {\tt Sub} correspond to the FoF and
\textsc{subfind} indices, respectively. These indices are computed
for the snapshot corresponding to $z=0$ for LMC analogues (see Tab.~\ref{tab:periapo})
or for the snapshot corresponding to ``identification time'' ($t_{\rm id}$, see Sec.~\ref{SecLMCAssocSats})
for LMC-associated satellites.
We identify the stellar mass, $M_*$, of a subhalo with that of all stellar
particles associated with that system by \textsc{subfind}.
\section{LMC analogues in APOSTLE}
\label{SecLMCanalogs}
Fig.~\ref{fig:LMCimage} illustrates the distribution of dark matter,
gas, and stars in one of the APOSTLE volumes at $z\sim 0$. The upper
panel illustrates the dark matter distribution, centered at the
midpoint of the ``MW-M31 pair''. The M31 analogue is located in the
upper right part of the panel, whilst the MW analogue is in the bottom
left. A rectangle shows the area surrounding the MW analogue shown in
the bottom panels, which show the stellar component (left) and gas
(right). The most massive satellite of the MW analogue is situated at
the lower-right in the bottom panels. Note the purely gaseous trailing
stream that accompanies this satellite, invisible in the stellar
component panel. This is one of the ``LMC analogues'' studied in this paper. We focus here on the stellar mass and
kinematics of LMC analogues and their satellites, and defer the study of
the properties of the Magellanic stream-like gaseous features to a
forthcoming paper.
We search for ``LMC analogues'' in APOSTLE by considering first the
most massive satellites closer than 350 kpc to each of the
two primary galaxies in the 12 APOSTLE volumes. We note that
this distance is somewhat larger than the virial radius of the
primaries at $z=0$ ($\sim200$ kpc, see
Fig.~\ref{fig:LMCorbits}). This prevents us from missing cases of
loosely-bound LMC analogues that may be past first pericentre at $z=0$
and just outside the nominal virial boundary of their primary. This
yields a total of 24 candidates, which we narrow down further by
introducing stellar mass and kinematic criteria, in an attempt to
approximate the present-day configuration of the LMC.
The mass criterion is illustrated in Figure \ref{fig:vmaxmstar}, where
we show the stellar masses of all 24 APOSTLE primaries (circles) and
their corresponding most massive satellites (crosses), as a function
of their maximum circular velocity, $V_{\rm max}$ \citep[see also
figure 7 in][]{Fattahi2016}. For reference, the stellar mass and
circular velocity of the LMC are marked with a star:
$M_*^{\rm LMC}= 2.5\times10^9$ M$_\odot$
\citep{Kim1998} and $V_{\rm max}^{\rm LMC}=92$ km/s \citep{vanderMarel2014}.
We consider as candidate LMC analogues of each primary the most massive
satellite with
$8.75<\log M_\star/M_\odot<10$; i.e., those in the grey shaded area in
Figure \ref{fig:vmaxmstar}. This yields a total of $14$ candidates
with maximum circular velocities in the range $55<V_{\rm max}/$km
s$^{-1}<130$. For reference, this velocity range corresponds to a
virial mass range of roughly
$2.5\times10^{10}<M_{200}/M_\odot<4.5\times10^{11}$ for isolated
halos. Of the 14 LMC candidates, we retain only 9 for our analysis
(indicated in red in Figure \ref{fig:vmaxmstar}) after applying an
orbital constraint described in more detail below
(Sec.~\ref{SecLMCOrbits}).
\begin{figure*}
\includegraphics[width=17cm]{figs/pairs.pdf}
\caption{\textit{Left}: Separation vs. relative radial velocity of
halo pair members in DOVE. Open circles indicate MedIso sample galaxies
(see text for details). Filled blue circles correspond to a
subsample of MedIso pairs that further satisfies a total mass cut of
$\log \, ((M_{200,1}+M_{200,2}) /\Msun)=[12.2,12.6]$. Crosses mark
HiIso sample galaxies with the aforementioned total mass cut. APOSTLE
pairs, which are a subsample of the MedIso sample, are highlighted
with small orange circles. Dotted lines indicate timing
argument solutions for total masses of $\log (M/\Msun)=12.2$ and
$12.6$, as labelled. \textit{Right}: Total mass,
i.e. $M_{200,1}+M_{200,2}$, distribution of all the MedIso pairs
shown in the left hand panel. The shaded blue region indicates the
additional total mass constraint of
$\log (M_{\rm tot}/\Msun)=[12.2,12.6]$. An orange histogram shows
the total mass distribution of APOSTLE pairs.}
\label{fig:pairs}
\end{figure*}
\begin{figure}
\hspace{-0.2cm}
\includegraphics[width=\columnwidth]{figs/nu_function2_referee.pdf}
\caption{Subhalo $\vmax$ function, normalized by the host virial
velocity $V_{200}$ (i.e. $\nu=\vmax/V_{200,\rm host}$), for
subhaloes within $r_{200}$ of MW-mass haloes in DOVE. The black line
corresponds to the average result for 2028 subhaloes around isolated
haloes with mass $\log(M_{200}/\Msun)=[11.7,12.4]$. The fit to the
normalized $\vmax$ function from \citet{Wang2012} is shown with the
red dashed line. The average relation for haloes in the MedIso and
HiIso pair samples are presented with the light-blue solid line and
dark blue dashed-dotted line, respectively. The average result for
subhaloes around APOSTLE primaries is shown with the orange
connected circles.
Error bars on the black line indicate the $\pm1\sigma$ dispersion
around the mean, calculated from 1000 102-halo samples randomly
drawn from the DOVE catalogue (same number as MedIso primaries).}
\label{fig:vmaxFnc}
\end{figure}
\subsection{Frequency of LMC-mass satellites}
Fig.~\ref{fig:vmaxmstar} shows that, out of 24 APOSTLE primaries,
$14$ host
nearby
satellites massive enough to be comparable to the LMC.
Of these, 11 are within the virial radius of their host at $z=0$.
This is
a relatively high frequency somewhat unexpected compared with earlier
findings from large cosmological simulations. Indeed, in the
Millenium-II (MS-II) DM-only simulation only $8$ to $27\%$ of MW-mass
haloes with virial masses between $1$ and $2.5\times 10^{12}\, M_\odot$ are
found to host a subhalo at least as massive as that of the LMC within
their virial radii \citep{Boylan-Kolchin2010}.
This apparent tension motivates us to consider potential environmental
effects that may affect the presence of massive satellites. The Local
Group environment, after all, is characterized by a very particular
configuration, with a close pair of halos of comparable mass
approaching each other for the first time. Could this environment
favor the presence and/or late accretion of massive satellites into
the primaries, compared with isolated halos of similar mass?
We explore this using the DOVE simulation, where we identify pairs of
haloes according to well-defined mass, separation, and isolation
criteria in an attempt to approximate the properties of the Local
Group environment. We start by selecting haloes with virial masses
$M_{200}>5\times 10^{11}\Msun$ and select those that are within
(0.5-1.1) Mpc of another halo in the same mass range. We impose then a
mass ratio cut of $M_{200,2}/M_{200,1}>0.3$, in order
to retain pairs with comparable mass members, and similar to the
MW-M31 pair. (Here $M_{200,1}$ refers to the virial mass of the more
massive halo of the pair; $M_{200,2}$ to the other.)
We apply next an isolation criterion such that there is no halo (or
subhalo) more massive than $M_{200,2}$ within
$r_{\rm iso}=2.5$ Mpc, measured from the midpoint of the pair. A stricter
isolation criterion is defined by increasing the isolation radius to
$r_{\rm iso}=5$ Mpc. Following \citet{Fattahi2016}, we refer to the
first isolation as ``MedIso'' and to the stricter one as ``HiIso''.
We do not distinguish between centrals and non-centrals in our pair
selection. In fact, in some cases, pair members share the same FoF
group. These are always the two most massive subhaloes of their FoF
group. Our isolation criterion discards pairs of haloes that are
satellites of a more massive halo.
The relative radial velocity vs. separation of all MedIso pairs is
presented in the left panel of Fig.~\ref{fig:pairs} with open
circles. The total mass, $M_{\rm tot}=M_{200,1}+M_{200,2}$, of these
pairs spans a wide range, as shown by the grey histogram in
the right-hand panel of Fig.~\ref{fig:pairs}. We further select only
pairs with total mass in the range
$\log (M_{\rm tot}/\Msun)=[12.2,12.6]$, as marked by the blue shaded
region in the right panel. This range includes the total masses of
all APOSTLE pairs (yellow histogram in the right panel). MedIso pairs
that satisfy this total mass criterion are highlighted
with blue filled circles in the left panel.
This mass cut excludes pairs with the largest total
masses and most extreme relative radial velocities, which are outliers from the
timing-argument predictions for two point-masses
on a radial orbit approaching each other for the first time (red
dotted curves labelled by the value of $\log M_{\rm tot}/M_\odot$).
We shall hereafter refer to the final sample of DOVE pairs (51 pairs)
that satisfy all the above ``Local Group criteria'' as the ``MedIso sample'';
the criteria are summarized below (a schematic selection sketch follows the list):
\begin{itemize}
\item separation: $0.5$-$1.1\Mpc$
\item minimum mass of individual haloes: $M_{200}>5\times 10^{11} \Msun$
\item comparable mass pair members: $M_{200,2}/M_{200,1}>0.3$
\item total mass of pairs: $\log
(M_{200,1}+M_{200,2})/M_\odot=[12.2,12.6]$
\item MedIso isolation: $r_{\rm iso}=2.5 \Mpc$
\end{itemize}
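A schematic version of this selection is sketched below, assuming a halo catalogue given as NumPy arrays of positions (in Mpc) and virial masses (in $\Msun$); the array names and the brute-force pair search are illustrative only, and periodic boundaries are ignored for brevity:
\begin{verbatim}
import numpy as np

def select_pairs(pos, m200, r_iso=2.5):
    """Return index pairs (i, j) passing the Local Group criteria."""
    cand = np.flatnonzero(m200 > 5e11)       # individual mass floor
    pairs = []
    for a, i in enumerate(cand):
        for j in cand[a + 1:]:
            d = np.linalg.norm(pos[i] - pos[j])
            if not (0.5 < d < 1.1):          # separation cut
                continue
            m1 = max(m200[i], m200[j])
            m2 = min(m200[i], m200[j])
            if m2 / m1 <= 0.3:               # comparable-mass cut
                continue
            if not (12.2 < np.log10(m1 + m2) < 12.6):
                continue                     # total-mass cut
            mid = 0.5 * (pos[i] + pos[j])
            r = np.linalg.norm(pos - mid, axis=1)
            heavier = (r < r_iso) & (m200 > m2)
            heavier[[i, j]] = False          # exclude the pair itself
            if not heavier.any():            # isolation cut
                pairs.append((i, j))
    return pairs
\end{verbatim}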
The final ``HiIso sample'', with 17 pairs, satisfies all the above
conditions but has a stricter isolation criterion of $r_{\rm
iso}=5 \Mpc$. These are marked with crosses in Fig.~\ref{fig:pairs}.
APOSTLE pairs are a subsample of the MedIso group, but with extra
constraints on the
relative radial and tangential velocity between the primaries,
as well as on the Hubble flow velocities of objects surrounding the primaries out to 4 Mpc
\citep[see][for details]{Fattahi2016}. They are marked with small orange filled
circles in the left panel of Fig.~\ref{fig:pairs}, and their total
mass distribution is shown by the orange histogram in the right-hand
panel of the same figure.
We compare in Fig.~\ref{fig:vmaxFnc} the abundance of (massive)
subhaloes around APOSTLE primaries with those of MedIso and HiIso
pairs, as well as with all isolated MW-mass haloes in DOVE. The latter
is a ``control sample'' that includes all central subhalos with
$11.7<\log (M_{200}/M_\odot)<12.4$ found in the DOVE cosmological
box. This mass range covers the range of masses of individual pair
members in APOSTLE and in the MedIso sample.
Fig.~\ref{fig:vmaxFnc} shows the scaled subhalo $\vmax$ function,
i.e. $N (>\nu) \equiv N(>\vmax/V_{\rm 200,host} )$, averaged over host
haloes in various samples. We include all subhaloes within $r_{200}$
of the hosts. The scaled subhalo $\vmax$ function
of the control sample (solid black curve) is consistent with the fit
from \citet{Wang2012}, who used a number of large cosmological simulations and
a wide halo mass range (red dashed curve). The turnover at $\nu < 0.15$
is an artifact of numerical resolution, which limits our ability to
resolve very low mass haloes.
Interestingly, Fig.~\ref{fig:vmaxFnc} shows that, on average, our
various paired samples (MedIso, HiIso, APOSTLE) have an overabundance
of massive subhaloes relative to average isolated
$\sim10^{12}\, M_\odot$ haloes. Indeed, the chance of hosting a
massive subhalo with $\nu>0.6$ almost doubles for halos in LG-like
environments compared with isolated halos.
Error bars on the $\nu$ function of the control sample represent the
$\pm 1\sigma$ dispersion around the average, computed by randomly drawing 102 halos
(matching the number of halos in the MedIso paired sample) from the sample of 2028
DOVE centrals, 1000 times.
We find that only 2 out of 1000 realizations reach the $\langle N(\nu)\rangle$
measured for APOSTLE pairs at $\nu=0.6$, confirming the significance of the result.
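A minimal sketch of this resampling test is given below (with a synthetic stand-in for the per-host counts $N(>\nu)$ at $\nu=0.6$, which in practice come from the \textsc{subfind} catalogues; drawing without replacement is assumed):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_nu = rng.poisson(0.3, size=2028).astype(float)  # placeholder counts
apostle_mean = 0.6                                # illustrative value

means = np.array([rng.choice(n_nu, size=102, replace=False).mean()
                  for _ in range(1000)])

sigma = means.std()                      # +/- 1 sigma error bar
p_val = (means >= apostle_mean).mean()   # fraction reaching APOSTLE value
print(sigma, p_val)
\end{verbatim}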
We note that the overabundance of massive subhaloes in halo pairs
persists when altering the isolation criterion (HiIso vs. MedIso) or
when using more restrictive selection criteria
on the relative kinematics of the halos and the surrounding Hubble flow
(APOSTLE
vs. MedIso). We have additionally checked that imposing tighter
constraints directly on the MedIso sample ($V_{r}=[-250,0]\,\kms$,
$d=[0.6,1]\,\Mpc$) does not alter these conclusions.
Moreover, we have explicitly checked that the higher frequency of massive satellites
found in the paired halo samples is not enhanced by the most massive primaries in
the host mass range considered ($11.7<\log (M_{200}/M_\odot)<12.4$).
Therefore, the main environmental driver for the overabundance
of massive subhalos in Local Group-like environments seems to be the
presence of the halo pair itself.
This result is consistent with that of \citet{Garrison-Kimmel2014},
who report a global overabundance of subhalos in Local Group-like
pairs compared to isolated MW-like halos. However, we caution that
some of the volumes analysed by these authors were specifically
selected to contain LMC-like objects, so it is not straightforward to
compare our results quantitatively with theirs. We conclude that
haloes in pairs such as those in the Local Group have a genuine
overabundance of massive satellites compared to isolated
halos. LMC-like satellites are thus not a rare occurrence around Milky
Way-like hosts.
\subsection{The orbits of LMC analogues}
\label{SecLMCOrbits}
LMC analogues should not only match approximately the LMC's
stellar mass (Fig.~\ref{fig:vmaxmstar}) but also its orbital
properties and dynamical configuration. We therefore refine our identifying
criteria by inspecting the orbits of the $14$ LMC-analogue candidates,
shown in Fig.~\ref{fig:LMCorbits}. We shall retain as LMC analogues only
candidates that have been accreted relatively recently (i.e., those
that undergo the first pericentric passage at times
$t_{\rm fper} > 10$ Gyr, or $z_{\rm fper}<0.37$) and that, in
addition, have pericentric distances $r_{\rm peri} \lesssim 110$ kpc.
Fig.~\ref{fig:LMCorbits} shows that $9$ out of the $14$ original
candidates satisfy these conditions (this final sample of LMC analogues
is shown in red in Fig.~\ref{fig:vmaxmstar}).
We highlight the orbits of the selected candidates in
Fig.~\ref{fig:LMCorbits} using black curves, where the cyan and red
circles indicate their pericentres and apocentres,
respectively\footnote{These apocentres are actually best understood as
``turnaround radii''; i.e., as the maximum physical distance to the
primary before infall.}. The rest of the candidates that do not meet
the orbital criteria are shown in grey. Of these, we find only one
case with a very early first pericentre (at $t\sim 8.7$ Gyr) that is
at present on its second approach. The others have either not yet
reached pericentre by $z=0$ or have very large ($\sim 200$ kpc)
pericentric distances. The APOSTLE LMC analogues are thus recently
accreted satellites, in line with the conclusions of
\citet{BoylanKolchin2011a}, who find that $50\%$ of massive satellites
in the MS-II DMO simulation have infall times within the last 4 Gyr.
We list the individual pericentric and apocentric distances of each of
our 9 LMC analogues in Table~\ref{tab:periapo}. The median
pericentre is $\sim 60$ kpc, in good agreement with the pericentre
estimates for the LMC at $\sim 50$ kpc. The analogues show a wide range
of apocentres, which extend from $\sim 260$ kpc all the way to
$700$ kpc, with a median of $\sim 420$ kpc. The typical orbit of LMC
analogues in our sample is therefore quite eccentric, with a median
eccentricity $\epsilon \equiv r_{\rm peri}/r_{\rm apo} = 0.12$.
One may use these typical values to draw inferences regarding the past
orbital history of the LMC around the MW. For example, taking the
LMC's current Galactocentric radial distance as pericentre distance
(i.e., $r_{\rm peri}^{\rm LMC} = 49.9$ kpc; see
Table~\ref{tab:lmcdata}) the median eccentricity, $\epsilon=0.12$,
suggests an apocentre for the LMC of $r_{\rm apo}^{\rm LMC} \sim 408$
kpc before starting its infall towards our Galaxy.
The large apocentric distances discussed above allow the $9$ LMC analogues
to acquire substantial angular momentum through tidal torquing by the
nearby mass distribution. Table~\ref{tab:periapo} lists the specific orbital angular
momentum of each simulated LMC analogue at first pericentre normalized
by the virial value ($r_{200}\times V_{200}$) of the corresponding
primaries measured at the same time. The median of the sample is
$|\vec{l}_{\rm orb}|/(r_{200} \times V_{200}) = 0.64$, in good agreement with
the value ($\sim 0.54$) estimated assuming the latest LMC kinematics
constraints from Table~\ref{tab:lmcdata} \citep{Kallivayalil2013} and
a virial mass $M_{200} = 1 \times 10^{12}\; \rm M_\odot$ for the
MW. Under the condition of recent infall, the large orbital spin of
the LMC around the Galaxy is not difficult to reproduce within $\Lambda$CDM
\citep[see also ][]{BoylanKolchin2011a}.
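The quoted value of $\sim 0.54$ for the LMC follows from straightforward arithmetic; a minimal sketch (assuming a MW virial mass of $10^{12}\; \rm M_\odot$ and the specific angular momentum from Table~\ref{tab:lmcdata}):
\begin{verbatim}
import numpy as np

G, H0 = 4.30e-6, 0.0704        # kpc (km/s)^2/Msun ; km/s/kpc (h=0.704)
M200 = 1.0e12                  # assumed MW virial mass [Msun]

# M200 = (4/3) pi r200^3 * 200 rho_c, with rho_c = 3 H0^2/(8 pi G):
r200 = (G * M200 / (100.0 * H0**2))**(1.0/3.0)   # -> ~206 kpc
V200 = np.sqrt(G * M200 / r200)                  # -> ~145 km/s

l_lmc = 16221.26               # kpc km/s, from the LMC data table
print(l_lmc / (r200 * V200))   # -> ~0.54
\end{verbatim}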
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/LMCcandi_orbits.pdf}
\caption{Radial distance to the primary versus time, for the $14$ LMC
analogues identified in Fig.~\ref{fig:vmaxmstar}. The final $9$ LMC
analogues analyzed in this work are shown in black, while the rest of
candidates are shown in gray. A cyan circle highlights the time when
the LMC analogue is at first pericentre. Red circles mark the time of
``turnaround'' (first apocentre). The average time evolution of the
virial radius of the primaries is shown with a dashed line (median
and 25-75 percentiles).}
\label{fig:LMCorbits}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/LMCsats_orbits_tscaledor.pdf}
\caption{Radial distance to the primary versus time for LMC analogues
(black) and LMC-associated satellites (orange). The time axis has
been shifted so that all objects are at their first pericentre at
$t'=0$. The times at which LMC-associated satellites have been
identified around their corresponding LMC analogues ('identification
time', $t_{\rm id}$), are highlighted with orange circles. }
\label{fig:satorbits}
\end{figure}
\begin{table*}
\centering
\caption{Observational data assumed in this work for the LMC. Stellar
mass, Galactocentric position and velocity, Galactocentric radial
distance and magnitude of the specific orbital angular momentum
vector. Galactocentric Cartesian position has been computed from
the RA, dec and $(m-M)$ values quoted in the latest data being made
available by the \citet{McConnachie2012} compilation. Galactocentric
velocities have been computed assuming a heliocentric line-of-sight velocity of
$V_{\rm los}= 262.3$ km/s \citep{vanderMarel2002} and proper motions
$\mu_W=-1.899$ mas/yr, $\mu_N=0.416$ mas/yr
\citep{Kallivayalil2013}. We assume a distance of the Sun from the
Milky Way of $R_\odot=8.29$ kpc, a circular velocity of the local
standard of rest (LSR) of $V_0=239$ km/s \citep{McMillan2011}, and a
peculiar velocity of the Sun with respect to the LSR of
$(U_\odot,V_\odot,W_\odot) = (11.1, 12.24,7.25)$ km/s
\citep{Schonrich2010}. }
\begin{tabular}{ l l l l l l l l l }
\toprule
$M_{*}$/M$_\odot$&$X$/kpc & $Y$/kpc & $Z$/kpc & $V_X$/km s$^{-1}$ & $V_Y$/km s$^{-1}$ & $V_Z$/km s$^{-1}$ & Distance/kpc & $|\vec{l_{\rm orb}}|$/(kpc km s$^{-1}$) \\
\midrule
2.5$\times 10^9$ & -0.58 & -41.77 & -27.47 & -85.41 & -227.49 & 225.29 & 49.99 & 16221.26\\
\bottomrule
\end{tabular}
\label{tab:lmcdata}
\end{table*}
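The Galactocentric quantities in Table~\ref{tab:lmcdata} can be reproduced, to within rounding, with standard tools. A sketch using {\tt astropy} is given below; the RA, dec and distance are approximate values from the compilation cited above, the conversion $\mu_{\alpha*}=-\mu_W$ is used, and {\tt astropy}'s Galactocentric frame defaults differ from our assumptions, so the relevant keywords are set explicitly:
\begin{verbatim}
import astropy.units as u
from astropy.coordinates import (SkyCoord, Galactocentric,
                                 CartesianDifferential)

lmc = SkyCoord(ra=80.89*u.deg, dec=-69.76*u.deg, distance=50.0*u.kpc,
               pm_ra_cosdec=1.899*u.mas/u.yr,   # = -mu_W
               pm_dec=0.416*u.mas/u.yr,         # = mu_N
               radial_velocity=262.3*u.km/u.s)

frame = Galactocentric(
    galcen_distance=8.29*u.kpc,
    z_sun=0.0*u.pc,
    # Solar velocity (U, V0 + V, W) = (11.1, 239 + 12.24, 7.25) km/s:
    galcen_v_sun=CartesianDifferential([11.1, 251.24, 7.25]*u.km/u.s))

gc = lmc.transform_to(frame)
print(gc.x, gc.y, gc.z)        # ~ (-0.6, -41.8, -27.5) kpc
print(gc.v_x, gc.v_y, gc.v_z)  # ~ (-85, -227, 225) km/s
\end{verbatim}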
\begin{table}
\centering
\small
\caption{Orbital characteristics of the $9$ LMC analogues presented in this work.
Column 1 indicates the LMC analogue identifier.
LMC analogues are identified with a label in the form {\tt Vol-FoF-Sub}, that indicates the corresponding
APOSTLE volume, as well as the FoF and \textsc{subfind} indices of the object in the $z=0$ snapshot.
Column 2 indicates the redshift at which the LMC analogue's corresponding satellites have been identified (`identification time' $t_{\rm id}$, see Sec.~\ref{SecLMCAssocSats}).
Throughout this paper, LMC analogues and their respective satellites are shown in a same color
consistently in all figures. LMC analogues in this table are ordered by this color, from red to dark blue.
Subsequent columns indicate the LMC analogue's pericentric distance,
apocentric distance, orbital eccentricity ($\epsilon = r_{\rm peri}/r_{\rm apo}$), and magnitude of the specific orbital angular momentum vector $\vec{l}_{\rm orb}$ normalized by ($r_{200}\times V_{200}$) of its corresponding primary.
}
\begin{tabular}{ l l l l l l }
\toprule
Label & $z_{\rm id}$ & $r_{\rm peri}$/kpc & $r_{\rm apo}$/kpc & $\epsilon$ & $|\vec{l_{\rm orb}}|$/($r_{200}\times V_{200}$) \\
\midrule
\midrule
5-2-2 &0.503 & 51.00 & 412.94 & 0.12 & 0.72 \\
2-1-3 &0.399 & 32.83 & 447.37 & 0.07 & 0.51 \\
1-1-1 &0.366 & 61.29 & 544.97 & 0.11 & 0.64 \\
12-1-4 &0.399 & 34.14 & 259.27 & 0.13 & 0.33 \\
11-1-4 &0.333 & 58.32 & 399.28 & 0.15 & 0.66 \\
11-1-3 &0.302 & 49.59 & 418.20 & 0.12 & 0.51 \\
10-1-2 &0.241 & 44.25 & 420.76 & 0.11 & 0.28 \\
1-2-2 &0.183 & 108.52 & 354.17 & 0.31 & 0.64 \\
3-1-1 &0.302 & 108.12 & 690.90 & 0.16 & 0.77 \\
\midrule
Median & & 50.99 & 418.19 & 0.12 & 0.64 \\
\bottomrule
\end{tabular}
\label{tab:periapo}
\end{table}
\section{LMC-associated satellites in APOSTLE}
\label{SecLMCAssocSats}
Given the relatively high masses of the LMC analogues, we expect them to
harbour their own population of satellite dwarfs. We identify them in
the simulations as follows. We first trace their orbits back from
pericentre until they are $\sim 100$ kpc away from the virial boundary
of the primary. At that time in the orbit, referred to as
``identification time", or $t_{\rm id}$, we flag as ``LMC satellites''
all luminous subhalos within $100$ kpc of each LMC analogue.
We include all luminous subhalos; i.e., with at least 1 star
particle, unless otherwise specified.
The procedure yields a combined total of $16$ satellites for the $9$
LMC analogues. Only one LMC analogue is ``luminous satellite-free''
at $t_{\rm id}$. We have traced the orbital evolution of the
LMC satellites in time and have confirmed that all are bound to
their LMC analogues, at least until first pericentre. One of the
satellites merges with its LMC analogue before the latter reaches
first pericentre. Our final sample therefore consists of $15$
LMC-associated satellites.
Using merger trees, we trace back and forth in time
each of the LMC-associated satellites. We show their orbits in
Fig.~\ref{fig:satorbits} with orange curves, together with those of
their respective primaries. Times in this figure have been shifted so
that $t'=t-t_{\rm fper}=0$ corresponds to the snapshot of closest
approach of each LMC analogue.
``Identification times" for each LMC analogue are highlighted with orange
circles in Fig.~\ref{fig:satorbits}.
This figure shows that, at first pericentre, LMC-associated satellites
remain very close in radial distance to their corresponding LMC
analogue, although they may evolve differently afterwards. This implies,
as suggested in Sec.~\ref{SecIntro}, that any MW satellite associated with
the LMC should be found at a close distance from the LMC today. We shall
return to this issue in Sec.~\ref{SecLMCIdCrit}.
Hereafter, all the results shown correspond to $t_{\rm fper}$, unless otherwise stated.
\subsection{Projected position and orbital angular momentum}
\label{ssec:phasespace}
\begin{figure*}
\includegraphics[width=\linewidth]{figs/LMC_aitoff_realpos_MW_big.pdf}
\includegraphics[width=\linewidth]{figs/LMC_aitoff_realjorb_MW_big.pdf}
\caption{Position (top) and orbital angular momentum direction
(bottom) of satellites of the primary haloes relative to the LMC
(black star), in Galactocentric coordinates. LMC-associated
satellites are shown as large open circles with labels. The rest of
satellites of the primary are shown as crosses. Satellites belonging
to the same primary are shown in the same color. Coordinates
systems are rotated such that the positions and orbital poles of LMC
analogues coincide with the corresponding observed values for the LMC,
indicated with a large star. Observed MW satellites are shown as
open black circles with labels. MW satellites highlighted with a
filled circle or a cross are those deemed likely LMC associates
according to the discussion in Sec.~\ref{SecLMCIdCrit}. Thin gray
lines in the top panel show the individual orbital trajectories of
each of the 9 LMC analogues. An arrow indicates the direction of
motion of the LMC along the trajectory. In the bottom panels, for
reference, we show circles centred on the LMC with aperture
$32^\circ$ and $55^\circ$, respectively (see text for details). }
\label{fig:posaitoff}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.55\linewidth]{figs/cosalfa_rl_mw_hist.pdf}
\includegraphics[width=0.44\linewidth]{figs/delta_v_r.pdf}
\caption{ \textit{Left:} Angular separation between the position
vector of satellites and that of the LMC analogue, versus angular
separation between the orbital angular momentum direction of
satellites and that of the LMC analogue. LMC-associated satellites
from APOSTLE are shown as colored filled circles, and the rest of
satellites as crosses. Histograms show the distribution along the
axes of the different samples of satellites (i.e., all satellites,
LMC-associated satellites, and the rest of satellites of the
primary). \textit{Right:} Radial distance and 3D velocity of
LMC-associated satellites relative to those of their LMC analogue, at
first pericentre.
A shaded band indicates the 25-75 percentile range of $V_{\rm max}$
values for LMC analogues, as a reference.
Color-coding in both panels is the same as in
Fig.~\ref{fig:posaitoff}. For comparison, MW satellites are shown
as black open circles with labels. MW satellites highlighted
with a filled circle or a cross are those deemed likely LMC
associates according to the discussion in Sec.~\ref{SecLMCIdCrit}. }
\label{fig:cosalfa}
\end{figure*}
The top panel of Figure \ref{fig:posaitoff} shows an Aitoff projection
of the sky position of all satellites associated with the primaries
hosting LMC analogues at the time of first pericentre. Each of the
coordinate systems of the 9 LMC analogues has been rotated so that the
LMC analogue is at the same Galactocentric position in the sky as the
observed LMC and the orbital angular momentum vector of the LMC analogue
is parallel to that of the observed LMC (see Table~\ref{tab:lmcdata}
for the position and velocity data assumed for the LMC). The position
of the LMC (analogue and observed) is marked with a star, while
LMC-associated satellites are shown as large colored open circles with
labels. The remainder of the satellites of each primary are shown as
colored crosses. A different color is used for each of the 9 primaries
containing LMC analogues.
For comparison, observed MW satellites\footnote{We show data for all
known MW satellites within 300 kpc with measured kinematic data, including a few cases where it
is unclear if the system is a dwarf galaxy or a globular cluster
\citep[see][]{McConnachie2012}. See Table~\ref{TabScores}
for a listing of the objects considered and the corresponding data references.} are overplotted as small black open
circles with identifying labels. In addition, a thick gray line marks
the LMC's orbital plane and an arrow indicates the direction of motion
along this line. Individual thin gray lines show each of the LMC
analogues' orbital paths, starting at ``turnaround'' (apocentre) and
ending at pericentre. One interesting result is that APOSTLE LMC
analogues mostly follow the same orbital plane during their infall onto
the primary. This is in good agreement with \citet{Patel2017b}, who find
that LMC-mass satellites in the Illustris simulations with late accretion
times generally conserve their orbital angular momentum up to $z=0$.
The spatial distribution in the sky of the LMC-associated satellites
clearly delineates the orbital plane of the LMC; the satellites appear to
spread more or less evenly along the leading and trailing sections of
the orbital path, as expected if LMC satellites were to accompany the
orbit of the LMC. The bottom panel of Figure \ref{fig:posaitoff}
shows that this is indeed the case: the instantaneous direction of the orbital
angular momentum vectors (or orbital ``poles'') of LMC-associated
satellites at $t_{\rm fper}$ seems to coincide rather well with that of the LMC
itself. Again the coordinate system of each LMC analogue has been
rotated\footnote{Here longitude coordinates have been rotated by 180
degrees to show the angular momentum of the LMC at the centre of the
Aitoff diagram.} such that the LMC analogue's orbital pole aligns with
that of the observed LMC, marked with a star.
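The required rotation is elementary to reproduce. As an illustration
only, the following minimal Python sketch (ours, with hypothetical
variable names) aligns one vector pair, e.g.\ the orbital poles, via
Rodrigues' formula; additionally matching the LMC position, as done for
the top panel, amounts to a further rotation about the aligned pole axis.
\begin{verbatim}
import numpy as np

def align_vectors(a, b):
    # Rotation matrix mapping unit vector a onto unit vector b
    # (Rodrigues' formula; assumes a and b are not anti-parallel).
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)              # rotation axis times sin(theta)
    c = np.dot(a, b)                # cos(theta)
    K = np.array([[0., -v[2], v[1]],
                  [v[2], 0., -v[0]],
                  [-v[1], v[0], 0.]])
    return np.eye(3) + K + (K @ K) / (1. + c)

# R = align_vectors(pole_analogue, pole_lmc_observed)
# r_rot, v_rot = R @ r, R @ v   # applied to positions and velocities
\end{verbatim}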
\begin{figure*}
\includegraphics[width=\linewidth]{figs/XYZorbplane_vol10.pdf}
\caption{Orbital trajectory of the LMC-associated satellite (labelled
10-1-560, see Fig.~\ref{fig:posaitoff}) that appears to be
counter-orbiting with respect to its LMC analogue at the time of first
pericentre (in orange). The trajectory of the LMC analogue is shown
in black. A second satellite of the same LMC analogue is shown in
grey. The reference system is centred on the primary galaxy, and
the orbital plane of the LMC analogue is chosen as the XY plane. The
rightmost panel is a zoomed-in view of the region enclosed in a
rectangle in the leftmost panel. Arrows in the right-most panel
indicate the direction of the instantaneous velocity vectors of each
satellite at the final time. }
\label{fig:counterrot}
\end{figure*}
The clustering of the orbital poles of LMC-associated satellites is to
be expected, although it is perhaps less tight than assumed in earlier
work \citep[see Fig.~5 of][]{Kallivayalil2018}.
Indeed, some satellites are found to have orbital poles that differ
from that of the LMC by as much as $\sim55$ degrees (shown as a
dashed-line circle for reference), with a median value of $\sim 32$
degrees (shown as a solid-line circle).
The spatial and pole distributions on the sky of LMC-associated
satellites in APOSTLE are consistent with the location of the bulk of
the debris from the cosmological dark matter-only LMC analogue studied
first in \citet{Sales2011,Sales2017} and compared to \textit{Gaia} data in
\citet{Kallivayalil2018}. However, we also find a surprising result
here: the case of a simulated satellite whose orbital pole is
nearly 180 degrees away from its LMC analogue's. In other words, this
satellite appears to be ``counter-rotating'' the Milky Way relative to
the LMC (see orange open circle labelled 10-1-560 in
Fig.~\ref{fig:posaitoff}). We shall explore this case in more detail
in Sec.~\ref{ssec:counterrot}.
One conclusion from Fig.~\ref{fig:posaitoff} is that the orbital
pole condition leaves many MW satellites as potentially associated
with the LMC. It is therefore important to look for corroborating
evidence using additional information, such as positions and
velocities. We explore this in Fig.~\ref{fig:cosalfa}, where the left
panel shows the cosine of the angle between different directions that
relate the LMC with its satellites. The x-axis corresponds to the
angular distance ($\alpha_{\rm pos}$) between the position of the LMC
analogue and other satellites; the y-axis indicates the angular distance
($\alpha_{\rm orb}$) between their corresponding orbital poles.
Satellites associated with LMC analogues are shown with colored circles
in Fig.~\ref{fig:cosalfa}, and are compared with those of MW satellites
with available data (open black circles). The former are clearly quite
close to the LMC both on the sky in position (most have
$\cos \alpha_{\rm pos}>0.5$), and also have closely aligned orbital
poles (most have $\cos \alpha_{\rm orb}>0.5$).
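For concreteness, both angles are plain vector angles in
Galactocentric coordinates; a minimal sketch (Python; the phase-space
vectors below are illustrative values only, not APOSTLE data):
\begin{verbatim}
import numpy as np

def cos_angle(u, v):
    # Cosine of the angle between two 3D vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Illustrative Galactocentric vectors in kpc and km/s:
r_lmc, v_lmc = np.array([-0.6, -41.3, -27.]), np.array([-64., -252., 221.])
r_sat, v_sat = np.array([-2.0, -38.0, -30.]), np.array([-70., -240., 210.])
cos_alpha_pos = cos_angle(r_sat, r_lmc)
cos_alpha_orb = cos_angle(np.cross(r_sat, v_sat),
                          np.cross(r_lmc, v_lmc))
\end{verbatim}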
What about the other satellites, which were not associated with the
LMC analogues before infall? Are their positions and/or kinematics
affected by the LMC analogue? Apparently not, as shown by the small
colored crosses in Fig.~\ref{fig:cosalfa} and by the histograms at
the top and right of the left-hand panel of the same figure. Filled
blue histograms show the distribution of each quantity (for simulated
satellites) on each axis. These show a small enhancement towards small
values of $\alpha_{\rm pos}$ and $\alpha_{\rm orb}$, but the
enhancement is entirely due to the satellites associated with the LMC
analogues (black histograms). Subtracting them from the total leaves the
red histogram, which is consistent with a flat, uniform
distribution. In other words, neither the angular positions nor the
orbital angular momentum directions of non-associated satellites seem
to be noticeably affected by a recently accreted LMC analogue.
Besides the projected distance and orbital pole separation shown on
the left panel of Fig.~\ref{fig:cosalfa}, our results also indicate
that satellites associated with the LMC analogues remain close in
relative distance and velocity (something already hinted at when
discussing Fig.~\ref{fig:satorbits}). This is shown in the right-hand
panel of Fig.~\ref{fig:cosalfa}, where we plot the relative velocity
($\Delta V_{\rm 3D}$)
and distance ($\Delta r$) between all satellites of the primary and the LMC
analogue. Satellites associated with the analogues (filled circles)
clearly cluster towards small $\Delta r$ and small $\Delta V_{\rm 3D}$, with a
median $\Delta r$ of just $\sim 37$ kpc and a median $\Delta V_{\rm 3D}$ of
just $\sim 138$ km/s. We shall use these results to refine our criteria for
identifying LMC-associated satellites in Sec.~\ref{SecLMCIdCrit},
after considering first the peculiar case of a counter-rotating
satellite.
\subsection{A counter-rotating LMC-associated satellite}
\label{ssec:counterrot}
We turn our attention now to the ``counter-rotating'' satellite
highlighted in the Aitoff projection in Fig.~\ref{fig:posaitoff}
(orange open circle labelled 10-1-560), which appears at
$\cos (\alpha_{\rm orb})\sim -0.75$ in the left panel of
Fig.~\ref{fig:cosalfa}. This is clearly an outlier relative to all
other satellites associated with LMC analogues. What mechanism could
explain this odd orbital motion?
With hindsight the explanation is relatively simple, and may be traced
to a case where the amplitude of the motion of a satellite around the
LMC analogue is comparable to the pericentric distance of the latter
around the primary host. This is shown in Fig. \ref{fig:counterrot},
which plots the orbital trajectory of satellite 10-1-560 in a
reference frame centred on the primary and where the XY plane is
defined to coincide with the orbital plane of the LMC analogue. The LMC
analogue is shown in black, and its two satellites in grey and
orange. In all panels, a line shows the trajectory of each object
starting at early times and ending at first pericentre (marked
with a circle), which, in this particular case, corresponds to the
last snapshot of the simulation, at $z=0$. The left and middle panels
show the XY and ZY projections of the trajectories in a box $600$ kpc
on a side. The right-hand panel shows a zoomed-in XY view $150$ kpc on
a side, where the arrows indicate the projections of the instantaneous
velocity vectors at first pericentre.
The velocity vectors explain clearly why satellite 10-1-560 appears to
counter-rotate: when the relative ``size'' of the LMC satellite system
is comparable to the pericentric distance of the LMC orbit, the orbital
motion may appear to carry an LMC satellite on an instantaneous orbit
that shares the same orbital plane but that goes around the primary
centre on the opposite side. We find this instance in only one out of
the $15$ satellites we identified and tracked. This is thus a
possible but relatively rare occurrence which should, however, be
kept in mind when considering the likelihood of association of
satellites that may pass all other criteria but are found to have
orbital planes approximately counter-parallel to the LMC.
\subsection{Contribution of LMC analogues to the primary satellite
population}\label{SecLMCSatContrib}
We consider now the contribution of satellites of LMC analogues to the
satellite population of the primary galaxy. The cyan curve in
Fig.~\ref{fig:satmf} shows the average satellite mass function of all
$24$ APOSTLE primaries at $z=0$, and compares it to that of the $9$
primaries with LMC analogues (at the time of their first pericentric
passage; orange curve).
Specifically, we consider all satellites within the virial radius
of the primary ($\sim200$ kpc on average).
The grey curve shows the MW satellite
population for reference (see Table~\ref{TabScores}).
All MW satellites in our study are found within $\sim 250$ kpc of the MW centre,
a distance that compares well with the virial radii of APOSTLE primaries.
The overall good agreement of APOSTLE with the MW satellite population
is reassuring, as it suggests that the simulated populations are
realistic and that their mass functions may be used to shed light on
the impact of the LMC on the overall MW satellite
population. Comparing the orange and cyan curves indicates that LMC
analogues have, as expected, a substantial impact on the massive end of
the satellite population, but, aside from that, the effect on the
whole population of satellites with $M_*>10^5\, M_\odot$ is relatively
modest. Indeed, the $9$ primaries with LMC analogues have $17.8^{+8.0}_{-1.2}$
(median and 25-75 percentiles)
such
satellites, compared with the average $16.1^{+6.3}_{-3.8}$ for all $24$ primaries
and with $16.3^{+4.6}_{-3.2}$ for the $15$ APOSTLE primaries without LMC
analogues. In other words, aside from the presence of the LMC itself,
the impact of the LMC satellites on the overall satellite population
is relatively minor.
This is also shown by the green curve in Fig.~\ref{fig:satmf}, which
indicates the (average) satellite mass function of the LMC analogues
at identification time, $t_{\rm id}$ (i.e., before infall).
The $9$ LMC analogues contribute a total of $16$ dwarfs with
$M_*> 10^5\, M_\odot$ at infall, or roughly $10\%$ of the satellite
population of each primary. In terms of numbers, the average
$\langle \rm N_{\rm sat}(M_{*}>10^5M_\odot)\rangle$
is $16/9=1.8\pm0.9$, where the error range specifies the $\pm 1\sigma$
spread of the distribution.
The green circles at the bottom of Fig.~\ref{fig:satmf}
show the individual stellar masses of each satellite in our $9$ LMC
analogues. None of our LMC analogues has a companion as massive
as the SMC, which has a stellar mass of order
$M_*\sim3\times 10^8\, M_\odot$. Most satellites contributed by LMC
analogues have stellar masses $M_*<10^6\, M_\odot$.
We note that the relatively modest impact of the LMC on the MW massive
satellite population suggested by our results is consistent with the
early semi-analytical models of \citet{Dooley2017b}, as well as with
other studies of isolated LMC-mass systems using the FIRE simulations
\citep{Jahn2019} and simulations from the Auriga project
\citep{Pardy2020}.
\begin{figure}
\includegraphics[width=\linewidth]{figs/satmassfunc_24LMChosts_corr_shade.pdf}
\caption{Average satellite mass function for all the 24 primaries in
AP-L2 runs at $z=0$ (cyan). This agrees fairly well with the observed
satellite mass function in the MW (gray line). The satellite mass
function of the $9$ primaries that contain a LMC analogue is shown in
orange for comparison, and suggests an excess on the high-mass end
due largely to the LMC analogue itself. On average, LMC analogues
contribute
roughly 10\% of all satellites
with $M_* > 10^5\; \rm M_\odot$ to their
primaries (green curve).
The shaded area shows the
$\pm1\sigma$ dispersion range.
Green symbols show the individual masses of satellites
identified in our $9$ LMC analogues. }
\label{fig:satmf}
\end{figure}
\subsection{LMC and the radial distribution of satellites}
\label{ssec:raddist}
\begin{figure}
\includegraphics[width=\linewidth]{figs/radialdistr_apostleLMC_r3d_258_referee3.pdf}
\caption{Average cumulative radial distribution of satellites within
250 kpc, for (i) all the 24 primaries in APOSTLE-L2, at $z=0$
(cyan); (ii) the primaries of the 9 LMC analogues, at first pericentre
(orange); and (iii) the Milky Way satellites. We include in all of
these samples only satellites with $M_*>10^5\, M_\odot$.
Thinner lines show the distributions for simulated satellites filtered
by stellar mass as quoted in the legend.
Note that
the MW satellite distribution appears more concentrated than the
average APOSTLE primary; this is well matched by systems with an LMC
analogue, a transient configuration that results from the particular
orbital configuration of the LMC
and its satellites
(at pericentre).}
\label{fig:raddist}
\end{figure}
The radial distribution of satellites contains important clues to the
accretion history of a galaxy \citep[see, e.g.,][and references
therein]{Samuel2020,Carlsten2020}. Recent results from the SAGA survey
have suggested that
\textit{"the radial distribution of MW satellites is much more concentrated
than the average distribution of SAGA satellites, mostly due to the presence of the
LMC and SMC" \citep{Mao2020}.}
We explore below whether our simulations confirm that this
effect is likely due to the LMC and its satellites.
The cyan curve in Fig.~\ref{fig:raddist} shows the average cumulative
radial distribution of all $M_*>10^5\, M_\odot$ satellites within
$250$ kpc of the $24$ APOSTLE primaries. The corresponding MW satellite population
is significantly more concentrated, as shown by the grey dashed curve
in the same figure\footnote{Radial distances for MW satellites
have been calculated from the RA, dec, $(m-M)$ data available in
\citet{McConnachie2012}'s Nearby Dwarf Database (see references therein).}.
Interestingly, the $9$ APOSTLE primaries with LMC
analogues, shown by the orange curve, also have more concentrated
satellite distributions, in good agreement with the MW satellite
population.
This is mainly a transient result of the particular
orbital phase of the LMC analogues, which are chosen to be near first
pericentric passage. Indeed, at $z=0$ the same $9$ primaries have less
centrally concentrated distributions, consistent with the average
result for all $24$ primaries (cyan curve).
Support for our
interpretation of the transient concentration as due to the LMC
analogues and their associated satellite systems is provided by the
thin orange lines in Fig.~\ref{fig:raddist}. The dashed and solid
(thin) orange lines indicate results for systems with stellar mass
exceeding or smaller than $10^6\, M_\odot$. The higher concentration
is only apparent in the latter case: this is consistent with our
earlier finding that LMC analogues contribute mainly systems with
$M_*<10^6\, M_\odot$ (see Fig.~\ref{fig:satmf}).
We conclude that the concentrated radial distribution of satellites in
the Galaxy is probably a transient caused by the presence of the LMC
and its satellites near first pericentre. This transient effect
illustrates the importance of taking into account the particular
kinematic stage of the LMC when comparing the properties of the
Galactic satellite population with that of other external galaxies.
\begin{figure*}
\includegraphics[width=\linewidth]{figs/quantify_v3d_r_3panels.pdf}
\caption{Cumulative distributions of the three diagnostics used to
rank MW satellites in terms of likely association with the
LMC. These diagnostics are the 3D velocity relative to the LMC
($\Delta V_{\rm 3D}$, left), the radial distance relative to the LMC
($\Delta r$, centre) and the alignment with the LMC's orbital pole
direction ($\vert \rm cos(\alpha_{\rm orb})\vert$, right). A red
line shows the cumulative distribution for LMC-associated satellites
in APOSTLE; a dashed blue line shows that for \textit{all} APOSTLE
satellites, and a black line shows the distribution for all MW
satellites, as labelled. For reference, the grey curve in the
$\Delta V_{\rm 3D}$ panel (left) shows a Gaussian distribution with
$\sigma_{\rm 1D}=90$ km/s. In the $\Delta r$ panel (centre) the
grey curve shows the cumulative mass profile of an NFW dark matter
halo with $V_{\rm max}=95$ km/s, roughly the average $V_{\rm max}$
of the APOSTLE LMC analogues. }
\label{fig:diagnostics}
\end{figure*}
\section{LMC-associated satellites in the Milky Way}
\label{SecLMCIdCrit}
We have seen in the above subsections that satellites associated with
LMC analogues contribute modestly to the primary satellite population,
and distinguish themselves from the rest of a primary's satellites by
their proximity in phase space to their parent LMC analogue. Satellites
closely aligned in orbital pole direction, and at small relative distances
and velocities from the LMC, should be strongly favoured in
any attempt to identify which MW satellites have been contributed by
the LMC.
We may compile a ranked list of potential associations by assigning to
all MW satellites numerical scores on each of the above
diagnostics. This score consists of a numerical value equal to the
fraction of associated satellites in the simulations that are farther
from their own LMC analogue in each particular diagnostic (i.e., a score
of $1$ means that a particular satellite is closer to the LMC than
{\it all} simulated satellites in that diagnostic). We illustrate
this scoring procedure in Fig.~\ref{fig:diagnostics}.
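Equivalently, the score in a given diagnostic is one minus the
empirical cumulative distribution of that diagnostic over the simulated
LMC-associated satellites, evaluated at the observed value. A minimal
sketch (Python; ours, with hypothetical array names):
\begin{verbatim}
import numpy as np

def score(value, sim_values, smaller_is_closer=True):
    # Fraction of simulated LMC-associated satellites that are
    # *farther* from their own LMC analogue than `value` in this
    # diagnostic; 1 means closer to the LMC than all of them.
    sim_values = np.asarray(sim_values, dtype=float)
    if smaller_is_closer:                  # Delta V_3D, Delta r
        return np.mean(sim_values > value)
    return np.mean(sim_values < value)     # |cos(alpha_orb)|

# total = (score(dv3d, sim_dv3d) + score(dr, sim_dr)
#          + score(cosorb, sim_cosorb, smaller_is_closer=False))
\end{verbatim}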
The left panel shows the cumulative distribution of
$\Delta V_{\rm 3D}$, the relative velocity between the LMC and other
satellites. The red curve corresponds to all simulated satellites
associated to LMC analogues, the dashed blue curve to all satellites of
APOSTLE primaries. The grey curve shows the cumulative distribution
expected if associated satellites had a Gaussian isotropic velocity
distribution around the analogue with a velocity dispersion of $\sigma_{\rm 1D}=90$
km/s. For example, the SMC (highlighted in Fig.~\ref{fig:diagnostics}
with a filled circle) has $\Delta V_{\rm 3D}=133$ km/s, which gives it a
relatively high score of $\sim 0.59$ in this diagnostic. According to
this diagnostic, any MW satellite whose LMC relative velocity exceeds
$\sim 220$ km/s has a score of zero, and its association with the LMC
is in doubt.
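The grey reference curve follows from the fact that, if each Cartesian
velocity component is an independent Gaussian with dispersion
$\sigma_{\rm 1D}$, then the speed $\Delta V_{\rm 3D}$ follows a Maxwell
distribution with scale $\sigma_{\rm 1D}$. A minimal sketch of that
curve (Python; our reconstruction of the construction described above):
\begin{verbatim}
import numpy as np
from scipy.stats import maxwell

sigma_1d = 90.0                      # km/s, as quoted in the text
v = np.linspace(0.0, 400.0, 401)     # km/s
cdf_gaussian_isotropic = maxwell.cdf(v, scale=sigma_1d)
\end{verbatim}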
\begin{figure*}
\includegraphics[width=\columnwidth]{figs/rVrV3DVesc_LMC_rainbow.pdf}
\includegraphics[width=\columnwidth]{figs/rVrV3d_MWsat_lims_trunc.pdf}
\caption{Radial velocity $V_{\rm rad}$ and 3D velocity $V_{\rm 3D}$ versus radial distance, at pericentre.
\textit{Left}: APOSTLE LMC analogues (stars) and LMC-associated
satellites (circles). Radial (3D) velocities are shown with symbols
with grey (black) edges. Lines illustrate the escape velocity
profiles of the corresponding primaries. Color-coding is the same as
in previous figures. \textit{Right}: observed MW satellites at
$z=0$. Radial velocities are shown in black, and 3D velocities in
red. The LMC is marked with a star. Observed $V_{\rm rad}$ and
$V_{\rm 3D}$ are from \citet{Fritz2018} when available, or computed
from measured kinematic data as explained in Tab.~\ref{TabScores}.
Lines show the escape velocity profiles derived from the following
MW models proposed in the literature:
\citet{GaravitoCamargo2019,Errani2020,Bovy2015,Irrgang2013}. }
\label{fig:vrv3d}
\end{figure*}
The middle and right panels of Fig.~\ref{fig:diagnostics} show the
other two diagnostics we have chosen to rank possible LMC-associated
satellites. The middle panel indicates the relative distance between
satellites and the LMC. The red curve again corresponds to simulated
satellites associated with LMC analogues. Its distribution is very well
approximated by the radial mass profile of an NFW halo with
$V_{\rm max}=90$ km/s and concentration $c=10.2$ (grey curve). For
reference, the median $V_{\rm max}$ and 10-90 percentiles for LMC
analogues is $78^{+52}_{-16}$ km/s (see Fig.~\ref{fig:vmaxmstar}).
Together with the evidence from the left panel, this confirms that
satellites associated with LMC analogues are, at first pericentre,
distributed around the analogues more or less as they were before
infall. Tides, again, have not yet had time to disrupt the close
physical association of the Magellanic group in phase space. The SMC,
for example, scores $\sim 0.76$ in this diagnostic.
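The NFW comparison curve requires only the standard enclosed-mass
formula; a minimal sketch follows (Python). Note that the scale radius
below is a placeholder of our choosing, whereas the paper specifies the
profile through $V_{\rm max}$ and the concentration $c=10.2$.
\begin{verbatim}
import numpy as np

def nfw_enclosed_mass(r, r_s):
    # NFW enclosed mass, up to a normalisation:
    # M(<r) propto ln(1 + r/r_s) - (r/r_s)/(1 + r/r_s).
    x = np.asarray(r, dtype=float) / r_s
    return np.log(1.0 + x) - x / (1.0 + x)

r = np.linspace(1.0, 200.0, 400)     # kpc
r_s = 15.0                           # kpc (hypothetical scale radius)
cdf_nfw = nfw_enclosed_mass(r, r_s) / nfw_enclosed_mass(r[-1], r_s)
\end{verbatim}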
Finally, the right-hand panel of Fig.~\ref{fig:diagnostics} shows the
orbital pole alignment, where we have chosen to use the absolute value
of $\cos (\alpha_{\rm orb})$ in order to account for the possibility
of ``counter-rotating'' satellites. The SMC, again, scores high in
this diagnostic, with a score of $\sim 0.72$ for
$|\cos(\alpha_{\rm orb})|=0.93$. In this case, any MW satellites with
$|\cos(\alpha_{\rm orb})|<0.57$ would have a score of zero.
We may add up the three scores to rank all MW satellites according to
the likelihood of their association with the LMC. The data and scores
are listed in Table~\ref{TabScores}, and show that, out of $46$ MW
satellites, $11$ have non-zero scores in all three categories. Of
these $11$, the $7$ whose association appears firm are: Hydrus 1, SMC,
Car 3, Hor 1, Tuc 4, Ret 2, and Phx 2. These $7$ satellites are
highlighted with a solid central circle in the figures throughout the
paper. A second group with more tenuous association, mainly because of
their large relative velocity difference, contains Carina, Hor 2, and
Grus 2. The final member is Fornax, whose scores in relative velocity
and position are non-zero but quite marginal. These $4$ satellites are
highlighted with a cross in the figures.
Three satellites in this list have $M_*>10^5\, M_\odot$ (SMC, Carina,
Fornax). This is actually in excellent agreement with the discussion
in Sec.~\ref{SecLMCSatContrib}, where we showed that LMC analogues bring
$\sim2$
such satellites into their primaries. The same arguments
suggest that $\sim 10\%$ of all MW satellites might have been
associated with the LMC. This small fraction is in tension with the
$11$ out of $46$ satellites (i.e., $24\%$) in our list. We note,
however, that our current list of MW satellites is likely very
incomplete \citep[see, e.g.,][]{Newton2018,Nadler2020}, and highly
biased to include more than its fair share of LMC satellites. Indeed,
many of the new satellite detections have been made possible by DES, a
survey of the southern sky in the vicinity of the Magellanic Clouds
\citep{Bechtol2015,Koposov15a,Drlica-Wagner2015}.
Our list adds some candidates compared to the lists compiled by
earlier work, but also contains some differences. \citet{Sales2011} and
\citet{Sales2017} identified only three satellites as clearly
associated with the LMC: the SMC, Hor 1 and Tuc 2. The latter is, however,
deemed unlikely given our analysis, especially because of its large
LMC relative velocity, $\Delta V_{\rm 3D}=246$ km/s.
\citet{Kallivayalil2018}'s list of possible LMC-associated satellites includes
Car 2, Draco 2, and Hydra 2. According to our analysis, the first two
are ruled out by their large relative velocity. The last one is, on the
other hand, ruled out by its large orbital pole deviation.
\citet{Erkal2020} claim SMC, Hydrus 1, Car 3, Hor 1, Car 2, Phx 2 and
Ret 2 as associated with the LMC. Using a similar methodology,
\citet{Patel2020} also identifies the first 5 as LMC ``long term
companions''. Of these, our analysis disfavours Car 2, again on
account of its large relative velocity, $\Delta V_{\rm 3D}=235$ km/s. Finally,
\citet{Pardy2020} argues for Carina and Fornax as candidates for LMC
association. Our analysis does not rule out either (both have non-zero
scores in all three categories), although the evidence for association
is not particularly strong, especially for Fornax. Our results agree
with \citet{Erkal2020} in this regard, who argues the need for an
uncommonly massive LMC to accommodate Fornax as one of its satellites.
\begin{figure*}
\includegraphics[width=\linewidth]{figs/vrv3d_vesc_LMC_BovyGC.pdf}
\caption{ Radial velocity versus total 3D velocity, both normalized by
the escape velocity at the pericentric radius. \textit{Left}: APOSTLE LMC
analogues, LMC-associated satellites, and rest of satellites of the
corresponding primary at first pericentre. Color-coding is the same
as in previous figures. Centre: Observed MW satellites assuming the
\citet{Bovy2015} MW potential. \textit{Right}: Observed MW satellites
assuming the \citet{GaravitoCamargo2019} MW potential. A green
vertical line with shade shows the median
$V_{\rm 3D}/V_{\rm esc}(r)$ and 25-75\% percentiles for LMC analogues,
and is marked in all panels. In the centre and right panels the
observed LMC's position, as defined by the assumed MW escape
velocity, is highlighted with a red open circle. Objects with
$V_{\rm 3D}/V_{\rm esc}(r)>1$ are gravitationally unbound to the
primary given that choice of potential. }
\label{fig:vrv3d3pan}
\end{figure*}
\section{The LMC and the escape velocity of the Milky Way}
\label{SecVesc}
We have argued in the preceding sections that, because the LMC is
just past its first pericentric passage, its associated
satellites must still be close in position and velocity. Other
corollaries are that both the LMC and its satellites must have
Galactocentric radial velocities much smaller than their tangential
velocities, and that their total velocities must approach the escape
velocity of the Milky Way at their location.
We explore this in the left panel of
Fig.~\ref{fig:vrv3d}, which shows the radial ($V_{\rm rad}$) and total
3D velocities ($V_{\rm 3D}$) of LMC analogues (stars) and LMC-associated
satellites (circles) at the LMC analogue's first pericentre, as a function of their
radial distance to the primary. Radial velocities are shown as symbols
without edges, and 3D velocities as symbols with dark edges. A
different color is used for each of the 9 LMC-analogue systems.
All LMC analogues and most of their associated satellites are close to pericentre and have
therefore radial velocities much smaller than their total velocities:
half of the LMC analogues have $|V_{\rm rad}|/V_{\rm 3D}<0.10$, and half
of the $15$ associated satellites have
$|V_{\rm rad}|/V_{\rm 3D}<0.43$. (For reference, the LMC itself has
$|V_{\rm rad}|/V_{\rm 3D}\approx 0.2$.)
It is also clear from the left panel of Fig.~\ref{fig:vrv3d} that the
large majority of LMC analogues have total velocities that trace closely
the escape\footnote{Escape velocities are defined as the speed needed
for a test particle to reach infinity, assuming spherical symmetry
and that the mass of the primary halo does not extend beyond a
radius $r=2\times r_{200}$.} velocity of each of their primaries at
their location. This is interesting because many commonly used models
for the MW potential are calibrated to match observations in and
around the solar circle, but differ in the outer regions of the
Galaxy, near the location of the LMC.
This is illustrated in the right-hand panel of Fig.~\ref{fig:vrv3d},
where the 4 different curves show the escape velocity profiles
corresponding to models recently proposed for the Milky Way; i.e., those of
\citet[][I13]{Irrgang2013}, \citet[][B15]{Bovy2015}, \citet[][GC19]{GaravitoCamargo2019}, and
\citet[][E20]{Errani2020}. These models differ in their predicted escape
velocities at the location of the LMC ($r\sim 50$ kpc) from a low
value of $\sim 330$ km/s (B15) to a high value of $\sim 445$ km/s (I13). The LMC could
therefore provide useful additional information about the total virial mass
of the MW, which dominates any estimate of the escape velocity.
We explore this in more detail in the left panel of
Fig.~\ref{fig:vrv3d3pan}, where we show the radial and total velocity
of LMC analogues and their satellites, expressed in units of the
escape velocity at their current location. The median
$V_{\rm 3D}/V_{\rm esc}$ and 25-75\% percentiles for LMC analogues is
$0.88^{+0.07}_{-0.15}$, a value that we indicate with a shaded green
line. For LMC-associated satellites the corresponding value is
similar: $0.82^{+0.11}_{-0.15}$, again highlighting the close
dynamical correspondence between LMC analogues and their satellites. The
high velocity of LMC-associated systems differs systematically from that
of regular satellites (i.e., those not associated with LMC analogues,
shown with colored crosses in Fig.~\ref{fig:vrv3d3pan}). These
systems have $V_{\rm 3D}/V_{\rm esc}=0.59^{+0.18}_{-0.14}$.
The well-defined value of $V_{\rm 3D}/V_{\rm esc}$ for LMC analogues allows us to
estimate the MW escape velocity at $50$ kpc from the total
Galactocentric velocity of the LMC, estimated at
$V_{\rm 3D} \approx 320$ km/s by \citet{Kallivayalil2013}. This
implies $V_{\rm esc}^{\rm MW}$(50 kpc)$\approx 365$ km/s, favouring
models with modest virial masses for the MW.
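This estimate is one-line arithmetic: up to rounding,
\begin{equation*}
V_{\rm esc}^{\rm MW}(50\,{\rm kpc}) \approx
\frac{V_{\rm 3D}^{\rm LMC}}{\langle V_{\rm 3D}/V_{\rm esc}\rangle}
\approx \frac{320\ {\rm km/s}}{0.88} \approx 364\ {\rm km/s},
\end{equation*}
consistent with the value quoted above.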
The four models shown in the right-hand panel of Fig.~\ref{fig:vrv3d}
have $V_{\rm esc}(50\,{\rm kpc})=397$ (GC19), $413$ (E20), $330$ (B15), and $445$ (I13) km/s.
Of these, the closest to our estimate is that of GC19, which has a
virial mass of $M_{200}=1.2\times10^{12}\, M_\odot$. Interestingly, this is also the
mass favored by the recent analysis of stellar halo kinematics by
\citet{Deason2020}.
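Given a model mass profile, such escape velocities follow from
integrating the spherical potential outward to $2\,r_{200}$, as in the
footnoted definition. A minimal numerical sketch (Python) for a pure
NFW halo, with all parameter values below being hypothetical
placeholders (the quoted models also contain baryonic components):
\begin{verbatim}
import numpy as np

G = 4.30091e-6                 # kpc (km/s)^2 / Msun

def nfw_m(x):
    # Dimensionless NFW enclosed mass.
    return np.log(1.0 + x) - x / (1.0 + x)

def v_esc_nfw(r, m200, c, r200):
    # Escape speed at radius r (kpc) for an NFW halo truncated at
    # R = 2 * r200:  Phi(r) = -G [ M(<r)/r + int_r^R dM(s)/s ].
    rs, R = r200 / c, 2.0 * r200
    s = np.linspace(r, R, 4000)
    M = m200 * nfw_m(s / rs) / nfw_m(c)
    dM = np.gradient(M, s)
    integral = np.sum(0.5 * (dM[1:] / s[1:] + dM[:-1] / s[:-1])
                      * np.diff(s))          # trapezoid rule
    return np.sqrt(2.0 * G * (M[0] / r + integral))

# v_esc_nfw(50.0, 1.2e12, 10.0, 220.0) -> ~387 km/s
\end{verbatim}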
Further constraints may be inferred by considering simulated
satellites with velocities higher than the local escape speed. These
are actually quite rare in our APOSTLE simulations: only $2$
LMC-associated satellites and $3$ regular satellites (out of a total
of $163$) appear ``unbound''. We compare this with observed MW
satellites in Fig.~\ref{fig:vrv3d3pan}, where the middle panel
corresponds to the B15 model potential and the right-hand panel to
that of GC19. (MW satellite Galactocentric radial and 3D velocities
are taken from \citet{Fritz2018} if available, or otherwise computed
from measured kinematic data as explained in the caption to
Tab.~\ref{TabScores}.) Although the LMC $V_{\rm 3D}/V_{\rm esc}$
seems acceptable in both cases, assuming the B15 potential would yield
$8$ escaping satellites out of $46$, a much higher fraction than
expected from the simulations. Even after removing Hya 2, Leo 4, Leo 5
and Pis 2, which are distant satellites with large velocity
uncertainties (in all these cases exceeding $\sim 250$ km/s), the
fraction of escapers would still be $\sim 10\%$, much larger than
predicted by APOSTLE.
The GC19 potential fares better, with three fewer escapers than B15:
Gru 1, Car 3 and Boo 2 are all comfortably bound in this
potential. Hya 2, Leo 4, Leo 5 and Pis 2 are still unbound,
however. Indeed, Leo 5 and Pis 2 would be unbound even in the I13
potential, the most massive of the four, with a virial mass
$M_{200}=1.9\times10^{12}\, M_\odot$. It is very difficult to see how to
reconcile the kinematics of those satellites with our simulations,
unless their velocities or distances are substantially
overestimated. Tighter, more accurate estimates of their
kinematics should yield powerful constraints on the Galactic
potential.
\section{Summary and Conclusions}
\label{SecConc}
We have used the APOSTLE suite of cosmological hydrodynamical
simulations to study the accretion of
LMC-mass satellites into the halo
of MW-sized galaxies. APOSTLE consists of simulations of $12$
cosmological volumes selected to resemble the Local Group. Each volume
includes a pair of halos with halo masses, separation, and relative
radial and tangential velocities comparable to the MW and M31. We
identify ``LMC analogues'' as massive satellites of any of the 24
APOSTLE primary galaxies. These satellites are chosen to be representative
of the recent accretion of the LMC into the Galactic halo, taking into
account the LMC stellar mass and its particular kinematic state near
the first pericentric passage of its orbit.
Our results allow us to address the role of the LMC (the most massive
Galactic satellite) on the properties of the MW satellite population,
including (i) the frequency of LMC-mass satellites around MW-sized
galaxies and the effects of the Local Group environment; (ii)
observational diagnostics of possible association between MW
satellites and the LMC before infall, (iii) the contribution of the
LMC to the population of ``classical'' satellites of the MW; and (iv)
the constraints on the MW gravitational potential provided by the LMC
motion. To our knowledge, this is the first study of ``LMC analogues''
and their satellite companions carried out in realistic Local Group
cosmological hydrodynamical simulations.
Our main results may be summarized as follows.
\begin{itemize}
\item We find that $14$ out of $24$ primaries in APOSTLE have a
satellite of comparable mass to the LMC
($8.75 \leq \log M_*/M_\odot \leq 10$) within 350 kpc at $z=0$. This
is a higher fraction than estimated in previous work.
We use the DOVE simulation to study the
frequency of massive satellites around MW-mass haloes that are isolated
and in pairs.
The high
frequency of LMC analogues in APOSTLE seems to have an environmental
origin, as
LMC-like companions are roughly twice as
frequent around primaries in Local Group-like environments as
around isolated halos of similar mass.
\item Out of the $14$ LMC analogues, we select a subsample of $9$ which
have reached their first pericentric passage in the past $4$ Gyr.
These satellites inhabit $M_{200}\sim10^{11}$ M$_\odot$ halos before
infall, and have rather eccentric orbits, with median pericentric
and apocentric
distances of $\sim 60$ kpc and $\sim 420$ kpc, respectively.
\item LMC analogues host their own satellites and contribute them to the
primary satellite population upon infall. We find a total of $16$
LMC-associated satellites
before infall
with $M_* > 10^5\; \rm M_\odot$ for the
$9$ LMC analogues, or slightly fewer than $2$ ``classical'' satellites
per LMC.
One satellite merges with the LMC analogue before first pericentre.
The LMC satellites contribute, on average, $\sim 10\%$
of the total population of primary satellites.
\item In agreement with previous work, we find that at the time of
first pericentre, LMC-associated satellites are all distributed
close to, and along, the orbital plane of the LMC,
extending over $\sim 45^\circ$ along the leading and trailing part of
the orbit. Their orbital angular momentum vectors are aligned with
that of the LMC, with a median relative angle of
$32^\circ$.
\item We report one case of an LMC-associated satellite that is
apparently {\it counter-rotating} the primary compared with the
LMC. The apparent counter-rotation may result when the amplitude of
the satellite's orbit around the LMC is comparable to or larger than
the pericentric distance of the LMC. Under some circumstances, this
leads the satellite to approach the centre of the primary ``on the
other side'' relative to the LMC. This is relatively rare, and only
one of the $15$ LMC-associated satellites appears to
``counter-rotate''.
\item We find that LMC-associated satellites
are located very
close to their LMC analogue in position and velocity, with a median
relative radial distance of $\sim 37$ kpc and a median relative 3D
velocity of $\sim 138$ km/s. This is because there has not
been enough time for tidal interactions from the MW to disperse the
original orbits of LMC-companion satellites.
\item We may use the proximity of associated satellites to the LMC in
phase space to rank MW satellites according to the likelihood of
their LMC association. We find that $11$ out of $46$ MW satellites
could in principle be LMC associates. For $7$ of those the
association appears firm: Hydrus 1, SMC, Car 3, Hor 1, Tuc 4, Ret 2,
and Phx 2. Others, such as Carina, Hor 2, Grus 2 and Fornax, are
potential associates as well, but their large LMC relative
velocities weaken their case.
\item The radial distribution of the satellite populations of
primaries with LMC analogues is more concentrated than those of
average APOSTLE primaries. This effect is largely driven by the particular
kinematic stage of the LMC, near its first pericentric passage, and
largely disappears after the LMC (and its associated satellites)
move away from pericentre. This offers a natural explanation for the
more concentrated radial distribution of satellites in the MW
compared to observed MW-analogues in the field, as recently reported by the
SAGA survey \citep{Mao2020}.
\item The 3D velocity of LMC analogues near first pericentre is very
close to the escape velocity of their primaries, with a median
$V_{\rm 3D}/V_{\rm esc}\approx 0.9$. We may use this result to
derive an estimate for the MW's escape velocity at the location of
the LMC ($r\sim 50$ kpc) of $\sim 365$ km/s. We also find that very
few simulated satellites (fewer than roughly $1$ in $30$) are unbound
from their primaries. This information may be used to discriminate
between different models of the MW potential. We find the model
proposed by \citet{GaravitoCamargo2019} to be
in reasonable agreement with our constraints, suggesting a MW virial
mass of roughly $1 \times 10^{12}\, M_\odot$.
\end{itemize}
Our analysis shows that $\Lambda$CDM simulations of the Local Group
can easily account for the properties of the Magellanic accretion into
the halo of the Milky Way, and offer simple diagnostics to guide the
interpretation of extant kinematic data when attempting to disentangle
Magellanic satellites from the satellite population of the Milky
Way. The accretion of the LMC and its associated satellites into the
Milky Way seems fully consistent with the hierarchical buildup of the
Galaxy expected in the $\Lambda$CDM paradigm of structure formation.
\begin{table*}
\centering
\small
\caption{Values and `scores' of MW satellites according to the
different diagnostics used in this paper to assess association with
the LMC: the 3D velocity relative to the LMC ($\Delta V_{\rm 3D}$), the
radial distance relative to the LMC ($\Delta r$), and the alignment
with the LMC's orbital pole direction
($|\rm cos(\alpha_{\rm orb})|$). MW satellites are ordered
according to their \textit{total} score in these 3 categories (last
column). The 11 MW satellites which we consider possibly
associated with the LMC according to APOSTLE predictions
(i.e., those with non-zero scores in all 3 categories) are
highlighted in red. Column 3 indicates whether the satellite is
co-rotating (+) or counter-rotating (-) the primary with respect to
the LMC. Column 4 shows the stellar mass of MW satellites, computed
applying a mass-to-light ratio to the $V$-band luminosities in
\citet{McConnachie2012}'s database. We assume $M_*/L_V=1.6$ for all
satellites (appropriate for dSph-type galaxies) except for the SMC,
where $M_*/L_V=0.7$ has been used \citep[see][]{Woo2008}. We
consider all MW satellites for which kinematic data is available.
For all satellites we adopt the positions and distance moduli
data (RA, dec, $(m-M)$) in \citet{McConnachie2012}'s database.
Line-of-sight velocities and proper motions have been taken from
\citet{Fritz2018} (their Table~2) when available, and otherwise from
\citet{McConnachie2020} (Tables~1 and ~4). SMC kinematic data is from
\citet{Kallivayalil2013}. Galactocentric positions and velocities
have been computed assuming a distance of the Sun from the Milky Way
of $R_\odot=8.29$ kpc, a circular velocity of the local standard of
rest (LSR) of $V_0=239$ km/s \citep{McMillan2011}, and a peculiar
velocity of the Sun with respect to the LSR of
$(U_\odot,V_\odot,W_\odot) = (11.1, 12.24,7.25)$ km/s
\citep{Schonrich2010}. A minimal code sketch of this conversion is given after the table. }
\begin{tabular}{ l l l l l l l l l l l }
\toprule
MW satellite & & Sign & $M_*$/($10^5$M$_\odot$) & $\Delta V_{\rm 3D}$/km s$^{-1}$& $\Delta r$/kpc & $\rm |cos(\alpha_{\rm orb})|$ & Score $\Delta V_{\rm 3D}$& Score $\Delta r$ & Score $|\rm cos(\alpha_{\rm orb})|$ & Total Score \\
\midrule
\midrule
\textcolor{red}{Hydrus1 } & Hyi1 & + & 0.10 & 99.01 & 24.87 & 0.98 & 0.87 & 0.76 & 0.97 & 2.60 \\
\textcolor{red}{SMC} & SMC & + & 3229.22 & 132.97 & 24.47 & 0.93 & 0.59 & 0.76 & 0.72 & 2.07 \\
\textcolor{red}{Horologium1} & Hor1 & + & 0.04 & 141.33 & 38.19 & 0.99 & 0.45 & 0.46 & 1.00 & 1.91 \\
\textcolor{red}{Carina3 } & Car3 & + & 0.01 & 168.75 & 25.81 & 0.96 & 0.27 & 0.76 & 0.86 & 1.89 \\
\textcolor{red}{Tucana4} & Tuc4 & + & 0.03 & 167.70 & 27.29 & 0.95 & 0.28 & 0.75 & 0.82 & 1.84 \\
\textcolor{red}{Reticulum2 } & Ret2 & + & 0.05 & 171.09 & 24.43 & 0.94 & 0.26 & 0.76 & 0.73 & 1.76 \\
\textcolor{red}{Phoenix2} & Phx2 & + & 0.03 & 145.83 & 54.18 & 0.97 & 0.42 & 0.36 & 0.95 & 1.73 \\
Tucana3 & Tuc3 & - & 0.01 & 378.12 & 32.64 & 0.96 & 0.00 & 0.73 & 0.84 & 1.57 \\
\textcolor{red}{Carina} & Car & + & 8.09 & 196.22 & 60.69 & 0.98 & 0.15 & 0.33 & 0.99 & 1.47 \\
Reticulum3 & Ret3 & - & 0.03 & 487.03 & 44.15 & 0.98 & 0.00 & 0.42 & 0.99 & 1.41 \\
Sculptor & Scl & - & 29.12 & 525.06 & 66.25 & 0.98 & 0.00 & 0.31 & 0.98 & 1.30 \\
\textcolor{red}{Horologium2} & Hor2 & + & 0.01 & 206.19 & 38.43 & 0.84 & 0.11 & 0.46 & 0.50 & 1.07 \\
\textcolor{red}{Grus2} & Gru2 & + & 0.05 & 194.33 & 46.43 & 0.83 & 0.15 & 0.41 & 0.49 & 1.05 \\
Draco & Dra & + & 4.17 & 463.86 & 125.79 & 0.97 & 0.00 & 0.08 & 0.96 & 1.04 \\
CanesVenatici2 & CVen2 & - & 0.16 & 308.17 & 196.32 & 0.99 & 0.00 & 0.00 & 1.00 & 1.00 \\
Carina2 & Car2 & + & 0.09 & 235.30 & 19.87 & 0.67 & 0.00 & 0.78 & 0.17 & 0.96 \\
Segue1 & Seg1 & - & 0.00 & 262.35 & 58.59 & 0.84 & 0.00 & 0.34 & 0.51 & 0.85 \\
Draco2 & Dra2 & + & 0.02 & 679.81 & 74.32 & 0.84 & 0.00 & 0.29 & 0.52 & 0.81 \\
Crater2 & Cra2 & + & 2.61 & 410.77 & 115.22 & 0.93 & 0.00 & 0.11 & 0.70 & 0.81 \\
Aquarius2 & Aq2 & - & 0.08 & 518.58 & 115.28 & 0.91 & 0.00 & 0.11 & 0.66 & 0.77 \\
Tucana5 & Tuc5 & + & 0.01 & 329.49 & 29.56 & 0.09 & 0.00 & 0.74 & 0.00 & 0.74 \\
\textcolor{red}{Fornax} & Fnx & + & 331.22 & 215.01 & 114.53 & 0.86 & 0.08 & 0.11 & 0.54 & 0.73 \\
Tucana2 & Tuc2 & + & 0.05 & 245.89 & 36.80 & 0.66 & 0.00 & 0.54 & 0.17 & 0.71 \\
CanesVenatici1 & CVen1 & + & 3.73 & 367.17 & 254.35 & 0.92 & 0.00 & 0.00 & 0.68 & 0.68 \\
UrsaMinor & UMi & + & 5.60 & 470.43 & 125.73 & 0.90 & 0.00 & 0.08 & 0.60 & 0.67 \\
Leo5 & Leo5 & - & 0.08 & 419.04 & 187.19 & 0.90 & 0.00 & 0.00 & 0.61 & 0.61 \\
Sagittarius2 & Sag2 & + & 0.17 & 150.34 & 79.92 & 0.04 & 0.33 & 0.27 & 0.00 & 0.61 \\
Columba1 & Col1 & + & 0.09 & 295.47 & 148.11 & 0.80 & 0.00 & 0.00 & 0.41 & 0.41 \\
Hydra2 & Hya2 & + & 0.09 & 156.49 & 121.81 & 0.54 & 0.31 & 0.09 & 0.00 & 0.40 \\
Pisces2 & Pis2 & + & 0.07 & 492.15 & 196.14 & 0.79 & 0.00 & 0.00 & 0.39 & 0.39 \\
SagittariusdSph & SagdSph & - & 343.65 & 381.88 & 52.08 & 0.16 & 0.00 & 0.37 & 0.00 & 0.37 \\
Bootes1 & Boo1 & - & 0.35 & 280.81 & 99.81 & 0.66 & 0.00 & 0.20 & 0.17 & 0.37 \\
Segue2 & Seg2 & - & 0.01 & 321.30 & 64.08 & 0.27 & 0.00 & 0.32 & 0.00 & 0.32 \\
Antlia2 & Ant2 & + & 5.60 & 264.32 & 103.77 & 0.60 & 0.00 & 0.17 & 0.14 & 0.31 \\
Triangulum2 & Tri2 & - & 0.01 & 389.81 & 67.73 & 0.41 & 0.00 & 0.31 & 0.00 & 0.31 \\
UrsaMajor2 & UMa2 & - & 0.07 & 296.04 & 76.99 & 0.33 & 0.00 & 0.28 & 0.00 & 0.28 \\
Bootes2 & Boo2 & - & 0.02 & 357.00 & 77.87 & 0.50 & 0.00 & 0.28 & 0.00 & 0.28 \\
ComaBerenices & CBer & + & 0.08 & 434.12 & 80.77 & 0.07 & 0.00 & 0.27 & 0.00 & 0.27 \\
Willman1 & Will1 & + & 0.01 & 336.92 & 81.77 & 0.07 & 0.00 & 0.27 & 0.00 & 0.27 \\
Grus1 & Gru1 & + & 0.03 & 374.43 & 92.55 & 0.03 & 0.00 & 0.23 & 0.00 & 0.23 \\
Sextans & Sxt & + & 6.98 & 376.57 & 93.79 & 0.50 & 0.00 & 0.22 & 0.00 & 0.22 \\
Hercules & Her & - & 0.29 & 342.98 & 164.59 & 0.72 & 0.00 & 0.00 & 0.20 & 0.20 \\
Leo2 & Leo2 & + & 10.77 & 352.40 & 255.00 & 0.58 & 0.00 & 0.00 & 0.11 & 0.11 \\
UrsaMajor1 & UMa1 & - & 0.15 & 347.02 & 136.85 & 0.58 & 0.00 & 0.00 & 0.10 & 0.10 \\
Leo1 & Leo1 & + & 70.49 & 231.14 & 262.99 & 0.14 & 0.00 & 0.00 & 0.00 & 0.00 \\
Leo4 & Leo4 & - & 0.14 & 403.60 & 163.35 & 0.43 & 0.00 & 0.00 & 0.00 & 0.00 \\
\bottomrule
\end{tabular}
\label{TabScores}
\end{table*}
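For illustration only, the coordinate conversion described in the
caption of Table~\ref{TabScores} can be sketched with \texttt{astropy}
as follows (our sketch; frame parameters not listed in the caption take
the \texttt{astropy} defaults):
\begin{verbatim}
import astropy.units as u
import astropy.coordinates as coord

# Solar parameters from the table caption: R_sun = 8.29 kpc,
# V_0 = 239 km/s, (U, V, W)_sun = (11.1, 12.24, 7.25) km/s.
v_sun = coord.CartesianDifferential(
    [11.1, 239.0 + 12.24, 7.25] * u.km / u.s)
galcen = coord.Galactocentric(galcen_distance=8.29 * u.kpc,
                              galcen_v_sun=v_sun)

def to_galactocentric(ra, dec, dist_mod, pm_ra_cosdec, pm_dec, v_los):
    # Heliocentric observables -> Galactocentric phase space.
    d = 10.0 ** (dist_mod / 5.0 - 2.0) * u.kpc    # (m - M) -> kpc
    sc = coord.SkyCoord(ra=ra * u.deg, dec=dec * u.deg, distance=d,
                        pm_ra_cosdec=pm_ra_cosdec * u.mas / u.yr,
                        pm_dec=pm_dec * u.mas / u.yr,
                        radial_velocity=v_los * u.km / u.s)
    return sc.transform_to(galcen)
\end{verbatim}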
\section*{Data availability}
The simulation data underlying this article can be shared on reasonable
request to the corresponding author.
The observational data for Milky Way satellites used in this article comes from the following references:
\citet[][see \url{http://www.astro.uvic.ca/~alan/Nearby_Dwarf_Database_files/NearbyGalaxies.dat}, and references therein]{Kallivayalil2013,Fritz2018,McConnachie2020,McConnachie2012}.
\section*{Acknowledgements}
We wish to acknowledge the generous contributions of all those who made
possible the Virgo Consortium’s EAGLE/APOSTLE and DOVE simulation projects.
ISS is supported by the Arthur B. McDonald Canadian
Astroparticle Physics Research Institute. JFN is a Fellow of the
Canadian Institute for Advanced Research.
AF acknowledges support by the Science and Technology Facilities Council (STFC)
[grant number ST/P000541/1] and the Leverhulme Trust.
LVS is thankful for financial support from the Hellman Foundation as well
as NSF and NASA grants, AST-1817233 and HST-AR-14552.
This work used the DiRAC@Durham facility
managed by the Institute for Computational Cosmology on behalf of the
STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by
BEIS capital funding via STFC capital grants ST/K00042X/1,
ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and
STFC operations grant ST/R000832/1. DiRAC is part of the National
e-Infrastructure.
\bibliographystyle{mnras}
\section{Motivation and Introduction}
Many biomedical and clinical trials are planned as factorial designs. Here, not only the (main) effects of separate factors but also interaction effects that are related to possibly complex factor combinations are of importance.
Such interaction effects
may even alter the interpretation of main effects, leading to the well-known remark by \cite{lubsen1994factorial} that
{\it`it is desirable for reports of factorial trials to include estimates of the interaction between the treatments'}.
On the other hand, {\it nonparametric} estimation and the inference of adequate effects in such designs can be rather involved.
In particular, most existing inference procedures have focused on testing hypotheses formulated in terms of distribution functions \citep{brunner97, brunner2001nonparametric, gao2005unified, gao2008nonparametric, gao2008nonparametriconeway, akritas2011nonparametric, dutta16, friedrich2016CSDA, UKP2016} which cannot be inverted to obtain confidence intervals or regions for meaningful effects.
Only recently, nonparametric methods for inferring adequate effects in general factorial designs with independent and dependent observations have been established \citep{rankFD, brunnerrank2016, umlauft2017wild, dobler2017nonparametric2}. These procedures are, however, only developed for completely observed data and not applicable for
partially observed time-to-event data.
Since many clinical studies are concerned with survival outcomes, adequate statistical inference methods for complex factorial time-to-event designs are of particular interest.
To detect main effects, weighted logrank tests or their extensions may be applied in case of two or multiple samples
\citep{mantel1966evaluation, abgk93, ehm1995power, liu1995design, janssen1997two, bathke2009combined, yang2010improved, fleming2011counting, Brendel_etal_2014}. However, these procedures only infer conclusions
in terms of cumulative hazard functions and cannot be applied to obtain
concrete {\it effect parameters} with informative confidence intervals nor tests for the presence of interactions.
In practice, interaction effects are usually modeled with the help of
Cox-, Aalen- or even Cox-Aalen regression models \citep{cox72, scheike2002additive, scheike2003extensions} with factors as covariates and incorporated interaction terms. However, although very flexible, these models are usually geared more towards hazard modeling with continuous covariates, while the incorporation of several factor variables (e.g., via multiple dummy variables per factor) can become cumbersome, especially when interactions are included; see also \citet{green2002factorial} and \cite{crowley2012} for the uncensored case.
The above problems directly motivate a nonparametric approach for estimating and inferring main and interaction effects in factorial designs with censored observations.
So far, the only existing methods in this context are given by the nonparametric survival procedures of \cite{akritas97} and \cite{akritas2011nonparametric}.
They are based on a purely nonparametric model that does not require any multiplicative or additive structure of the hazards and can even be applied for arbitrary, possibly non-continuous survival distributions (i.e., it can be readily used for survival times rounded to days, weeks or months).
Moreover, it leads to tests for
main and interaction effects in the case of independent right-censored data.
However, these tests suffer from several drawbacks:
they rely on a rather strong assumption on the underlying censoring distribution,
which is often hard to verify in practical situations. In addition,
null hypotheses are only formulated in terms of distribution functions.
As a result, there is no direct quantification and estimation of main and interaction effects in terms of confidence intervals as, e.g., required by regulatory authorities (ICH E9 Guideline, 1998, p. 25).
This is to be changed in the current paper. We develop and rigorously analyze nonparametric inference procedures, i.e. tests and confidence intervals, for meaningful effect sizes in factorial survival designs, where data may be subject to random right-censoring.
Similar to the adaption of the \citet{brunner2000nonparametric} test to the two-sample survival set-up by \citet{dobler2016bootstrap}, we consider the recently proposed unweighted nonparametric effects of \citet{brunnerrank2016} and extend their ansatz to a general survival setting. In the special case of proportional hazards, these effects have a direct relationship to hazard ratios in two-sample settings \citep{bruckner2017sequential} while they remain meaningful in case of non-proportional hazards. This fact makes the effect sizes even more appealing for practical purposes.
The paper is organized as follows. The statistical model and important results on the basic estimators are presented in Section~\ref{sec:mod}. The resulting test statistic for the null hypotheses of interest is stated and mathematically analyzed in Section~\ref{sec:test_stat}. Since the asymptotic distribution of the test statistic depends on unknown parameters,
we propose a distribution-free multiplier resampling approach in Section~\ref{sec:wbs} and prove its consistency.
In Section~\ref{sec:simus}, a simulation study assesses the finite sample properties of the proposed procedures.
These are then exemplified on a colon cancer study in Section~\ref{sec:data_Example}; in the original study~\citep{moertel90}, the analysis was carried out in terms of Cox models.
Finally, the paper closes with concluding comments in Section~\ref{sec:dis}.
All proofs are deferred to the technical Appendix.
\section{The set-up}
\label{sec:mod}
To establish the general model, we consider sequences of mutually independent random variables
\begin{equation}\label{eq:mod}
T_{ik} \stackrel{\text{ind}}{\sim} S_i \quad{\mbox{and}}\quad C_{ik} \stackrel{\text{ind}}{\sim} G_i \qquad( i = 1, \dots, d, \ k = 1, \dots, n_i),
\end{equation}
where $T_{ik}$ denotes the actual survival time of subject $k$ in group $i$ and $C_{ik}$ the corresponding censoring variable. Moreover, to even allow for ties or survival times rounded to weeks or months, the survival functions $S_i$ and $G_i$, $i=1, \dots, d$, defined on $(0,\infty)$ may be {\it possibly discontinuous}.
That is, the corresponding hazard rates may, but need not, exist.
The actually observable data consist of the right-censored survival times $X_{ik} = T_{ik} \wedge C_{ik}$ and the uncensoring indicators $\delta_{ik} = 1 \{ T_{ik} \leq C_{ik} \}$,
$i = 1, \dots, d, \ k = 1, \dots, n_i$. In this set-up, a factorial structure can be incorporated by splitting up indices, see Section~\ref{sec:simus} for details.
In the special case of $d=2$ groups with continuous survival times, \citet{efron67} introduced an estimator for the {\it concordance probability}
$$
w= P(T_{11} > T_{21}) = - \int S_1 \d S_2
$$
that a randomly chosen subject from the first group survives longer than someone from the second group.
If all subjects are completely observable, this effect size $w$ reduces to the well-known Mann-Whitney effect underlying the \citet{brunner2000nonparametric} test.
Inference procedures for $w$ and related quantities in survival set-ups (such as the concordance parameter or the average hazard ratio) have, e.g., been developed by \citet{bruckner2017sequential, dobler2016bootstrap}. However, an extension of the definition of $w$ to the more general design \eqref{eq:mod}, allowing for an arbitrary factorial structure, is not straightforward. In particular, for the case of completely observed data, \citet{brunnerrank2016} and \citet{brunner2018ranks} point out several pitfalls that may lead to paradoxical results when working with a `wrong' extension of $w$. Adapting their solution to the present situation, we introduce an additional `benchmark' survival time $Z$, independent of the above, with averaged survival function $Z\sim \bar S = \frac1d \sum_{i=1}^d S_i$. This is used to extend $w$ to
$$\tilde p_i = P(T_{i1} > Z) + \frac12 P(T_{i1} = Z) =
- \int S_i^\pm \d \bar S, $$
where the superscript $\pm$ denotes the average of a right-continuous function and its left-continuous version.
The use of such normalized survival functions adequately handles discrete components of the survival distribution, i.e.\ ties in the data are explicitly allowed.
The choice of the effect parameter $\tilde p_i$ is motivated by recent findings on nonparametric analyses of factorial designs with
complete observations in \cite{brunnerrank2016, brunner2018ranks}. \nocite{dobler2017nonparametric2} They stress that other choices, e.g.,
pairwise comparisons of all concordance probabilities $w$ or comparisons with the weighted survival function
$\sum_{i=1}^d \frac{n_i}{N}S_i$ instead of $\bar S$, may easily result in paradoxical outcomes. This is no issue for the effects $\tilde p_i$ which are sample size independent.
For later calculations, we emphasize that the effect parameters are balanced in the mean.
In particular, we have
$$
\frac1d \sum_{i=1}^d \tilde p_i = - \int \bar S^\pm \d \bar S = \frac12
\quad\text{and}\quad
\tilde p_i = - \sum_{\substack{j = 1 \\ j \neq i}}^d \int S_i^\pm \d S_j + \frac1{2d}.
$$
From a practical point of view, estimation of the $\tilde p_i$'s would require `arbitrarily' large survival times since the integral is defined on $(0,\infty)$.
However, every study ends at a certain point in time. For practical applicability, we therefore assume that the censoring times are bounded and we have to modify the $\tilde p_i$'s accordingly:
denote by $\tau > 0$ the largest possible censoring time.
In comparisons of survival times, which belong to different groups and which exceed $\tau$,
no group shall be favored. In other words, the remaining mass has to be split up equally among the groups. Technically, this is realized by setting the remaining mass of the survival functions to zero: $S_i(\tau) =0$.
Redefining $S_i$ and $\bar S$ from now on as the survival functions of $\min(T_{i1}, \tau)$ and $\min(Z, \tau)$, respectively, this translates into the {\it nonparametric concordance effects}
\begin{equation}
p_i = P(\min(T_{i1}, \tau) > \min(Z, \tau)) + \frac12 P(\min(T_{i1}, \tau) = \min(Z, \tau)) = - \int S_i^\pm \d \bar S.
\end{equation}
Obviously, all of the above-discussed positive properties of the effect parameter $\tilde p_i$ also transfer to the nonparametric concordance effect $p_i$:
it is a meaningful effect measure for ordinal and metric data, it is sample size independent, and it allows for a suitable treatment of ties.
We aggregate all effects into the vector $\b p = (p_1, \dots, p_d)'$ and borrow a trick from \cite{konietschke2012rank} and \cite{brunnerrank2016} to express them as
\begin{equation}\label{eq:p_as_w_expression}
\b p = \Big(\b I_d \otimes \frac1d \b 1_d'\Big) \cdot (\b w_1', \dots, \b w_d')' =: \b E_d \cdot \b w.
\end{equation}
Here, $\b w_i = (w_{1i}, \dots, w_{di})' = - \int S_i^\pm \d \b S$ is the $\R^d$-vector of effects for direct comparisons of group $i$ with respect to all groups $j=1,\dots,d$,
and $\b S = (S_1, \dots, S_d)'$ is the aggregation of all survival functions. Moreover, $\b I_d$ denotes the identity matrix in $\R^d$, $\b 1_d$ the $d$-dimensional vector of 1's and the symbol $\otimes$ denotes the Kronecker product. In this way the $i$th entry of $\b w_i$ is $w_{ii} = \frac12$ which makes sense because equal groups should be valued equally high.
In any case, Equation~\eqref{eq:p_as_w_expression} shows that the problem of estimating $\b p$ reduces to the estimation of the pairwise effects $w_{ji}$. This can be achieved by substituting each involved survival function $S_i$ with its Kaplan-Meier estimator $\wh S_i, i=1,\dots,d$ \citep{kaplan58}. Proceeding in this way, we denote by
$\wh {\b w}$ and $\wh {\b w}_i$ these estimated counterparts of $\b w$ and $\b w_i$. Let $N = \sum_{i=1}^d n_i$ be the total sample size.
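Since Kaplan-Meier curves are step functions, each plug-in integral $\wh w_{ji} = -\int \wh S_i^\pm \d \wh S_j$ is a finite Riemann-Stieltjes sum over the jumps of $\wh S_j$. The following R sketch computes $\wh{\b w}$ and $\wh{\b p} = \b E_d \wh{\b w}$ (the helpers \texttt{km\_step} and \texttt{w\_pair} are ours, not part of an existing package; \texttt{time}, \texttt{status}, \texttt{group} and \texttt{tau} denote the pooled data and the terminal time):
\begin{verbatim}
library(survival)
km_step <- function(time, status, tau) {     # KM curve of min(T, tau)
  t2 <- pmin(time, tau)
  d2 <- ifelse(time >= tau, 1, status)       # times reaching tau are 'events'
  f  <- survfit(Surv(t2, d2) ~ 1)
  list(time = f$time, surv = f$surv)         # right-continuous step function
}
w_pair <- function(Si, Sj) {                 # \hat w_ji = -int Si^{pm} dSj
  SjL  <- c(1, head(Sj$surv, -1))            # left limits S_j(t-)
  jump <- SjL - Sj$surv                      # -Delta S_j (0 at censorings)
  SiR  <- stepfun(Si$time, c(1, Si$surv))(Sj$time)                # S_i(t)
  SiL  <- stepfun(Si$time, c(1, Si$surv), right = TRUE)(Sj$time)  # S_i(t-)
  sum(0.5 * (SiR + SiL) * jump)
}
idx <- split(seq_along(time), group)
S   <- lapply(idx, function(k) km_step(time[k], status[k], tau))
d   <- length(S)
W   <- matrix(0, d, d)                       # W[j, i] = \hat w_ji
for (i in 1:d) for (j in 1:d) W[j, i] <- w_pair(S[[i]], S[[j]])
p_hat <- colMeans(W)                         # \hat p = E_d \hat w
\end{verbatim}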
Below we establish the asymptotic normality of
$\sqrt{N}(\wh {\b w} - {\b w})$ under the following framework:
\begin{align}
\label{eq:sample_size_conv}
N^{-1} \boldsymbol n := \Big(\frac{n_1}{N}, \dots, \frac{n_d}{N}\Big)' \rightarrow \boldsymbol \lambda := (\lambda_1, \dots, \lambda_d)' \in (0,1)^d
\end{align}
as $\min \boldsymbol n \rightarrow \infty$. To give a detailed description of the resulting asymptotic covariance structure, however, we first have to introduce some additional notation:
Let $D[0,\tau]$ be the space of all c\`adl\`ag-functions on $[0,\tau]$, equipped with the Skorokhod metric, and $BV[0,\tau] \subset D[0,\tau]$ its subspace of c\`adl\`ag-functions with bounded variation.
For the subsequent arguments it is essential that we can represent $\b w = \phi \circ \b S $ as a functional of $\b S$. In particular, the functional
$$ \phi: \ (BV[0,\tau])^d \rightarrow \R^{d^2}, \qquad (f_1, \dots, f_d)' \longmapsto \Big( - \int f_i^\pm \d f_j \Big)_{i,j = 1}^d, $$
with inner index $j$, is Hadamard-differentiable at $\b S$; see the proof of Lemma~\ref{lem:w} below for details.
We denote its Hadamard-derivative at $\b S$ by $\d \phi_{\b S}$, which is a continuous linear functional.
For technical reasons, we assume throughout that $P(T_{i1} > \tau) >0$ for all groups $i=1, \dots, d$.
We may now state the first preliminary but essential convergence result.
\begin{lemma}\label{lem:w}
Under the asymptotic regime \eqref{eq:sample_size_conv} we have
$$ \sqrt N ( \wh {\b w} - \b w ) \oDo \b W, $$
where $\b W$ has a centered multivariate normal distribution on $\R^{d^2}$.
\end{lemma}
In particular, we can write $\b W = \d \phi_{\b S} \cdot diag(\boldsymbol \lambda)^{-1/2} \b U$,
where $\b U$ consists of independent, zero-mean Gaussian processes $U_1, \dots, U_d$ with covariance functions
$$\Gamma_i(r,s) = S_i(r) S_i(s) \int_0^{r \wedge s} \frac{\d \Lambda_i}{S_{i-} G_{i-} (1 - \Delta \Lambda_i)}, \quad i=1,\dots,d,$$
where $\Lambda_i$ denotes the cumulative hazard function corresponding to $S_i = \prodi (1 - \d \Lambda_i)$, $i=1, \dots, d$;
the symbol $\prodi$ denotes the product integral \citep{gill90}.
Here, a minus sign in a subscript indicates the left-continuous version of a function and $\Delta \Lambda = \Lambda - \Lambda_-$ is the jump size function of $\Lambda$.
Note that the covariance matrix of $\b W$ is singular; in particular, $ ( \d \phi_{\b S} \cdot diag(\boldsymbol \lambda)^{-1/2} \b U )_{i,i} = 0 $ for all $i = 1, \dots, d$.
The other entries ($i \neq j$) are distributed as follows:
\begin{align*}
( \d \phi_{\b S} \cdot diag(\boldsymbol \lambda)^{-1/2} \b U )_{i,j}
= \frac{1}{\sqrt{\lambda_i}} \int U_i^\pm \d S_j
- \frac{1}{\sqrt{\lambda_j}} \int U_j^\pm \d S_i
\sim N \Big(0, \frac{1}{\lambda_i} \int \int \Gamma_i^{\pm \pm} \d S_j \d S_j
+ \frac{1}{\lambda_j} \int \int \Gamma_j^{\pm \pm} \d S_i \d S_i \Big) .
\end{align*}
Here, the double appearance of $\pm$ signs means the average of all four combinations of left- and right- continuous versions in both arguments of a two-parameter function.
Let us now turn to the estimation of the nonparametric concordance effects $\b p$.
A matrix multiplication of $\wh {\b w}$ with $\b E_d$ from the left amounts to taking the mean with respect to the inner index $j$. This immediately brings us to the first main result:
\begin{thm}
\label{thm:p}
Under the asymptotic regime \eqref{eq:sample_size_conv} we have
$$\sqrt N ( \wh {\b p} - \b p) := \sqrt N \b E_d ( \wh {\b w} - \b w ) \oDo \b E_d \b W =\Big( \frac1d \sum_{i=1}^d \frac{1}{\sqrt{\lambda_i}} \int U_i^\pm \d S_j - \frac{1}{\sqrt{\lambda_j}} \int U_j^\pm \d \bar S \Big)_{j=1}^d,$$
where $ \b E_d \b W$ has the variance-covariance matrix $\b V$ with the following entries:
$$ V_{ii} = \frac{1}{\lambda_i} \int \int \Gamma_i^{\pm \pm} \d \bar S \d \Big( \bar S - \frac2d S_i \Big)
+ \frac{1}{d^2} \sum_{j=1}^d \frac{1}{\lambda_j} \int \int \Gamma_j^{\pm \pm} \d S_i \d S_i $$
in the $i$th diagonal entry, $i = 1, \dots, d$, and
$$ V_{ij} = \frac{1}{d^2} \sum_{k=1}^d \frac1{\lambda_k} \int \int \Gamma_k^{\pm \pm} \d S_i \d S_j
- \frac1d \frac1{\lambda_i} \int \int \Gamma_i^{\pm \pm} \d \bar S \d S_j
- \frac1d \frac1{\lambda_j} \int \int \Gamma_j^{\pm \pm} \d \bar S \d S_i $$
in the off-diagonal entries $(i,j)$, \ $i \neq j$.
\end{thm}
A more compact form of the matrix $\b V$ is given in Appendix~\ref{app:asy.cov}.
\section{Choice of Test Statistic}
\label{sec:test_stat}
In order to develop hypothesis tests based on the estimator $\wh{\b p}$,
we next need to find a consistent estimator $\wh {\b V}_N$ for $\b V$.
A natural choice is to plug in estimators for all unknown quantities that are involved in $\b V$.
In particular, we use the Kaplan-Meier estimators for all survival functions and
$\wh \Gamma_i(s,t) = \wh S_i(s) \wh S_i(t) n_i \int_0^{s \wedge t} [ Y_i (1 - \Delta \wh \Lambda_i)]^{-1} \d \wh \Lambda_i$ for each covariance function $\Gamma_i$,
where $Y_i$ is the number at risk process and $\wh \Lambda_i$ is the Nelson-Aalen estimator of the cumulative hazard function in group $i$.
Note that if $\Delta \wh \Lambda_i(u) = 1 $, we also have $\wh S_i(u) = 0$ in which case we let $\wh \Gamma_i(s,t) = 0$ if $s \geq u$ or $t \geq u$.
We denote the resulting covariance matrix estimator by $\wh {\b V}_N$.
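On the grid of observed event times this estimator takes a simple Greenwood-type form. A sketch in R, continuing the conventions of the sketch above (\texttt{gamma\_hat} is our own helper; \texttt{time} and \texttt{status} here denote the group-$i$ data only):
\begin{verbatim}
gamma_hat <- function(time, status, tau) {
  t2 <- pmin(time, tau); d2 <- ifelse(time >= tau, 1, status)
  ut <- sort(unique(t2[d2 == 1]))                 # event times
  Y  <- sapply(ut, function(u) sum(t2 >= u))      # at-risk process Y_i
  dN <- sapply(ut, function(u) sum(t2 == u & d2 == 1))
  S  <- cumprod(1 - dN / Y)                       # Kaplan-Meier values
  ## increments of int [Y (1 - dLambda)]^{-1} dLambda; set to 0 when a
  ## Nelson-Aalen increment equals 1 (then S = 0, cf. the convention above)
  inc <- ifelse(Y > dN, dN / (Y * (Y - dN)), 0)
  cum <- cumsum(inc)
  ij  <- outer(seq_along(ut), seq_along(ut), pmin)     # index of min(s, t)
  list(time  = ut,
       Gamma = length(t2) * outer(S, S) * matrix(cum[ij], nrow(ij)))
}
\end{verbatim}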
\begin{lemma}
\label{lem:gamma}
Under the asymptotic regime \eqref{eq:sample_size_conv} we have the consistency
$\wh {\b V}_N \oPo \b V$.
\end{lemma}
All of the developed convergence results are now utilized to find the most natural test statistic.
First note that the asymptotic covariance matrix $\b V$ is singular
since $\b 1_d' \sqrt{N} (\wh {\b p} - \b p) \equiv 0$,
whence $r(\b V) \leq d-1$ follows.
Furthermore, it is not at all obvious whether
the rank of the Moore-Penrose inverse, $r( (\b C \wh {\b V}_N \b C' )^+ )$,
converges in probability to the rank $ r( (\b C \b V \b C')^+)$
for a compatible contrast matrix $\b C$.
Hence, the Wald-type statistic
$ N \wh {\b p}' \b C' (\b C \wh {\b V}_N \b C')^+ \b C \wh {\b p} $
is not suitable for testing $H_0^p(\b C): \b C \b p = \b 0$:
its asymptotic behaviour is unclear and thus there is no reasonable choice of critical values.
Instead, we utilize a statistic that does not rely on the uncertain convergence of ranks of generalized inverses.
This leads us to the survival version of the so-called ANOVA-rank-type statistic
\begin{equation}\label{eq:ATS}
F_N(\b T) = \frac{N}{tr(\b T \wh {\b V}_N)} \wh {\b p}' \b T \wh {\b p},
\end{equation}
where $\b T = \b C' (\b C \b C')^+ \b C$ is the unique projection matrix onto the column space of $\b C$.
Below we analyze its asymptotic behaviour both under null hypotheses of the form $H_0^p(\b C): \b C \b p = \b 0$ and under the corresponding alternative hypotheses $H_a^p(\b C): \b C \b p \neq \b 0$.
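Computationally, $F_N(\b T)$ only requires a Moore-Penrose inverse; a minimal sketch in R (\texttt{ginv} from the \texttt{MASS} package; \texttt{p\_hat} and \texttt{V\_hat} denote the estimates discussed above):
\begin{verbatim}
library(MASS)                                # ginv(): Moore-Penrose inverse
proj_mat <- function(C) t(C) %*% ginv(C %*% t(C)) %*% C  # T = C'(CC')^+ C
F_N <- function(p_hat, V_hat, C, N) {
  T_mat <- proj_mat(C)
  as.numeric(N * t(p_hat) %*% T_mat %*% p_hat / sum(diag(T_mat %*% V_hat)))
}
\end{verbatim}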
\begin{thm}
\label{thm:teststat}
Assume the asymptotic regime \eqref{eq:sample_size_conv} and that $tr(\b T {\b V})>0$.
\begin{itemize}
\item[a)] Under $H_0^p(\b C)$ and as $N \rightarrow \infty$, we have
$F_N(\b T) \oDo \chi = \b W' \b E_d' \b T \b E_d \b W/tr(\b T {\b V})$
which is non-degenerate and non-negative with $E(\chi)=1$.
\item[b)] Under $H_a^p(\b C)$ and as $N \rightarrow \infty$, we have
$F_N(\b T) \oPo \infty.$
\end{itemize}
\end{thm}
As the distribution of $\chi$ depends on unknown quantities (cf.\ Theorem~\ref{thm:p}), the test statistic $F_N(\b T)$ in \eqref{eq:ATS} is not an asymptotic pivot. To nevertheless
obtain proper critical values that lead to asymptotically exact inference procedures, we next propose and study a resampling approach.
\section{Inference via Multiplier Bootstrap}
\label{sec:wbs}
In this section, we apply suitably tailored multiplier bootstrap techniques
in order to approximate the small sample distribution of $F_N( \b T )$.
To this end, we consider the situation under $H_0^p(\b C)$ in which case we may expand
\begin{align*}
F_N( \b T ) & = \frac{N}{tr(\b T \wh {\b V}_N)} ( \wh {\b p} - \b p )' \b T ( \wh {\b p} - \b p )
= \frac{N}{tr(\b T \wh {\b V}_N)} ( \d \phi_{\b S} \cdot ( \wh {\b S} - \b S ))' \b E_d' \ \b T \ \b E_d ( \d \phi_{\b S} \cdot ( \wh {\b S} - \b S )) + o_p(1) ,
\end{align*}
where $\wh {\b S}$ is the vectorial aggregation of all Kaplan-Meier estimators $\wh S_1, \dots, \wh S_d$.
First, we replace the martingale residuals attached to the Kaplan-Meier estimators
with independent centered random variables that have approximately the same variance.
In particular, we replace $\sqrt{N} (\wh {S}_i - S_i)$ with
$$\wh S_i(t) \cdot \sqrt{N} \sum_{k=1}^{n_i} G_{ik} \int_0^t [ (Y_i(u) - \Delta N_i (u)) Y_i(u) ]^{-1/2} \d N_{ik}(u); $$
cf. \cite{dobler17} for a similar wild bootstrap Greenwood-type correction for tied survival data.
Here we utilized the usual counting process notation \citep{abgk93}:
$N_{ik}$ indicates whether the event of interest already took place for individual $k$ in group $i$.
The \emph{wild bootstrap} multipliers $G_{ik}, \ k=1, \dots, n_i, \ i=1, \dots, d$,
are i.i.d.\ with zero mean and unit variance and also independent of the data.
In \cite{bluhmki18a, bluhmki18b} a similar multiplier resampling approach is
applied to Nelson-Aalen and Aalen-Johansen estimators in one- and two-sample problems.
In the next step toward the construction of a wild bootstrap statistic, we replace $\d \phi_{\b S}$ with $\d \phi_{\wh{\b S}}$.
Let us denote the thus obtained wild bootstrap version of
$\sqrt{N} \d \phi_{\b S} \cdot ( \wh {\b S} - \b S )$ by
${\b W}_N^*$. Conditionally on the data, this $d^2$-variate random vector is for large $N$
approximately normally distributed and its limit distribution coincides with that of $\b W$; see the proof of Theorem~\ref{thm:wbs} below for details.
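For a single group, one such wild bootstrap replicate is easily generated, since every subject has at most one event and the integral hence reduces to a sum over event times. A sketch in R (the helper name \texttt{wbs\_km} is ours; standard normal multipliers are used for concreteness, any i.i.d.\ choice with mean zero and unit variance is admissible):
\begin{verbatim}
wbs_km <- function(time, status, tau, N) {   # group-i data; N = total size
  t2 <- pmin(time, tau); d2 <- ifelse(time >= tau, 1, status)
  ut <- sort(unique(t2[d2 == 1]))
  Y  <- sapply(ut, function(u) sum(t2 >= u))
  dN <- sapply(ut, function(u) sum(t2 == u & d2 == 1))
  S  <- cumprod(1 - dN / Y)                  # Kaplan-Meier \hat S_i
  G  <- rnorm(length(t2))                    # multipliers: mean 0, variance 1
  gs <- sapply(ut, function(u) sum(G[t2 == u & d2 == 1]))
  wt <- ifelse(Y > dN, (Y * (Y - dN))^(-1/2), 0)
  list(time = ut, value = S * sqrt(N) * cumsum(gs * wt), G = G)
}
\end{verbatim}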
Finally, a wild bootstrap version $F_N^*(\b T)$ of $F_N(\b T)$ requires
that we also use a consistent wild bootstrap-type estimator $tr(\b T {\b V}_N^*)$ of $tr(\b T \wh {\b V}_N)$.
It is found by replacing the estimators $\wh \Gamma_i$ with
$$ \Gamma^*_i(s,t) = \wh S_i(s) \wh S_i(t) \ n_i \sum_{k=1}^{n_i} G_{ik}^2 \int_0^{s \wedge t} \frac{\d N_{ik}}{ (Y_i - \Delta N_i) Y_i}.$$
Its conditional consistency was argued in \cite{dobler17} and a sufficient condition for this is $E(G_{11}^4) < \infty$.
These wild bootstrap-type variance estimators also have the nice interpretation of optional variation processes of the wild bootstrapped Kaplan-Meier estimators \citep{dobler17}.
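A matching sketch of $\Gamma^*_i$ on the event-time grid, reusing the multipliers \texttt{G} returned by \texttt{wbs\_km} above:
\begin{verbatim}
gamma_star <- function(time, status, tau, G) {
  t2 <- pmin(time, tau); d2 <- ifelse(time >= tau, 1, status)
  ut <- sort(unique(t2[d2 == 1]))
  Y  <- sapply(ut, function(u) sum(t2 >= u))
  dN <- sapply(ut, function(u) sum(t2 == u & d2 == 1))
  S  <- cumprod(1 - dN / Y)
  g2 <- sapply(ut, function(u) sum(G[t2 == u & d2 == 1]^2))
  cum <- cumsum(ifelse(Y > dN, g2 / (Y * (Y - dN)), 0))
  ij  <- outer(seq_along(ut), seq_along(ut), pmin)
  list(time  = ut,
       Gamma = length(t2) * outer(S, S) * matrix(cum[ij], nrow(ij)))
}
\end{verbatim}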
Hence, the resulting wild bootstrap version of $F_N( \b T )$ is
\begin{align*}
F_N^*( \b T ) & = \frac{1}{tr(\b T {\b V}_N^*)} {\b W}_N^{*}{\!}' \b E_d' \ \b T \ \b E_d {\b W}_N^*.
\end{align*}
The following conditional central limit theorem ensures the consistency of this resampling approach.
\begin{thm}
\label{thm:wbs}
Assume $E(G_{11}^4) < \infty$ and that the conditions of Theorem~\ref{thm:teststat} hold.
Conditionally on $(X_{ik}, \delta_{ik}), i =1, \dots, d, \ k = 1, \dots, n_i$, we have for all underlying values of ${\b p}$
$$F_N^*(\b T) \oDo \chi = {\b W}' \b E_d' \b T \b E_d {\b W}/ tr(\b T {\b V})$$
in probability as $N \rightarrow \infty$.
\end{thm}
We would like to stress that the limit distribution coincides with that of $F_N(\b T)$ under $H_0^p(\b C)$.
For the wild bootstrap version $F_N^*(\b T)$, however, the convergence result holds under both the null and the alternative hypothesis,
i.e.\ its conditional distribution always approximates the correct null distribution of the test statistic.
We conclude the theoretical part of this article with a presentation of deduced inference procedures for the effect sizes $\b p$.
To this end, let $c^*_{N,\alpha}$ denote the $(1-\alpha)$-quantile, $\alpha \in (0,1)$, of the conditional distribution of $F_N^*(\b T)$ given the data.
In practice, this quantile is approximated via simulation by repeatedly generating sets of the wild bootstrap multipliers $G_{ik}$.
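Schematically, the resampling step looks as follows (a sketch; \texttt{F\_N\_star} is a hypothetical routine returning one wild bootstrap replicate $F_N^*(\b T)$ for the given data and contrast matrix):
\begin{verbatim}
c_star <- function(data, C, alpha = 0.05, B = 1999) {
  stats <- replicate(B, F_N_star(data, C))   # B wild bootstrap replicates
  quantile(stats, probs = 1 - alpha, names = FALSE)
}
## test decision: reject H_0^p(C) iff F_N(T) exceeds c_star(data, C)
\end{verbatim}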
\begin{cor}
\label{cor:test}
Under the assumptions of Theorem~\ref{thm:wbs}, the test
$$ \varphi_N = 1\{ F_N(\b T) > c_{N,\alpha}^* \} $$
is asymptotically exact and consistent.
That is, $E( \varphi_N ) \rightarrow \alpha \cdot 1_{H_0^p(\b C)} + 1_{H_a^p(\b C)}$ as $N \rightarrow \infty$.
\end{cor}
Now, let $r$ be the number of columns of $\b C'$ and denote by $\b c_1, \dots, \b c_r$ its column vectors.
The presentation of a simultaneous confidence region for the contrasts $\b c_\ell' \b p, \ell = 1, \dots, r,$ in Corollary~\ref{cor:sCIs} below will be done in an implicit manner.
\begin{cor}
\label{cor:sCIs}
Under the assumptions of Theorem~\ref{thm:wbs}, an asymptotically exact $(1-\alpha)$-confidence ellipsoid for the contrasts $\b c_\ell' \b p, \ell = 1, \dots, r,$
is given by
$$ CE = CE_{N,1-\alpha}(\b C) = \Big\{ \b v \in \R^r :
(\b C \wh {\b p} - \b v)' (\b C \b C')^+ ( \b C \wh {\b p} - \b v) \leq \frac{tr(\b T \wh {\b V}_N)}{N} c_{N,\alpha}^* \Big\}.$$
That is, $P( \b C \b p \in CE ) \rightarrow 1 - \alpha $ as $N \rightarrow \infty$.
\end{cor}
\section{Simulations}\label{sec:simus}
In this section, we assess the small sample properties of the test $\varphi_N$ as proposed in Corollary~\ref{cor:test}.
\subsection{Behaviour under null hypotheses}
We first focus on its type I error control with respect to
\begin{itemize}
\item various kinds of contrast matrices
\item and different censoring intensities.
\end{itemize}
{\bf Design and Sample Sizes.} For ease of presentation we restrict ourselves to a design with $d=6$ groups with different sample size layouts:
we considered small samples in a balanced design with $\b n_1 = (n_1,\dots,n_6)' = (10,10,10,10,10,10)'$ and two unbalanced designs with $\b n_2 = (n_1,\dots,n_6)' = (10,12,14,10,12,14)'$ and $\b n_3 = (10,12,14,14,10,12)'$, respectively.
To obtain designs with moderate to large sample sizes we increase these vectors component-wise by the factors $K \in \{2,3,5,10\}$. Moreover, depending on the question of interest, we distinguish below between a one-way layout with six independent groups and a $2\times 3$ two-way design.\\[0.4cm]
{\bf Censoring Framework.} We considered exponentially distributed censoring random variables $C_{i1}\stackrel{ind}{\sim}Exp(\lambda_i)$ with the following vectors
$\boldsymbol \lambda = (\lambda_1,\dots,\lambda_6)'$ of rate parameters:
$\boldsymbol \lambda_1 = 0.4 \cdot \boldsymbol 1$, \ $\boldsymbol \lambda_2 = 0.5 \cdot \boldsymbol 1$, \ $\boldsymbol \lambda_3 = 2/3 \cdot \boldsymbol 1$, \
$\boldsymbol \lambda_4 = (0.4, 0.5, 2/3, 0.4, 0.5, 2/3)'$, \ $\boldsymbol \lambda_5 = (0.4, 0.5, 2/3, 2/3, 0.5, 0.4)'$,
where $\boldsymbol 1 \in \R^6$ is the vector consisting of $1$s only.
Thus, the first three settings correspond to equal censoring mechanisms with increased censoring rate from $\boldsymbol \lambda_1$ to $\boldsymbol \lambda_3$. The other two ($\boldsymbol \lambda_4$ and
$\boldsymbol \lambda_5$) lead to unequal censoring. By considering all $75$ possible combinations, many possible effects of censoring and sample size assignments are analyzed.
For example, in the set-up with $\b n_2$, $K=10$ and $\boldsymbol \lambda_4$, larger sample sizes are matched with a stronger censoring rate in an unbalanced design.\\[0.4cm]
{\bf Contrast Matrices and Null Hypotheses.}
We simulated the true significance level of the tests for the null hypotheses $H_0^p(\b C): \b C \b p = \b 0$ for two designs and different contrast matrices of interest:\\
In case of a one-way design with $d=6$ groups we were interested in the null hypotheses of `{\it no group effect}' or
`{\it equality of all treatment effects}' $H_0^p(\b C_1): \{ \b C_1 \b p = \b 0\} = \{p_1=\dots=p_6\}$. This may be described by considering the matrix $\b C_1 = \b P_6$, where here and below $\b P_d = \b I_d - \b J_d/d \equiv \b I_d - \boldsymbol 1_d \boldsymbol 1_d' /d$ denotes the $d$-dimensional centering matrix.\\
Next, we consider a $2\times 3$ two-way layout with two factors $A$ (with two levels) and $B$
(with three levels). This is incorporated in Model \eqref{eq:mod} by setting $d=a\cdot b= 2\cdot 3 =6$ and splitting up the index $i$ into two indices $i_1=1,2$ (for the levels of factor $A$) and $i_2=1,2,3$ (for the levels of factor $B$). Thus, we obtain survival times $T_{i_1i_2k}$, $k=1,\dots,n_{i_1i_2}$, and corresponding
nonparametric concordance effects $p_{i_1i_2}$. More complex factorial designs can be incorporated similarly. In this $2\times 3$ set-up we are now interested in testing the null hypotheses of
\begin{itemize}
\item[(A)] `{\it No main effect of factor $A$}': $H_0^p(\b C_{2,A}): \{ \b C_{2,A} \b p = \b 0\} = \{ \bar{p}_{1\cdot} = \bar{p}_{2\cdot}\}$,
\item[(B)] `{\it No main effect of factor $B$}': $H_0^p(\b C_{2,B}): \{ \b C_{2,B} \b p = \b 0\} = \{ \bar{p}_{\cdot 1} = \bar{p}_{\cdot 2} = \bar{p}_{\cdot 3}\}$ and
\item[(AB)] `{\it No $A\times B$ interaction effect}': $H_0^p(\b C_{2,AB}): \{ \b C_{2,AB} \b p = \b 0 \}
= \{p_{i_1i_2} - \bar{p}_{i_1\cdot} -\bar{p}_{\cdot i_2} + \bar{p}_{\cdot\cdot} = 0 \text{ for all } i_1,i_2\}$,
\end{itemize}
where $\bar{p}_{i_1\cdot}$, $\bar{p}_{\cdot i_2}$ and $\bar{p}_{\cdot\cdot}$ denote the means over the dotted indices. In particular, the corresponding contrast matrices are given by
$ \b C_{2,A} = \b P_2 \otimes \frac13 \b J_3,
\b C_{2,B} = \frac12 \b J_2 \otimes \b P_3,$ and $\b C_{2,AB} = \b P_2 \otimes \b P_3$,
where $\otimes$ indicates the Kronecker product.\\[0.4cm]
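In R, these matrices are one-liners via \texttt{kronecker} (a sketch; the object names are ours):
\begin{verbatim}
P <- function(d) diag(d) - matrix(1, d, d) / d   # centering matrix P_d
J <- function(d) matrix(1, d, d)                 # all-ones matrix J_d
C1   <- P(6)                                     # equality of all effects
C2A  <- kronecker(P(2), J(3) / 3)                # no main effect of A
C2B  <- kronecker(J(2) / 2, P(3))                # no main effect of B
C2AB <- kronecker(P(2), P(3))                    # no A x B interaction
\end{verbatim}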
{\bf Survival Distributions.} For ease of presentation we only considered a rather challenging scenario, where the groups follow different survival distributions. In particular, we simulated
\begin{itemize}
\item[(G1)] a lognormal distribution with meanlog parameter $0$ and sdlog parameter $0.2726$ for the
first group,
\item[(G2)] a Weibull distribution with scale parameter $1.412$ and shape parameter $1.1$ for the second group,
\item[(G3)] a Gamma-distribution with scale parameter $0.4$ and shape parameter $2.851$ for the third group and
\item[(G4-G6)\!\!\!\!\!\!\!\!\!] \ \ \ \ \ \ mixing distributions of all pair combinations of the first three survival functions for the last three groups.
\end{itemize}
The first three survival functions are illustrated in Figure~\ref{fig:surv_func}. We note that preliminary simulations for simpler scenarios with identical survival distributions in all groups exhibited a much better type-$I$-error control of our testing procedure (results not shown).
In any case, the parameters of the above distributions were chosen in such a way that the nonparametric concordance effects of all groups are equal, i.e.\ $p_i = 0.5$ for all $i=1,\dots, 6$ (one-way) and
$p_{i_1i_2} = 0.5$ for all $i_1=1,2; \ i_2=1,2,3$ (two-way), respectively. Thus, all considered null hypotheses are true.
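For concreteness, a sketch of the data generation in R (the pairing of the mixtures (G4)-(G6) below is our reading of `all pair combinations', with mixing weights $1/2$ each; the function names are ours):
\begin{verbatim}
r1 <- function(n) rlnorm(n, meanlog = 0, sdlog = 0.2726)    # (G1)
r2 <- function(n) rweibull(n, shape = 1.1, scale = 1.412)   # (G2)
r3 <- function(n) rgamma(n, shape = 2.851, scale = 0.4)     # (G3)
rmix <- function(n, ra, rb) ifelse(runif(n) < 0.5, ra(n), rb(n))
gens <- list(r1, r2, r3,
             function(n) rmix(n, r1, r2),                   # (G4)
             function(n) rmix(n, r1, r3),                   # (G5)
             function(n) rmix(n, r2, r3))                   # (G6)
one_group <- function(i, n, rate) {          # rate: censoring rate
  surv <- gens[[i]](n); cens <- rexp(n, rate)
  data.frame(group = i, time = pmin(surv, cens),
             status = as.numeric(surv <= cens))
}
dat <- do.call(rbind, lapply(1:6, one_group, n = 10, rate = 0.4))
\end{verbatim}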
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{surv_func.pdf}
\caption{Survival functions underlying the first three simulated sample groups.}
\label{fig:surv_func}
\end{figure}
We would like to stress that the case of continuously distributed survival times corresponds to an infinite-dimensional problem and is thus more difficult than the discrete case. For example, the simulation study in \cite{dobler17} confirms this observation in a related problem: the convergence rate of the actual coverage probabilities of confidence bands
to the nominal confidence level is much faster the more discretely the survival data are distributed. Moreover, to make the simulation scenario even more challenging, we considered the situation with infinite $\tau$ to also get an indication of the functionality of the test in this case.\\[0.4cm]
{\bf Simulations.}
We chose as wild bootstrap multipliers centered unit Poisson variables because a formal Edgeworth expansion and two simulation studies in \cite{dobler17weird} indicate
that those have theoretical and practical advantages over the common choice of standard normal multipliers.
We chose the nominal level $\alpha=5\%$ and conducted each test 10,000 times for $K=1,2,3$ and 5,000 times for $K=5,10$ because of the massively increasing computational complexity for large samples.
Each test was based on critical values that were found using 1,999 wild bootstrap iterations.
All simulations were conducted with the help of the R computing environment \citep{Rteam}.\\[0.4cm]
{\bf Results.}
The true type-$I$-error results for the four different null hypotheses are shown in Table~\ref{tab:level1} (left panel: one-way for $H_0^p(\b C_1)$ and right panel: two-way for
$H_0^p(\b C_{2,A})$) and Table~\ref{tab:level2} (two-way for $H_0^p(\b C_{2,B})$ in the left and $H_0^p(\b C_{2,AB})$ in the right panel).
It is apparent that all simulated levels are elevated for the smallest sample sizes ($K=1$),
especially for the one-way test: here almost all type I error probabilities are between $13.0\%$ and $17.7\%$.
For the two-way tests, these probabilities are mainly between $8.1\%$ and $11.7\%$ in this case ($K=1$).
On the one hand, this is due to the relatively strong censoring rates:
for $\lambda = 0.4$, the censoring probabilities across all sample groups are between $33\%$ and $37\%$ (found by simulating 100,000 censoring and survival time random variables each), for $\lambda = 0.5$, these probabilities range from $39.5\%$ to $41.5\%$, and for $\lambda = 2/3$, they even reach values between $48.5\%$ and $49\%$, resulting in only $5$ to $7$ uncensored observations per group. On the other hand, not restricting the time horizon in inferential procedures about survival functions appears to slightly slow down the convergence of type I error probabilities to the nominal level as the sample size increases;
see \cite{dobler18} for similar findings in the context of confidence bands for unrestricted survival functions.
However, the error probabilities already recover for samples of double size (i.e.\ between 20 and 28):
in the one-way design, these error rates drop to mainly $8.2\% - 9.9\%$,
and in all two-way tests, we even achieve rates of mainly $6.1\% - 8\%$.
If the sample sizes are tripled (i.e. between 30 and 42), most of the type I error probabilities are between $7\%-8\%$ (one-way) or $5.2\%-6.9\%$ (two-way).
In case of the sample size factor $K=5$, all results are only slightly liberal,
and for $K=10$ (i.e. sample sizes between 100 and 140), we see that the nominal level is well attained.
\begin{table}[ht]
\centering
\begin{tabular}{cc|ccccc}
$n$ & \ \ $\lambda$ / \ \ $K$ & 1 & 2 & 3 & 5 & 10 \\ \hline
& $\boldsymbol \lambda_1$ & 14.7 & 8.8 & 7.2 & 6.4 & 5.7 \\
& $\boldsymbol \lambda_2$ & 16.6 & 9.3 & 7.7 & 6.3 & 5.7 \\
$\b n_1$ & $\boldsymbol \lambda_3$ & 19.7 & 11.0 & 8.6 & 6.6 & 5.8 \\
& $\boldsymbol \lambda_4$ & 17.7 & 9.9 & 7.7 & 6.1 & 5.9 \\
& $\boldsymbol \lambda_5$ & 17.4 & 9.5 & 7.8 & 6.7 & 6.3 \\ \hline
& $\boldsymbol \lambda_1$ & 13.0 & 7.9 & 6.7 & 5.9 & 5.7 \\
& $\boldsymbol \lambda_2$ & 13.9 & 8.4 & 7.1 & 6.9 & 5.4 \\
$\b n_2$ & $\boldsymbol \lambda_3$ & 16.5 & 9.1 & 7.8 & 6.2 & 5.8 \\
& $\boldsymbol \lambda_4$ & 14.9 & 8.5 & 7.3 & 6.0 & 5.6 \\
& $\boldsymbol \lambda_5$ & 14.6 & 9.0 & 6.8 & 6.3 & 5.6 \\ \hline
& $\boldsymbol \lambda_1$ & 12.2 & 8.2 & 7.1 & 6.0 & 5.2 \\
& $\boldsymbol \lambda_2$ & 13.7 & 8.6 & 7.0 & 6.1 & 5.3 \\
$\b n_3$ & $\boldsymbol \lambda_3$ & 17.7 & 9.3 & 7.6 & 6.8 & 5.9 \\
& $\boldsymbol \lambda_4$ & 14.8 & 8.8 & 7.7 & 6.3 & 5.9 \\
& $\boldsymbol \lambda_5$ & 14.1 & 8.6 & 7.2 & 6.3 & 5.9 \\
\hline
\end{tabular}
\qquad
\begin{tabular}{cc|ccccc}
$n$ & \ \ $\lambda$ / \ \ $K$ & 1 & 2 & 3 & 5 & 10 \\ \hline
& $\boldsymbol \lambda_1$ & 9.0 & 6.3 & 6.0 & 5.7 & 5.4 \\
& $\boldsymbol \lambda_2$ & 9.0 & 6.7 & 6.0 & 5.6 & 4.6 \\
$\b n_1$ & $\boldsymbol \lambda_3$ & 10.5 & 7.0 & 6.3 & 5.9 & 5.6 \\
& $\boldsymbol \lambda_4$ & 10.1 & 6.7 & 6.1 & 5.7 & 5.7 \\
& $\boldsymbol \lambda_5$ & 9.4 & 6.2 & 5.8 & 5.6 & 5.0 \\ \hline
& $\boldsymbol \lambda_1$ & 7.7 & 6.3 & 5.8 & 5.3 & 5.9 \\
& $\boldsymbol \lambda_2$ & 8.1 & 6.3 & 6.0 & 6.0 & 5.3 \\
$\b n_2$ & $\boldsymbol \lambda_3$ & 9.7 & 6.8 & 6.4 & 5.0 & 5.4 \\
& $\boldsymbol \lambda_4$ & 8.2 & 6.3 & 6.2 & 5.9 & 5.4 \\
& $\boldsymbol \lambda_5$ & 8.9 & 6.7 & 6.2 & 5.4 & 5.1 \\ \hline
& $\boldsymbol \lambda_1$ & 7.8 & 6.3 & 6.0 & 5.6 & 4.9 \\
& $\boldsymbol \lambda_2$ & 8.5 & 6.2 & 5.5 & 5.3 & 5.0 \\
$\b n_3$ & $\boldsymbol \lambda_3$ & 9.2 & 6.9 & 6.5 & 5.6 & 5.1 \\
& $\boldsymbol \lambda_4$ & 8.4 & 6.1 & 6.1 & 5.8 & 4.5 \\
& $\boldsymbol \lambda_5$ & 8.2 & 6.7 & 5.7 & 6.0 & 5.7 \\
\hline
\end{tabular}
\caption{Simulated type I error probabilities in a one-way layout (left) and in a two-way design for main effect A (right) with sample size factor $K$.}
\label{tab:level1}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{cc|ccccc}
$n$ & \ \ $\lambda$ / \ \ $K$ & 1 & 2 & 3 & 5 & 10 \\ \hline
& $\boldsymbol \lambda_1$ & 10.0 & 7.2 & 6.4 & 6.2 & 6.0 \\
& $\boldsymbol \lambda_2$ & 11.4 & 7.7 & 6.7 & 5.9 & 4.9 \\
$\b n_1$ & $\boldsymbol \lambda_3$ & 13.4 & 8.0 & 6.9 & 5.9 & 5.8 \\
& $\boldsymbol \lambda_4$ & 12.2 & 7.6 & 6.9 & 6.1 & 5.9 \\
& $\boldsymbol \lambda_5$ & 12.1 & 7.5 & 6.7 & 5.9 & 5.6 \\ \hline
& $\boldsymbol \lambda_1$ & 9.5 & 6.6 & 6.1 & 6.0 & 5.0 \\
& $\boldsymbol \lambda_2$ & 10.2 & 7.4 & 6.5 & 6.0 & 5.5 \\
$\b n_2$ & $\boldsymbol \lambda_3$ & 11.6 & 7.8 & 6.6 & 5.6 & 5.7 \\
& $\boldsymbol \lambda_4$ & 10.4 & 7.0 & 6.4 & 5.5 & 6.1 \\
& $\boldsymbol \lambda_5$ & 10.2 & 7.1 & 6.2 & 5.9 & 5.4 \\ \hline
& $\boldsymbol \lambda_1$ & 9.5 & 7.2 & 5.8 & 5.2 & 5.2 \\
& $\boldsymbol \lambda_2$ & 9.6 & 6.8 & 6.3 & 5.4 & 5.6 \\
$\b n_3$ & $\boldsymbol \lambda_3$ & 11.6 & 7.4 & 6.9 & 6.2 & 5.6 \\
& $\boldsymbol \lambda_4$ & 9.9 & 7.4 & 6.2 & 6.5 & 5.4 \\
& $\boldsymbol \lambda_5$ & 9.7 & 7.2 & 6.4 & 5.7 & 5.0 \\
\hline
\end{tabular}
\qquad
\begin{tabular}{cc|ccccc}
$n$ & \ \ $\lambda$ / \ \ $K$ & 1 & 2 & 3 & 5 & 10 \\ \hline
& $\boldsymbol \lambda_1$ & 10.1 & 7.2 & 6.3 & 5.7 & 5.3 \\
& $\boldsymbol \lambda_2$ & 11.2 & 7.2 & 6.2 & 5.9 & 5.1 \\
$\b n_1$ & $\boldsymbol \lambda_3$ & 13.3 & 8.5 & 7.0 & 6.5 & 5.5 \\
& $\boldsymbol \lambda_4$ & 11.6 & 7.8 & 6.6 & 6.1 & 5.3 \\
& $\boldsymbol \lambda_5$ & 11.6 & 7.7 & 6.4 & 5.9 & 5.6 \\ \hline
& $\boldsymbol \lambda_1$ & 9.2 & 6.6 & 5.9 & 6.2 & 5.3 \\
& $\boldsymbol \lambda_2$ & 9.8 & 6.9 & 6.6 & 5.4 & 5.7 \\
$\b n_2$ & $\boldsymbol \lambda_3$ & 11.7 & 7.5 & 6.4 & 5.8 & 5.4 \\
& $\boldsymbol \lambda_4$ & 9.8 & 6.8 & 6.4 & 5.2 & 5.0 \\
& $\boldsymbol \lambda_5$ & 10.8 & 7.1 & 6.2 & 5.8 & 5.6 \\ \hline
& $\boldsymbol \lambda_1$ & 8.3 & 6.6 & 6.3 & 5.2 & 5.1 \\
& $\boldsymbol \lambda_2$ & 9.6 & 6.9 & 5.9 & 5.8 & 5.7 \\
$\b n_3$ & $\boldsymbol \lambda_3$ & 11.2 & 7.6 & 6.4 & 5.3 & 5.8 \\
& $\boldsymbol \lambda_4$ & 10.4 & 6.9 & 5.8 & 5.4 & 4.7 \\
& $\boldsymbol \lambda_5$ & 9.8 & 6.8 & 5.5 & 5.5 & 5.7 \\
\hline
\end{tabular}
\caption{Simulated type I error probabilities in a two-way design for main effect B (left) and for interaction effect AB (right) with sample size factor $K$.}
\label{tab:level2}
\end{table}
\subsection{Behaviour under shift alternatives}
In addition to the simulations of the previous subsection, we also conducted a small power simulation of the above tests. For the alternative hypotheses, we considered a shift model: taking the same six basic survival and censoring functions as in the first set of simulations, we shift all survival and censoring times of the first sample group by $\delta \in \{0.1, 0.2, \dots, 1\}$.
In this way, we maintain the same censoring rates as before and the distance to the null hypotheses is gradually increased: for growing $\delta >0$, we obtain a growing relative effect $p_1 > 0.5$ (one-way) and
$p_{11} > 0.5$ (two-way), respectively. For each of the above considered contrast matrices, $\b C_1, \b C_{2,A}, \b C_{2,B}, \b C_{2,AB}$,
we conducted one set of simulations with different unbalanced sample sizes and censoring rate combinations.
For each set-up, we increased the sample sizes by the factors $K=1,3,5$. The results are displayed in Figure~\ref{fig:power}.
We see that, even for the smallest sample sizes (between 10 and 14), the power of the two-way testing procedures increases to $0.5$ or $0.6$ as the shift parameter approaches 1. For larger sample sizes the theoretically proven consistency is apparent. In comparison, the one-way test has a much higher power:
for the undersized case ($K=1$) it already reaches a power of $0.8$, while for moderate to larger sample sizes the power is almost $1$ for shift parameters $\delta\geq 0.5$. In comparison to the two-way procedures, its superior power comes, however, partially at the price of its pronounced liberality, especially for small sample sizes.
All in all, the simulations confirm that all tests have a satisfactory power with increasing sample size and/or shift parameter while maintaining a reasonable control of the nominal level for sample sizes of $30$ to $42$ already.
\begin{figure}[ht]
\includegraphics[width=\textwidth]{power_plot.pdf}
\caption{Power functions for shift alternatives for different null hypotheses: in the one-way layout (sample sizes $\b{n} = K \cdot \b{n}_2$, censoring rates $\boldsymbol \lambda = \boldsymbol \lambda_5$), in the two-way layout for main effect A ($\b{n} = K \cdot \b{n}_3$, $\boldsymbol \lambda = \boldsymbol \lambda_4$), for main effect B ($\b{n} = K \cdot \b{n}_3$, $\boldsymbol \lambda = \boldsymbol \lambda_2$), and for the interaction effect ($\b{n} = K \cdot \b{n}_2$, $\boldsymbol \lambda= \boldsymbol \lambda_4$), $K = 1,3,5$. The nominal significance level is $\alpha =5\%$ (- - -).}
\label{fig:power}
\end{figure}
\section{Data example}\label{sec:data_Example}
We illustrate the developed theory on a dataset from a colon cancer study \citep{moertel90}.
Considering the patients in \emph{Stage C}, that is, patients with metastases to regional lymph nodes,
the data consist of 929 eligible patients suffering from colon cancer. Survival (measured in days) was the primary endpoint of the study. We focus on the two factors `gender' and `treatment' (with three levels) to obtain a crossed $2\times 3$ survival design which is in line with a setting from our simulation study. In particular, there were
315 patients in the observation group,
310 others were treated with levamisole,
and 304 received levamisole, combined with fluorouracil.
Levamisole was originally used as an anthelmintic drug and fluorouracil (5-FU) is a medicine to treat various types of cancer.
The patients in the study had been randomized into one of these three treatment groups.
Also, there were nearly as many women (445) as men (484) involved in the study.
Figure~\ref{fig:kmes} depicts the Kaplan-Meier estimates of the survival probabilities for each treatment $\times$ sex subgroup.
We refer to \cite{moertel90} for more details about the study.
The dataset is freely accessible via the \texttt{R} command \texttt{data(colonCS)} after having loaded the package \emph{condSURV} \citep{meira16,meira16R}.
The aim is now to investigate the presence of main or interaction effects of treatment and gender. As there are several ties in the data (roughly $16\%$; see Appendix~\ref{app:add_info} for details) and we do not want to impose specific distributional assumptions, we focus on the nonparametric concordance effects. To this end, we first have to choose a proper $\tau$. From our retrospective point of view, the most reasonable choice is found by determining for each group the minimal observed censoring time that exceeds all observed survival times in that group. We call these censoring times ``terminal times''. Then, $\tau$ is set to be the minimal terminal time. In doing so, the group with the minimal terminal time neither benefits nor suffers from having the earliest terminal time when compared to the other groups.
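In R, this choice of $\tau$ can be sketched as follows (the helper name is ours; \texttt{time}, \texttt{status} and \texttt{group} denote the colon cancer data, and we read `all observed survival times' as the uncensored times of the group):
\begin{verbatim}
terminal_time <- function(time, status)      # smallest censoring time that
  min(time[status == 0 & time > max(time[status == 1])])  # exceeds all events
tau <- min(tapply(seq_along(time), group, function(k)
  terminal_time(time[k], status[k])))        # tau = minimal terminal time
\end{verbatim}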
The first block in Table~\ref{tab:dataex_descr} shows the sample sizes of all subgroups.
In the present data example, the minimal terminal time is $\tau = 2173$; see the second block of Table~\ref{tab:dataex_descr}.
In view of the sample sizes and the censoring rates given in the third block of Table~\ref{tab:dataex_descr},
we compare the present dataset with the simulation set-ups in Section~\ref{sec:simus}: a
similarly strong censorship is obtained for $\boldsymbol\lambda_3$ and comparable sample sizes $n \in [100,140]$ for the choice $K=10$. Thus, judging from the rightmost columns of Tables~\ref{tab:level1} and~\ref{tab:level2}, we find it is safe to assume actual type I error probabilities of about $5.1\%$ to $5.9\%$ of the proposed nonparametric one- and two-way survival tests.
\begin{table}[ht]
\centering
\begin{tabular}{l|cc|cc|cc|cc}
& \multicolumn{2}{|c}{sample size} & \multicolumn{2}{|c}{terminal time} & \multicolumn{2}{|c}{censoring rate} & \multicolumn{2}{|c}{effect size}\\
treatment & male & female & male & female & male & female & male & female \\ \hline
observation & 166 & 149 & 2800 & 2562 & 47.6 & 51.0 & 0.475 & 0.483 \\
levamisole & 177 & 133 & 2915 & 2173 & 47.5 & 52.6 & 0.459 & 0.501 \\
levamisole plus fluorouracil & 141 & 163 & 2726 & 2198 & 68.8 & 55.2 & 0.581 & 0.501 \\ \hline
\end{tabular}
\caption{For each subgroup: sample size, smallest censoring time (in days) exceeding the largest survival time, censoring rate (in \%) after taking the minimum of each event time and $\tau = 2173$, and nonparametric concordance effects. Columns: sex; rows: treatment.}
\label{tab:dataex_descr}
\end{table}
\begin{figure}
\includegraphics[width=\textwidth]{KMEs}
\caption{Kaplan-Meier estimators for male and female subgroups, discriminated further according to treatment: obs = observation, lev = levamisole treatment, lev+fluo = combined levamisole and fluorouracil treatment. The end time in the plot is $\tau$ = day 2173.}
\label{fig:kmes}
\end{figure}
We tested the data in one- and two-factorial set-ups and chose $\alpha = 5\%$ as the significance level.
As in the simulation study, we used $B=1,\!999$ bootstrap iterations for each test.
For the tests in the two-factorial model, we considered the null hypotheses corresponding to no main treatment effect, no main effect in sex, and no interaction effect between both.
The test results, by means of p-values, are shown in Table~\ref{tab:dataex_pval}.
\begin{table}[ht]
\centering
\begin{tabular}{lll|c}
Null hypothesis & $H_0^p ( \cdot ) $ & set-up & p-value \\ \hline
Equality of all effects & $H_0^p (\b C_1)$ & one-factorial & $ < 0.001$ \\
No main effect in sex & $H_0^p(\b C_{2,A})$ & two-factorial & $ 0.331 $ \\
No main effect in treatment & $H_0^p(\b C_{2,B})$ & two-factorial & $< 0.001 $ \\
No interaction effect & $H_0^p(\b C_{2,AB})$ & two-factorial & $< 0.001 $ \\ \hline
\end{tabular}
\caption{p-values of different hypothesis tests for the analysis of the colonCS dataset.}
\label{tab:dataex_pval}
\end{table}
We found a significant indication against the equality of all $d=6$ groups (p-value $ < 0.001$) but this difference between groups could not be inferred to result from a difference between the sexes (p-value $= 0.331$).
However, we found a significant treatment effect (p-value $ < 0.001$)
as well as a significant interaction effect between treatment and sex (p-value $ < 0.001$).
The $p$-values in Table~\ref{tab:dataex_pval} have not been adjusted for a type I error multiplicity but it is obvious that the results remain the same after an application of, say, the Bonferroni-Holm procedure.
Indeed, looking at the rightmost block of Table~\ref{tab:dataex_descr},
we agree with the findings of the hypothesis tests:
the gender effect seems to be cancelled out if the treatment groups are combined,
but within the male gender there seems to be a big difference in the concordance effects ($p_{1i_2} \in [0.459, 0.581]$).
Also, the interaction effect is apparent,
as the female groups do not seem to strongly benefit from any treatment ($p_{2i_2} \in [0.483, 0.501]$).
On the other hand, the male groups exhibit a worse than average survival fitness in the observation and the levamisole treatment group ($p_{11}=0.475,\ p_{12}=0.459$)
but a much better than average survival fitness for the combination treatment ($p_{13} = 0.581$). Here the value
$p_{13} = 0.581$ roughly means that a randomly generated observation from this specific group survives longer than
a randomly generated observation from the mean distribution of all groups with probability $58.1\%$. Taking another look at the Kaplan-Meier curves in Figure~\ref{fig:kmes},
we immediately see that our concordance effects and the test outcomes make sense.
We clearly see that there is a big difference in the male survival probabilities
(the combination treatment group is superior to the levamisole treatment group which is in turn superior to the observation group)
but there is not much of a difference between the female groups' survival curves.
Indeed, comparing the Kaplan-Meier curve of the pooled males' survival times with that of the pooled females' times,
we graphically find no evident main gender effect.
The plot of both Kaplan-Meier estimators is shown in Appendix~\ref{app:add_info}.
Finally, we relate our results to the original findings of \cite{moertel90}
whose analyses involve the Cox proportional hazards model and logrank tests.
The authors also detected that ``Therapy with levamisole plus fluorouracil produced an unequivocal advantage over observation'' and that levamisole alone did not produce a detectable effect.
Furthermore, they concluded from an exploratory subset analysis that the ``levamisole-fluorouracil treatment appeared to have the greatest advantage among male patients [...]''.
This is exactly what we confirm in our analysis based on the nonparametric two-factorial tests.
However, \cite{moertel90} neither accounted for the ties present in the data (note that Cox regression postulates continuous outcomes) nor clearly stressed the rather weak effect of the levamisole-fluorouracil treatment for women.
They just state that their ``results show [...] striking contradictions to those of subset analyses reported in the NCCTG study, in which levamisole plus fluorouracil was found to be most effective in reducing the risk of recurrence among female patients [...]'' among other subgroups of patients.
\section{Discussion}\label{sec:dis}
We proposed novel nonparametric inference procedures for the analysis of factorial survival data that may be subject to independent random right-censoring. Critical values are obtained from a multiplier wild bootstrap approach leading to
asymptotically valid tests and confidence regions for meaningful effect parameters. Thereby, the procedures do not require any multiplicative or additive hazard structure nor specific distributional survival and censoring assumptions. In particular, different group distributions are allowed and
ties are accounted for accordingly. Moreover, in contrast to the nonparametric survival procedures of \cite{akritas97} and \cite{akritas2011nonparametric}, our methods are not only geared towards hypothesis testing but also towards uncertainty quantification of the underlying effect estimators. The latter can be used to comprehensibly describe and infer main and interaction effects in general nonparametric factorial survival designs with an arbitrary number of fixed factors.
Together with a $1$-$1$ connection with hazard ratios in proportional two-sample designs \citep{bruckner2017sequential},
this makes the new methods appealing for practical purposes.
To investigate their theoretical properties, we rigorously proved central limit theorems of the underlying statistics and consistency of the corresponding procedures. In addition,
extensive simulations were conducted for one- and two-way designs to also assess their finite sample properties in terms of power and type-$I$-error control. In case of small sample sizes with less than $10$ completely observed subjects per group, they revealed a liberal behaviour, especially for the one-way testing procedure. However, for moderate to larger sample sizes the asymptotics kicked in and the stated theoretical properties were recovered.
Finally, the methods were used to exemplify the analysis of survival data in a study about treatments for colon cancer patients within a two-factorial survival design. As severe ties were present in the data, classical hazard-based methods were not directly applicable. In comparison, our newly proposed nonparametric methods provide a decent alternative for the analysis of such factorial survival designs without postulating any strict assumptions.
To allow for a straightforward application, it is planned to implement the procedure in an easy-to-use R package.
In future research we will consider the case of stochastically ordered subgroups, for which a multiple testing algorithm could be developed with the aim to detect significantly different collections of all subgroups:
subgroups with no significant differences in the nonparametric concordance effects may be combined to facilitate the interpretation of the outcomes
and to ultimately serve for the development of different, more personalized medicines, one for each new subgroup combination.
Moreover, extensions of the current methodology to ordered alternatives or factorial designs obtained via stratified sampling will be part of a practically useful consecutive testing procedure.
\section*{Acknowledgements}
Markus Pauly gratefully acknowledges the support of the German Research Foundation (Deutsche Forschungsgemeinschaft).
\bibliographystyle{plainnat}
\section{Affine equivalence of $\M mn$ and $\M nm$}\label{sec:affine}
In this section we explicitly describe the affine equivalence between the dual \bm surfaces $\M mn$ and $\M nm$
(see Theorem \ref{thm:Hooper}) that we will use to characterize cutting sequences. As usual, we first describe it through the concrete example of $\M 34$ and $\M 43$, then comment on the general case.
The semi-regular polygon presentation of $\M 34$ was shown in Figure \ref{octcyl}, with its horizontal and vertical cylinder decompositions, and the analogous picture for $\M 43$ is shown in Figure \ref{hexcyl}. In this section, we will show that the two surfaces are affinely equivalent.
\begin{figure}[!h]
\centering
\includegraphics[width=1\textwidth]{hexcyl.pdf}
\begin{quote}\caption{Horizontal and vertical cylinder decompositions for $\M{4}{3}$. \label{hexcyl}} \end{quote}
\end{figure}
We exploit the orthogonal presentation we built in \S \ref{hooperdiagrams} (shown for $\M{3}{4}$ in Figure \ref{octort}), which provides a convenient way to visualize the central step of this equivalence. We first discuss how to go from one orthogonal decomposition to the dual one. We then combine this step with flips and shears, see \S\ref{flipandshears}.
\subsection{The dual orthogonal decomposition}
In \S \ref{hoopertobm}, we constructed an orthogonal presentation for $\M 34$, by \emph{cutting} the Hooper diagram vertically and associating to each piece a semi-regular polygon. This orthogonal presentation is in Figure \ref{octort}, and is in the top left of Figure \ref{hexort1} below.
We can consider the same graph $\G{3}{4}$ and decompose it into \emph{horizontal pieces}, instead of using the vertical pieces as we did in \S \ref{hoopertobm}, and then apply the same procedure.
This produces the orthogonal presentation of the \emph{dual} \bm surface $\M 43$, shown on the top right in Figure \ref{hexort1}.
\begin{figure}[!h]
\centering
\includegraphics[width=1\textwidth]{length.pdf}
\begin{quote}\caption{
The top row shows the orthogonal polygonal presentation of $\M 43$ from Figure \ref{octort}, and then a rearrangement of the same pieces, and then a different dissection of the same pieces into the dual orthogonal presentation of $\M 34$, after rescaling the lengths by a diagonal matrix. The second row shows the semi-regular presentations of $\M 43$ and $\M 34$, respectively, which are sheared images of the figures above. It also shows the length comparison for the two surfaces, where the number $a$ is calculated in (\ref{calculatea}). \label{hexort1}} \end{quote}
\end{figure}
\begin{remark}\label{rk:rotating} The same figure (i.e. the dual orthogonal presentation shown on the top right in Figure \ref{hexort1}) could also be obtained by vertically decomposing the graph $\G{4}{3}$ and repeating a procedure similar to the one described in \S \ref{hoopertobm}.
The fact that the surface $\M mn$ and the surface $\M nm$ can each be obtained from either of the respective graphs, $\G{m}{n}$ and $\G{n}{m}$ (decomposing each vertically), or both from the same graph $\G{m}{n}$ (one decomposing vertically, the other horizontally) is coherent with the construction of the graphs $\G{m}{n}$ and $\G{n}{m}$, because it is easy to check that we can obtain one from the other by rotating the graph by $\frac{\pi}{2}$ and changing the directions of the permutation cycles around the black dots.
More precisely, from the point of view of the combinatorics of the surface, the change of direction of the permutations does not change it: since the black dots represent a lateral gluing, the rectangles glued on the right will be glued on the left instead and vice versa, which corresponds on the surface to a vertical flip.
The rotation corresponds to the equivalence between the vertical decomposition of the first graph and the horizontal decomposition of the second one and vice versa.
This can be seen on the polygonal presentation from the fact that we consider diagonals with different slopes if we decompose the graph vertically or horizontally.
\end{remark}
\subsection{Cut and paste and rescaling between orthogonal presentations}\label{sec:cut_paste_rescale}
The procedure described above can be done starting from any graph $\G{m}{n}$: by decomposing it vertically, we obtain an orthogonal presentation of $\M mn$; by decomposing it horizontally, a dual orthogonal presentation of $\M nm$. The two presentations, if we consider only the combinatorics of the surface,
differ only by a cut and paste map.
We can in fact cut along the horizontal and vertical diagonals in the two parallelograms that come from shearing a square and paste them along the side that was a diagonal of one of our basic rectangles, as shown in Figure \ref{hexort1} for the example of $\M 43$ and $\M 34$.
\begin{remark} \label{alternate}
The rectangles containing a diagonal in \mnbms are exactly the complementary ones to the rectangles containing a diagonal in $\M nm$.
This comes from the fact that by construction, only the vertical edges in one case and the horizontal ones in the other case are repeated and that the edges repeated in two different pieces are the ones which have a diagonal.
One can see this in the top line of Figure \ref{hexort1}, and also later we will see them superimposed on the same picture in Figure \ref{coincide}.
\end{remark}
While from the point of view of the combinatorics of the surface the two presentations can be cut and pasted to each other, if we computed the associated widths of cylinders as described in \S \ref{moduli} we would see that the lengths of the sides of the basic rectangles are not the same. Since we want both surfaces to have the \emph{same area} (in particular, we want them to be in the \emph{same Teichm\"uller disc}),
we want to define a \emph{similarity} that allows us to rescale the lengths of the sides of the basic rectangles suitably.
To determine the similarity, let us require that the two surfaces in the orthogonal presentations have the same area, keeping constant the ratios between the side lengths in the semi-regular polygon presentation. Let us recall that the lengths, obtained from the polygonal description, or equivalently from the critical eigenfunctions for the graph, give us the lengths of the sides of the basic rectangles up to similarity.
Let us work out this explicitly in the $\M{3}{4}$ and $\M{4}{3}$ example.
We can assume that the sides of the original octagon all have length $1$.
The area of the polygonal presentation of $\M{3}{4}$ will then be clearly $A_1=2(2+ \sqrt 2)$.
Denoting by $a$ and $a'= \frac{\sqrt 2}{2} a$ the two side lengths of the polygons in $\M{4}{3}$, the area is $A_2= \sqrt 3 (1+\sqrt 2) a^2$.
Requiring them to have the same area, $A_1=A_2$ gives us
\begin{equation}\label{calculatea}
a=\sqrt{\frac{2\sqrt 6}{3}} \quad \text{ and } \quad a'=\frac{\sqrt 2}{2} \sqrt{\frac{2\sqrt 6}{3}}.
\end{equation}
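For completeness, let us spell out the computation behind (\ref{calculatea}): the regular octagon with unit side has area $2(1+\sqrt 2)$ and the two unit squares contribute $2$, so that $A_1 = 2+2(1+\sqrt 2)=2(2+\sqrt 2)$; equating this with $A_2$ gives
\begin{equation*}
a^2 = \frac{2(2+\sqrt 2)}{\sqrt 3 \,(1+\sqrt 2)} = \frac{2\sqrt 2\,(1+\sqrt 2)}{\sqrt 3\, (1+\sqrt 2)} = \frac{2\sqrt 2}{\sqrt 3} = \frac{2\sqrt 6}{3}.
\end{equation*}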
From now on we will assume that $\M{4}{3}$ has these side lengths.
Shearing the surface to make the two cylinder decomposition directions orthogonal gives us basic rectangles with the side lengths marked in Figure \ref{hexort1}.
The transformation that rescales the basic rectangles can be easily deduced from the figure, as the sides on the left and the corresponding sides on the right have the same ratio if we consider the vertical ones and the horizontal ones separately.
The transformation will hence be achieved by a diagonal matrix, with the two ratios as its entries.
We remark that since we required the area to be preserved, the matrix will have determinant $1$.
For our example taking the orthogonal presentation of $\M 43$ to the orthogonal presentation of $\M 34$, this diagonal matrix is
\[
\diag{4}{3} = \matp {\sqrt[4]{\frac 32}} 0 0 {\sqrt[4]{\frac 23}}.
\]
We can extend all this reasoning to any generic \bm surface, and compute in a similar way a similarity that rescales the dual orthogonal presentation of $\M{n}{m}$ so that it has the same area as the orthogonal presentation of $\M{m}{n}$; see (\ref{def:generalmatrices}) for the general form.
\subsection{Flip and shears}\label{flipandshears}
So far we built a sheared copy of $\M{m}{n}$ (its orthogonal presentation) which can be cut and pasted and rescaled (as described in \S \ref{sec:cut_paste_rescale}) to obtain a sheared copy of $\M{n}{m}$ (its dual orthogonal presentation). Thus, one can obtain an affine diffeomorphism between $\M{m}{n}$ and $\M{n}{m}$ through a shear, a cut and paste, a rescaling and another shear. In order to renormalize cutting sequences, we also add a \emph{flip} (the reason will be clear later, see \S \ref{derivation}), to obtain the affine diffeomorphism $\AD m n: \M mn \to \M nm$ defined in formulas below. Let us first describe it in a concrete example.
\begin{example}
The affine diffeomorphism $\AD 43$ which we use to map $\M 43$ to $\M 34$, is realized by a sequence of flips, shears and geodesic flow shown in Figure \ref{hexTOoct}: starting from $\M 43$ we first apply the vertical flip $f$, then the shear $\shear {4}{3}$ to bring it to the orthogonal presentation. By cutting and pasting as explained in Figure \ref{hexort1} and then applying the diagonal matrix $\diag{4}{3}$ computed in the previous section, we obtain the dual orthogonal presentation of $\M 34$. Finally, we shear the dual orthogonal presentation of $\M 34$ to the semi-regular presentation of $\M 34$ by the shear $\shear {3}{4}$.
\begin{figure}[!h]
\centering
\includegraphics[width=1\textwidth]{HexToOct.pdf}
\begin{quote}\caption{The affine diffeomorphism $\AD 43$ from $\M 43$ to $\M 34$, composition of $f$, $\shear {4}{3}$, cut and paste and $\diag{4}{3}$ and finally $\shear{3}{4}$. \label{hexTOoct}} \end{quote}
\end{figure}
\end{example}
To define $\AD m n$ in the \emph{general case}, consider a vertical flip $f$, the shear $\shear{m}{n}$ and the diagonal matrix $\diag{m}{n}$ given by:
\begin{equation} \label{def:generalmatrices}
f = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad
\shear m n = \begin{pmatrix} 1 & \cot \left( \frac {\pi}{n} \right) \\ 0 & 1 \end{pmatrix}, \qquad
\diag{m}{n} = \begin{pmatrix} \sqrt {\frac {\sin \frac {\pi}{n}}{\sin \frac {\pi}{m}}} & 0 \\ 0 & \sqrt {\frac {\sin \frac {\pi}{m}}{\sin \frac {\pi}{n}}} \end{pmatrix}.
\end{equation}
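For example, for $(m,n)=(4,3)$ these matrices specialize to the ones used in \S \ref{sec:cut_paste_rescale}: the shear $\shear{4}{3}$ has entry $\cot \frac{\pi}{3} = \frac{1}{\sqrt 3}$, and since $\sin \frac{\pi}{3}=\frac{\sqrt 3}{2}$ and $\sin \frac{\pi}{4}=\frac{\sqrt 2}{2}$, the diagonal entries of $\diag{4}{3}$ are $\sqrt{\sqrt 3 / \sqrt 2} = \sqrt[4]{\frac 32}$ and $\sqrt[4]{\frac 23}$.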
The affine diffeomorphism $\AD m n$ is obtained by first applying the flip $f$ to $\M mn$ and shearing it by
$\shear{m}{n}$, which produces the orthogonal presentation of $\M mn$. We then compose with the cut and paste map and the similarity given by $\diag{m}{n}$, which maps the orthogonal presentation of $\M mn$ to the dual orthogonal presentation of $\M nm$. Finally, we compose with the other shear $\shear{n}{m}$, which produces the semi-regular presentation of $\M nm$.
Thus, the \emph{linear part} of $\AD{m}{n}$, which we will denote by $\derAD m n $, is given by the following product:
\begin{equation} \label{def:derivativeAD}
\derAD m n = \shear nm \diag{m}{n} \shear{m}{n} f = \begin{pmatrix} -\sqrt {\frac {\sin \frac {\pi}{n}}{\sin \frac {\pi}{m}}} & \frac{\cos\frac{\pi}{m}+\cos \frac{\pi}{n}}{\sqrt{\sin \frac{\pi}{m} \sin \frac{\pi}{n}}} \\ 0 & \sqrt {\frac {\sin \frac {\pi}{m}}{\sin \frac {\pi}{n}}} \end{pmatrix}.
\end{equation}
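For the reader's convenience, the product in \eqref{def:derivativeAD} can be checked step by step:
\begin{equation*}
\shear{m}{n} f = \begin{pmatrix} -1 & \cot \frac{\pi}{n} \\ 0 & 1 \end{pmatrix}, \qquad
\diag{m}{n} \shear{m}{n} f = \begin{pmatrix} -\sqrt {\frac {\sin \frac {\pi}{n}}{\sin \frac {\pi}{m}}} & \sqrt {\frac {\sin \frac {\pi}{n}}{\sin \frac {\pi}{m}}} \cot \frac{\pi}{n} \\ 0 & \sqrt {\frac {\sin \frac {\pi}{m}}{\sin \frac {\pi}{n}}} \end{pmatrix},
\end{equation*}
and the final multiplication by $\shear{n}{m}$ adds $\cot \frac{\pi}{m} \sqrt {\frac {\sin \frac {\pi}{m}}{\sin \frac {\pi}{n}}}$ to the top right entry, so that this entry equals
\begin{equation*}
\frac{\cos \frac{\pi}{n}}{\sqrt{\sin \frac{\pi}{m} \sin \frac{\pi}{n}}} + \frac{\cos \frac{\pi}{m}}{\sqrt{\sin \frac{\pi}{m} \sin \frac{\pi}{n}}} = \frac{\cos\frac{\pi}{m}+\cos \frac{\pi}{n}}{\sqrt{\sin \frac{\pi}{m} \sin \frac{\pi}{n}}}.
\end{equation*}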
The action of $\AD m n$ on directions will be described in \S \ref{sec:2farey}, and the action on cutting sequences in \S \ref{derivation}.
\section{Background} \label{background}
In this section
we present some general background on the theory of translation surfaces, in particular giving the definition of translation surfaces (\S \ref{sec:trans_surf}), of affine deformations and of Veech groups (\S \ref{sec:Veech}) and we briefly list known examples of Veech surfaces (\S \ref{sec:Veech_history}).
\subsection{Translation surfaces and linear trajectories}\label{sec:trans_surf}
The surface $T$ obtained by identifying opposite parallel sides of the square, and the surface $\mathcal O$ obtained by identifying opposite parallel sides of the regular octagon, are examples of translation surfaces. The surface $T$ has genus $1$, and the surface $\mathcal O$ has genus $2$. Whenever we refer to a translation surface $S$, we will have in mind a particular collection of polygons in $\mathbb{R}^2$ with identifications. We define translation surfaces as follows:
\begin{definition}\label{translationsurface}
A \emph{translation surface} is a collection of polygons $P_j$ in $\R^2$, with parallel edges of the same length identified, so that
\begin{itemize}
\item edges are identified by maps that are restrictions of translations,
\item every edge is identified to some other edge, and
\item when two edges are identified, the outward-pointing normals point in opposite directions.
\end{itemize}
If $\sim$ denotes the equivalence relation coming from identification of edges, then we define the surface $S=\bigcup P_j/\sim$.
\end{definition}
Let $\mathcal{S}$ be the set of points corresponding to vertices of polygons, which we call \emph{singular points}.\footnote{Standard usage says that such a point is singular only if the angle around it is greater than $2\pi$, but since all of our vertices satisfy this, we call all such points singular points.}
We will consider geodesics on translation surfaces, which are straight lines: any non-singular point has a neighborhood that is locally isomorphic to the plane, so geodesics locally look like line segments, whose union is a straight line. We call geodesics \emph{linear trajectories}. We consider trajectories that do not hit singular points, which we call \emph{bi-infinite} trajectories.
A trajectory that begins and ends at a singular point is a \emph{saddle connection}. Every periodic trajectory is parallel to a saddle connection, and is contained in a maximal family of parallel periodic trajectories of the same period. This family fills out a \emph{cylinder} bounded by saddle connections.
A \emph{cylinder decomposition} is a partition of the surface into parallel cylinders. The surfaces that we consider, \bm surfaces, have many cylinder decompositions (see Figure \ref{octcyl}). For a given cylinder, we can calculate the \emph{modulus} of the cylinder, which is the ratio of the width (parallel to the cylinder direction) to the height (perpendicular to the cylinder direction). For the cylinder directions we use on \bm surfaces, all of the cylinders have the same modulus (see Theorem \ref{modthm}, proven in \cite{D14}).
\subsection{Affine deformations and affine diffeomorphisms}\label{subsec:affine}
Given $\nu \in GL(2,\mathbb{R})$, we denote by $\nu P \subset \mathbb{R}^2$ the image of $P \subset \mathbb{R}^2$ under the linear map $\nu$. Note that parallel sides in $P$ are mapped to parallel sides in $\nu P$. If $S $ is obtained by gluing the polygons $P_1, \ldots, P_n$, we define a new translation surface that we will denote by $\nu \cdot S$, by gluing the corresponding sides of $\nu P_1, \dots, \nu P_n $.
The map from the surface $S$ to the surface $\nu \cdot S$, which is given by the restriction of the linear map $\nu$ to the polygons $P_1 , \dots , P_n$, will be called the \emph{affine deformation given by $\nu$}.
Let $S$ and $S'$ be translation surfaces. Consider a homeomorphism $\Psi$ from $S$ to $S'$ that takes $\mathcal{S}$ to $\mathcal{S}'$ and is a diffeomorphism outside of $\mathcal{S}$. We can identify the derivative $D\Psi_p$ with an element of $GL(2,\mathbb{R})$. We say that $\Psi$ is an \emph{affine diffeomorphism} if the derivative $D\Psi_p$ does not depend on $p$. In this case we write $D\Psi$ for $D\Psi_p$.
The affine deformation $\Phi_\nu$ from $S$ to $\nu \cdot S$ described above is an example of an affine diffeomorphism. In this case $D\Phi_\nu=\nu$.
We say that $S$ and $S'$ are \emph{affinely equivalent} if there is an affine diffeomorphism $\Psi$ between them.
We say that $S$ and $S'$ are \emph{translation equivalent} if they are affinely equivalent with $D\Psi=Id$. If $S$ is given by identifying sides of polygons $P_j$ and $S'$ is given by identifying sides of polygons $P'_k$ then a translation equivalence $\Upsilon$ from $S$ to $S'$ can be given by a ``\emph{cutting and pasting}'' map. That is to say we can subdivide the polygons $P_j$ into smaller polygons and define a map $\Upsilon$ so that the restriction of $\Upsilon$ to each of these smaller polygons is a translation and the image of $\Upsilon$ is the collection of polygons $P'_k$.
An affine diffeomorphism from $S$ to itself is an \emph{affine automorphism}. The collection of affine diffeomorphisms is a group which we denote by $Af\!f(S)$.
If $S$ is given as a collection of polygons with identifications then we can realize an affine automorphism of $S$ with derivative $\nu$ as a composition of a map $\Psi_\nu:S\to\nu\cdot S$ with a translation equivalence, or cutting and pasting map, $\Upsilon:\nu\cdot S \to S$.
\subsection{The Veech group and Veech surfaces}\label{sec:Veech}
The Veech homomorphism is the homomorphism $\Psi\mapsto D\Psi$ from $Af\!f(S)$ to $GL(2,\R)$. The image of this homomorphism lies in the subgroup of matrices with determinant $\pm1$, which we write as $SL_{\pm}(2,\mathbb{R})$. We call the image of $Af\!f(S)$ under the Veech homomorphism the \emph{Veech group} and denote it by $V(S)$.
It is common to restrict to orientation-preserving affine diffeomorphisms in defining the Veech group, but
since we will make essential use of orientation-reversing affine automorphisms, we will use the term \emph{Veech group} for the larger group $V(S)$. Note that the term \emph{Veech group} is used by some authors to refer to the image of the group of orientation-preserving affine automorphisms in the projective group $PSL(2,\R)$.
A translation surface $S$ is called a \emph{Veech surface} if
$V(S)$ is a lattice in $SL_{\pm}(2, \mathbb{R})$. The torus $T^2=\mathbb{R}^2 / \mathbb{Z}^2$ is an example of a Veech surface whose Veech group is $GL(2, \mathbb{Z})$. Veech proved more generally that all translation surfaces obtained from regular polygons are Veech surfaces.
Veech surfaces satisfy the \emph{Veech dichotomy} (see \cite{Veech}, \cite{Vorobets}) which says that if we consider a direction $\theta$ then one of the following two possibilities holds: either there is a saddle connection in direction $\theta$ and the surface decomposes as a finite union of cylinders each of which is a union of a family of closed geodesics in direction $\theta$, or each trajectory in direction $\theta$ is dense and uniformly distributed.
We will use the word \emph{shear} to denote an affine automorphism of a surface whose derivative is $\sm 1s01$ for some real number $s$. If a translation surface admits a shear, we can decompose it into cylinders of commensurable moduli, so the shear acts as a Dehn twist in each cylinder.
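For instance, the square torus $T$ consists of a single horizontal cylinder of modulus $1$, and the affine automorphism with derivative $\sm 1101$ acts on it as a single Dehn twist.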
\subsection{Known examples of Veech surfaces}\label{sec:Veech_history}
Several families of Veech surfaces are known. A brief history of known Veech surfaces is as follows.
\begin{itemize}
\item The simplest example of a Veech surface is the square, with pairs of parallel sides identified to create the square torus.
\item Covers of the square torus, called \emph{square-tiled surfaces}, are created by identifying opposite parallel edges of a collection of congruent squares. Gutkin and Judge \cite{GJ} showed that square-tiled surfaces are exactly the surfaces whose Veech group is \emph{arithmetic}, i.e. commensurable with $SL(2,\ZZ)$. Subsequently, Hubert and Leli\`evre showed that in genus $2$, all translation surfaces in $H(2)$ that are tiled by a prime number $n > 3$ of squares fall into exactly two Teichm\"uller discs.
\item Veech was the first to define in \cite{Veech} Veech groups and lattice surfaces, and to prove that all regular polygon surfaces are Veech surfaces and satisfy the Veech dichotomy described above.
\item Clayton Ward discovered a new family of Veech surfaces about $10$ years after regular polygonal surfaces \cite{Ward}. These surfaces are created by identifying opposite parallel edges of three polygons: two regular $n$-gons and a regular $2n$-gon (see Figure \ref{m34} for the case $n=4$). Ward's surfaces turn out to be a special case of \bm surfaces, namely those made from exactly $3$ polygons, the $\M3n$ family.
\item Veech surfaces are related to billiards on triangles; we will not describe the correspondence here. Kenyon and Smillie \cite{KS} showed that, other than the triangles corresponding to the examples above, only three other triangles correspond to Veech surfaces. Two of these were already known to Vorobets \cite{Vorobets}.
\item Kariane Calta \cite{calta} and Curt McMullen \cite{Mc} discovered independently infinitely many new Veech surfaces in genus $2$, each of which can be presented as an L-shaped polygon with certain integer measurements in a quadratic vector field.
\item Irene Bouw and Martin M\"oller discovered a new family of Veech curves (i.e. quotients of $SL(2, \mathbb{R})$ by a Veech group) with triangular Veech groups in \cite{BM}. Then Pat Hooper in \cite{Hooper} showed that special points on these Veech curves can be obtained by gluing semi-regular polygons; see the definition given in the next section. In this paper, we will refer to this family of Veech surfaces obtained by gluing semi-regular polygons as \bm surfaces (as has often been done in the previous literature). We remark that Hooper showed that the Teichm\"uller curves associated to his semi-regular polygon surfaces were the same as Bouw and M\"oller's Veech curves in many cases, with a few exceptions. Later, Alex Wright \cite{Wright} showed this equality in all the remaining cases. We remark also that while Ward's surfaces are always glued from exactly $3$ polygons (they correspond as mentioned above to the $\M3n$ \bm family), \bm Veech surfaces
can be obtained by gluing any number $m\geq 2$ of (semi-regular) polygons.
\end{itemize}
Providing a full classification of Veech surfaces is a big open question in Teichm\"uller dynamics, since Veech surfaces indeed correspond to closed $SL(2, \mathbb{R})$-orbits and hence are the smallest orbit closures of the $SL(2, \mathbb{R})$ action on the moduli space of Abelian differentials. Several very recent results are in the direction of proving that there exist only finitely many Veech surfaces
in several strata of translation surfaces, see for example \cite{AN,ANW,BHM,LN, LNW, MaW,NW}.
\subsection{Bouw-M\"oller surfaces: semi-regular polygonal presentation} \label{bmdefsection}
We will now describe the polygonal presentation of the \bm surfaces, given by Pat Hooper \cite{Hooper}. We create the surface $\M m n$ by identifying opposite parallel edges of a collection of $m$ \emph{semi-regular} polygons that each have $2n$ edges.
A \emph{semi-regular polygon} is an equiangular polygon with an even number of edges. Its edges alternate between two different lengths. The two lengths may be equal (in which case it is a regular $2n$-gon), or one of the lengths may be $0$ (in which case it is a regular $n$-gon).
\begin{example} \label{m43ex}
The \bm surface \M 4 3 ($m=4$, $n=3$) is made of $4$ polygons, each of which has $2n=6$ edges (Figure \ref{m43}). From left to right, we call these polygons $P(0), P(1), P(2), P(3)$. Polygon $P(0)$ has edge lengths $0$ and $\sin\pi/4=1/\sqrt{2}$, polygon $P(1)$ has edge lengths $1/\sqrt{2}$ and $\sin(\pi/2)=1$, polygon $P(2)$ has edge lengths $1$ and $1/\sqrt{2}$, and polygon $P(3)$ has edge lengths $1/\sqrt{2}$ and $0$.
\begin{figure}[!h]
\centering
\includegraphics[width=350pt]{m43.png}
\begin{quote}\caption{The \bm surface $\M 43$ with \mbox{$m=4$}, \mbox{$n=3$} is made from two equilateral triangles and two semi-regular hexagons. Edges with the same label are identified. \label{m43}} \end{quote}
\end{figure}
\end{example}
Definition \ref{semiregular} gives an explicit definition of an equiangular $2n$-gon whose edge lengths alternate between $a$ and $b$:
\begin{definition} \label{semiregular}
Let $P_n (a,b)$ be the polygon whose edge vectors are given by:
\[
\mathbf{v}_i =
\begin{cases}
a \ [\cos \frac{i\pi}{n},\sin \frac{i\pi}{n}] & \text{if }i\text{ is even} \\
b\ [\cos \frac{i\pi}{n}, \sin \frac{i\pi}{n}] & \text{if }i\text{ is odd}
\end{cases}
\]
for $i=0,\ldots,2n-1$. The edges whose edge vectors are $\mathbf{v_i}$ for $i$ even are called \emph{even edges}. The remaining edges are called \emph{odd edges}. We restrict to the case where at least one of $a$ or $b$ is nonzero. If $a$ or $b$ is zero, $P_n (a,b)$ degenerates to a regular $n$-gon.
\end{definition}
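For instance, for $n=3$ the edge vectors of $P_3(a,b)$ are
\begin{equation*}
\mathbf{v}_0 = a \,[1,0], \quad \mathbf{v}_1 = b\, [\tfrac 12, \tfrac{\sqrt 3}{2}], \quad \mathbf{v}_2 = a\, [-\tfrac 12, \tfrac{\sqrt 3}{2}], \quad \mathbf{v}_3 = b\, [-1,0], \quad \mathbf{v}_4 = a \,[-\tfrac 12, -\tfrac{\sqrt 3}{2}], \quad \mathbf{v}_5 = b\, [\tfrac 12, -\tfrac{\sqrt 3}{2}],
\end{equation*}
which sum to zero, so the hexagon closes up; for $a=b$ it is a regular hexagon, while for $b=0$ it degenerates to an equilateral triangle of side $a$.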
In creating polygons for a \bm surface, we carefully choose the edge lengths so that the resulting surface will be a Veech surface (see \S \ref{sec:Veech}).
\begin{definition}\label{pk} Given integers $m$ and $n$ with at least one of $m$ and $n$ nonzero, we define the polygons $P(0), \ldots, P(m-1)$ as follows.
\[
P(k) =
\begin{cases}
P_n \left(\sin \frac{(k+1)\pi}{m}, \sin \frac{k\pi}{m}\right) & \text{if }m\text{ is odd} \\
P_n \left(\sin \frac{k\pi}{m}, \sin \frac{(k+1)\pi}{m}\right) & \text{if }m \text{ is even and } k \text{ is even} \\
P_n \left(\sin \frac{(k+1)\pi}{m}, \sin \frac{k\pi}{m}\right) & \text{if }m \text{ is even and } k \text{ is odd.} \\
\end{cases}
\]
\end{definition}
An example of computing these edge lengths was given in Example \ref{m43ex}.
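For instance, for $m=4$ (even) and $k=1$ (odd), Definition \ref{pk} gives $P(1)=P_3\left(\sin \frac{2\pi}{4}, \sin \frac{\pi}{4}\right)=P_3\left(1, \frac{1}{\sqrt 2}\right)$, in agreement with the edge lengths listed there.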
\begin{remark}
$P(0)$ and $P(m-1)$ are always regular $n$-gons, because $\sin\frac{0\pi}m=0$ and $\sin\frac{(m-1+1)\pi}m=0$. If $m$ is odd, the central $2n$-gon is regular, because $\sin (k\pi /m) = \sin ((k+1)\pi/m)$ for $k=(m-1)/2$. Figure \ref{m34} shows both of these in $\M 34$.
\end{remark}
\begin{figure}[!h]
\centering
\includegraphics[width=290pt]{m34.png}
\begin{quote}\caption{The \bm surface $\M 34$ with \mbox{$m=3$}, \mbox{$n=4$} is made from two squares and a regular octagon. Edges with the same label are identified. \label{m34}} \end{quote}
\end{figure}
Finally, we create a \bm surface by identifying opposite parallel edges of $m$ semi-regular polygons $P(0),\ldots,P(m-1)$. For each polygon in the surface, $n$ of its edges (either the even-numbered edges or the odd-numbered edges) are glued to the opposite parallel edges of the polygon on its left, and the remaining $n$ edges are glued to the opposite parallel edges of the polygon on its right. The only exceptions are the polygons on each end, which only have $n$ edges, and these edges are glued to the opposite parallel edges of the adjacent polygon. These edge identifications are shown in Figures \ref{m43} and \ref{m34}.
We now give the edge identifications explicitly:
\begin{definition}\label{bmdefinition}
The \bm surface $\M mn$ is made by identifying the edges of the $m$ semi-regular polygons $P(0),\ldots,P(m-1)$ from Definition \ref{pk}.
We form a surface by identifying the edges of the polygon in pairs. For $k$ odd, we identify the even edges of $P(k)$ with the opposite edge of $P(k+1)$, and identify the odd edges of $P(k)$ with the opposite edge of $P(k-1)$. The cases in Definition \ref{pk} of $P(k)$ are chosen so that this gluing makes sense.
\end{definition}
\begin{theorem}[\cite{D14}, Lemma 6.6]\label{modthm} Every cylinder of the \bm surface $\M mn$ in direction $k\pi/n$ has the same modulus. The modulus of each such cylinder is \mbox{$2\cot\pi/n +2 \frac{\cos \pi/m}{\sin \pi/n}$}.
\end{theorem}
We will use this fact extensively, because it means that one element of the Veech group of $\M mn$ is a \emph{shear}, a parabolic element whose derivative is $\sm 1 s 01$ for some real number $s$. For $\M mn$, \mbox{$s=2\cot\pi/n +2 \frac{\cos \pi/m}{\sin \pi/n}$} as above.
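For instance, for $\M 43$ (so $m=4$, $n=3$) one gets $s=2\cot \frac{\pi}{3} + 2\, \frac{\cos \frac{\pi}{4}}{\sin \frac{\pi}{3}} = \frac{2}{\sqrt 3} + \frac{2 \sqrt 2}{\sqrt 3} = \frac{2(1+\sqrt 2)}{\sqrt 3}$.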
\begin{theorem} [Hooper]\label{thm:Hooper}
$\M mn$ and $\M nm$ are affinely equivalent.
\end{theorem}
This means that $\M mn$ can be transformed by an affine map (a shear plus a dilation) and then cut and reassembled into $\M nm$.
\begin{example} In Figure \ref{shear-equiv} it is for example shown how the surface in Figure \ref{m43} can be cut and reassembled into a sheared version of the surface in Figure \ref{m34}.
\begin{figure}[!h]
\centering
\includegraphics[width=400pt]{shear-equiv.png}
\begin{quote}\caption{The bold outline on the left shows how the left surface can be cut and reassembled into a sheared version of the right surface, and vice-versa. \label{shear-equiv}} \end{quote}
\end{figure}
\end{example}
We will use this affine equivalence extensively, since as already mentioned in the introduction our derivation and characterization of cutting sequences exploit the relation between cutting sequences on $\M mn$ and $\M nm$. The affine diffeomorphism between $\M mn$ and $\M nm$ that we use for derivation (which also includes a flip, since this allows a simpler description of cutting sequences) is described in \S \ref{sec:affine}.
\subsection{The Veech group of \bm surfaces} \label{veechofbm}
The Veech group of $\M mn$, as well as the Veech group of $\M nm$, is isomorphic to the $(m,n,\infty)$ triangle group. The only exception to this is when $m$ and $n$ are both even, in which case the Veech group of $\M mn$ has index $2$ in the $(m,n,\infty)$ triangle group, see \cite{Hooper}. Thus, the Veech group contains two elliptic elements of order $2m$ and $2n$ respectively. One can take as generators one of these two elements and a shear (or a ``flip and shear'') automorphism from the $(m,n)$ surface to itself. In the $(n,m)$ polygon presentation of $\M mn$ the elliptic element of order $2m$ is a rotation by $\pi/m$ (while in the $(m,n)$ polygon decomposition the elliptic element is a rotation by $\pi/n$). Thus, the elliptic element of order $2n$ acting on $\M mn$ can be obtained by conjugating the rotation by $\pi/n$ on the dual surface $\M nm$ by the affine diffeomorphism between $\M mn$ and $\M nm$ given by Theorem \ref{thm:Hooper}. In Section \ref{teich} we describe the action of these Veech group elements on a tessellation of the hyperbolic plane by $(m,n,\infty)$ triangles shown in Figure \ref{hypdisk}.
\section{Characterization of cutting sequences} \label{sec:characterization}
In this section we will give a complete characterization of (the closure of) \bm cutting sequences in the set of all bi-infinite sequences in the alphabet $\LL mn $.
As in \cite{SU} we cannot give a straightforward characterization of the cutting sequences as the closure of infinitely derivable sequences.
In fact, such a characterization holds only for the case of the Sturmian sequences on the square, presented in \cite{Series}. As in the regular $2n$-gons case for $n \geq 3$, we can still give a full characterization and we will present it in two different ways.
The first way, as in \cite{SU}, consists in introducing the so-called \emph{generation rules}, a combinatorial operation on sequences inverting the derivation previously defined. These are defined in \S \ref{sec:generation}, where it is shown that they allow us to \emph{invert} derivation. In \S \ref{sec:generation_characterization} we then state and prove a characterization using generation (see Theorem \ref{thm:generation_characterization}).
The second way, presented in \S \ref{sec:substitutionscharacterization}, will be obtained from the previous one by replacing the generation rules with the better known substitutions, in order to obtain an $\mathcal{S}$-adic presentation, i.e. Theorem \ref{thm:substitutionscharacterization}.
\subsection{Generation as an inverse to derivation}\label{sec:generation}
In this section we define \emph{generation operators},\footnote{The name was introduced in \cite{SU}, where this type of operator was also used to invert derivation.} which will allow us to \emph{invert derivation}. Generations are combinatorial operations on sequences which, like derivation, act by interpolating a sequence with new edge labels and dropping the previous ones. They will be used to produce sequences which, derived, give back the original sequence.
Let us recall that in our renormalization procedure we always compose the derivation operators (alternatively $D m n$ and $D n m$) with a normalization operator ($\Norm nm $ or $\Norm mn$ respectively) which maps sequences admissible in other sectors back to sequences admissible in the standard sectors (on which the derivation operators are defined). Thus, we want more precisely to define operators that invert the action of the composition $\Norm nm D m n$ of derivation and normalization on sequences in $\M mn$.
It turns out that the operator $\Norm nm D m n$ cannot be inverted uniquely. This is because, as we saw in \S \ref{derivation}, under the action of $\AD m n$, one of our sectors in $\M mn$ opens up to a whole range of sectors in $\M nm$ (more precisely $m-1$ sectors, as many as the sectors in the complement of the standard one for $\M nm$).
Then by normalizing, we bring each of these sectors back to the standard one.
As a consequence, when we have the cutting sequence of a trajectory in $\Sec 0nm$ for $\M nm $, there exist $m-1$ cutting sequences of trajectories in the standard sector for $\M mn$ which, derived and normalized, produce the same cutting sequence. To uniquely determine an inverse, we have to specify the sector in which the derived sequence is admissible before normalizing.
We will hence define $m-1$ generations $\gen i n m $ for $1\leq i \leq m-1$, each of which inverts $\Norm nm D m n$: each $\gen i n m $ will send an admissible sequence $w$ in $\T 0 nm$
to an admissible sequence $\gen i n m w $ in $\T 0 mn$ which, when derived and normalized, gives back the sequence $w$. Given how we defined derivation and normalization, the derived and normalized cutting sequence of a trajectory in $\M{m}{n}$ is the cutting sequence of a trajectory in $\M{n}{m}$. Generations $\gen i n m $ will act in the same way: applying them to a cutting sequence of a trajectory in the standard sector on $\M{n}{m}$ will give a cutting sequence in $\M{m}{n}$.
We will first define operators $\mathfrak g_i^0$, for $1\leq i \leq m-1$ (which invert $D m n $), and then use them to define $\gen i n m$ (which inverts $\Norm nm D m n $). The operator $\mathfrak g_i^0$, applied to the sequence $w$ of a trajectory in $\Sec i nm$ in $\M{n}{m}$, will produce a sequence $W=\mathfrak g_i^0 w$ admissible in transition diagram $\T 0 m n$, and such that $D m n W=w$.
As usual let us first start with defining generations for the $\M{3}{4}$ case.
First we will define $\mathfrak g_i^0$.
Such an operator, applied to the cutting sequence of a trajectory in $\M{3}{4}$ admissible in the diagram $\T i 3 4$, gives a sequence admissible in diagram $\T 0 4 3$, for trajectories in $\M{4}{3}$.
In the proof of Proposition \ref{inverse}, we will explain how to construct the diagram from which we deduce such an operator.
For the general case, the following definition will remain the same, but the diagrams in Figures \ref{gendiagram-43} and \ref{gendiagram-34} will obviously be different, constructed in the way described in the proof of the Proposition.
\begin{definition}
Let $w$ be a sequence admissible in diagram $\T k n m$.
Then $\mathfrak g_k^0 w$ is the sequence obtained by following the path defined by $w$ in $\T k n m$, interpolating the elements of $w$ with the labels on the arrows of a diagram analogous to the ones in Figure \ref{gendiagram-43} (which we will call \emph{generation diagrams} and denote by $\GD k n m$), and dropping the previous ones.
For example, if our sequence contains $w= \dots n_1 n_2 n_3 \dots$, and the arrow from $n_1$ to $n_2$ in diagram $\GD k n m$ has the label $w_{n_1 n_2}^k$, while the arrow from $n_2$ to $n_3$ in $\GD k n m$ has the label $w_{n_2 n_3}^k$, then $\mathfrak g_k^0 w= \dots w^k_{n_1 n_2} w^k_{n_2 n_3} \dots$.
\end{definition}
\begin{figure}[!h]
\centering
\includegraphics[width=280pt]{gendiagram-43.png}
\begin{quote}
\caption{Generation diagrams describing the operator $\mathfrak g_i^0$ for $\M 43$ \label{gendiagram-43}} \end{quote}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.8\textwidth]{gen-op.png}
\begin{quote}
\caption{Generation diagrams describing the generation operator $\mathfrak g_i^0$ for $\M 34$} \label{gendiagram-34} \end{quote}
\end{figure}
We then define the generation operator as follows:
\begin{definition} \label{genoperator}
The generation operator $\gen i n m$ is defined by
\[
\gen i n m w= \mathfrak g_i^0 (\perm i n m)^{-1} w,
\]
where $w$ is a sequence admissible in $\T 0 n m$ and $\perm i n m$ is the $i^{\text{th}}$ isometry permutation in $\M n m$.
\end{definition}
As we said, this operator inverts the derivation and normalization operation on sequences. More specifically, we have the following:
\begin{proposition}[Generation as an inverse to derivation.]\label{inverse}
Let $w$ be a sequence admissible in diagram $\T 0 n m$.
Then for every $1\leq i \leq m-1$, the sequence $W=\gen i n m w$ is admissible in diagram $\T 0 m n$ and satisfies the equation $\Norm nm D m n W=w$. Moreover, the derivative $D mn W$ (before normalization) is a sequence admissible in diagram $\T i n m$.
\end{proposition}
In order to prove the Proposition, the following Lemma will be crucial.
The proof of the Proposition relies in fact on the idea that the diagrams in Figures \ref{gendiagram-43} and \ref{gendiagram-34} are constructed exactly in such a way as to invert the derivation operation.
This Lemma is useful exactly in this sense and holds for generic \bm surfaces.
\begin{lemma} \label{uniquepath}
Let us consider the derivation diagram for $\M{m}{n}$,
as in Figure \ref{hextooct}.
Given two green edge labels, which are on the arrows of the derivation diagram, if there is a way to get from one to the other passing through edges and vertices of the transition diagrams without crossing another green edge label, then it is unique.
In other words, we cannot always go from a green edge label to another green edge label following a path satisfying such conditions, but if it is possible, then there exists only one such path.
\end{lemma}
\begin{proof}
The condition of not passing through another green edge label implies that we can move either in the same column, upwards or downwards, or on the next one, on the left or on the right, because we have a green edge label on each horizontal arrow.
Starting from a green edge label, unless we are on a boundary edge, we have a first choice to make: which of the two arrows carrying that edge label to follow.
The choice depends on whether we move upwards or downwards if we stay in the same column, or left or right if we change column.
Let us now assume that we want to reach an edge label on the same column.
For the structure of the derivation diagram, we know that if we follow one of the arrows we will get to a red vertex whose column has arrows going upwards, while if we choose the other one we will get to arrows going downwards.
According to whether the green edge label we want to reach is higher or lower with respect to the starting one, we will choose which way to go.
Clearly, in the opposite case, we would be forced to the wrong side and would never reach the targeted green edge label.
At that point, we follow arrows upwards or downwards until reaching the level of the green edge label where we want to arrive.
In fact, trying to change column again would make us cross a new green edge label, which we can afford to do only once we reach the level of the edge label we want.
After stopping at the right red vertex, we have one more half-arrow to move back to the column of the green edge label, reaching the one we were targeting.
This is obviously possible, because the horizontal arrows are always double.
Consider now the case in which we want to reach a green edge label on a column adjacent to the previous one.
In this case, the first choice of the edge to follow for the first half arrow depends on whether the adjacent column is the right or the left one.
Clearly, going in the other direction would make it impossible to reach the green edge label we want.
As before, at that point, we can only follow the arrows going upwards or downwards, according to the parity of the column.
As we said, we are assuming that there is a path connecting the two edge labels.
In fact, if for example the arrows are going downwards and the edge label is on a higher row, then such a path does not exist, but this is a case we are not considering.
From what we said, it is clear that at each step the choice made is the only possible one to reach the targeted edge label.
\end{proof}
The path that was found in the proof of Lemma \ref{uniquepath} will be used again later in the proof of Proposition \ref{inverse}.
We also prove the following Lemma, which will be used later in the proof of Theorem \ref{thm:substitutionscharacterization}.
\begin{lemma}\label{lemma:uniqueprecedent}
For any vertex $n_1$ of any generation diagram $\GD i mn$, the labels of all the arrows of $\GD i mn$ which end in vertex $ n_1$ end with the same edge label of the alphabet $\LL nm$.
\end{lemma}
The proof of the Lemma is given below. For example, in Figure \ref{gendiagram-43}, one can see that the three arrows which end in {\rd 9} carry the labels {\gr 6} and {\gr 36}, which all end with {\gr 6}. In this case one can verify by inspection of $\GD i 43$ that the same is true for any other vertex.
\begin{definition}[Unique precedent]
For any ${ n_1} \in \LL mn$, the edge label of $\LL nm$ given by Lemma \ref{lemma:uniqueprecedent} (i.e. the edge label of $\LL nm$ with which all labels of arrows ending at vertex ${ n_1}$ end) will be called the \emph{unique precedent} of ${n_1}$.
\end{definition}
\begin{proof}[Proof of Lemma \ref{lemma:uniqueprecedent}]
The proof uses the stairs configuration introduced in \S \ref{stairsandhats}.
Let us first recall that (by definition of generation diagrams and Proposition \ref{inverse}) given a path in $\GD i mn$, the generated sequences obtained by reading off the labels of arrows of $\GD i mn$ along the path are by construction admissible sequences in $\T 0 nm$ that, derived, give the sequence of labels of vertices crossed by the path. Furthermore, each label of an arrow on $\GD i mn$ is a cutting sequence of a piece of a trajectory in the standard sector $\Sec 0 nm$ that crosses the sequence of sides of $\M nm$ described by the label string. This is because, when following on $\GD i mn$ a path coming from a cutting sequence, we produce a cutting sequence, with the labels of the arrows crossed as subsequences.
Such a label string will hence be part of a cutting sequence in sector $\Sec 0 nm$.
The label of the incoming vertex is an edge label of the flip and sheared copy of $\M mn$ that is hit next by the same trajectory. If we apply a shear to pass to the orthogonal presentation, we are considering trajectories with slope in the first quadrant, and the labels of an arrow describe the sequence of {\rd negative} diagonals of basic rectangles hit (see for example Figure \ref{coincide}), while the vertex label is the label of a {\gr positive} diagonal.
Without loss of generality, we can assume that the edge label of $\LL mn$ that we are considering is the label of the {\gr positive} diagonal $\gr b$ in the stair configuration in Figure \ref{stair}, since
recalling Convention \ref{convention:ordered_sides}, vertical or horizontal sides can be considered as degenerate diagonals in a degenerate stair (corresponding to a degenerate hat in the augmented Hooper diagram).
One can then see from Figure \ref{stair} that any trajectory with slope in the first quadrant which hits the {\gr positive} diagonal labeled by {\gr b} in Figure \ref{stair} last hits the {\rd negative} diagonal labeled by {\rd a}. This hence shows that all labels of arrows in $\GD i mn$ which end in the vertex corresponding to the side {\gr b} end with the edge label $\rd a$ of $\LL nm$.
\end{proof}
We are now ready to prove Proposition \ref{inverse} and at the same time explain how to construct in general the generation diagrams for the operator $\mathfrak g_i^0$.
\begin{proof}[Proof of Proposition \ref{inverse}]
As we explained in \S \ref{derivation}, the operation of derivation consists of taking a cutting sequence in $\M{m}{n}$ and interpolating pairs of edge labels with new ones, then dropping the previous ones.
In this way we get a cutting sequence in $\M{n}{m}$.
To invert it, given a cutting sequence in $\M{n}{m}$, we want to recover the previous edge labels to appear in the new ones.
As we saw, derivation may or may not insert an edge label between two original ones, and when it does, it inserts exactly one.
This implies that generation will add edge labels (one or a string) between each and every pair of edge labels of the new sequence.
For clarity, we first explain how to recover the edge labels to interpolate through the example of $\M{3}{4}$. The proof for general $(m,n)$ follows verbatim the proof in this special case.
In \S \ref{derivation}, we started from a sequence in the standard sector of $\M{4}{3}$, colored in red in the figures, and got a sequence in $\M 3 4$, colored in green in the figures.
The method consisted in interpolating the red edge labels with the green ones, following the diagram in Figure \ref{hextooct} (see also Figure \ref{34auxtd} in \S \ref{transitiondiagrams}).
\begin{figure}[!h]
\centering
\includegraphics[width=100pt]{hextooct.png}
\begin{quote}\caption{The derivation diagram for $\M{4}{3}$. \label{hextooct}} \end{quote}
\end{figure}
Let us now assume that we have a sequence $w$ in green edge labels.
It will be a path in one of the transition diagrams $\T i 3 4$.
Since we saw that a sequence in the standard sector gives a sequence admissible in one of the other ones for the other surface, $i$ will be between 1 and $n-1$, so here $i=1,2,3$.
Let us define $k$ such that $w$ is admissible in $\T k 3 4$.
Each pair of green edge labels is hence a transition in $\T k 3 4$.
For each of these pairs, we want to recover the path in the diagram in Figure \ref{hextooct} (i.e. the cutting sequence in red edge labels) it can come from.
This means that we have two green edge labels and we want to find a path leading from one to the other through edges and vertices of our derivation diagram for $\M{4}{3}$.
Since we are considering a transition in $w$, we want a path which does not intersect other green edge labels in the middle, or we would have the corresponding transition instead.
A path connecting two green edge labels admitted in $\T k 3 4$ will always exist, because derivation maps the standard sector surjectively onto all the others.
These are exactly the hypotheses of Lemma \ref{uniquepath}, so we can find a unique such path.
We then record the red edge labels crossed by such a path on the arrow in $\T k 3 4$ corresponding to the transition that we are considering.
Such diagrams with labels are called $\GD k nm$, and in the case $\M{3}{4}$ this procedure gives the diagrams in Figure \ref{gendiagram-34}.
By construction, each of these strings that we add on the arrows represents the unique string the transition in $w$ can come from.
Hence, it creates an operator that inverts derivation.
The same procedure can be applied to a generic \bm surface, as we saw that in all cases the two transition diagrams for the $(m,n)$ and $(n,m)$ surfaces are combined together forming the derivation diagram we described in \S \ref{transitiondiagrams}.
\end{proof}
\subsection{Characterization via generation operators\label{sec:generation_characterization}}
The following theorem gives a characterization of the closure of the set of cutting sequences.
\begin{theorem}[Characterization of \bm cutting sequences via generation]\label{thm:generation_characterization}
A word $w$ is in the closure of the set of cutting sequences of bi-infinite linear trajectories on $\M mn$ if and only if there exist $0\leq b_0 \leq 2n-1$
and two sequences $(a_k)_k \in \{1, \dots , m-1\}^\mathbb{N}$ and $(b_k)_k \in \{1, \dots , n-1\}^\mathbb{N}$
such that
\begin{equation} \label{infiniteintersection}
w \in \mathscr{G}\left(b_0, (a_1,b_1)\dots,(a_k,b_k) \right) := \bigcap_{k\in\mathbb{N}} (\perm {b_0}{m} n)^{-1}(\gen {a_1} n m \gen {b_1} mn)( \gen {a_2} n m \gen {b_2} mn ) \ldots ( \gen {a_k} n m \gen {b_k} m n) \textrm{Ad}_{m,n},
\end{equation}
where $\textrm{Ad}_{m,n}$ denotes the set of words in $\LL mn ^\mathbb{Z} $ which are admissible in $\T 0m n $.
Thus, a word $w$ belongs to the closure of the set of cutting sequences if and only if
\begin{equation} \label{unionfiniteintersection}
w \in \bigcup_{0 \leq b_0 \leq 2n-1} \, \, \bigcap_{k\in\mathbb{N}} \, \, \bigcup_{\substack{ 1\leq a_k \leq m-1 \\ 1\leq b_k \leq n-1} } \mathscr{G}\left(b_0,(a_1,b_1)\dots,(a_k,b_k) \right).
\end{equation}
\end{theorem}
\begin{remark}
As we will show in the proof, the sequences $(a_k)_k \in \{1, \dots , m-1\}^\mathbb{N}$ and $(b_k)_k \in \{1, \dots , n-1\}^\mathbb{N}$ in Theorem \ref{thm:generation_characterization} will be given by the itinerary under the \bm Farey map $\FF mn$ of the direction $\theta$ of the trajectory of which $w$ is a cutting sequence.
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm:generation_characterization}]
Let us denote by $\I \subset {\LL mn}^{\mathbb{Z}}$ the union of intersections in \eqref{unionfiniteintersection}, by $CS$ the set of cutting sequences of bi-infinite linear trajectories on $\M mn$ and by $\overline{CS}$ its closure in ${\LL mn}^{\mathbb{Z}}$.
In order to show that $\overline{CS} = \I$, one has to show that $CS \subset \I$, that $\I$ is closed and that $CS$ is dense in $\I$.
\smallskip
{\it Step 1 ($CS \subset \I$)}
Let $w$ be the cutting sequence of a trajectory $\tau$ in direction $\theta$. Let $b_0$ be such that $\theta \in \Sec {b_0}{m}{n} $ and let $(a_k)_k $ and $(b_k)_k $ be such that $\left( (a_k, b_k) \right)_k$ is the itinerary of $\theta_0:=\refl {b_0} m n [\theta] \in \Sec 0 mn$ under $\FF mn$ (where $\refl {b_0} m n [\theta] $ denotes the action of $\refl {b_0} m n $ on directions, see the notation introduced in \S \ref{sec:projective}).
Let $(w^k)_k$ be the sequence of derivatives, see Definition \ref{def:derivatives}. From Proposition \ref{thm:derivable}, it follows that $w^k$ is the cutting sequence of a trajectory $\tau^k$. Furthermore, the sequence $(\tau^k)_k$ is obtained by the following recursive definition (which gives the geometric counterpart of the renormalization process on cutting sequences obtained by alternately deriving and normalizing):
\begin{equation} \label{trajectories_rec}
\tau^0:=\tau, \qquad \tau^{k+1} := \begin{cases} \AD m n (\refl {b_j} mn) \tau^k , & k=2j \text{ even} ; \\ \AD nm (\refl {a_j} nm) \tau^k, & k=2j-1 \text{ odd} . \end{cases}
\end{equation}
The direction of the trajectory $\tau^k$ belongs to $\Sec {a_j}{n}{m}$ for $k=2j-1$ odd and to $\Sec {b_j}{m}{n}$ for $k=2j$ even, as shown in the proof of Proposition \ref{prop:itineraries_vs_sectors}.
Let $(u^k)_k$ be the sequence of \emph{normalized derivatives}, given by
\begin{equation*}
u^k := \begin{cases} \perm {a_j} nm w^k , & k=2j-1 \, \text{ odd} ; \\ \perm {b_j} mn w^k, & k=2j \, \text{ even} .
\end{cases}
\end{equation*}
Remark that when $w$ is non-periodic, this could be simply written as $u^k := \Norm mn w^k$ or $u^k := \Norm nm w^k$ according to the parity of $k$, but for periodic sequences the operators $\Norm mn$ and $\Norm nm$ are a priori not defined (since a derivative could possibly be admissible in more than one sector), so we are using the knowledge of the direction of the associated trajectory to define normalizations.
We will then show that for any $k\geq 0$:
\begin{equation} \label{inductionass}
w= (\perm {b_0} m n)^{-1} (\gen {a_1} n m \gen {b_1} mn)( \gen {a_2} n m \gen {b_2} mn ) \ldots ( \gen {a_k} n m \gen {b_k} m n) u^{2k}.
\end{equation}
This will show that $w$ belongs to the intersections \eqref{infiniteintersection} and hence that $CS \subset \I$.
First let us remark that by replacing $w$ with $\Norm mn w= \perm {b_0} m n w$ we can assume without loss of generality that $b_0=0$, so that $\perm {b_0} m n$ is the identity. Notice also that by Proposition \ref{inverse} $w^k$ is the cutting sequence of a trajectory $\tau^k$ whose direction, by definition of the Farey map and its itinerary, is in $\Sec {a_j}{n}{ m}$ for $k=2j-1$ odd and in $\Sec {b_j}{ m}{ n}$ for $k=2j$ even. Thus, if $k=2j-1$ is odd (respectively $k=2j$ is even), $u^{k}= \Norm nm D m n u^{k-1}$ (respectively $ u^{k}=\Norm mn D nm u^{k-1}$) and $w^{k}$ is the cutting sequence of a trajectory in sector $\Sec {a_j} nm$ (respectively $\Sec {b_j} mn$). By Proposition \ref{inverse}, $u^{k-1}$ is hence equal to $ \gen {a_j} n m u^{k}$ (respectively $ \gen {b_j} m n u^{k}$). Thus, if by the inductive assumption we have \eqref{inductionass} for $k-1$, we can write $u^{2(k-1)} = (\gen {a_k} n m \gen {b_k} m n) u^{2k}$ and get \eqref{inductionass} for $k$. This concludes the proof of this step.
\smallskip
{\it Step 2 ($\I$ is closed)}
$\I$ is given by \eqref{unionfiniteintersection} as a union of countable intersections of finite unions.
Since the set $\mathrm{Ad}_{m,n}$ of admissible words in $\T 0 m n$ is a subshift of finite type, $\mathrm{Ad}_{m,n}$ is closed (see for example Chapter $6$ of \cite{LM:sym}). Moreover, one can check that the composition $\gen {i}{n}{m} \gen {j}{m}{n}$ is an operator from $\LL m n^\mathbb{Z}$ back to itself which is Lipschitz, since if $u, v \in \mathrm{Ad}_{m,n}$ have a common subword, the interpolated words $\gen {i}{n}{m} \gen {j}{m}{n}{u}$ and $\gen {i}{n}{m} \gen {j}{m}{n}{v}$ have an even longer common subword. Thus, the sets $\mathscr{G}\left((a_1,b_1)\dots,(a_k,b_k) \right)$ in (\ref{unionfiniteintersection}) are closed, since they are images of closed subsets of the compact space ${\LL mn}^{\mathbb{Z}}$ under continuous maps.
Since in (\ref{unionfiniteintersection}), for each $k$, one considers a finite union of closed sets,
$\I$ is a finite union of countable intersection of closed sets and thus it is closed.
\smallskip
{\it Step 3 ($CS$ is dense in $\I$)}
By the definition of topology on ${\LL mn}^\mathbb{Z}$ (see for example \cite{LM:sym}), to show that cutting sequences are dense in \eqref{unionfiniteintersection}, it is enough to show that each arbitrarily long finite subword $v$ of a word $w$ in the intersection \eqref{infiniteintersection} is contained in a bi-infinite cutting sequence of a trajectory on $\M mn$.
Let $v$ be such a finite subword and let $b_0$ and $\left((a_k, b_k)\right)_k$ be the integer and sequences, respectively, that appear in the expression \eqref{infiniteintersection}.
Let $(w^k)_k$ be the sequence of derivatives given by Definition \ref{def:derivatives} and let $(v^k)_k$ be the subwords (possibly empty) which are images of $v$ in $w^k$ (using the terminology introduced at the very beginning of \S~\ref{sec:fixedpoints}).
Recall that the operator $D nm \Norm nm D mn \Norm mn$ either strictly decreases or does not increase the length of finite subwords (see Remark \ref{rk:derivation_short}).
Thus, either there exists a minimal $\overline{k}$ such that $v^{{\overline{k}+1}}$ is empty (let us call this situation Case $(i)$), or there exists a minimal $\overline{k}$ such that $v^{\overline{k}}$ has the same length as $v^{{k}}$ for all $k\geq \overline{k}$ (Case $(ii)$).
Let us show that in both cases $v^{\overline{k}}$ is a subword of the cutting sequence of some periodic trajectory $\tau^{\overline{k}}$.
In Case $(i)$,
let $n_1$ (respectively $n_2$) be the last (respectively the first) edge label of $w^{\overline{k}}$ which survives in $w^{\overline{k}+1} $ before (respectively after) the occurrence of the subword $ { v^{\overline{k}}}$. Thus, since $v^{\overline{k}+1}$ is the empty word by definition of $\overline{k}$, $n_1n_2$ is a transition in $w^{\overline{k}+1}$.
By definition of a transition, we can hence find a trajectory ${\tau}^{\overline{k}+1} $ which contains the transition $n_1n_2$ in its cutting sequence.
If we set $\tau^{\overline{k}}$ to be equal to $(\refl {a_j} nm)^{-1} (\AD nm )^{-1} {\tau}^{\overline{k}+1} $
if $\overline{k}=2j-1$ is odd (respectively $(\refl {b_j} mn)^{-1} (\AD mn )^{-1} {\tau}^{\overline{k}+1} $ if $\overline{k}=2j$ is even), the cutting sequence of $\tau^{\overline{k}}$ contains the block $ { v^{\overline{k}}}$.
In Case $(ii)$, note that since $v^{\overline{k}}$ has the same length as $v^{\overline{k}+2}$, by Lemma \ref{periodicABAB} it must be a finite subword of the infinite periodic word $\dots n_1n_2n_1n_2 \dots$ for some edge labels $n_1,n_2$.
Now, by Lemma \ref{realizecs}, all words of this type are cutting sequences of periodic trajectories, so there exists a periodic trajectory ${\tau}^{\overline{k}} $ which contains $v^{\overline{k}}$ in its cutting sequence.
Finally, once we have found a trajectory $\tau^{\overline{k}}$ which contains $v^{\overline{k}}$ in its cutting sequence, we will reconstruct a trajectory $\tau$ which contains $v$ in its cutting sequence
by applying in reverse order the steps which invert derivation at the combinatorial level (i.e. the generations given by the knowledge of the sequences of admissible sectors) on cutting sequences, and at the same time applying the corresponding affine diffeomorphisms on trajectories.
More precisely, we can define by recursion trajectories $\tau^{{k}}$ which contain $v^{{k}}$ in their cutting sequence for $k= \overline{k}-1, \overline{k}-2, \dots, 1 ,0$ as follows.
Let us make the inductive assumption that $v^{k}$ is contained in the cutting sequence of $\tau^k$.
Let us denote by $\overline{w}^k$ the cutting sequence of the normalized trajectory $\overline{\tau}^k$ and by $\overline{v}^k$ the block in $\overline{w}^k$ which corresponds to ${v}^k$ in $w^k$. By definition of itinerary and by Proposition \ref{inverse}, we then have that
$u^{k-1} = \gen {a_j}{n}{m} u^{k}$ for $k=2j-1$ odd or $u^{k-1} = \gen {b_j} mn u^{k}$ for $k=2j$ even.
Thus, setting $\overline{\tau}^{k-1}$ to be equal to ${\refl{a_j}{n}{m}}^{-1} {\AD mn}^{-1} \overline{\tau}^{k}$ or $ {\AD nm}^{-1} {\refl{b_j}{m}{n}}^{-1} \overline{\tau}^{k}$ respectively, we have that by Proposition \ref{inverse} the derived sequence $\overline{w}^{k}$ contains $\overline{v}^k$. Thus, if we set $\tau^{k-1} $ to be respectively
${\refl{b_{j-1}}{m}{n}} \overline{\tau}^{k-1}$ or ${\refl{a_{j-1}}{n}{m}} \overline{\tau}^{k-1}$, $\tau^{k-1}$ has a cutting sequence which contains $v^{k-1}$.
Continuing this recursion for $\overline{k}$ steps, we finally obtain a trajectory $\tau^0$ which contains the finite subword $v$. This concludes the proof that cutting sequences are dense in $\I$.
\end{proof}
\subsection{An $\mathcal{S}$-adic characterization via substitutions}\label{sec:substitutionscharacterization}
In this section we present an alternative characterization using the more familiar language of substitutions. This will be obtained by starting from the characterization via generations (Theorem \ref{thm:generation_characterization}) in the previous section \S\ref{sec:generation_characterization}, and showing that generations can be converted to substitutions on a different alphabet corresponding to \emph{arrows} (or transitions) in transition diagrams. Let us first recall the formal definition of a substitution.
\begin{definition}[Substitution]\label{def:substitution}
A \emph{substitution} $\sigma $ on the alphabet $\mathcal{A}$ is a map that sends each symbol in the alphabet to a finite word in the same alphabet, extended to act on $\mathcal{A} ^{\mathbb{Z}}$ by juxtaposition: if for $a\in\mathcal{A}$ we have $\sigma (a) = w_a $, where $w_a$ is a finite word in $\mathcal{A}$, then for $w= (a_i)_{i \in \mathbb{Z}} \in \mathcal{A}^ \mathbb{Z}$ we have that $\sigma(\cdots a_{-1} a_0 a_1 \cdots) = \cdots w_{a_{-1}} w_{a_0} w_{a_1} \cdots$.
\end{definition}
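For instance, the Fibonacci substitution on the alphabet $\mathcal{A}=\{0,1\}$ is given by $\sigma(0)=01$ and $\sigma(1)=0$, so that, for example, $\sigma(\cdots 0\,1\,0 \cdots) = \cdots 01\,0\,01 \cdots$.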
Let us now define a new alphabet $\Ar mn$, which we will use to label \emph{arrows} of a transition diagram of $\M mn$. The cardinality of the alphabet $\Ar mn$ is $N_{m,n}:=\NA mn $ since this is the number of arrows in the diagrams $\T i mn$. Recall that from each vertex in $\T i mn$ there is at most one outgoing vertical arrow, for a total of $n(m-2)$ vertical arrows. On the other hand, there can be two outgoing horizontal arrows, going one right and one left, for a total of $2(m-1)(n-2)$ horizontal arrows. Hence, we will use as edge labels $v_i$,
$l_i, r_i$, where $v, l, r$ stand respectively for \emph{vertical}, \emph{left} and \emph{right}, and the index $i$ runs from $1$ to the number of arrows in each group, i.e.
\begin{equation*}
\Ar mn = \{ v_i, 1\leq i \leq n(m-2)\} \cup \{ r_i, 1\leq i \leq (m-1)(n-2)\} \cup \{ l_i, 1\leq i \leq (m-1)(n-2)\}.
\end{equation*}
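For instance, for $(m,n)=(4,3)$ this gives $3(4-2)=6$ vertical arrows and $(4-1)(3-2)=3$ right and $3$ left arrows, for a total of $12$ labels (cf. Figure \ref{fig:arrows_names_ex}).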
We label the \emph{arrows} of the universal diagram $\UD m n$ in a \emph{snaking pattern} starting from the upper left corner, as shown in Figure \ref{fig:arrows_names_ex} for $\M 43 $ and $\M 34$,
where the labels of the alphabet $\Ar 43$ are all in red (since they represent transitions between the red vertices), while the labels of $\Ar 34$ are all in green. In particular for vertical arrows $v_i$, $v_1$ is the vertical arrow from the top left vertex, then $i$ increases by going down on odd columns and up on even ones; right arrows $r_i$ are numbered so that $r_1$ is also exiting the top left vertex and $i$ always increases going from left to right in each row; finally left arrows $l_i$ are numbered so that $l_1$ exits the top right vertex and $i$ always increases going from right to left in each row.
This labeling of $\UD m n$ induces a labeling of arrows on each $\T i m n$ for $0\leq i \leq n-1$, where all arrows are labeled in the same way in each diagram.
\begin{figure}[!h]
\centering
\includegraphics[width=350pt]{fig-arrows_names_ex.png}
\begin{quote}
\caption{The labeling of $\UD m n$ with the labels of $\Ar mn$ for \mbox{$m=4,n=3$} and for \mbox{$m=3,n=4$} \label{fig:arrows_names_ex}} \end{quote}
\end{figure}
Let us call \emph{admissible} the words in the alphabet $\Ar mn$ that correspond to paths of arrows in a transition diagram (in a similar way to Definition \ref{admissibledef}).
\begin{definition}\label{def:admissible} Let us say that the word $a$ in ${\Ar mn}^{\mathbb{Z}}$ is \emph{admissible}
if it describes an infinite path on $\UD m n$, i.e. all pairs of consecutive labels $a_i a_{i+1}$ are such that $a_i$ labels an arrow that ends in a vertex in which the arrow labeled by $a_{i+1}$ starts.
\end{definition}
Let us also define an operator
$\Tr i m n$ which, for $0 \leq i \leq 2n-1$, allows us to convert admissible words in $\Ar mn^{\mathbb{Z}}$ to words in $\LL mn^{\mathbb{Z}}$ that are admissible in diagram $\T i m n$.
\begin{definition}\label{def:Tr}
The operator $\Tr 0 m n $ sends an \emph{admissible} sequence $(a_k)_k$ in $ \Ar mn^{\mathbb{Z}}$ to the sequence $(w_k)_k$ in $\LL mn^{\mathbb{Z}}$, admissible in $\T 0 mn $, obtained by reading off the names of the vertices of a path in $\T 0 mn $ which goes through all the arrows $\dots a_{-1}, a_0, a_1, \dots$.
The operators $\Tr i m n $ for $0\leq i \leq 2n-1$ are obtained by composing $\Tr 0 m n $ with the action on $\LL mn$ of $\perm i mn$, so that $\Tr i m n := \perm i mn \circ \Tr 0 m n$ maps admissible sequences in $ \Ar mn^{\mathbb{Z}}$ to the sequences in $\LL mn^{\mathbb{Z}}$ admissible in $\T i mn $.
\end{definition}
\begin{example}\label{ex:Tr}
Let $m=4$ and $n=3$.
Consider an admissible sequence in $\Ar 43 ^ \mathbb Z$ containing ${\rd r_1l_2v_1}$.
This is possible because the string represents a path in $\UD 43$ (see Figure \ref{fig:arrows_names_ex}).
Now, to calculate $\Tr 0 mn (\dots {\re r_1l_2v_1} \dots)$, we need to look at $\T 0 43$ (Figure \ref{all-td43}).
We record the vertices of the path represented by these arrows and we get a word which will contain the subword ${\rd 1216}$.
So $\Tr 0 mn (\dots {\rd r_1l_2v_1}\dots )=\dots {\rd 1216} \dots $.
\end{example}
\begin{remark}\label{invertibilityTr}
The operator $\Tr i m n $ is invertible and for $0\leq i \leq 2n-1$ the inverse $(\Tr i m n )^{-1}$ maps a sequence $(w_k)_k$ in $\LL mn^{\mathbb{Z}}$ admissible in $\T i mn $ to the admissible sequence $(a_k)_k$ in $\Ar mn^{\mathbb{Z}}$ obtained by reading off the names $\dots a_{-1}, a_0, a_1, \dots$ of the arrows of a path in $\T i mn $ which goes through all the vertices $\dots w_{-1}, w_0, w_1, \dots$.
\end{remark}
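On finite words, $\Tr 0 m n $ and its inverse reduce to simple lookups once each arrow is stored with its endpoints. In the sketch below (ours) the endpoint data is a small fragment chosen to be consistent with Example \ref{ex:Tr}, not the full diagram $\T 0 43$.
\begin{verbatim}
# Python sketch: Tr^0 (arrows -> vertices) and its inverse.
def tr0(arrows, endpoints):
    word = [endpoints[a][0] for a in arrows]   # start of each arrow...
    word.append(endpoints[arrows[-1]][1])      # ...plus the final endpoint
    return word

def tr0_inverse(vertices, arrow_of):
    # arrow_of: dict (start, end) -> arrow label
    return [arrow_of[(v, w)] for v, w in zip(vertices, vertices[1:])]

endpoints = {"r1": (1, 2), "l2": (2, 1), "v1": (1, 6)}
arrow_of = {pair: lab for lab, pair in endpoints.items()}
print(tr0(["r1", "l2", "v1"], endpoints))    # [1, 2, 1, 6]
print(tr0_inverse([1, 2, 1, 6], arrow_of))   # ['r1', 'l2', 'v1']
\end{verbatim}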
\smallskip
The main result of this section is the following characterization.
\begin{theorem}[An $\mathcal{S}$-adic characterization of \bm cutting sequences.]\label{thm:substitutionscharacterization}
There exist \mbox{$(n-1)(m-1)$} substitutions $\sigma_{i,j}$ for $1\leq i \leq n-1 $ and $1\leq j \leq m-1 $ on the alphabet $ \Ar mn$ such that the following holds:
The sequence $w$ is in the closure of the set of cutting sequences of bi-infinite linear trajectories on $\M mn$
if and only if there exist two sequences $(a_k)_k \in \{1, \dots , n-1\}^\mathbb{N}$ and $ (b_k)_k \in \{1, \dots , m-1\}^\mathbb{N}$ and $0\leq b_0 \leq 2n-1$ such that
\begin{equation}\label{intersection_substitutions}
w \in \bigcap_{k\in \mathbb{N}} \Tr {b_0}{m}{n} {\Sub{a_1}{b_1}{m}{n}} {\Sub{a_2}{b_2}{m}{n}} \dots {\Sub{a_k}{b_k}{m}{n}} \Ar mn^{\mathbb{Z}}.
\end{equation}
Furthermore, the sequence $\left((a_k, b_k) \right)_k $ is the \emph{itinerary} of $\theta$ under $\FF m n$.
\end{theorem}
This gives the desired $\mathcal{S}$-adic characterization, where
\begin{equation*}
\mathcal{S}= \mathcal{S}_{m,n}= \{ \sigma_{i,j}, \qquad 1\leq i \leq n-1 , \ 1\leq j \leq m-1 \}.
\end{equation*}
Equivalently, \eqref{intersection_substitutions} can be rephrased by saying that any sequence in the closure of the set of cutting sequences is obtained as an \emph{inverse limit} of products of the substitutions in $\mathcal{S}_{m,n}$, i.e. there exists a sequence of labels $a_k$ in $\Ar mn$ such that
\begin{equation}\label{inverse_limit}
w= \lim_{k \to \infty} \Tr {b_0}{m}{n} {\Sub{a_1}{b_1}{m}{n}} {\Sub{a_2}{b_2}{m}{n}} \dots {\Sub{a_k}{b_k}{m}{n}} a_k.
\end{equation}
The above expression is known as the \emph{$\mathcal{S}$-adic expansion} of $w$. We refer to \cite{BD} for details.
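Concretely, a level-$k$ approximation of the limit \eqref{inverse_limit} is computed by applying the innermost substitution first, as in the following sketch (ours; the table of substitutions and the seed letter are placeholders, and the toy data is invented):
\begin{verbatim}
# Python sketch: compute sigma_{a1,b1} ... sigma_{ak,bk}(seed).
def sadic_word(substitutions, itinerary, seed):
    word = [seed]
    for (i, j) in reversed(itinerary):     # innermost substitution first
        sigma = substitutions[(i, j)]
        word = [c for a in word for c in sigma[a]]
    return word

toy = {(1, 1): {"x": ["x", "y"], "y": ["x"]}}
print(sadic_word(toy, [(1, 1), (1, 1)], "x"))   # ['x', 'y', 'x']
\end{verbatim}
One would then apply $\Tr {b_0}{m}{n}$ to the resulting word of arrow labels to obtain a word of edge labels.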
The proof of Theorem \ref{thm:substitutionscharacterization}, which is presented in the next section \S\ref{proof:substitutions}, essentially consists of rephrasing Theorem \ref{thm:generation_characterization} in the language of substitutions.
As an example of the substitutions which occur, we list one of the substitutions for $m=4, n=3$ below (Example \ref{ex:substitution1for43}) and give the other substitutions for $m=4, n=3$ in Example \ref{ex:substitutionsfor43} as composition of the pseudosubstitutions (see Definition \ref{def:pseudosubstitutions} below) in Example \ref{ex:pseudosubstitutionsfor43}. We explain in the next section how these substitutions were computed (see in particular Example \ref{ex:substitution_howto}).
\begin{example}[Substitutions for $\M 43$]\label{ex:substitution1for43}
The substitution $\Sub 11 43$ for cutting sequences on $\M 43$ is the following:
\begin{align*}
&\Sub 11 43:&
\Sub 11 43( r_1)&= l_2 v_1 r_3v_4
& \Sub 11 43( l_1)&= l_1
& \Sub 11 43( v_1)&= l_2 v_1 \\&&
\Sub 11 43(r_2)&= r_2
& \Sub 11 43(l_2)&= r_2 l_1
& \Sub 11 43(v_2)&= r_3 \\&&
\Sub 11 43(r_3)&= r_3
& \Sub 11 43(l_3)&= r_2 v_5 v_6l_5 v_3
& \Sub 11 43(v_3)&= r_6 l_5 v_3 \\&&
\Sub 11 43( r_4)&= l_4 r_3 v_4
& \Sub 11 43( l_4)&= l_4
& \Sub 11 43( v_4)&= l_4 r_3 v_4 \\&&
\Sub 11 43( r_5)&= l_4 v_2 r_5
& \Sub 11 43( l_5)&= l_5
& \Sub 11 43( v_5)&= l_1 \\&&
\Sub 11 43( r_6)&= r_6
& \Sub 11 43( l_6)&= r_6 l_5 v_3
& \Sub 11 43( v_6)&= r_2 v_5 v_6
\end{align*}
In Example \ref{ex:substitution_howto} below we explain how the above substitution can be obtained from the generation rules in the previous section. The other substitutions for $\M 43$ are given in Example \ref{ex:substitutionsfor43}, see also Example \ref{ex:pseudosubstitutionsfor43}.
\end{example}
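Since the substitution above is a finite table, it can be transcribed directly, e.g. as a Python dictionary with words written as space-separated strings; the sketch below (ours) reproduces the computation $\Sub 11 43({\rd r_1 r_2}) = {\rd l_2v_1r_3v_4r_2}$ which appears in the verification example of the next subsection.
\begin{verbatim}
# Python sketch: the substitution sigma^{43}_{1,1} as a lookup table.
SUB_11_43 = {
    "r1": "l2 v1 r3 v4", "l1": "l1",            "v1": "l2 v1",
    "r2": "r2",          "l2": "r2 l1",          "v2": "r3",
    "r3": "r3",          "l3": "r2 v5 v6 l5 v3", "v3": "r6 l5 v3",
    "r4": "l4 r3 v4",    "l4": "l4",             "v4": "l4 r3 v4",
    "r5": "l4 v2 r5",    "l5": "l5",             "v5": "l1",
    "r6": "r6",          "l6": "r6 l5 v3",       "v6": "r2 v5 v6",
}

def apply(sub, word):
    return " ".join(sub[a] for a in word.split())

print(apply(SUB_11_43, "r1 r2"))   # l2 v1 r3 v4 r2
\end{verbatim}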
\subsection{From generations to substitutions}\label{proof:substitutions}
We will now provide the \emph{recipe} of how to translate generation operators (in the alphabet $\LL mn$) into a substitution (in the alphabet $\AA mn$), and in particular to obtain the substitutions in the previous example. This is done in Definition \ref{def:substitutions} and Lemma \ref{lemma:conjugationgensub}. They constitute the heart of the proof of Theorem \ref{thm:substitutionscharacterization} from Theorem \ref{thm:generation_characterization}, which is presented at the end of this section.
We begin first with a concrete example, which the definitions below will then formalize.
\begin{example}\label{ex:substitution_howto}
Let $m=4$ and $n=3$. Let us explain how to associate to the composition of the two generation operators $\gen 1 34 \circ \gen 1 43 $ a substitution on the arrows alphabet $\Ar 43$. For clarity, we will denote in red the symbols (edge labels) of the alphabet $\rd \Ar 43$ and in green the ones of $\gr \Ar 34$.
Let us first consider the generation diagram $\GD 1 43$ used to define $\gen 1 43 $. Start from the arrow labeled by $\rd r_1$ on the universal diagram $\UD 43$, which in this diagram is the arrow from the vertex labeled by $\rd 7$ to the vertex $\rd 9$. The generating word on this arrow in $\GD 1 43$ is the green word $\gr 3 6$. Remark also that all the arrows incoming to the red vertex $\rd 7$ (in this case only one) carry a green word which ends with $\gr 4$ (in this case {\gr 54}), while all the arrows outgoing from the red vertex $\rd 9$ (two) carry a green word which starts with $\gr 5$ ({\gr 5} and {\gr 54}). Thus, the image under $\gen 1 43$ of a sequence which contains the transition $\rd 7 9$ contains the word $\gr 4 3 6 5$. We look now at the transition diagram $\T 0 34$ on page \pageref{all-td34} (the first in Figure \ref{all-td34}) and see that a path which goes through $\gr 4 3 6 5 $ crosses the arrows which are labeled by ${\gr l_1 v_3 r_6}$ in $\UD 34$ (see Figure \ref{fig:arrows_names_ex}). We choose to include in the green path associated to the transition $\rd r_1$ the first arrow but not the last one, which will be included in the green path associated to the following red transition. Thus, we say that the label $\rd r_1$ of $\rd \Ar 43$ is mapped to the word ${\gr l_1 v_3 }$ in the alphabet $\gr \Ar 34$. We repeat the same process for every arrow on $\GD 1 43$. This gives a map from labels in $\rd \Ar 43$ to words in $\gr \Ar 34$, which can be extended to words in $\rd \Ar 43$ by juxtaposition. We call this type of operator a \emph{pseudo-substitution} (see Definition \ref{def:pseudosubstitution}), since it acts as a substitution but between two different alphabets.
Similarly, we repeat the same process for arrows of the dual \bm surface $\M 34$. For example, for the generation diagram $\GD 1 34$ used to define $\gen 1 34 $ we see that the arrow labeled by $\gr l_1$ is the arrow from $\gr 1$ to $\gr 3$ and carries the word $\rd 16$. Furthermore, the unique incoming arrow to $\gr 1$ carries the label $\rd 2$. Since the word $\rd 216$ describes, in the diagram $\T 0 43$, a path through the arrows labeled by $\rd l_2 v_1$ in $\UD 43$ in Figure \ref{fig:arrows_names_ex}, we associate to $\gr l_1$ the word $\rd l_2 v_1$.
Reasoning in a similar way, we associate to $\gr v_3$ the word $\rd r_3v_4$ (given by the path $\rd 652$). Thus, by juxtaposition, the word $\gr l_1 v_3$ in $\gr \Ar 34$ maps to the word $\rd l_2 v_1 r_3v_4$ in $\rd \Ar 43$.
Thus, the composition $\gen 1 34 \circ \gen 1 43 $ sends $\rd r_1$ to $\rd l_2 v_1 r_3v_4$. We can hence define a standard substitution $\sigma_{1,1}^{43}$ in the alphabet $\rd \Ar 43$, by setting $\sigma_{1,1}^{43}({\rd r_1})= {\rd l_2 v_1 r_3v_4}$ and similarly for the other labels of $\rd \Ar 43$. This produces the substitution in Example \ref{ex:substitution1for43} above.
\end{example}
We will now state formally how to obtain substitutions from generations, thus formalizing the process explained in Example \ref{ex:substitution_howto} above.
As we already saw, since each generation operator maps cutting sequences on $\M mn$ to cutting sequences on the dual surface $\M nm$, in order to get substitutions (in the standard sense of Definition \ref{def:substitution}) we will need to compose \emph{two} generation operators. It is easier though to first describe the substitutions in two steps, each of which corresponds to one of the generation operators.
Since the alphabet $\AA mn$ on which the substitution acts corresponds to \emph{transitions} in the original alphabet $\LL mn$, and the transitions for $\M mn$ and for $\M nm$ are given by two different alphabets, the intermediate steps will be described by \emph{pseudo-substitutions}, which are like substitutions but have different alphabets in departure and arrival:
\begin{definition}[Pseudo-substitution]\label{def:pseudosubstitution}
A \emph{pseudo-substitution} $\sigma $ from an alphabet $\mathcal{A} $ to an alphabet $\mathcal{A}'$ is a map that sends each letter $a \in \mathcal{A}$ to a finite word in $\mathcal{A}'$, extended to act on $\mathcal{A} ^{\mathbb{Z}}$ by juxtaposition, so that if $\sigma (a) = w_a $, where $w_a$ is a finite word in the letters of $\mathcal{A}'$ for each $a \in \mathcal{A}$, then for $w= (a_i)_{i\in\mathbb{Z}} \in \mathcal{A}^ \mathbb{Z}$ we have that $\sigma(\cdots a_{-1} a_0 a_1 \cdots) = \cdots w_{a_{-1}} w_{a_0} w_{a_1} \cdots$.
\end{definition}
\begin{definition}[Pseudo-substitution associated to a generation]\label{def:pseudosubstitutions}
Let $\PSub i m n $ for $1\leq i \leq n-1$
be the pseudo-substitution between the alphabets $\AA mn$ and $\AA nm$
defined as follows.
Assume that the arrow from vertex $j$ to vertex $k$ of $\T i mn $ is labeled by $a$ in $\UD mn$. Let $w_1 w_2 \dots w_N$ be the finite word associated to this arrow in the generation diagram ${ \GD i m n}$. Then set
\begin{equation*}
\PSub i m n (a) = a_0 a_1 a_2 \dots a_{N-1},
\end{equation*}
where, for $1\leq s \leq N-1$, $a_s$ is the label in $\AA nm$ of the arrow from $w_s$ to $w_{s+1}$, while $a_0$ is the label of the arrow to $w_1$ from the unique edge label in $\LL nm$ which always precedes $j$ in paths on $\GD i mn$.
\end{definition}
\begin{example}\label{ex:pseudosubstitutionsfor43}
Let $m=4$, $n=3$. For $i=1$, as we already saw at the beginning of Example \ref{ex:substitution_howto}, the arrow labeled by $\rd r_1$ on the universal diagram $\UD 43$, which is the arrow from the vertex labeled by $\rd 7$ to the vertex $\rd 9$ in $\GD 1 43$, is labeled by the green word $\gr 3 6$, and the labels of arrows incoming to the red vertex $\rd 7$ end with $\gr 4$. Furthermore, the path $\gr 436$ on $\T 0 34$ corresponds to the arrows labeled by ${\gr l_1 v_3 }$. Thus we set $\PSub 1 43({\rd r_1})={\gr l_1 v_3 } $. Similarly, the arrow ${\rd r_2}$ in $\GD 1 43$ goes from {\rd 9} to {\rd 8}, is labeled by {\gr 5}, and all three arrows which end in {\rd 9} have labels which end with {\gr 6}. Thus, since the path {\gr 65} corresponds to the arrow ${\gr r_6}$ in $\T 0 34$, we set $\PSub 1 43({\rd r_2})={\gr r_6 } $.
Reasoning in the same way and generalizing it to $i=2$, we get the full pseudosubstitutions for $\M 43$ (Figure \ref{pseudo43}).
\begin{figure}[!h]
\begin{align*}
&\PSub 1 43:
&\PSub 1 43 (r_1) &=l_1v_3
&\PSub 1 43 (l_1) &=l_4
&\PSub 1 43 (v_1) &=l_1 \\&
&\PSub 1 43 (r_2) &=r_6
&\PSub 1 43 (l_2) &=r_6v_4
&\PSub 1 43 (v_2) &=l_2 \\&
&\PSub 1 43 (r_3) &=l_2
&\PSub 1 43 (l_3) &=l_5v_2
&\PSub 1 43 (v_3) &=r_4v_2 \\&
&\PSub 1 43 (r_4) &=r_2v_3
&\PSub 1 43 (l_4) &=r_2
&\PSub 1 43 (v_4) &=r_2v_3 \\&
&\PSub 1 43 (r_5) &=l_3v_1
&\PSub 1 43 (l_5) &=l_6
&\PSub 1 43 (v_5) &=l_4 \\&
&\PSub 1 43 (r_6) &=r_4
&\PSub 1 43 (l_6) &=r_4v_2
&\PSub 1 43 (v_6) &=l_5 \\
&&&&&&& \\
&\PSub 2 43:
&\PSub 2 43 (r_1) &=r_1
&\PSub 2 43 (l_1) &=r_4v_2
&\PSub 2 43 (v_1) &=r_1 \\&
&\PSub 2 43 (r_2) &=l_3v_1
&\PSub 2 43 (l_2) &=l_3
&\PSub 2 43 (v_2) &=r_2 \\&
&\PSub 2 43 (r_3) &=r_2v_3l_5
&\PSub 2 43 (l_3) &=r_5
&\PSub 2 43 (v_3) &=l_1v_3 \\&
&\PSub 2 43 (r_4) &=l_5
&\PSub 2 43 (l_4) &=l_5v_2
&\PSub 2 43 (v_4) &=l_5v_2 \\&
&\PSub 2 43 (r_5) &=r_3
&\PSub 2 43 (l_5) &=r_6v_4
&\PSub 2 43 (v_5) &=r_4 \\&
&\PSub 2 43 (r_6) &=r_4
&\PSub 2 43 (l_6) &=l_1
&\PSub 2 43 (v_6) &=r_5
\end{align*}
\begin{quote}\caption{The pseudosubstitutions for $\M 43$\label{pseudo43}} \end{quote}
\end{figure}
Let now $m=3$ and $n=4$. In the same way, for $i=1,2,3$, we can calculate the pseudosubstitutions for $\M 34$ (Figure \ref{pseudo34}).
\begin{figure}
\begin{align*}
&\PSub 1 34:
&\PSub 1 34 (r_1)&=r_5v_3
&\PSub 1 34 (l_1)&=l_2v_1
&\PSub 1 34 (v_1)&=r_5 \\&
&\PSub 1 34 (r_2)&=l_4
&\PSub 1 34 (l_2)&=r_3
&\PSub 1 34 (v_2)&=l_5v_3 \\&
&\PSub 1 34 (r_3)&=r_3v_4
&\PSub 1 34 (l_3)&=l_4v_2
&\PSub 1 34 (v_3)&=r_3v_4 \\&
&\PSub 1 34 (r_4)&=r_6
&\PSub 1 34 (l_4)&=l_1
&\PSub 1 34 (v_4)&=l_1 \\&
&\PSub 1 34 (r_5)&=l_5v_3v_4
&\PSub 1 34 (l_5)&=r_2v_5v_6 \\&
&\PSub 1 34 (r_6)&=r_2
&\PSub 1 34 (l_6)&=l_5 \\&
&&&&&&& \\
&\PSub 2 34:
&\PSub 2 34 (r_1)&=l_3v_4
&\PSub 2 34 (l_1)&=r_4v_6
&\PSub 2 34 (v_1)&=l_3 \\&
&\PSub 2 34 (r_2)&=r_2v_5v_6
&\PSub 2 34 (l_2)&=l_5v_3v_4
&\PSub 2 34 (v_2)&=r_5v_3v_4 \\&
&\PSub 2 34 (r_3)&=l_5v_3
&\PSub 2 34 (l_3)&=r_2v_5
&\PSub 2 34 (v_3)&=l_5v_3v_4 \\&
&\PSub 2 34 (r_4)&=l_4v_2
&\PSub 2 34 (l_4)&=r_3v_4
&\PSub 2 34 (v_4)&=r_3 \\&
&\PSub 2 34 (r_5)&=r_5v_3v_4
&\PSub 2 34 (l_5)&=l_2v_1v_2 \\&
&\PSub 2 34 (r_6)&=l_2v_1
&\PSub 2 34 (l_6)&=r_5v_3 \\&
&&&&&&& \\
&\PSub 3 34:
&\PSub 3 34 (r_1)&=r_1
&\PSub 3 34 (l_1)&=l_6
&\PSub 3 34 (v_1)&=r_1 \\&
&\PSub 3 34 (r_2)&=l_2v_1v_2
&\PSub 3 34 (l_2)&=r_5v_3v_4
&\PSub 3 34 (v_2)&=l_3v_4 \\&
&\PSub 3 34 (r_3)&=r_5
&\PSub 3 34 (l_3)&=l_2
&\PSub 3 34 (v_3)&=r_5v_3 \\&
&\PSub 3 34 (r_4)&=r_2v_5
&\PSub 3 34 (l_4)&=l_5v_3
&\PSub 3 34 (v_4)&=l_5 \\&
&\PSub 3 34 (r_5)&=l_3
&\PSub 3 34 (l_5)&=r_4 \\&
&\PSub 3 34 (r_6)&=r_4v_6
&\PSub 3 34 (l_6)&=l_3v_4
\end{align*}
\begin{quote}\caption{The pseudosubstitutions for $\M 34$\label{pseudo34}} \end{quote}
\end{figure}
\end{example}
It is easy to check from the definition that given a pseudo-substitution $\sigma$ between the alphabets $\mathcal{A}$ and $\mathcal{A}'$ and a pseudo-substitution $\tau$ between the alphabets $\mathcal{A}'$ and $\mathcal{A}$, their composition $\tau \circ \sigma$ is a substitution on the alphabet $\mathcal{A}$. Thus the following definition is well posed.
\begin{definition}[Substitution associated to a pair of generations]\label{def:substitutions}
For $1\leq i \leq n-1$, $1\leq j \leq m-1$, let $\Sub i j m n $ be the substitution on the alphabet $\AA mn$ defined by
$$
\Sub i j m n := \PSub j n m \circ \PSub i m n .
$$
\end{definition}
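Computationally, this composition is immediate when pseudo-substitutions are stored as lookup tables; the sketch below (ours, on invented two-letter alphabets) illustrates the mechanism. For instance, composing the tables of Example \ref{ex:pseudosubstitutionsfor43} in this way on ${\rd r_1}$ gives ${\rd r_1} \mapsto {\gr l_1 v_3} \mapsto {\rd l_2 v_1 r_3 v_4}$, matching Example \ref{ex:substitution1for43}.
\begin{verbatim}
# Python sketch: composing two pseudo-substitutions gives a substitution.
def compose(tau, sigma):
    # tau o sigma: rewrite by sigma first, then rewrite each letter by tau
    return {a: " ".join(tau[b] for b in sigma[a].split()) for a in sigma}

sigma = {"a": "x y"}             # pseudo-substitution A -> A' (invented)
tau = {"x": "a", "y": "a a"}     # pseudo-substitution A' -> A (invented)
print(compose(tau, sigma))       # {'a': 'a a a'}
\end{verbatim}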
\begin{example}\label{ex:substitutionsfor43}
In Example \ref{ex:substitution1for43} we wrote the substitution $\Sub 11 43$ explicitly. The full list of substitutions for $\M 43$ can be produced by composing the pseudosubstitutions in Example \ref{ex:pseudosubstitutionsfor43}, as in Definition \ref{def:substitutions}, i.e.
\begin{equation*}
\mathcal{S}_{4,3} =\{ \Sub ij 43 := \PSub j 34 \circ \PSub i 43, \qquad \text{for} \ 1\leq i \leq 2, \ 1\leq j\leq 3\}.
\end{equation*}
\end{example}
The following lemma shows that, up to changing alphabet from vertex labels to arrow labels as given by the operator $\Tr 0 m n $ and its inverse (see Remark \ref{invertibilityTr}), the substitutions $\Sub i j m n$ act as the composition of two generation operators.
\begin{lemma}[From generations to substitutions]
\label{lemma:conjugationgensub}
The substitutions $\Sub i j m n$ defined in Definition \ref{def:substitutions} for any $1\leq i \leq n-1, 1\leq j \leq m-1$ are such that
\begin{equation}\label{covariance}
\Tr 0 m n \circ \Sub i j m n \circ (\Tr 0 m n )^{-1} = \gen j nm \circ \gen i mn.
\end{equation}
\end{lemma}
Before giving the proof, we show by an example the action of the two sides of the above formula.
\begin{example}
Let us verify for example that the formula in Lemma \ref{lemma:conjugationgensub} holds for $m=4,n=3$ and $i=1,j=1$ when applied to a word $w$ admissible in $\T 0 43$ which contains the word $\rd 123$.
Let us first compute the action of the right hand side of \eqref{covariance}.
Recall (see Definition \ref{genoperator}) that $\gen 1 43$ is given by first applying $\perm 1 43$, then $\mathfrak g_1^0$. Since $\perm 1 43$ maps $\rd 123$ to
$\rd 7 9 8$, by looking at the generation diagram $\GD 1 43$ (Figure \ref{gendiagram-43}), we see that $\gen 1 43 w$ will contain the string $\gr 4365$. Then, to apply $\gen 1 34$ we first apply $\perm 1 34$, which sends $\gr 4365$ to $\gr 1387$, then look at the generation diagram $\GD 1 34$ (Figure \ref{gendiagram-34}), to see that a path which contains $\gr 1387$ will also contain $\rd 216523$.
Hence $\gen 1 34 \gen 1 43 w$ will contain $\rd 216523$.
Let us now compute the action of the left hand side of (\ref{covariance}).
Since the arrow from the vertex $\rd 1$ to $\rd 2$ in $\T 0 43$ is labeled by $\rd r_1$ (Figure \ref{fig:arrows_names_ex}),
and the one from $\rd 2$ to $\rd 3$ is labeled by $\rd r_2$, the operator $(\Tr 0 m n )^{-1}$ sends $\rd 123$ to $\rd r_1 r_2$. Then, from Example \ref{ex:pseudosubstitutionsfor43} we have that $\Sub 11 43 ({\rd r_1 r_2}) = {\rd l_2v_1r_3v_4r_2 }$. Finally,
$\Tr 0 m n$ maps this word in $\AA 43$ to $\rd 216523$ (see Example \ref{ex:Tr}).
Thus, we have verified again that $\Tr 0 43 \circ \Sub 11 43 \circ (\Tr 0 43 )^{-1} (w)$ contains $\rd 216523$.
\end{example}
\begin{proof}[Proof of Lemma \ref{lemma:conjugationgensub}]
Since $$\Sub i j m n := \PSub j n m \circ \PSub i m n = \PSub j n m \circ (\Tr 0 n m )^{-1} \circ \Tr 0 n m \circ \PSub i m n,$$
it is enough to show that
\begin{equation*}
\Tr 0 n m \circ \PSub i m n \circ (\Tr 0 m n )^{-1} = \gen i mn, \qquad \text{for all} \ 1\leq i \leq n-1 \ \text{and all} \ m,n.
\end{equation*}
Consider any sequence $w \in {\LL mn}^\mathbb{Z}$. Let $ n_1 n_2$ and $ n_2 n_3$ be any two transitions in $\T 0 mn$. Let $u_1 u_2 \dots u_{N}$ be the label of the arrow from $n_1$ to $n_2$ in $\GD i mn$ and $v_1 v_2 \dots v_{M}$ the one of the arrow
between $n_2$ and $n_3$. Then every occurrence of the transitions $n_1 n_2 n_3$ in $w$ (i.e. every time $w_{p-1}=n_1 , w_p=n_2, w_{p+1} =n_3 $ for some $p \in \mathbb{Z}$) gives rise to a block of the form $ u_1 u_2 \dots u_N v_1 v_2 \dots v_M$ in $\LL nm$ in $\gen i mn(w)$.
Let $a$ be the label in $\AA mn$ of the arrow from $n_1$ to $n_2$ and $b$ be the label of the arrow from $n_2$ to $n_3$. Thus, when we apply $(\Tr 0 m n )^{-1}$ to $w$, each block $n_1 n_2 n_3$ in $w$ is mapped to the word $ab$.
Let $a_s$ for $1\leq s \leq N-1$ be the labels of the arrows from $u_s$ to $u_{s+1}$ and $a_0$ be the label of the arrow from the unique label in $\LL nm$ which precedes $n_1$
to $u_1$. Thus, by Definition \ref{def:pseudosubstitutions}, we have that $\PSub i m n (a) = a_0 a_1 a_2 \dots a_{N-1}$. Now, let $b_s$ for $1\leq s \leq M-1$ be the labels of the arrows from $v_s$ to $v_{s+1}$. Let $b_0$ be the label of the arrow from $u_{N}$ to $v_1$ and remark that $u_N$ is (by uniqueness) the unique label in $\LL nm$ which precedes $n_2$. Thus, again by Definition \ref{def:pseudosubstitutions}, $\PSub i m n (b) = b_0 b_1 b_2 \dots b_{M-1}$.
Thus, we have that $\PSub i m n (ab) = a_0 a_1 \dots a_{N-1} b_0 b_1 \dots b_{M-1}$. Finally, by definition of the operator $\Tr 0 n m$ (recall Definition \ref{def:Tr}) and of the arrows $a_s$ and $b_s$ given above, $\Tr 0 n m \circ \PSub i m n \circ (\Tr 0 m n )^{-1} (w)$ contains the word $u_1 u_2 \dots u_N v_1 \dots v_M$ in correspondence with each occurrence of $n_1 n_2 n_3$ in $w$, exactly as $\gen i mn (w)$ does. This shows the equality between the two sides of the displayed formula, and hence \eqref{covariance}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:substitutionscharacterization}]
By Theorem \ref{thm:generation_characterization}, $w \in {\LL mn}^\mathbb{Z}$ belongs to the closure of cutting sequences on $\M mn$ if and only if there exist $b_0$ and sequences $(a_k)_k, (b_k)_k$ such that for every $k$ there exists a word $u^k$ in $\LL mn ^\mathbb{Z} $, admissible in $\T 0 m n$, such that
\begin{align*}
w&= (\perm {b_0} m n)^{-1}(\gen {b_1} n m \gen {a_1} mn) ( \gen {b_2} n m \gen {a_2} mn ) \ldots ( \gen {b_k} n m \gen {a_k} m n) u^k \\
&=(\perm {b_0} m n)^{-1} \Tr 0 m n \, (\Tr 0 m n)^{-1} (\gen {b_1} n m \gen {a_1} mn) \Tr 0 m n \, (\Tr 0 m n)^{-1} ( \gen {b_2} n m \gen {a_2} mn ) \Tr 0 m n \ldots (\Tr 0 m n)^{-1}( \gen {b_k} n m \gen {a_k} m n) \Tr 0 m n \, (\Tr 0 m n)^{-1} u^k \\
&= \Tr {b_0} m n \Sub {a_1} {b_1} m n \Sub {a_2} {b_2} m n \ldots \Sub {a_k} {b_k} m n (\Tr 0 m n)^{-1} u^k ,
\end{align*}
where in the last line we applied Lemma \ref{lemma:conjugationgensub} and recalled the definition $\Tr i m n := (\perm i mn)^{-1} \circ \Tr 0 m n$ of the operators $\Tr i mn$ (see Definition \ref{def:Tr}; recall that the permutations $\perm i mn$ are involutions). Remarking that $ (\Tr 0 m n)^{-1} u^k $ is a sequence in the alphabet $\AA mn$ which is admissible by definition of $\Tr 0 m n$, this shows that $w$ is in the closure of cutting sequences if and only if it belongs to the intersection \eqref{intersection_substitutions}.
\end{proof}
\section{Derivation} \label{derivation}
In this section we will describe the \emph{renormalization} procedure which will be our key tool to help us characterize (the closure of) all possible trajectories on a given surface. The idea is that we will describe geometric renormalization operators (given by compositions of affine maps and reflections) which transform a linear trajectory into another linear trajectory, and at the same time we will describe the corresponding combinatorial operations on cutting sequences. The renormalization will happen in two steps, by first transforming a trajectory on $\M mn$ into one on $\M nm$ (and describing how to transform the corresponding cutting sequence), then mapping a trajectory on $\M nm$ into a new trajectory on $\M mn$ (and a new cutting sequence).
The combinatorial operators which shadow this geometric renormalization at the level of cutting sequences will be our derivation operators, followed by a suitable normalization (given by permutations). Since linear trajectories will be by construction infinitely renormalizable under this procedure, their cutting sequences will have to be infinitely derivable (in the sense made precise in \S \ref{sec:infinitely_derivable} below).
More precisely, we begin this section by describing in \S \ref{sec:der_ex} an example which, for $\M 4 3$, shows geometrically how to renormalize trajectories and their cutting sequences. In \S \ref{sec:der_general} we then define the combinatorial derivation operator $D mn$ and prove that it has the geometric meaning described in the example, which implies in particular that the derived sequence of a cutting sequence on $\M mn$ is a cutting sequence of a linear trajectory on the dual surface $\M nm$. In \S\ref{sec:nor} we define this operator for sequences admissible in the standard sector, and then define a \emph{normalization} operator that maps admissible sequences to sequences admissible in the standard sector. Then, by combining $\Norm nm \circ D mn$ with the operator $\Norm mn \circ D nm$ one gets a derivation operator on cutting sequences for $\M mn$ back to itself. In \S \ref{sec:infinitely_derivable} we use this composition to give the definition of infinitely derivable and prove that cutting sequences are infinitely derivable (Theorem \ref{thm:infinitely_derivable}).
In \S\ref{sec:sectors_sequences}, we use this result to associate to any given cutting sequence an infinite sequence (the sequence of admissible diagrams) which records combinatorial information on the sequence of derivatives. We will explain in the next section \S \ref{farey} how this sequence can be used to provide an analogue of the continued fraction expansion for directions. Finally, in \S\ref{sec:fixedpoints} we characterize the (periodic) sequences which are fixed under the renormalization process, since this description will be needed for the characterization in \S\ref{sec:characterization}.
\subsection{A motivational example: derivation for $\M 43$ geometrically}\label{sec:der_ex}
Let us start with an example to geometrically motivate the definition of derivation.
In \S\ref{sec:affine} we described an explicit affine diffeomorphism $\AD {4}{3} $ from $\M 43$ to $\M 34$, which is obtained by combining a flip, a shear, a cut and paste, a similarity and another shear. The effect of these steps on $\M 43$ is shown in Figure \ref{hexoct}.
\begin{figure}[!h]
\centering
\includegraphics[width=1\textwidth]{HexOct.pdf}
\begin{quote}\caption{The effect of $\AD 43$ from $\M 43$ to $\M 34$. \label{hexoct}} \end{quote}
\end{figure}
Remark that the transformation $\AD {4}{3} $ acts on directions by mapping the standard sector $\Sec {0}43 $ for $\M 43$ to the sector $[\pi/4,\pi]$, which is the complement in $[0,\pi] $ of the standard sector $\Sec {0} 34 $ for $\M 34$. This is shown in Figure \ref{hexoct}, where the image of the standard sector is followed step by step.
In Figure \ref{fig:intertwined}, the preimages of the edges of $\M 34$ (with their edge labels) by $\AD 43 $ are shown inside $\M 43$. Given a trajectory $\tau$ on $\M 43$ in a direction $\theta \in \Sec 043=[0,\pi/3]$ with cutting sequence $w$ in $\LL 43^{\mathbb{Z}}$ with respect to the edge labels of $\M 43$, one can also write the cutting sequence $w'$ of the same trajectory $\tau$ with respect to the edge labels of the pullback of the sides of $\M 34$ by $\AD {4}{3}$. This gives a symbolic sequence $w'$ in $\LL 34^{\mathbb{Z}}$. We want to define a combinatorial derivation operator so that the sequence $w'$ is obtained from the sequence $w$ by derivation.
\begin{example}\label{reprise}
The periodic trajectory $\tau$ on $\M 43$ in Figure \ref{fig:intertwined}a has corresponding cutting sequence $w=\overline{\rd 1678785452}$, and the same trajectory on $\M 34$ has corresponding cutting sequence $w'=\overline{\gr 143476}$.\footnote{Here the overbar indicates a bi-infinite periodic sequence.} We can read off $w'$ on the left side of the figure as the pullback of the sides of $\M 34$, or on the right side of the figure in $\M 34$ itself. The reader can confirm that the path ${\rd 1678785452}$ in $\D 043$ in Figure \ref{34auxtd} collects exactly the arrow labels ${\gr 434761}$.
\end{example}
\begin{figure}[!h]
\centering
\includegraphics[width=200pt]{m43-with-traj.png}
\includegraphics[width=200pt]{m34-sheared-traj.png}
\begin{quote}\caption{A trajectory in $\M 43$ and $\M 34$: The green edges inside $\M 43$ on the left are the preimages under $\AD 43 $ of $\M 34$ on the right. Figure \ref{hd34aux} showed this construction, and here we now show the same trajectory (the black line) on both surfaces. Note that the trajectories in the two surfaces are not parallel; on the left it is at an angle of about $13^{\circ}$, and on the right about $17^{\circ}$. \label{fig:intertwined}} \end{quote}
\end{figure}
In this explicit example, one can check by hand that for each possible transition in $\T 043$ from an edge label of $\M 43$ to another one, either no pullbacks of edges of $\M 34$ are crossed, or exactly one edge is crossed (in general this follows from Lemma \ref{lemma:intertwined}). By writing the edge labels of these edges on top of the arrows in $\T 043$ representing the transition, one obtains exactly the derivation diagram in Figure \ref{34auxtd}a. Thus, consider the \emph{derivation operator} $D {4}{3}$ already mentioned in the introduction. It maps sequences admissible in $\T 043$ to sequences in the $\M 34$ edge labels, and is given by reading off the $\M 34$ edge labels on the arrows of the path described by the original sequence on $ \D 043$. It is clear from this geometric interpretation that the derivation operator $D {4}{3}$ is exactly such that the cutting sequence $w'$ of $\tau$ with respect to the pullback of the $\M 34$ edges satisfies $w'= D {4}{3} w$.
Let us now apply the affine diffeomorphism $\AD {4}{3}$. Then $\M 43$ is mapped to $\M 34$ and the trajectory $\tau$ is mapped to a linear trajectory $\tau'$ on $\M 34$ in a new direction $\theta'$, as shown in Figures \ref{hexoct}-\ref{fig:intertwined}. By construction, the sequence $w'= D {4}{3} w$ is the cutting sequence of a linear trajectory on $\M 34$. Since cutting sequences are admissible, this shows in particular that the derived sequence of a cutting sequence is admissible.
The direction $\theta'$, image of $\theta \in [0,\pi/3]$, belongs to $[\pi/4,\pi]$ by the initial remark on the action of $\AD {4}{3} $ on sectors. Thus, $D {4}{3}$ maps cutting sequences on $\M 43$ in a direction $\theta \in [0,\pi/3]$ to cutting sequences on $\M 34$ in a direction $\theta' \in [\pi/4,\pi]$. By applying a symmetry of $\M 34$ that maps the direction $\theta'$ to the standard sector $[0,\pi/4]$ for $\M 34$, and the corresponding permutation on edge labels, one obtains a new cutting sequence $\Norm 34 w'$ on $\M 34$. The map that sends the direction $\theta$ of $\tau$ to the direction $\theta'$ of $\tau'$ is the \emph{Farey map} $\F 4 3$, which will be described in \S \ref{farey}.
One can then perform a similar process again, starting this time from $\M 34$ and thus showing that $D {3}{4} \Norm 34 w'$ is a cutting sequence of the trajectory $\tau'$ with respect to the edge labels of the pullback of the sides of a sheared $\M 43$ by $\AD {3}{4}$, or equivalently, the cutting sequence of a new trajectory $\tau''$ on $\M 43$. For symmetry, we also apply a final flip $f'$ to reduce once more to trajectories with direction in the standard sector. This second step is shown in Figure \ref{octhex}.
\begin{figure}[!h]
\centering
\includegraphics[width=1\textwidth]{OctHex.pdf}
\begin{quote}\caption{The effect of $\AD 34$ from $\M 34$ to $\M 43$. \label{octhex}} \end{quote}
\end{figure}
In Figure \ref{fullmovie} we show the combined effect of applying $\AD 43$, then a flip, then $\AD 34$, then another flip.
By applying $\AD 34 \circ f \circ \AD 43$, we then obtain a new linear trajectory in $\M 43$ whose cutting sequence is $D 34 \Norm 34 D 43 w$. We can then apply another flip $f'$ to reduce again to a trajectory with direction in $\Sec 0 {4}{3}$ and repeat the same process.
The effect of the composition $f' \circ \AD 34 \circ f \circ \AD 43$, which we call \emph{renormalization} (see Remark \ref{rk:renormalization} below), corresponds to applying derivation twice, with normalization to reduce to the standard sector in between and at the end. One gets in this way an operator from cutting sequences on $\M 43$ back to cutting sequences on $\M 43$. The cutting sequence of the \emph{initial} trajectory with respect to the sides of the image of $\M43$ by this element is the sequence $w$ derived, normalized, then derived and normalized again. Equivalently, $f' \circ \AD 34 \circ f \circ \AD 43$ maps the original trajectory to a new trajectory whose cutting sequence with respect to $\M43$ is the sequence $w$ derived and normalized twice. Thus, deriving and normalizing twice produces cutting sequences. Repeating this renormalization process allows us to show that cutting sequences are infinitely derivable (see \S \ref{sec:infinitely_derivable}).
\begin{remark}\label{rk:renormalization}
We will call \emph{renormalization} the process just described (in the specific case of $m=4$, $n=3$), obtained by applying to $\M mn$ first $\AD mn$, then a flip to reduce to $\Sec 0 nm$ for $\M nm$, then $\AD nm$ and finally another flip. The name \emph{renormalization} must not be confused with the name \emph{normalization}, used to describe just the reduction (by flips) to standard sectors. Renormalization maps trajectories on $\M mn$ back to trajectories on $\M mn$ but with the effect of \emph{shortening} long pieces of trajectories with directions in the standard sector. This follows since the standard sector is \emph{opened up} to the complementary sector, as shown in Figure \ref{fullmovie}.
At the combinatorial level of cutting sequences, the effect of renormalization
corresponds to applying derivation twice, once acting on cutting sequences on $\M mn$, once on $\M nm$, with \emph{normalization} in between and at the end, which acts by applying a permutation on cutting sequences to reduce to sequences admissible in the standard sector. One gets in this way an operator from cutting sequences on $\M mn$ back to cutting sequences on $\M mn$.
The geometric fact that finite pieces of trajectories are shortened by renormalization has, as its combinatorial counterpart, that finite pieces of cutting sequences, when derived, become \emph{shorter}; see Remark \ref{rk:derivation_short}.
\end{remark}
\begin{figure}[!h]
\centering
\includegraphics[width=1\textwidth]{FULLmovie.pdf}
\begin{quote}\caption{The full renormalization from $\M 43$ to itself. \label{fullmovie}} \end{quote}
\end{figure}
\subsection{Derivation operator for general $m,n$}\label{sec:der_general}
We will now define an operator $D {m}{n}$ combinatorially for general $m$ and $n$, and then prove that it admits the
geometric interpretation seen in the example of the previous section. The \emph{derivation operator} $D {m}{n}$ is defined using the \emph{derivation diagram} $\D 0mn$ defined in \S \ref{sec:labeled_def} (see Theorem \ref{tdtheorem}) as follows. Recall that a sequence \emph{admissible} in $\T 0mn$ describes a bi-infinite path on $\D 0mn$ (see Definition \ref{def:transition} of admissibility in \S\ref{sec:transition_diagrams}).
\begin{definition}\label{def:derivation}
Given a sequence $w =(w_i) \in \LL mn ^{\mathbb{Z}}$ admissible in $\T 0mn$, the sequence $D mn w$ is the sequence of labels of the \emph{arrows} of $\D 0mn$ that are crossed by an infinite path on $\D 0mn$ that goes through the vertices labeled by $(w_i)_{i \in \mathbb{Z}}$.
\end{definition}
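In computational terms derivation is again a lookup along the path: if each transition $(w_i, w_{i+1})$ is stored with the label carried by the corresponding arrow of $\D 0mn$ (or with no label, for the unlabeled vertical arrows), the derived word collects the labels encountered. The diagram fragment in this sketch (ours) is invented for illustration.
\begin{verbatim}
# Python sketch: derivation reads off arrow labels along a vertex path.
def derive(word, arrow_labels):
    # arrow_labels: dict (v, w) -> label, or None for unlabeled arrows
    out = []
    for v, w in zip(word, word[1:]):
        lab = arrow_labels[(v, w)]
        if lab is not None:          # vertical arrows contribute nothing
            out.append(lab)
    return out

labels = {(1, 2): "A", (2, 1): "B", (2, 5): None, (5, 2): None}
print(derive([1, 2, 5, 2, 1], labels))   # ['A', 'B']
\end{verbatim}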
Examples were already given in the introduction (Example \ref{ex:der34}) and in the previous section (Example \ref{reprise}). Derivation is well defined as an operator that maps \emph{admissible} sequences in $\LL mn ^{\mathbb{Z}}$ to sequences in $\LL nm ^{\mathbb{Z}}$, by virtue of the following lemma:
\begin{lemma}\label{lem:derivation}
If $w =(w_i)$ is a bi-infinite sequence in $\LL mn$ admissible in $\T 0mn$, then $D mn w$ is a bi-infinite sequence in $\LL nm $. Thus,
the operator $D {m}{n}$ maps sequences in $\LL mn ^{\mathbb{Z}}$ which are \emph{admissible} in $\T 0mn$ to sequences in $\LL nm ^{\mathbb{Z}}$.
\end{lemma}
\begin{proof}
The proof of the Lemma is a consequence of the definition of derivation diagrams.
Let us recall from the structure theorem for derivation diagrams (Theorem \ref{tdtheorem}) that in $\D 0 mn$ the only arrows without edge labels are vertical.
Since a path can travel along at most $m-2$ consecutive vertical arrows, the bi-infinite path described by $w$ crosses at least one horizontal arrow out of every $m-1$ arrows. Thus, for every block of $m-1$ edge labels of $\LL mn$ in $w$ one gets at least one edge label of $\LL nm$ in the derived sequence $D mn w$. It follows that $D mn w$ is also a bi-infinite sequence.
\end{proof}
Derivation is defined so that the following geometric interpretation holds.
\begin{lemma}[Geometric interpretation for derivation.]\label{lem:derivation_interpretation}
If $w$ is the cutting sequence of a linear trajectory on $\M mn$ in a direction $\theta \in \Sec 0 mn$, then the sequence $D mn w$ is the sequence of edge labels of the sides crossed by the same trajectory in the flip-sheared copy of $\M nm$ which is the preimage of $\M nm$ under the affine diffeomorphism $\AD {m}{n}$.
\end{lemma}
The proof of the Lemma is a consequence of the definition of derivation diagrams and of their Structure Theorem \ref{tdtheorem}.
\begin{proof}
Consider the sequence of transitions $w_{i} w_{i+1}$, $i \in \mathbb{Z}$, in $w$, which, since $\theta \in \Sec 0 mn$, are all transitions which appear in $\T 0 m n$. Since $w$ is a cutting sequence, each transition $w_{i} w_{i+1}$ corresponds to a segment of the trajectory $\tau$ that crosses first the edge of $\M mn$ labeled $w_i$, then the edge labeled $ w_{i+1}$. By the definitions of derivation diagrams and derivation, if this segment crosses an edge of the flip-sheared copy of $\M nm$ obtained as preimage by $\AD mn$, the derived sequence $w'$ contains the label of this edge. Thus, the derived sequence is exactly the cutting sequence of $\tau$ with respect to the flip-sheared copy of $\M nm$ in the statement.
\end{proof}
\begin{remark}\label{rk:derivation_short}
As we can intuitively see in Figure \ref{fig:intertwined}, when from the cutting sequence of a trajectory in $\M 43$ we pass to the cutting sequence with respect to the edge labels that are the preimages by $\AD 43$ of $\M 34$, the number of sides crossed reduces.
Combinatorially, this can be seen on the derivation diagram $\D 0 mn$ in general, by remarking that horizontal arrows have exactly one label, while vertical arrows have none.
This means that when we consider a subsequence that comes from a finite path which travels along horizontal arrows, it will have the same number of labels after derivation, while if the subsequence contains also vertical arrows, the length of the subsequence after derivation will be shorter.
Thus, \emph{the effect of derivation on finite subsequences of a word is not to increase their length}.
\end{remark}
From Lemma \ref{lem:derivation_interpretation} we will now deduce that cutting sequences are derivable in the following sense. Recall that the permutations $\perm i mn$ introduced in \S \ref{sec:normalization} map sequences admissible in $\T i mn $ to sequences admissible in $\T 0mn $.
\begin{definition}\label{def:derivable}
A sequence $w$ admissible in $\T 0mn $ is \emph{derivable} if $D mn (w)$ is admissible in some diagram $\T i nm$. A sequence $w$ admissible in $\T i mn $ is derivable if $\perm i mn w$ is derivable.
\end{definition}
\begin{proposition}\label{thm:derivable}
Cutting sequences of linear trajectories in $\M mn$ are derivable. Furthermore:
\begin{enumerate}
\item The derived sequence $D mn (w)$ of a cutting sequence $w$ on $\M mn$ is the cutting sequence of a trajectory in $\M nm$.
\item If $w$ is admissible in $\T 0mn $, then $D mn (w)$ is admissible in some $\T i nm $ with $1\leq i \leq m-1$.
\end{enumerate}
\end{proposition}
\begin{proof}
We will first prove the first claim.
Normalizing by $\perm i mn$, we can assume without loss of generality that $w$ is admissible in $\T 0mn $ (recall from Definition \ref{def:derivable} the notion of derivable in other sectors). The proof follows from Lemma \ref{lem:derivation_interpretation} by applying the affine diffeomorphism $\AD {m}{n}$: $\AD {m}{n}$ maps the flip-sheared copy of $\M nm$ to the semi-regular presentation of $\M nm$, and the trajectory $\tau$ with cutting sequence $w$ to a new trajectory $\tau'$ on $\M nm$. This trajectory has cutting sequence $D mn w$ by Lemma \ref{lem:derivation_interpretation}.
Since we just showed that the derived sequence is a cutting sequence and cutting sequences are admissible (see Lemma \ref{admissiblelemma}),
it follows that cutting sequences are derivable (according to Definition \ref{def:derivable}).
Finally, the second claim in the Proposition follows by showing that the sector $\Sec 0 m n$ is mapped by $\AD {m}{n}$ to $[0,\pi]\backslash \Sec 0 n m $. This can be verified by describing the image of $\Sec 0 m n$ under each of the elementary maps comprising $\derAD mn$ (see for example Figure \ref{octhex}).
\end{proof}
\subsection{Normalization}\label{sec:nor}
After deriving a derivable sequence using $D mn$, we now want to apply derivation again, this time using the operator $D nm$ for $\M nm$.
Since we defined derivation only for sequences admissible in $\T 0 mn$, while derived sequences are admissible in a diagram $\T i nm$ with $i \neq 0$ (by the second claim of Proposition \ref{thm:derivable}), in order to apply derivation one more time we first want to \emph{normalize} the derived sequence as follows.
\begin{definition}
Given a sequence $w = (w_j)_{j \in \mathbb{Z}}$ which is admissible in diagram $\T i mn$, the \emph{normalized sequence} which will be denoted by $\Norm mn w$ is the sequence $\Norm mn w = (\perm i mn w_j)_{j \in \mathbb{Z}}$. Thus, the operator $\Norm mn$ maps sequences admissible in $\T i mn$ to sequences admissible in $\T 0 mn$.
\end{definition}
Let us remark that a sequence $w$ can in principle be admissible in more than one diagram $\T i mn$. In this case, we can use any of the corresponding $\perm i mn$ to normalize $w$. On the other hand, we will apply $\Norm mn$ to cutting sequences and one can show that if a \bm cutting sequence is not \emph{periodic}, then it is admissible in a \emph{unique} diagram $\T i mn$ (Lemma \ref{lemma:uniqueness_sector}), hence $\Norm mn w$ is uniquely defined.
\subsection{Cutting sequences are infinitely derivable}\label{sec:infinitely_derivable}
Let $w$ be a derivable sequence in $\T 0 mn$. Then by definition of derivability, $D mn w$ is admissible in some $\T i nm$. By normalizing, $\Norm nm D m n w$ is admissible in $\T 0 nm$ and we can now apply $D nm $.
\begin{definition}\label{def:infinitely_derivable}
A sequence $w$ in $\LL mn ^{\mathbb{Z}}$ is \emph{infinitely derivable} if it is admissible and, by alternately applying normalization and derivation operators, one always gets admissible sequences, i.e. for any even integer $k=2l$,
$$ \underbrace{(D nm \Norm nm D mn \Norm mn ) \cdots (D nm \Norm nm D mn \Norm mn )}_{\text{$l$ times }} w$$
is admissible in some $\T i mn $ for some $0\leq i \leq n-1$,
and for any odd integer $k=2l+1$,
$$ \underbrace{(D mn \Norm mn D nm \Norm nm )\cdots (D mn \Norm mn D nm \Norm nm ) }_{\text{$l$ times }} D mn \Norm mn w$$
is admissible in some $\T i nm $ for some $0\leq i \leq m-1$.
\end{definition}
We can now show that cutting sequences are infinitely derivable.
\begin{theorem}\label{thm:infinitely_derivable}
Let $w$ be a cutting sequence of a bi-infinite linear trajectory on $\M mn$. Then $w$ is \emph{infinitely derivable} in the sense of Definition \ref{def:infinitely_derivable}.
\end{theorem}
\begin{proof}
We will prove, by induction on the number $k$ of times one has derived and normalized the sequence $w$, that the resulting sequence $w^{k}$ is admissible (in some $\T i mn $ for $k$ even, in some $\T i nm $ for $k$ odd). Assume that we have proved it for $k$; say that $k$ is odd, the other case being analogous. In this case $w^{k}$ is a cutting sequence of a trajectory $\tau^k$ in some sector $\Sec i nm$. Since $\Norm nm $ acts by the permutation $\perm i nm$, which is induced by an isometry of $\M nm$, $\Norm nm w^k$ is also a cutting sequence, namely of the trajectory obtained by applying this isometry to $\tau^k$, which belongs to the standard sector $\Sec 0 n m$. By the first part of Proposition \ref{thm:derivable}, the sequence $w^{k+1}:=D nm (\Norm nm w^{k})$ is again a cutting sequence. Since cutting sequences are admissible (Lemma \ref{admissiblelemma}) this concludes the proof.
\end{proof}
\subsection{Sequences of admissible sectors}\label{sec:sectors_sequences}
Let $w$ be a cutting sequence of a bi-infinite linear trajectory $\tau$ on $\M mn$.
Since by Theorem \ref{thm:infinitely_derivable} $w$ is infinitely derivable, one can alternatively derive it and normalize it to obtain a sequence $(w^k)_k$ of cutting sequences. More precisely:
\begin{definition}\label{def:derivatives}
The \emph{sequence of derivatives} $(w^k)_k$ starting from a cutting sequence of a bi-infinite linear trajectory $w$ on $\M mn$, is the sequence recursively defined by:
\begin{equation*}
w^0:=w, \qquad w^{k+1} := \begin{cases} D nm (\Norm nm w^k) , & k \text{ odd} ; \\ D mn (\Norm mn w^k), & k \text{ even} . \end{cases}
\end{equation*}
\end{definition}
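The bookkeeping in this definition is summarized by the following sketch (ours), in which the four operators are placeholders for $D mn$, $D nm$, $\Norm mn$ and $\Norm nm$; only the alternation pattern between the two dual surfaces is shown.
\begin{verbatim}
# Python sketch: the alternation defining the sequence of derivatives.
def derivatives(w, k_max, D_mn, D_nm, N_mn, N_nm):
    ws = [w]                             # ws[k] plays the role of w^k
    for k in range(k_max):
        if k % 2 == 0:                   # w^k is a sequence on M_{mn}
            ws.append(D_mn(N_mn(ws[-1])))
        else:                            # w^k is a sequence on M_{nm}
            ws.append(D_nm(N_nm(ws[-1])))
    return ws

# toy check with placeholder operators that append a prime symbol:
print(derivatives("w", 2, lambda s: s + "'", lambda s: s + "'",
                  lambda s: s, lambda s: s))   # ['w', "w'", "w''"]
\end{verbatim}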
This sequence is well-defined by Theorem \ref{thm:infinitely_derivable}, and by the same Theorem,
$(w^k)_k$ are all admissible sequences
in at least one sector. Furthermore, if $w$ is non-periodic, each $w^k$ is admissible in a unique sector by Lemma \ref{lemma:uniqueness_sector}.
We now want to record after each derivation the sectors in which the cutting sequences $w^k$ are admissible. By Proposition \ref{thm:derivable}, for $k$ even $w^k$ is admissible in (at least one) of the sectors $\Sec i m n $ where $1\leq i \leq n-1$, while for $k$ odd $w^k$ is admissible in (at least one) of the sectors $\Sec i nm $ where $1\leq i \leq m-1$. Let us hence define two sequences of indices in $m-1$ and $n-1$ symbols respectively as follows.
\begin{definition}[Sequences of admissible sectors]\label{def:seq_sectors}
Let $w$ be a
cutting sequence of a linear trajectory on $\M mn$ in the standard sector $\Sec 0 mn$. Let us say that the two sequences $(a_k)_k \in \{1, \dots , m-1\}^\mathbb{N}$ and $(b_k)_k \in \{1, \dots ,n-1\}^\mathbb{N}$ are a \emph{pair of sequences of admissible sectors associated to $w$} if
\begin{itemize}
\item
$w^{2k-1}$ is admissible in $\Sec {a_k} nm$ for any $k\geq 1$;
\item
$w^{2k}$ is admissible in $\Sec {b_k} mn $ for any $k\geq 1$.
\end{itemize}
\end{definition}
Thus, the sequence of admissible sectors for $w$, i.e. the sequence of sectors in which the derivatives $w^1, w^2, \dots$ are admissible, is given by
$$ \Sec {a_1} {n}{m}, \Sec {b_1} {m}{n}, \Sec {a_2} {n}{m}, \Sec {b_2} {m}{n},\dots , \Sec {a_k} {n}{m},\Sec {b_k} {m}{n}, \dots .$$
We remark that the sequences of admissible sectors associated to a cutting sequence $w$ are \emph{unique} as long as $w$ is non-periodic, by virtue of Lemma \ref{lemma:uniqueness_sector}.
\begin{convention}
If $w$ is a cutting sequence of a linear trajectory $\tau$ on $\M mn$ in a sector different from the standard one, we will denote the sector index by $b_0$, so that the direction of $\tau$ belongs to $\Sec {b_0} mn$, where $0\leq b_0 \leq 2n-1$.
\end{convention}
In \S \ref{sec:itineraries_vs_sectors}, after introducing the \bm Farey map $\F mn$, we will show that this pair of sequences is related to a symbolic coding of the map $\F mn$ and can be used to reconstruct from the sequence $w$, via a continued-fraction-like algorithm, the direction of the trajectory of which $w$ is a cutting sequence (see \S \ref{sec:direction_recognition}, in particular Proposition \ref{prop:itineraries_vs_sectors}).
\subsection{Sequences fixed by renormalization}\label{sec:fixedpoints}
For the characterization of the closure of cutting sequences, we will also need the following characterization of periodic sequences, which are fixed points of our renormalization procedure.
Let us first remark that it makes sense to consider the restriction of the operators $ \Norm mn D nm $ and $ \Norm nm D mn $ to subwords of cutting sequences, by following up in the process how a subword $u$ of the bi-infinite cutting sequence $w$ changes under derivation (some edge labels in $u$ will be dropped, while others will persist) and under normalization (a permutation will act on the remaining edge labels). If $w' $ is obtained from a cutting sequence $w$ by applying a sequence of operators of the form $ \Norm mn D nm $ and $ \Norm nm D mn $ and $u'$ is the subword (possibly empty) obtained by following a subword $u$ of $w$ in the process, we will say that \emph{$u'$ is the image of $u$ in $w'$}.
\begin{lemma}\label{periodicABAB}
Let $w$ be the cutting sequence of a linear trajectory on $\M mn$ admissible in the standard sector $\Sec 0 mn$. If $w$ is fixed by our renormalization procedure, i.e.
$$ \Norm mn D nm \Norm nm D mn w = w , $$
then $w$ is an infinite periodic word of the form $\dots n_1 n_2 n_1 n_2 \dots$ for some edge labels $n_1, n_2 \in {\LL mn} $.
Furthermore, if $u$ is a subword of a cutting sequence $w$ on $\M mn$ such that the image $u'$ of $u$ in $w' = \Norm mn D nm \Norm nm D mn w$ has the same length (i.e. the same number of edge labels) as $u$, then $u$ is a finite subword of the infinite periodic word $\dots n_1 n_2 n_1 n_2 \dots$ for some edge labels $n_1, n_2 \in {\LL mn}$.
\end{lemma}
\begin{proof}
Let us first remark that it is enough to prove the second statement: if $ w=w'$, where $w'$ is by definition $\Norm mn D nm \Norm nm D mn w $, then in particular for any finite subword $u$ of $w$ the image $u'$ of $u$ in $w'$ has the same number of edge labels as $u$. Thus, considering arbitrarily long finite subwords of $w$, the second statement implies that $w$ is the infinite word $\dots n_1 n_2 n_1 n_2 \dots$.
Let $w$ be a cutting sequence of a linear trajectory $\tau$ on $\M mn$. Without loss of generality we can assume that $\tau$ is in direction $\Sec 0 mn$, as the second part of the statement does not change up to applying the relabeling induced by permutations. Thus, we can assume that the cutting sequence $w$ (by definition of the transition diagrams) describes an infinite path in $\T 0 mn$.
Consider now the same path in the derivation diagram $\D 0 mn$. Any given finite subword $u$ of $w$ corresponds to a finite path on $\D 0 mn$. If we assume that the image $u'$ of $u$ in $D mn w $ has the same number of edge labels as $u$, the path cannot cross any vertical arrow, as otherwise $u'$ would be shorter (see Remark \ref{rk:derivation_short}). Thus, the path described by $u$ must consist of arrows all belonging to the same row of $\D 0 mn$.
Without loss of generality, we can assume that the finite path described by $u$ contains the arrow $r_1$ in the piece of diagram drawn here below.
\begin{figure}[!h]
\centering
\includegraphics[width=180pt]{diagram-piece.png}
\end{figure}
Since it cannot contain vertical arrows, the only arrows that can follow $r_1$ in the path are either $l_2$ or $r_2$. Correspondingly, the sequences of edge labels that can appear in the word are either $ n_1n_2n_1 $ or $n_1n_2n_3 $. If we prove that the latter case leads to a contradiction, then by repeating the same type of argument, we get that the path must go back and forth between the edge labels $n_1$ and $n_2$, and hence the word $u$ is a finite subword of the infinite periodic word $\dots n_1 n_2 n_1 n_2 \dots$.
Let us assume that the arrow $r_1$ is followed by $r_2$ and show that this leads to a contradiction. Let us denote by $n_4$ and $n_5$ the edge labels of $r_1$ and $r_2$ respectively in the derivation diagram $\D 0 mn$. Then, by definition of the derivation operator $D mn$, the image $u'$ of $u$ in $D mn w$ will contain the string $ \dots n_4 n_5 \dots$.
We claim that the transition diagram $\T 0 nm$ (for the standard sector of the dual surface $\M nm$) will contain a piece that looks like the following figure, up to changing the direction of the arrows:
\begin{figure}[!h]
\centering
\begin{tikzcd}
{n_4}\arrow[bend right]{r} \arrow{d}
&{} \arrow[bend right]{l} \\
{}\arrow[bend right]{r}
&{n_5} \arrow[bend right]{l} \arrow{u} \\
\end{tikzcd} \ \ \ \ \
\end{figure}
In particular we claim that the edge labels $n_4,n_5$ are located at opposite vertices in $\T 0 nm$, as shown in the figure above. This can be deduced from the Structure Theorem \ref{tdtheorem}, by looking at how the labels of the arrows of the derivation diagram $\D 0 mn$ snake in Figure \ref{gentransdiag} and comparing them with the labels of the
transition diagram $\T 0 nm$.
For a concrete example, refer to Figure \ref{34auxtd}: pairs of labels of arrows like $r_1$ and $r_2$ are, for example, {\color{green}$2$} and {\color{green}$8$}, or {\color{green}$6$} and {\color{green}$2$}, which indeed lie at opposite vertices in the derivation diagram for $\M 34$, and one can verify from Figure \ref{gentransdiag} that this is indeed always the case. In particular, $n_4,n_5$ \emph{do not} belong to the same row of $\T 0 nm$.
We know that the derived sequence $w'= D mn w$ is the cutting sequence of a linear trajectory on $\M nm$ and that it is admissible in (at least one) transition diagram $\T i nm$ for some $1\leq i \leq m-1$ (see Proposition \ref{thm:derivable}). This means that there will be an arrow between the two vertices labeled $n_4$ and $n_5$ in the transition diagram $\T i nm$.\footnote{We remark that this does not yet imply that there has to be an arrow connecting $n_4, n_5$ in $\T 0 nm$, and indeed this does not have to be the case in general.}
Let us now apply the normalization operator $\Norm nm$, which corresponds to acting by the permutation $\perm i nm $ for some $1\leq i \leq m-1$ on the labels in $w'$. Denote by $n_4',n_5'$ the images of the labels $n_4, n_5$.
Since $\Norm nm D mn w$ is admissible on $\T 0 nm$ and contains the transition $n_4' n_5'$, by construction $n_4'$ and $n_5'$ must be connected by an arrow of $\T 0 nm$. Furthermore, the assumption on $u $ (that the image of $u$ in $w' = \Norm mn D nm \Norm nm D mn w$ has the same length as $u$) implies that $n_4'$ and $n_5'$ also have to be on the same row of $\T 0 nm$ (otherwise the image of $u$ in $w'$ would be shorter than $u$ because of the effect of $D nm$, see Remark \ref{rk:derivation_short}). This means that also $n_4$ and $n_5$ were on the same row of $\T i nm$ (since by definition of $\perm i nm $, $n_4'= \pi_i(n_4)$ and $n_5'= \pi_i(n_5)$ are the labels on $\T 0 nm$ of the vertices which were labeled $n_4$ and $n_5$, respectively, on $\T i nm$).
By Corollary \ref{preserves-rows-cor}, the action of $(\perm i nm) ^{-1}= \perm i nm$ ($\perm i nm$ are involutions since the reflections $\refl i nm$ are) maps the transition diagram $\T i nm$ to $\T 0 nm$ by mapping labels on the same rows to labels on the same row. Thus, we get that $n_4$ and $n_5$, which we just said are on the same row of $\T i nm$, are also on the same row of $\T 0 nm$, in contradiction with what we proved earlier (see the above Figure, that shows that $n_4$ and $n_5$ are \emph{not} on the same row on $\T 0 nm$). This concludes the proof that $u$ has the desired form and hence, by the initial remark, the proof of the Lemma.
\end{proof}
Finally, for the characterization we will also need to use that all sequences which have the form of fixed sequences under derivation, i.e. of the form $\dots n_1n_2n_1n_2 \dots$, are actually cutting sequences:
\begin{lemma}\label{realizecs}
Given a transition $n_1n_2$, such that the word $n_1n_2n_1n_2$ is admissible in some diagram $\T i mn$,
the periodic sequence of form $\dots n_1n_2n_1n_2 \dots$ is the cutting sequence of a periodic trajectory in $\M mn$.
\end{lemma}
\begin{proof}
If $n_1n_2n_1n_2$ is admissible in a diagram $\T i mn$, then both $n_1n_2$ and $n_2n_1$ are admissible in that diagram, so the labels $n_1$ and $n_2$ are connected by arrows in both directions; hence $n_1$ and $n_2$ must be adjacent and on the same row of the diagram $\T i mn$.
We recall from Section \ref{hooperdiagrams} that white vertices in a Hooper diagram correspond to horizontal cylinders and black ones correspond to ``vertical'' cylinders.
Edges around a vertex correspond to (possibly degenerate) basic rectangles composing the cylinder.
Moreover, sides in the polygonal representation are diagonals of the basic rectangles which correspond to horizontal edges in the Hooper diagram.
Using this last fact, we can label the horizontal edges of a Hooper diagram with the edge labels of the polygonal representation (see Figure \ref{hd-labeled}) and we proved in Section \ref{transitiondiagrams} that the labels of the Hooper diagram have the same structure as the transition diagram in the standard sector (see Figure \ref{construct2}).
The latter observation means that since $n_1$ and $n_2$ were adjacent and in the same row of $\T i mn$, they will label two adjacent horizontal edges of the Hooper diagram.
Let us consider first the case when $n_1$ and $n_2$ label horizontal edges of the Hooper diagram which share a white vertex as a common endpoint.
They will hence correspond to the two basic rectangles of the horizontal cylinder represented by the white vertex in the Hooper diagram, which have the sides labeled by $n_1$ and $n_2$ as diagonals.
Consider now a horizontal trajectory in the polygon contained in this horizontal cylinder. By looking at the way the arrows in the Hooper diagram follow each other around a white vertex, we can see that the horizontal trajectory will cross in cyclical order the four (possibly degenerate) basic rectangles forming the cylinder, crossing first a basic rectangle which does not contain sides of the polygonal representation (corresponding to a vertical edge in the Hooper diagram), then the one with a diagonal labeled by $n_1$ (corresponding to a horizontal edge in the Hooper diagram), then another basic rectangle which does not contain any side (corresponding to the other vertical edge) and finally the one with a diagonal labeled by $n_2$.
This means that the cutting sequence of such a trajectory is the periodic sequence $\ldots n_1n_2n_1n_2\ldots$.
Recalling that ``vertical'' means vertical in the orthogonal decomposition, and at an angle of $\pi/n$ in the polygon decomposition (see Figure \ref{hexort1} for the correspondence), the same argument can be used for the case when $n_1$ and $n_2$ label horizontal edges of the Hooper diagram connected by a black vertex.
This proves that a trajectory at angle $\pi/n$ across the cylinder represented by the black vertex also has cutting sequence $\ldots n_1n_2n_1n_2\ldots$.
\end{proof}
\section{The \bm Farey maps} \label{farey}
We will describe in this section a one-dimensional map that describes the effect of renormalization on the direction of a trajectory. We call this map the \emph{\bm Farey map}, since it plays an analogous role to the Farey map for Sturmian sequences. We define the \bm Farey map $\FF mn$ in two steps, i.e. as composition of the two maps $\F mn$ and $\F nm $, which correspond respectively to the action of $D mn $ and $D nm $, each composed with normalization.
\subsection{Projective transformations and projective coordinates}\label{sec:projective}
Let us introduce some preliminary notation on projective maps.
Let $\mathbb{R}\mathbb{P}^1$ be the space of lines in $\R^2$. A line in $\R^2$ is determined by a non-zero column vector with coordinates $x$ and $y$. There are two \emph{coordinate systems} on $\mathbb{R}\mathbb{P}^1$ which will prove to be useful in what follows and that we will use interchangeably. The first is the \emph{inverse slope coordinate}, $u$. We set $u((x,y))=x/y$. The second useful coordinate is the \emph{angle coordinate} $\theta \in [0, \pi]$, where $\theta$ corresponds to the line generated by the vector with coordinates $x=\cos(\theta)$ and $y=\sin(\theta)$. Note that since we are parametrizing lines rather than vectors, $\theta$ runs from $0$ to $\pi$ rather than from $0$ to $2\pi$.
An interval in $\mathbb{R}\mathbb{P}^1$ corresponds to a collection of lines in $\R^2$. We will think of such an interval as corresponding to a sector in the upper half plane (the same convention is adopted in \cite{SU, SU2}).
\begin{convention}
We will still denote by $ \Sec i mn$ for $0\leq i \leq n-1$ (see Definition \ref{sectordef})
the sector of $\mathbb{R}\mathbb{P}^1$
corresponding to the angle coordinate sector $ \left[ i \pi /n , (i+1)\pi/n \right]$ for $i=0,\dots, n-1$, each of length $\pi/n$ in $[0,\pi]$.
We will abuse notation by writing $u \in \Sec i mn$ or $\theta \in \Sec i mn$, meaning that the coordinates belong to the corresponding interval of coordinates.
\end{convention}
A linear transformation of $\R^2$ induces a \emph{projective transformation} of $\mathbb{R}\mathbb{P}^1$ as follows. If $L= \left(\begin{smallmatrix}a& b \\ c& d \end{smallmatrix} \right)$ is a matrix in $GL(2, \mathbb{R})$, the induced projective transformation is given by the associated \emph{linear fractional transformation} $ L[x]= \frac{ax+b}{cx+d}$. This linear fractional transformation records the action of $L$ on the space of directions in inverse slope coordinates. Let $PGL(2,\R)$ be the quotient of $GL(2, \mathbb{R})$ by all homotheties $\{ \lambda I,\ \lambda \in \mathbb{R}\}$, where $I$ denotes the identity matrix. Remark that the linear fractional transformation associated to a homothety is the identity.
The \emph{group of projective transformations} of $\mathbb{R}\mathbb{P}^1$ is $PGL(2,\R)$.
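To fix conventions, here is a minimal numerical sketch in Python (the helper names are ours, purely for illustration) of the two coordinate systems and of the projective action just described:
\begin{verbatim}
import math

def act_inverse_slope(L, u):
    # L = ((a, b), (c, d)) acts on the inverse slope u = x/y by the
    # linear fractional transformation u -> (a*u + b)/(c*u + d)
    (a, b), (c, d) = L
    return (a * u + b) / (c * u + d)

def theta_to_u(theta):
    # angle coordinate -> inverse slope coordinate: u = cot(theta)
    return math.cos(theta) / math.sin(theta)

def u_to_theta(u):
    # inverse slope coordinate -> angle coordinate in (0, pi)
    return math.atan2(1.0, u)

# a homothety lambda*I induces the identity on RP^1:
assert abs(act_inverse_slope(((3, 0), (0, 3)), 0.5) - 0.5) < 1e-12
\end{verbatim}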
\subsection{The projective action $\F mn$ of $\AD mn$} \label{sec:2farey}
Let us recall that in \S \ref{sec:affine} we defined an affine diffeomorphism $\AD m n$ from $\M mn$ to $\M nm$ which acts as a \emph{flip and shear}. The linear part of $\AD mn$ is the $SL(2, \R)$ matrix
$\derAD m n $ in \eqref{def:derivativeAD}, obtained as the product $\shear nm \diag{m}{n} \shear{m}{n} f$ of the matrices in \eqref{def:generalmatrices}.
The diffeomorphism $\AD m n$ acts projectively by sending the standard sector $\Sec 0 mn$ to the complement $(\pi/m, \pi)$ of the standard sector $\Sec 0 nm$.
This is shown for $m=4,n=3$ in Figures \ref{hexoct} and \ref{octhex}, for $\AD4 3$ and $\AD 3 4 $ respectively, where the effect of each elementary matrix in the product giving $\derAD m n $ is illustrated.
Let $\refl i nm$ for $i=0, \dots m-1$ be the isometry of $\M nm$ described in \S \ref{transitiondiagrams}, which maps $\Sec i nm$ to $\Sec 0 nm$. We will abuse the notation and also denote by $\refl i nm$ the matrix in $PGL(2,\R)$ which represents them (see Example \ref{ex:refl_matrices} for the matrices $\refl i 4 3$ for $\M 43$). We stress that when we consider products of the matrices $\refl i nm$ we are always thinking of the product as representing the corresponding coset in $PGL(2,\R)$.
Let us define the map $\F m n$ so that it records the projective action of $ \AD m n$ on the standard sector $\Sec 0 mn$, composed with \emph{normalization}. Let us recall from \S\ref{sec:normalization} that we normalize trajectories in $\Sec j n m$ by applying the reflection $\refl j nm $ that maps them to $\Sec 0 n m$ (see Definition \ref{def:reflections}). Thus, we have to compose $\derAD m n $ with $\refl j nm $ exactly when the image under $\derAD m n $ is contained in $\Sec j nm$. Let us hence define the subsectors $\Subsec j mn \subset \Sec 0 mn$ for $1\leq j \leq m-1$ which are given in inverse slope coordinates by
\begin{equation} \label{def:subsectors}
\Subsec j mn : = \{
({\derAD m n})^{-1} [u], u \in \Sec j nm \} \quad \text{for} \quad 1 \leq j \leq m-1.
\end{equation}
\begin{remark}\label{rk:subsec}
Thus, $u \in \Subsec j mn$ iff $\derAD m n [u] \in \Sec j nm$.
\end{remark}
We can then define the map $\F m n : \Sec 0 mn \to\Sec 0 nm $ to be the piecewise-projective map whose action on the subsector of directions corresponding to $\Subsec j mn$ is given by
the projective action of $\refl j nm \derAD m n $, that is, in inverse slope coordinates, by the following piecewise linear fractional transformation:
\begin{equation} \label{def:Fmn}
\F mn (u) = \refl j nm \derAD m n [u ] = \frac{a_j u + b_j}{c_j u + d_j}, \qquad \mathrm{where} \, \, \begin{pmatrix} a_j & b_j \\ c_j & d_j \end{pmatrix}:= \refl j n m \derAD m n, \quad \mathrm{for\ } u \in \Subsec j mn , \ 1 \leq j \leq m-1.
\end{equation}
The action in angle coordinates is obtained by conjugating by $\cot : [0,\pi] \rightarrow \mathbb{R}$, so that for $\theta$ with $u = \cot (\theta) \in \Subsec j mn$ we have $\F mn (\theta) = \cot^{-1}\left( \frac{a_j \cot(\theta) + b_j}{c_j \cot(\theta) + d_j}\right)$. Let us remark that the change from the coordinates $u$ to $\theta$ through the cotangent reverses orientation.
In Figure \ref{fareymap} we show the graphs of the map $\F 43$ and $\F 34$ in angle coordinates.
\begin{figure}[!h]
\centering
\includegraphics[width=.44\textwidth]{farey43m.pdf} \ \ \ \ \
\includegraphics[width=.44\textwidth]{farey34m.pdf}
\begin{quote}\caption{The Farey maps $\F 43$ and $\F 34$.\label{fareymap}} \end{quote}
\end{figure}
Remark that since the image of the standard sector $\Sec 0 m n$ by $\F m n$ is contained in the standard sector $\Sec 0 n m$, we can compose $\F m n$ with $\F n m$.
\begin{definition}\label{def:FareyFF}
The \emph{\bm Farey map} $\FF m n: \Sec 0 mn \to \Sec 0 mn$ for \M mn is the composition $\FF m n := \F n m \circ \F m n$ of the maps $\F m n$ and $\F n m$ given by \eqref{def:Fmn}.
\end{definition}
In Figure \ref{ffareymap} we show the graphs of the maps $\FF 4 3$ and $\FF 34$ in angle coordinates.
The map $\FF m n $ is pointwise expanding, but not uniformly expanding since the expansion constant tends to $1$ at the endpoints of the sectors. Since each branch of $\FF m n $ is monotonic, the inverse maps
of each branch
are well defined.
\begin{figure}[!h]
\centering
\includegraphics[width=.44\textwidth]{farey44.png} \ \ \ \ \
\includegraphics[width=.44\textwidth]{farey33.png}
\begin{quote}\caption{The Farey maps $\FF 43$ and $\FF 34$. \label{ffareymap}} \end{quote}
\end{figure}
\subsection{Itineraries and sectors}\label{sec:itineraries_vs_sectors}
The \bm Farey map $\FF m n$ has $(m-1)( n-1) $ branches. The intervals of definitions of these branches are the following subsectors of $\Sec 0 m n $ that we will denote by $\Subsubsec i j m n$ for $1\leq i \leq m-1, 1\leq j \leq n-1$:
\begin{equation}\label{def:subsubsec}
\Subsubsec i j m n = \Subsec i mn \cap (\F m n)^{-1} (\Subsec j nm), \qquad i=1, \dots, m-1, \quad j =1, \dots, n-1.
\end{equation}
Thus, explicitly,
\begin{equation} \label{explicitdefFareyFF}
\FF m n(u) = \refl {j} m n \derAD nm \refl {i} n m \derAD mn [u] \qquad \text{iff} \ u \in \Subsubsec i j m n.
\end{equation}
Remark that if $\theta \in \Subsubsec i j m n$, then the
affine diffeomorphism $\AD m n$ sends the direction $\theta$ to a direction $\theta'$ in the sector $\Sec i n m$ and then, after normalizing the direction $\theta'$ to the direction $\refl i nm [\theta']$ in the standard sector $\Sec 0 n m$, the
affine diffeomorphism $\AD n m$ sends it to a direction $\theta''$ in the sector $\Sec j m n$. Thus the indices $i, j$ record the visited sectors.
\smallskip
Let us \emph{code} the orbit of a direction under $\FF m n$ as follows.
\begin{definition}[Itinerary]\label{itinerarydef}
To any $\theta\in \Sec 0 mn$ we can assign two sequences $(a_k)_k \in \{1, \dots , m-1\}^\mathbb{N}$ and $(b_k)_k \in \{1, \dots , n-1\}^\mathbb{N}$ defined by
\begin{equation*}
{\left(\FF m n\right)}^{k-1}(\theta)\in \Subsubsec {a_k} {b_k} m n \text{ for each } k \in \mathbb{N}.
\end{equation*}
We call the sequence $\left((a_k, b_k) \right)_k $ the \emph{itinerary} of $\theta$ under $\FF m n$.
\end{definition}
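Numerically, the itinerary of a direction can be computed by alternating the two half-steps of the \bm Farey map. The following is a minimal sketch, where \texttt{F\_mn}, \texttt{F\_nm} are hypothetical implementations of the maps in \eqref{def:Fmn} and \texttt{subsec\_mn(u)}, \texttt{subsec\_nm(u)} are hypothetical helpers returning the index of the subsector containing $u$:
\begin{verbatim}
def itinerary(u, F_mn, F_nm, subsec_mn, subsec_nm, length):
    # compute the first `length` pairs (a_k, b_k) of the itinerary
    # of u under FF_mn = F_nm o F_mn
    pairs = []
    for _ in range(length):
        a = subsec_mn(u)   # u lies in Subsec_a^{m,n}
        u = F_mn(u)        # first half-step, lands in Sec_0^{n,m}
        b = subsec_nm(u)   # F_mn(u) lies in Subsec_b^{n,m}
        u = F_nm(u)        # second half-step, back in Sec_0^{m,n}
        pairs.append((a, b))
    return pairs
\end{verbatim}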
Let us recall that in \S \ref{sec:sectors_sequences} given a cutting sequence $w$, by performing derivation and normalization, we assigned to it a pair of sequences recording the sectors in which derivatives of $w$ are admissible (see Definition \ref{def:seq_sectors}), uniquely when $w$ is non-periodic (by Lemma \ref{lemma:uniqueness_sector}). These sequences are related to itineraries of $\F m n$ as follows:
\begin{proposition}\label{prop:itineraries_vs_sectors}
Let $w$ be a non-periodic cutting sequence of a bi-infinite linear trajectory on $\M mn$ in a direction $\theta$ in $\Sec 0 mn$. Let
$(a_k)_k \in \{1, \dots , m-1\}^\mathbb{N}$ and $(b_k)_k \in \{1, \dots , n-1\}^\mathbb{N}$ be the pair of sequences of admissible sectors associated to $w$ (see Definition \ref{def:seq_sectors}).
Then the sequence $\left((a_k, b_k) \right)_k $ is the \emph{itinerary} of $\theta$ under $\FF m n$.
\end{proposition}
The proof is based on the fact that the \bm Farey map shadows at the projective level the action of the geometric renormalization procedure that is behind the combinatorial derivation and normalization procedure on cutting sequences.
\begin{proof}
Let $(w^k)_k$ be the sequence of derivatives of $w$ given by Definition \ref{def:derivatives}. Remark that since $w$ is not periodic, none of its derivatives $w^k$ is periodic. Thus, since $w^k$ is non-periodic, it is admissible in a unique diagram, which (by definition of $(a_k)_k, (b_k)_k$ as sequences of admissible sectors) is $\T {a_j} {n}{m}$ for $k=2j-1$ odd and $\T {b_j} {m}{n}$ for $k=2j$ even. Thus, the sequence $(u^k)_k$ of \emph{normalized derivatives}, given by $u^k := \Norm nm w^k$ for $k$ odd and $u^k := \Norm mn w^k$ for $k$ even, is well defined and is explicitly given by $u^k= \perm {a_j} {n}{m} w^k$ for $k=2j-1$ odd and $u^k= \perm {b_j} {m}{n} w^k$ for $k=2j$ even.
From Proposition \ref{thm:derivable} we know that, for any $k$, $w^k$ is the cutting sequence of a trajectory $\tau^k$ and thus $u^k$ is the cutting sequence of a trajectory $\overline{\tau}^k$ in the standard sector obtained by normalizing $\tau^k$. These trajectories can be constructed recursively as follows. Set $\tau^0:= \tau= \overline{\tau}^0$ (since we are assuming that $\tau$ is in the standard sector). Assume that for some $k\geq 1$ we have already defined ${\tau}^{k-1}$ and $\overline{\tau}^{k-1}$ so that their cutting sequences are respectively $w^{k-1}$ and $u^{k-1} $. Deriving $u^{k-1}$, we get $w^{k}$, which, by definition of the sequences of sectors and the initial observation, is admissible \emph{only} in $\T {a_j} {n}{m}$ for $k=2j-1$ odd and \emph{only} in $\T {b_j} {m}{n}$ for $k=2j$ even. It follows that the trajectory $\tau^{k}$ of which $w^{k} $ is a cutting sequence belongs to $\Sec {a_j} {n}{m}$ for $k=2j-1$ odd and $\Sec {b_j} {m}{n}$ for $k=2j$ even. Thus, to normalize it we should apply $\refl {a_j} n m $ or $\refl {b_j} m n $ according to the parity. Hence, set for any $k\geq 1$:
\begin {eqnarray} \label{trajectories_recursively}
\tau^{k} &:= &\begin{cases} \AD m n \overline{\tau}^{k-1}, & k-1\ \text{even} ,\\ \AD n m \overline{\tau}^{k-1} , & k-1\ \text{odd} ; \end{cases}\\
\overline{\tau}^{k} &:= & \begin{cases} \refl {a_j}{n}{m} \tau^{k} , & k=2j-1\ \text{odd} ; \\ \refl {b_j}{m}{n} \tau^{k}, & k=2j\ \text{even} . \end{cases}
\end{eqnarray}
Let $(\overline{\theta}^k)_k$ be the directions of the normalized trajectories $(\overline{\tau}^k)_k$.
Recalling now the definition of the \bm Farey map $\FF mn = \F nm \circ \F m n$, and of each of the maps $\F nm $ and $\F m n$ defined in (\ref{def:Fmn}), we see then that the directions $(\overline{\theta}^k)_k$ of $(\overline{\tau}^k)_k$ satisfy for any $k\geq 1$:
\begin{equation*}
\overline{\theta}^{k} = \begin{cases} \F mn (\overline{\theta}^{k-1}) = \refl {a_j}{n}{m} \derAD mn [\overline{\theta}^{k-1}] , & k=2j-1 \ \text{odd} ; \\ \F nm (\overline{\theta}^{k-1}) = \refl {b_j}{m}{n} \derAD nm [\overline{\theta}^{k-1}], & k=2j \ \text{even} . \end{cases}
\end{equation*}
It follows from Remark \ref{rk:subsec} that $\overline{\theta}^{k-1}$ belongs to $\Subsec {a_j}{m}{n}$ for $k=2j-1$ odd (since $\theta^k= \derAD mn [\overline{\theta}^{k-1}] $ belongs to $\Sec {a_j}{n}{m}$) and to $\Subsec {b_j}{n}{m}$ for $k=2j$ even (since $\theta^k= \derAD nm [\overline{\theta}^{k-1}] $ belongs to $\Sec {b_j}{m}{n}$).
From the definition of $\FF mn^l$ as composition, we also have that $\overline{\theta}^{2l} = \FF mn ^l (\theta)$ for every $l \geq 0$.
Thus, by definition \eqref{def:subsubsec} of the subsectors $\Subsubsec i j mn$ and using the previous formulas for $k-1:=2l$ (so that $k$ is odd and we can write it as $k=2j-1$ for $j=l+1$), we have that $\FF mn ^l (\theta)$ belongs to $\Subsubsec {a_{l+1}} {b_{l+1}} mn$ for every $l\geq 0$. This shows that $\left((a_k,b_k)\right)_k$ is the itinerary of $\theta$ under the \bm Farey map and concludes the proof.
\end{proof}
In the next section we show that, thanks to Proposition \ref{prop:itineraries_vs_sectors} and the expanding nature of the map $\FF mn$, one can recover the direction of a trajectory from its cutting sequence (see Proposition \ref{directionsthm}).
\subsection{Direction recognition}\label{sec:direction_recognition}
Given a cutting sequence $w$, we might want to recover the direction of the corresponding trajectory. This can be done by exploiting a continued fraction-like algorithm associated to the \bm Farey map, as follows.
Let $\mathcal{I}_{m,n}$ be the set of all possible itineraries of $\FF mn$, i.e. the set
\begin{equation*}
\mathcal{I}_{m,n}:= \{ \left((a_k, b_k) \right)_k , \quad (a_k)_k \in \{1, \dots , m-1\}^\mathbb{N}, \quad (b_k)_k \in \{1, \dots , n-1\}^\mathbb{N}\}.
\end{equation*}
Let us recall that $\FF mn$ is monotonic and surjective when restricted to each subinterval
$\Subsubsec i j m n$ for $1\leq i \leq m-1, 1\leq j \leq n-1$. Let us denote by $\FFF m n i j$ the restriction of $\FF mn$ to $\Subsubsec i j m n$. Each of these branches $\FFF m n i j$ is invertible.
Given $\left((a_k, b_k) \right)_k \in \mathcal{I}_{m,n}$, one can check that the intersection
\begin{equation} \label{sectorCFdef}
\bigcap_{k \in \mathbb{N}} (\FFF m n {a_1} {b_1} )^{-1} (\FFF m n {a_2} {b_2})^{-1} \cdots (\FFF m n {a_k} {b_k})^{-1} [0, \pi]
\end{equation} is non-empty and consists of a single point $\theta$. In this case we write
$\theta = [a_1, b_1, a_2, b_2 , \dots ]_{m,n}$
and say that $[a_1, b_1, a_2, b_2 , \dots ]_{m,n} $ is a \emph{\bm additive continued fraction expansion} of $\theta$. To extend this continued fraction beyond directions in the standard sector, we set the following convention. For an integer $0\leq b_0 \leq 2n-1 $, and sequences $(a_k)_k, (b_k)_k$ as above, we set
\begin{equation}\label{bmCFdef}
[b_0; a_1, b_1, a_2, b_2 , \dots ]_{m,n}:=
\left(\refl {b_0} {m}{n}\right)^{-1} [\theta], \qquad \text{where \ } \theta = [a_1, b_1, a_2, b_2 , \dots ]_{m,n}.
\end{equation}
The index $b_0$ is here such that the above angle lies in $\Sec {b_0} mn$.
The notation recalls the standard continued fraction notation, and $b_0$ plays the role analogous to the integer part.
\smallskip
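Truncating the nested intersection \eqref{sectorCFdef} at a finite level pins $\theta$ down to a small interval, which gives a practical way to evaluate the expansion. A minimal sketch, where \texttt{inverse\_branch(i, j, I)} is a hypothetical helper returning the image of the interval $I$ under $(\FFF m n i j)^{-1}$ (well defined since each branch is monotonic):
\begin{verbatim}
import math

def direction_from_itinerary(pairs, inverse_branch):
    # evaluate a truncation of the nested intersection defining
    # [a_1, b_1, a_2, b_2, ...]_{m,n}: start from [0, pi] and apply
    # the inverse branches from the innermost one outwards
    interval = (0.0, math.pi)
    for (a, b) in reversed(pairs):
        interval = inverse_branch(a, b, interval)
    lo, hi = interval
    return 0.5 * (lo + hi)   # approximates theta up to (hi - lo)/2
\end{verbatim}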
We have the following result, which allows us to reconstruct the direction of a trajectory from the combinatorial knowledge of the sequence of admissible sectors of its cutting sequence:
\begin{proposition}[Direction recognition]\label{directionsthm}
If $w$ is a \emph{non-periodic} cutting sequence of a linear trajectory in direction $\theta \in [0, 2\pi]$, then
\begin{equation*}
\theta=[b_0; a_1, b_1, a_2, b_2 , \dots ]_{m,n},
\end{equation*}
where $b_0$ is such that $\theta \in \Sec {b_0} {m}{n}$ and the two sequences \mbox{$(a_k)_k \in \{1, \dots , m-1\}^\mathbb{N}$} and \mbox{$(b_k)_k \in \{1, \dots , n-1\}^\mathbb{N}$} are a \emph{pair of sequences of admissible sectors associated to $w$}.
\end{proposition}
Let us recall that $b_0$ and the sequences $(a_k)_k \in \{1, \dots , m-1\}^\mathbb{N}$ and $(b_k)_k \in \{1, \dots , n-1\}^\mathbb{N}$ are uniquely determined when $w$ is non-periodic (see \S \ref{sec:sectors_sequences}). Let us also remark that the above Proposition implies in particular that the direction $\theta$ is \emph{uniquely determined} by the combinatorial information given by deriving $w$.
\begin{proof}
Without loss of generality, by applying $\perm {b_0}{m}{n}$ to $w$ and $\refl {b_0}{m}{n}$ to $\tau$, we can assume that the direction $\theta $ of $\tau$ is in the standard sector and reduce to proving that $\theta $ is the unique point of intersection of \eqref{sectorCFdef}.
By Proposition \ref{prop:itineraries_vs_sectors}, the itinerary of $\theta$ under $\FF mn$ is $\left((a_k, b_k)\right)_k$.
By definition of itinerary, for every $k \in \mathbb{N}$ we have that $\theta^k:= {\left(\FF m n\right)}^{k-1}(\theta) \in \Subsubsec {a_k}{b_k}{m}{n} $. Thus, since $\FF mn$ restricted to $\Subsubsec {a_k}{b_k}{m}{n} $ is by definition the branch $\FFF m n {a_k} {b_k}$, we can write $$\theta^{k}=(\FFF m n {a_k} {b_k})^{-1} (\theta^{k+1}), \qquad \forall \ k \in \mathbb{N}.$$
This shows that $\theta$ belongs to the intersection \eqref{sectorCFdef} and,
since the intersection consists of a unique point, it shows that $\theta = [ a_1,b_1,a_2,b_2,\dots]_{m,n}$.
\end{proof}
\section{\bm surfaces via Hooper diagrams} \label{hooperdiagrams}
In \S\ref{bmdefsection} we recalled the construction of \bm surfaces by gluing a collection of semi-regular polygons. In his paper \cite{Hooper}, in addition to this polygonal presentation, Hooper gave a description of these surfaces by constructing a diagram, which we will call the \emph{Hooper diagram $\G{m}{n}$} of the \bm surface $\M mn$.
In this section we will explain how to construct the Hooper diagram given the polygonal presentation of the surface and vice versa, following the example of $\M{3}{4}$ throughout.
\subsection{From $\M{3}{4}$ to a Hooper diagram: an example}
Let us consider the \bm surface $\M{3}{4}$.
To construct its Hooper diagram, we need to consider the two cylinder decompositions given in Figure \ref{octcyl}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.7\textwidth]{octcyl.pdf} \hspace{0.08\textwidth}
\includegraphics[width=0.2\textwidth]{graph.pdf}
\begin{quote}\caption{The cylinder decomposition for $\M{3}{4}$ and its Hooper diagram. \label{octcyl}} \end{quote}
\end{figure}
We have a \emph{horizontal} cylinder, and a cylinder in direction $\frac{3 \pi}{4}$ that we will call \emph{vertical}.
The horizontal cylinders will be called $\alpha_1$, $\alpha_2$ and $\alpha_3$ as in Figure \ref{octcyl}, while the vertical ones will be $\beta_1$, $\beta_2$ and $\beta_3$.
Notice that both decompositions give the same number of cylinders -- three cylinders in this case -- and this is true for all \bm surfaces: the cylinder decompositions in each direction $k\pi/n$ yield the same number of cylinders.
Let us now construct the corresponding graph $\G{3}{4}$ as the \emph{cylinder intersection graph} for our cylinder decompositions.
In general, it will be a bipartite graph with vertices $\mathcal V= \mathcal A \cup \mathcal B$, represented in Figure \ref{octcyl} with black and white vertices, respectively.
The black vertices are in one-to-one correspondence with the vertical cylinders, while the white vertices are in one-to-one correspondence with the horizontal cylinders.
To describe the set of edges, we impose that there is an edge between two vertices $v_i$ and $v_j$ if the two corresponding cylinders intersect.
It is clear that we will never have edges between vertices of the same type, because two parallel cylinders never intersect.
An edge will hence correspond to a parallelogram that is the intersection between cylinders in two different decompositions.
In our case, the graph $\G{3}{4}$ will have six vertices: three white ones, corresponding to the cylinders $\alpha_i$, and three black ones, corresponding to the cylinders $\beta_i$, for $i=1,2,3$.
Considering the intersections, as we can see in Figure \ref{octcyl}, the central cylinder $\alpha_2$ of the horizontal decomposition will cross all three cylinders of the vertical decomposition.
The other two will cross only two of them: $\beta_1$ and $\beta_2$ in the case of $\alpha_1$; $\beta_3$ and $\beta_1$ in the case of $\alpha_3$.
Finally, we need to record how the various pieces of a cylinder, seen as the various edges around a vertex, glue together.
To do that we first establish a positive direction, gluing on the right for the orthogonal decomposition and gluing upwards for the vertical one.
We then record this on the graph by adding some circular arrows around the vertices, giving an ordering for the edges issuing from that vertex.
We can easily see that such arrows will have the same direction (clockwise or counter-clockwise) in each column, and alternating direction when considering the vertices on the same row.
We start the diagram in a way such that we will have arrows turning clockwise in odd columns and arrows turning counter-clockwise in the even columns.
All we just said leads us to construct a graph as in Figure \ref{octcyl}.
We notice that the graph consists of three rows and two columns of vertices, and this will be true in general: the graph $\G{m}{n}$ for $\M mn$ will have $n-1$ rows and $m-1$ columns.
\subsection{From $\M{m}{n}$ to Hooper diagrams: the general case }\label{hoogen}
We will now explain how to extend this construction to a general \bm surface and see what type of graph we obtain.
In general, our surface $\M{m}{n}$ will have two cylinder decompositions in two different directions that we will call \emph{horizontal} and \emph{vertical}.
We define $\mathcal A=\{\alpha_i\}_{i \in \Lambda}$ and $\mathcal B= \{\beta_i\}_{i \in \Lambda}$ to be the set of horizontal and vertical cylinders, respectively.
The vertex set of the cylinder intersection graph is the set of cylinders in the horizontal and vertical directions, $\mathcal A \cup \mathcal B$.
The set of edges will be determined by the same rule as before: there is an edge between $\alpha_i$ and $\beta_j$ for every intersection between the two cylinders.
Therefore, each edge represents a parallelogram, which we call a \emph{rectangle} because it has horizontal and vertical (by our definition of ``vertical'' explained above) sides.
Let $\mathcal E$ be the collection of edges (or rectangles).
Define the maps $\alpha \colon \mathcal E \to \mathcal A$ and $\beta \colon \mathcal E \to \mathcal B$ to be the maps that send the edge between $\alpha_i$ and $\beta_j$ to the nodes $\alpha_i$ and $\beta_j$, respectively.
The generalization of the black and white vertices is the concept of a $2$-colored graph:
\begin{definition}
A \emph{2-colored graph} is a graph equipped with a coloring function $C$ from the set of nodes $\mathcal V$ to $\{0,1\}$, with the property that for any two adjacent nodes, $x, y \in \mathcal V$, we have $C(x) \neq C(y)$.
\end{definition}
The graph we constructed is a 2-colored graph.
To see that, simply define $C(x)=0$ if $x\in \alpha (\mathcal E)=\mathcal A$ and $C(x)=1$ if $x \in \beta (\mathcal E)=\mathcal B$.
Conversely, the maps $\alpha, \beta \colon \mathcal E \to \mathcal V$ as well as the decomposition $\mathcal V= \mathcal A \cup \mathcal B$ are determined by the coloring function.
As we said, we also need to record in our graph the way the rectangles forming the cylinders are glued to each other.
To do that we define $\mathfrak e \colon \mathcal E \to \mathcal E$ to be the permutation that sends a rectangle to the rectangle on its right, and let $\mathfrak n \colon \mathcal E \to \mathcal E$ be the permutation that sends a rectangle to the rectangle above it. (Here $\mathfrak e$ stands for ``east'' and $\mathfrak n$ stands for ``north.'')
Clearly, we will always have that $\mathfrak e(e)$ lies in the same cylinder as the rectangle $e$, hence $\alpha \circ \mathfrak e= \alpha$ and $\beta \circ \mathfrak n = \beta$.
Moreover, an orbit under $\mathfrak e$ is a horizontal cylinder and an orbit under $\mathfrak n$ is a vertical one.
\begin{corollary}
By construction, $\G mn$ is always a grid of $(n-1)\times(m-1)$ vertices.
\end{corollary}
\subsection{Definition of Hooper diagrams and augmented diagrams}
In \S \ref{hoogen}, we showed how from a surface we can construct a Hooper diagram, which is a 2-colored graph equipped with two edge permutations. In \S\ref{hoopertobm}, we will show how to construct a \bm surface from a Hooper diagram. We first give the formal definition of Hooper diagrams and define their \emph{augmented} version, which provides a useful tool to unify the treatment and include degenerate cases (coming from the boundary of the diagrams).
We will first describe in general the Hooper diagram for $\M mn$. Here we use Hooper's notation and conventions from \cite{Hooper}.
\begin{definition}[Hooper diagram]\label{hooperdiagram}
Let $\Lambda=\{(i,j) \in \mathbb Z^2 \mid 1 \leq i \leq m-1 \text{ and } 1 \leq j \leq n-1\}$.
Let $\mathcal A_{m,n}$ and $\mathcal B_{m,n}$ be two sets indexed by $\Lambda$, as follows:
\[
\mathcal A_{m,n}=\{\alpha_{i,j}, (i,j) \in \Lambda \mid i+j \text{ is even}\} \text{ and } \mathcal B_{m,n}=\{\beta_{i,j}, (i,j) \in \Lambda \mid i+j \text{ is odd}\}.
\]
Here $\mathcal A_{m,n}$ are the white vertices and $\mathcal B_{m,n}$ are the black vertices.
Let $\G{m}{n}$ be the graph with nodes $\mathcal A_{m,n} \cup \mathcal B_{m,n}$ formed by adding edges according to the usual notion of adjacency in $\mathbb Z^2$.
In other words, there is an edge between $\alpha_{i,j}$ and $\beta_{i',j'}$ if and only if $(i-i')^2+(j-j')^2=1$, for all $(i,j), (i',j') \in \Lambda$ for which $\alpha_{i,j}$ and $\beta_{i',j'}$ exist.
We define the counter-clockwise ordering of indices adjacent to $(i,j)$ to be the cyclic ordering
\[
(i+1,j) \to (i,j+1) \to (i-1,j) \to (i, j-1) \to(i+1,j).
\]
The clockwise order will clearly be the inverse order.
We then define the map $\mathfrak e \colon \mathcal E \to \mathcal E$ to be given by the cyclic ordering of the edges with $\alpha_{i,j}$ as an endpoint.
We order edges with endpoints $\alpha_{i,j}$ counter-clockwise when $i$ is odd and clockwise if $i$ is even.
Similarly, $\mathfrak n \colon \mathcal E \to \mathcal E$ is determined by a cyclic ordering with $\beta_{i,j}$ as an endpoint.
The opposite rule about the ordering of the cycle will be applied for $\beta_{i,j}$: we order the edges with endpoint $\beta_{i,j}$ clockwise when $j$ is odd and counter-clockwise when $j$ is even.
$\G{m}{n}$ is called the \emph{Hooper diagram} for $\M mn$.
\end{definition}
\smallskip
We now define the \emph{augmented Hooper diagram}, which will make it easier to construct the surface associated to a Hooper diagram. The \emph{augmented graph} $\G mn '$ is obtained by adding degenerate nodes and degenerate edges to the graph $\G{m}{n}$.
If we consider the nodes of $\G{m}{n}$ in bijection with the coordinates $(i,j) \in \mathbb Z^2$, for $0<i<m$ and $0<j<n$, the nodes of $\G mn '$ will be in bijection with the coordinates $(i,j) \in \mathbb Z^2$, for $0 \leq i \leq m$ and $0 \leq j \leq n$.
The nodes we added are the \emph{degenerate nodes}.
On the new set of nodes we add a \emph{degenerate edge} if the nodes are at distance $1$ in the plane and they are not yet connected by an edge.
Our graph $\G mn '$ is again bipartite and we extend coherently the naming conventions we described for $\G{m}{n}$.
We can see the augmented graph for $\M{3}{4}$ in Figure \ref{augmented}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{graphaugm.pdf}
\begin{quote}\caption{The augmented Hooper diagram for $\M{3}{4}$. \label{augmented}} \end{quote}
\end{figure}
Let $\mathcal E'$ denote the set of all edges of $\G mn '$, both original edges and degenerate ones.
We say a degenerate edge $e \in \mathcal E'$ is \emph{$\mathcal A$-degenerate}, \emph{$\mathcal B$-degenerate} or \emph{completely degenerate} if $\partial e$ contains a degenerate $\mathcal A$-node, a degenerate $\mathcal B$-node or both, respectively.
We also extend the edge permutations to $\mathfrak e', \mathfrak n' \colon \mathcal E' \to \mathcal E'$ following the same convention as before.
\subsection{From Hooper diagrams to \bm surfaces: combinatorics}\label{hoopertobm}
In Section \ref{hoogen}, we showed how from a surface we can construct a Hooper diagram.
In this and the next sections, we will show how to construct a \bm surface from a Hooper diagram $\G mn$ and describe it explicitly on the example of $\M{3}{4}$ we considered before.
The data of a 2-colored graph $\graphg$, and the edge permutations $\mathfrak e$ and $\mathfrak n$, determine the combinatorics of our surface as a union of rectangles, as we will explain explicitly in this section. We will also need the width of each cylinder, to determine the geometry of the surface as well. This is explained in the next section \S\ref{moduli}.
From the $(m,n)$ Hooper diagram $\G mn$ we can in fact recover the structure of two surfaces: $\M{m}{n}$ and $\M{n}{m}$ (which are affinely equivalent, see Theorem \ref{thm:Hooper}). In this section we will show how to construct $\M{m}{n}$, while in \S \ref{sec:affine} we will comment on how to recover also the dual surface. More precisely, we will often consider an \emph{intermediate} picture, that we will call the \emph{orthogonal presentation}, which contains both a sheared copy of $\M{m}{n}$ and a sheared copy of the dual $\M{n}{m}$ and allows us to easily see the relation between the two (see \S \ref{sec:affine}).
To recover the combinatorics of the surface from its Hooper diagram, we need to decompose it into smaller pieces.
We will see that each piece corresponds to one polygon in the presentation in semi-regular polygons that was explained in the previous section.
The choice of which surface we obtain depends on our choice to decompose the graph into horizontal or vertical pieces.
The vertical decomposition of the graph $\G{m}{n}$ will give us the combinatorics of the surface $\M{m}{n}$, while the horizontal decomposition produces $\M{n}{m}$, see \S\ref{sec:affine}. This is coherent with the operation of rotating the diagram to invert the role of $m$ and $n$, see Remark \ref{rk:rotating} for details.
We now explain how to construct the surface starting from its graph, using the example of $\M{3}{4}$.
Let us decompose the augmented graph vertically, as shown in Figure \ref{dec}.
We will consider as a piece a column of horizontal edges together with the boundary vertices and all the edges between two of these vertices, whether degenerate or not.
In our case the decomposition will be as in the following figure, where, as before, the degenerate edges that have been added are represented with dotted lines.
\begin{figure}[!h]
\centering
\includegraphics[width=0.8\textwidth]{decomp.pdf}
\begin{quote}\caption{The three pieces of the vertical decomposition of $\G{3}{4}$. \label{dec}} \end{quote}
\end{figure}
Each edge will now represent a basic rectangle in our decomposition of the surface in polygons.
We will still need the data of the width and height of the rectangle, which we will treat later.
In Figure \ref{piece1} we label each edge and its corresponding basic rectangle with a letter, so that it is easy to see how to pass from one to the other.
The \emph{degenerate edges} will correspond to degenerate rectangles, which means rectangles with zero width, or zero height, or both.
The $\mathcal A$-degenerate edges correspond to rectangles with zero height (horizontal edges), the $\mathcal B$-degenerate edges correspond to rectangles with zero width (vertical edges), and the completely degenerate ones correspond to rectangles with zero width and zero height (points).
Each rectangle coming from a vertical edge will contain a \emph{positive diagonal}, which means a diagonal with positive slope, going from the bottom left corner to the upper right corner.
In the case of degenerate rectangles we will just identify the diagonal with the whole rectangle, so with a horizontal edge, a vertical edge or a point for $\mathcal A$-degenerate, $\mathcal B$-degenerate and completely degenerate edges respectively.
For the non-degenerate rectangles, this means that, since each rectangle appears twice, in two pieces of our decomposition, each time we will include in our polygon one of the two triangles formed by the diagonal inside the rectangle.
The permutation arrows between edges show us how the basic rectangles are glued.
We will glue the rectangles according to the ``north'' and ``east'' conventions: following $\mathfrak e$-permutation arrows around white vertices corresponds to gluing on the right, and following
$\mathfrak n$-permutation arrows around black vertices corresponds to gluing above.
Moreover, such arrows will sometimes represent gluing in the interior of the same polygon, and other times they will represent gluing between a polygon and the following one.
This will depend on whether the permutation arrows are internal to the piece we are considering or if they are between edges in different pieces of the decomposition.
This is evident already in the first piece of our diagram.
As in Figure \ref{piece1}, we can see that the edges that contain both a black and a white degenerate vertex collapse to a point, as for the basic rectangles $a, e, f, h, j, l$.
The edges containing only black degenerate vertices collapse to a vertical edge, as for $b, d, g, m$.
The edge $c$, containing a degenerate white vertex, will be a horizontal edge.
\begin{figure}[!h]
\centering
\includegraphics[width=1\textwidth]{piece1.pdf}
\begin{quote}\caption{The first piece of the vertical decomposition of $\M{3}{4}$ and its orthogonal presentation. \label{piece1}} \end{quote}
\end{figure}
The remaining basic rectangles $k$ and $i$ are the only non-degenerate ones; each contributes half of a basic rectangle to this piece.
It is evident that the gluing between $k$ and $i$ internal to the piece is the one going upwards, passing through the horizontal edge represented by $c$.
The result is a parallelogram as in the right picture of Figure \ref{piece1}.
The diagonals in $k$ and $i$ will be glued to the other triangles, missing from the basic rectangle and that will appear in the following polygon.
The other two sides will be glued to the next polygon, because the gluings correspond to the ``hanging arrows'' shown in the left part of Figure \ref{octort}: a gluing on the left (arrow pointing to $m$ around a white vertex) for $m$ and a gluing on the right (arrow starting from $g$ around a white vertex) for $g$.
Doing the same thing for the other two pieces of the Hooper diagram, we get two parallelograms and an octagon glued together.
We can see them in Figure \ref{octort} in what we will call the \emph{orthogonal presentation}.
To return to the original polygonal presentation as described in section \ref{bmdefsection}, we need to shear back the cylinders to put them back in the original slope.
The grid in the orthogonal presentation is in fact the vertical and horizontal cylinder decomposition. (Recall that the angle we call \emph{vertical} is not ${\pi}/{2}$, but $\pi/n$.)
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{octort.pdf}
\begin{quote}\caption{The orthogonal polygonal presentation of $\M{3}{4}$. \label{octort}} \end{quote}
\end{figure}
\subsection{From Hooper diagrams to \bm surfaces: widths of cylinders} \label{moduli}
Now that we have reconstructed the combinatorial structure of the surface from a Hooper diagram, we will explain how to recover the widths of the cylinders, which is the last piece of information needed to completely determine the geometry of the surface.
Indeed, the widths automatically determine the heights of the cylinders as well: given how the two cylinder decompositions intersect, we can recover the heights from the formula:
\begin{align}\label{heigth}
\text{height}(\beta_i)= \sum_{j \in \Lambda} \#(\beta_i \cap \alpha_j) \text{width} (\alpha_j).
\end{align}
This is because each part of the cylinder $\beta_i$ is in the surface, hence also in a cylinder $\alpha_j$.
Measuring along the height of such a cylinder means counting each $\alpha_j$ we intersect, each contributing a segment as long as its width.
To recover the widths we need to explain the concept of \emph{critical eigenfunctions}.
\begin{definition}
Let us assume, in a general setting, that $\graphg$ is a connected graph with no multiple edges or loops, and let $\mathcal V$ be its vertex set.
Let $\mathcal E(x) \subset \mathcal V$ be the set of vertices adjacent to $x \in \mathcal V$, which we assume is finite.
Let $\mathbb C^\mathcal V$ be the set of functions $f \colon \mathcal V \to \mathbb C$.
The adjacency operator is $H \colon \mathbb C ^\mathcal V \to \mathbb C ^\mathcal V$ defined by
\[
(Hf)(x)=\sum_{y \in \mathcal E(x)} f(y).
\]
An eigenfunction for $H$ corresponding to the eigenvalue $\lambda \in \mathbb C$ is a function $f \in \mathbb C^\mathcal V$, such that $Hf=\lambda f$.
\end{definition}
Now let $\mathcal L_\mathbb Z$ be the graph with integer vertices whose edges consist of pairs of integers whose difference is $\pm 1$.
$\graphg$ will now be a connected subgraph of $\mathcal L_\mathbb Z$, and again $\mathcal V$ is its vertex set.
If we assume $\mathcal V=\{1, \dots, n-1\}$, which will be our case, then:
\begin{definition}
The \emph{critical eigenfunction of $H$} is defined by
\[
f(x)=\sin \frac{x \pi}{n}, \text{ corresponding to the eigenvalue } \lambda=2\cos \frac{\pi}{n}.
\]
\end{definition}
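As a sanity check, one can verify numerically that $f(x)=\sin(x\pi/n)$ is an eigenfunction of the adjacency operator of the path graph on $\{1, \dots, n-1\}$ with eigenvalue $2\cos(\pi/n)$; a minimal sketch:
\begin{verbatim}
import math

def check_critical_eigenfunction(n, tol=1e-12):
    # verify (Hf)(x) = 2 cos(pi/n) f(x) for f(x) = sin(x pi / n)
    # on the path graph with vertex set {1, ..., n-1}
    lam = 2 * math.cos(math.pi / n)
    f = lambda x: math.sin(x * math.pi / n)
    for x in range(1, n):
        Hf = sum(f(y) for y in (x - 1, x + 1) if 1 <= y <= n - 1)
        assert abs(Hf - lam * f(x)) < tol

check_critical_eigenfunction(7)
\end{verbatim}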
We now consider $\mathcal I$ and $\mathcal J$, two connected subgraphs of $\mathcal L_\mathbb Z$, with vertex sets $\mathcal V_\mathcal I$ and $\mathcal V_\mathcal J$ respectively.
Let $\graphg$ be the Cartesian product of the two graphs, as described in \cite{Hooper}.
Clearly our cylinder intersection graph is a graph of this type.
For the graph $\G{m}{n}$, we then choose the widths of the cylinders to be defined by
\[
w(\alpha_{i,j})=w(\beta_{i,j})=f_\mathcal I (i) f_\mathcal J(j),
\]
where $f_\mathcal I \colon \mathcal V_\mathcal I \to \mathbb R$ and $f_\mathcal J \colon \mathcal V_ \mathcal J \to \mathbb R$ are the critical eigenfunctions.
As we said, the graph fully determines the combinatorial structure for the surface.
The flat structure is fully determined by choosing the widths of the cylinders, corresponding to vertices of the graph.
We take the critical eigenfunctions of the graph to be the widths of the cylinders:
\begin{corollary}
The \bm surface $\M m n$ has cylinder widths
\[
w_{i,j}= \sin \left(\frac{ i \pi}{m} \right) \sin \left( \frac{j\pi}{n} \right),
\]
where $w_{i,j}$ is the width of the cylinder corresponding to the vertex $(i,j)$ of the Hooper diagram $\G m n $.
The height can be calculated using (\ref{heigth}).
\end{corollary}
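The corollary translates directly into a small computation. The following sketch (our own illustration, not code from \cite{Hooper}) assigns the widths $w_{i,j}$ to the vertices of the grid and recovers the heights via (\ref{heigth}), under the assumption that in the Hooper diagram each pair of adjacent cylinders meets in exactly one basic rectangle:
\begin{verbatim}
import math

def bm_cylinder_data(m, n):
    # vertices (i, j) of the Hooper diagram G_{m,n}, 1<=i<=m-1, 1<=j<=n-1
    verts = [(i, j) for i in range(1, m) for j in range(1, n)]
    width = {(i, j): math.sin(i * math.pi / m) * math.sin(j * math.pi / n)
             for (i, j) in verts}
    def nbrs(i, j):
        # grid neighbors = cylinders crossed, one basic rectangle each
        cand = ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1))
        return [v for v in cand if v in width]
    # height of a cylinder = sum of the widths of the cylinders it crosses
    height = {v: sum(width[u] for u in nbrs(*v)) for v in verts}
    return width, height

# e.g. the six cylinders of M(3,4):
width, height = bm_cylinder_data(3, 4)
\end{verbatim}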
\section{Introduction} \label{intro}
In this paper we give a complete characterization of a class of symbolic sequences that generalize the famous class of \emph{Sturmian sequences}, and arise geometrically by coding bi-infinite linear trajectories on \bm surfaces. In order to introduce the problem and motivate the reader, we start this introduction by recalling in \S \ref{sec:Sturmian} the geometric construction of \emph{Sturmian sequences} in terms of coding linear trajectories in a square, and then their characterization both as described by Series using derivation, and by a system of substitutions (an $\mathcal{S}-$adic presentation). We then recall in \S \ref{sec:polygons} how this type of description was recently generalized by several authors to the sequences coding linear trajectories in regular polygons. Finally, in \S \ref{sec:ourresults} we explain why \bm sequences are the next natural example to consider to extend these symbolic characterizations, and state a simple case of our main result.
\subsection{Sturmian sequences} \label{sec:Sturmian}
\emph{Sturmian sequences} are an important class of sequences in two symbols that often appear in mathematics, computer science and real life. They were considered by Christoffel \cite{C:obs} and Smith \cite{S:note} in the 1870's, by Morse and Hedlund \cite{MH:sym} in 1940 and by many authors since then (see \cite{fogg} for a contemporary account and \cite{Alg} for a historical survey). Sturmian sequences are interesting because of their geometric origin, and are also of interest because they give the simplest non-periodic infinite sequences (see \cite{CH:seq}), having the lowest possible complexity.\footnote{For each $n$ let $P(n)$ be the number of possible strings of length $n$. For Sturmian sequences, $P(n)=n+1$.} They admit the following geometric interpretation:
Consider an \emph{irrational line}, i.e. a line in the plane in a direction $\theta$ such that $\tan \theta$ is irrational, in a \emph{square grid} (Figure \ref{square1}). As we move along the line, let us record with a $0$ each time we hit a horizontal side and with a $1$ each time we hit a vertical side. We get in this way a bi-infinite sequence of $0$s and $1$s which, up to choosing an origin arbitrarily, we can think of as an element in $\{0,1\}^{\mathbb{Z}}$. The sequences obtained in this way as the line varies among all possible irrational lines are exactly all \emph{Sturmian sequences}. (For further reading, see the beautiful expository paper by Series \cite{Series}, and also the introduction of \cite{SU2}.)
Equivalently, by looking at a fundamental domain of the periodic grid, we can consider a square with opposite sides identified by translations.
We define a \emph{linear trajectory} in direction $\theta$ to be a path that starts in the interior of the square and moves with constant velocity vector making an angle $\theta$ with the horizontal, until it hits the boundary, at which time it re-enters the square at the corresponding point on the opposite side and continues traveling with the same velocity.
For an example of a trajectory see Figure \ref{square1}.
We will restrict ourselves to trajectories that do not hit vertices of the square. As in Figure \ref{square1}, let us label by $0$ and $1$ respectively its horizontal and vertical sides.\footnote{Since squares (or, more generally, parallelograms) tile the plane by translation, the cutting sequence of a trajectory in a square (parallelogram) is the same as the cutting sequence of a straight line in $\mathbb{R}^2$ with respect to a square (or affine) grid.} The \emph{cutting sequence} $c(\tau)$ associated to the linear trajectory $\tau$ is the bi-infinite word in the symbols (edge labels, here $0$ and $1$) of the alphabet $\Lalphabet$, which is obtained by reading off the labels of the pairs of identified sides crossed by the trajectory $\tau$ as time increases.
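For concreteness, here is a short sketch (our own illustration) that generates an initial piece of the cutting sequence of a trajectory with respect to the unit square grid, by sorting the crossing times of the horizontal and vertical grid lines; we assume $0 < \theta < \pi/2$ and a generic starting point:
\begin{verbatim}
import math

def square_cutting_sequence(theta, length, x0=0.31, y0=0.17):
    # trajectory (x0 + t cos(theta), y0 + t sin(theta));
    # label 1 = crossing a vertical side, 0 = crossing a horizontal side
    c, s = math.cos(theta), math.sin(theta)
    vertical = [((k - x0) / c, '1') for k in range(1, 10 * length)]
    horizontal = [((k - y0) / s, '0') for k in range(1, 10 * length)]
    crossings = sorted(vertical + horizontal)
    return ''.join(label for _, label in crossings[:length])

# an irrational slope below 1: no subword "00" appears
print(square_cutting_sequence(math.atan(1 / math.sqrt(2)), 40))
\end{verbatim}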
\smallskip
Let us explain now how to characterize Sturmian sequences. One can assume without loss of generality (see \cite{SU2} for details) that $0\leq \theta \leq \pi/2$. If $0\leq \theta \leq \pi/4$, as in Figure \ref{square1}, the cutting sequence does not contain the subword $00$, and if $\pi/4 \leq \theta \leq \pi/2$, it does not contain the subword $11$.
Let us say that a word $w \in \{0,1\}^{\mathbb{Z}}$ is \emph{admissible} if either it does not contain any subword $00$, so that $0$s separate blocks of $1$s (in which case we say it is admissible of type $1$), or it does not contain any subword $11$, so that $1$s separate blocks of $0$s (in which case we say it is admissible of type $0$).
\begin{figure}[!h]
\centering
\includegraphics[width=300pt]{irrational-torus-shaded.png}
\begin{quote}\caption{A trajectory with $\theta < \pi/4$ and irrational slope on the square torus \label{square1}} \end{quote}
\end{figure}
Given an admissible word $w$, denote by $w'$ the \emph{derived sequence}\footnote{In this section, we are using the terminology from Series \cite{Series}. } obtained by erasing one $1$ (respectively one $0$) from each block of consecutive $1$'s (respectively $0$'s) if $w$ is admissible of type $1$ (respectively $0$).
\begin{ex}\label{sigmaex} A $w$ and its derived sequence $w'$:
\begin{eqnarray}
w &=& \dots 011101111011101110 11110\dots \nonumber \\
w'& = & \dots 0 11\phantom{1}0 111\phantom{1}011\phantom{1}011\phantom{1}0111\phantom{1}0 \dots \nonumber%
\end{eqnarray}
\end{ex}
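On a finite window, this derivation rule is a one-line computation; a minimal sketch (the boundary blocks of a finite window may be truncated, so only interior blocks are meaningful):
\begin{verbatim}
def derive(w, typ='1'):
    # erase one letter from each maximal block of `typ` symbols;
    # in an admissible word the blocks are separated by the other symbol
    sep = '0' if typ == '1' else '1'
    return sep.join(block[1:] for block in w.split(sep))

# the window of the derivation example above:
assert derive("01110111101110111011110") == "011011101101101110"
\end{verbatim}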
We say (following Series \cite{Series}) that a word is \emph{infinitely derivable} if it is admissible and each of its derived sequences is admissible. It turns out that \emph{cutting sequences of linear trajectories on the square are infinitely derivable} (see Series \cite{Series} or also the introduction of \cite{SU2}).
Moreover, the converse
is \emph{almost} true; the exceptions, i.e. words in $\{0, 1\} ^{\mathbb{Z}}$ which are infinitely derivable and are not cutting sequences such as $\overline{w} = \dots 111101111 \dots $,
can be explicitly described.
The space of words has a natural topology that makes it a compact space (we refer e.g.~to \cite{LM:sym}). The word $\overline{w}$ is not a cutting sequence, but it has the property that any finite subword can be realized by a finite trajectory. This is equivalent to saying that it is in the \emph{closure} of the space of cutting sequences. In fact, \emph{the closure of the space of cutting sequences is precisely the set of infinitely derivable sequences}.
\smallskip
An alternative related characterization of Sturmian sequences can also be given in terms of substitutions. The definition of substitution is recalled in \S \ref{sec:substitutionscharacterization} (see Definition \ref{def:substitution}).
Let $\sigma_0$ be the substitution given by $\sigma_0(0)=0$ and $\sigma_0(1)=10$ and let $\sigma_1$ be the substitution given by $\sigma_1(0)=01$ and $\sigma_1(1)=1$. Then, Sturmian words can be obtained by starting from a symbol ($0$ or $1$) and applying all possible combinations of the substitutions $\sigma_0$ and $\sigma_1$. More precisely, given a Sturmian word $w$ corresponding to a cutting sequence in a direction $0<\theta<\pi/4$, there exists a sequence $(a_i)_{i \in \mathbb{N}} $ with integer entries $a_i \in \mathbb{N}$ such that
\begin{equation}\label{Sturmian:substitutions_char}
w \in \bigcap_{k \in \mathbb{N}} \sigma_0^{a_0}\sigma_1^{a_1} \sigma_0^{a_2}\sigma_1^{a_3} \cdots \sigma_0^{a_{2k}} \sigma_1^{a_{2k+1}}\{0,1\}^{\mathbb{Z}}.
\end{equation}
If $\pi/4<\theta<\pi/2$, the same type of formula holds, but starting with $\sigma_1$ instead of $\sigma_0$.
Furthermore, $w $ is in the closure of the set of cutting sequences in $\{ 0,1\}^\mathbb{Z}$ if and only if there exists $(a_i)_{i \in \mathbb{N}} $ with integer entries $a_i \in \mathbb{N}$ such that \eqref{Sturmian:substitutions_char} holds; thus this gives an alternative characterization via substitutions.
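A minimal sketch of how the substitutions in \eqref{Sturmian:substitutions_char} act in practice; note that in the product the innermost substitution is applied first:
\begin{verbatim}
def apply_substitution(sigma, w):
    # a substitution is a map from letters to finite words
    return ''.join(sigma[c] for c in w)

sigma0 = {'0': '0', '1': '10'}
sigma1 = {'0': '01', '1': '1'}

# e.g. the finite word sigma_0^2 sigma_1 ('0'):
w = '0'
for s in (sigma1, sigma0, sigma0):   # innermost first
    w = apply_substitution(s, w)
print(w)   # '0100'
\end{verbatim}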
This type of characterization is known as an $\mathcal{S}$-\emph{adic} presentation. We refer to \cite{BD} for a nice exposition on \emph{$\mathcal{S}$-adic systems}, which are a generalization of \emph{substitutive systems} (see also \cite{ferenczi} and \cite{Chap12PF}). While in a substitutive system one considers sequences obtained as a fixed point of a given substitution and the closure of its shifts, the sequences studied in an $\mathcal{S}$-adic system
are obtained by applying products of substitutions from a finite set, for example from the set $\mathcal{S} = \{ \sigma_0, \sigma_1\}$ in \eqref{Sturmian:substitutions_char}. Equivalently, we can write \eqref{Sturmian:substitutions_char} in the form of a limit which is known as an $\mathcal{S}$-adic expansion (see for example \eqref{inverse_limit} after Theorem \ref{thm:substitutionscharacterization} in \S \ref{sec:substitutionscharacterization}, or more generally \cite{BD}). The term $\mathcal{S}$-adic was introduced by Ferenczi in \cite{ferenczi}, and is meant to be reminiscent of Vershik \emph{adic} systems \cite{vershik} (which have the same inverse limit structure), where $\mathcal{S}$ stands for \emph{substitution}.
The sequence of substitutions in an $\mathcal{S}$-adic system
is often
governed by a dynamical system, which in the Sturmian case is a one-dimensional map, i.e. the Farey (or Gauss) map (see Arnoux's chapter \cite{fogg} and also the discussion in \S 12.1 in \cite{Chap12PF}).
Indeed, the sequence $(a_i)_{i \in \mathbb{N}}$ in \eqref{Sturmian:substitutions_char} is exactly the sequence of \emph{continued fraction entries} of the slope of the coded trajectory and hence can be obtained as \emph{symbolic coding of the Farey} (or \emph{Gauss}) \emph{map} (see for example the introduction of \cite{SU}, or \cite{fogg}). There is also a classical and beautiful connection with the geodesic flow on the modular surface (see for example the papers \cite{Series}, \cite{Se:mod}, \cite{Se:sym} by Series). For more on Sturmian sequences, we also refer the reader to the excellent survey paper \cite{fogg} by Arnoux.
\subsection{Regular polygons} \label{sec:polygons}
A natural geometric generalization of the above Sturmian characterization is the question of \emph{characterizing cutting sequences of linear trajectories in regular polygons} (and on the associated surfaces).
Let ${O_n}$ be a regular $n$-gon. When $n$ is even, edges come in pairs of opposite parallel sides, so we can identify opposite parallel sides by translations. When $n$ is odd, no sides are parallel, but we can take two copies of $O_n$ and glue parallel sides in the two copies (this construction can also be done for $n$ even). \emph{Linear trajectories} in a regular polygon are defined as for the square. We will restrict our attention to \emph{bi-infinite trajectories} that never hit the vertices of the polygons. If one labels pairs of identified edges with \emph{edge labels} in the alphabet $\Lalphabet_n =\{0,1, \dots , n-1\}$, for example from the alphabet $\Lalphabet_4:= \{ 0,1,2,3 \}$ when $n=8$ (see Figure \ref{intro-oct}), one can associate as above to each bi-infinite linear trajectory $\tau$ its \emph{cutting sequence} $c(\tau)$, which is a sequence in $\Lalphabet_n^{\mathbb{Z}}$. For example, a trajectory that contains the segment in Figure \ref{intro-oct} will contain the word $10123$.
\begin{figure}[!h]
\centering
\includegraphics[width=100pt]{octagon-trajectory.png} \hspace{0.5in}
\begin{tikzpicture}[node distance=2em]
\node (0) {0};
\node (1) [right= of 0] {1};
\node (2) [right= of 1] {2};
\node (3) [right= of 2] {3};
\path[thick,-to]
(0) edge [bend left=30] (1)
(1) edge [bend left=30] (0)
(1) edge [bend left=30] (2)
(2) edge [bend left=30] (1)
(2) edge [bend left=30] (3)
(3) edge [bend left=30] (2)
(3) edge [loop right=60] (3);
\end{tikzpicture}
\begin{quote}\caption{A trajectory on the regular octagon surface, and the corresponding transition diagram for $\theta \in [0,\pi/8)$ \label{intro-oct}} \end{quote}
\end{figure}
In the case of the square, identifying opposite sides by translations yields a torus or surface of genus $1$. When $n \geq 4$, one obtains in this way a surface of higher genus. We call all the surfaces thus obtained (taking one or two copies of a regular polygon) \emph{regular polygonal surfaces}.
Regular polygonal surfaces inherit from the plane a Euclidean metric (apart from finitely many points coming from vertices), with respect to which linear trajectories are \emph{geodesics}.
\smallskip
The full characterization of cutting sequences for the octagon, and more generally for regular polygon surfaces coming from the $2n$-gons, was recently obtained by Smillie and the third author in the paper \cite{SU2}; see also \cite{SU}. Shortly after the first author, Fuchs and Tabachnikov described the set of periodic cutting sequences in the regular pentagon in \cite{DFT}, the first author showed in \cite{davis} that the techniques in Smillie and Ulcigrai's work \cite{SU2} can be applied also to regular polygon surfaces with $n$ odd. We now recall the \emph{characterization of cutting sequences for the regular octagon surface} in \cite{SU2}, since it provides a model for our main result.
One can first describe the set of pairs of consecutive edge labels, called \emph{transitions}, that can occur in a cutting sequence. By symmetry, one can consider only cutting sequences of trajectories in a direction $\theta \in [0,\pi)$ and up to permutations of the labels, one can further assume that $\theta \in [0,\pi/8)$. One can check that the transitions that are possible in this sector of directions are only the ones recorded in the graph in Figure \ref{intro-oct}. Graphs of the same form with permuted edge labels describe transitions in the other sectors of the form $[\pi i /8 ,\pi (i+1)/8)$ for $i=1,\dots, 7$. We say that a sequence $w \in \Lalphabet_4^\mathbb{Z}$ is \emph{admissible} or more precisely \emph{admissible in sector $i $} if it contains only the transitions allowed for the sector $[\pi i /8 ,\pi (i+1)/8)$.
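Admissibility in a given sector is a purely local check on consecutive pairs of labels. A minimal sketch for sector $0$ of the octagon, with the allowed transitions read off from the graph in Figure \ref{intro-oct}:
\begin{verbatim}
ALLOWED = {('0', '1'), ('1', '0'), ('1', '2'), ('2', '1'),
           ('2', '3'), ('3', '2'), ('3', '3')}

def admissible_in_sector_0(w):
    # w is a finite word over {0, 1, 2, 3}
    return all(t in ALLOWED for t in zip(w, w[1:]))

assert admissible_in_sector_0("10123")   # word crossed by the trajectory above
assert not admissible_in_sector_0("20")  # 2 -> 0 is not an arrow
\end{verbatim}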
One can then define a \emph{derivation rule}, which turns out to be different from Series' rule for Sturmian sequences, but is particularly elegant. We say that an edge label is \emph{sandwiched} if it is preceded and followed by the same edge label.
The \emph{derived sequence} of an admissible sequence is then obtained by \emph{keeping only sandwiched edge labels}.
\begin{ex}\label{generationex} In the following sequence $w$ sandwiched edge labels are written in bold fonts:
$$ w= \cdots \mathbf{2}\, 1\, \mathbf{3}\, 122 \,\mathbf{1}\, 2213\, \mathbf{0}\,312213\,\mathbf{0}\,3122\,\mathbf{1}\,221\,\mathbf{3}\,122\,\mathbf{1}\,221\,\mathbf{3} \cdots $$
Thus, the derived sequence $w'$ of $w$ will contain the string
$$w'=\cdots 231001313 \cdots .$$
\end{ex}
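The sandwiched derivation rule is equally simple to implement on a finite window; a minimal sketch (the two boundary letters of the window are dropped, since their sandwiched status depends on context outside the window):
\begin{verbatim}
def derive_sandwiched(w):
    # keep the letters preceded and followed by the same letter
    return ''.join(w[k] for k in range(1, len(w) - 1)
                   if w[k - 1] == w[k + 1])

# interior of the window of the octagon example above:
assert derive_sandwiched("213122122130312213031221221312212213") \
       == "3100131"
\end{verbatim}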
One can then prove that cutting sequences of linear trajectories on the regular octagon surface are \emph{infinitely derivable}. Contrary to the Sturmian case, though, this condition is only necessary and fails to be sufficient to characterize the closure of the space of cutting sequences. In \cite{SU2} an additional condition, \emph{infinitely coherent} (that we do not want to recall here), is defined in order to characterize the closure. It is also shown on the other hand that one can give an $\mathcal{S}$-adic presentation of the closure of the octagon cutting sequences. In \cite{SU2} the language of substitutions was not used, but it is shown that one can define some combinatorial operators called \emph{generations} (which are essentially substitutions on pairs of labels) and that each sequence in the closure can be obtained by a sequence of generations. One can rewrite this result in terms of substitutions; this is done for the example of the regular hexagon in \cite{report}, thus obtaining a characterization that generalizes \eqref{Sturmian:substitutions_char} and provides an $\mathcal{S}$-adic presentation, which for a regular $2n$-gon surface consists of $2n-1$ substitutions. The $1$-dimensional map that governs the substitution choice is a generalization of the Farey map (called the \emph{octagon Farey map} for $2n=8$ in \cite{SU2}). A symbolic coding of this generalized Farey map applied to the direction of a trajectory coincides with the sequence of sectors in which derived sequences of the trajectory's cutting sequence are admissible.
\smallskip
Both in the Sturmian case and for regular polygon surfaces the proofs of the characterizations are based on \emph{renormalization} in the following sense.
Veech was the first to notice in the seminal paper \cite{Veech} that the square surface and the regular polygon surfaces share special properties that might make their analysis easier. He realized that all these surfaces are rich with \emph{affine symmetries} (or more precisely, affine diffeomorphisms) and are examples of what are nowadays called \emph{Veech surfaces} or \emph{lattice surfaces}, see \S \ref{sec:Veech} for definitions. It turns out that these affine symmetries can be used to \emph{renormalize} trajectories and hence produce a characterization of cutting sequences. In the case of the square torus, the key idea behind a geometric proof of the above mentioned results on Sturmian sequences is the following: by applying an affine map of the plane, a linear trajectory is mapped to a linear trajectory whose cutting sequence is the derived sequence of the original trajectory. From this observation, one can easily show that cutting sequences are infinitely derivable. In the case of the regular octagon, Hubert and Schmidt have used affine symmetries in \cite{AH:fra} to renormalize directions and define a continued fraction-like map for the octagon, but could not use their renormalization to describe cutting sequences and left this as an open question in \cite{AH:fra}.
An important point in Smillie and Ulcigrai's work \cite{SU2, SU} is to also use non-orientation-preserving affine diffeomorphisms, since this makes the continued fraction simpler and allows one to use an element which acts as a \emph{flip and shear}, which accounts for the particularly simple sandwiched derivation rule.
\subsection{Our results
on \bm surfaces}\label{sec:ourresults}
In addition to the regular polygon surfaces, there are other known examples (see \S \ref{sec:Veech_history}) of surfaces which, being rich with affine symmetries, are \emph{lattice} (or \emph{Veech}) surfaces (the definition is given in \S \ref{sec:Veech}). A full classification of Veech surfaces is a big ongoing open question in Teichm\"uller dynamics (see again \S \ref{sec:Veech_history} for some references).
Two new infinite families of Veech surfaces were discovered almost two decades after the regular polygonal surfaces: one by
Irene Bouw and Martin M\"oller \cite{BM}, and the other independently by Kariane Calta \cite{calta} and Curt McMullen \cite{Mc}.
The family found by Irene Bouw and Martin M\"oller was initially described algebraically (see \S \ref{sec:Veech_history}); later, Pat Hooper presented the construction of what we here call \emph{\bm surfaces}, obtained by identifying opposite parallel edges of a collection of \emph{semi-regular} polygons (see \S \ref{sec:Veech_history} for more detail). We give a precise description in \S \ref{bmdefsection}. An example is the surface in Figure \ref{intro-bm43}, obtained from two semi-regular hexagons and two equilateral triangles by gluing by parallel translation the sides with the same edge labels. Surfaces in the \bm family are parametrized by two indices $m, n$, so that the $\M mn$ \bm surface is glued from
$m$ polygons, the first and last of which are regular $n$-gons, and the rest of which are semi-regular $2n$-gons. The surface in the example is hence known as $\M 43$.
\begin{figure}[!h]
\centering
\includegraphics[width=.8\textwidth]{bm43-trajectory.png}
\begin{quote}\caption{Part of a trajectory on the \bm surface $\M 43$ \label{intro-bm43}} \end{quote}
\end{figure}
\bm surfaces can be thought of, in some sense, as the next simplest classes of (primitive) Veech surfaces after regular polygon surfaces, and as a natural \emph{next} candidate for generalizing the question of characterizing cutting sequences. Indeed, the Veech groups, i.e. the groups generated by the linear parts of the affine symmetries (see \S\ref{sec:Veech} for the definition), of both regular polygon surfaces and \bm surfaces are \emph{triangle groups}. More precisely, regular $n$-gon surfaces have $(n,\infty, \infty)$-triangle groups as Veech groups, while the Veech groups of \bm surfaces are $(m,n, \infty)$-triangle groups for $m$ and $n$ not both even (when $m$ and $n$ are both even, the Veech group has index $2$ inside the $(m,n, \infty)$-triangle group) \cite{Hooper}. In \cite{D14}, Davis studied cutting sequences on \bm surfaces and analyzed the effect of a \emph{flip and shear} (as in Smillie-Ulcigrai's work \cite{SU2}) in order to define a derivation operator and renormalize trajectories. Unfortunately, with this approach it does not seem possible to cover all angles, apart from the surfaces with $m=2$ or $m=3$ in which all polygons are regular. Part of the reason behind this difficulty is that the Veech group contains two rotational elements, one of order $m$ and one of order $n$, but they do not act simultaneously on the same polygonal presentation.
In this paper, we give a \emph{complete characterization of the cutting sequences on \bm surfaces}, in particular providing an \emph{$\mathcal{S}$-adic presentation} for them. The key idea behind our approach is the following. It turns out that the $\M mn$ and the $\M nm$ \bm surfaces are intertwined in the sense that they can be mapped to each other by an affine diffeomorphism.\footnote{In other words, they belong to the same Teichm\"uller disk.} While the $\M mn$ surface has a rotational symmetry of order $n$, the $\M nm$ surface has a rotational symmetry of order $m$. We will call $\M mn$ and $\M nm$ \emph{dual \bm surfaces}. Instead of renormalizing using an affine automorphism as in the regular polygon case, we renormalize trajectories and define associated derivation operators on cutting sequences in two steps, exploiting the affine diffeomorphism between the $\M mn$ and the $\M nm$ \bm surfaces. In particular, we map cutting sequences on the $\M mn$ surface to cutting sequences on the $\M nm$ \bm surface. This gives us the freedom in between to apply the rotational symmetry of order $n$ and the rotational symmetry of order $m$ respectively, which allows us to renormalize all cutting sequences.
Note that since we frequently use the relationship between the surfaces {\rd \M mn} and {\gr \M nm}, we use the colors red and green to distinguish them throughout the paper, as here and as in Figure \ref{intro34auxtd} below.
We now give an outline of the statement of our main result, with an example in the special case of the $\M 43$ surface. The general results for $\M mn$ surfaces are stated precisely at the end of our paper, in \S \ref{howtolabel}. Let us label pairs of identified edges of the $\M mn$ surface with labels in the alphabet $\LL mn=\{1,2,\dots, (m-1)n\}$. The surface $\M 43$ is for example labeled by $\LL 43=\{1,2,\dots, 9\}$ as in Figure \ref{intro-bm43}. The way to place edge labels for $\M mn$ is described in \S\ref{sec:labelingHooper} and is chosen in a special way that simplifies the later description.
By applying a symmetry of the surface and exchanging edge labels by permutations accordingly, we can assume without loss of generality that the direction of trajectories we study belongs to the sector $[0, \pi/n]$.
As in the case of the regular octagon, we can first describe the set of \emph{transitions} (i.e. pairs of consecutive edge labels) that can occur in a cutting sequence.
For trajectories on $\M 43$ whose direction belongs to sector $[0, \pi/3]$, the possible transitions are shown in the graph in Figure \ref{intro34auxtd}. The structure of the \emph{transition diagrams} $\T i m n$ for trajectories on $\M mn$ whose direction belongs to sector $[\pi i/n, \pi (i+1)/n]$ is described in \S \ref{sec:other_sectors}.
We say that a sequence $w \in \LL mn^\mathbb{Z}$ is \emph{admissible} (or more precisely \emph{admissible in sector $i $})
if it contains only the transitions represented by arrows in the diagram $\T i m n$.
\begin{figure}[!h]
\centering
\includegraphics[width=350pt]{intro34auxtd.png}
\begin{quote}\caption{The transition diagram $\T 0 34$ for $\M 34$ and its derivation diagram $\D 0 34$, used to define $D 34$ \label{intro34auxtd}} \end{quote}
\end{figure}
We define a \emph{derivation operator} $D mn$, which maps admissible sequences in $\LL mn^\mathbb{Z}$ to (admissible) sequences in $\LL nm^\mathbb{Z}$. The derivation rule for sequences admissible in sector $0$ is described by a labeled diagram as follows. We define \emph{derivation diagrams} $\D 0 m n$ for the basic sector $[0,\pi/n]$ in which some of the arrows are labeled by edge labels of the dual surface $\M nm$. The derivation diagram for $\M43$ is shown in Figure \ref{intro34auxtd}. The derived sequence $w'= D mn w $
of a sequence $w$ admissible in diagram $0$ is obtained by reading off only the arrow labels of a bi-infinite path which goes through the vertices of $\D 0 mn$ described by $w$.
\begin{example}\label{ex:der34}
Consider the trajectory on $\M 43$ in Figure \ref{intro-bm43}. Its cutting sequence $w$ contains the word $\cdots {\rd 1678785452} \cdots$. This word corresponds to a path on the derivation diagram $\D 0 43$ in Figure \ref{intro34auxtd}, which goes through the edge label vertices. By reading off the labels of the arrows crossed by this path, we find that $w'= D 4 3 w$ contains the word $\cdots {\gr 434761} \cdots$.
\end{example}
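In computational terms, a derivation diagram can be stored as a map from transitions to optional arrow labels, and deriving a sequence amounts to reading off the labels along its path. The sketch below (in Python) uses hypothetical placeholder entries rather than the actual data of $\D 0 43$, and assumes for simplicity that each ordered pair of vertices is joined by at most one arrow:
\begin{verbatim}
# Hypothetical arrow labels: transition -> dual label (None = unlabeled).
ARROW_LABEL = {(1, 2): 'a', (2, 3): None, (3, 1): 'b', (1, 3): None}

def derive(word, arrow_label=ARROW_LABEL):
    labels = (arrow_label.get(pair) for pair in zip(word, word[1:]))
    return [lab for lab in labels if lab is not None]

print(derive([1, 2, 3, 1, 3]))  # ['a', 'b'] for this placeholder diagram
\end{verbatim}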
This type of derivation rule is not as concise as, for example, the \emph{keep the sandwiched labels} rule for regular polygons, but we remark that the general shape of the labeled diagram that gives the derivation rule is quite simple, consisting of an $(m-1)\times n$ rectangular diagram with vertex labels and arrow labels snaking around as explained in detail in \S \ref{sec:labeled_def}.
We say that a sequence $w \in \LL mn^\mathbb{Z}$ is \emph{derivable} if it is admissible and its derived sequence $D mn w \in \LL nm^\mathbb{Z}$ is admissible (in one of the diagrams of the dual surface $\M nm$). The derivation operator is defined in such a way that it admits the following geometric interpretation: if $w$ is a cutting sequence of a linear trajectory on $\M mn$, the derived sequence $D mn w$ is the cutting sequence of a linear trajectory on the dual surface $\M nm$ (see \S \ref{sec:der_ex} for this geometric interpretation). In the special case $m=4,n=3$ this result was proved by the second author in \cite{report} (see also the Acknowledgments), where the derivation diagram in Figure \ref{intro34auxtd} was first computed.
In order to get a derivation from sequences $\LL mn^\mathbb{Z}$ back to itself, we compose this derivation operator with its dual operator $D nm$: we first \emph{normalize} the derived sequence, i.e. apply a permutation to the labels to reduce to a sequence admissible in $\T 0 m n$. The choice of the permutations used to map sequences admissible in $\T i m n$ to sequences admissible in $\T 0 m n$ is explained in \S \ref{sec:normalization}. We can then apply $D nm$. This composition maps
cutting sequences of trajectories on $\M mn$ first to cutting sequences of trajectories on $\M nm$ and then back to cutting sequences on $\M mn$.
We say that a sequence in $\LL mn^\mathbb{Z}$ is \emph{infinitely derivable} if by alternatively applying normalization and the two dual derivation operators $D mn$ and $D nm$ one always obtains sequences that are admissible (see formally Definition \ref{def:infinitely_derivable} in \S\ref{sec:infinitely_derivable}). With this definition, we then have our first main result:
\begin{theorem}\label{thm:main_infinite_diff}
Cutting sequences of linear trajectories on \bm surfaces are infinitely derivable.
\end{theorem}
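While infinite derivability cannot be verified in finitely many steps, any finite number of renormalization rounds can be checked mechanically. The following sketch (in Python) assumes hypothetical implementations of the two dual derivation operators, the normalization, and the admissibility test:
\begin{verbatim}
def derivable_up_to(word, k, derive_mn, derive_nm, normalize, admissible):
    # One round: derive from M(m,n) to M(n,m), normalize, then derive back.
    # The four callables are assumed to be implemented elsewhere.
    for _ in range(k):
        word = normalize(derive_mn(word))
        if not admissible(word):
            return False
        word = normalize(derive_nm(word))
        if not admissible(word):
            return False
    return True
\end{verbatim}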
As in the case of regular polygon surfaces, this is only a necessary and not a sufficient condition to characterize the closure of cutting sequences. We then define in \S \ref{sec:generation} combinatorial \emph{generation} operators that invert derivation (with the additional knowledge of the starting and arrival admissibility diagrams), as in the work by Smillie-Ulcigrai \cite{SU,SU2}. Using these operators, one can obtain a characterization, which we then also convert in \S \ref{sec:substitutionscharacterization} into a statement using substitutions. More precisely, we explain how to explicitly construct, for every \bm surface \M mn, $(m-1)(n-1)$ substitutions $\sigma_i$ for $1\leq i \leq (m-1)(n-1)$ on an alphabet of cardinality $N=N_{m,n}:= 3mn-2m-4n+2$ and an operator $\Tr i mn$ that maps admissible sequences in the alphabet of cardinality $N$ (see details in \S \ref{sec:generation_characterization}) to admissible sequences on $\T i mn$ such that:
\begin{theorem}\label{thm:main_characterization}
A sequence $w$ is in the closure of the set of cutting sequences on the \bm surface \M mn if and only if there exists a sequence $(s_i)_{i\in \mathbb{N}}$ with $s_i \in \{1, \dots, (m-1)(n-1)\}$ and $0\leq s_0 \leq 2n -1$ such that
\begin{equation}
w \in \bigcap_{k \in \mathbb{N}} \Tr {s_0} mn \sigma_{s_1} \sigma_{s_2} \cdots \sigma_{s_{k}} \{1, \dots, N \}^{\mathbb{Z}}.
\end{equation}
Furthermore, when $w$ is a non-periodic cutting sequence, the sequence $(s_i)_{i\in \mathbb{N}}$ can be uniquely recovered from the knowledge of $w$.
\end{theorem}
Theorem \ref{thm:main_characterization}, which is proved as Theorem \ref{thm:substitutionscharacterization} in \S \ref{sec:generation_characterization},\footnote{We remark that in Theorem \ref{thm:substitutionscharacterization} the notation used is slightly different from the statement above; in particular, the substitutions are labeled by two indices $i,j$ and similarly the entries $s_i$ are pairs of indices which code the two simpler Farey maps, see \S \ref{sec:generation_characterization} for details.} together with the relation with itineraries mentioned above, which is proved in Proposition \ref{prop:itineraries_vs_sectors}, provides the desired $\mathcal{S}$-adic characterization of \bm cutting sequences (recall the discussion on $\mathcal{S}$-adic systems in the paragraph following equation \eqref{Sturmian:substitutions_char} previously in this introduction).
We remark also that Theorem \ref{thm:main_characterization} provides an algorithmic way to test (in infinitely many steps) if a sequence belongs to the closure of cutting sequences. The sequence $(s_i)_{i\in \mathbb{N}}$ can be recovered algorithmically when $w$ is a cutting sequence (and hence infinitely derivable): it is the sequence of indices of the diagrams in which the successive derivatives of $w$ are admissible
(see Definition \ref{def:seq_sectors} in Section \ref{sec:sectors_sequences}).
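To illustrate the role of the substitutions in Theorem \ref{thm:main_characterization}, the following sketch (in Python) applies a finite composition $\sigma_{s_1}\cdots\sigma_{s_k}$ right to left, as in the nested intersection; the two substitutions on the alphabet $\{1,2\}$ are hypothetical placeholders, and the outer operator $\Tr {s_0} mn$ is omitted:
\begin{verbatim}
SUBS = {1: {1: [1, 2], 2: [1]},   # sigma_1 (placeholder)
        2: {1: [2, 1], 2: [2]}}   # sigma_2 (placeholder)

def apply(sub, word):
    return [c for letter in word for c in sub[letter]]

def generate(s, seed=(1,)):
    # Image of the seed under sigma_{s_1} ... sigma_{s_k}: the innermost
    # substitution sigma_{s_k} is applied first.
    word = list(seed)
    for i in reversed(s):
        word = apply(SUBS[i], word)
    return word

print(generate([1, 2, 1]))  # [1, 1, 2, 1]: a finite approximation
\end{verbatim}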
Furthermore, the sequence $(s_i)_{i\in \mathbb{N}}$ is governed by a $1$-dimensional dynamical system as follows. There exists a piecewise expanding map $\FF mn$, which we call the \emph{\bm Farey map}, which has \mbox{$(n-1)(m-1)$}
branches, such that if $w$ is the cutting sequence of a trajectory in direction $\theta$, the sequence $(s_i)_{i\in \mathbb{N}}$ is given by the symbolic coding of the orbit of $\theta$ under $\FF mn$. More precisely, it is the itinerary of $((\FF mn )^k(\theta))_{k \in \mathbb{N}}$ with respect to the natural partition of the domain of $\FF mn$ into monotonicity intervals. This is explained in \S \ref{farey}, where the map $\FF mn$ is defined as composition of two simpler maps, describing the projective action on directions of the affine diffeomorphisms from $\M mn$ to $\M nm$ and from $\M nm$ to $\M mn$ respectively.
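The coding of directions by itineraries is easy to make concrete. The sketch below (in Python) computes the itinerary of a point under a piecewise-expanding interval map with respect to its monotonicity intervals; the two-branch doubling map is a placeholder for the actual \bm Farey map and its $(m-1)(n-1)$ branches:
\begin{verbatim}
# Placeholder piecewise map on [0, 1): list of (lo, hi, branch) triples.
BRANCHES = [(0.0, 0.5, lambda x: 2 * x), (0.5, 1.0, lambda x: 2 * x - 1)]

def itinerary(theta, steps):
    out = []
    for _ in range(steps):
        for index, (lo, hi, f) in enumerate(BRANCHES):
            if lo <= theta < hi:
                out.append(index)
                theta = f(theta)
                break
    return out

print(itinerary(0.3, 5))  # [0, 1, 0, 0, 1] for the placeholder map
\end{verbatim}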
The \bm Farey map can be used to define a generalization of the continued fraction expansion (see \S \ref{sec:direction_recognition}), which can then in turn be used to recover the direction of a
trajectory corresponding to a given cutting sequence. More precisely, the itinerary of visited sectors for the \bm Farey map described above gives us the indices for the \emph{\bm additive continued fraction expansion} of the direction $\theta$ (Proposition \ref{directionsthm}).
\subsection{Structure and outline of the paper}
Let us now comment on the main tools and ideas used in the proofs and describe the structure of the rest of the paper. As a general theme throughout the paper, we will first describe properties and results on an explicit example, then give general results and proofs for the general case of $\M mn$. The example we work out in detail is the characterization of cutting sequences on the \bm surface $\M 43$ which already appeared in this introduction, exploiting also its dual \bm surface $\M 34$.
This is the first case that could not be fully dealt with by D. Davis in \cite{D14}.\footnote{On the other hand derivation on $\M 34$ can be fully described using Davis' flip and shear because whenever $m=3$, all the polygons are regular.}
In the next section, \S \ref{background}, we include some background material, in particular the definition of translation surfaces (\S \ref{sec:trans_surf}), affine diffeomorphisms (\S \ref{subsec:affine}), the Veech group and Veech (or lattice) surfaces (\S \ref{sec:Veech}), and a brief list of known classes of Veech surfaces (\S \ref{sec:Veech_history}).
In \S \ref{bmdefsection} we then give the formal definition of \bm surfaces, describing the number and type of semi-regular polygons to form \M mn and giving formulas for their side lengths. We also describe their Veech group (see \S \ref{veechofbm}).
The main tool used in our proofs is the presentation of \bm surfaces through \emph{Hooper diagrams}, introduced by P. Hooper in his paper \cite{Hooper} and originally called \emph{grid graphs} by him. These are decorated diagrams that encode combinatorial information on how to build \bm surfaces. The surface $\M mn$ can be decomposed into \emph{cylinders} in the horizontal direction, and in the direction of angle $\pi/n$. The Hooper diagram encodes how these transversal cylinder decompositions intersect each other. In \S \ref{hooperdiagrams} we first explain how to construct a Hooper diagram starting from a \bm surface, while in \S \ref{hoopertobm} we formally define Hooper diagrams and then explain how to construct a \bm surface from a Hooper diagram.
As we already mentioned in the introduction, the definition of the combinatorial derivation operator is
motivated by the action on cutting sequences of an affine diffeomorphism (a \emph{flip and shear}) between $\M mn$ and its dual \bm surface $\M nm$. This affine diffeomorphism is described in \S \ref{sec:affine}. A particularly convenient presentation is given in what we call the \emph{orthogonal presentation}: this is an affine copy of $\M mn$, so that the two directions of cylinder decomposition forming an angle of $\pi/n$ are sheared to become orthogonal. In this presentation, both $\M mn$ and $\M nm$ can be seen simultaneously as diagonals of rectangles on the surface (that we call \emph{basic rectangles}, see Figure \ref{hexort1}).
In \S \ref{stairsandhats} a useful tool for later proofs is introduced: we describe a local configuration in the Hooper diagram, which we call a \emph{hat} (see Figure \ref{hat} to understand the choice of this name)
and show that it translates into a \emph{stair} configuration of basic rectangles in the orthogonal presentation mentioned before. Proofs of both the shape and labeling of transition diagrams and of derivation rules exploit the local structure of Hooper diagrams by switching between hat and stairs configurations.
\S \ref{transitiondiagrams} is devoted to transition diagrams: we first explain our way of labeling edges of \bm surfaces. This labeling, as mentioned before, works especially well with Hooper diagrams. The structure of transition diagrams is then described in \S \ref{sec:labeled_def} (see Theorem \ref{tdtheorem}) and proved in the later sections using hats and stairs. In the same sections we also prove that derivation diagrams describe intersections with sides of the affine image of the dual \bm surface, which is a key step for derivation.
In \S \ref{derivation} we describe the \emph{derivation process} obtained in two steps,
by first deriving cutting sequences on \M mn to obtain cutting sequences on the dual surface \M nm (see \S \ref{sec:der_general}) and then, after \emph{normalizing} them (see \S \ref{sec:nor}), deriving them another time but this time applying the \emph{dual} derivation operator.
This two-step process of derivation and then normalization is called \emph{renormalization}.
In \S \ref{farey} we define a one-dimensional map, called the \emph{Bouw-M\"oller Farey map}, that describes the effect of renormalization on the direction of a trajectory.
In \S \ref{sec:characterization} we \emph{invert} derivation through generation operators. This allows us to prove the characterization in \S \ref{sec:generation}, where first the characterization of \bm cutting sequences through generation is proved in \S\ref{sec:generation_characterization}, then the version using substitutions is obtained in \S\ref{sec:substitutionscharacterization}, see Theorem \ref{thm:substitutionscharacterization}.
\subsection{Acknowledgements}
The initial idea of passing from $\M mn $ to $\M nm$ to define derivation in \bm surfaces came from conversations between the third author and John Smillie, whom we thank also for explaining to us Hooper diagrams.
We also thank Samuel Leli\`evre,
Pat Hooper, Rich Schwartz and Ronen Mukamel for useful discussions and Alex Wright and Curt McMullen for their comments on the first version of this paper.
A special case of the derivation operator defined in this paper (which provided the starting point for our work) was worked out by the second author for her Master's thesis \cite{report} during her research project under the supervision of the third author. We thank Ecole Polytechnique and in particular Charles Favre for organizing and supporting this summer research project and the University of Bristol for hosting her as a visiting student.
The collaboration that led to the present paper was made possible by the support of ERC grant ChaParDyn, which provided funds for a research visit of the three authors at the University of Bristol, and by the hospitality during the ICERM's workshop \emph{Geometric Structures in Low-Dimensional Dynamics} in November 2013, and the conference \emph{Geometry and Dynamics in the Teichm\"uller space} at CIRM in July 2015, which provided excellent conditions for continued collaboration.
I. Pasquinelli is currently supported by an EPSRC Grant.
C. Ulcigrai is currently supported by ERC Grant ChaParDyn.
\section{Stairs and hats}\label{stairsandhats}
In this section we will explain in detail one particular configuration of basic rectangles in the orthogonal presentation, and the corresponding configuration in the Hooper diagram.
We put particular emphasis on it because we will be using it throughout the next sections.
Let us consider a piece of an orthogonal presentation given by six basic rectangles, glued together as in Figure \ref{stair}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.3\textwidth]{stair.png}
\begin{quote}\caption{Configuration of a stair. \label{stair}} \end{quote}
\end{figure}
\begin{definition}
A \emph{stair} is a piece of an orthogonal presentation made of six basic rectangles.
They are glued together so that we have three columns, made of three, two and one rectangle respectively, as shown in Figure \ref{stair}.
\end{definition}
As we did all through \S \ref{hooperdiagrams}, we will need to pass from the Hooper diagram to the orthogonal presentation.
First, we will explain what a stair corresponds to in a Hooper diagram.
Clearly, it will be a piece of the diagram made of six edges, with some vertices between them.
The exact configuration will depend on the parity of the vertices, i.e. on the position of the piece in the diagram.
The piece corresponding to a stair will be one of the configurations in Figure \ref{hat}, which we call a \emph{hat}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.8\textwidth]{possiblehats.pdf}
\begin{quote}\caption{Possible configurations of hats. \label{hat}} \end{quote}
\end{figure}
More precisely:
\begin{definition}\label{def:hat}
A \emph{hat} is a piece of a Hooper diagram made of six edges.
Two of them are vertical and the others are horizontal, in the configuration shown in Figure \ref{hat}.
Moreover, if the two vertical ones go upwards from the vertices of the three-piece base, the first column has counter-clockwise permutation arrows; it has clockwise permutation arrows otherwise.
\end{definition}
Depending on the parity, the vertices described can be black or white, and their permutation arrows can turn clockwise or counter-clockwise.
This gives us four possible configurations, as in Figure \ref{hat}.
The direction of the permutation arrows depends on the number of the column.
As we saw, in fact, in odd columns we have arrows turning clockwise, while in even columns we have arrows turning counter-clockwise.
As we explained in the definition, this determines also the position of the two vertical edges.
Given the parity of the column, the two different possibilities for the vertex colorings are determined by the parity of the row.
In an odd column we will have white vertices on odd rows and black vertices on even rows,
and the opposite in an even column.
Notice that the vertex in the lower left corner determines everything:
its color together with the direction of its arrows determines the parity of the row and column of its position, and hence determines which of the four possible hats we are in.
The first case, with a white vertex and counter-clockwise arrows, corresponds to a corner position in an even row and an even column.
The second one, with a black vertex but still counter-clockwise arrows, corresponds to an odd row and an even column.
The third one, with a black vertex but clockwise arrows, corresponds to an even row and an odd column.
The last one, with a white vertex and clockwise arrows again, corresponds to an odd row and an odd column.
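Since the lower-left vertex determines the hat type, the above classification can be summarized in a small lookup table; a sketch (in Python):
\begin{verbatim}
# Color of the lower-left vertex and orientation of its permutation arrows
# determine the parities of its row and column, hence the hat type.
HAT_TYPE = {('white', 'ccw'): ('even row', 'even column'),
            ('black', 'ccw'): ('odd row',  'even column'),
            ('black', 'cw'):  ('even row', 'odd column'),
            ('white', 'cw'):  ('odd row',  'odd column')}

print(HAT_TYPE[('black', 'cw')])  # ('even row', 'odd column')
\end{verbatim}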
\subsection{Stairs and hats correspondence}
We will now show that the stair and hat configurations correspond to each other.
To do that we will use our method of passing from the Hooper diagram to the orthogonal decomposition and vice-versa.
\begin{lemma}[Hat Lemma]\label{hatlemma}
The stair configurations correspond exactly to the four possible hat configurations.
\end{lemma}
\begin{proof}
First, we show that if we have one of the hat configurations, it actually gives a stair configuration.
We will show it in detail for the first case and the others will work in exactly the same way.
Let us consider a labeling on the hat in the upper-left of Figure \ref{hat}.
As before, each edge corresponds to a basic rectangle.
The three edges around the white vertex in the bottom left corner, together with the arrows around it, tell us that we will have three basic rectangles, each glued to the next on its right, in the order $a$ glued to $b$, glued to $c$.
On the other hand, the three edges around the black vertex at the other extremity of the edge $a$, and its arrows, tell us that a basic rectangle labeled $f$ is glued on top of one labeled $d$ which is glued on top of the one labeled $a$.
Finally, the basic rectangle $e$ is glued on top of $b$, and on the right of $d$, and we obtain the configuration in Figure \ref{stair}.
The other three cases work the same way.
Secondly, we show that if we have a stair configuration, it will necessarily give a hat configuration on the Hooper diagram.
Let us consider a stair configuration, with the same labels as in Figure \ref{stair}.
The basic rectangle $a$ will correspond to an edge, and we do not know if it will be horizontal or vertical.
We assume for the moment that it is a horizontal edge (we will explain later why the same figure, but rotated so that $a$ is vertical, is not acceptable).
At this point we can choose on which extremity of $a$ to record the left-right adjacency and on which the upwards-downwards one. In other words, we can choose where to put a black vertex and where to put a white one.
This gives us two possible cases.
If we have the white vertex (resp. the black vertex) on the left of the edge $a$, we will record the gluing with $b$ and then $c$ (resp. $d$ and $f$) on that side.
Again, we have the choice of recording it by putting $b$ (resp. $d$) going upwards or downwards from the vertex.
This splits each of the two cases into two more.
If $b$ (resp. $d$) is above the line of $a$, the permutation arrows around the white vertex will go counter-clockwise (resp. clockwise).
Now, on the other extremity of the edge $a$, we record the other adjacency and add the edges $d$ and $f$ (resp. $b$ and $c$) in order.
It looks like we again have a choice of whether to draw $d$ (resp. $b$) going upwards or downwards, but it is not difficult to see that the previous choice also determines this one.
In fact, if edge $b$ was going upwards, the edge $d$ will have to go upwards as well, because the edge $e$ is obtained both from the upwards gluing from $b$ and from the right gluing from $d$.
The diagram does not intersect itself and we cannot repeat an edge, hence the two vertical ones have to be in the same direction.
This also shows that $a$ needs to be horizontal, because having the two vertical edges in the same direction makes the permutation arrows go in different orientations around the two vertices, and we saw that if they are in the same column, then they must have the same orientation.
It is clear that we cannot have any other possibility and the four possibilities just described correspond to the four hat configurations in Figure \ref{hat}.
\end{proof}
\subsection{Degenerate hat configurations}
Let us recall that to unify and simplify the description of \bm surfaces via Hooper diagrams we introduced an \emph{augmented diagram}, which allows us to treat the boundary of the Hooper diagram as a degenerate case of a larger diagram (see \S \ref{hooperdiagram}). We now describe degenerate hat configurations that correspond to boundary configurations in the Hooper diagram. We will use them later, in \S \ref{sec:structure}, to prove our main structure theorem.
We will shade the six edges to pick out a hat configuration, as shown in Figure \ref{hat-cases}. The \emph{middle edge} is the one that is numbered in Figure \ref{hat-cases} (or edge $a$ from Figure \ref{hat}).
\begin{lemma}[Degenerate Hat Lemma] \label{degeneratehat}
All edges of an augmented Hooper diagram that form a subset of a hat, such that the middle edge of the hat is an edge of the augmented diagram, fall into one of the four cases in Figure \ref{hat-cases}.
\end{lemma}
\begin{figure}[!h]
\centering
\includegraphics[width=.6\textwidth]{hat-cases.png}
\begin{quote}\caption{The cases $1-4$ for hats \label{hat-cases}} \end{quote}
\end{figure}
\begin{proof}
The reader can easily verify, using a diagram such as Figure \ref{all-hats}, that any orientation and placement of a hat whose middle edge is an edge of an augmented Hooper diagram falls into one of the cases $1-4$.
\end{proof}
\begin{figure}[!h]
\centering
\includegraphics[width=400pt]{hat-ex.png}
\begin{quote}\caption{Examples of the four possible cases for hats. We do not include the outer degenerate edges (dashed gray) in our hats because they are not adjacent to the middle edge. \label{all-hats}} \end{quote}
\end{figure}
These cases are illustrated in Figure \ref{all-hats}. We use hats, and degenerate hats, in Lemma \ref{degeneratearrow}, which is a main step in our Structure Theorem \ref{tdtheorem} for derivation diagrams.
\subsection{Dual surfaces}
In this section, we will prove a lemma that uses the stair and degenerate stair configurations and will be used later to define the derivation diagrams that give the derivation rules.
Consider the superposition of the orthogonal presentation of $\M mn$ and the dual orthogonal presentation $\M nm$, as shown in Figure \ref{coincide}.
Recall that sides of (the sheared images of) $\M mn$ and of $\M nm$ appear as diagonals of alternating basic rectangles. Sides of either presentation that are horizontal or vertical can be thought of as degenerate diagonals, i.e. degenerate basic rectangles of zero width or zero height, described by the \emph{augmented} Hooper diagram (see \S \ref{hoopertobm}).
As before, let us call \emph{positive diagonals} the sheared images of sides of $\M nm$ (which, if not vertical or horizontal, have slope $1$) and let us now call \emph{negative diagonals} the sheared images of sides of $\M mn$ (which, if not vertical or horizontal, have slope $-1$). As observed in Remark \ref{alternate}, positive and negative diagonals \emph{alternate}, in the sense that the neighboring basic rectangles with a positive diagonal are adjacent (right/left or up/down) to basic rectangles with a negative diagonal. This remark holds true for all sides, including vertical and horizontal ones, if we think of them as degenerate diagonals and draw them according to the following convention:
\begin{convention}\label{convention:ordered_sides}
When degenerate sides of the orthogonal presentation of $\M mn$ and of the dual orthogonal presentation of $\M nm$ \emph{coincide}, we think of them as degenerate diagonals and hence we draw them adjacent to each other and ordered so that degenerate positive (red) and negative (green) diagonals \emph{alternate} in horizontal and vertical directions, as shown in Figure \ref{coincide} for $\M 43 $ and $\M 34$.
\end{convention}
\begin{figure}[!h]
\centering
\includegraphics[width=400pt]{coincide.png}
\begin{quote}\caption{The orthogonal presentations of $\M 43$ (green, left) and $\M34$ (red, right), superimposed on the same figure (center). Coinciding horizontal and vertical edges alternate red and green in the horizontal and vertical directions. Here the space between coinciding edges is exaggerated for clarity. \label{coincide}} \end{quote}
\end{figure}
Consider trajectories whose direction belongs to the first quadrant, i.e. such that $\theta \in [0, \pi/2]$. Let us say that a pair of negative diagonals is \emph{consecutive} if there exists such a trajectory which hits these two diagonals one after the other.
\begin{lemma}\label{lemma:intertwined}
Consider a pair of consecutive negative diagonals. Then, the following dichotomy holds: for \emph{any} trajectory whose direction belongs to the first quadrant, i.e. such that $\theta \in [0, \pi/2]$, either
\begin{itemize}
\item between any consecutive crossings of this pair of negative diagonals, no positive diagonal is crossed, or
\item between any consecutive crossings of this pair of negative diagonals, exactly one and the same positive diagonal is crossed.
\end{itemize}
\end{lemma}
\begin{proof}
Assume first that the two adjacent negative diagonals are non-degenerate. The fact that they are adjacent means that one can find a stair configuration as in Figure \ref{stair}, in which the two diagonals are the ones labeled by either $a$ and $c$, or $a$ and $e$, or $a$ and $f$. It is then clear from the stairs picture in Figure \ref{stair} that (referring to the labeling in that figure) if the pair is given by $a$ and $e$, then a trajectory whose direction belongs to the first quadrant that crosses these two negative diagonals never crosses any positive diagonal in between, while if the pair is $a$ and $c$ or $a$ and $f$, such a trajectory will always cross exactly one positive diagonal in between, namely $b$ (for the pair $a$ and $c$) or $d$ (for the pair $a$ and $f$).
Convention \ref{convention:ordered_sides}, which treats vertical and horizontal sides as degenerate diagonals, allows us to use exactly the same proof for degenerate stairs (when some of the $6$ edges in the stair are degenerate).
\end{proof}
In \S \ref{sec:labeled_def} this Lemma will be used to define derivation diagrams.
\section{Renormalization on the Teichm\"uller disk} \label{teich}
In this section we describe how the renormalization algorithm for cutting sequences and linear trajectories defined in this paper for \bm surfaces can be visualized on the Teichm\"uller disk of $\M mn$. This is analogous to what was described in \cite{SU2} by Smillie and the third author for the renormalization algorithm for the regular octagon and other regular $2n$-gons introduced in \cite{SU}, so we will only give a brief overview and refer to \cite{SU} for details.
In \S~\ref{Teichdisksec} we first recall some background definitions on the Teichm\"uller disk (following \cite{SU2}). In \S~\ref{sec:tessellation} we then describe a tessellation of the hyperbolic disk and the tree of possible renormalization moves. Finally, in \S~\ref{renormcutseq}, we use this tree to visualize the sequence of moves that approximate a geodesic ray limiting to a given direction $\theta$ on the boundary and explain the connection with the itinerary of $\theta$ under the \bm Farey map and with sequences of derivatives of cutting sequences of trajectories in direction $\theta$.
\subsection{The Teichm\"uller disk of a translation surface}\label{Teichdisksec}
The Teichm\"uller disk of a translation surface $S$ can be identified with a space of marked translation surfaces as follows.
Let $S$ be a translation surface.
Using the convention that a map determines its range and domain we can identify a triple with a map and denote it by $[f]$.
We say two triples $f:S\to S'$ and $g:S\to S''$ are equivalent if there is a translation equivalence $h:S'\to S''$ such that $g=fh$.
Let $\tilde{\mathcal{ M}}_A(S)$ be the set of equivalence classes of triples. We call this the set of \emph{marked translation surfaces affinely equivalent to $S$}. There is a canonical basepoint corresponding to the identity map $id:S\to S$.
We can also consider marked translation surfaces up to \emph{isometry}. We say that two triples $f:S\to S'$ and $g:S\to S''$ are equivalent up to isometry if there is an isometry $h:S'\to S''$ such that $g=fh$. Let ${\mathcal{\tilde M}}_{I}(S)$ be the collection of isometry classes of triples.
Let us denote by $\mathbb{H}$ the upper half plane, i.e.~$\{ z \in \mathbb{C} : \, \Im z >0\}$ and by $\mathbb{D}$ the unit disk, i.e.~\mbox{$\{ z \in \mathbb{C} : \, |z|< 1 \}$}. In what follows, we will identify them by the conformal map $\phi: \mathbb{H} \rightarrow \mathbb{D}$ given by $\phi(z) = \frac{z-i}{z+i}$. One can show that the set ${\mathcal{\tilde M}}_A(S)$ can be canonically identified with $SL_\pm(2,\mathbb{R})$. More precisely, one can map the matrix $\nu \in SL_\pm(2,\R)$ to the marked triple $\Psi_\nu: S \to \nu S$, where $\Psi_\nu$ is the standard affine deformation of $S$ given by $\nu$, and show that this map is injective and surjective. We refer the reader to the proof of Proposition 2.2 in \cite{SU2} for more details.
The space ${\mathcal{\tilde M}}_{I}(S)$ of marked translation surfaces up to isometry is hence isomorphic to $\mathbb{H}$ (and hence to $\mathbb{D}$), see Proposition 2.3 in \cite{SU2}.
The hyperbolic plane has a natural \emph{boundary}, which corresponds to $\partial\mathbb{H} = \{ z \in \mathbb{C} : \, \Im z = 0\}\cup\{\infty\}$ or $\partial \mathbb{D}= \{ z \in \mathbb{C} : \, |z|=1\}$. The boundaries can be naturally identified with the projective space $\mathbb{R}\mathbb{P}^1$, i.e. the space of row vectors $\begin{pmatrix} x_1&x_2\end{pmatrix}$ modulo the identification given by multiplication by a non-zero real scalar. The point $\begin{pmatrix} x_1&x_2\end{pmatrix}$ in $\mathbb{R}\mathbb{P}^1$ is sent by the standard chart $\phi_1$ to the point $x_1/x_2 \in \mathbb{R} = \partial \mathbb{H}$ and by $\phi \phi_1$ (where the action of $\phi: \mathbb{H}\rightarrow \mathbb{D}$ extends to the boundaries) to the point $e^{i \theta_x} \in \partial \mathbb{D}$ where $\sin \theta_x = -2x_1x_2 /(x_1^2+x_2^2)$ and $\cos \theta_x= (x_1^2- x_2^2) /(x_1^2+x_2^2)$. Geometrically, $\mathbb{R}\mathbb{P}^1$ can also be identified with the space of projective parallel one-forms on $S$, which give examples of projective transverse measures and were used by Thurston to construct his compactification of Teichm\"uller space.
Given $\nu=\left( \begin{smallmatrix} a&b\\c&d \end{smallmatrix}\right)$, the corresponding element of $\mathbb{H}$ under these identifications is $\frac{ai+c}{bi+ d}$ and the corresponding element of $\mathbb{D}$ is $\frac{(a-d)i+c+ b}{(a+d)i + c-b }$.
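As a quick numerical sanity check of the boundary formulas above (a sketch in Python, not part of the argument): for a row vector $\begin{pmatrix} x_1&x_2\end{pmatrix}$, the point $\phi(x_1/x_2)$ has the stated cosine and sine and lies on the unit circle.
\begin{verbatim}
def phi(z):  # the conformal map from the upper half plane to the disk
    return (z - 1j) / (z + 1j)

x1, x2 = 3.0, 2.0
w = phi(x1 / x2)                 # image of the boundary point (x1 x2)
r2 = x1 ** 2 + x2 ** 2
assert abs(w.real - (x1 ** 2 - x2 ** 2) / r2) < 1e-12   # cos(theta_x)
assert abs(w.imag + 2 * x1 * x2 / r2) < 1e-12           # sin(theta_x)
print(abs(w))                    # 1.0: the image lies on the circle
\end{verbatim}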
\smallskip
\subsubsection*{The $SL_\pm(2,\R)$ action and the Veech group action.}
The subgroup $SL_\pm(2,\R)\subset GL(2,\R) $ acts naturally on ${\mathcal{\tilde M}}_A(S)$ by the following \emph{left action}. Given a triple $f:S\to S'$ and an $\eta\in SL_\pm(2,\R)$, we get the action by sending $[f]$ to $\eta f:S\to S''$, where $\eta : S' \to S'':= \eta S'$ is defined by the linear action of $\eta$ on translation surfaces given by post-composition with charts. Using the identification of ${\mathcal{\tilde M}}_A(S)$ with $SL_\pm(2,\R)$, this action corresponds to left multiplication by $\eta$. One can see that
this action is simply transitive on ${\mathcal{\tilde M}}_A(S)$.
There is also a natural \emph{right action} of $Af\!f(S)$ on the set of triples. Given an affine automorphism $\Psi:S\to S$ we send $f:S\to S'$ to $f \Psi:S\to S'$. This action induces a right action of $V(S)$ on ${\mathcal{\tilde M}}_A(S)$. Using the identification of ${\mathcal{\tilde M}}_A(S)$ with $SL_\pm(2,\R)$, this action corresponds to right multiplication by $D\Psi$. It follows from the associativity of composition of functions that \emph{these two actions commute}.
The Veech group acts via isometries with respect to the hyperbolic metric of constant curvature on $\mathbb{H}$. The action on the unit disk can be obtained by conjugating by the conformal map $\phi: \mathbb{H} \rightarrow \mathbb{D}$.
This action induces an action of the Veech group on $\mathbb{R}\mathbb{P}^1$ seen as boundary of $\mathbb{H}$ (or $\mathbb{D}$). Geometrically, this can also be interpreted as action on projective transverse measures (since the latter can be identified with the boundary of $\mathbb{D}$ as recalled above). We remark that this action is the projective action of $GL(2,\R)$ on row vectors coming from multiplication on the right, that is to say
$\begin{pmatrix} z_1 & z_2\end{pmatrix}\mapsto \begin{pmatrix} z_1 & z_2 \end{pmatrix} \left(\begin{smallmatrix} a & b \\ c & d\end{smallmatrix}\right)$.
When the matrix $\nu = \left( \begin{smallmatrix} a & b \\ c & d\end{smallmatrix}\right)$ has positive determinant it takes the upper and lower half-planes to themselves and the formula is
$z \mapsto \frac{az+c}{bz+ d}$.
When the matrix $\nu$ has negative determinant the formula is $z \mapsto \frac{a \overline{z}+c}{b \overline{z}+ d}$.
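A minimal sketch (in Python) of this boundary action, checking that a matrix of positive determinant preserves the upper half plane and that the conjugated formula does so for negative determinant:
\begin{verbatim}
def act(matrix, z):
    # z -> (a z + c)/(b z + d); conjugate z first if the determinant is
    # negative, matching the two formulas above.
    (a, b), (c, d) = matrix
    if a * d - b * c < 0:
        z = z.conjugate()
    return (a * z + c) / (b * z + d)

z = 0.3 + 1.7j
print(act(((2, 1), (1, 1)), z).imag > 0)  # True: det > 0 preserves H
print(act(((0, 1), (1, 0)), z).imag > 0)  # True: a flip also maps H to H
\end{verbatim}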
\subsubsection*{The Teichm\"uller flow and the Teichm\"uller orbifold of a translation surface.}\label{Teichorbsec}
The \emph{Teichm\"uller flow} is given by the action of the $1$-parameter subgroup $g_t$ of $SL(2,\mathbb{R})$ given by the diagonal matrices $$g_t : = \begin{pmatrix} e^{t/2} & 0 \\ 0 & e^{-t/2} \end{pmatrix}$$ on ${\mathcal{\tilde M}}_{A}(S)$. This flow acts on translation surfaces by rescaling the time parameter of the vertical flow and rescaling the space parameter for a transversal to the vertical flow; thus we can view it as a renormalization operator.
If we project ${\mathcal{\tilde M}}_{A}(S)$ to ${\mathcal{\tilde M}}_{I}(S)$ by sending a triple to its isometry class and using the identification ${\mathcal{\tilde M}}_{I}(S)$ with $ \mathbb{H}$ described in \S~\ref{Teichdisksec}, then the Teichm\"uller flow corresponds to the hyperbolic geodesic flow on $T_1 \mathbb{H}$, i.e.~orbits of the $g_t$-action on ${\mathcal{\tilde M}}_{A}(S)$ project to geodesics in $\mathbb{H}$ parametrized at unit speed. We call a $g_t$-orbit in ${\mathcal{\tilde M}}_{A}(S)$ (or, under the identifications, in $T_1 \mathbb{D}$) a \emph{Teichm\"uller geodesic}. Given
$\nu=\left( \begin{smallmatrix} a&b\\c&d \end{smallmatrix}\right)$, the geodesic through the marked translation surface $[\Psi_\nu]$ converges to the boundary point corresponding to the row vector $\begin{pmatrix} a&b\end{pmatrix}$ in forward time and to the boundary point corresponding to $\begin{pmatrix} c&d\end{pmatrix}$ in backward time.
The quotient of ${\mathcal{\tilde M}}_{I}(S)$ by the natural right action of the Veech group $V(S)$ is the moduli space of unmarked translation surfaces, which we call ${\mathcal{M}}_{I}(S) = {\mathcal{\tilde M}}_{I}(S) / V(S)$. This space is usually called the \emph{Teichm\"uller curve associated to $S$}. In our case, since we allow orientation-reversing automorphisms, this quotient might be a surface with boundary, so the term \emph{Teichm\"uller curve} does not seem appropriate. Instead we call it the \emph{Teichm\"uller orbifold associated to $S$}.
We denote by ${\mathcal{ M}}_{A}(S)$ the quotient ${\mathcal{\tilde M}}_{A}(S)/ V(S)$ of ${\mathcal{\tilde M}}_{A}(S)$ by the right action of the Veech group. This space is a four-fold cover of the tangent bundle to ${\mathcal{M}}_{I}(S)$ in the sense of orbifolds (see Lemma~2.5 in \cite{SU2}).
The Teichm\"uller flow on ${\mathcal{ M}}_{A}(S)$ can be identified with the geodesic flow on the Teichm\"uller orbifold. We note that in the particular case where the space ${\mathcal{M}}_{I}(S)$ is a geodesic polygon in the hyperbolic plane, the geodesic flow in the sense of orbifolds is just the \emph{hyperbolic billiard flow} on the polygon, which is to say that if we project an orbit of this flow to the polygon then it gives a path which is a hyperbolic geodesic path, except where it hits the boundary, and when it does hit the boundary it bounces so that the angle of incidence is equal to the angle of reflection.
\subsection{Veech group action on a tessellation of the Teichm\"uller disk of a \bm surface} \label{sec:tessellation}
Let $\M mn$ be the $(m,n)$ \bm translation surface. We recall from \S~\ref{veechofbm} that the Veech group of $\M mn$ (as well as the Veech group of the dual surface $\M nm$) is isomorphic to the $(m,n,\infty)$ triangle group, or has index $2$ in it (when $n,m $ are both even, see \cite{Hooper}). Thus, a fundamental domain for the action described above of the Veech group on the Teichm\"uller disk is a hyperbolic triangle whose angles are $\pi/n,\pi/m$, and $0$. The action of the Veech group can be easily visualized by considering a tessellation of the hyperbolic plane by $(m,n,\infty)$ triangles, as shown in Figure \ref{hypdisk1} for the $(3,4)$ \bm surface. In this example, the tessellation consists of triangles whose angles are $\pi/3,\pi/4$, and $0$. The rotational symmetries of order $3$ and $4$ appear clearly at alternating interior vertices. Triangles in the tessellation can be grouped to get a tessellation into hyperbolic polygons which are either $2m$-gons or $2n$-gons. The $2m$-gons (respectively the $2n$-gons) have as a center an elliptic point of order $m$ (respectively $n$) and have exactly $m$ (respectively $n$) ideal vertices. For example, grouping the triangles in Figure \ref{hypdisk1} yields a coarser tessellation by octagons with four ideal vertices and hexagons with three ideal vertices.
\begin{figure}[!h]
\centering
\includegraphics[width=.4\textwidth]{teich-8.png}
\hspace{1cm}
\includegraphics[width=.4\textwidth]{teich-6.png}
\begin{quote}\caption{The first four steps of the tessellation of the hyperbolic disk by $(3,4,\infty)$ triangles, with $\M 34$ or $\M 43$ in the center respectively. Angles of $\pi/3, \pi/4$ and $0$ are indicated by red, green and black dots, respectively. \label{hypdisk1}} \end{quote}
\end{figure}
If we consider the Teichm\"uller disk of $\M mn$ pointed at $\M mn$, i.e. we choose the center of the disk $\mathbb{D}$ to represent the base triple $id: \M mn \to \M mn$,
the rotation by $\pi/n$ on the plane acts as a rotation by angle $2\pi/n$ of the Teichm\"uller disk. On the other hand, if we center the Teichm\"uller disk at $\M nm$, i.e. we choose the center of the disk $\mathbb{D}$ to represent the base triple $id: \M nm \to \M nm$ and mark triples by $\M mn$, the rotation by $\pi/m$ acts as a rotation by angle $2\pi/m$ of the Teichm\"uller disk. The derivative $\derAD m n$ of the affine diffeomorphism $\AD m n$ (described in \S~\ref{sec:affine}) acts on the right on $\mathbb{D}$ by mapping the center of the disk, which in this case is a center of an ideal $2n$-gon, into the center of an ideal $2m$-gon. Thus, the elliptic element of order $2m$ in the Veech group of $\M mn$ can be obtained by conjugating the rotation $\rho_m$ by angle $\pi/m$ acting on $\M nm$ by the derivative $\derAD m n$ of the affine diffeomorphism $\AD m n$ sending $\M mn$ to the dual surface $\M nm $ (described in \S~\ref{sec:affine}), i.e. it has the form ${\derAD mn} ^{-1}\rho_m \derAD mn$. Finally, the parabolic element which generates parabolic points in the tessellation is the shear automorphism from $\M mn$ to itself given by the composition $\shear nm \shear mn $ of the shearing matrices defined in \S~\ref{flipandshears}, see \eqref{def:generalmatrices}.
We remark also that all reflections $\refl i m n$ for $0\leq i \leq n$ defined in \S~\ref{sec:normalization} (see Definition \ref{def:reflections}) belong to the Veech group of $\M mn$. Each of them acts on the Teichm\"uller disk as a reflection at one of the hyperbolic diameters which are diagonals of the central $2n$-gon.
\subsubsection*{The tree of renormalization moves.}
We now define a bipartite tree associated to the tessellations of the disk described above. Paths in this tree will prove helpful in visualizing and describing the possible sequences of renormalization moves.
Consider the graph in the hyperbolic plane which has a bipartite set of vertices $V=V_m \cup V_n$ where vertices in $V_m$, which we will call $m$-vertices, are in one-to-one correspondence with centers of ideal $2m$-gons of the tessellation, while vertices in $V_n$, called $n$-vertices, are in one-to-one correspondence with centers of ideal $2n$-gons. Edges connect vertices in $V_m$ with vertices in $V_n$, and there is an edge connecting an $m$-vertex to an $n$-vertex if and only if the corresponding $2m$-gon and $2n$-gon share a side. The graph can be naturally
embedded in $\mathbb{D}$, so that vertices in $V_m$ (respectively $V_n$) are centers of $2m$-gons (respectively $2n$-gons) in the tessellation and each edge is realized by a hyperbolic geodesic segment, i.e. by the side of a triangle in the tessellation which connects the center of a $2m$-gon with the center of an adjacent $2n$-gon. We will call $\Tree{m}{n}$ the embedding of the graph in the tessellation associated to $\M mn$, i.e. the embedding such that the center of the disk is a vertex of order $n$ (which will be the root of the tree). Examples of the embedded graph $\Tree{m}{n}$ are given in Figure~\ref{hypdisk} for $m=3$, $n=4$ and for $m=4$, $n=3$.
\begin{figure}[!h]
\centering
\includegraphics[width=.4\textwidth]{teich-8-tree-num.png}
\hspace{1cm}
\includegraphics[width=.4\textwidth]{teich-6-tree-num.png}
\begin{quote}\caption{The tree $\Tree{m}{n}$ for the tessellation associated to $\M34$ and to $\M 43$ with the labels of the first two generations of edges. \label{hypdisk}} \end{quote}
\end{figure}
One can see that the graph $\Tree{m}{n}$ is a bipartite \emph{tree}, with a root in the center of the disk. We define the \emph{level $k$} of the tree to be composed of all vertices which have distance $k$ from the root, where the distance here is the natural distance on a graph which gives distance $1$ to vertices which are connected by an edge. For $k\ge 1$ we call \emph{edges of level $k$} the edges which connect a vertex of level $k-1$ with a vertex of level $k$. Let us recall that one calls \emph{children} of a vertex $v$ of level $k$ all the vertices of level $k+1$ which are connected to $v$. With the exception of the root, which has $n$ children, any other vertex has either $m-1$ children if it belongs to $V_m$ or $n-1$ children if it belongs to $V_n$, or equivalently a vertex of level $k$ has $m-1$ children for $k $ odd and $n-1$ children for $k$ even.
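The count of vertices per level follows directly from this branching rule; a small sketch (in Python):
\begin{verbatim}
def level_sizes(m, n, depth):
    sizes = [1]                  # level 0: the root, which has n children
    for k in range(1, depth + 1):
        if k == 1:
            sizes.append(n)
        else:
            # a level k-1 vertex has m-1 children if k-1 is odd, n-1 if even
            sizes.append(sizes[-1] * (m - 1 if (k - 1) % 2 else n - 1))
    return sizes

print(level_sizes(4, 3, 4))      # [1, 3, 9, 18, 54]
\end{verbatim}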
One can also define a corresponding sequence $(\xi_k)_{k\geq 1}$ of partitions of $\partial \mathbb{D}$ as follows. The interiors of the arcs in these partitions, as explained below, correspond to all points on $\partial \mathbb{D}$ which are endpoints of infinite paths which share a common initial subpath on $\Tree{m}{n}$.
Recall that vertices of $\Tree{m}{n}$ are in correspondence with geodesic polygons with either $2m$ or $2n$ sides and respectively $m$ or $n$ ideal vertices. For each $k\geq 1$, consider all ideal vertices of polygons that correspond to levels of the tree up to $k-1$. They determine a finite partition $\xi_k$ of $\partial \mathbb{D}$ into arcs. For $k=1$, this is a partition into $n$ arcs (for example, for the tessellation of the Teichm\"uller disk of $\M 34$ shown in Figure \ref{hypdisk}, this is the partition into four arcs each corresponding to the intersection of $\partial \mathbb{D}$ with a quadrant). The partition $\xi_{k+1}$ is a refinement of $\xi_k$, where if $k$ is even (respectively odd) each arc of $\xi_k$ is subdivided into $m-1$ (respectively $n-1$) arcs (given by the ideal vertices of the $2m$-gon, respectively $2n$-gon, of level $k+1$ which has two ideal vertices that are the endpoints of the arc). Equivalently, one can see that the interior of each arc of $\xi_k$ corresponds to exactly all endpoints of paths in the tree which share a fixed initial path of length $k$ (i.e. consisting of $k$ edges). We will then say that the arc is \emph{dual} to the finite path. In the same way we can label by $0 \leq j \leq m-1$ the $m$ level-$1$ edges branching out of the central root of $\Tree{n}{m}$.
\subsubsection*{Labeling of the tree}
Let us now describe how to \emph{label the edges of the tree} $\Tree{m}{n}$ so that the labels will code renormalization moves.
Let us first label edges of level $1$ and $2$ (or equivalently arcs in $\xi_1$ and $\xi_2$) in both $\Tree{m}{n}$ and $\Tree{n}{m}$ simultaneously (these labels are shown in Figure \ref{hypdisk} for the $m=3, n=4$ examples). We remark first that arcs in $\xi_1$ are in one-to-one correspondence with the $n$ \emph{sectors} $\Sec i m n$ for $0 \leq i \leq n-1$ defined in \S~\ref{sec:transition_diagrams} (see Definition \ref{sectordef}), via the identification of $\partial \mathbb{D}$ with $\mathbb{R}\mathbb{P}^1$ described in \S~\ref{Teichdisksec}. Thus, label by $i$ the edge of level $1$ which corresponds to $\Sec i m n$ as well as the dual arc of $\xi_1$.
\begin{rem}\label{actionrefl}
This is equivalent to labeling the $0$-edge so that the corresponding sector is the standard sector $\Sec 0 mn$ and then saying that any other edge $e$ of level $1$ is labeled by $i$ if and only if the reflection $\refl i m n$ for $1\leq i \leq n$ maps $e$ to the $0$-edge.
\end{rem}
We remark now that the right action of the derivative $\derAD mn $ of the affine diffeomorphism $\AD mn $ (which, we recall, was described in \S~\ref{sec:affine}) maps the level $1$ edge labeled by $0$ in $\Tree{m}{n}$ to the level $1$ edge labeled by $0$ in $\Tree{n}{m}$ flipping its orientation, in particular by mapping
the center of the disk (i.e. the root of $\Tree{m}{n}$) to the endpoint $v_0$ of the edge of level $1$ labeled by $0$ in $\Tree{n}{m}$. Thus, the inverse $({\derAD nm })^{-1}$ sends the endpoint $v_0$ of the edge of level $1$ labeled by $0$ in $\Tree{m}{n}$ to the root of $\Tree{n}{m}$ in the center of the disk (and maps the $2m$-gon which has $v_0$ as a center in the tessellation for $\M mn $ to the central $2m$-gon in the tessellation for the dual surface $\M nm$). For example, $(\derAD 34)^{-1}$ maps the hexagon which has as center the red endpoint of the $0$-edge of level $1$ in the left disk tessellation in Figure \ref{hypdisk} to the central hexagon in the right disk tessellation in the same Figure \ref{hypdisk}. Since the edges of level $1$ of $\Tree{n}{m}$ are labeled by $0 \leq i \leq m-1$ and $\derAD mn $ maps the $0$-edges of level $1$ of $\Tree{m}{n}$ and $\Tree{n}{m}$ to each other, it follows that $(\derAD mn)^{-1} $ induces a labeling of the $m-1$ edges of level $2$ which start from the endpoint of the $0$-edge of level $1$ as follows. Such an edge $e$ is labeled by $1\leq i\leq m$ if ${\derAD mn}^{-1} $ maps $e$ to the edge of level $1$ of $\Tree{n}{m}$ labeled by $i$.
\begin{rem}\label{firstsubsectors}
Consider the arc of $\xi_1$ dual to the $0$-edge. In the left tessellation in Figure \ref{hypdisk}, this is for example the intersection of $\partial \mathbb{D}$ with the quadrant given by the negative part of the real axis and the positive part of the imaginary axis. Recall that there are $m-1$ subarcs of $\xi_2$ contained in this arc, each of which is dual to a path of length $2$ on the graph starting with $0$.
The identification of $\partial \mathbb{D}$ with $\mathbb{R}\mathbb{P}^1$ described in \S~\ref{Teichdisksec} maps the $0$ arc of $\xi_1$ to the standard sector $\Sec 0 mn$ and each of these $m-1$ subarcs to one of the $m-1$ subsectors $\Subsec i mn \subset \Sec 0 mn$ for $1\leq i \leq m-1$ given in \eqref{def:subsectors} in \S~\ref{sec:2farey}. The labeling is defined so that the arc of $\xi_2$ dual to paths starting with the $0$-edge and then the $i$-edge corresponds exactly to $\Subsec i mn$.
\end{rem}
To label the edges of level $2$ which branch out of the other level $1$ edges, recall that the reflection $\refl i m n$ (see Definition \ref{def:reflections}) maps the $i$-edge to the $0$-edge (see Remark \ref{actionrefl}), and hence can be used in the same way to induce a labeling of all the edges of level $2$ branching out of the $i$-edge (by labeling an edge by $1\leq j\leq m-1$ if it is mapped to an edge of level $2$ already labeled by $j$).
The same definitions for the tree embedded in the tessellation for the dual surface $\M nm$ also produce a labeling of the edges of level $1$ and $2$ of $\Tree{n}{m}$. We refer to Figure \ref{hypdisk} for an example of these labelings of edges of level $1$ and $2$ for $\Tree{3}{4}$ and $\Tree{4}{3}$.
\begin{figure}[!h]
\centering
\includegraphics[width=.4\textwidth]{tree.png}
\begin{quote}\caption{The labeling on a schematic representation of a portion of tree $\Tree{m}{n}$ for $\M34$, consisting of paths starting with the $0$-edge. \label{treelabeling}} \end{quote}
\end{figure}
We will now describe how to label all paths which start with the $0$-edge in $\Tree{m}{n}$, since these are the ones needed to describe renormalization on $\M mn$.
Since we already labeled edges of levels $1$ and $2$ both in $\Tree{m}{n}$ and $\Tree{n}{m}$, to label the edges of level $3$ which belong to paths in $\Tree{m}{n}$ starting with the $0$-edge, we can use the fact that $({\derAD nm })^{-1}$ maps them to edges of level $2$ in $\Tree{n}{m}$, and hence induces a labeling for them. See Figure \ref{treelabeling} for this labeling on $\Tree{3}{4}$.
Recalling the duality between paths made by $3$ edges in $\Tree{m}{n}$ and arcs of the partition $\xi_3$ defined above, and
the definitions of the \bm Farey map $\FF m n$ and of its continuity sectors $\Subsubsec i j m n$ for $1\leq i \leq m-1$, $1\leq j \leq n-1$ given in \S~\ref{sec:2farey} and~\ref{sec:itineraries_vs_sectors} (see in particular equation \eqref{def:subsubsec}), one can see that the labeling is defined so that the following correspondence with these sectors holds.
\begin{rem}\label{rk:subsubsectors}
Under the correspondence between $\partial \mathbb{D}$ and $\mathbb{R}\mathbb{P}^1$ described in \S~\ref{Teichdisksec},
the $(m-1)(n-1)$ arcs of $\xi_3$ subdividing the arc of $\xi_1$ dual to the $0$-edge are mapped to the $(m-1)(n-1)$ subintervals $\Subsubsec i j m n$, $1\leq i \leq m-1, 1\leq j \leq n-1$, which are the domains of the branches of the \bm Farey map $\FF m n$ (see \eqref{def:subsubsec} in \S~\ref{sec:itineraries_vs_sectors}). The labeling is defined so that the subinterval $\Subsubsec i j m n$ corresponds to the arc of $\xi_3$ dual to the path in $\Tree{m}{n}$ which starts with the $0$-edge of level $1$, followed by the $i$-edge of level $2$ and the $j$-edge of level $3$.
\end{rem}
We can then transport this labeling of paths made by $3$ edges starting with the $0$-edge to all paths starting with the $0$-edge in $\Tree{m}{n}$ via the action of elements of the Veech group as follows.
Consider the elements $ (\refl i n m \derAD mn)^{-1 } (\refl j m n \derAD nm)^{-1 } $, $i= 1,\dots, m-1$, $j= 1,\dots, n-1$, and consider their right action on the embedded copy of $\Tree{m}{n}$. One can check the following.
\smallskip
For $1 \leq i\leq m-1$ and $1\leq j\leq n-1$, let us denote by $v_{i,j}$ the vertex of level $3$ which is the endpoint of the path starting with the $0$-edge at level $1$, the $i$-edge at level $2$ and the $j$-edge at level $3$.
\begin{lemma}\label{actionontree}
For every $1\leq i \leq m-1$ and $1\leq j \leq n-1$, the right action of the element $ (\refl i n m \derAD mn)^{-1 } (\refl j m n \derAD nm)^{-1 } $ gives a tree automorphism of $\Tree{m}{n}$,
which maps
$v_{i,j}$ (defined just above) to the endpoint of the $0$-edge of level $1$ in $\Tree{m}{n}$.
The edge ending in $v_{i,j}$ is mapped to the $0$-edge of level $1$. Furthermore, for every $k\geq 2$, the edges of level $2k$ which branch out of $v_{i,j}$ are mapped to edges of level $2k-2$, and the edges of level $2k+1$ branching from those to edges of level $2k-1$.
\end{lemma}
This Lemma, whose proof we leave to the reader, can be used to define a labeling of the edges of paths starting with the $0$-edge by induction on the \emph{level} of the edges. We already defined labels for edges of level $2$ and $3$. Assume that all edges of paths starting with the $0$-edge up to and including level $2k-1$ (where $k\geq 2$, so $2k-1\geq 3$) are labeled. For $1 \leq i\leq m-1$ and $1\leq j\leq n-1$, let $E_{i,j}^k$ be the set of edges of level $2k$ and $2k+1$ which branch out of the vertex $v_{i,j}$.
To label edges in $E_{i,j}^k$, apply the tree automorphism $ (\refl i n m \derAD mn)^{-1 } (\refl j m n \derAD nm)^{-1 } $. By Lemma \ref{actionontree}, it maps edges of $E_{i,j}^k$ into edges of level $2k-2$ and $2k-1$, for which labels were already assigned by induction. Thus we can label an edge in $E_{i,j}^k$ by the label of its image edge under $ (\refl i n m \derAD mn)^{-1 } (\refl j m n \derAD nm)^{-1 } $.
Finally, one can label all paths on $\Tree{m}{n}$ by using the reflections $\refl j m n$ for $1\leq j \leq n-1$ (see Definition \ref{def:reflections}), where $\refl j m n$ maps the $j$-edge to the $0$-edge (see Remark \ref{actionrefl}): an edge $e$ in a path starting with the $j$-edge of $\Tree{m}{n}$ is labeled by $l$ if its image under the right action of $\refl j m n$ is an edge labeled by $l$ (where $1\leq l \leq n-1$ if $e$ is an edge of level $k$ with $k$ odd, or $1\leq l \leq m-1$ if $k$ is even).
One can check by induction, repeatedly applying Lemma \ref{actionontree}, that the labeling is defined so that the following holds.
\begin{lemma}\label{actionontreepath}
Consider a finite path on the tree $\Tree{m}{n}$ starting from the root and ending in an $n$-vertex, whose edge labels are in order $b_0, a_1, b_1, \dots, a_k, b_k$, where $0\leq b_0 \leq n-1$ and $1\leq a_i \leq m-1$, $1\leq b_i \leq n-1$ for every $1\leq i \leq k$. Then the element
$$ \refl {b_0} m n (\refl {a_1} n m \derAD mn)^{-1 } (\refl {b_1} m n \derAD nm)^{-1 } \cdots (\refl {a_k} n m \derAD mn)^{-1 } (\refl {b_k} m n \derAD nm)^{-1 } $$
acts on the right by giving a tree automorphism of $\Tree{m}{n}$ which maps the last edge, i.e. the one labeled by $b_k$, to the $0$-edge of level $1$ and the final vertex of the path to the ending vertex of the $0$-edge of level $1$.
\end{lemma}
Since arcs of the partitions $(\xi_k)_k$ of $\partial \mathbb{D}$ are by definition in one-to-one correspondence with finite paths on $\Tree{m}{n}$ starting from the origin, the
labeling of $\Tree{m}{n}$ induces also a labeling of the arcs of $(\xi_k)_k$ by sequences of the form
$$ \begin{cases}(b_0, a_1, b_1, \dots, a_i, b_i) & \text{if}\ k=2i+1, \\ (b_0, a_1, b_1, \dots,a_{i-1}, b_{i-1}, a_i) & \text{if}\ k=2i,
\end{cases} \text{where} \ 0\leq b_0 \leq n-1, \ 1\leq a_l \leq m-1 ,\ 1\leq b_l \leq n-1 .$$
Given an arc of $\xi_k$ for $k=2i+1$ labeled by the sequence $(b_0, a_1, b_1, \dots, a_i, b_i)$, the $m-1$ arcs of $\xi_{k+1}$ which are contained in that arc are labeled by $(b_0, a_1, b_1, \dots, a_i, b_i, j)$ where $1\leq j \leq m-1 $. Moreover, the index $j$ increases from $1$ to $m-1$ as one moves counterclockwise along $\partial \mathbb{D}$ (see Figure~\ref{hypdisk} and Figure~\ref{treelabeling}).
Similarly, for $k=2i$, the arc of $\xi_k$ labeled by $(b_0, a_1, b_1, \dots,a_{i-1}, b_{i-1}, a_i)$ is subdivided into $n-1$ arcs of $\xi_{k+1}$ labeled by $(b_0, a_1, b_1, \dots,a_{i-1}, b_{i-1}, a_i, j)$ where the index $1\leq j \leq n-1 $ also increases counterclockwise along $\partial \mathbb{D}$ (see Figure~\ref{hypdisk} and Figure~\ref{treelabeling}).
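The bookkeeping above is purely combinatorial and easy to reproduce. The following Python sketch (an illustration written by us, with ad hoc names that are not notation from the text) enumerates the label sequences of the arcs of $\xi_k$, using only the alternation of label alphabets established above.
\begin{verbatim}
from itertools import product

def arc_labels(m, n, k):
    # The first entry is a sector label in {0,...,n-1}; afterwards,
    # entries alternate between {1,...,m-1} (even levels) and
    # {1,...,n-1} (odd levels >= 3).
    ranges = [range(n)]
    for level in range(2, k + 1):
        ranges.append(range(1, m) if level % 2 == 0 else range(1, n))
    return list(product(*ranges))

# Example for (m, n) = (3, 4): xi_3 has n(m-1)(n-1) = 4*2*3 = 24 arcs.
labels = arc_labels(3, 4, 3)
print(len(labels))    # 24
print(labels[:3])     # (0, 1, 1), (0, 1, 2), (0, 1, 3)
\end{verbatim}
Sibling arcs sharing a common prefix appear consecutively with increasing last entry, matching the counterclockwise ordering described above.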
In the following sections we link these finite sequences labeling arcs to itineraries of the \bm Farey map and to sequences of admissible diagrams for derivatives of cutting sequences.
\subsection{Renormalization on the Teichm{\"u}ller disk.}\label{renormcutseq}
We will now associate to a direction $\theta \in [0, \pi) $ an infinite path on the tree $\Tree{m}{n}$. We will first show that the labels of these infinite paths (for the labeling described in the previous section) coincide both with the itinerary of $\theta$ under the \bm Farey map and with the sequences of admissible sectors associated to any cutting sequence in direction $\theta$ (defined in \S~\ref{sec:sectors_sequences}, see Definition~\ref{def:seq_sectors}). We will then explain how vertices of the tree can be interpreted as polygonal decompositions of either $\M mn$ or $\M nm$, with respect to which derivatives of cutting sequences are again cutting sequences (see Proposition \ref{wknormalized} below).
Let $\theta$ be a fixed direction, that we think of as the direction of a trajectory $\tau $ on $\M mn$. Denote by $\rho_\theta$ the matrix corresponding to counterclockwise rotation by $\theta$ and by $g_t ^{\theta}:= \rho_{\frac{\pi}{2}-\theta}^{-1} \, g_{t}\ \rho_{\frac{\pi}{2}-\theta} $ a $1$-parameter subgroup conjugate to the geodesic flow whose linear action on $\M mn$, for $t>0$, contracts the direction $\theta$ and expands the perpendicular direction. Let us therefore consider the \emph{Teichm{\"u}ller geodesic ray}
\begin{equation}\label{raydef}
\tilde{r}_\theta : = \{ g_t ^{\theta} \cdot M_{m,n} \}_{t\geq 0},
\end{equation}\noindent
which, using the identification of ${\mathcal{\tilde M}}_A(S)$ with $T_1 \mathbb{D}$ explained in \S~\ref{Teichdisksec}, corresponds to a geodesic ray in $T_1 \mathbb{D}$. The projection $r_\theta$ of the Teichm{\"u}ller ray $\tilde{r}_\theta$ to $\mathbb{D}$ is a half ray, starting at the center $0 \in \mathbb{D}$ and converging to the point on $\partial \mathbb{D}$ representing the linear functional given by the row vector $\begin{pmatrix}\cos( \frac{\pi}{2}-\theta)& - \sin(\frac{\pi}{2}- \theta)\end{pmatrix} =\begin{pmatrix} \sin \theta& - \cos \theta\end{pmatrix}$. Thus, according to the conventions in the previous section, one can check that the ray $r_\theta$ in $\mathbb{D}$ is the ray converging to the point
$e^{(\pi+ 2 \theta)i} \in \partial \mathbb{D}$. In particular, $r_0$ is the ray in $\mathbb{D}$ obtained by intersecting the negative real axis in $\mathbb{C}$ with $\mathbb{D}$, and $r_\theta$ is the ray that makes an angle $2\theta$ (measured clockwise) with the ray $r_0$. Let us identify $\mathbb{D}$ with $\mathbb{H}$ by $\phi$ (see \S\ref{Teichdisksec}) and $\partial \mathbb{D}$ with $\partial \mathbb{H} = \mathbb{R}$ by extending $\phi$ by continuity. If $x \in \mathbb{R}$ is the coordinate for $\partial \mathbb{H}$ obtained using the chart $\phi_1$ (see \S\ref{Teichdisksec}), one can check that the ray $r_{\theta}$ has endpoint $ x(\theta) = -\frac{1}{\cot \theta}$.
\subsubsection*{Combinatorial geodesics.}
Let us explain how to associate to the geodesic path $r_\theta$ a path $p_\theta$ in the tree $\Tree{m}{n}$, which we call the \emph{combinatorial geodesic} approximating $r_\theta$. We say that $\theta$ is a \emph{cuspidal direction} if the ray $r_{\theta}$ converges to a vertex of an ideal polygon of the tessellation. One can show that this is equivalent to saying that the corresponding flow on $\M mn$ consists of periodic trajectories.
Assume first that $\theta$ is \emph{not} a cuspidal direction. In this case,
the endpoint of $r_\theta$ belongs to a \emph{unique} sequence of nested arcs of the partitions $(\xi_k)_k$. Recall that each of the arcs in $\xi_k$ is dual to a finite path on the tree formed by $k$ edges. We remark that if a given path $p$ is dual to an arc $\gamma$ of $\xi_k$, any arc of $\xi_{k+1}$ contained in $\gamma$ is dual to a path obtained from $p$ by adding an edge. Thus, in the limit, the sequence of nested arcs which contains the endpoint of $r_\theta$ determines a continuous semi-infinite path on $\Tree{m}{n}$ which starts at $\underline{0}$ and converges to the endpoint of $r_\theta$ on $\partial \mathbb{D}$. We will call this infinite path on $\Tree{m}{n}$ the \emph{combinatorial geodesic} associated to $r_\theta$ and denote it by $p_\theta$. We can think of this path $p_\theta$ as the image of $r_\theta$ under the retraction which sends the whole disk $\mathbb{D}$ onto the deformation retract $\Tree{m}{n}$.
If $\theta$ is a cuspidal direction, there exist exactly two sequences of nested arcs of $(\xi_k)_k$ having the cuspidal point as a common endpoint of all their arcs, and thus two combinatorial geodesics which approximate $r_\theta$.
\subsubsection*{Interpretations of the labeling sequences}
Given a direction $\theta$, let $p_\theta$ be a combinatorial geodesic associated to $r_{\theta}$. Let us denote by
$$ l(p_\theta) = (b_0, a_1, b_1, \dots, a_i, b_i, \dots), \quad \text{where} \ 0\leq b_0 \leq n-1, \ 1\leq a_k \leq m-1 ,\ 1\leq b_k \leq n-1, $$
the sequence of labels of the edges of $p_\theta$ in increasing order (or equivalently the sequence such that $(b_0, a_1, b_1, \dots, a_i, b_i)$ is the labeling of the arc of $\xi_{2i+1}$ which contains the endpoint of $p_\theta$). We now show that this sequence coincides both with the itinerary of $\theta$ under the \bm Farey map (as defined in \S~\ref{sec:itineraries_vs_sectors}), see Proposition~\ref{CFandcuttseq} below, and with the pair of sequences of admissible sectors of any (bi-infinite, non periodic) cutting sequence of a linear trajectory on $\M mn $ in direction $\theta$ (see Definition~\ref{def:seq_sectors} in \S~\ref{sec:sectors_sequences}), see Corollary \ref{l_vs_diagrams} below.
Let us recall that in \S~\ref{sec:direction_recognition} we have defined a \bm continued fraction expansion, see Definition~\ref{bmCFdef}. Definitions of the labeling of the tree are given so that the following holds:
\begin{prop}\label{CFandcuttseq}
If a non-cuspidal direction $\theta$ has \bm continued fraction expansion
\begin{equation}\label{CFexp}
\theta = [b_0; a_1, b_1, a_2, b_2 , \dots ]_{m,n} ,
\end{equation}
then the labeling sequence $l(p_\theta)$ of the unique combinatorial geodesic associated to the Teich\-m\"uller geodesic ray $r_{\theta}$ is given by the entries, i.e.
\begin{equation*}\label{labelexp} l(p_\theta) = (b_0, a_1, b_1, a_2, b_2 , \dots ).
\end{equation*}
If $\theta$ is a cuspidal direction, $\theta$ admits two \bm continued fraction expansions of the form \eqref{CFexp}, which give the labellings of the two combinatorial geodesics approximating $r_\theta$.
\end{prop}
To prove the proposition, we will define a renormalization scheme on paths on the tree $\Tree{m}{n}$ (or combinatorial geodesics) acting by the elements $(\refl i n m \derAD mn)^{-1 } (\refl j m n \derAD nm)^{-1 }$, $1\leq i \leq m-1$, $1\leq j \leq n-1$, and show that this renormalization extends to an action on $\partial \mathbb{D}$ that can be identified with the action of the \bm Farey map.
\begin{proof}
Let us remark first that we can assume that $b_0=0$. Indeed, if not, we can apply the element $\refl {b_0} mn $ and remark that the \bm continued fraction of the new direction is $ [0; a_1, b_1, a_2, b_2 , \dots ]_{m,n} $ (by construction, see Definition \ref{bmCFdef}), and since $\refl {b_0} mn $ maps the $b_0$-edge of level $1$ to the $0$-edge, the labels of the new combinatorial geodesic are now $(0, a_1, b_1, a_2, b_2 , \dots )$.
Hence, without loss of generality let us consider a direction $\theta$ in $\Sec 0 mn$. Let us assume for now that $\theta$ is not a cuspidal direction and let $(a_k)_k, (b_k)_k$ be the entries of its \bm continued fraction expansion as in \eqref{CFexp}. Let $p_{\theta}$ be the combinatorial geodesic approximating $r_\theta$ and let $(a_k')_k, (b_k')_k$ be the labels of $p_{\theta}$, i.e. let $l(p_\theta):=(0, a_1', b_1', \dots)$. Our goal is hence to show that $a_k'=a_k$ and $b_k'=b_k$ for every $k\geq 1$.
By definition of itineraries, since $\theta = [0; a_1, b_1,\dots ]_{m,n}$ we know that $\theta \in \Subsubsec {a_1}{b_1} m n$.
By Remark~\ref{rk:subsubsectors}, $\theta \in \Subsubsec {a_1} {b_1} m n$ is equivalent to saying that the endpoint of $p_\theta$ belongs to the arc of $\xi_3$ labeled by $(0,a_1,b_1)$. Thus, by definition of the labeling of the tree, this shows that $a_1'=a_1$ and $b_1'=b_1$. Let us now act on the right by the renormalization element $(\refl {a_1} n m \derAD mn)^{-1 } (\refl {b_1} m n \derAD nm)^{-1 }$.
This sends $p_{\theta}$ to a new path, mapping the vertex $v_{a_1,b_1}$ of level $3$ to the endpoint of the $0$-edge of level $1$, so that the image passes through the center of the disk (see Lemma \ref{actionontree}). If we neglect the image of the first two edges, we get a new combinatorial geodesic $p'$ that starts from the center, with $l(p')= (0, a_2',b_2', a_3', b_3', \dots )$. Thus, at the level of combinatorial geodesic labelings, this renormalization acts as (the square of) a shift.
Let us consider the limit point in $\partial \mathbb{D}$ of $p'$ and show that it is the endpoint of the Teich\-m\"uller ray $r_{\theta'}$, where $\theta'=\FF m n(\theta)$ and \ $\FF m n$ is the \bm Farey map defined in \S~\ref{sec:2farey}.
Recall that $\theta \in \Subsubsec {a_1} {b_1} m n$ and $p'$ is obtained by acting on the right on $p_\theta$ by $(\refl {a_1} n m \derAD mn)^{-1 } (\refl {b_1} m n \derAD nm)^{-1 }$. Since $p_\theta$ by construction has the same limit point as $r_{\theta}$ and the action of $(\refl {a_1} n m \derAD mn)^{-1 } (\refl {b_1} m n \derAD nm)^{-1 }$ extends by continuity to $\partial \mathbb{D}$, the limit point of $p'$ is obtained by acting on the right on the limit point of $r_{\theta}$. Let us identify $\partial \mathbb{D}$ with $\mathbb{R}$ as in \S~\ref{Teichdisksec} and let $\bar x\in \mathbb{R}$ be the endpoint. Writing $(\refl {a_1} n m \derAD mn)^{-1 } (\refl {b_1} m n \derAD nm)^{-1 } = \left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right)$, the image of $\bar x$ under the right action is $\bar x' = \frac{a \bar x + c}{ b \bar x +d}$. As described at the beginning of \S~\ref{renormcutseq}, this is the endpoint of the ray $r_{\theta'}$ where $\cot \theta ' = -1/\bar x ' = - \frac{ b \bar x + d}{a \bar x + c} $, and since $\bar x = -1 / \cot \theta$, we get $\cot \theta ' = \frac{ d \cot \theta - b }{ - c \cot \theta+ a} $. This is exactly the left action by linear fractional transformation of the inverse $\left( \begin{smallmatrix} d& - b \\ -c & a \end{smallmatrix} \right) = (\refl {b_1} m n \derAD nm) (\refl {a_1} n m \derAD mn) $. This shows exactly that $\theta ' = \FF m n(\theta)$, by definition of the \bm Farey map (see
\eqref{explicitdefFareyFF}
in \S\ref{sec:2farey}). Thus, reasoning as before for $k=1$, we can now show that $a_2'=a_2$ and $b_2'=b_2$. Iterating the renormalization move on the combinatorial geodesic $p'$ and repeating this step, we get that $a_k'=a_k$ and $b_k'=b_k$ for every $k\geq 1$, which concludes the proof in the non-cuspidal case. If $\theta$ is cuspidal, the same argument applied to each of the two sequences of nested arcs approximating $r_\theta$ yields the two expansions.
\end{proof}
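As a sanity check, the change of variables used at the end of the proof can be verified numerically. In the Python sketch below, an arbitrary sample matrix stands in for $(\refl {a_1} n m \derAD mn)^{-1 } (\refl {b_1} m n \derAD nm)^{-1 }$ (whose actual entries we do not compute here); the sketch checks that the right action on the boundary coordinate agrees with the left linear fractional action of the inverse matrix on $\cot\theta$.
\begin{verbatim}
import math

# Sample entries (a b; c d), a stand-in for the derivative of the
# renormalization element (not computed here), with determinant 1.
a, b, c, d = 2.0, 1.0, 1.0, 1.0

theta = 0.3                        # a test direction
cot = math.cos(theta) / math.sin(theta)

x = -1.0 / cot                     # boundary endpoint of r_theta
x_new = (a * x + c) / (b * x + d)  # right action on the boundary
cot_new = -1.0 / x_new             # cotangent of the new direction

# Left linear fractional action of the inverse matrix (d -b; -c a).
predicted = (d * cot - b) / (-c * cot + a)

print(abs(cot_new - predicted) < 1e-12)   # True
\end{verbatim}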
As a consequence of Proposition \ref{CFandcuttseq} and the correspondence between itineraries and sequences of admissible sectors given by Proposition \ref{prop:itineraries_vs_sectors}, we hence also have the following.
\begin{corollary}\label{l_vs_diagrams}
Let $w$ be a non-periodic cutting sequence of a bi-infinite linear trajectory on $\M mn$ in a direction $\theta$ in $\Sec 0 mn$. Let
$(a_k)_k \in \{1, \dots , m-1\}^\mathbb{N}$ and $(b_k)_k \in \{1, \dots , n-1\}^\mathbb{N}$ be the pair of sequences of admissible sectors associated to $w$ (see Definition \ref{def:seq_sectors}).
Then the labeling $l(p_\theta)$ of the combinatorial geodesic $p_\theta $ approximating $r_\theta$ is $l(p_\theta)= (0, a_0, b_0, a_1, b_1, \dots )$.
\end{corollary}
\subsubsection*{Derived cutting sequences and vertices on the combinatorial geodesic.}
In this section we will show that the sequence of vertices of the combinatorial geodesic $p_\theta$ has a geometric interpretation which helps to understand derivation on cutting sequences. More precisely,
if $w$ is a cutting sequence of a trajectory $\tau$ in direction $\theta$, let $r_{\theta}$ be the geodesic ray which contracts the direction $\theta$ given in (\ref{raydef}) and let $p_\theta$ be the associated combinatorial geodesic, i.e. the path on $\Tree{m}{n}$ that we defined above. Recall that given a cutting sequence $w$ on $\M mn $ of a trajectory in direction $\theta$, in \S~\ref{sec:sectors_sequences} we recursively defined its {sequence of derivatives} $(w^k)_k$, obtained by alternately deriving it and normalizing it, see Definition~\ref{def:derivatives}. We will show below that these derived sequences can be seen as cutting sequences of the same trajectory with respect to a sequence of polygonal decompositions of $\M mn$ determined by the vertices of the combinatorial path $p_\theta$, as explained below.
If the label sequence $l(p_{\theta})$ starts with $b_0, a_1, b_1, \dots ,a_l, b_l, \dots$, then for each $k\geq 1$, define the affine diffeomorphisms
$$
\Psi^k:= \begin{cases} \refl{b_0} mn (\AD m n)^{-1 } \refl {a_1} n m ( \AD nm)^{-1 } \refl {b_1} m n (\AD m n)^{-1 } \dots \refl {a_l} n m ( \AD nm)^{-1 }& \text{if }\ k=2l, \\ \refl{b_0}mn ( \AD m n)^{-1 }\refl {a_1} n m ( \AD nm)^{-1 }\refl {b_1} m n ( \AD m n)^{-1 } \dots \refl {a_l} n m ( \AD nm)^{-1 } \refl {b_l} m n ( \AD m n)^{-1 } & \text{if }\ k=2l+1. \end{cases}
$$
We will denote by $\gamma^k$ the derivative of $\Psi^k$. We claim that $\gamma^k$
acts on the right on $\mathbb{D}$ by mapping the $k^{th}$ vertex of $p_{\theta}$ back to the origin. This can be deduced from Lemma \ref{actionontreepath} for even indices, by remarking that
$\gamma^{2k} \refl {b_k} m n = \left( \refl {b_0} m n (\refl {a_1} n m \derAD mn)^{-1 } (\refl {b_1} m n \derAD nm)^{-1 } \cdots (\refl {a_k} n m \derAD mn)^{-1 } (\refl {b_k} m n \derAD nm)^{-1 } \right), $
which are the elements considered in Lemma \ref{actionontreepath} and by noticing that the additional reflection $\refl {b_k} m n$ does not change the isometry class of the final vertex. For odd indices, this can be obtained by combining Lemma \ref{actionontreepath} with the description of the action of $\derAD n m$ on the disk. We omit the details.
We now remark that $\Psi^k(\M mn)=\M mn$ when $k$ is even, while $\Psi^k(\M nm)=\M mn$ when $k$ is odd. Let us consider the marked triple $(\Psi^k)^{-1}: \M mn \to \M mn$ for $k$ even or $(\Psi^k)^{-1}: \M mn \to \M nm$ for $k$ odd. As explained at the beginning of this Appendix (see \S~\ref{Teichdisksec}), this is an affine deformation of $\M mn$ and, considering its isometry equivalence class in ${\mathcal{\tilde M}}_{I}(S)$, we can identify it with a point in the Teichm\"uller disk $\mathbb{D}$ centered at $id: \M mn \to \M mn $. The corresponding point is a vertex of level $k$ of $\Tree{m}{n}$, or more precisely, it is the $k^{th}$ vertex in the combinatorial geodesic $p_\theta$. Thus, under the identification of $\mathbb{D}$ with ${\mathcal{\tilde M}}_{I}(S)$, the vertices of the path $p_\theta$ are, in order, the isometry classes of the marked triples $[\Psi^k]$, for $k=1, 2, \dots$.
One can visualize these affine deformations by a corresponding sequence of polygonal presentations as follows.
Recall that both $\M mn$ and $\M nm$ are equipped for us with a semi-regular polygonal presentation, whose sides are labeled by the alphabets $\LL mn$ and $\LL nm$ respectively as explained in \S~\ref{howtolabel}. For every $k\geq 1$, let $\mathcal{P}^k$ be the image in $\M mn$ under the affine diffeomorphism $\Psi^k$ of the polygonal presentation of $\M mn$ if $k$ is even or of $\M nm $ if $k$ is odd. This polygonal decomposition $\mathcal{P}^k$ carries furthermore a labeling of its sides by $\LL mn$ or $\LL nm$ (according to the parity of $k$) induced by $\Psi^k$: if for $k$ even (respectively $k$ odd) a side of $\M mn$ (respectively $\M nm$) is labeled by $i \in \LL mn$ (respectively by $i \in \LL nm$), let us also label by $i$ its image under $\Psi^k$. This gives a labeling of the sides of $\mathcal{P}^k$ by $\LL mn$ for $k$ even or by $\LL nm$ for $k$ odd, which we call the \emph{labeling induced by} $\Psi^k$.
Thus the sequence of vertices in $p_\theta$ determines a sequence of affine deformations of $\M mn$ and a sequence $(\mathcal{P}^k)_k$ of labeled polygonal decompositions. The connection between $(\mathcal{P}^k)_k$ and the sequence of derived cutting sequences (see Definition~\ref{def:derivatives}) is the following.
\begin{prop}\label{wknormalized}
Let $w, \theta$ and $\mathcal{P}^k$ be as above. The $k^{th}$ derived sequence $w^k$ of the cutting sequence $w$ of a trajectory on $\M mn$ is the cutting sequence of the same trajectory with respect to the labels of the sides of the polygonal decompositions $\mathcal{P}^k$ with the labeling induced by $\Psi^k$.
\end{prop}
\noindent Before giving the proof of Proposition \ref{wknormalized}, let us remark that if we think of $\mathcal{P}^k$ as a collection of polygons in $\mathbb{R}^2$ obtained by linearly deforming the semi-regular polygonal presentation of $\M mn$ if $k$ is even or of $\M nm $ if $k$ is odd by the linear action of
$\gamma^k$, as $k$ increases the polygons in these decompositions become more and more stretched in the direction $\theta$, meaning that the directions of the sides of polygons tend to $\theta$.
This can be checked by first reflecting by $\refl{b_0}mn$ to reduce to the $b_0=0$ case and then by verifying that the sector of directions which is the image of $ \Sec 0 mn$ under the projective action of $\gamma^k $ shrinks to the point corresponding to the line in direction $\theta$. This distortion of the polygons corresponds to the fact that, as $k$ increases, a fixed trajectory hits the sides of $\mathcal{P}^k$ less and less often, which is reflected in the fact that labels are erased when deriving a sequence.
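This projective contraction is a general feature of products of matrices expanding a fixed direction, and can be observed numerically. In the following sketch, a sample hyperbolic matrix plays the role of $\gamma^k$ (the actual matrices depend on the generators $\derAD mn$ and the reflections, which we have not written out explicitly); the width of the image of the sector $[0,\pi/3]$ visibly shrinks as the power grows.
\begin{verbatim}
import math

# A sample hyperbolic matrix standing in for gamma^k.
g = [[2.0, 1.0], [1.0, 1.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2))
             for j in range(2)] for i in range(2)]

def image_angle(A, t):
    # Direction (mod pi) of the image of the unit vector at angle t.
    x = A[0][0] * math.cos(t) + A[0][1] * math.sin(t)
    y = A[1][0] * math.cos(t) + A[1][1] * math.sin(t)
    return math.atan2(y, x) % math.pi

gk = g
for k in range(1, 6):
    width = abs(image_angle(gk, math.pi / 3) - image_angle(gk, 0.0))
    print(k, width)    # the width of the image sector shrinks with k
    gk = mat_mul(gk, g)
\end{verbatim}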
\begin{proof}[Proof of Proposition \ref{wknormalized}]
Let $\tau$ be the trajectory whose cutting sequence is $w$.
To prove that $w^k$ is the cutting sequence of $\tau$ with respect to $\mathcal{P}^k $, one can equivalently apply the affine diffeomorphism $\left(\Psi^k\right)^{-1}$ and prove that $w^k$ is the cutting sequence of the trajectory $\left(\Psi^k\right)^{-1} \tau$ (which belongs to $\M mn$ if $k$ is even and $\M nm$ if $k$ is odd) with respect to the semi-regular polygonal presentation of $\M mn$ for $k$ even or $\M nm$ for $k$ odd. Let us show this by induction on $k$. Set $\tau^0:=\tau$ and for $k>0$ set
$$\tau^k:= (\Psi^k)^{-1} \tau = \begin{cases}
(\AD nm \refl {a_l} nm ) \dots (\AD mn \refl {b_1} m n )(\AD nm \refl {a_1} n m )(\AD mn \refl {b_0} m n) \tau & \text{if} \ k=2l \ \text{is \ even}, \\
(\AD mn \refl {b_l} mn ) (\AD nm \refl {a_l} nm ) \dots (\AD mn \refl {b_1} m n )(\AD nm \refl {a_1} n m )(\AD mn \refl {b_0} m n ) \tau
& \text{if} \ k=2l+1 \ \text{is \ odd} \end{cases}
$$
(note that $\refl {i} n m$ are reflections and hence equal to their inverses). The base of the induction for $k=0$ holds simply because $w$ is the cutting sequence of $\tau$.
In particular, note that by definition we have that
\begin{equation}\label{recursivedef}
\tau^{k+1} = \begin{cases} (\AD nm \refl {a_l} nm) \tau^{k} & \text{if} \ k=2l-1 \ \text{is \ odd}, \\ (\AD mn \refl {b_l} mn ) \tau^{k} & \text{if} \ k =2l \ \text{is \ even}. \end{cases}
\end{equation}
Assume that $w^k$ is the cutting sequence of $ \tau^k$ with respect to either $\M mn$ or $\M nm$ according to the parity of $k$ and let us prove that the same holds for $k+1$.
Since $l(p_{\theta}) = (b_0, a_1, b_1, \dots ,a_l, b_l, \dots)$, by Corollary \ref{l_vs_diagrams}
$(a_l)_l, (b_l)_l$ is the pair of sequences of admissible sectors for $w$. Thus (recalling Definition~\ref{def:seq_sectors}), we know that $w^k$ is admissible in sector $\Subsec {b_l} mn$ if $k=2l$ is even, or in $\Subsec {a_l} nm$ if $k=2l-1$ is odd. Thus, if $k=2l$ is even, $\Norm mn w^k = \perm {b_l} m n w^k$ is the cutting sequence of the reflected trajectory $\refl {b_l} m n \tau^k$, while, if $k=2l-1$ is odd, $\Norm nm w^k= \perm {a_l} nm w^k$ is the cutting sequence of $\refl {a_l} nm \tau^k$. By Definition \ref{def:derivatives}, $w^{k+1}$ is equal to $D nm (\Norm nm w^k)$ for $k=2l-1$ odd (respectively $D mn (\Norm mn w^k)$ for $k=2l$ even). Thus, by the geometric interpretation of derivation for trajectories in the standard sectors given by Lemma \ref{lem:derivation_interpretation}, $w^{k+1}$ is the cutting sequence of the same linear trajectory $ \refl {a_l} n m \ \tau^k$ (respectively $\refl {b_l} mn \ \tau^k$) with respect to the preimage by $\AD nm$ (respectively $\AD mn$) of the semi-regular polygonal presentation of $\M mn$ (respectively $\M nm$). Equivalently, by applying $\AD nm$ for $k$ odd (respectively $\AD mn$ for $k$ even), this gives that $w^{k+1}$ is also the cutting sequence of
$(\AD nm \refl {a_l} n m )\ \tau^k$ with respect to $\M mn$ if $k=2l-1$ is odd (respectively of $(\AD m n \refl {b_l} mn) \ \tau^k$ with respect to $\M nm $ if $k=2l$ is even). Thus, by \eqref{recursivedef}, this shows that $w^{k+1}$ is exactly the cutting sequence of $\tau^{k+1}$ in $\M mn$ if $k+1$ is even or in $\M nm$ if $k+1 $ is odd, with respect to the corresponding semi-regular presentation, as desired.
This concludes the proof by induction.
\end{proof}
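The bookkeeping of the compositions in the proof can also be made concrete symbolically. In the Python fragment below (the string names for the affine maps are ours, purely for illustration), the recursion \eqref{recursivedef} is unrolled to produce the word of affine maps sending $\tau$ to $\tau^k$; the output for $k=4$ can be compared by eye with the closed formula defining $\tau^k$ in the proof.
\begin{verbatim}
def tau_word(k):
    # Word of affine maps, with tau^k = word[0] word[1] ... tau,
    # obtained by unrolling tau^{k+1} = (AD refl) tau^k.
    word = []
    for step in range(k):
        if step % 2 == 0:    # step = 2l is even
            word = ["AD_mn", "refl_b%d_mn" % (step // 2)] + word
        else:                # step = 2l - 1 is odd
            word = ["AD_nm", "refl_a%d_nm" % ((step + 1) // 2)] + word
    return word

print("  ".join(tau_word(4)))
# AD_nm  refl_a2_nm  AD_mn  refl_b1_mn  AD_nm  refl_a1_nm  AD_mn  refl_b0_mn
\end{verbatim}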
We conclude by remarking that, as was done in \cite{SU2} for the octagon Teichm\"uller disk and octagon Farey map, it is possible to use the hyperbolic picture introduced in this Appendix to define a cross section of the geodesic flow on the Teichm\"uller orbifold of a \bm surface. More precisely, one can consider a section corresponding to geodesics which have forward endpoint in the $0$-arc of $\xi_1$ and backward endpoint in the complementary arc of $\xi_1$. The Poincar{\'e} map of the geodesic flow on this section provides a geometric realization of the natural extension of the \bm Farey map $\FF m n$. More precisely, one can define a \emph{backward} \bm Farey map which can be used to define the natural extension and describes the behavior of the backward endpoint under the Poincar{\'e} map. The natural extension can be then used to explicitly compute an invariant measure for $\FF mn$ which is absolutely continuous with respect to the Lebesgue measure but infinite. In order to have a finite absolutely continuous invariant measure, one can accelerate branches of $\FF mn$ which correspond to the parabolic fixed points of $\FF mn$ at $0$ and $\theta= \pi/n$. We leave the computations to the interested reader, following the model given by \cite{SU2}.
\section{Transition and derivation diagrams} \label{transitiondiagrams}
We now return to the polygon decomposition of the \bm surfaces and to our goal of characterizing all cutting sequences. First, in \S \ref{howtolabel} we will describe how to label the edges of the semi-regular presentation of the \bm surface. We then show in \S \ref{sec:labelingHooper} how this labeling induces a labeling on the corresponding Hooper diagram. In \S\ref{sec:transition_diagrams} we define \emph{transition diagrams}, which are essential for understanding cutting sequences. In \S\ref{sec:admissibility} we define \emph{admissible} cutting sequences, generalizing the work of Series and Smillie-Ulcigrai discussed in \S\ref{sec:Sturmian}-\ref{sec:polygons}. In \S\ref{sec:labeled_def} we define \emph{derivation diagrams}, which are the key tool we will use to characterize cutting sequences on \bm surfaces. In \S\ref{sec:structure}, we prove our structure theorem for derivation diagrams for trajectories in $\Sec 0mn$, which is the main result of this section. In \S\ref{sec:normalization}, we describe how to \emph{normalize} trajectories in other sectors to $\Sec 0mn$. In \S\ref{sec:other_sectors}, we describe transition diagrams for trajectories in other sectors.
\subsection{Edge labeling}\label{howtolabel}
To label the edges of the \bm surfaces, we use a ``\emph{zig-zag}'' pattern as follows. First, we label the lower-right diagonal edge of $P(0)$ with a $1$, and then go horizontally left to the lower-left diagonal edge of $P(0)$ and label it with a $2$ (see Figure \ref{labeling}). Then we go up diagonally to the right (at angle $\pi/n$) and label the next edge $3$, and then go horizontally left and label that edge $4$, and so on until label $n$. The $n$ edges of $P(1)$ that are identified with these edges have the same labels.
Now we label the remaining $n$ edges of $P(1)$. If the bottom horizontal edge is already labeled (as in Figure \ref{labeling}a below), we start with the lowest-right diagonal edge and label it $n+1$, and then go horizontally to the left and label that edge $n+2$, and then zig-zag as before. If the bottom horizontal edge is not yet labeled (as in Figure \ref{labeling}b below), we label it $n+1$, and then go diagonally up to the right and label that edge $n+2$, and so on in the zig-zag. We do the same for $P(2)$ and the remaining polygons until all the edges are labeled.
\begin{figure}[!h]
\centering
\includegraphics[width=0.8\textwidth]{3443labels.pdf}
\begin{quote}\caption{The edge labelings for $\M 34$ and $\M 43$ \label{labeling}} \end{quote}
\end{figure}
We choose to label the edges in this way because it makes the \emph{transition diagrams} easy to describe, as we will see. We can first reap the benefits of this labeling system by labeling the edges of the Hooper diagram.
\subsection{Labeling the Hooper diagram}\label{sec:labelingHooper}
Each edge of the Hooper diagram $\G mn$ corresponds to the intersection of a horizontal cylinder and a vertical cylinder, which is a basic rectangle in the orthogonal decomposition. Each non-degenerate basic rectangle is crossed by an edge of either $\M mn$ or $\M nm$: a negative diagonal for the (red) edges of $\M mn$ or a positive diagonal for the (green) edges of $\M nm$. We can label the edges of the Hooper diagram with the label of the edge that crosses the corresponding basic rectangle.
\begin{proposition}\label{hd-snake}
In $\G mn$, the labels are as follows:
The upper-left horizontal auxiliary edge is edge ${\rd 1}$ of $\M mn$, and thereafter the horizontal edges are labeled ${\rd 2,3,4}$, etc., ``snaking'' horizontally back and forth from top to bottom, as shown in Figure \ref{hd-labeled}b.
The upper-left vertical auxiliary edge is edge ${\gr 1}$ of $\M nm$, and thereafter the vertical edges are labeled ${\gr 2,3,4}$, etc., ``snaking'' vertically up and down from left to right, as shown in Figure \ref{hd-labeled}b.
\end{proposition}
In Figure \ref{hd-labeled}a, ``up'' and ``down'' are reversed because of the conventions in the Hooper diagram, but we choose to orient the $1$s in the upper left; see Remark \ref{hd-reflect}.
\begin{proof}
We begin with an Hooper diagram, including the edges that are either horizontally degenerate, or vertically degenerate. (We omit edges that are completely degenerate, because they are points and thus do not have polygon edges associated with them.) This is the black part of the diagram in Figure \ref{hd-labeled}. We will determine where the (colored) edge labels go on the diagram in several steps.
Recall that the white vertices represent horizontal cylinders, with the arrows indicating movement to the right, and the black vertices represent vertical cylinders, with the arrows indicating movement up.
\emph{Step 1}: The (red) edges of $\M mn$ and the (green) edges of $\M nm$ comprise the horizontal and vertical sets of edges of the Hooper diagram. We can determine which is which by counting: $\M mn$ has $n(m-1)$ edges and $\M nm$ has $m(n-1)$ edges. If $m=n$, the diagram is symmetric, so it does not matter which is which.
For our example, $\M 43$ has $9$ edges, so they are the horizontal edges in Figure \ref{hd-labeled}a, and $\M 34$ has $8$ edges, so they are the vertical edges in Figure \ref{hd-labeled}a. This means that the horizontal edges will have red edge labels, and the vertical edges will have green edge labels.
\emph{Step 2}: We determine where to put the edge label ${\rd 1}$. ${\rd 1}$ is a degenerate edge, so it must be one of the outer (dotted) diagram edges. ${\rd 1}$ is in $\M mn$, so it must be a horizontal diagram edge. ${\rd 1}$ is parallel to the vertical cylinder decomposition, so it lies in a horizontal cylinder, so it emanates from a white vertex. When we go against the arrow direction from ${\rd 1}$, we get to ${\gr 1}$, which is also a degenerate edge, so it must be on a corner (see Figure \ref{hd34aux}).
All of these narrow our choices to just one, or sometimes two when the diagram has extra symmetry; in that case, the two choices are equivalent. In our example, there is only one choice, the edge labeled ${\rd 1}$ in Figure \ref{hd-labeled}.
\begin{figure}[!h]
\centering
\includegraphics[width=350pt]{hd-labeled.png}
\begin{quote}\caption{The labeled Hooper diagram for $\M 43$, and the general form (see Remark \ref{reflect}). We do not include the bottom edge, the right edge, or the bottom-right corner of the general form in Figure \ref{hd-labeled}, because the edge labels and the vertex colors depend on the parity of $m$ and $n$, so it is clearer to look at the example. \label{hd-labeled}} \end{quote}
\end{figure}
\emph{Step 3}: We determine where to place edges ${\gr 1},{\rd 2,\ldots,n,n+1}$.
From edge ${\rd 1}$ in $\M mn$, we go horizontally to the left to get to ${\rd 2}$, and in between we pass through ${\gr 1}$ (see Figure \ref{hd34aux}). On the Hooper diagram, from edge ${\rd 1}$ we go against the arrows around the white vertex, and label the vertical edge ${\gr 1}$ and the next horizontal edge ${\rd 2}$.
From edge ${\rd 2}$ in $\M mn$, we go in the direction of the vertical cylinder decomposition to get to ${\rd 3}$, so we go with the arrows around the black vertex and label the next horizontal edge ${\rd 3}$. In our example $\M 43$, this is the end of the row; for $n>3$, we continue until we get to ${\rd n}$, going left and up in the polygons and correspondingly going around the white and black vertices in the Hooper diagram.
To get from edge ${\rd n}$ to ${\rd n+1}$, in the polygons we go up and right for $n$ odd, and left and down for $n$ even, and we follow the arrows in the Hooper diagram to do the same. For our example $\M 43$, from ${\rd 3}$ to ${\rd 4}$, we go up and right, so in the Hooper diagram we follow the arrow around the black vertex to the vertical edge, and then at the other end of the vertical edge we follow the arrow around the white vertex, and label the horizontal edge ${\rd 4}$. The same is true for any odd $n$. When $n$ is even, we follow the same pattern on the Hooper diagram to go left and down and label edge ${n+1}$ in the same location.
\emph{Step 4}: We complete the labels of $\M mn$ and also label with $\M nm$.
The construction in Step $3$ shows why moving horizontally across a line in the Hooper diagram corresponds to the zig-zag labeling in each polygon of the \bm surface: going around white and black vertices corresponds to alternately going horizontally and vertically in the polygons. To get from one horizontal line to the next in the Hooper diagram, we follow the direction in the polygons. Thus, the ``snaking'' labeling in the Hooper diagram corresponds to the labeling described in Section \ref{howtolabel}.
We already placed edge ${\gr 1}$ of $\M nm$, and we follow exactly the same method for the rest of the edges as we just described for $\M mn$. This leads to the overlaid ``snaking'' patterns shown in Figure \ref{hd-labeled}.
\end{proof}
\begin{remark}\label{hd-reflect}
When we defined the Hooper diagrams in Section \ref{hooperdiagrams}, we followed Hooper's convention of the arrangement of white and black vertices and arrow directions. In fact, this choice is somewhat arbitrary; the diagrams lead to the same polygon construction if we rotate them by a half-turn, or reflect them horizontally or vertically. Using Hooper's convention, along with our left-to-right numbering system in the polygons where we first label $P(0)$ with $1,\ldots,n$ and so on, leads to the edges ${\rd1}$ and ${\gr 1}$ being in the lower-left corner of the labeled Hooper diagram, with the numbering going up. We prefer to have the $1$s in the upper-left corner with the numbers going down, so after we finish labeling it, we will reflect the diagram horizontally, as in Figure \ref{hd-labeled}b for the general form. This choice is merely stylistic.
\end{remark}
\subsection{Transition diagrams: definitions and examples}\label{sec:transition_diagrams}
In this section we define \emph{transition diagrams}, which describe all possible transitions between edge labels for trajectories that belong to a given sector of directions (see Definition \ref{def:transition} below).
We will first describe in this section transition diagrams for cutting sequences of trajectories whose direction belongs to the sector $[0,\pi/n]$. Then, exploiting the symmetries of the polygonal presentation of \bm surfaces, we will describe transition diagrams for the other sectors of width $\pi/n$, see \S\ref{sec:other_sectors}.
\begin{definition} \label{sectordef}
For $i=0,\ldots,2n-1$, let $\Sec i m n = [i\pi/n,(i+1)\pi/n]$. We call \mbox{$\Sec 0 m n = [0,\pi/n]$} the \emph{standard sector}. For a trajectory $\tau$, we say $\tau \in \Sec i m n$ if the angle of the trajectory is in $\Sec i m n$.
\end{definition}
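Determining the sector of a given direction is then a one-line computation, as in the following Python sketch (an illustration under the convention just fixed; the function name is ours):
\begin{verbatim}
import math

def sector_index(theta, n):
    # Index i with theta in [i*pi/n, (i+1)*pi/n]; boundary directions
    # k*pi/n lie in two sectors, and this returns the larger index.
    return min(int(theta * n / math.pi), 2 * n - 1)

print(sector_index(0.5, 3))    # 0, since 0.5 < pi/3
print(sector_index(1.2, 3))    # 1, since pi/3 <= 1.2 < 2*pi/3
\end{verbatim}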
Let us first describe the \emph{transitions} that are allowed in each sector:
\begin{definition}\label{def:transition}
The \emph{transition} $n_1 n_2$ is \emph{allowed} in sector $\Sec i m n$ if some trajectory in $\Sec i m n$ cuts through edge $n_1$ and then through edge $n_2$.
\end{definition}
The main result of this section (Theorem \ref{tdtheorem}) is the description of the structure of the diagrams which describe all possible transitions in $\Sec 0 mn$ for $\M mn$.
\begin{definition}
The \emph{transition diagram} $\T i m n$ for trajectories in $\Sec i m n $ on $\M m n$ is a directed graph whose vertices are edge labels of the polygon decomposition of the surface, with an arrow from edge label $n_1$ to edge label $n_2$ if and only if the transition $n_1 n_2$ is allowed in $\Sec i m n $.
\end{definition}
\begin{example}
We construct $\T 0 43$ which is for sector \mbox{$\Sec 0 4 3 =[0,\pi/3]$} (Figure \ref{34td}). A trajectory passing through edge ${\rd 1}$ can then go horizontally across through edge ${\rd 2}$ or diagonally up through edge ${\rd 6}$, so we draw arrows from \mbox{${\rd 1}\to {\rd 2}$} and \mbox{${\rd 1}\to {\rd 6}$}. A trajectory passing through edge ${\rd 2}$ can go across through edge ${\rd 1}$, or up through edge ${\rd 3}$, so we draw arrows \mbox{${\rd 2}\to {\rd 1}$} and \mbox{${\rd 2}\to {\rd 3}$}. From edge ${\rd 3}$, we can only go up to edge ${\rd 4}$, so we draw \mbox{${\rd 3}\to {\rd 4}$}. The rest of the diagram is constructed in the same manner. We do \emph{not} draw (for example) an arrow from ${\rd 3}$ to ${\rd 6}$, because such a trajectory is not in $\Sec 0 43$ (it is in $\Sec 143$).
\end{example}
\begin{figure}[!h]
\centering
$\T 0 43$: \begin{tikzcd}
{\color{red}1}\arrow[bend right]{r} \arrow{d}
&{\color{red}2} \arrow[bend right]{l} \arrow[bend left]{r}
&{\color{red}3} \arrow[bend left]{l} \arrow{d} \\
{\color{red}6}\arrow[bend right]{r} \arrow{d}
&{\color{red}5} \arrow[bend right]{l} \arrow[bend left]{r} \arrow{u}
&{\color{red}4} \arrow[bend left]{l} \arrow{d} \\
{\color{red}7}\arrow[bend right]{r}
&{\color{red}8} \arrow[bend right]{l} \arrow[bend left]{r} \arrow{u}
&{\color{red}9} \arrow[bend left]{l} \\
\end{tikzcd} \ \ \ \ \
$\T 0 34$: \begin{tikzcd}
{\gr1}\arrow[bend right]{r} \arrow{d}
&{\gr2} \arrow[bend right]{l} \arrow[bend left]{r}
&{\gr3} \arrow[bend left]{l} \arrow[bend right]{r} \arrow{d}
&{\gr4} \arrow[bend right]{l} \\
{\gr8}\arrow[bend right]{r}
&{\gr7} \arrow[bend right]{l} \arrow[bend left]{r} \arrow{u}
&{\gr6} \arrow[bend left]{l} \arrow[bend right]{r}
&{\gr5} \arrow[bend right]{l} \arrow{u} \\
\end{tikzcd}
\begin{quote}\caption{Transition diagrams for the standard sector \label{34td}} \end{quote}
\end{figure}
\begin{example}
In Figure \ref{34td}, we also show $\T 0 34$, which is constructed in the same way for trajectories in sector \mbox{$\Sec 0 34 = [0,\pi/4]$} on $\M 34$.
\end{example}
We chose to label the edges as we did so that the numbers in the transition diagrams ``\emph{snake}'' back and forth across the table in this convenient way, just as in the Hooper diagram. The arrows are always as in Figure \ref{34td}: The arrows in the upper-left corner of every diagram are exactly as in the figure, and if $m$ and $n$ are larger, the same alternating pattern is extended down and to the right. We prove this general structure in the main result of this section, Theorem \ref{tdtheorem}.
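Since the pattern is purely combinatorial, it can be generated mechanically. The following Python sketch (an illustration of the statement of Theorem \ref{tdtheorem}, not part of its proof) builds the arrows of the transition diagram from the snaking grid; for $m=4$, $n=3$ it reproduces exactly the arrows of $\T 0 43$ in Figure \ref{34td}.
\begin{verbatim}
def transition_arrows(m, n):
    # Snaking grid of edge labels: (m-1) rows of n labels, row 0
    # left-to-right, row 1 right-to-left, and so on.
    grid = [[r * n + (c + 1) if r % 2 == 0 else r * n + (n - c)
             for c in range(n)] for r in range(m - 1)]
    arrows = set()
    for r in range(m - 1):
        for c in range(n):
            if c + 1 < n:              # both horizontal arrows
                arrows.add((grid[r][c], grid[r][c + 1]))
                arrows.add((grid[r][c + 1], grid[r][c]))
            if r + 1 < m - 1:          # vertical arrows:
                if (c + 1) % 2 == 1:   # down in odd-numbered columns
                    arrows.add((grid[r][c], grid[r + 1][c]))
                else:                  # up in even-numbered columns
                    arrows.add((grid[r + 1][c], grid[r][c]))
    return arrows

print(sorted(transition_arrows(4, 3)))
\end{verbatim}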
\subsection{Admissibility of sequences}\label{sec:admissibility}
Consider the space ${\LL mn}^{\mathbb{Z}}$ of bi-infinite words $w$ in the symbols (edge label numbers) of the alphabet ${\LL mn}$ used to label the edges of the polygon presentation of $\M mn$.
\begin{definition}\label{admissibledef} Let us say that the word $w$ in ${\LL mn}^{\mathbb{Z}}$ is \emph{admissible} if there exists a diagram $\T i m n$ for $i\in \{0, \dots, n-1\}$ such that all transitions in $w$ correspond to labels of edges of $\T i m n$. In this case, we will say that $w$ is \emph{admissible in (diagram) $\T i m n$}. Equivalently, the sequence $w$ is admissible in $\T i m n$ if it describes an infinite path on $\T i m n$. Similarly, a finite word $u$ is admissible (admissible in $\T i m n$) if it describes a finite path on a diagram (on $\T i m n$).
\end{definition}
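Admissibility of a finite word is thus a purely local check, as the following Python sketch illustrates for $\T 0 43$ (the set of arrows is copied from Figure \ref{34td}; the function name is ours):
\begin{verbatim}
# Arrows of the transition diagram for M(4,3) in the standard sector.
ARROWS = {(1, 2), (2, 1), (2, 3), (3, 2), (6, 5), (5, 6), (5, 4),
          (4, 5), (7, 8), (8, 7), (8, 9), (9, 8), (1, 6), (6, 7),
          (5, 2), (8, 5), (3, 4), (4, 9)}

def is_admissible(word, arrows=ARROWS):
    # A finite word is admissible iff each consecutive pair of
    # edge labels is an allowed transition of the diagram.
    return all((x, y) in arrows for x, y in zip(word, word[1:]))

print(is_admissible([1, 2, 3, 4, 5, 2]))   # True: a path on the diagram
print(is_admissible([3, 6]))               # False: no arrow from 3 to 6
\end{verbatim}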
Admissibility is clearly a necessary condition for a sequence to be a cutting sequence:
\begin{lemma}\label{admissiblelemma}
Cutting sequences are admissible.
\end{lemma}
\begin{proof}
Let $w$ be a cutting sequence of a linear trajectory $\tau$ on $\M mn$. Up to orienting it suitably (and reversing the indexing by $\mathbb{Z}$ if necessary) we can assume without loss of generality that its direction $\theta$ belongs to $[0, \pi]$. Then there exists some $0\leq i \leq n-1$ such that $\theta \in \Sec i m n$. Since the diagram $\T i m n$ contains by definition all transitions which can occur for cutting sequences of linear trajectories with direction in $\Sec i m n$, it follows that $w$ is admissible in $\T i m n$.
\end{proof}
We remark that some words are admissible in more than one diagram. For example, since we are using closed sectors, the cutting sequence of a trajectory in direction $k\pi/n$ is admissible both in sector $k-1$ and in sector $k$.
On the other hand, if $w$ is a non-periodic sequence, then it is admissible in a \emph{unique} diagram:
\begin{lemma}\label{lemma:uniqueness_sector}
If $w$ in ${\LL mn}^{\mathbb{Z}}$ is a \emph{non-periodic} cutting sequence of a linear trajectory on $\M mn$, then there exists a \emph{unique} $i\in \{0, \dots, n-1\}$ such that $w$ is admissible in diagram $\T i m n$.
\end{lemma}
\begin{proof}
We know that $w$ is the cutting sequence of some $\tau$ in an unknown direction $\theta$.
Let $0\leq i \leq n-1$ be so that $w$ is admissible in $\T i mn$. A priori $w$ could be admissible in some other diagram too and we want to rule out this possibility.
We are going to show that all transitions which are allowed in $\T i mn$ actually occur.
Since $w$ is non-periodic, the trajectory $\tau$ cannot be periodic. The Veech dichotomy
(see \S\ref{sec:Veech})
implies that $\tau$ is dense in $\M mn$.
Let $n_1 n_2$ be a transition allowed in $\T i mn$. This means that we can choose inside the polygons forming $\M mn $ a segment in direction $\theta$ that connects an interior point on a side labeled by $n_1$ with an interior point on a side labeled $n_2$. Since $\tau$ is dense, it comes arbitrarily close to the segment. Since by construction $\tau$ and the segment are parallel, this shows that $w$ contains the transition $n_1 n_2$.
Repeating the argument for all transitions in $\T i mn$, we get that $w$ gives a path on $\T i mn$ which goes through all arrows.
This implies that the diagram in which $w$ is admissible is uniquely determined, since one can verify by inspection that no two diagrams allow exactly the same set of transitions.
\end{proof}
\subsection{Derivation diagrams }\label{sec:labeled_def}
We now define \emph{derivation diagrams} and explain how to construct them. These diagrams, as explained in the introduction, will provide a concise way to encode the rule to derive cutting sequences. As usual, we start with a concrete example for $\M 43 $, then give the general definition and results.
As explained in Section \ref{hooperdiagrams}, the \bm surfaces $\M mn $ and $\M nm $ are cut-and-paste affinely equivalent via a diffeomorphism $\Psi_{m,n}$. Hence, we can draw a flip-sheared version of the $\M nm $ surface on the $\M mn$ polygon decomposition. This is shown for the special case of $m=4, n=3$ in Figure \ref{hd34aux}. When two edges coincide, we arrange them so that red and green edges alternate going horizontally, and also vertically (as shown in Figure \ref{hd34aux} for the example).
\begin{figure}[!h]
\centering
\includegraphics[height=140pt]{m34aug.png}
\includegraphics[height=140pt]{m43aug.png}
\begin{quote}\caption{$\M 34$ with flip-sheared edges of $\M 43$, and $\M 43$ with flip-sheared edges of $\M 34$. \label{hd34aux}} \end{quote}
\end{figure}
We add the following labeling to the transition diagram, thus making it into a derivation diagram. Recall that each arrow \mbox{$n_1\to n_2$} in the diagram represents a possible transition from edge $n_1$ to $n_2$ for a trajectory in $\Sec i m n$ in $\M m n $. We label the arrow \mbox{$n_1\to n_2$} with the edge label $n_3$ if trajectories which hit the edge $n_1$ and then the edge $n_2$ pass through some edge labeled $n_3$ of the flip-sheared $\M n m $.
It turns out that, with a suitable convention to treat degenerate cases, this definition is well posed: either \emph{every} trajectory from $n_1$ to $n_2$ passes through $n_3$, or \emph{no} trajectory from $n_1$ to $n_2$ passes through $n_3$. This will be shown below in Lemma \ref{lemma:wellposed}.
\begin{example}
Figure \ref{hd34aux} shows $\M 34$ in red with the flip-sheared edges of $\M 43$ in green, and shows $\M 43$ in green with the flip-sheared edges of $\M 34$ in red. We will construct the derivation diagram for each.
The transition diagram for $\M 34$ is as before, but now we will add arrow labels (Figure \ref{34auxtd}). A trajectory passing from {\rd 1} to {\rd 2} crosses edge {\gr 2}, so we label \mbox{ {\rd 1} $\to$ {\rd 2}} with {\gr 2}. A trajectory passing from {\rd 6} to {\rd 5} also passes through {\gr 2}, so we label \mbox{ {\rd 6} $\to$ {\rd 5}} with {\gr 2} as well. Since these arrows are next to each other, we just write one {\gr 2} and the arrows share the label. The rest of the diagram, and the diagram for $\M 43$, is constructed in the same way.
The only exceptions to this are the ``\emph{degenerate cases}'', where edges coincide. The edges that coincide here are {\rd 1} with {\gr 1}, {\rd 3} with {\gr 8}, {\rd 7} with {\gr 4}, and {\rd 9} with {\gr 5}.
Four pairs of edges coincide in this way in the four corners of every transition diagram.
\begin{figure}[!h]
\centering
\includegraphics[width=350pt]{34auxtd.png}
\begin{quote}\caption{Derivation diagrams for $\M 43$ and $\M 34$ \label{34auxtd}} \end{quote}
\end{figure}
\end{example}
In general, we adopt the following convention, which corresponds (after a shear) to Convention \ref{convention:ordered_sides} for the orthogonal presentations.
\begin{convention}\label{convention:ordered_sides_affine}
When sides of $\M mn$ and of the flip-sheared preimage of $\M nm$ by $\AD mn$ \emph{coincide}, we draw them adjacent to each other and ordered so that sides of $\M mn$ (red) and sides of $\M nm$ (green) \emph{alternate}, as shown in Figures \ref{coincide} and \ref{hd34aux} for $\M 43 $ and $\M 34$.
\end{convention}
With this convention, the following Lemma holds, which is essentially a restating of Lemma \ref{lemma:intertwined} from the orthogonal presentations:
\begin{lemma}\label{lemma:wellposed}
Consider any segment of a trajectory on $\M mn $ with direction $\theta$ in the standard sector $\Sec 0 m n$ which crosses from the side of $\M mn $ labeled $n_1$ to the side of $\M mn $ labeled $n_2$. Consider the interwoven sides of the flip-sheared copy of $\M nm$ obtained as a preimage by $\AD mn$. Then only one of the following is possible:
\begin{enumerate}
\item either no such segment crosses a side of the flip-sheared edges of $\M nm$, or
\item every such segment crosses the same side of the flip-sheared edges of $\M nm$.
\end{enumerate}
\end{lemma}
\begin{proof} Remark that the affine diffeomorphism that maps the orthogonal presentation of $\M mn$ to $\M mn$ sends negative diagonals to sides of $\M mn$, and maps the dual orthogonal presentation of $\M nm$ to the flip-sheared preimage of $\M nm$ by $\AD mn$, sending positive diagonals to the flip-sheared preimages of sides of $\M nm$. Thus, Convention \ref{convention:ordered_sides_affine} for the sides of $\M mn$ and the sides of the preimage of $\M nm$ by $\AD mn$ corresponds to Convention \ref{convention:ordered_sides} for diagonals in the orthogonal presentations. Thus, the lemma follows immediately from Lemma \ref{lemma:intertwined} for the orthogonal presentations.
\end{proof}
With the above convention (Convention \ref{convention:ordered_sides_affine}), by virtue of Lemma \ref{lemma:wellposed} the following definition is well posed.
\begin{definition} The \emph{derivation diagram} $\D 0 mn$ is the transition diagram $\T 0 mn$ for the standard sector with arrows labeled as follows.
We label the arrow \mbox{$n_1\to n_2$} with the edge label $n_3$ if all the segments of trajectories with direction in the standard sector which hit the edge $n_1$ and then the edge $n_2$ pass through some edge labeled $n_3$ of the flip-sheared $\M nm$. Otherwise, we leave the arrow \mbox{$n_1\to n_2$} without a label.
\end{definition}
In the example of derivation diagram for the surface $\M 34$ in Figure \ref{34auxtd},
one can see that the arrow labels in the example are also arranged elegantly: they snake up and down, interlaced with the edge labels in two alternating grids. The relation between the diagrams for $\M 34$ and $\M 43$ is simple as well: flip the edge labels across the diagonal, and then overlay the arrows in the standard pattern.
This structure holds for every \bm surface, as we prove in the following main theorem of this section:
\begin{theorem}[Structure theorem for derivation diagrams] \label{tdtheorem}
The structure of the derivation diagram for $\M mn$ in sector $[0,\pi/n]$ is as follows:
\begin{itemize}
\item The diagram consists of $n$ columns and $m-1$ rows of edge labels of $\M mn$.
\item The edge labels start with {\rd 1} in the upper-left corner and go left to right across the top row, then right to left across the second row, and so on, ``snaking'' back and forth down the diagram until the last edge label {\rd n(m-1)} is in the left or right bottom corner, depending on parity of $m$.
\item Vertical arrows between edge labels go down in odd-numbered columns and up in even-numbered columns.
\item Vertical arrows have no arrow labels.
\item A pair of left and right horizontal arrows connects every pair of horizontally-adjacent edge labels.
\item Horizontal arrows have arrow labels, which are edge labels of $\M nm$.
\end{itemize}
For convenience, we choose to arrange these arrow pairs so that the top arrow goes left and the bottom arrow goes right for odd-numbered columns of arrows, and vice-versa in even-numbered columns of arrows. With this arrangement, the arrow labels are as follows:
\begin{itemize}
\item The top-left arrow label is {\gr 1}, and then going down, the next two arrows are both labeled {\gr 2}, and the rest of the pairs are numbered consecutively, until the last remaining arrow is labeled {\gr n}. Then the arrow to the right is labeled {\gr n+1}, and going up the next two arrows are both labeled {\gr n+2}, and so on, ``snaking'' up and down across the diagram until the last arrow is labeled {\gr m(n-1)}.
\end{itemize}
\end{theorem}
There are two examples of derivation diagrams in Figure \ref{34auxtd}, and the general form is shown in Figure \ref{gentransdiag}. Essentially, the two transition diagrams in Figure \ref{34td} are laid over each other as overlapping grids.
\begin{figure}[!h]
\centering
\includegraphics[width=250pt]{gentransdiag.png} \\
\begin{quote}\caption{The form of a derivation diagram for $\M mn$ \label{gentransdiag}} \end{quote}
\end{figure}
Again, we omit the right and bottom edges of the diagram because their labels depend on the parity of $m$ and $n$; to understand the full diagram, it is clearer to look at an example such as Figure \ref{34auxtd}.
\subsection{The structure theorem for derivation diagrams }\label{sec:structure}
In this section we prove Theorem \ref{tdtheorem} describing the structure of derivation diagrams.
For the proof
we will use the \emph{stairs} and \emph{hats} that we defined in Section \ref{stairsandhats}.
Let us recall that each edge in the Hooper diagram corresponds to a basic rectangle, which is the intersection of two cylinders, as explained in Section \ref{hoogen}. Each stair configuration of basic rectangles corresponds exactly to the four possible hat configurations, see Lemma \ref{hatlemma} and also Figure \ref{hat-cases}. Recall that the \emph{middle edge} is the one that is numbered in Figure \ref{hat-cases}, and called $a$ in Figure \ref{hatarrow} below.
We will now describe the labeling on these hats that corresponds to a given labeling by $a,b,c,d,e,f$ of the basic rectangles in the stairs. Each basic rectangle either contains an edge of $\M mn$ (red, a negative diagonal) or an edge of $\M nm$ (green, a positive diagonal).
Thus, giving a labeling of diagonals is equivalent to giving a labeling of basic rectangles. Furthermore, if we work with augmented diagrams and degenerate basic rectangles, each edge of the \bm surface and of its dual \bm surface is in correspondence with a diagonal (positive or negative) of a basic rectangle (possibly degenerate).
Let us first establish:
\begin{lemma}\label{hatdirection}
Hats are right-side-up when the middle edge is in an even-numbered column, and upside-down when the middle edge is in an odd-numbered column.
\end{lemma}
\begin{proof}
Recall from Definition \ref{def:hat} that we have defined a \emph{hat} in such a way that the arrows from the Hooper diagram always go from the middle edge of the hat to each of the adjacent vertical edges -- from edge $a$ to edges $b$ and $d$ as shown in Figure \ref{hat}. Since the arrows go down in even-numbered columns and up in odd-numbered columns of the Hooper diagram, as discussed in \S \ref{hoopertobm} and shown in Figure \ref{augmented}, the directions of the hats also alternate accordingly. When we perform the reflection discussed in Remark \ref{hd-reflect}, the directions are reversed, as desired.
\end{proof}
The following Lemma is key to proving the structure theorem, since it describes the \emph{local} structure of a transition diagram that corresponds to a (non-degenerate) hat/stair configuration. (Recall the cases $1-4$ for hats from the Degenerate Hat Lemma \ref{degeneratehat}.)
\begin{lemma} \label{degeneratearrow} Consider an edge $a$ of the \bm surface $\M mn$.
If the corresponding edge $a$ of $\G mn$ is the middle edge of a hat in case $1$, with adjacent edges $b,c,d,e,f$ as positioned in Figure \ref{hatarrow}a, then the allowed transitions starting with $a$ are as shown in Figure \ref{hatarrow}c.
\begin{figure}[!h]
\centering
\includegraphics[width=400pt]{hat-1.png}
\begin{quote}\caption{(a) a hat in case $1$ (b) the corresponding stair diagram (c) the transitions from edge $a$ \label{hatarrow}} \end{quote}
\end{figure}
Furthermore, if $a$ is the middle edge of a hat in any of the degenerate cases $2-4$, the corresponding arrow picture is a subset of that picture, with exactly the edges that appear in the degenerate hat, as shown in Figure \ref{deghatarr}.
\begin{figure}[!h]
\centering
\includegraphics[width=300pt]{hat-234.png}
\begin{quote}\caption{The degenerate hats of cases $2-4$, and their corresponding transitions \label{deghatarr}} \end{quote}
\end{figure}
\end{lemma}
\begin{proof}
First, we consider the case where $a$ is the middle edge of a hat in case $1$ (Figure \ref{hatarrow}a).
Assume that edges $a,b,c$ in the Hooper diagram are adjacent in a vertical cylinder, so then $a,d,f$ are adjacent in a horizontal cylinder. Then the stair corresponding to this hat is as in Figure \ref{hatarrow}b.
Now we can determine the possible transitions from edge $a$ to other edges of $\M mn$ -- in this case, edges $c,e$ and $f$. Going vertically, $a$ can go to $c$ through $b$; going horizontally, $a$ can go to $f$ through $d$, and going diagonally, $a$ can go to $e$ without passing through any edge of $\M nm$. We record this data with the arrows in Figure \ref{hatarrow}c.
If instead the edges $a,b,c$ are adjacent in a horizontal cylinder, and $a,d,f$ are adjacent in a vertical cylinder, the roles of $b$ and $d$ are exchanged, and the roles of $c$ and $f$ are exchanged, but the allowed transitions and arrows remain the same.
Now we consider the case where $a$ is the middle edge of a hat in cases $2-4$. The analysis about basic rectangles and diagonals is the same as in case $1$; the only difference is that the basic rectangles corresponding to auxiliary (dotted) edges are degenerate, and the basic rectangles corresponding to missing edges are missing.
The degeneracy of the rectangles does not affect the adjacency, so the degenerate edges act the same as normal edges, and remain in the arrow diagram. The missing edges clearly cannot be included in transitions, so these are removed from the arrow diagram (Figure \ref{deghatarr}).
\end{proof}
We can now use these Lemmas to give the proof of Theorem \ref{tdtheorem}.
\begin{proof} [Proof of Theorem \ref{tdtheorem}]
We begin with a Hooper diagram as in Figure \ref{construct1}a. The edges are labeled corresponding to the case of the hat that has that edge as its middle edge. The label is above the edge if that hat is right-side-up, and below the edge if that hat is upside-down, by Lemma \ref{hatdirection}.
Lemma \ref{degeneratearrow} tells us the allowed transitions in each case, and we copy the arrows onto the corresponding locations in the Hooper diagram, in Figure \ref{construct1}. Here the node at the tail of each arrow is the hat case number, and we have spaced out the arrows so that it is clear which arrows come from which hat.
\begin{figure}[!h]
\centering
\includegraphics[height=200pt]{construct1.png}
\begin{quote}\caption{The first steps of constructing the derivation diagram for $\M 43$ \label{construct1}} \end{quote}
\end{figure}
Now we determine the arrow labels. Proposition \ref{hd-snake} tells us that the edge labels from $\M mn$ and $\M nm$ snake back and forth and up and down, respectively, so we copy the labels onto the Hooper diagram in Figure \ref{construct2}a. Then we use Lemma \ref{degeneratearrow} to copy these labels onto the arrow picture. For $\M 43$, this yields the derivation diagram in Figure \ref{construct2}b, and for $\M mn$ in general it yields the derivation diagram in Figure \ref{gentransdiag}.
\begin{figure}[!h]
\centering
\includegraphics[height=200pt]{construct2.png}
\begin{quote}\caption{Finishing the construction of the derivation diagram for $\M 43$ \label{construct2}} \end{quote}
\end{figure}
Where two identical arrow labels are adjacent (as for ${\gr 2, 3, 6, 7}$ here), we only write one label, and then get the diagram in Figure \ref{gentransdiag}, as desired.
\end{proof}
\subsection{Normalization}\label{sec:normalization}
Theorem \ref{tdtheorem} describes the transition diagram for \mbox{$\Sec 0 m n = [0,\pi/n]$}. Now we will describe how to transform any trajectory into a trajectory in $\Sec 0 m n$.
To \emph{normalize} trajectories whose direction does not belong to the standard sector, we reflect each other sector $\Sec i m n$ for $1\leq i \leq 2n-1$ onto $\Sec 0 m n$. Remark that a geodesic is a line in a given direction, and we can choose how to orient it. We can decide that all trajectories are ``\emph{going up},'' i.e. have their angle $\theta \in[0,\pi]$. Hence, we will often consider only the sectors $\Sec i m n$ for $1\leq i \leq n-1$.
Recall that for $\M mn$, we defined $\Sec i m n = [i\pi/n, (i+1)\pi/n]$.
\begin{definition}\label{def:reflections}
For $0\leq i < 2n$ the transformation $\refl i m n$ is a reflection across the line $\theta = (i+1)\pi/(2n)$. Thus, $\refl i m n$ maps $\Sec i m n$ bijectively to $\Sec 0 m n$.
In matrix form, we have
$$\qquad \refl i m n =
\mat{\cos\left((i+1)\pi/n\right)}{\sin\left((i+1)\pi/n\right)}{\sin\left((i+1)\pi/n\right)}{-\cos\left((i+1)\pi/n\right)} .$$
\end{definition}
\noindent See Example \ref{ex:refl_matrices} for the explicit form of the reflection matrices for $n=3$.
The reflection $\refl i m n$ also gives an
\emph{affine diffeomorphism} of $\M mn$, which is obtained by reflecting each polygon of $\M mn$ (see Example \ref{ex:reflections} below).
\begin{convention}\label{convention:refl}
We use the same symbols $\refl i m n$ to denote matrices in $SL(2 , \mathbb{R})$ and the corresponding affine diffeomorphisms of the \bm surface $\M mn$.
\end{convention}
Each of the affine diffeomorphisms $\refl i m n$ also induces a \emph{permutation} on the edge labels of $\M mn$, i.e. on the alphabet in $\LL m n $ (see Example \ref{ex:permutations} below).
We will denote the permutation corresponding to $\refl i mn$ by $\perm i mn$. We now want to describe these permutations explicitly.
To do this, we first notice that each flip can be seen as a composition of two flips that are easier to study (see Lemma \ref{comptransf} below).
The following Definition \ref{diagram-actions-def} and Lemma \ref{diagram-actions-lem} then explain the actions of these fundamental transformations on the labels of the polygons.
\begin{lemma}\label{comptransf}
Each of the reflections $\refl i m n$ can be written as a composition of the following:
\begin{itemize}
\item a flip along the axis at angle $\pi /n$, denoted by $\fl {n}$.
\item a flip along the axis at angle $\pi /(2n)$, denoted by $\fl {2n}$.
\end{itemize}
\end{lemma}
\begin{proof}
Recall that we numbered the sectors with $\Sec i m n = [i\pi/n, (i+1)\pi/n]$, and that $\refl i m n$ reflects sector $\Sec imn$ into sector $\Sec 0mn$.
Applying $\fl {2n}$ to $\Sec imn$ yields $\Sec {2n-i}mn$, with the opposite orientation. The composition $\fl n \circ \fl {2n}$ is a counter-clockwise rotation by $\pi/n$, preserving orientation, so applying it $i$ times carries $\Sec {2n-i}mn$ back to $\Sec 0mn$. Thus, $$\refl imn = (\fl n \circ \fl {2n})^{i} \circ \fl {2n}.$$
Notice that this is a composition of an odd number of flips, so it reverses orientation, as required.
\end{proof}
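The composition formula is also easy to verify numerically: representing the flip along the axis at angle $\alpha$ by the $2\times 2$ reflection matrix $\mat{\cos 2\alpha}{\sin 2\alpha}{\sin 2\alpha}{-\cos 2\alpha}$, the lemma becomes a matrix identity. A minimal sketch, for $n = 3$:
\begin{verbatim}
import numpy as np

def refl_line(alpha):
    # reflection of the plane across the line at angle alpha through 0
    c, s = np.cos(2 * alpha), np.sin(2 * alpha)
    return np.array([[c, s], [s, -c]])

n = 3
f2n = refl_line(np.pi / (2 * n))   # the flip f_{2n}
fn = refl_line(np.pi / n)          # the flip f_n
for i in range(2 * n):
    lhs = np.linalg.matrix_power(fn @ f2n, i) @ f2n
    rhs = refl_line((i + 1) * np.pi / (2 * n))   # the reflection in the Definition
    assert np.allclose(lhs, rhs)
print("composition formula verified for n =", n)
\end{verbatim}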
\begin{definition}\label{diagram-actions-def}
We define two actions on transition diagrams, which leave the arrows in place but move the numbers (edge labels) around.
The action $\nu$ is a flip that exchanges the top row with the bottom row, the second row with the next-to-bottom row, etc. The action $\beta$ is a switching of adjacent pairs in a kind of ``brick'' pattern where the $1$ in the upper-left corner is preserved, and the $2$ and $3$ exchange places, $4$ and $5$ exchange places, and so on across the first row, and then in the second row the pairs that are exchanged are offset.
See Figure \ref{diagram-actions-fig} for an example.
\end{definition}
\begin{figure}[!h]
$\T 0 3 4$: \begin{tikzcd}
{\gr1}\arrow[bend right]{r} \arrow{d}
&{\gr2} \arrow[bend right]{l} \arrow[bend left]{r}
&{\gr3} \arrow[bend left]{l} \arrow[bend right]{r} \arrow{d}
&{\gr4} \arrow[bend right]{l} \\
{\gr8}\arrow[bend right]{r}
&{\gr7} \arrow[bend right]{l} \arrow[bend left]{r} \arrow{u}
&{\gr6} \arrow[bend left]{l} \arrow[bend right]{r}
&{\gr5} \arrow[bend right]{l} \arrow{u} \\
\end{tikzcd} \\
$\nu(\T 0 3 4)$: \begin{tikzcd}
{\gr8}\arrow[bend right]{r} \arrow{d}
&{\gr7} \arrow[bend right]{l} \arrow[bend left]{r}
&{\gr6} \arrow[bend left]{l} \arrow[bend right]{r} \arrow{d}
&{\gr5} \arrow[bend right]{l} \\
{\gr1}\arrow[bend right]{r}
&{\gr2} \arrow[bend right]{l} \arrow[bend left]{r} \arrow{u}
&{\gr3} \arrow[bend left]{l} \arrow[bend right]{r}
&{\gr4} \arrow[bend right]{l} \arrow{u}
\end{tikzcd} \ \ \ \ \ \
$\beta(\T 0 3 4)$: \begin{tikzcd}
{\gr1}\arrow[bend right]{r} \arrow{d}
&{\gr3} \arrow[bend right]{l} \arrow[bend left]{r}
&{\gr2} \arrow[bend left]{l} \arrow[bend right]{r} \arrow{d}
&{\gr4} \arrow[bend right]{l} \\
{\gr7}\arrow[bend right]{r}
&{\gr8} \arrow[bend right]{l} \arrow[bend left]{r} \arrow{u}
&{\gr5} \arrow[bend left]{l} \arrow[bend right]{r}
&{\gr6} \arrow[bend right]{l} \arrow{u} \\
\end{tikzcd}
\begin{quote}\caption{The actions $\nu$ and $ \beta$ on a transition diagram. \label{diagram-actions-fig}}\end{quote}
\end{figure}
\begin{lemma} \label{diagram-actions-lem}
\begin{enumerate}
\item The flip $\fl {2n}$ has the effect of $\nu$ on the transition diagram. \label{pirotate}
\item The flip $\fl n$ has the effect of $\beta$ on the transition diagram.\label{diagbuddy}
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item Recall Definition \ref{pk}, where we named the polygons $P(0), P(1), \ldots, P(m-1)$ from left to right. By the Structure Theorem for derivation diagrams \ref{tdtheorem}, the first row of a transition diagram has the edge labels of $P(0)$, the second row has the edge labels of $P(1)$, and so on until the last row has the edge labels of $P(m-1)$. A flip along the line at angle $\pi/(2n)$ exchanges the locations of the ``short'' and ``long'' sides, so it takes $P(0)$ to $P(m-1)$, takes $P(1)$ to $P(m-2)$, etc. Thus it exchanges the rows by the action of $\nu$.
\item A flip along $\pi/n$ exchanges pairs of edge labels that are opposite each other in direction $\pi/n$ in the polygons, which because of the zig-zag labeling are exactly the ones exchanged by $\beta$.
\end{enumerate}
\end{proof}
\begin{corollary}\label{preserves-rows-cor}
The actions $\nu$ and $ \beta$ on the transition diagram corresponding to the actions of $\fl {2n}$ and $\fl {n}$, respectively, preserve the rows of the transition diagram $\T i mn$.
Consequently, the permutations $\perm i mn$ preserve the rows of $\T i mn$.
\end{corollary}
\begin{proof}
It follows immediately from Lemma \ref{diagram-actions-lem} that the actions described on the diagrams preserve rows.
Now, each permutation $\perm i mn$ corresponds to a reflection $\refl i mn$, which by Lemma \ref{comptransf} is obtained as a composition of the transformations $\fl {2n}$ and $\fl {n}$.
Thus the permutations are obtained by composing the permutations corresponding to $\fl {2n}$ and $\fl {n}$.
Each permutation preserves the rows, hence their composition does too.
\end{proof}
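Both actions are simple to implement. A minimal Python sketch of $\nu$ and $\beta$ acting on the label grid (rows listed top to bottom), reproducing Figure \ref{diagram-actions-fig}:
\begin{verbatim}
def nu(grid):
    # exchange top and bottom rows, second and next-to-bottom, etc.
    return grid[::-1]

def beta(grid):
    # swap horizontally adjacent labels in a "brick" pattern
    out = []
    for r, row in enumerate(grid):
        row, start = row[:], 1 if r % 2 == 0 else 0
        for c in range(start, len(row) - 1, 2):
            row[c], row[c + 1] = row[c + 1], row[c]
        out.append(row)
    return out

T034 = [[1, 2, 3, 4], [8, 7, 6, 5]]
print(nu(T034))    # [[8, 7, 6, 5], [1, 2, 3, 4]]
print(beta(T034))  # [[1, 3, 2, 4], [7, 8, 5, 6]]
\end{verbatim}
In particular, both actions visibly send rows to rows.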
\begin{example}[Matrices for $n=3$]\label{ex:refl_matrices}
For $n=3$, the reflections $\refl i 4 3$ for $0\leq i \leq 2$
that act on $\M 4 3$ are given by the following matrices:
$$\refl 043 = \mat 1001 \qquad \refl 1 43 = \mat {-1/2}{\sqrt{3}/2}{\sqrt{3}/2}{1/2} \qquad \refl 2 43 = \mat {-1}001.$$
(Here $\refl 043$ is taken to be the identity, since trajectories with direction in $\Sec 043$ need no normalization.)
\end{example}
\begin{example} [Reflections for $n=3$]\label{ex:reflections}
In Figure \ref{reflect} we show how the reflections $\refl i 4 3$ and $\refl i 3 4$ act as affine diffeomorphisms on $\M 43$ and $\M 34$, respectively. The solid line reflects $\Sigma_1$ to $\Sigma_0$; the dashed line reflects $\Sigma_2$ to $\Sigma_0$; and for $\M 34$ the dotted line reflects $\Sigma_3$ to $\Sigma_0$.
\begin{figure}[!h]
\centering
\includegraphics[width=.45\textwidth]{refl34} \hspace{0.08\textwidth}
\includegraphics[width=.45\textwidth]{refl43}
\begin{quote}\caption{The action of reflections $\refl i m n$ on $\M 34$ and $\M 43$. \label{reflect}}\end{quote}
\end{figure}
\end{example}
\begin{example}[Permutations for $n=3$]\label{ex:permutations} Looking at Figure \ref{reflect}, we can see the permutation on edge labels induced by each $\refl i m n$.
\begin{align*}
&\M 43: &\perm 1 43 = &(17)(29)(38)(56) &\M 34: &&\perm 1 34 =& (14)(57)(68) \\
&&\perm 2 43 = &(12)(45)(78) &&&\perm 2 34 =& (16)(28)(35)(47) \\
& &&&&&\perm 3 34 =& (12)(34)(67).
\end{align*}
\end{example}
\subsection{Transition diagrams for other sectors}\label{sec:other_sectors}
We can now explain how to draw a \emph{transition diagram} for trajectories in each sector.
Let us start with some examples, and then give a general rule to produce any such diagram.
For our example surfaces $\M 43$ and $\M 34$, the transition diagrams for each sector are in Figures \ref{all-td43} and \ref{all-td34}, respectively.
\begin{figure}[!h]
\centering
$\T 0 43$: \begin{tikzcd}
{\color{red}1}\arrow[bend right]{r} \arrow{d}
&{\color{red}2} \arrow[bend right]{l} \arrow[bend left]{r}
&{\color{red}3} \arrow[bend left]{l} \arrow{d} \\
{\color{red}6}\arrow[bend right]{r} \arrow{d}
&{\color{red}5} \arrow[bend right]{l} \arrow[bend left]{r} \arrow{u}
&{\color{red}4} \arrow[bend left]{l} \arrow{d} \\
{\color{red}7}\arrow[bend right]{r}
&{\color{red}8} \arrow[bend right]{l} \arrow[bend left]{r} \arrow{u}
&{\color{red}9} \arrow[bend left]{l} \\
\end{tikzcd} \ \ \ \ \ \
$\T 1 43$: \begin{tikzcd}
{\color{red}7}\arrow[bend right]{r} \arrow{d}
&{\color{red}9} \arrow[bend right]{l} \arrow[bend left]{r}
&{\color{red}8} \arrow[bend left]{l} \arrow{d} \\
{\color{red}5}\arrow[bend right]{r} \arrow{d}
&{\color{red}6} \arrow[bend right]{l} \arrow[bend left]{r} \arrow{u}
&{\color{red}4} \arrow[bend left]{l} \arrow{d} \\
{\color{red}1}\arrow[bend right]{r}
&{\color{red}3} \arrow[bend right]{l} \arrow[bend left]{r} \arrow{u}
&{\color{red}2} \arrow[bend left]{l} \\
\end{tikzcd} \ \ \ \ \ \
$\T 2 43$: \begin{tikzcd}
{\color{red}2}\arrow[bend right]{r} \arrow{d}
&{\color{red}1} \arrow[bend right]{l} \arrow[bend left]{r}
&{\color{red}3} \arrow[bend left]{l} \arrow{d} \\
{\color{red}6}\arrow[bend right]{r} \arrow{d}
&{\color{red}4} \arrow[bend right]{l} \arrow[bend left]{r} \arrow{u}
&{\color{red}5} \arrow[bend left]{l} \arrow{d} \\
{\color{red}8}\arrow[bend right]{r}
&{\color{red}7} \arrow[bend right]{l} \arrow[bend left]{r} \arrow{u}
&{\color{red}9} \arrow[bend left]{l} \\
\end{tikzcd}
\begin{quote}\caption{Transition diagrams in each sector for $\M 43$ \label{all-td43}} \end{quote}
\end{figure}
\begin{figure}[!h]
\centering
$\T 0 34$: \begin{tikzcd}
{\gr1}\arrow[bend right]{r} \arrow{d}
&{\gr2} \arrow[bend right]{l} \arrow[bend left]{r}
&{\gr3} \arrow[bend left]{l} \arrow[bend right]{r} \arrow{d}
&{\gr4} \arrow[bend right]{l} \\
{\gr8}\arrow[bend right]{r}
&{\gr7} \arrow[bend right]{l} \arrow[bend left]{r} \arrow{u}
&{\gr6} \arrow[bend left]{l} \arrow[bend right]{r}
&{\gr5} \arrow[bend right]{l} \arrow{u} \\
\end{tikzcd} \ \ \ \ \
$\T 1 34$: \begin{tikzcd}
{\gr4}\arrow[bend right]{r} \arrow{d}
&{\gr2} \arrow[bend right]{l} \arrow[bend left]{r}
&{\gr3} \arrow[bend left]{l} \arrow[bend right]{r} \arrow{d}
&{\gr1} \arrow[bend right]{l} \\
{\gr6}\arrow[bend right]{r}
&{\gr5} \arrow[bend right]{l} \arrow[bend left]{r} \arrow{u}
&{\gr8} \arrow[bend left]{l} \arrow[bend right]{r}
&{\gr7} \arrow[bend right]{l} \arrow{u} \\
\end{tikzcd} \\
$\T 2 34$: \begin{tikzcd}
{\gr6}\arrow[bend right]{r} \arrow{d}
&{\gr8} \arrow[bend right]{l} \arrow[bend left]{r}
&{\gr5} \arrow[bend left]{l} \arrow[bend right]{r} \arrow{d}
&{\gr7} \arrow[bend right]{l} \\
{\gr2}\arrow[bend right]{r}
&{\gr4} \arrow[bend right]{l} \arrow[bend left]{r} \arrow{u}
&{\gr1} \arrow[bend left]{l} \arrow[bend right]{r}
&{\gr3} \arrow[bend right]{l} \arrow{u} \\
\end{tikzcd} \ \ \ \ \
$\T 3 34$: \begin{tikzcd}
{\gr2}\arrow[bend right]{r} \arrow{d}
&{\gr1} \arrow[bend right]{l} \arrow[bend left]{r}
&{\gr4} \arrow[bend left]{l} \arrow[bend right]{r} \arrow{d}
&{\gr3} \arrow[bend right]{l} \\
{\gr8}\arrow[bend right]{r}
&{\gr6} \arrow[bend right]{l} \arrow[bend left]{r} \arrow{u}
&{\gr7} \arrow[bend left]{l} \arrow[bend right]{r}
&{\gr5} \arrow[bend right]{l} \arrow{u} \\
\end{tikzcd}
\begin{quote}\caption{Transition diagrams in each sector for $\M 34$ \label{all-td34}} \end{quote}
\end{figure}
\begin{corollary}\label{cor:shape}
Up to permuting the labels, the shape of the transition diagram is always the same: $\T i mn$ is obtained from $\T 0 mn$ by permuting the labels by $\perm i m n$.
\end{corollary}
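For instance, applying $\perm 1 43 = (17)(29)(38)(56)$ from Example \ref{ex:permutations} to the labels of $\T 043$ reproduces $\T 143$ above. A minimal sketch, with a hypothetical cycle-notation helper:
\begin{verbatim}
def apply_perm(grid, cycles):
    # relabel every entry of the grid by a permutation given in cycles
    mapping = {}
    for cyc in cycles:
        for k, a in enumerate(cyc):
            mapping[a] = cyc[(k + 1) % len(cyc)]
    return [[mapping.get(a, a) for a in row] for row in grid]

T043 = [[1, 2, 3], [6, 5, 4], [7, 8, 9]]
print(apply_perm(T043, [(1, 7), (2, 9), (3, 8), (5, 6)]))
# [[7, 9, 8], [5, 6, 4], [1, 3, 2]]   (the label grid of T^1 for M(4,3))
\end{verbatim}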
\begin{definition}\label{def:universal}
We call the unlabeled version of the diagrams $\T i mn$ the \emph{universal diagram}, denoted by $\UD mn$.
\end{definition}
The universal diagrams for $\M 43$ and $\M 34$ are shown in Figure \ref{fig:universalex}. All transition diagrams for $\M mn$ have the same arrow structure, $\UD mn$, with different labels at the nodes.
\begin{figure}[!h]
\centering
$\UD 43$: \begin{tikzcd}
\arrow[bend right]{r} \arrow{d}
& \arrow[bend right]{l} \arrow[bend left]{r}
& \arrow[bend left]{l} \arrow{d} \\
\arrow[bend right]{r} \arrow{d}
& \arrow[bend right]{l} \arrow[bend left]{r} \arrow{u}
& \arrow[bend left]{l} \arrow{d} \\
\arrow[bend right]{r}
& \arrow[bend right]{l} \arrow[bend left]{r} \arrow{u}
& \arrow[bend left]{l} \\
\end{tikzcd} \ \ \ \ \ \
$\UD 34$: \begin{tikzcd}
\arrow[bend right]{r} \arrow{d}
& \arrow[bend right]{l} \arrow[bend left]{r}
& \arrow[bend left]{l} \arrow[bend right]{r} \arrow{d}
& \arrow[bend right]{l} \\
\arrow[bend right]{r}
& \arrow[bend right]{l} \arrow[bend left]{r} \arrow{u}
& \arrow[bend left]{l} \arrow[bend right]{r}
& \arrow[bend right]{l} \arrow{u} \\
\end{tikzcd}
\begin{quote}\caption{The universal diagrams $\UD 43$ and $\UD 34$ \label{fig:universalex}} \end{quote}
\end{figure}
\smallskip
\section{Introduction and motivation}
\subsection{Introduction}
In this paper, we describe a simple fast algorithm for evaluating expressions of
the form
\begin{equation} \label{eq1}
u_j = \sum_{i=1, i \not = j}^n \frac{\alpha_i}{x_i - x_j}, \quad \text{for}
\quad j = 1,\ldots,n,
\end{equation}
where $\alpha_i$ are real numbers, and $x_i$ are points in a
compact interval of $\mathbb{R}$. This expression can be viewed as representing
the electrostatic potential generated by charges on a line in $\mathbb{R}^3$.
We remark that fast algorithms for computing the electrostatic potential
generated by general distributions of charges in $\mathbb{R}^3$ exist, see for
example the Fast Multipole Method \cite{MR936632} whose relation to the method
presented in this paper is discussed in \S \ref{relatedworks}. However, in a number of
situations in computational physics it is useful to have a simple and extremely
fast method for evaluating the potential of charges on a line; we present such a
method in this paper. Under mild assumptions the presented method involves
$\mathcal{O}(n \log n)$ operations and has a small constant. The method is
based on writing the potential $1/r$ as
$$
\frac{1}{r} = \int_0^\infty e^{-r t} dt.
$$
We show that there exists a small set of quadrature nodes $t_1,\ldots,t_m$
and weights $w_1,\ldots,w_m$ such that for a large range of values of $r$ we
have
\begin{equation} \label{oapprox}
\frac{1}{r} \approx \sum_{j=1}^m w_j e^{-r t_j},
\end{equation}
see Lemma \ref{quadlem}, which is a quantitative version of \eqref{oapprox}.
Numerically the nodes $t_1,\ldots,t_m$ and weights $w_1,\ldots,w_m$ are
computed using a procedure for constructing generalized Gaussian quadratures,
see \S \ref{nodesandweights}. An advantage of representing $1/r$ as a sum
of exponentials is that the translation operator
\begin{equation} \label{transeq}
\frac{1}{r} \mapsto \frac{1}{r+r'}
\end{equation}
can be computed by taking an inner product of the weights
$(w_1,\ldots,w_m)$ with a diagonal transformation of the vector
$(e^{-r t_1},\ldots,e^{-r t_m})$. Indeed, we have
\begin{equation} \label{diageq}
\frac{1}{r+r'} \approx \sum_{j=1}^m
w_j e^{-(r +r') t_j} = \sum_{j=1}^m
w_j e^{-r' t_j} e^{-r t_j}.
\end{equation}
The algorithm described in \S \ref{algomain} leverages the existence of this
diagonal translation operator to efficiently evaluate \eqref{eq1}.
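To illustrate, the following Python sketch builds a deliberately crude exponential-sum approximation of $1/r$ (trapezoidal rule after the substitution $t = e^s$; this is not the generalized Gaussian quadrature of \S \ref{nodesandweights}, and it uses many more nodes than necessary) and then checks the diagonal translation \eqref{diageq}:
\begin{verbatim}
import numpy as np

# crude nodes/weights with 1/r ~ sum_j w_j exp(-r t_j) for r in [1, 100]
h = 0.5
s = np.arange(-19.0, 3.0 + h, h)
t, w = np.exp(s), h * np.exp(s)    # nodes t_j, weights w_j = h exp(s_j)

r, rp = 2.7, 40.0                  # a distance r and a shift r'
print(abs(np.dot(w, np.exp(-r * t)) - 1.0 / r))               # about 1e-8

# diagonal translation: shifting r -> r + r' only rescales the weights
w_shift = w * np.exp(-rp * t)
print(abs(np.dot(w_shift, np.exp(-r * t)) - 1.0 / (r + rp)))  # about 1e-8
\end{verbatim}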
\subsection{Relation to past work} \label{relatedworks}
We emphasize that fast algorithms for computing the potential generated by
arbitrary distributions of charges in $\mathbb{R}^3$ exist. An example of such
an algorithm is the Fast Multipole Method that was introduced by \cite{MR936632}
and has been extended by several authors including \cite{MR2558773, MR1489257, MR1273161}. In this paper, we present a simple scheme for the special case
where the charges are on a line, which occurs in a number of numerical
calculations, see \S \ref{motivation}. The presented scheme has a much smaller
runtime constant compared to general methods, and is based on the diagonal form
\eqref{diageq} of the translation operator \eqref{transeq}. The idea of using
the diagonal form of this translation operator to accelerate numerical
computations has been studied by several authors; in particular, the diagonal
form is used in algorithms by Dutt, Gu and Rokhlin \cite{MR1411845}, and Yavin
and Rokhlin \cite{MR1675269} and was subsequently studied in detail by Beylkin
and Monz\'on \cite{MR2595881,MR2147060}.
The current paper improves upon these past works by taking advantage of robust
generalized Gaussian quadrature codes \cite{MR2671296} that were not previously
available; these codes construct a quadrature rule that is exact for functions
in the linear span of a given Chebyshev system, and can be viewed as
a constructive version of Lemma \ref{krein} of Kre\u{\i}n \cite{MR0113106}. The
resulting fast algorithm presented in \S \ref{algomain} simplifies past
approaches, and has a small runtime constant; in particular, its computational
cost is similar to the computational cost of $5$-$10$ Fast Fourier Transforms on
data of a similar length, see \S \ref{numerics}.
\subsection{Motivation} \label{motivation}
Expressions of the form \eqref{eq1} appear in a number of situations in
computational physics. In particular, such expressions arise in connection with
the Hilbert Transform
$$
H f(x) = \lim_{\varepsilon \rightarrow 0} \frac{1}{\pi} \int_{|x-y|\ge
\varepsilon} \frac{f(y)}{y - x} dy.
$$
For example, the computation of the projection $P_m f$ of a function $f$
onto the first $m+1$ functions in a family of orthogonal polynomials can be
reduced to an expression of the form \eqref{eq1} by using the
Christoffel--Darboux formula, which is related to the
Hilbert transform; we detail the reduction of $P_m f$ to an expression of the
form \eqref{eq1} in the following.
Let $\{p_k\}_{k=0}^\infty$ be a family of monic polynomials that are orthogonal
with respect to the weight $w(x) \ge 0$ on $(a,b) \subseteq \mathbb{R}$.
Consider the projection operator
$$
P_m f(x) := \int_a^b \sum_{k=0}^m \frac{p_k(x) p_k(y)}{h_k} f(y) w(y) dy,
$$
where $h_k := \int_a^b p_k(x)^2 w(x) dx$. Let $x_1,\ldots,x_n$ and
$w_1,\ldots,w_n$ be the $n > m/2$ point Gaussian quadrature nodes and
weights associated with $\{p_k\}_{k=0}^\infty$, and set
\begin{equation} \label{eq3}
u_j := \sum_{i=1}^n \sum_{k=0}^m
\frac{p_k(x_j) p_k(x_i)}{h_k} f(x_i) w(x_i) ,
\quad \text{for} \quad j = 1,\ldots,n.
\end{equation}
By construction
the polynomial that interpolates the values $u_1,\ldots,u_n$ at the points
$x_1,\ldots,x_n$ will accurately approximate $P_m f$ on $(a,b)$ when
$f$ is sufficiently smooth, see for example \S 7.4.6 of Dahlquist and
Bj\"orck \cite{DahlquistBjorck1974}. Directly evaluating \eqref{eq3} would
require $\Omega(n^2)$ operations. In contrast, the algorithm of this paper
together with the Christoffel--Darboux Formula can be used to evaluate
\eqref{eq3} in $\mathcal{O}(n \log n)$ operations. The
Christoffel-Darboux formula states that
\begin{equation} \label{eq4}
\sum_{k=0}^m \frac{p_k(x) p_k(y)}{h_k} = \frac{1}{h_m} \frac{p_{m+1}(x) p_m(y)
-p_m(x) p_{m+1}(y)}{x-y},
\end{equation}
see \S 18.2(v) of \cite{nist}. Using \eqref{eq4} to rewrite \eqref{eq3} yields
\begin{equation} \label{eq5}
u_j = \frac{1}{h_m}\left( f(x_j) + \sum_{i=1, i \not = j}^n \frac{p_{m+1}(x_j)
p_m(x_i) -p_m(x_j) p_{m+1}(x_i)}{x_j-x_i} f(x_i) w(x_i) \right),
\end{equation}
where we have used the fact that the diagonal term of the double summation is
equal to $f(x_j)/h_m$. The summation in \eqref{eq5} can be rearranged into two
expressions of the form \eqref{eq1}, and thus the method of this paper can be
used to compute a representation of $P_m f$ in $\mathcal{O}(n \log n )$
operations.
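In code, the reduction takes only a few lines. The sketch below assumes a hypothetical black box \texttt{fast\_sum} implementing the $\mathcal{O}(n \log n)$ evaluation of \eqref{eq1} from \S \ref{algomain}, with all other inputs sampled at the quadrature nodes:
\begin{verbatim}
import numpy as np

def project_values(x, f, wq, pm, pm1, hm, fast_sum):
    # evaluate the rearranged formula above at the nodes x, where
    #   x, f, wq : nodes x_i, samples f(x_i), quadrature weights w(x_i)
    #   pm, pm1  : arrays p_m(x_i) and p_{m+1}(x_i); hm : the constant h_m
    #   fast_sum : computes sum_{i != j} alpha_i / (x_i - x_j) for all j
    s1 = fast_sum(x, pm * f * wq)
    s2 = fast_sum(x, pm1 * f * wq)
    # the sums above use (x_j - x_i) in the denominator, hence the signs
    return (f - pm1 * s1 + pm * s2) / hm
\end{verbatim}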
\begin{remark}
Analogs of the Christoffel--Darboux formula hold for many other families of
functions; for example, if $J_{\nu}(w)$ is a Bessel function of the first kind,
then we have
$$
\sum_{k=1}^\infty 2(\nu +k) J_{\nu +k}(w) J_{v+k}(z) = \frac{w z}{w - z} \left(
J_{\nu+1}(w) J_\nu(z) - J_\nu(w) J_{\nu+1}(z) \right),
$$
see \cite{Tygert2006}. This formula can be used to write a projection operator
related to Bessel functions in an analogous form to \eqref{eq5}, and the
algorithm of this paper can be similarly applied.
\end{remark}
\begin{remark}
A simple modification of the algorithm presented in this paper can be used to
evaluate more general expressions of the form
$$
v_j = \sum_{i=1}^n \frac{\alpha_i}{x_i - y_j}, \quad \text{for} \quad j =
1,\ldots,m,
$$
where $x_1,\ldots,x_n$ are source points, and $y_1,\ldots,y_m$ are target
points. For simplicity, this paper focuses on the case where the source and
target points are the same, which is the case in the projection application
described above.
\end{remark}
\section{Main result}
\subsection{Main result} \label{mainresult}
Our principal analytical result is the following theorem, which provides precise
accuracy and computational complexity guarantees for the algorithm presented in
this paper, which is detailed in \S \ref{algomain}.
\begin{theorem} \label{thm1}
Let $x_1 <\ldots <x_n \in [a,b]$ and $\alpha_1,\ldots,\alpha_n
\in \mathbb{R}$ be given. Set
$$
u_j := \sum_{i=1,i \not = j}^n \frac{\alpha_i}{x_i - x_j}, \quad \text{for}
\quad j = 1,\ldots,n.
$$
Given $\delta >0$ and $\varepsilon > 0$, the algorithm
described in \S \ref{algomain} computes values $\tilde{u}_j$ such that
\begin{equation} \label{errest}
\frac{\left| \tilde{u}_j - u_j \right|}{\sum_{i=1}^n |\alpha_i|} \le \varepsilon
, \quad \text{for} \quad j = 1,\ldots,n
\end{equation}
in $\mathcal{O} \left(n \log (\delta^{-1}) \log(\varepsilon^{-1}) + N_\delta
\right)$ operations, where
\begin{equation} \label{ndelta}
N_\delta := \sum_{j=1}^n \# \{ x_i : |x_j - x_i| < \delta (b-a) \}.
\end{equation}
\end{theorem}
The proof of Theorem \ref{thm1} is given in \S \ref{proofmainresult}. Under
typical conditions, the presented algorithm involves $\mathcal{O}( n \log n)$
operations. The following corollary describes a case of interest, where the
points $x_1,\ldots,x_n$ are Chebyshev nodes for a compact interval $[a,b]$ (we
define Chebyshev nodes in \S \ref{preliminaries}).
\begin{corollary} \label{cor1}
Fix $\varepsilon = 10^{-15}$, and let the points $x_1,\ldots,x_n$ be
Chebyshev nodes on $[a,b]$. If $\delta = 1/n$, then the algorithm of \S
\ref{algomain} involves $\mathcal{O}( n \log n)$ operations.
\end{corollary}
The proof of Corollary \ref{cor1} is given in \S \ref{completeproof}. The
following corollary states that a similar result holds for uniformly random
points.
\begin{corollary} \label{cor2}
Fix $\varepsilon = 10^{-15}$, and suppose that $x_1,\ldots,x_n$ are
sampled uniformly at random from $[a,b]$. If $\delta = 1/n$, then the
algorithm of \S \ref{algomain} involves $\mathcal{O}( n \log n)$ operations
with high probability.
\end{corollary}
The proof of Corollary \ref{cor2} is immediate from standard probabilistic
estimates. The following remark describes an adversarial configuration of
points.
\begin{remark} \label{rmk1}
Fix $\varepsilon > 0$, and let $x_1,\ldots,x_{2 n}$ be a collection of
points such that $x_1,\ldots,x_n$ and $x_{n+1},\ldots,x_{2 n}$ are evenly spaced
in $[0,2^{-n}]$ and $[1-2^{-n},1]$, respectively, that is
$$
x_j = 2^{-n} \left( \frac{j-1}{n -1} \right),
\quad \text{and} \quad
x_{n+j} = 1 +2^{-n} \left( \frac{j-n}{n -1} \right),
\quad \text{for} \quad j = 1,\ldots,n.
$$
We claim that Theorem \ref{thm1} cannot guarantee a complexity better than
$\mathcal{O}(n^2)$ for this configuration of points. Indeed, if $\delta \ge
2^{-n}$, then $N_\delta \ge n^2/2$, and if $\delta < 2^{-n}$, then
$\log_2(\delta^{-1}) > n$. In either case
$$
n \log(\delta^{-1}) + N_\delta = \Omega(n^2).
$$
This complexity is indicative of the performance of the algorithm for this
point configuration; the reason that the algorithm performs poorly is that
structures exist at two different scales. If such a configuration were
encountered in practice, it would be possible to modify the algorithm of \S
\ref{algomain} to also involve two different scales to achieve evaluation in
$\mathcal{O}(n \log n)$ operations.
\end{remark}
\section{Algorithm} \label{algomain}
\subsection{High level summary} \label{high}
The algorithm involves passing over the points $x_1,\ldots,x_n$ twice. First, we
pass over the points in ascending order and compute
\begin{equation} \label{uplus}
\tilde{u}_j^+ \approx \sum_{i=1}^{j-1} \frac{\alpha_i}{x_i - x_j},
\quad \text{for} \quad j = 1,\ldots,n,
\end{equation}
and second, we pass over the points in descending order and compute
\begin{equation} \label{uminus}
\tilde{u}_j^- \approx \sum_{i=j+1}^{n} \frac{\alpha_i}{x_i - x_j},
\quad \text{for} \quad j = 1,\ldots,n.
\end{equation}
Finally, we define $ \tilde{u}_j := \tilde{u}_j^+ + \tilde{u}_j^-$ for $j =
1,\ldots,n$ such that
$$
\tilde{u}_j \approx \sum_{i=1, i \not = j}^n \frac{\alpha_i}{x_i - x_j}, \quad
\text{for} \quad j = 1,\ldots,n.
$$
We call the computation of $\tilde{u}_1^+,\ldots,\tilde{u}_n^+$ the forward pass
of the algorithm, and the computation of $\tilde{u}_1^-,\ldots,\tilde{u}_n^-$
the backward pass of the algorithm. The forward pass of the algorithm computes
the potential generated by all points to the left of a given point, while the
backward pass of the algorithm computes the potential generated by all points to
the right of a given point. In \S \ref{informal} and \S \ref{detailed} we give
an informal and detailed description of the forward pass of the algorithm. The
backward pass of the algorithm is identical except it considers the points in
reverse order.
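In code, the two passes combine as follows (a sketch; \texttt{forward\_pass} is the routine given after Algorithm \ref{algo1} below, and the backward pass is obtained by running it on the reversed, negated axis):
\begin{verbatim}
def fast_sum(x, alpha, t, w, delta):
    # u[j] = sum_{i != j} alpha[i] / (x[i] - x[j]) for all j
    u_plus = forward_pass(x, alpha, t, w, delta)
    u_minus = -forward_pass(-x[::-1], alpha[::-1], t, w, delta)[::-1]
    return u_plus + u_minus
\end{verbatim}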
\subsection{Informal description} \label{informal}
In the following, we give an informal description of the forward pass of the
algorithm that computes
$$
\tilde{u}_j^+ \approx \sum_{i=1}^{j-1} \frac{\alpha_i}{x_i - x_j},
\quad \text{for} \quad j = 1,\ldots,n.
$$
Assume that a small set of nodes
$t_1,\ldots,t_m$ and weights $w_1,\ldots,w_m$ is given such that
\begin{equation} \label{comp2}
\frac{1}{r} \approx \sum_{i=1}^m w_i e^{-r t_i} \quad
\text{for} \quad r \in [\delta(b-a),b-a],
\end{equation}
where $\delta > 0$ is given and fixed. The existence and computation of
such nodes and weights is described in \S \ref{completeproof} and \S
\ref{nodesandweights}. We divide the sum defining $u_j^+$ into two parts:
\begin{equation} \label{divide}
\tilde{u}_j^+ \approx \sum_{i=1}^{j_0} \frac{\alpha_i}{x_i - x_j}
+ \sum_{i=j_0+1}^{j-1} \frac{\alpha_i}{x_i - x_j},
\end{equation}
where $j_0 = \max \big\{ i : x_j - x_i > \delta (b-a)
\big\}.$ By definition, the points $x_1,\ldots,x_{j_0}$ are all distance at
least $\delta(b-a)$ from $x_j$. Therefore, by \eqref{comp2}
$$
\tilde{u}_j^+ \approx - \sum_{i=1}^{j_0} \sum_{k=1}^m w_k \alpha_i e^{-(x_j-x_i)
t_k} + \sum_{i=j_0+1}^{j-1} \frac{\alpha_i}{x_i - x_j}.
$$
If we define
\begin{equation} \label{geq}
g_k(j_0) = \sum_{i=1}^{j_0} \alpha_i e^{-(x_{j_0} - x_i)t_k}, \quad
\text{for} \quad k = 1,\ldots,m,
\end{equation}
then it is straightforward to verify that
\begin{equation} \label{maineqap}
\tilde{u}_j^+ \approx - \sum_{k=1}^m w_k g_k(j_0) e^{-(x_j - x_{j_0}) t_k} +
\sum_{i=j_0+1}^{j-1} \frac{\alpha_i}{x_i - x_j}.
\end{equation}
Observe that we can update $g_k(j_0)$ to $g_k(j_0+1)$ using the following
formula
\begin{equation} \label{updateg}
g_k(j_0+1) = \alpha_{j_0+1} + e^{-(x_{j_0+1} - x_{j_0}) t_k} g_k(j_0), \quad
\text{for} \quad k = 1,\ldots,m.
\end{equation}
We can now summarize the algorithm for computing
$\tilde{u}_1^+,\ldots,\tilde{u}_n^+$. For each $j$, we compute
$\tilde{u}_j^+$ by the following three steps:
\begin{enumerate}[\quad 1.]
\item Update $g_1,\ldots,g_m$ as necessary
\item Use $g_1,\ldots,g_m$ to evaluate the potential from $x_i$ such
that
$x_j - x_i > \delta (b-a)$
\item Directly evaluate the potential from $x_i$ such that $0 < x_j - x_i < \delta (b-a)$
\end{enumerate}
By \eqref{updateg}, each update of $g_1,\ldots,g_m$ requires $\mathcal{O}(m)$
operations, and we must update $g_1,\ldots,g_m$ at most $n$ times, so we
conclude that the total cost of the first step of the algorithm is
$\mathcal{O}(n m)$ operations. For each $j = 1,\ldots,n$, the second and third
step of the algorithm involve $\mathcal{O}(m)$ and $\mathcal{O}(\# \{x _i : 0 <
x_j - x_i < \delta(b-a)\})$ operations, respectively, see \eqref{maineqap}. It
follows that the total cost of the second and third step of the algorithm is
$\mathcal{O}(n m + N_\delta)$ operations, where $N_\delta$ is defined in
\eqref{ndelta}. We conclude that $\tilde{u}_1^+,\ldots,\tilde{u}_n^+$ can be
computed in $\mathcal{O}( n m + N_\delta)$ operations. In \S
\ref{proofmainresult}, we complete the proof of the computational complexity
guarantees of Theorem \ref{thm1} by showing that there exist $m = \mathcal{O}(
\log(\delta^{-1}) \log( \varepsilon^{-1}) )$ nodes $t_1,\ldots,t_m$ and weights
$w_1,\ldots,w_m$ that satisfy \eqref{comp2}, where $\varepsilon > 0$ is the
approximation error in \eqref{comp2}.
\subsection{Detailed description} \label{detailed}
\label{algorithm} In the following, we give a detailed description of the
forward pass of the algorithm that computes
$\tilde{u}_1^+,\ldots,\tilde{u}_n^+$. Suppose that $\delta > 0$ and $\varepsilon
> 0$ are given and fixed. We describe the algorithm under the assumption that
we are given quadrature nodes $t_1,\ldots,t_m$ and weights $w_1,\ldots,w_m$ such
that
\begin{equation} \label{quad}
\left| \frac{1}{r} - \sum_{j=1}^m w_j e^{-r t_j} \right| \le \varepsilon
\quad \text{for} \quad
r \in [\delta (b-a), b-a].
\end{equation}
The existence of such weights and nodes is established in \S
\ref{completeproof}, and the computation of such nodes and weights is discussed
in \S \ref{nodesandweights}. To simplify the description of the algorithm, we
assume that $x_0 = -\infty$ is a placeholder node that does not generate a
potential.
\begin{algorithm} \label{algo1}
\textit{Input:} $x_1 < \cdots < x_n \in [a,b]$,
$\alpha_1,\ldots,\alpha_n \in \mathbb{R}$.
\textit{Output:} $\tilde{u}_1^+,\ldots,\tilde{u}_n^+$.
\begin{enumerate}[\quad 1:]
\item \qquad $j_0 = 0$ and $g_1 = \cdots = g_m = 0$
\item
\item \qquad \textit{main loop:}
\item \qquad \textbf{for} $j = 1,\ldots,n$
\item
\item \qquad \qquad \textit{update $g_1,\ldots,g_m$ and $j_0$:}
\item \qquad \qquad \textbf{while} $x_j - x_{j_0+1} > \delta(b-a)$
\item \qquad \qquad \qquad \textbf{for} $i = 1,\ldots,m$
\item \qquad \qquad \qquad \qquad
$g_i = g_i e^{-(x_{j_0+1} - x_{j_0}) t_i}+ \alpha_{j_0+1}$
\item \qquad \qquad \qquad \textbf{end for}
\item \qquad \qquad \qquad $j_0 = j_0 + 1$
\item \qquad \qquad \textbf{end while}
\item
\item \qquad \qquad \textit{compute potential from $x_i$ such that $x_i \le x_{j_0}:$}
\item \qquad \qquad $\tilde{u}_j^+ = 0$
\item \qquad \qquad \textbf{for} $i = 1,\ldots,m$
\item \qquad \qquad \qquad
$\tilde{u}_j^+ = \tilde{u}_j^+ - w_i g_i e^{-(x_j - x_{j_0}) t_i}$
\item \qquad \qquad \textbf{end for}
\item
\item \qquad \qquad \textit{compute potential from $x_i$ such that $x_{j_0+1} \le x_i \le x_{j-1}$}
\item \qquad \qquad \textbf{for} $i = j_0+1,\ldots,j-1$
\item \qquad \qquad \qquad
$\tilde{u}_j^+ = \tilde{u}_j^+ + \alpha_i/(x_i - x_j).$
\item \qquad \qquad \textbf{end for}
\item \qquad \textbf{end for}
\end{enumerate}
\end{algorithm}
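A direct Python transcription of the forward pass may be easier to read than the pseudocode. The sketch below uses $0$-based indices, takes its inputs as NumPy arrays with $x$ sorted ascending and $t$, $w$ satisfying \eqref{quad}, and encodes the placeholder convention via $j_0 = -1$:
\begin{verbatim}
import numpy as np

def forward_pass(x, alpha, t, w, delta):
    # u[j] ~ sum_{i < j} alpha[i] / (x[i] - x[j])
    n = len(x)
    span = delta * (x[-1] - x[0])    # delta*(b-a), taking [a,b] = [x[0], x[-1]]
    g = np.zeros(len(t))
    u = np.zeros(n)
    j0 = -1                          # no points absorbed into g yet
    for j in range(n):
        # update g_1,...,g_m and j0
        while j0 + 1 < j and x[j] - x[j0 + 1] > span:
            if j0 >= 0:
                g *= np.exp(-(x[j0 + 1] - x[j0]) * t)
            g += alpha[j0 + 1]
            j0 += 1
        # potential from x_i with x_i <= x_{j0}, via the exponential sums
        if j0 >= 0:
            u[j] = -np.dot(w, g * np.exp(-(x[j] - x[j0]) * t))
        # potential from the remaining nearby x_i, evaluated directly
        u[j] += np.sum(alpha[j0 + 1:j] / (x[j0 + 1:j] - x[j]))
    return u
\end{verbatim}
Note that the quantity added in each update is $\alpha_{j_0+1}$, the weight of the point being absorbed, matching \eqref{updateg}.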
\begin{remark} \label{precomputation}
In some applications, it may be necessary to evaluate an expression of the form
\eqref{eq1} for many different weights $\alpha_1,\ldots,\alpha_n$ associated
with a fixed set of points $x_1,\ldots,x_n$. For example, in the projection
application described in \S \ref{motivation} the weights
$\alpha_1,\ldots,\alpha_n$ correspond to a function that is being projected,
while the points $x_1,\ldots,x_n$ are a fixed set of quadrature nodes. In such
situations, pre-computing the exponentials $e^{-(x_j - x_{j_0}) t_i}$ used in
the Algorithm \ref{algo1} will significantly improve the runtime, see
\S \ref{numres}.
\end{remark}
\section{Proof of Main Result} \label{proofmainresult}
\subsection{Organization}
In this section we complete the proof of Theorem \ref{thm1}; the section is
organized as follows. In \S \ref{preliminaries} we give mathematical
preliminaries. In \S \ref{tech} we state and prove two technical lemmas.
In \S \ref{completeproof} we prove Lemma \ref{quadlem}, which together with
the analysis in \S \ref{algomain} establishes Theorem \ref{thm1}. In
\S \ref{proofcor} we prove Corollary \ref{cor1}; the proof of Corollary \ref{cor2} is immediate, as noted in \S \ref{mainresult}.
\subsection{Preliminaries} \label{preliminaries}
Let $a < b \in \mathbb{R}$ and $n \in \mathbb{Z}_{> 0}$ be fixed, and
suppose that $f : [a,b] \rightarrow \mathbb{R}$, and $x_1 < \cdots < x_n \in
[a,b]$ are given. The interpolating polynomial $P$ of the function $f$ at
$x_1,\ldots,x_n$ is the unique polynomial of degree at most $n-1$ such that $$
P(x_j) = f(x_j), \quad \text{for} \quad j = 1,\ldots,n.
$$
This interpolating polynomial $P$ can be explicitly defined by
\begin{equation} \label{P}
P(x) = \sum_{j=1}^n f(x_j) q_{j}(x),
\end{equation}
where $q_j$ is the nodal polynomial for $x_j$, that is,
\begin{equation} \label{nodal}
q_{j}(x) = \prod_{k = 1,k \not = j}^n \frac{x - x_k}{x_j - x_k}.
\end{equation}
We say $x_1,\ldots,x_n$ are Chebyshev nodes for the interval $[a,b]$ if
\begin{equation} \label{nodes}
x_j = \frac{b+a}{2} + \frac{b-a}{2} \cos \left( \pi \frac{j - \frac{1}{2}}{ n
} \right), \quad \text{for} \quad j = 1,\ldots,n.
\end{equation}
The following lemma is a classical result in approximation theory. It says that
a smooth function on a compact interval is accurately approximated by the
interpolating polynomial of the function at Chebyshev nodes, see for example \S
4.5.2 of Dahlquist and Bj\"orck \cite{DahlquistBjorck1974}.
\begin{lemma} \label{lem1}
Let $f \in C^{n}([a,b])$, and $x_1,\ldots,x_n$ be Chebyshev nodes for $[a,b]$.
If $P$ is the interpolating polynomial for $f$ at $x_1,\ldots,x_n$, then
$$
\sup_{x \in [a,b]} |f(x) - P(x)| \le \frac{2 M}{n !} \left( \frac{b-a}{4}
\right)^{n},
$$
where
$$
M = \sup_{x \in [a,b]} |f^{(n)}(x)|.
$$
\end{lemma}
In addition to Lemma \ref{lem1}, we require a result about the existence of
generalized Gaussian quadratures for Chebyshev systems. In 1866, Gauss
\cite{Gauss1866} established the existence of quadrature nodes $x_1,\ldots,x_n$
and weights $w_1,\ldots,w_n$ for an interval $[a,b]$ such that
$$
\int_a^b f(x) dx = \sum_{j=1}^n w_j f(x_j),
$$
whenever $f(x)$ is a polynomial of degree at most $2n - 1$. This result was
generalized from polynomials to Chebyshev systems by Kre\u{\i}n
\cite{MR0113106}. A collection of functions $f_0,\ldots,f_n$ on $[a,b]$ is a
Chebyshev system if every nonzero generalized polynomial
$$
g(t) = a_0 f_0(t) + \cdots + a_n f_n(t), \quad \text{for} \quad a_0,\ldots,a_n
\in \mathbb{R},
$$
has at most $n$ distinct zeros in $[a,b]$. The following result of Kre\u{\i}n
says that any function in the span of a Chebyshev system of $2n$ functions can
be integrated exactly by a quadrature with $n$ nodes and $n$ weights.
\begin{lemma}[Kre\u{\i}n \cite{MR0113106}] \label{krein}
Let $f_0,\ldots,f_{2n-1}$ be a Chebyshev system of continuous functions on
$[a,b]$, and $w : (a,b) \rightarrow \mathbb{R}$ be a continuous positive weight
function. Then, there exist unique nodes $x_1,\ldots,x_n$ and weights
$w_1,\ldots,w_n$ such that
$$
\int_a^b f(x) w(x) dx = \sum_{j=1}^n w_j f(x_j),
$$
whenever $f$ is in the span of $f_0,\ldots,f_{2n-1}$.
\end{lemma}
\subsection{Technical Lemmas} \label{tech}
In this section, we state and prove two technical lemmas that are
involved in the proof of Theorem \ref{thm1}. We remark that a similar version of
Lemma \ref{lem2} appears in \cite{Rokhlin1988}.
\begin{lemma} \label{lem2}
Fix $a > 0$ and $t \in [0,\infty)$, and let $r_1,\ldots,r_n$ be Chebyshev nodes
for $[a,2 a]$. If $P_{t}(r)$ is the interpolating polynomial for $e^{-r t}$
at $r_1,\ldots,r_n$, then
$$
\sup_{r \in [a,2 a]} \left| e^{-r t} - P_{t}(r) \right|\le \frac{1}{4^n}.
$$
\end{lemma}
\begin{proof}
We have
$$
\sup_{r \in [a,2 a]} \left| \frac{\partial^n}{\partial r^n} e^{-r t} \right| =
\sup_{r \in [a,2 a]} |t^n e^{-r t}| = t^n e^{-t a}.
$$
By writing the derivative of $t^n e^{-t a}$ as
$$
\frac{d}{d t} t^n e^{-t a} = \left( \frac{n}{a} -t \right) a
t^{n-1} e^{-a t},
$$
we can deduce that the maximum of $t^n e^{-t a}$ occurs at $t = n/a$, that is,
\begin{equation} \label{maxna}
\sup_{t \in [0,\infty)} t^n e^{-t a} = \left( \frac{n}{a} \right)^n e^{-a(n/a)}.
\end{equation}
By \eqref{maxna} and the result of Lemma \ref{lem1}, we conclude that
$$
\sup_{r \in [a,2a]} |e^{-r t} - P_t(r)| \le \frac{2 (n/a)^n e^{-a(n/a)}}{n !} \left(
\frac{a}{4} \right)^{n} = \frac{2 n^n e^{-n}}{n!} \frac{1}{4^n}.
$$
It remains to show that $2 n^n e^{-n} \le n!$. Since $\ln(x)$ is an increasing
function, we have
$$
n \ln n - n + 1 = \int_1^n \ln(x) dx \le \int_1^n \sum_{j=1}^{n-1}
\chi_{[j,j+1]}(x) \ln(j+1) dx = \sum_{j=1}^n \ln(j).
$$
Exponentiating both sides of this inequality gives $e n^n e^{-n} \le n!$, which
is a classical inequality related to Stirling's approximation; since $2 < e$,
this gives $2 n^n e^{-n} \le n!$ and completes the proof.
\end{proof}
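The bound of Lemma \ref{lem2} is easy to check numerically. The following sketch interpolates $e^{-r t}$ at $n = 10$ Chebyshev nodes on $[1,2]$ and compares the observed error over a grid in $t$ against $4^{-n}$:
\begin{verbatim}
import numpy as np

a, n = 1.0, 10
k = np.arange(1, n + 1)
r_nodes = 1.5 * a + 0.5 * a * np.cos(np.pi * (k - 0.5) / n)  # Chebyshev on [a, 2a]
r_fine = np.linspace(a, 2 * a, 2001)
worst = 0.0
for t in np.linspace(0.0, 50.0, 501):    # the bound is uniform in t >= 0
    p = np.polyfit(r_nodes, np.exp(-r_nodes * t), n - 1)
    err = np.max(np.abs(np.polyval(p, r_fine) - np.exp(-r_fine * t)))
    worst = max(worst, err)
print(worst, 4.0 ** (-n))   # the observed maximum stays below 4^{-n}
\end{verbatim}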
\begin{lemma} \label{approx}
Suppose that $\varepsilon > 0$ and $M > 1$ are given. Then, there exist
$$
m = \mathcal{O}(\log(M) \log(\varepsilon^{-1}))
$$
values $r_1,\ldots,r_m \in [1,M]$ such that for all $r \in [1,M]$ we have
\begin{equation} \label{approxeq}
\sup_{t \in [0,\infty)} \left| e^{-r t} - \sum_{j=1}^m c_j(r)
e^{-r_j t} \right| \le \varepsilon,
\end{equation}
for some choice of coefficients $c_j(r)$ that depend on $r$.
\end{lemma}
\begin{proof}
We construct an explicit set of $m := (\lfloor \log_2 M \rfloor +1) ( \lfloor
\log_4 \varepsilon^{-1} \rfloor + 1)$ points and coefficients such that
\eqref{approxeq} holds. Set $n := \lfloor \log_4 \varepsilon^{-1} \rfloor +1$.
We define the points $r_1,\ldots,r_m$ by
\begin{equation} \label{nodefin}
r_{i n +k} := 2^{i-1} \left( 3 + \cos \left( \pi \frac{k - \frac{1}{2}}{n}
\right) \right),
\end{equation}
for $k = 1,\ldots,n$ and $i = 0,\ldots,\lfloor \log_2 M \rfloor$, and define
the coefficients $c_1(r),\ldots,c_m(r)$ by
\begin{equation} \label{coeff}
c_{i n +k}(r) := \chi_{[2^{i},2^{i+1})}(r) \prod_{l=1,l \not = k}^{n}
\frac{r - r_{i n+l}}{r_{i n + l} -r_{i n + k}},
\end{equation}
for $k = 1,\ldots,n$ and $i = 0,\ldots,\lfloor \log_2 M \rfloor$. We claim that
$$
\sup_{r \in [1,M]} \sup_{t \in [0,\infty)} \left| e^{-r t} - \sum_{j=1}^m c_j(r)
e^{-r_j t} \right| \le \varepsilon.
$$
Indeed, fix $r \in [1,M]$, and let $i_0 \in \{0,\ldots,\lfloor \log_2 M
\rfloor\}$ be the unique integer such that $r \in [2^{i_0},2^{i_0+1})$. By
the definition of the coefficients, see \eqref{coeff}, we have
$$
\sum_{j=1}^m c_j(r) e^{-r_j t} = \sum_{k=1}^n e^{-r_{i_0 n +k} t} \prod_{l=1,l
\not = k}^{n} \frac{r - r_{i_0 n+l}}{r_{i_0 n + l} -r_{i_0 n + k}}.
$$
We claim that the right hand side of this equation is the interpolating
polynomial $P_{t,i_0}(r)$ for $e^{-r t}$ at $r_{i_0 n + 1},\ldots,r_{(i_0+1)n}$,
that is,
$$
\sum_{k=1}^n e^{-r_{i_0 n +k} t} \prod_{l=1,l \not = k}^{n}
\frac{r - r_{i_0 n+l}}{r_{i_0 n + l} -r_{i_0 n + k}} =
P_{t,i_0}(r).
$$
Indeed, see \eqref{P} and \eqref{nodal}. Since the points $r_{i_0 n +
1},\ldots,r_{(i_0+1)n}$ are Chebyshev nodes for the interval
$[2^{i_0},2^{i_0+1}]$, and since $i_0$ was chosen such that $r \in
[2^{i_0},2^{i_0+1})$, it follows from Lemma \ref{lem2} that
$$
\left| e^{-r t} - P_{t,i_0}(r) \right|\le \frac{1}{4^n}
\quad \text{for} \quad t \in [0,\infty).
$$
Since $n = \lfloor \log_4 \varepsilon^{-1} \rfloor
+1$ the proof is complete.
\end{proof}
\begin{remark}
The proof of Lemma \ref{approx} has the additional consequence that the
coefficients $c_1(r),\ldots,c_m(r)$ in \eqref{approxeq} can be chosen such that
they satisfy
$$
|c_j(r)| \le \sqrt{2} \quad \text{for} \quad j=1,\ldots,m.
$$
Indeed, in \eqref{coeff} the coefficients $c_j(r)$ are either zero or
equal to the nodal polynomial, see \eqref{nodal}, for Chebyshev nodes on an
interval that contains $r$. The nodal polynomials for Chebyshev nodes on an
interval $[a,b]$ are bounded by $\sqrt{2}$ on $[a,b]$, see for example
\cite{Rokhlin1988}. The fact that $e^{-r t}$ can be approximated as a linear
combination of functions $e^{-r_1 t},\ldots,e^{-r_m t}$
with small coefficients means that the approximation of Lemma \ref{approx} can
be used in finite precision environments without any unexpected catastrophic
cancellation.
\end{remark}
\subsection{Completing the proof of Theorem \ref{thm1}} \label{completeproof}
Previously in \S \ref{informal}, we proved that the algorithm of \S
\ref{algomain} involves $\mathcal{O}( n m + N_\delta)$ operations. To complete
the proof of Theorem \ref{thm1} it remains to show that there exist
$$
m = \mathcal{O}( \log( \varepsilon^{-1} )
\log( \delta^{-1} ))
$$
points $t_1,\ldots,t_m$ and weights $w_1,\ldots,w_m$ that satisfy \eqref{quad};
we show the existence of such nodes and weights in the following lemma, and thus
complete the proof of Theorem \ref{thm1}. The computation of such nodes and
weights is described in \S \ref{nodesandweights}.
\begin{lemma} \label{quadlem}
Fix $a < b \in \mathbb{R}$, and let $\delta > 0$ and $\varepsilon > 0$ be given.
Then, there exist $m = \mathcal{O}( \log(\varepsilon^{-1}) \log(\delta^{-1}))$
nodes $t_1,\ldots,t_m$ and weights $w_1,\ldots,w_m$ such
that
\begin{equation} \label{star}
\left| \frac{1}{r} - \sum_{j=1}^m w_j e^{-r t_j} \right| \le \varepsilon,
\quad
\text{for} \quad r \in [\delta (b-a), b-a].
\end{equation}
\end{lemma}
\begin{proof}
Fix $a < b \in \mathbb{R}$, and let $\delta, \varepsilon >0$ be given. By the
possibility of rescaling $r$, $w_j$, and $t_j$, we may assume that $b-a =
\delta^{-1}$ such that we want to establish \eqref{star} for $r \in
[1,\delta^{-1}]$. By Lemma \ref{approx} we can choose $2 m = \mathcal{O} (
\log(\varepsilon^{-1}) \log(\delta^{-1}))$ points $r_0,\ldots,r_{2m-1} \in
[1,\delta^{-1}]$, and coefficients $c_0(r),\ldots,c_{2m-1}(r)$ depending on $r$
such that
\begin{equation} \label{erri}
\sup_{r \in [1,\delta^{-1}]} \sup_{t \in [0,\infty)} \left| e^{-r t} -
\sum_{j=0}^{2m - 1} c_j(r) e^{-r_j t} \right| \le \frac{\varepsilon}{2
\log(2\varepsilon^{-1})}.
\end{equation}
The functions $e^{-r_0 t},\ldots,e^{-r_{2m-1} t}$ form a
Chebyshev system of continuous functions on the interval $[0,\log(2
\varepsilon^{-1})]$, see for example \cite{MR0204922}. Thus, by Lemma
\ref{krein} there exist $m$ quadrature nodes $t_1,\ldots,t_m$ and weights
$w_1,\ldots,w_m$ such that
$$
\int_0^{\log(2 \varepsilon^{-1})} f(t) dt = \sum_{j=1}^m w_j f(t_j),
$$
whenever $f(t)$ is in the span of $e^{-r_0 t},\ldots,e^{-r_{2m-1} t}$. By the
triangle inequality
\begin{multline} \label{s1}
\left| \frac{1}{r} - \sum_{j=1}^m w_j e^{-r t_j} \right| \\ \le
\left| \frac{1}{r} - \int_0^{ \log(2 \varepsilon^{-1})} e^{-r t} dt \right|
+ \left| \int_0^{\log(2 \varepsilon^{-1})} e^{-r t} dt - \sum_{j=1}^m w_j e^{-r t_j}
\right|.
\end{multline}
Recall that we have assumed $r \in [1,\delta^{-1}]$, in particular, $r \ge 1$ so
it follows that
\begin{equation} \label{s2}
\left| \frac{1}{r} - \int_0^{ \log(2 \varepsilon^{-1})} e^{-r t} dt \right| \le
\varepsilon/2.
\end{equation}
By \eqref{erri}, the function $e^{-r t}$ can be approximated to error
$\varepsilon/(2 \log(2 \varepsilon^{-1}))$ in the $L^\infty$-norm on $[0,\log(2
\varepsilon^{-1})]$ by functions in the span of $e^{-r_0 t},\ldots,e^{-r_{2m-1}
t}$. Since our quadrature is exact for these functions, we conclude that
\begin{equation} \label{s3}
\left| \int_0^{\log(2 \varepsilon^{-1})} e^{-r t}dt - \sum_{j=1}^m w_j e^{-r t_j}
\right| \le \varepsilon/2.
\end{equation}
Combining \eqref{s1}, \eqref{s2}, and \eqref{s3} completes the
proof.
\end{proof}
\subsection{Proof of Corollary \ref{cor1}} \label{proofcor}
In this section, we prove Corollary \ref{cor1}, which states that the algorithm
of \S \ref{algomain} involves $\mathcal{O}(n \log n)$ operations when
$x_1,\ldots,x_n$ are Chebyshev nodes, $\varepsilon = 10^{-15}$, and $\delta =
1/n$.
\begin{proof}[Proof of Corollary \ref{cor1}]
By rescaling the problem we may assume that $[a,b] = [-1,1]$
such that the Chebyshev nodes $x_1,\ldots,x_n$ are given by
$$
x_j = \cos \left( \pi \frac{j - \frac{1}{2}}{n } \right), \quad \text{for}
\quad j = 1,\ldots,n.
$$
By the result of Theorem \ref{thm1}, it suffices to show that $N_\delta = \mathcal{O}(n
\log n)$, where
$$
N_\delta := \sum_{j=1}^n \# \left\{ x_i : |x_j - x_i| < \frac{1}{n} \right\}.
$$
It is straightforward to verify that the number of Chebyshev nodes within an
interval of radius $1/n$ around the point $-1 < x < 1$ is
$\mathcal{O}(1/\sqrt{1-x^2})$, that is,
$$
\# \left\{ x_i : |x - x_i| < \frac{1}{n} \right\} = \mathcal{O} \left(
\frac{1}{\sqrt{1 - x^2}} \right), \quad \text{for} \quad -1 < x < 1.
$$
This estimate, together with the fact that the first and last Chebyshev node are
distance at least $1/n^2$ from $1$ and $-1$, respectively, gives the estimate
\begin{equation} \label{estc}
\sum_{j=1}^n \# \left\{ x_i : |x_j - x_i| < \frac{1}{n} \right\} = \mathcal{O}
\left( \int_{1/n^2}^{\pi-1/n^2} \frac{n}{\sqrt{1-\cos(t)^2}} dt \right).
\end{equation}
Let $\pi/2 > \eta > 0$ be a fixed parameter; direct calculation yields
$$
\int_{\eta}^{\pi-\eta} \frac{1}{\sqrt{1-\cos(t)^2}} dt = 2 \log \left( \cot
\left( \frac{\eta}{2} \right) \right) = \mathcal{O} \left( \log \left(\eta^{-1}
\right)\right).
$$
Combining this estimate with \eqref{estc} yields $N_\delta = \mathcal{O}(n \log
n)$ as was to be shown.
\end{proof}
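The estimate $N_\delta = \mathcal{O}(n \log n)$ can also be confirmed empirically. A small sketch (not part of the proof) that counts near neighbors of Chebyshev nodes with $\delta = 1/n$ and prints the ratio $N_\delta / (n \log n)$, which should remain bounded:
\begin{verbatim}
import numpy as np

for n in [1000, 4000, 16000, 64000]:
    k = np.arange(1, n + 1)
    x = np.sort(np.cos(np.pi * (k - 0.5) / n))
    left = np.searchsorted(x, x - 1.0 / n)    # window [x_j - 1/n, x_j + 1/n)
    right = np.searchsorted(x, x + 1.0 / n)
    N_delta = int(np.sum(right - left))
    print(n, N_delta, N_delta / (n * np.log(n)))
\end{verbatim}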
\section{Numerical results and implementation details} \label{numerics}
\subsection{Numerical results} \label{numres}
We report numerical results for two different point distributions:
uniformly random points in $[1,10]$, and Chebyshev nodes in $[-1,1]$.
In both cases, we choose the weights $\alpha_1,\ldots,\alpha_n$ uniformly at
random from $[0,1]$, and test the algorithm for
$$
n = 1000 \times 2^k \quad \text{points}, \quad \text{for} \quad k = 0,\ldots,10.
$$
We time two different versions of the algorithm: a standard implementation,
and an implementation that uses precomputed exponentials. Precomputing
exponentials may be advantageous in situations where the expression
\begin{equation} \label{eqnew}
u_j = \sum_{i=1, i \not = j}^n \frac{\alpha_i}{x_i - x_j}, \quad \text{for} \quad j =
1,\ldots,n,
\end{equation}
must be evaluated for many different weights $\alpha_1,\ldots,\alpha_n$
associated with a fixed set of points $x_1,\ldots,x_n$, see Remark
\ref{precomputation}. We find that using precomputed exponentials makes the
algorithm approximately ten times faster, see Tables \ref{key}, \ref{figrand},
and \ref{figcheb}. In addition to reporting timings, we report the absolute
relative difference between the output of the algorithm of \S \ref{algomain} and
the output of direct evaluation; we define the absolute relative difference
$\epsilon_r$ between the output $\tilde{u}_j$ of the algorithm of \S
\ref{algomain} and the output $u_j^d$ of direct calculation by
\begin{equation} \label{errdef}
\epsilon_r := \sup_{j = 1,\ldots,n} \left| \frac{\tilde{u}_j -
u^d_j}{\bar{u}_j} \right|,
\quad \text{where} \quad
\bar{u}_j := \sum_{i=1}^n \left| \frac{\alpha_i}{x_i - x_j} \right|.
\end{equation}
Dividing by $\bar{u}_j$ accounts for the fact that the calculations are
performed in finite precision; any remaining loss of accuracy in the numerical
results is a consequence of the large number of addition and multiplication
operations that are performed.
All calculations are performed in double precision, and the algorithm of \S
\ref{algomain} is run with $\varepsilon = 10^{-15}$. The parameter $\delta > 0$
is set via an empirically determined heuristic. The numerical experiments were
performed on a laptop with an Intel Core i5-8350U CPU and $7.7$ GiB of memory;
the code was written in Fortran and compiled with gfortran with standard
optimization flags. The results are reported in Tables \ref{key}, \ref{figrand},
and \ref{figcheb}.
To put the run time of the algorithm in context, we additionally perform a time
comparison to the Fast Fourier Transform (FFT), which also has complexity
$\mathcal{O}(n \log n)$. Specifically, we compare the run time of the algorithm
of \S \ref{algomain} on random data using precomputed exponentials with the run
time of an FFT implementation from FFTPACK \cite{fftpack} on random data of the
same length using precomputed exponentials. We report these timings in Table
\ref{figfft}; we find that the FFT is roughly $5$-$10$ times faster than our
implementation of the algorithm of \S \ref{algomain}; we remark that no
significant effort was made to optimize our implementation, and that it may be
possible to improve the run time by vectorization.
\begin{table}[h!]
\begin{tabular}{c|l}
Label & Definition \\
\hline
$n$ & number of points \\
$t_w$ & time of algorithm of \S \ref{algomain} without precomputation in seconds \\
$t_p$ & time of precomputing exponentials for algorithm of \S
\ref{algomain} in seconds \\
$t_u$ & time of algorithm of \S \ref{algomain} using precomputed exponentials in seconds \\
$t_d$ & time of direct evaluation in seconds \\
$\epsilon_r $ & maximum absolute relative difference defined in \eqref{errdef} \\
$t_{f}$ & time of FFT using precomputed exponentials in seconds (for time comparison only)
\end{tabular}
\vspace{1ex}
\caption{Key for column labels of Tables \ref{figrand}, \ref{figcheb}, and
\ref{figfft}. }
\label{key}
\end{table}
\begin{table}[h!]
\centering
$$
\begin{array}{r|c|c|c|c|c}
n & t_w & t_p & t_u & t_d & \epsilon_r \\
\hline
1000 & 0.74\E-03 & 0.18\E-02 & 0.93\E-04 & 0.66\E-03 & 0.19\E-14 \\
2000 & 0.19\E-02 & 0.31\E-02 & 0.19\E-03 & 0.25\E-02 & 0.30\E-14 \\
4000 & 0.42\E-02 & 0.61\E-02 & 0.43\E-03 & 0.10\E-01 & 0.52\E-14 \\
8000 & 0.85\E-02 & 0.10\E-01 & 0.89\E-03 & 0.37\E-01 & 0.72\E-14 \\
16000 & 0.18\E-01 & 0.25\E-01 & 0.18\E-02 & 0.14\E+00 & 0.92\E-14 \\
32000 & 0.38\E-01 & 0.49\E-01 & 0.37\E-02 & 0.59\E+00 & 0.19\E-13 \\
64000 & 0.84\E-01 & 0.98\E-01 & 0.78\E-02 & 0.23\E+01 & 0.21\E-13 \\
128000 & 0.16\E+00 & 0.19\E+00 & 0.18\E-01 & 0.95\E+01 & 0.35\E-13 \\
256000 & 0.37\E+00 & 0.53\E+00 & 0.34\E-01 & 0.40\E+02 & 0.59\E-13 \\
512000 & 0.75\E+00 & 0.10\E+01 & 0.71\E-01 & 0.19\E+03 & 0.88\E-13 \\
1024000 & 0.17\E+01 & 0.23\E+01 & 0.15\E+00 & 0.81\E+03 & 0.14\E-12 \\
\end{array}
$$
\caption{Numerical results for uniformly random points in $[1,10]$.}
\label{figrand}
\end{table}
\begin{table}[h!]
\centering
\begin{minipage}{4.5in}
$$
\begin{array}{r|c|c|c|c|c}
n & t_w & t_p & t_u & t_d & \epsilon_r \\
\hline
1000& 0.54\E-03 & 0.12\E-02 & 0.74\E-04 & 0.60\E-03 & 0.11\E-14 \\
2000& 0.15\E-02 & 0.26\E-02 & 0.15\E-03 & 0.24\E-02 & 0.14\E-14 \\
4000& 0.38\E-02 & 0.51\E-02 & 0.37\E-03 & 0.99\E-02 & 0.39\E-14 \\
8000& 0.83\E-02 & 0.10\E-01 & 0.85\E-03 & 0.38\E-01 & 0.35\E-14 \\
16000& 0.19\E-01 & 0.23\E-01 & 0.17\E-02 & 0.14\E+00 & 0.58\E-14 \\
32000& 0.41\E-01 & 0.48\E-01 & 0.37\E-02 & 0.62\E+00 & 0.89\E-14 \\
64000& 0.98\E-01 & 0.90\E-01 & 0.82\E-02 & 0.24\E+01 & 0.12\E-13 \\
128000& 0.22\E+00 & 0.19\E+00 & 0.23\E-01 & 0.10\E+02 & 0.19\E-13 \\
256000& 0.44\E+00 & 0.47\E+00 & 0.32\E-01 & 0.40\E+02 & 0.26\E-13 \\
512000& 0.84\E+00 & 0.94\E+00 & 0.73\E-01 & 0.19\E+03 & 0.52\E-13 \\
1024000& 0.19\E+01 & 0.19\E+01 & 0.14\E+00 & 0.84\E+03 & 0.64\E-13 \\
\end{array}
$$
\end{minipage}
\vspace{1ex}
\caption{Numerical results for Chebyshev nodes on $[-1,1]$.}
\label{figcheb}
\end{table}
\begin{table}[h!]
\centering
$$
\begin{array}{r|c|c}
n & t_u & t_{f} \\
\hline
1000 &0.91\E-04 &0.16\E-04 \\
2000 &0.28\E-03 &0.37\E-04 \\
4000 &0.41\E-03 &0.44\E-04 \\
8000 &0.93\E-03 &0.85\E-04 \\
16000 &0.18\E-02 &0.24\E-03 \\
32000 &0.38\E-02 &0.41\E-03 \\
64000 &0.81\E-02 &0.88\E-03 \\
128000 &0.18\E-01 &0.19\E-02 \\
256000 &0.38\E-01 &0.59\E-02 \\
512000 &0.71\E-01 &0.12\E-01 \\
1024000 &0.14\E+00 &0.25\E-01 \\
\end{array}
$$
\caption{Time comparison with FFT.} \label{figfft}
\end{table}
\subsection{Computing nodes and weights} \label{nodesandweights}
The algorithm of \S \ref{algomain} is described under the assumption that nodes
$t_1,\ldots,t_m$ and weights $w_1,\ldots,w_m$ are given such that
\begin{equation} \label{eqapp6}
\left| \frac{1}{r} - \sum_{j=1}^m w_j e^{-r t_j} \right| \le \varepsilon
\quad \text{for} \quad
r \in [\delta (b-a), b-a],
\end{equation}
where $\varepsilon > 0$ and $\delta > 0$ are fixed parameters. As in the
proof of Lemma \ref{quadlem} we note that by rescaling $r$ it suffices to find
nodes and weights satisfying
\begin{equation} \label{eqap5}
\left| \frac{1}{r} - \sum_{j=1}^m w_j e^{-r t_j} \right| \le \varepsilon
\quad \text{for} \quad
r \in [1, \delta^{-1}].
\end{equation}
Indeed, if the nodes $t_1,\ldots,t_m$ and weights $w_1,\ldots,w_m$ satisfy
\eqref{eqap5}, then the nodes $t_1/(b-a),\ldots,t_m/(b-a)$ and weights
$w_1/(b-a),\ldots,w_m/(b-a)$ will satisfy \eqref{eqapp6}. Thus, in order to
implement the algorithm of \S \ref{algomain} it suffices to tabulate nodes and
weights that are valid for $r \in [1,M]$ for various values of $M$. In the
implementation used in the numerical experiments in this paper, we tabulated
nodes and weights valid for $r \in [1,M]$ for
$$
M = 4^k \quad \text{for} \quad k = 1,\ldots,10.
$$
For example, in Tables \ref{fig01} and \ref{fig02} we have listed $m = 33$ nodes
$t_1,\ldots,t_{33}$ and weights $w_1,\ldots,w_{33}$ such that
$$
\left| \frac{1}{r} - \sum_{j=1}^{33} w_j e^{-r t_j} \right| \le 10^{-15},
$$
for all $r \in [1,1024]$.
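The tabulated data are straightforward to validate and rescale; the following
Python sketch (an illustration, not our Fortran implementation) checks the
bound on a grid of sample points, where the arrays \texttt{t} and \texttt{w}
are assumed to hold the nodes and weights listed below. Scaling both nodes and
weights by $1/c$ maps an approximation of $1/r$ on $[1,M]$ to one on
$[c, cM]$, with the absolute error scaled by $1/c$.
\begin{verbatim}
import numpy as np

def expsum_error(t, w, r):
    # max over the sample points r of |1/r - sum_j w_j exp(-r t_j)|
    approx = np.exp(-np.outer(r, t)) @ w
    return np.max(np.abs(1.0 / r - approx))

def rescale(t, w, c):
    # nodes/weights accurate on [1, M] become accurate on [c, c*M]
    return t / c, w / c

# t, w = ... (the 33 tabulated values below)
# r = np.linspace(1.0, 1024.0, 100001)
# expsum_error(t, w, r) should then be below 1e-15
\end{verbatim}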
\begin{table}[h!]
\centering
\begin{minipage}{4.5in}
{\small
\begin{verbatim}
0.2273983006898589D-03,0.1206524521003404D-02,0.3003171636661616D-02,
0.5681878572654425D-02,0.9344657316017281D-02,0.1414265501822061D-01,
0.2029260691940998D-01,0.2809891134697047D-01,0.3798133147119762D-01,
0.5050795277167632D-01,0.6643372693847560D-01,0.8674681067847460D-01,
0.1127269233505314D+00,0.1460210820252656D+00,0.1887424688689547D+00,
0.2435986924712581D+00,0.3140569015209982D+00,0.4045552087678740D+00,
0.5207726670656921D+00,0.6699737362118449D+00,0.8614482005965975D+00,
0.1107074709906516D+01,0.1422047253849542D+01,0.1825822499573290D+01,
0.2343379511131976D+01,0.3006948272874077D+01,0.3858496861353812D+01,
0.4953559345813267D+01,0.6367677940017810D+01,0.8208553424367139D+01,
0.1064261195532074D+02,0.1396688222191633D+02,0.1889449184151398D+02
\end{verbatim}
}
\end{minipage}
\vspace{1ex}
\caption{A list of $33$ nodes $t_1,\ldots,t_{33}$.} \label{fig01}
\end{table}
\begin{table}[h!]
\centering
\begin{minipage}{4.5in}
{\small
\begin{verbatim}
0.5845245927410881D-03,0.1379782337905140D-02,0.2224121503815854D-02,
0.3150105276431181D-02,0.4200370923383030D-02,0.5431379037435571D-02,
0.6918794756934398D-02,0.8763225538492927D-02,0.1109565843047196D-01,
0.1408264766413004D-01,0.1793263393523491D-01,0.2290557147478609D-01,
0.2932752351846237D-01,0.3761087060298772D-01,0.4828044150885936D-01,
0.6200636888239893D-01,0.7964527252809662D-01,0.1022921587521237D+00,
0.1313462348178323D+00,0.1685948994092301D+00,0.2163218289369589D+00,
0.2774479391081561D+00,0.3557192797195578D+00,0.4559662159666857D+00,
0.5844792718191478D+00,0.7495918095861060D+00,0.9626599456939077D+00,
0.1239869481076760D+01,0.1605927580173348D+01,0.2102583514906888D+01,
0.2811829220697454D+01,0.3937959064316012D+01,0.6294697335695096D+01
\end{verbatim}
}
\end{minipage}
\vspace{1ex}
\caption{A list of $33$ weights $w_1,\ldots,w_{33}$.} \label{fig02}
\end{table}
The nodes and weights satisfying \eqref{eqap5} can be computed by using
a procedure for generating generalized Gaussian quadratures for Chebyshev
systems together with the proof of Lemma \ref{approx}. Indeed, Lemma
\ref{approx} is constructive with the exception of the step that invokes Lemma
\ref{krein} of Kre\u{\i}n. The procedure described in
\cite{MR2671296} is a constructive version of Lemma \ref{krein}: given a
Chebyshev system of functions, it generates the corresponding quadrature nodes
and weights. We remark that generalized Gaussian quadrature generation codes are
powerful tools for numerical computation with a wide range of applications.
The quadrature generation code used in this paper was an optimized version of
\cite{MR2671296} recently developed by Serkh for \cite{MR3564124}.
\subsection*{Acknowledgements}
The authors would like to thank Jeremy Hoskins for many useful
discussions. Certain commercial equipment is identified in this paper
to foster understanding. Such identification does not imply
recommendation or endorsement by the National Institute of Standards
and Technology, nor does it imply that equipment identified is
necessarily the best available for the purpose.
\section*{Abstract}
The penetration power of x-rays allows one to image large objects
while their short wavelength allows for high spatial resolution. As
a result, with synchrotron sources one has the potential to obtain
tomographic images of centimeter-sized specimens with sub-micrometer
pixel sizes. However, limitations on beam and detector size make it
difficult to acquire data of this sort in a single take,
necessitating strategies for combining data from multiple
regions. One strategy is to acquire a tiled set of local tomograms
by rotating the specimen around each of the local tomogram center
positions. Another strategy, sinogram oriented acquisition,
involves the collection of projections at multiple offset positions
from the rotation axis followed by data merging and
reconstruction. We have carried out a simulation study to compare
these two approaches in terms of radiation dose applied to the
specimen, and reconstructed image quality. Local tomography
acquisition involves an easier data alignment problem, and immediate
viewing of subregions before the entire dataset has been acquired.
Sinogram oriented acquisition involves a more difficult data
assembly and alignment procedure, and it is more sensitive to
accumulative registration error. However, sinogram oriented
acquisition is more dose-efficient, it involves fewer translation
motions of the object, and it avoids certain artifacts of local
tomography.
\section{Introduction}
X-ray tomography offers a way to image the interior of extended
objects, and tomography at synchrotron light sources offers
significantly higher throughput than with laboratory sources when
working at $\sim 1$ micrometer voxel resolution or below. However,
practical limitations of synchrotron x-ray beam width limit the size
of objects that can be studied in a single field of view, and pixel
count in readily available image detectors sets a similar limit. Thus
it becomes challenging to scale x-ray tomography up from the roughly
$(2000)^{3}=8$ gigavoxel volumes that are routinely imaged today,
towards the teravoxel volumes that are required for imaging
centimeter-sized objects at micrometer-scale voxel size.
One solution lies in the use of one of several image stitching
approaches that can be applied to synchrotron x-ray tomography
\cite{kyrieleis_nima_2009}. Of those approaches discussed, we
consider here two of the most promising as shown schematically in
Fig.~\ref{fig:acquisition}:
\begin{itemize}
\item \textbf{Local tomography acquisition (LTA):} in
local tomography \cite{kuchment_invprob_1995} (also called
truncated object tomography \cite{lewitt_optik_1978a}, or interior
tomography \cite{natterer_1986}), a subregion of a larger volume is
imaged by rotating about the center of the subregion. Features
outside the subregion will contribute to some but not all
projections, reducing their effects on the reconstructed image. One
can therefore acquire a tiled array of local tomograms to image the
full specimen (method III of \cite{kyrieleis_nima_2009}). In this
case the rotation axis is shifted to be centered at each of the
array of object positions, after which the object is rotated. The
local tomograms of the regions of interest (ROIs) are then
reconstructed, and the full object is constructed from stitching
together these local tomograms \cite{oikonomidis_jphysics_2017}.
\item \textbf{Sinogram oriented acquisition (SOA):} in this case, one
acquires a set of ``ring in a cylinder'' projections
\cite{vescovi_jsr_2017}. The object is moved to a series of offset
positions from the rotation axis, and at each position the object is
rotated while projections are collected (method V of
\cite{kyrieleis_nima_2009}). The projections can be assembled and
stitched to yield a full-field, panoramic projection image at each
rotation angle, or they can be assembled and stitched in the
sinogram representation. This method shares some common
characteristics with the so-called ``half-acquisition'' method
\cite{sztrokay_physmedbio_2012} in that both methods acquire
sinograms of different parts of the sample, and stitch them before
reconstruction (in half-acquisition, sinogram from \ang{180} to
\ang{360} is flipped and stitched by the side of the \ang{0} to
\ang{180} portion). The difference between them is that SOA can
handle a larger number of fields in the horizontal axis (instead of
2 in half-acquisition), and that each partial sinogram is acquired
with the same rotation direction so no flipping is needed.
\end{itemize}
Another approach that has been employed with much
success involves
collecting a mosaic array of projection images at each rotation angle
\cite{liu_jsr_2012,mokso_jsr_2012} before repeating the process at the
next rotation. For each angle, the projections are assembled and
stitched to yield a full-field panoramic projection. However, since
in practice it is usually quicker to rotate the specimen through
\ang{180} than it is to translate to a new mosaic offset position,
this approach (method I of \cite{kyrieleis_nima_2009}) has lower
throughput so we do not consider it further. Other large-scale imaging
methods like helical tomography
\cite{kalender_sucm_1994,pelt_measscitech_2018} are not discussed
here, as they have not been implemented for sub-micrometer resolution
imaging. Therefore, we limit our discussion to LTA versus SOA as
defined above.
LTA and SOA are two distinct data collection strategies, each with
their own tradeoffs. For example, in LTA one can begin to reconstruct
regions of the object immediately after collection of its local
tomography data, whereas in SOA one must wait for the collection of all
``ring in a cylinder'' data before obtaining a full volume
reconstruction. One study of LTA
\cite{kyrieleis_jmicro_2010} indicated that the method contains
inherent complicating factors that can affect image quality, while
another study \cite{dasilva_optexp_2018} has shown that the
tomographic reconstruction of a local region can be improved
by using a multiscale acquisition approach including lower resolution
views of the entire specimen (this is not straightforward
when the specimen is larger than the illuminating beam). However, we
are not aware of detailed comparisons of LTA and SOA
with regards to radiation dose
efficiency as well as
reconstruction quality. Low radiation dose is critical
for X-ray imaging of soft materials, since they are
vulnerable to beam-induced damage and distortion
\cite{reimer_scanning_1984}. Moreover, other factors may also come into
play when one does either SOA or LTA in practice. For example,
mechanical instabilities in translational motors introduce positional
fluctuations of the collected field-of-views, which requires image
registration to refine the relative offsets between them. In this respect,
SOA and LTA data behave differently in the presence of noise. Thus, a
comprehensive comparison is made here.
\begin{figure}
\centerline{\includegraphics[width=0.5\textwidth]{figure_01}}
\caption{Comparison on the acquisition scheme of local tomography
acquisition (LTA; left) versus sinogram oriented acquisition (SOA;
right). The top row depicts information collection in sinogram
space, where each stripe with an arrow and a distinct color
represents one angular scan over \ang{180} (which is then used to
synthesize the full \ang{360} sinogram). The bottom row shows the
mapping of different scans to the full image of one object
slice. For samples with roughly equal extension in both lateral
dimensions, if the number of scans required in SOA is $n_{\textup{f}}$,
then $n_{\textup{f}}^2$ scans are needed by LTA.}
\label{fig:acquisition}
\end{figure}
\section{Methodology}
\subsection{Phantom object}
\begin{figure}
\centerline{\includegraphics[width=0.5\textwidth]{figure_02}}
\caption{Phantom object created for simulations, where the
highest attenuation values are white and the lowest are
black. The object has a diameter, or maximum projected
thickness, of $L=2048$ voxels, each set to a per-voxel
linear attenuation coefficient of $\mu=1/2048$ so that the
total attenuation through the disk if solid is $\exp[-1]$.
The object is designed with random spherical
``pores'' inside with linear attenuation coefficients
ranging from $\mu=0/2048$ to $\mu=1/2048$.}
\label{fig:simulated_sample}
\end{figure}
In order to better understand the tradeoffs between local tomography
acquisition and sinogram oriented acquisition, we created a 2D phantom sample using the
open-source virtual object designing tool \textit{XDesign}{}
\cite{ching_jsr_2017}. This represents an object slice from a 3D
object. The simulated sample (Fig.~\ref{fig:simulated_sample}) is a
solid disk with a diameter, and thus maximum projected thickness, of
$L=2048$ pixels. If solid, each pixel would be set to a linear
absorption coefficient (LAC) of $\mu=1/2048$, so that its total
thickness of $L=2048$ pixels would attenuate the x-ray beam by a
factor of $\exp[-\mu L]=\exp[-1]$. In fact, the object was created
with circular pores in its interior, with diameters ranging from 8 to
205 pixels, and linear absorption coefficients ranging from
$\mu=0/2048$ (vacuum) to $\mu=1/2048$ (solid). All pores are randomly
distributed with no overlap. The object is also assumed to be
fully within the depth of focus of the imaging system, with no wave
propagation effects visible at the limit of spatial resolution, so
that pure projection images are obtained.
To generate the sinogram of the object, the Radon transform was
performed using \textit{TomoPy}, an open-source toolkit for x-ray
tomography \cite{gursoy_jsr_2014}. All tomographic reconstructions in
this work are also obtained using this software package.
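As a minimal sketch of this workflow (assuming the standard \textit{TomoPy}
API; the file name is hypothetical), the forward projection and
reconstruction of a single slice can be performed as follows.
\begin{verbatim}
import numpy as np
import tomopy

theta = tomopy.angles(1800)                # projection angles spanning 180 deg
obj = np.load('phantom.npy')[np.newaxis]   # one slice, shape (1, ny, nx)
sino = tomopy.project(obj, theta)          # Radon transform of the phantom
rec = tomopy.recon(sino, theta, algorithm='gridrec')
\end{verbatim}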
\subsection{Sampling for LTA and SOA}
To image an object larger than the imaging system's field of view
$f$, one provides some overlap between acquired projection
scans. The acquisition scheme can be conveniently
shown in the sinogram
domain which contains both a spatial dimension and a viewing angle
dimension. A scan can be represented by a band-shaped coverage on
the sinogram, which is the region where we have access to the
measurement. Figure~\ref{fig:sampling_map} illustrates this coverage
for SOA and LTA, respectively, with the same field-of-view size for
both schemes. For LTA, a 3$\times$3 square grid is
used. Brighter values in the images mean that a pixel in the sinogram
is sampled by the illuminating beam more frequently.
\begin{figure}
\centerline{\includegraphics[width=0.8\textwidth]{figure_03}}
\caption{Coverage on the full sinogram in an experiment using
(a) sinogram oriented acquisition (SOA) and (b) local
tomography acquisition (LTA) with equal field-of-view.
Brighter values in the images correspond to the number of
times that a voxel in the object is sampled by (exposed to) the
illuminating beam.}
\label{fig:sampling_map}
\end{figure}
For LTA, by defining a coordinate system with the
origin $(0,0)$ located at the object center,
it can be shown that the coverage of a local tomography scan centered
at $(x, y)$ is a set of points on the \ang{360} synthesized sinogram given by
\begin{equation}
C_{\textup{LTA}} = \{(s, \theta)|s_0(\theta) - f/2 \leq s \leq
s_{0}(\theta) + f/2\}
\label{eqn:c_os}
\end{equation}
with
\begin{equation}
s_{0}(\theta) = \sqrt{x^2+y^2}\,\cos(\alpha-\theta) + c_{0}
\label{eqn:s_0_theta}
\end{equation}
where $s$ and $\theta$ are respectively the horizontal (spatial) and
vertical (angular) coordinates of the sinogram, $\alpha = \arctan(x/y)$,
and $c_0$ is the rotation center of the
synthesized \ang{360} sinogram. This represents a partial sinogram of
the entire object, as shown in Fig.~\ref{fig:acquisition}.
For SOA, the coverage is simply a straight band extending through
the angular axis. Mathematically, it can be expressed as
\begin{equation}
C_{\textup{SOA}} = \{(s, \theta)|p - f/2 \leq s \leq p + f/2\}
\label{eqn:c_ps}
\end{equation}
where $p$ is the center position of the field-of-view.
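The coverage sets of Eqs.~\ref{eqn:c_os} and \ref{eqn:c_ps} can be rasterized
into boolean masks on a discretized sinogram, as in the following Python
sketch (an illustration of the geometry, not necessarily the Tomosim
implementation).
\begin{verbatim}
import numpy as np

def coverage_lta(n_s, n_theta, x, y, f, c0):
    # mask of one LTA scan on the synthesized 360-degree sinogram (Eq. c_os)
    s = np.arange(n_s)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    alpha = np.arctan2(x, y)              # quadrant-aware arctan(x/y)
    s0 = np.hypot(x, y) * np.cos(alpha - theta) + c0
    return np.abs(s[None, :] - s0[:, None]) <= f / 2.0

def coverage_soa(n_s, n_theta, p, f):
    # mask of one SOA scan: a straight band around offset p (Eq. c_ps)
    band = np.abs(np.arange(n_s) - p) <= f / 2.0
    return np.tile(band, (n_theta, 1))
\end{verbatim}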
For local tomography
acquisition (LTA), the partial sinograms are padded with their edge
values for twice their width on both sides
in order to reduce boundary artifacts in the reconstruction images
\cite{kyrieleis_jmicro_2010}. After reconstructing all partial
sinograms, the reconstructed disks are then stitched together to form
the complete reconstruction.
Since both SOA and LTA involve multiple scans, we define a quantity
$n_{\textup{f}}$ that represents the number of scans along one side of the
object that is required to fully reconstruct one slice of the sample.
For SOA, $n_{\textup{f,PS}}$ is equal to the total number of scans; for LTA,
the total number of scans is roughly $n_{\textup{f,OS}}^2$ considering a square
grid of regions of interest (ROIs), though the actual number can vary
depending on the object shape. For example, applying LTA on a thin
sheet-like sample only requires roughly the same number of scans as
SOA. Also, one could choose hexagonal grids which are more efficient
by a factor of $\sqrt{3}$ than a square grid
\cite{heinzer_neuroimg_2006}, but we assume square grids here for
simplicity.
In order to fully reconstruct one slice of the object using SOA, there
should be a sufficient number $n_{\textup{f,PS}}$ of scan fields to guarantee
that the composite field of view completely covers the longest lateral
projection of that slice. In practice, an overlap that takes a
fraction $(1-\gamma_{\rm PS})$ of the field of view between each pair of adjacent
scans is needed for an automated algorithm to determine the offset
between them. With this taken into account, $n_{\textup{f,PS}}$ can be denoted
as
\begin{equation}
n_{\textup{f,PS}} = \mbox{ceil}\Big[\frac{L - f}{\gamma_{\rm PS} f} + 1\Big]
\end{equation}
where the function $\mbox{ceil}(x)$ is the ceiling function that returns the
smallest integer that is greater than or equal to a real number $x$. Since the overlapping area
diminishes the actual sample area that a scan can cover, we introduce a
``useful field of view'' $f^{\prime}_{\rm PS}$ for SOA, given by $f^{\prime}_{\rm PS} = \gamma_{\rm PS} f$.
For example, if a 15\% overlap is deliberately created between
a pair of adjacent scans, then $f^{\prime}_{\rm PS}$ will be 85\% of the instrumental field-of-view.
Unless otherwise noted, in
this work we keep the value of $\gamma_{\rm PS}$ to be 0.85 for simulation studies.
The case for LTA differs in that the scans need to cover the object slice
in two dimensions. In principle, the scans in LTA can be arranged in an
arbitrary pattern that complies with the actual shape of the sample. If the
sample is square, then a roughly equal number of scans $n_{\textup{f,OS}}$ is needed
along both sides of the object, or $n_{\textup{f,OS}}^{2}$ scans in total. Special
attention should be paid to the width of the field of view in LTA, as it might not be equal to
the actual
field of view of the optical system. In LTA, it has been found that
the reconstructed ROI often exhibits a bowl-shaped
intensity profile, with the values of near-boundary pixels abnormally
high \cite{kyrieleis_jmicro_2010}. Although this can be mitigated
by padding the partial sinograms, this remedy does not work effectively
when the truncation ratio is very low. In such a scenario, the
reconstructed ROIs need to have a portion of their outer pixels
removed before they can be stitched. Similar to the case of SOA,
we therefore introduce a ``useful field of view'' $f^{\prime}_{\rm OS}$ for LTA.
If we use for local tomography acquisition only the content within
a disk whose radius is a fraction $\gamma_{\rm OS}$ of the original ROI,
then $f^{\prime}_{\rm OS} = \gamma_{\rm OS} f$.
Consequently, the required number of scans $n_{\textup{f,OS}}$ is given by
\begin{equation}
n_{\textup{f,OS}} =
\begin{cases}
1 & f \geq L \\
\mbox{ceil}\Big(\frac{\sqrt{2}L}{\gamma_{\rm OS} f}\Big) & f < L
\end{cases}.
\label{eqn:nscan_os}
\end{equation}
We emphasize that $n_{\textup{f,OS}}$ is the number of scans required
along one side of the sample; for a square specimen, the total number
of scans needed is $n_{\textup{f,OS}}^2$.
Eq.~\ref{eqn:nscan_os} is derived assuming the scenario indicated in
Fig.~\ref{fig:os_sampling_scheme}. When $f < L$, scanned
ROIs are arranged in a square grid such that each corner of the bounding
square of the sample disk intersects with the border of an ROI.
Also, we assume that
the distance between the centers of two diagonally overlapping
ROIs is $f^{\prime}_{\rm OS}$ so that all ROIs exactly cover the object seamlessly.
Unless specifically indicated, the value of
$\gamma_{\rm OS}$ is chosen to be 0.85 for simulation studies involved in this work.
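The scan counts above are straightforward to evaluate; a direct Python
transcription (with the $\gamma$ values above as assumed defaults) is given
below. For example, with $L = 2048$ and $f = 512$ it yields
$n_{\textup{f,PS}} = 5$ and $n_{\textup{f,OS}} = 7$.
\begin{verbatim}
import math

def n_scans_soa(L, f, gamma=0.85):
    # number of offset scans needed to span the object width L
    return math.ceil((L - f) / (gamma * f) + 1)

def n_scans_lta(L, f, gamma=0.85):
    # scans per side; a square grid uses roughly this value squared
    return 1 if f >= L else math.ceil(math.sqrt(2) * L / (gamma * f))

print(n_scans_soa(2048, 512), n_scans_lta(2048, 512))  # 5 7
\end{verbatim}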
\begin{figure}
\centerline{\includegraphics[width=0.5\linewidth]{figure_04}}
\caption{Schematic diagram showing the assumed pattern of data
acquisition in the LTA approach of beyond-field-of-view tomography. The
specimen is represented by the gray solid disk. Each local ROI
that can be reconstructed using the data acquired from a scan
in LTA is shown by a dashed blue circle. Each of these ROIs has
a diameter of $f^{\prime}_{\rm OS}$, and they are packed in a way that the
distance between the centers of each pair of diagonally adjacent
ROIs is exactly $f^{\prime}_{\rm OS}$, so that the sample can be fully
covered without gaps. }
\label{fig:os_sampling_scheme}
\end{figure}
In order to understand the consequences of different object diameters
$L$, we follow previous work \cite{kyrieleis_jmicro_2010} and
characterize them in terms of a truncation ratio $T$ of
\begin{equation}
T = \frac{f^{\prime}}{L}
\label{eqn:truncation_ratio}
\end{equation}
where of course one uses $f^{\prime}_{\rm OS}$ for local tomography acquisition (LTA) and
$f^{\prime}_{\rm PS}$ for sinogram oriented acquisition (SOA).
The numerical studies in this work, which involve the simulation of
data acquisition and reconstruction using both LTA and SOA, were performed
using a Python package we developed called ``Tomosim,'' which has been
made freely available on Github (https://github.com/mdw771/tomosim).
The charcoal tomographic dataset has been made available on TomoBank
\cite{decarlo_mst_2018} with a sample ID of 00078.
\subsection{Radiation dose calculation}
The differential energy deposition $dE$ within an infinitesimal depth
$dt$ is formulated from the Lambert-Beer law as
\begin{equation}
\frac{dE}{dt} = \Big|\bar{n}E_0\frac{dI}{dt}\Big|
\label{eqn:diff_energy_didx}
\end{equation}
where $\bar{n}$ is the average number of incident photons per voxel,
and $E_0$ is the photon energy.
The Lambert-Beer law gives $\mu_{\boldsymbol{r}}(t)$, the x-ray
LAC of the sample as a function of
penetration depth $t$ along the current transmission direction
$\boldsymbol{r}$, as $\mu_{\boldsymbol{r}}(t) = -[1/I(t)](dI/dt)$.
To simplify our later computation with this term included in an
integral with regards to $t$, we approximate the quantity
$I(t)$ in the factor prior to $dI/dt$ as $I(t) \approx I(L/2) =
\exp(-\bar{\mu}L/2)$, where $\bar{\mu}$ is the mean LAC of the
specimen. Equation~\ref{eqn:diff_energy_didx} then becomes
\begin{equation}
\frac{dE}{dt} = \bar{n}E_0\exp(-\bar{\mu}L/2)\mu_{\boldsymbol{r}}(t).
\label{eqn:diff_energy}
\end{equation}
Again, the term $\exp(-\bar{\mu}L/2)$ represents the
beam attenuation factor at the center of the object, but it can also
be used to approximate the average normalized beam intensity ``seen''
by an arbitrary voxel of the object in one viewing direction. Accordingly,
we also replace $\mu_{\boldsymbol{r}}(t)$ in Eq.~\ref{eqn:diff_energy}
with a constant value of $\bar{\mu}$. This approximation is valid
as long as the LAC of the object varies slowly. By integrating both
sides over the voxel size $\Delta$, we obtain the energy absorbed
by this voxel as
\begin{equation}
E = \bar{n}E_0\exp(-\bar{\mu}L/2)\bar{\mu}\Delta.
\label{eqn:diff_energy2}
\end{equation}
Then, the radiation dose received by this $j$-th voxel per (\ang{180}) scan
is given by
\begin{equation}
D_{s,j} = \frac{N_{\theta}\bar{n}E_0\exp(-\bar{\mu}L/2)\bar{\mu}}{\rho\Delta^{2}}
\label{eqn:energy_radon}
\end{equation}
where $N_{\theta}$ is the number of projection angles, and
$\rho$ is the object density.
The subscript $s$ in $D_{s,j}$ denotes the $s$-th scan. Again, for SOA,
a total of $n_{\textup{f,PS}}$ scans are needed, while for LTA the number is on the
order of $n_{\textup{f,OS}}^2$.
Based on this, one can estimate the total radiation dose received by
the sample by summing up the number of occasions of being exposed to
the beam over all voxels ($j$) and all scans ($s$). This is compactly expressed
as
\begin{eqnarray}
D &=& \sum_s\sum_j D_{s,j} \nonumber \\
&\propto & \sum_s \epsilon\Omega_s
\label{eqn:total_dose}
\end{eqnarray}
where $\Omega_s$ is the total area in sinogram domain
that is sampled in one scan (which is equal to the width in pixels of a field of view
multiplied by the number of projection angles), and $\epsilon$ is the fraction of
pixels where the sample is present (\textit{i.e.}, pixels that are not
purely air).
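A direct transcription of Eqs.~\ref{eqn:energy_radon} and
\ref{eqn:total_dose} (a sketch for illustration; consistent units are left to
the caller) reads:
\begin{verbatim}
import numpy as np

def dose_per_voxel_per_scan(n_theta, n_bar, E0, mu_bar, L, rho, delta):
    # Eq. (energy_radon): dose received by one voxel in a 180-degree scan
    return (n_theta * n_bar * E0 * np.exp(-mu_bar * L / 2.0) * mu_bar
            / (rho * delta ** 2))

def total_dose_proxy(omegas, eps):
    # Eq. (total_dose): D is proportional to eps * sum_s Omega_s
    return eps * sum(omegas)
\end{verbatim}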
\subsection{Experimental data acquisition and registration}
For an experimental test of sinogram oriented acquisition (SOA), we
used data collected using 25 keV X rays at beamline 32-ID
of the Advanced Photon Source at the Argonne National Laboratory.
The specimen is a truncated charcoal sample
with a diameter of $d=4$ mm, whereas the imaging system field of view
was $f=1920\times 0.6$ $\mu$m=1.12 mm. With $\gamma_{\rm PS}=0.9$,
this yields a reduced field of view of $f^{\prime}_{\rm PS}=1.04$ mm so that
$n_{\textup{f,PS}}=4$ and $T=0.26$. Registration of the sinograms was done using phase
correlation, which can be formulated as
\begin{equation}
\boldsymbol{c} = \argmax_{\boldsymbol{x}\in\mathbb{R}^2}
\mathcal{F}^{-1}
\Bigg[\frac{\mathcal{F}[I_a(\boldsymbol{x})] \mathcal{F}[I^*_b(\boldsymbol{x})]}
{|\mathcal{F}[I_a(\boldsymbol{x})] \mathcal{F}[I^*_b(\boldsymbol{x})]|}
\Bigg](\boldsymbol{x}).
\end{equation}
This method is reliable when a large number of high-contrast features are
present in the overlapping region of both images $I_a$ and $I_b$, and when
noise is not heavily present. In practice, photon flux ($\bar{n}$ in
Eq.~\ref{eqn:energy_radon}) sometimes needs to be reduced in order to
lower the radiation dose imposed on the sample. This can lead to higher photon
noise that challenges image registration.
\section{Results and discussion}
\subsection{Comparison on dose-efficiency}
As one can easily see from Fig.~\ref{fig:acquisition}, local tomography
acquisition (LTA) requires a larger number of scans
than sinogram oriented acquisition (SOA) by a factor of about $1/T$. Because
much of the illumination of one scan goes into out-of-local-tomogram
regions in local tomography acquisition, this also means that the object is
exposed to a higher radiation dose.
In Eq.~\ref{eqn:total_dose}, the total radiation
dose of an experiment
is shown to be approximately proportional to the area of non-air regions
sampled in the sinogram, given that the
sample does not contain large fluctuations
in absorption coefficient. In this equation, $\Omega_s$
itself is also an interesting quantity to
investigate. The sum of the areas of all $\Omega_s$ regions in the
sinogram, which also includes those ``air'' pixels, provides an
intuitive measurement of the acquired data size, which is jointly
determined by the actual field of view, the
number of scans $s$, and the number of projection angles $N_{\theta}$. For a given
experimental configuration, this summed area is denoted by $A$.
A lower $A$ means
that the sample can be entirely imaged and reconstructed with a
smaller data size (\emph{i.e.}, less disk space is needed to store a complete acquisition),
which is desirable in the case
where only limited storage and computing resources are available.
\begin{figure}
\centerline{\includegraphics[width=0.95\textwidth]{figure_05}}
\caption{Acquired data size (a) and radiation dose (b) as
a function of the truncation ratio $T$ of
Eq.~\ref{eqn:truncation_ratio} for both local tomography acquisition
(LTA) and sinogram oriented acquisition (SOA). In each subplot, the
variation of $n_{\textup{f,PS}}$ and $n_{\textup{f,OS}}$ is also shown. The
figure indicates that the acquired data size and radiation
dose do not necessarily decrease with increasing truncation
ratio; rather, both quantities are associated with the
arrangement of scans in an actual experiment. These results
were calculated for fixed values of $\gamma_{\rm PS}=0.85$ and
$\gamma_{\rm OS}=0.85$ as discussed in the text.}
\label{fig:eff_ratio}
\end{figure}
With these quantities defined, Fig.~\ref{fig:eff_ratio}(a) shows the
results for acquired data size $A_\textup{SOA}$ and $A_\textup{LTA}$
as a function of truncation ratio $T$, while
Fig.~\ref{fig:eff_ratio}(b) shows $D_\textup{SOA}$ and
$D_\textup{LTA}$. The dashed lines in each plot show the variation of
$n_{\textup{f,PS}}$ and $n_{\textup{f,OS}}$. Note that $n_{\textup{f,OS}}$ is the number of
scans along one side of the object, so that $n_{\textup{f,OS}}^{2}$ scans are
required for local tomography acquisition (LTA). When examining this
figure, it has to be noted that no matter what $T$ is, the values of
$\gamma_{\rm PS}$ and $\gamma_{\rm OS}$ are fixed. This means that the area covered by
all scans in either SOA or LTA might be larger than the sample. In
such cases, we allow acquisition to extend beyond the right side of
the sample for SOA; for LTA, the excess margins are on the right
and bottom sides of the sample. The ``overflow'' of acquisition does
not substantially affect $D$, but can increase $A$. It can be seen in
Fig.~\ref{fig:eff_ratio}(a) that $A$ is not a monotonic function of
$T$, although it does show an overall decreasing trend with increasing
$T$. For example, when $T$ grows from 0.5 to 0.7, $A_\textup{LTA}$
increases while $n_{\textup{f,OS}}$ is unchanged. This is explained by the
larger ``overflow'' of scanned field beyond the actual object. In
contrast, the increase in $T$ that does not cause a reduction of
$n_{\textup{f}}$ only results in a small increase of $D$ due to the increase of
overlapping areas required between adjacent scans. However, the
overall observation is still that SOA is both more data-efficient and
dose-efficient than LTA in general. The figures indicate that
no matter which method is used, a higher $T$ does not necessarily
imply better data efficiency in the case of $f < L$. One should
thus carefully choose the camera to use in order to optimize the
experiment in terms of both data size and radiation dose.
\subsection{Comparison on reconstruction artifacts}
\begin{figure}
\centerline{\includegraphics[width=0.9\textwidth]{figure_06}}
\caption{Comparison of image quality between local tomography
acquisition (LTA) and sinogram oriented acquisition
(SOA). This comparison is made using the Structural
SIMilarity index (SSIM) with regards to the ground truth
image, against reconstructions for SOA (a) and LTA (b). When
noise is not a factor, and registration errors are
negligible, the SOA result is identical to the ground
truth. Also shown (c) is the SSIM as a function of
truncation ratio $T$.}
\label{fig:recon_ssim}
\end{figure}
While both LTA and SOA are subject to photon noise during measurement,
other types of artifacts can also participate in determining the
reconstruction quality.
The sources of noise and artifacts in the final reconstructions for SOA
are straightforward to understand. In particular, when the intensity
of adjacent projection tiles differs, ring artifacts can be seen in
the reconstruction if the sinograms are not properly blended where
they overlap. For LTA, reconstruction quality is mainly affected by
three factors other than noise in the raw data. First, since the
illuminating beam at different scan positions and illumination angles
arrive at the object region with varying transmission through
out-of-object-field features, the overall intensity of the
reconstruction disk can vary between neighboring tiles.
This issue can be mitigated by gradient-based image blending techniques
such as Poisson blending \cite{perez_acm_2003}, but they are usually
time-consuming and are not appropriate when the number of tiles is
large. Second, a
bowl-shaped intensity profile across an individual reconstruction disk
is often observed in ROI tomography, in which case the pixel
intensities near the edge of the reconstruction disk are shifted
higher. This can be alleviated by padding the partial sinograms on its
left and right sides (along the spatial axis) by the edge values
\cite{kyrieleis_jmicro_2010}. In our case, the sinograms were padded
by twice their length on each side, but this did not completely
eliminate distortion in the intensity profile. Finally, each
projection image collected inevitably contains information from the
portion of the object lying outside of the ROI, which, at least to some minor
extent, violates the Fourier slice theorem \cite{kak_2012}. When the
truncation ratio is not too low, one can use this extra information
to slightly expand the field-of-view by padding both sides of the sinogram
with its edge values; however, streak artifacts will be heavily present
in the area out of the scanned disk in the case of a small truncation
ratio \cite{dasilva_optexp_2018}.
In addition, ideally,
one would also seek to satisfy the Crowther criterion
\cite{crowther_prsa_1970} on the required number of rotation angles
based on the entire object size rather than the size of the local
tomography region of interest. One can thus expect aliasing artifacts
especially for low truncation ratios.
To quantify the reconstruction quality, we used Structural SIMilarity
(SSIM; \cite{wang_ieeetip_2004}) as a metric for the fidelity of the
stitched reconstruction images with regards to the ground truth image.
The SSIM allows us to independently examine the structural fidelity of
an image with regards to the reference by incorporating the
inter-dependency of image pixels, especially when they are spatially close.
These dependencies carry important information about the structure, so that
it serves as an accurate and reliable tool for evaluation. The reconstruction
images were obtained by applying the filtered backprojection (FBP)
algorithm to the full-object sinogram. SSIM is defined as a product of
three terms that assess the similarity between two images $A$ and $B$
in different aspects. These include the luminance ($l$), the contrast
($c$), and the structure ($s$), defined by
\begin{eqnarray}
l(A, B) &=& \frac{2\mu_A\mu_B + c_1}{\mu_A^2 + \mu_B^2 + c_1} \label{eqn:ssim_l} \\
c(A, B) &=& \frac{2\sigma_A\sigma_B + c_2}{\sigma_A^2 + \sigma_B^2 + c_2} \label{eqn:ssim_c} \\
s(A, B) &=& \frac{\sigma_{AB} + c_3}{\sigma_A\sigma_B + c_3} \label{eqn:ssim_s}
\end{eqnarray}
where
\begin{eqnarray}
c_1 &=& (k_1 L)^2 \\
c_2 &=& (k_2 L)^2 \\
c_3 &=& c_2 / 2
\end{eqnarray}
with typical values of $k_1$ and $k_2$ set to 0.01 and 0.03, and $L$ being the
dynamic range of the grayscale images. In Eqs.~\ref{eqn:ssim_l} to \ref{eqn:ssim_s},
$\mu_i$ and $\sigma_i$ represent the mean and standard deviation of image $i$ ($i = A$
or $B$), and $\sigma_{AB}$ is the covariance of image $A$ and $B$ \cite{wang_ieeetip_2004}.
While it is common to calculate the SSIM as the product of all three terms, we set
$l(A, B) = 1$ here in order to exclude the overall intensity shifting and scaling.
Thus for all quality evaluations in this work, we have
\begin{equation}
\mbox{SSIM}(A, B) = c(A, B)\cdot s(A, B).
\end{equation}
Figs.~\ref{fig:recon_ssim}(a) and (b) respectively show the stitched reconstructions
obtained from SOA and LTA with $T = 0.2$. If the beam brightness
is sufficiently high and stable, then noise and intensity variations
between adjacent tiles can be neglected. In this case, the stitched sinogram in SOA
is not affected by other systematic artifacts, and is identical to the full-object
sinogram. However, the stitched reconstruction in LTA is
affected by intensity variations and bowl-profile artifacts, even though the partial
sinograms were padded before reconstruction and only the inner $\gamma_{\rm OS} = 0.85$ of
the reconstructed ROIs were used. Fig.~\ref{fig:recon_ssim}
plots the SSIM of the reconstructed porous disk (vacuum portions at the corners are not
included) with regards to the ground truth for both approaches. As can be seen,
the quality of the SOA reconstruction is in principle not affected by the truncation
ratio. In contrast, an overall reduction in SSIM with decreasing truncation ratio
is seen for LTA.
In order to examine how the truncation ratio $T$ influences the reconstruction quality of
an individual reconstruction disk in LTA, we also computed the mean SSIM of the inner
portions in all reconstructed ROIs that are far from the boundaries
and termed it the ``LTA interior SSIM'' in Fig.~\ref{fig:recon_ssim}.
In this way, the influence of the bowl-profile artifacts
can be excluded. As in
Fig.~\ref{fig:recon_ssim}, this SSIM also drops with diminishing $T$. This indicates
that in addition to boundary artifacts, a low truncation ratio also deteriorates
the intrinsic reconstruction quality of the ROI, which is mainly in the form of
noise caused by out-of-ROI information.
\subsection{Comparison on image registration feasibility}
Registration refers to the process of finding the relative
positional offset between adjacent tiles in mosaic tomography. For SOA,
registration is done in projection domain before merging the partial
datasets. LTA, on the other hand, involves registration on
reconstructed images. The large number of tiles in mosaic tomography
poses huge difficulties for manual registration, and automatic
registration methods are usually employed. Phase correlation (PC) is
the most popular registration algorithm, where the offset vector
$\vec{c}$ between two images $I_a$ and $I_b$ is determined by
\begin{equation}
\vec{c} =
\argmax\Bigg(\mathcal{F}^{-1}
\Bigg[\frac{\mathcal{F}[I_a(\vec{x})] \mathcal{F}[I^*_b(\vec{x})]}
{|\mathcal{F}[I_a(\vec{x})] \mathcal{F}[I^*_b(\vec{x})]|}
\Bigg](\vec{x})\Bigg).
\label{eqn:shift_vector}
\end{equation}
In Eq.~\ref{eqn:shift_vector}, $\mathcal{F}$ is the Fourier transform operator,
and $I^*_i$ represents the complex conjugate of $I_i$.
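A compact Python sketch of integer-pixel phase correlation (assuming
real-valued tiles of equal shape; subpixel refinement is omitted) is:
\begin{verbatim}
import numpy as np

def phase_correlation(a, b):
    # integer-pixel offset of b relative to a, Eq. (shift_vector)
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-15        # normalize the cross-power spectrum
    corr = np.fft.ifft2(cross).real
    shift = np.unravel_index(np.argmax(corr), corr.shape)
    # map indices beyond the half-way point to negative shifts
    return tuple(v - n if v > n // 2 else v for v, n in zip(shift, corr.shape))
\end{verbatim}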
The transmission radiographs for specimens that are thick and not
entirely periodic generally do not exhibit a good number of
distinguishable fine features, because the features tend to entangle
and blend into each other when they are superposed along the beam
path. However, this does not imply that SOA projections are
intrinsically harder objects for registration compared to LTA
reconstructions. Although it is conceptually plausible that more
high-frequency features arise in reconstructed images, we should
notice that phase correlation is a technique that is susceptible to
noise. When data are collected with low photon flux, Poisson noise is
more pronounced, and tomographic reconstructions based on the Fourier
slice theorem can be more heavily affected by noise due to the
amplification of high-frequency artifacts by the ramp filter
\cite{kak_2012} (though this issue can be mitigated by adding a Wiener filter).
To investigate the photon noise sensitivity of alignment, we carried
out the following numerical study.
We created a
projection panorama of our whole charcoal specimen, and extracted a row
of $1024\times 1024$ pixel tiles from it with a constant interval of 850
pixels. As the projection panorama was normalized using the dark field
and white field data, all tiles extracted contain pixels with values
ranging between 0 and 1. Here we denote the image by $I$. We then
define a scaling factor $n_{\rm ph}$ to represent a ``mean'' photon
count for each pixel. In other words, $n_{\rm ph}$ is the number of
photons incident on a pixel of the acquired radiograph. Poisson
noise was subsequently applied to all tiles, using the probability
density function
\begin{equation}
f(k, n_{\rm ph}I) = \frac{(n_{\rm ph}I)^k e^{-n_{\rm ph}I}}{k!}
\end{equation}
where $k$ is the actual photon count. The noisy versions of
the tiles were then pre-processed by taking their negative logarithm, and
registered using phase correlation. For LTA, different levels of
Poisson noise were added to extracted partial sinograms, from which
reconstruction images were subsequently created and registered. The
field of view in this case is 1024 pixels. Since data fidelity is
guaranteed only within a disk for an LTA reconstruction, we use a
smaller offset of 700 pixels in both the $x$ and $y$ directions in
order to compensate for the smaller usable overlapping area.
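The noise model can be sketched as follows (a NumPy transcription we assume
for illustration; tiles are normalized transmission images with values in
$[0,1]$).
\begin{verbatim}
import numpy as np

def add_poisson_noise(tile, n_ph, seed=None):
    # draw photon counts with mean n_ph * I, then take the negative logarithm
    rng = np.random.default_rng(seed)
    counts = rng.poisson(n_ph * tile)
    counts = np.maximum(counts, 1)        # guard against log(0) at zero counts
    return -np.log(counts / n_ph)
\end{verbatim}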
Figure~\ref{fig:error_phmult}(a) compares the registration accuracy of
LTA and SOA over a range of photon budgets per pixel, which is the
total number of photons to be applied to a specimen voxel during the
experiment. Thus, all comparisons between LTA and SOA are based on the
condition that the total radiation doses are equal. The photon budget
is evenly distributed to all scans and $n_{\rm ph}$ is calculated
accordingly, in which case LTA will have a lower $n_{\rm ph}$ in a
single scan compared to SOA. For our test data, the mean registration
error of SOA is always below 1, while LTA requires a photon budget of
about 2000 for the mean error to drop to the sub-pixel level.
The number of projection angles can also impact the registration accuracy for LTA
since it is done in the reconstruction domain. In Fig.~\ref{fig:error_phmult}(b),
the mean registration error is plotted with regards to the level of downsampling
in the axis of projection angles. The original data involve 4500 projections,
which were downsampled by factors of powers of 2. The result indicates that the
error starts to exceed the pixel-level boundary when the downsampling level
is larger than 4.
\begin{figure}
\centerline{\includegraphics[width=0.7\textwidth]{figure_07}}
\caption{Mean registration error plotted against (a) the
average photon budget per voxel for both SOA and LTA, and (b)
the downsampling level in projection angles for LTA. The
plot indicates that the accuracy of phase correlation
degrades when projection images become more noisy due to
lower number of incident photons. For this particular
sample, registration in reconstruction domain for LTA
requires a higher incident photon flux in order to give
reliable registration results. In addition, a reduction in
projection angles also causes a significant deterioration in
registration accuracy for LTA.}
\label{fig:error_phmult}
\end{figure}
A critical drawback of SOA is that registration errors are
accumulative, which means that deviations in the offset determined for
any pair of tiles can affect the quality of a large part or even the
entirety of the final reconstruction. On the other hand, registration
errors in LTA involve multiple tiles intersecting on several sides,
giving less opportunity for alignment pathologies along one edge to
dominate global alignment. For SOA, the accumulated registration
error throughout a row in the tile grid would cause the relative
center of rotation to deviate from the true value for tiles that are
far away from the rotation axis. Since the reconstruction of SOA takes
the registration results as an input, this can lead to off-center
distortions on small features at some locations of the full
reconstruction. To show this, we compare the reconstructions for a
part of the data collected from our charcoal sample. To simulate the
SOA result with induced error, we extracted 8 tiles from the full
sinogram with a fixed interval of 795 pixels. The registered positions
of all tiles were then deliberately adjusted by errors following a
Gaussian distribution with a standard deviation of 4, after which they
were stitched and reconstructed. The center of rotation set for the
reconstructor was determined to optimize the reconstruction quality of
the central region of the sample. The LTA results serving as a
reference were obtained by extracting partial sinograms from the full
sinogram, and then reconstructing them individually. Using these
procedures, we show in Fig.~\ref{fig:register} a comparison of two
local regions of the reconstructions obtained using SOA and LTA,
respectively. One of these regions is exactly at the object center,
while the other one is around 1000 pixels above in the object slice
view. The positions of both regions are marked on the full
reconstruction slice. For the central region (shown in the second row
of the image grid in Fig.~\ref{fig:register}), the exhibited images
have nearly the same quality. However, for the off-center region, some
dot-shaped features extracted from the SOA reconstruction become
heavily distorted (as marked by the colored arrows in the SOA figure
on the first row of the grid). This indicates an erroneous
registration outcome for the tile contributing to this region, which
shifts the distance between the projections contained in this tile and the
rotation center away from the true value. When the tiles are
correctly registered, as shown in the inset of the SOA figure, the
distortion no longer exists.
\begin{figure}
\centerline{\includegraphics[width=0.47\textwidth]{figure_08}}
\caption{Comparison of the effect of registration errors. Shown
here are sinogram oriented acquisition (SOA; left column in the
grid) and local tomography acquisition (LTA; right column)
reconstructions at a region-of-interest (first row) and the
center (second row) of a slice in the charcoal sample. The SOA
reconstruction was done by stitching 8 tiles extracted from the
full sinogram. Registration errors following a Gaussian
distribution with a standard deviation of 4.0 were applied to the
tile positions before stitching. The rotation center for SOA
reconstruction was calibrated to optimize the quality around the
object center. As a result, the central region of the charcoal
reconstructed using both methods appears similar. However, at
around 1000 pixels above the object center, the SOA reconstruction
shows severe distortion of dot-features (pointed by colored
arrows) due to the deviation of its actual position from the
rotation center inputted to the reconstruction routine. The inset
in the SOA figure shows the appearance of one of the distorted
features when the tiles are correctly registered. }
\label{fig:register}
\end{figure}
Local tomography acquisition (LTA) reconstructions are not globally
affected by registration errors. We further note that in addition to
this feature, LTA is advantageous compared to SOA in several other
aspects. For certain sample geometries, LTA can achieve better dose
efficiency than SOA by using more projection angles for highly
interesting regions of the sample while using fewer angles for the
rest. Also, LTA allows one to flexibly select reconstruction methods
or parameters for different ROIs. For example, an ROI where features
lie in textured backgrounds can be reconstructed using Bayesian
methods with stronger sparsity regularization in order to suppress
background structures.
\section{Conclusion}
We have compared two methods for tomography of objects that extend
beyond the field of view of the illumination system and camera, based
on their radiation dose, reconstruction fidelity, and the presence of
registration artifacts. Sinogram oriented acquisition (SOA) gives
lower radiation dose, and it is also generally free of inter-tile
intensity variations, in-tile intensity ``bowl'' artifacts, and noise
induced by out-of-local-tomogram information. In addition, tile
registration is shown to be no harder than with local tomography
acquisition (LTA), especially when the noise level is high. The major
drawback of SOA is that registration errors are accumulative and can
affect the entire reconstruction. Our present efforts are directed
towards providing more reliable registration algorithms in order to
improve the reconstruction quality of SOA for thick amorphous samples;
one approach that offers promise is iterative reprojection
\cite{dengler_ultramic_1989,latham_spie9967,gursoy_scirep_2017},
though it will be computationally demanding for large datasets.
\section{Acknowledgement}
This research used resources of the Advanced Photon Source and the
Argonne Leadership Computing Facility, which are U.S. Department of
Energy (DOE) Office of Science User Facilities operated for the DOE
Office of Science by Argonne National Laboratory under Contract
No. DE-AC02-06CH11357. We thank the National Institute of Mental
Health, National Institutes of Health, for support under grant U01
MH109100. We also thank Vincent De Andrade for his help with
acquiring the data on the charcoal sample shown in the paper.
\bibliographystyle{naturemag}
\section*{Acknowledgment}
We wish to thank H. R. Christiansen for a critical reading of the manuscript. The Conselho Nacional de Desenvolvimento Cient\'\i fico e Tecnol\'ogico (CNPq) is gratefully acknowledged for financial support.
\section*{Dedicatory} R. R. Landim - This paper is dedicated to the memory of my wife, Isabel Mara.
\section{Introduction}
Blue stragglers (BSs) are an important population component in
stellar evolution as well as in star clusters. These objects have
remained on the main sequence for a time exceeding that expected
from standard stellar evolution theory, and they may affect the
integrated spectra of their host clusters by contributing excess
spectral energy in the blue and UV bands. Many mechanisms,
including single star models and binary models, have been
presented to account for the existence of BSs (see the review of
\cite{str93}). At present, it is widely believed that more than
one mechanism plays a role for the produce of BSs in one cluster
and that binaries are important or even dominant for the
production of BSs in open clusters and in the field
(\cite{lan07,dal08,sol08}). Binaries may produce BSs by way of
mass transfer, coalescence of the two components, binary-binary
collision and binary-single star collision. The collision of
binary-binary or binary-single may lead binaries to be tighter or
farther apart, and it is relevant to dynamics and environment in
the host cluster. In this contribution, we are only concerned with
BSs resulting from the evolutionary effect of primordial binaries,
i.e. mass transfer and coalescence of two components.
\section{Evolutionary channels}
Before we describe the details of evolutionary channels to BSs
from binary evolution, we first introduce an important parameter
in binary evolution, the critical mass ratio, $q_{\rm c}$, for
dynamically unstable mass transfer, which is crucial to determine
the fate of a binary during mass transfer. The value of $q_{\rm
c}$ differs for different evolutionary stages of the primary at
the onset of mass transfer, and it has been well studied via
polytropic models (\cite{hje87,han01}) and detailed binary
evolutions (\cite{han02,chen08a}). Figure \ref{qc} shows two
examples on these studies. We may see obvious difference for
$q_{\rm c}$ between polytropic models and detailed binary
evolution, which in turn leads to differences on the products,
including BSs, after the mass transfer.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.7in,angle=270]{qc.ps}
\includegraphics[width=1.8in,angle=270]{qfgbl.ps}
\caption{Critical mass ratio $q_{\rm c}$ for dynamically unstable
mass transfer. {\it Left panel:} a polytropic model. $f_{\rm c}$
is the core mass fraction of the mass donor. The cross, plus,
asterisk and circle lines are for mass transfer efficiency $\beta
=0.25$, 0.5, 0.75 and 1.0, respectively. The lost mass is assumed
to carry away the same specific angular momentum as pertains to
the mass donor. The solid line is from \cite{web88} for
conservative mass transfer (see also \cite{han01}). {\it Right
panel:} detailed binary evolution. It is for low-mass binaries
between a first-ascend giant-branch star and a main sequence star.
The giant star has a mass of 1.00, 1.26, 1.60 and $1.90M_\odot$.
The lines are from the fitting formulae of equations (4) to (6) in
\cite{chen08a}.}
\label{qc}
\end{center}
\end{figure}
\subsection{Mass transfer}
In general, if the mass ratio $q=M_1/M_2$ (donor/accretor)
at the onset of mass transfer is lower than $q_{\rm c}$, mass
transfer is stable. If the accretor is a main-sequence star, it
evolves upwards along the main sequence in response to accretion,
and becomes a blue straggler when it is more massive than
the turnoff of the host cluster.
This channel may produce binary BSs with various orbital periods
(\cite{chen04,chen08a}). If the primary is on the main sequence or
in the Hertzsprung gap at the onset of mass transfer, the products
are with short- or relatively short-orbital periods. These include
Algol systems which are still in a slow stage of mass transfer.
Mass transfer between giant stars and main-sequence companions may
produce long-orbital period BSs. The value of $q_{\rm c}$ here
significantly affects the BS number and orbital periods
(\cite{chen08a}).
Since the accreted material may originate from the nuclear-processed
region of the mass donor, it is rich in helium relative to the
gainer's surface and has a higher mean molecular weight, which
results in secular instability during or after the accretion.
Thermohaline mixing will occur in this case. This mixing was once
believed to cause the surface abundance abnormality of the gainer
to be invisible. However, The study of \cite{chen04} showed no
distinction in surface composition between the models with and
without thermohaline mixing during mass transfer, although their
evolutionary tracks diverges. After the mass transfer, the C,N,O
abundance abnormalities may exists for about $10^8$ yr, comparable
to the lifetime of a typical BS. Thus, we could observe C,N,O
abundance abnormalities in BSs this way.
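For reference, the secular instability invoked above can be stated
in the standard textbook form (quoted here as the general
criterion, not as the specific implementation of \cite{chen04}):
thermohaline mixing sets in where a layer is convectively stable
but its mean molecular weight increases outward,
\[
\nabla_\mu \equiv \frac{{\rm d}\ln\mu}{{\rm d}\ln P} < 0
\qquad {\rm while} \qquad \nabla < \nabla_{\rm ad},
\]
where $\nabla$ and $\nabla_{\rm ad}$ are the actual and adiabatic
temperature gradients and $P$ is the pressure.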
\subsection{Coalescence of contact binaries}
During mass transfer, the accretor is likely to fill its own Roche
lobe, and the system then becomes a contact binary. Such a contact
binary eventually coalesces into a single star
(\cite{web76,egg00,lhz05}). If both components are on the main
sequence, the remnant is also a main-sequence star and evolves
like a normal star of that mass; the remnant is therefore a BS if
it is more massive than the turnoff.
By assuming that the matter from the secondary mixes homogeneously
with the envelope of the primary and that no mass is lost from the
system during the merger process, \cite{chen08b} constructed
mergers of contact binaries and studied their characteristics.
Their study shows that some mergers lie to the left of the
zero-age main sequence defined by a normal surface composition
(i.e. helium content Y = 0.28 with metallicity Z = 0.02 for
Population I) on a colour-magnitude diagram because of their
enhanced surface helium content. In addition, the central hydrogen
content of the mergers is independent of mass. Thus, the
concentration towards the blue side of the main sequence with
decreasing mass predicted by \cite{ss03} does not appear in their
models. In fact, there is no observational evidence for such a
concentration.
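The Python sketch below makes this merger construction explicit:
with homogeneous mixing and no mass loss, the merger mass and its
surface helium content follow from simple mass-weighted averages.
All masses and abundances here are illustrative values of our own,
not parameters taken from \cite{chen08b}.
\begin{verbatim}
# Merger of a contact binary under the assumptions of the text:
# the secondary mixes homogeneously with the primary's envelope,
# and no mass is lost.  All input values are illustrative only.

def merge_contact_binary(m1, m1_core, y1_env, m2, y2):
    """Return (merger mass, surface helium Y), conservative case."""
    m_env = m1 - m1_core                      # primary envelope mass
    y_surf = (m_env * y1_env + m2 * y2) / (m_env + m2)
    return m1 + m2, y_surf

mass, y = merge_contact_binary(m1=1.2, m1_core=0.15, y1_env=0.30,
                               m2=0.8, y2=0.28)
print(f"merger: {mass:.2f} Msun, surface Y = {y:.3f}")
# A surface Y above 0.28 shifts the merger blueward of the ZAMS.
\end{verbatim}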
In old clusters, angular momentum loss (AML) of low-mass binaries
induced by magnetic braking is a main factor driving binaries into
contact and eventual merger. A simple estimate (\cite{chen08b})
shows that, in old clusters, BSs from AML are much more numerous
than those from evolutionary effects alone, indicating that the
AML of low-mass binaries makes a major contribution to BSs in old
clusters such as NGC 188, NGC 2682, etc. In intermediate-age
clusters, e.g. NGC 2660, the models of \cite{chen08b} can account
for several BSs. However, no BSs have yet been observed in the
most likely region of the colour-magnitude diagram; about 0.5
$M_\odot$ of mass loss in the merger process is necessary to
resolve this conflict.
\subsection{Coalescence due to dynamically unstable mass transfer}
If the mass ratio $q$ is larger than $q_{\rm c}$, mass transfer is
dynamically unstable and a common envelope (CE) is formed. The CE
may be ejected if the orbital energy deposited in the envelope
overcomes its binding energy; otherwise the binary merges into a
single star. If the two components are main-sequence stars, the
remnant of the coalescence is on the main sequence, and it is a BS
if its mass exceeds the turnoff mass of the host cluster. In this
case, the core of the secondary spirals in quickly and settles in
the centre of the merger. The merger then has a chemical
composition similar to that of the primary, resembling the results
of smoothed particle hydrodynamics calculations
(\cite{lrs96,sill97,sill01}).
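The ejection condition above is commonly quantified with the
standard energy formalism, which we quote as a reference form (the
detailed prescription, e.g. the treatment of the structure
parameter $\lambda$, may differ between studies):
\[
\alpha_{\rm CE}\left(\frac{G M_{\rm c} M_2}{2 a_{\rm f}}
 - \frac{G M_1 M_2}{2 a_{\rm i}}\right)
 \ge \frac{G M_1 M_{\rm env}}{\lambda R_1},
\]
where $\alpha_{\rm CE}$ is the ejection efficiency, $M_{\rm c}$
and $M_{\rm env}$ are the core and envelope masses of the donor
(of total mass $M_1$ and radius $R_1$), and $a_{\rm i}$ and
$a_{\rm f}$ are the orbital separations before and after the
spiral-in. If this condition cannot be met, the components merge.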
Binary coalescence, whether of a contact binary or via dynamically
unstable mass transfer, is a popular hypothesis for the origin of
single BSs (\cite{mateo90,pol94,and06,chen08b}).
\section{Binary population synthesis}
We performed five sets of simulations for a Population I
composition (X = 0.70, Y = 0.28 and Z = 0.02) to systematically
investigate BS formation from primordial binary evolution. The age
ranges from 0.1 to 20 Gyr. The mass transfer efficiency, $\beta$,
defined as the fraction of the mass lost from the primary that is
accreted by the secondary, is an important parameter that strongly
affects the final results. Generally, we set $\beta=1$ when the
mass donor is on the main sequence and $\beta=0.5$ otherwise.
However, this value is very uncertain; it is likely higher than
0.5 when the mass donor is in the HG, but lower than 0.5 when the
mass donor is on the first giant branch (FGB) or the asymptotic
giant branch (AGB). We therefore also studied the case of
$\beta=1$ when mass transfer begins in the HG (\cite{chen09}).
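As a concrete illustration of how such a simulation set is
assembled, the Python sketch below draws a toy primordial binary
population and assigns $\beta$ as described above. The adopted
distributions (a Salpeter-like slope for the primary, a flat
mass-ratio distribution and separations flat in $\log a$) are
simplifying assumptions made for this sketch only, not necessarily
those used in our Monte Carlo simulations.
\begin{verbatim}
# Toy Monte Carlo sample of primordial binaries (illustrative).
import random

ALPHA = 2.35                 # assumed IMF slope (Salpeter-like)
M_LO, M_HI = 0.8, 8.0        # assumed primary mass range [Msun]

def draw_primary_mass():
    """Inverse-transform sample from dN/dm ~ m**(-ALPHA)."""
    u, a = random.random(), 1.0 - ALPHA
    return (M_LO**a + u * (M_HI**a - M_LO**a)) ** (1.0 / a)

def draw_binary():
    m1 = draw_primary_mass()
    m2 = max(random.random() * m1, 0.08)    # flat mass ratio
    sep = 10.0 ** random.uniform(0.5, 4.0)  # flat in log a [Rsun]
    return m1, m2, sep

def beta_for_donor(stage):
    """Mass-transfer efficiency as adopted in the text."""
    return 1.0 if stage == "MS" else 0.5

random.seed(1)
for m1, m2, sep in (draw_binary() for _ in range(3)):
    print(f"M1={m1:.2f}  M2={m2:.2f}  a={sep:8.1f} Rsun")
print("beta:", beta_for_donor("MS"), "(MS donor),",
      beta_for_donor("FGB"), "(otherwise)")
\end{verbatim}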
\subsection{Distribution on colour-magnitude diagrams}
Figure \ref{cmd} shows colour-magnitude diagrams of a population
at 4.3 Gyr for various values of $\beta$ when the primary is in
the HG at the onset of Roche-lobe overflow (RLOF). From this
figure we see that BSs from $\beta =1$ may be more massive than
those from $\beta =0.5$, indicating that a high value of $\beta$
may produce BSs far away from the turnoff. In particular, we may
obtain BSs with masses larger than twice the turnoff mass, even
though such objects have very short lifetimes. It is appropriate
to assume a high value of $\beta$ for old clusters, since the
binaries contributing to BSs there are less massive than those in
young clusters.
\begin{figure}
\begin{center}
\includegraphics[width=2.0in,angle=270]{m67.ps}
\caption{Colour-magnitude diagrams of the population at 4.3 Gyr.
Here $\beta$ is the fraction of the mass lost from the primary
that is accreted by the secondary when the primary is in the HG at
the onset of mass transfer. The crosses and stars denote BSs from
mass transfer and from binary coalescence, respectively, and the
small dots denote the other objects in the population. The
triangles are observed BSs of M67 from \cite{ss03}.}
\label{cmd}
\end{center}
\end{figure}
We also see in the figure that many BSs are fainter but bluer than
the turnoff; they may extend to about 1 mag below the turnoff.
These objects come mainly from mass transfer and from binary
coalescence following dynamically unstable mass transfer. They are
less evolved than the turnoff and may contribute more to the
V-band flux in their subsequent evolution. In general, most BSs
from mass transfer and from binary coalescence following
dynamically unstable mass transfer lie within 1.5 mag of the
turnoff, while some from the coalescence of contact binaries may
lie up to about 2.3 mag above the turnoff.
\subsection{The specific frequency}
The number of BSs in the simulations, $N_{\rm BS}$, depends only
slightly on the population age, while the specific frequency,
$\log F_{\rm BSS} \equiv \log (N_{\rm BS}/N_2)$, where $N_2$ is
the number of stars within 2 mag below the main-sequence turnoff,
depends heavily on the age through the increase of $N_2$. In all
five sets, $\log F_{\rm BSS}$ first decreases with time and then
increases when the age exceeds 10 Gyr. The decrease of $\log
F_{\rm BSS}$ before 1.5 Gyr results from the increase of $N_2$.
Subsequently, the number of potential binaries that may contribute
to BSs decreases, leading to the formation of fewer BSs, while
$N_2$ continues to increase; thus $\log F_{\rm BSS}$ continues to
decrease. Over time, the primaries in long-orbital-period binaries
gradually enter the AGB phase and expand dramatically, and some of
them may fill their Roche lobes and start mass transfer. Because
of the strong stellar winds in the AGB phase, these mass donors
are probably much less massive at the onset of mass transfer than
before, so the mass transfer is easily stabilized, resulting in
some long-orbital-period BSs. As a consequence, $\log F_{\rm BSS}$
begins to increase when the age exceeds 10 Gyr. AML becomes more
and more important for BS formation with time, and its
contribution exceeds that from primordial binary evolution when
the age is older than 2.5 Gyr.
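As a worked example of this definition (with invented counts,
purely for illustration):
\begin{verbatim}
import math

def log_f_bss(n_bs, n2):
    """Specific frequency: log10(N_BS / N_2)."""
    return math.log10(n_bs / n2)

# e.g. 20 BSs against 1000 stars within 2 mag below the turnoff:
print(round(log_f_bss(20, 1000), 2))   # -> -1.7
\end{verbatim}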
When we investigated the role of primordial binary evolution in BS
formation in Galactic open clusters, we found a serious problem:
the BS specific frequency obtained in our simulations is only
about 20 per cent of that observed. This discrepancy may result
from the following aspects. (1) Observational uncertainties, such
as in the counts of $N_2$ in clusters, the cluster ages and the BS
samples. For example, the study of \cite{car08} showed that a
large fraction of previously identified BSs are actually field
stars. (2) The Monte Carlo parameters adopted in the simulations,
including the initial mass function and the distributions of
initial mass ratio and initial separation. All these parameters in
our simulations are appropriate for field stars, and they are very
likely different in clusters. For instance, recent studies show
that the initial mass function might differ considerably between
disk stars and halo stars in the Galaxy (\cite{pol08}). (3) Other
channels, e.g. a more recent era of star formation or dynamical
interactions, producing BSs in open clusters in addition to
primordial binary evolution and AML.
\subsection{Contribution to ISED}
We plot the integrated spectral energy distributions (ISEDs) of a
population for various cases in Fig. \ref{sed}, where SSP denotes
a population without binary interaction. The figure shows that BSs
resulting from binary evolution are the dominant contributors to
the ISED in the UV and blue bands between 0.3 and 2.0 Gyr, while
the BSs and the SSP contribute comparable energy in these bands
between 2 and 4 Gyr. The contribution from AML becomes more
important with time, and exceeds that from primordial binary
evolution for populations older than $\sim 3$ Gyr. Taken together,
primordial binaries are therefore important contributors to BSs
over the whole age range.
\begin{figure}
\begin{center}
\includegraphics[width=3.0in,angle=270]{sed.ps}
\caption{Integrated rest-frame intrinsic SEDs of a stellar
population with a mass of $1M_\odot$ at a distance of 10 kpc, with
$\beta =0.5$ when the primary is in the HG at the onset of mass
transfer. The solid lines show the results for the SSP, i.e. a
population without binary interaction, and the other lines show
the contributions of BSs from the different evolutionary
channels.}
\label{sed}
\end{center}
\end{figure}
Since BSs produced with a high $\beta$ have higher masses, their
contributions to the ISED become more important in the ultraviolet
and blue bands (see Fig. 5 in \cite{chen09}).
\subsection{Influences on the colours and ages}
Obviously, the excess spectral energy due to BSs in the UV and
blue bands inevitably results in some changes in colours involving
these bands. We therefore studied some Hubble colours; the results
are shown in Fig. \ref{fig1}. The colours are affected from about
0.32 Gyr (for all five colours shown in the figure) to beyond 10
Gyr (for F185W$-$F336W and F218W$-$F336W), and the maximum
difference may be up to 1.5 mag for F170W$-$F336W.
\begin{figure}
\begin{center}
\includegraphics[width=1.9in,angle=270]{dc-3.ps}
\caption{Colour differences versus age between populations with and without BSs.}
\label{fig1}
\end{center}
\end{figure}
\section{Conclusions and Acknowledgments}
In this contribution, we summarized the link between binary
evolution and BSs and the characteristics of the resulting BSs. We
also presented binary population synthesis results for BSs from
primordial binary evolution: the distribution on the
colour-magnitude diagram, the contribution to the ISED, the
specific frequency, and the influences on colours. This work was
supported in part by the Chinese National Science Foundation
(Grant Nos. 10603013, 10973036, 10821061 and 2007CB815406) and the
Yunnan National Science Foundation (Grant No. 08YJ041001).